My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the
code.
What happens in this case?
The solution I can think of is emulation when evaluating constant expressions. But I don't know if any compiler does it this way.
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the
code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
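As a minimal sketch of the kind of context involved (this example is mine, not from the post above): an initializer for an object with static storage duration is a constant expression, so a cross-compiler has to fold it with the target's arithmetic rather than the host's.

/* Sketch: the initializer below is a constant expression, so the compiler
   folds it using the *target's* unsigned int width.  With a 16-bit
   unsigned int it wraps to 0u; with a 32-bit unsigned int it is 65536u. */
#include <stdio.h>

static unsigned int folded = 65535u + 1u;

int main(void)
{
    printf("65535u + 1u == %u\n", folded);
    return 0;
}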
On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the code.
What happens in this case?
A constant expression must be evaluated in the way that would happen
if it were translated to code on the target machine.
Thus, if necessary, the features of the target machine's arithmetic must
be simulated on the build machine.
(Modulo issues not relevant to the debate, like if the expression
has ambiguous evaluation orders that affect the result, or undefined
behaviors, they don't have to play out the same way under different
modes of processing in the same implementation.)
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
They have to; if a constant-folding optimization produces a different
result (in an expression which has no such issues), then that is an incorrect
optimization.
GCC uses arbitrary-precision libraries (GNU GMP for integer, and GNU
MPFR for floating-point), which are in part for this issue, I think.
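As a rough illustration of the idea (a sketch only, not GCC's actual internals): with MPFR the host can fold a binary64 operation by working at 53-bit precision with round-to-nearest, independent of the host's own FPU. Getting the full binary64 exponent range (overflow, subnormals) right takes extra handling on top of this.

/* Sketch only: fold 0.1 + 0.2 the way an IEEE binary64 target would,
   using MPFR at 53-bit precision with round-to-nearest.
   Build: cc fold.c -lmpfr -lgmp */
#include <stdio.h>
#include <mpfr.h>

int main(void)
{
    mpfr_t a, b, r;
    mpfr_inits2(53, a, b, r, (mpfr_ptr) 0);  /* 53-bit significand = binary64 */

    /* Parse the decimal literals as the target compiler would, rounding
       each one to the nearest binary64 value. */
    mpfr_set_str(a, "0.1", 10, MPFR_RNDN);
    mpfr_set_str(b, "0.2", 10, MPFR_RNDN);

    mpfr_add(r, a, b, MPFR_RNDN);            /* correctly rounded addition */

    printf("%.17g\n", mpfr_get_d(r, MPFR_RNDN));  /* 0.30000000000000004 */

    mpfr_clears(a, b, r, (mpfr_ptr) 0);
    return 0;
}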
On 2025-08-29 16:19, Kaz Kylheku wrote:
On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the code.
What happens in this case?
A constant expression must be evaluated in the way that would happen
if it were translated to code on the target machine.
Thus, if necessary, the features of the target machine's arithmetic must
be simulated on the build machine.
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
They have to; if a constant-folding optimization produces a different
result (in an expression which has no such issues), then that is an incorrect
optimization.
Emulation is necessary only if the value of the constant expression
changes which code is generated. If the value is simply used in
calculations, then it can be computed at run time on the target
machine, as if done before the start of main().
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the
code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
Yes, this is the kind of sample I had in mind. So in theory it has to be the same result. This may be hard to achieve.
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
So in theory it has to be the same result. This may be hard to achieve.
Yes, it can be hard to achieve in some cases. For things like integer arithmetic, it's no serious challenge - floating point is the biggie for
the challenge of getting the details correct when the host and the
target are different. (And even if the compiler is native, different floating point options can lead to significantly different results.)
Compilers have to make sure they can do compile-time evaluation that is bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used to simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
On 9/1/2025 5:10 AM, David Brown wrote:
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
So in theory it has to be the same result. This may be hard to achieve.
Yes, it can be hard to achieve in some cases. For things like integer
arithmetic, it's no serious challenge - floating point is the biggie
for the challenge of getting the details correct when the host and the
target are different. (And even if the compiler is native, different
floating point options can lead to significantly different results.)
Compilers have to make sure they can do compile-time evaluation that
is bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used
to simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
Interesting.
Yes, I think for integers it is not so difficult.
If the compiler has the range int8_t ... int64_t, then it is just a
matter of selecting the fixed size that corresponds to the abstract
type for that platform.
For floating point I think that, at least for "desktop" computers, the
result may be the same.
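A sketch of that selection idea (TARGET_UINT_BITS is a made-up configuration constant, not taken from any real compiler): fold in a wide host type, then reduce modulo the target's unsigned int width.

/* Sketch only: folding the target's 'unsigned int' addition on the host.
   TARGET_UINT_BITS is a hypothetical configuration value. */
#include <stdint.h>
#include <stdio.h>

#define TARGET_UINT_BITS 16   /* assume a target with 16-bit unsigned int */

static uint64_t fold_uadd(uint64_t a, uint64_t b)
{
    /* Unsigned arithmetic wraps modulo 2^N on the target. */
    uint64_t mask = (TARGET_UINT_BITS == 64)
                        ? UINT64_MAX
                        : ((UINT64_C(1) << TARGET_UINT_BITS) - 1);
    return (a + b) & mask;
}

int main(void)
{
    /* 65535u + 1u: folds to 0 for a 16-bit-int target, 65536 for 32-bit. */
    printf("%llu\n", (unsigned long long)fold_uadd(65535, 1));
    return 0;
}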
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation that
is bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used
to simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
What library are you referring to? Is it something internal to gcc?
I know gcc uses GMP, but that's not really floating-point emulation.
On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
What library are you referring to? Is it something internal to gcc?
I know gcc uses GMP, but that's not really floating-point
emulation.
GCC uses not only GNU GMP but also GNU MPFR.
On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
What library are you referring to? Is it something internal to gcc?
I know gcc uses GMP, but that's not really floating-point
emulation.
GCC also uses not only GNU GMP but also GNU MPFR.
MPFR is of no help when you want to emulate the exact behavior of a
particular hardware format.
On 2025-09-02, Michael S <already5chosen@yahoo.com> wrote:
On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com>
wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use
it as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
What library are you referring to? Is it something internal to
gcc? I know gcc uses GMP, but that's not really floating-point
emulation.
GCC also uses not only GNU GMP but also GNU MPFR.
MPFR is of no help when you want to emulate an exact behavior of
particular hardware format.
Then why is it there?
David Brown <david.brown@hesbynett.no> wrote:
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that
runs the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work
correctly.
So in theory it has to be the same result. This may be hard to
achieve.
Yes, it can be hard to achieve in some cases. For things like
integer arithmetic, it's no serious challenge - floating point is
the biggie for the challenge of getting the details correct when
the host and the target are different. (And even if the compiler
is native, different floating point options can lead to
significantly different results.)
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the compiler.
I don't know about other compilers, but gcc has a /huge/ library
that is used to simulate floating point on a wide range of targets
and options, precisely so that it can get this right.
AFAIK in normal mode gcc does not consider differences between
compile-time and run-time evaluation of floating point constants as a
bug. And they may differ, with compile-time evaluation usually giving
more accuracy. OTOH they care very much that cross-compiler and
native compiler produce the same results. For the majority of
"interesting" targets native compilers do not exist. So they do not
use native floating point arithmetic to evaluate constants. Rather,
both native compiler and cross compiler use the same portable library
(that is MPFR). One can probably request a more strict mode.
If it is available, then I do not know how it is done. One
possible approach is to delay anything non-trivial to runtime.
Non-trivial for floating point likely means transcendental
functions: different libraries will almost surely produce different
results, and for legal reasons alone the compiler cannot assume access
to the target library.
Ordinary four arithmetic operations for IEEE are easy: rounding is
handled by MPFR and things like overflow, infinities, etc, are just
a bunch of tedious special cases. But transcendental functions
usually do not have well specified rounding behaviour, so exact
rounding in MPFR is of no help when trying to reproduce results
from runtime libraries.
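To illustrate the kind of special-case handling meant here (a sketch only, not code from any compiler): MPFR can be told to use the binary64 exponent range and to re-round with reduced precision in the subnormal range via mpfr_subnormalize; the emin/emax values below are the ones the MPFR manual gives for double emulation.

/* Sketch only: emulating binary64 including subnormals with MPFR.
   Build: cc emul64.c -lmpfr -lgmp */
#include <stdio.h>
#include <mpfr.h>

int main(void)
{
    mpfr_set_emin(-1073);    /* binary64 exponent range, subnormals included */
    mpfr_set_emax(1024);

    mpfr_t x, y;
    mpfr_inits2(53, x, y, (mpfr_ptr) 0);
    mpfr_set_d(x, 0x1p-1000, MPFR_RNDN);    /* both values exact in binary64 */
    mpfr_set_d(y, 0x1p-60, MPFR_RNDN);

    int t = mpfr_mul(x, x, y, MPFR_RNDN);   /* 2^-1060: subnormal in binary64 */
    t = mpfr_subnormalize(x, t, MPFR_RNDN); /* re-round with reduced precision */
    (void) t;

    printf("%a\n", mpfr_get_d(x, MPFR_RNDN));  /* the binary64 subnormal 2^-1060 */

    mpfr_clears(x, y, (mpfr_ptr) 0);
    return 0;
}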
Old (and possibly some new) embedded targets are in a sense more "interesting", as they implemented basic operations in software,
frequently taking some shortcuts to gain speed.
On Tue, 2 Sep 2025 16:58:45 -0000 (UTC)
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
On 2025-09-02, Michael S <already5chosen@yahoo.com> wrote:
On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com>
wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use
it as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
What library are you referring to? Is it something internal to
gcc? I know gcc uses GMP, but that's not really floating-point
emulation.
GCC also uses not only GNU GMP but also GNU MPFR.
MPFR is of no help when you want to emulate an exact behavior of
particular hardware format.
Then why is it there?
Most likely because the people who think that compilers make an
extraordinary effort to match FP results evaluated at compile time with
those evaluated at run time do not know what they are talking about.
As suggested above by Waldek Hebisch, compilers are quite happy to do compile-time evaluation at higher (preferably much higher) precision
than at run time.
On 01/09/2025 23:11, Keith Thompson wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation that
is bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used
to simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
What library are you referring to? Is it something internal to gcc?
I know gcc uses GMP, but that's not really floating-point emulation.
I am afraid I don't know the details here, and to what extent it is
internal to the GCC project or external. I /think/, but I could easily
be wrong, that general libraries like GMP are used for the actual calculations, while there is GCC-specific stuff to make sure things
match up with the target details.
On Tue, 2 Sep 2025 17:32:56 -0000 (UTC)
antispam@fricas.org (Waldek Hebisch) wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that
runs the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work
correctly.
So in theory it has to be the same result. This may be hard to
achieve.
Yes, it can be hard to achieve in some cases. For things like
integer arithmetic, it's no serious challenge - floating point is
the biggie for the challenge of getting the details correct when
the host and the target are different. (And even if the compiler
is native, different floating point options can lead to
significantly different results.)
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the compiler.
I don't know about other compilers, but gcc has a /huge/ library
that is used to simulate floating point on a wide range of targets
and options, precisely so that it can get this right.
AFAIK in normal mode gcc does not consider differences between
compile-time and run-time evaluation of floating point constants as a
bug. And they may differ, with compile-time evaluation usually giving
more accuracy. OTOH they care very much that cross-compiler and
native compiler produce the same results.
For majority of "interesting" targets native compilers do not exist.
So they do not use native
floating point arithmetic to evaluate constants. Rather, both
native compiler and cross compiler use the same portable library
(that is MPFR). One can probably request more strict mode.
If it is available, then I do not know how it is done. One
possible approach is to delay anything non-trivial to runtime.
I certainly would not be happy if a compiler that I am using for embedded
targets, which typically do not have hardware support for 'double', would
fail to evaluate DP constant expressions at compile time.
Luckily, it never happens.
Non-trivial for floating point is likely mean transcendental
functions: different libraries almost surely will produce different
results and for legal reasons alone compiler can not assume access
to target library.
Right now in C, including C23, transcendental functions cannot be part
of a constant expression.
Ordinary four arithmetic operations for IEEE are easy: rounding is
handled by MPFR and things like overflow, infinities, etc, are just
a bunch of tedious special cases. But transcendental functions
usually do not have well specified rounding behaviour, so exact
rounding in MPFR is of no help when trying to reproduce results
from runtime libraries.
Old (and possibly some new) embedded targets are in a sense more
"interesting", as they implemented basic operations in software,
frequently taking some shortcuts to gain speed.
Why "some new"? Ovewhelming majority of microcontrollers, both old and
new, do not implement double precision FP math in hardware.
Michael S <already5chosen@yahoo.com> wrote:
On Tue, 2 Sep 2025 17:32:56 -0000 (UTC)
antispam@fricas.org (Waldek Hebisch) wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on
the machine that compiles the code compared with the machine
that runs the code.
What happens in this case?
The solution I can think of is emulation when evaluating
constant expressions. But I don't know if any compiler does it
this way.
For example, 65535u + 1u will evaluate to 0u if the target
system has 16-bit int, 65536u otherwise. (I picked an example
that doesn't depend on UINT_MAX or any other macros defined in
the standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do
so by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work
correctly.
So in theory it has to be the same result. This may be hard to
achieve.
Yes, it can be hard to achieve in some cases. For things like
integer arithmetic, it's no serious challenge - floating point is
the biggie for the challenge of getting the details correct when
the host and the target are different. (And even if the compiler
is native, different floating point options can lead to
significantly different results.)
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
AFAIK in normal mode gcc does not consider differences between
compile-time and run-time evaluation of floating point constants
as a bug. And they may differ, with compile-time evaluation
usually giving more accuracy. OTOH they care very much that
cross-compiler and native compiler produce the same results.
For majority of "interesting" targets native compilers do not exist.
So they do not use native
floating point arithmetic to evaluate constants. Rather, both
native compiler and cross compiler use the same portable library
(that is MPFR). One can probably request more strict mode.
If it is available, then I do not know how it is done. One
possible approach is to delay anything non-trivial to runtime.
I certainly would not be happy if compiler that I am using for
embedded targets that typically do not have hardware support for
'double' will fail to evaluate DP constant expressions in compile
time. Luckily, it never happens.
Non-trivial for floating point is likely mean transcendental
functions: different libraries almost surely will produce different
results and for legal reasons alone compiler can not assume access
to target library.
Right now in C, including C23, transcendental functions can not be
parts of constant expression.
That is irrelevant to the current question. Computing constants at
compile time is an optimization and compilers do this also for
transcendental constants.
Ordinary four arithmetic operations for IEEE are easy: rounding is
handled by MPFR and things like overflow, infinities, etc, are just
a bunch of tedious special cases. But transcendental functions
usually do not have well specified rounding behaviour, so exact
rounding in MPFR is of no help when trying to reproduce results
from runtime libraries.
Old (and possibly some new) embedded targets are in a sense more
"interesting", as they implemented basic operations in software,
frequently taking some shortcuts to gain speed.
Why "some new"? Ovewhelming majority of microcontrollers, both old
and new, do not implement double precision FP math in hardware.
Old libraries often took shortcuts to make them faster. For new
targets there is pressure to implement precise rounding, so I do
not know if new targets are doing "interesting" things. Also,
while only a minority of embedded targets have hardware floating point,
the need for _fast_ floating point is limited, and when it is needed the
application may use a processor with hardware floating point.
On Thu, 4 Sep 2025 12:35:08 -0000 (UTC)
antispam@fricas.org (Waldek Hebisch) wrote:
Michael S <already5chosen@yahoo.com> wrote:
Non-trivial for floating point is likely mean transcendental
functions: different libraries almost surely will produce
different results and for legal reasons alone compiler can not
assume access to target library.
Right now in C, including C23, transcendental functions can not be
parts of constant expression.
That is irrelevant for current question. Computing constants at
compile time is an optimization and compilers do this also for transcendental constants.
Can you demonstrate it being done by any compiler for any transcendental
function other than sqrt(), or a trivial case like
sin(0)/tan(0)/asin(0)/atan(0)?
gcc went one step further and accepts the following code. [...]
double bar = sin(1); // at file scope
My understanding is that this code is *not* legal C, but language
lawyers among us would say that it is merely 'undefined' and as such
does not require rejection.
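For reference, the snippet above made into a complete translation unit (the sin(1) line is from the post; the rest is just scaffolding so it compiles):

/* In ISO C the initializer of an object with static storage duration must
   be a constant expression, which sin(1) is not; as reported above, gcc
   nevertheless folds the call and accepts this. */
#include <math.h>
#include <stdio.h>

double bar = sin(1);   /* at file scope */

int main(void)
{
    printf("%.17g\n", bar);
    return 0;
}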
On 29.08.2025 at 20:46, Thiago Adams wrote:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs
the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
Cross compilation compiles with the machine code of the local
machine and embeds a runtime translator in the executable.
On Mon, 1 Sep 2025 10:10:17 +0200, David Brown wrote:
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams writes:
My curiosity is the following:
...For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
So in theory it has to be the same result. This may be hard to achieve.
Yes, it can be hard to achieve in some cases. For things like integer
arithmetic, it's no serious challenge - floating point is the biggie for
the challenge of getting the details correct when the host and the
target are different.
Is floating point not standardized by IEEE?
(And even if the compiler is native, different
floating point options can lead to significantly different results.)
Compilers have to make sure they can do compile-time evaluation that is
bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used to
simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
I think there are few problems when one uses a fixed-size type such as
u32 or uint32_t.
Wuns Haerst <Wuns.Haerst@wurstfabrik.at> writes:
On 29.08.2025 at 20:46, Thiago Adams wrote:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs
the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
Cross compilation compiles with the machine code of the local
machine and embeds a runtime translator in the executable.
Where did you get that idea?
On 06.09.2025 at 05:05, Keith Thompson wrote:
Wuns Haerst <Wuns.Haerst@wurstfabrik.at> writes:
On 29.08.2025 at 20:46, Thiago Adams wrote:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs
the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
Cross compilation compiles with the machine code of the local
machine and embeds a runtime translator in the executable.
Where did you get that idea?
That's been done since the 70s, because cross-compiled code would often
behave inconsistently compared with the source machine
because of the different ISA.
Wuns Haerst <Wuns.Haerst@wurstfabrik.at> writes:
On 06.09.2025 at 05:05, Keith Thompson wrote:
Wuns Haerst <Wuns.Haerst@wurstfabrik.at> writes:
On 29.08.2025 at 20:46, Thiago Adams wrote:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs
the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
Cross compilation compiles with the machine code of the local
machine and embeds a runtime translator in the executable.
Where did you get that idea?
That's been done since the 70s, because cross-compiled code would often
behave inconsistently compared with the source machine
because of the different ISA.
I've used a number of cross-compilers. I've never heard of one
that embeds a runtime translator in the executable.
For example, an ARM cross-compiler running on an x86-64 system will
generate ARM machine code. Running the generated code on the ARM
target does not involve any kind of translation (which would be
impractical for small target systems anyway).
Do you have a concrete example of the kind of cross-compiler you're
talking about?
Newer tools rarely if ever cut corners in the implementation of IEEE
binary64. That is, they can cut "other" corners, like generation of the
INEXACT flag, but not precision/rounding.
On 9/2/2025 2:22 PM, Rosario19 wrote:
On Mon, 1 Sep 2025 10:10:17 +0200, David Brown wrote:
On 29/08/2025 22:10, Thiago Adams wrote:...
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams writes:
My curiosity is the following:
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
So in theory it has to be the same result. This may be hard to achieve.
Yes, it can be hard to achieve in some cases. For things like integer
arithmetic, it's no serious challenge - floating point is the biggie for
the challenge of getting the details correct when the host and the
target are different.
Is floating point not standardized by IEEE?
It is... But...
IEEE-754 aims pretty high.
How well real-world systems achieve it is still subject to some level
of variability.
On many targets, while full IEEE-754 semantics are possible in theory,
programmers disable some features to make the FPU "less slow".
And programmers may not always be entirely aware that they have done so
(or, more often, that the compiler, or some other 3rd-party library they
are using, has done so). Or, on some targets, it may be the default
setting at the OS level or similar.
Some behaviors specified in IEEE-754 don't quite always manifest on real
hardware in practice (such as strict 0.5 ULP rounding).
Even high-end systems, like desktop PCs, are not immune to this.
I had before noted when comparing:
Double precision values produced by my own FPU design;
Where, I had cut a whole lot of corners to keep costs low;
Double precision values produced by the FPU in my PC (via SSE);
Double precision values calculated according to IEEE rules
(internally using much wider calculations).
That there were cases where my desktop PC's results agreed with my
corner-cutting FPU, but *not* with the value calculated in strict
adherence to IEEE-754 rules.
Sometimes, it is subtle:
In theory, the FPU can do the full version;
But, the hardware doesn't "really" do so directly,
but, instead relies on hidden traps or firmware/microcode.
But, often, subtly, there is a flag to control these behaviors;
Compiling with some optimizations may, quietly,
set the FPU to not so strictly conform to IEEE rules...
The more obvious and well known issue is DAZ/FTZ:
In IEEE-754, if the exponent is 0, it may theoretically represent a
range of non-normalized values, which may still do math in this "almost
but not quite 0" state;
But, in hardware, this is expensive, with the cheaper option being to
just assume that, for exponent 0, the whole range is 0.
Maybe less obvious is that when this happens, the FPU might also stop
giving 0.5 ULP rounding, but instead give, say, 0.501 ULP (where, say,
there is a 1/1000 chance or so that the result is not rounded as the
standard would mandate). While it might not seem like much, the cost
difference between an FPU that gives around 0.5 ULP, and one that can
actually give 0.5 ULP in hardware, is quite significant.
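A small sketch of the FTZ effect (x86/SSE specific; _MM_SET_FLUSH_ZERO_MODE is the usual intrinsic, and the exact output depends on the FPU mode the program starts with):

/* Sketch (x86 with SSE only): with flush-to-zero enabled, a subnormal
   result collapses to 0, so the same C expression gives a different
   answer depending on an FPU mode bit rather than on the source code. */
#include <stdio.h>
#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */

int main(void)
{
    volatile double tiny = 0x1p-1040;        /* already in the subnormal range */

    printf("default: %a\n", tiny / 16.0);    /* subnormal result (2^-1044) */

    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    printf("FTZ on:  %a\n", tiny / 16.0);    /* result typically flushed to 0 */
    return 0;
}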
If one assumes strict determinism between machines regarding floating
point calculations, it is a headache.
If one instead assumes "close enough, probably fine", then it is a lot easier.
In practice, it isn't always a bad thing:
You can't change fundamental implementation limits;
In most use cases, you would much rather have fast but slightly
inaccurate, than more strict accuracy but significantly slower.
Perfection here is not always attainable.
Like, we can just be happy enough:
Pretty much everyone uses the same representations;
The differences are usually so small as to be unnoticeable in practice;
Pretty much any FPU one is likely to use in practice, if given values
representing exact integers, with a result that can be represented as an
exact integer (when the width of the integer value is less than the
width of the mantissa), will give an exact result (this was not always
true in the early days, *1).
*1: I consider this one a limit for how much corner cutting is allowed
in a scalar FPU. However, I feel worse precision can be considered
acceptable for things like SIMD vector operations.
But, if the compiler tries to auto-vectorize floating point math on a
target where the SIMD unit does not give results that meet the accuracy
expected of scalar operations, this is bad.
Sometimes, the problem exists the other way as well:
Sometimes, the ISA might provide a single-rounded FMA operator;
And, the compiler might implicitly merge "z=a*x+b;" into a single
operator, which may in turn generate different results than had the FMUL
and FADD been performed separately.
This is also bad, but some compilers (particularly GCC) are prone to try
to do this whenever such an instruction is available in the ISA (forcing
the programmer to realize that they need to use "-ffp-contract=off", or
that such an option exists).
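A sketch of the effect (the values are chosen so the last-bit difference is visible; whether the plain expression is contracted depends on the compiler and on options like -ffp-contract):

/* Sketch: a*x + b evaluated with two roundings (separate multiply and add)
   versus one rounding (fused multiply-add).  Build: cc fma_demo.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1.0 + 0x1p-52;       /* just above 1.0 */
    double x = 1.0 - 0x1p-52;       /* just below 1.0 */
    double b = -1.0;

    double plain = a * x + b;       /* may or may not be contracted to an FMA */
    double fused = fma(a, x, b);    /* always a single rounding */

    /* Without contraction, a*x rounds to 1.0 and the sum is 0.0; with a
       single rounding the exact value -2^-104 survives. */
    printf("plain: %a\nfused: %a\n", plain, fused);
    return 0;
}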
(And even if the compiler is native, different
floating point options can lead to significantly different results.)
Compilers have to make sure they can do compile-time evaluation that is
bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used to
simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
I think there are few problems when one uses a fixed-size type such as
u32 or uint32_t.
Luckily, for the most part, integer rules are reasonably well defined.
Some compilers "try to get too clever for their own good" in these areas.
The model I took in my compiler is that for expression evaluation it also
models the types and then models some behaviors (like integer overflows,
etc.) as expected for the type.
There are some cases where things may differ between machines, like if a
shift is outside the allowed range. Though, it is common for modern
compilers to generate a warning or error for this.
When trying to port some old code before, I had to try to figure out a
case. It had been using a negative constant shift, and I needed to try
to figure out what exactly this did.
Turns out the program was expecting that when the shift constant is
negative, it flipped the shift direction and shifted by that amount in
the opposite direction (even if on actual machines, the typical behavior
is for the shift to simply be masked by the range of the type).
so, for example:
uint32_t x, y;
y=x<<(-1);
Might be expected to turn into:
y=x<<31;
Matching typical x86 behavior, but not:
y=x>>1;
Which was what the program seemingly expected to happen here.
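For completeness, a self-contained sketch of the two readings described above; the actual negative-count shift is left commented out, since it is undefined behaviour in ISO C:

/* Sketch: two possible "meanings" of x << (-1).  ISO C makes a shift by a
   negative count undefined; x86 hardware masks the count to the low 5 bits
   (so it acts like x << 31), while the old program assumed the direction
   would flip (x >> 1). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t x = 0x12345679u;
    int n = -1;

    /* uint32_t y = x << n;   undefined behaviour in ISO C - do not rely on it */

    uint32_t masked  = x << (n & 31);   /* what x86's shift instruction does */
    uint32_t flipped = x >> 1;          /* what the old program expected */

    printf("masked:  0x%08X\n", (unsigned) masked);
    printf("flipped: 0x%08X\n", (unsigned) flipped);
    return 0;
}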