• how cross compilation works?

    From Thiago Adams@thiago.adams@gmail.com to comp.lang.c on Fri Aug 29 15:46:44 2025
    From Newsgroup: comp.lang.c

    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the code. What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Fri Aug 29 12:54:27 2025
    From Newsgroup: comp.lang.c

    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant expressions. But I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise. (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)
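
    A minimal sketch of that target dependence (the widths named in the
    comments are assumptions about the target, not something the source
    text itself pins down):

    #include <stdio.h>

    int main(void)
    {
        /* The value of this constant expression depends on the target's
           unsigned int width, not on the machine doing the compiling. */
        unsigned int u = 65535u + 1u;
        printf("%u\n", u);   /* 0 on a 16-bit unsigned int target, 65536 otherwise */
        return 0;
    }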

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system. Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Thiago Adams@thiago.adams@gmail.com to comp.lang.c on Fri Aug 29 17:10:25 2025
    From Newsgroup: comp.lang.c

    On 29/08/2025 16:54, Keith Thompson wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise. (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    yes this is the kind of sample I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system. Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Fri Aug 29 20:19:34 2025
    From Newsgroup: comp.lang.c

    On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the code. What happens in this case?

    A constant expression must be evaluated in the way that would happen
    if it were translated to code on the target machine.

    Thus, if necessary, the features of the target machine's arithmetic must
    be simulated on the build machine.

    (Modulo issues not relevant to the debate, like if the expression
    has ambiguous evaluation orders that affect the result, or undefined
    behaviors, they don't have to play out the same way under different
    modes of processing in the same implementation.)

    The solution I can think of is emulation when evaluating constant expressions. But I don't know if any compiler does it this way.

    They have to; if a constant-folding optimization produces a different
    result (in an expression which has no issue) that is then an incorrect optimization.

    GCC uses arbitrary-precision libraries (GNU GMP for integer, and GNU
    MPFR for floating-point), which are in part for this issue, I think.
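
    To make the integer side of that concrete, here is a rough sketch of one
    way a cross-compiler could fold an unsigned addition with the target's
    width (illustrative only, not GCC's actual code; the helper name and the
    64-bit working type are my own assumptions):

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: fold an unsigned int addition using wide host arithmetic,
       then reduce modulo 2^N, where N is the *target's* unsigned int width. */
    static uint64_t fold_uadd(uint64_t a, uint64_t b, unsigned target_uint_bits)
    {
        uint64_t mask = (target_uint_bits >= 64)
                            ? UINT64_MAX
                            : (((uint64_t)1 << target_uint_bits) - 1);
        return (a + b) & mask;
    }

    int main(void)
    {
        printf("%llu\n", (unsigned long long)fold_uadd(65535u, 1u, 16)); /* 0     */
        printf("%llu\n", (unsigned long long)fold_uadd(65535u, 1u, 32)); /* 65536 */
        return 0;
    }
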
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.lang.c on Fri Aug 29 21:54:16 2025
    From Newsgroup: comp.lang.c

    In article <20250829131023.130@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code. What happens in this case?

    A constant expression must be evaluated in the way that would happen
    if it were translated to code on the target machine.

    Thus, if necessary, the features of the target machine's arithmetic must
    be simulated on the build machine.

    (Modulo issues not relevant to the debate, like if the expression
    has ambiguous evaluation orders that affect the result, or undefined
    behaviors, they don't have to play out the same way under different
    modes of processing in the same implementation.)

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    They have to; if a constant-folding optimization produces a different
    result (in an expression which has no issue) that is then an incorrect
    optimization.

    GCC uses arbitrary-precision libraries (GNU GMP for integer, and GNU
    MPFR for floating-point), which are in part for this issue, I think.

    Dealing with integer arithmetic, boolean expressions, character
    manipulation, and so on is often pretty straightforward to
    handle for a given target system at compile time. The thing
    that throws a lot of systems off is floating point: there exist
    FPUs with different hardware characteristics, even within a
    single architectural family, that can yield different results in
    a way that is simply unknowable until runtime. A classic
    example is hardware that uses 80-bit internal representations
    for double-precision FP arithmetic, versus a 64-bit
    representation. In that world, unless you know precisely what
    microarchitecture the program is going to run on, you just can't
    make a "correct" decision at compile time at all in the general
    case.
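
    A tiny example of the kind of expression that exposes this (whether you
    actually see both outcomes depends on the compiler, the FP options, and
    FLT_EVAL_METHOD; the constants are only chosen so the intermediate sum
    needs more than 53 bits):

    #include <stdio.h>

    int main(void)
    {
        volatile double big = 1e16, small = 1.0;
        /* With strict 64-bit double arithmetic, big + small rounds back to
           big, so r is 0.  If the intermediate is kept in an 80-bit x87
           register, the sum is exact and r is 1. */
        double r = (big + small) - big;
        printf("%g\n", r);
        return 0;
    }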

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Fri Aug 29 20:20:14 2025
    From Newsgroup: comp.lang.c

    On 2025-08-29 16:19, Kaz Kylheku wrote:
    On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code. What happens in this case?

    A constant expression must be evaluated in the way that would happen
    if it were translated to code on the target machine.

    Thus, if necessary, the features of the target machine's arithmetic must
    be simulated on the build machine.
    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    They have to; if a constant-folding optimization produces a different
    result (in an expression which has no issue) that is then an incorrect optimization.

    Emulation is necessary only if the value of the constant expression
    changes which code is generated. If the value is simply used by the calculations, then the value can be calculated at run time on the target machine, as if done before the start of main().
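
    One hypothetical shape for that fallback, just to make "as if done before
    the start of main()" concrete (the constructor attribute is a GCC/Clang
    extension used here only for illustration; link with -lm):

    #include <math.h>
    #include <stdio.h>

    static double not_really_folded;     /* filled in at load time instead */

    __attribute__((constructor))
    static void init_not_really_folded(void)
    {
        not_really_folded = sin(1.0);    /* evaluated on the target itself */
    }

    int main(void)
    {
        printf("%.17g\n", not_really_folded);
        return 0;
    }
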
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Sat Aug 30 01:00:35 2025
    From Newsgroup: comp.lang.c

    On 2025-08-30, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    On 2025-08-29 16:19, Kaz Kylheku wrote:
    On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the code.
    What happens in this case?

    A constant expression must be evaluated in the way that would happen
    if it were translated to code on the target machine.

    Thus, if necessary, the features of the target machine's arithmetic must
    be simulated on the build machine.
    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    They have to; if a constant-folding optimization produces a different
    result (in an expression which has no issue) that is then an incorrect
    optimization.

    Emulation is necessary only if the value of the constant expression
    changes which code is generated. If the value is simply used by the calculations, then the value can be calculated at run time on the target machine, as if done before the start of main().

    But since the former situations occur regularly (e.g. dead code elimination based on conditionals with constant test expressions) you will need to implement that target evaluation strategy anyway. Then, if you have it, why wouldn't you just use it for all constant expressions, and not have to make arrangements for load-time initializations?
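
    For example, a constant test expression like the one below decides which
    branch survives dead-code elimination, so the compiler has to know what
    it evaluates to on the target:

    #include <stdio.h>

    int main(void)
    {
        if (65535u + 1u == 0u) {          /* constant test expression */
            puts("unsigned int is 16 bits wide");
        } else {
            puts("unsigned int is wider than 16 bits");
        }
        return 0;
    }
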
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Mon Sep 1 10:10:17 2025
    From Newsgroup: comp.lang.c

    On 29/08/2025 22:10, Thiago Adams wrote:
    On 29/08/2025 16:54, Keith Thompson wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    yes this is the kind of sample I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system.  Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    Yes, it can be hard to achieve in some cases. For things like integer arithmetic, it's no serious challenge - floating point is the biggie for
    the challenge of getting the details correct when the host and the
    target are different. (And even if the compiler is native, different
    floating point options can lead to significantly different results.)

    Compilers have to make sure they can do compile-time evaluation that is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Thiago Adams@thiago.adams@gmail.com to comp.lang.c on Mon Sep 1 08:14:52 2025
    From Newsgroup: comp.lang.c

    On 9/1/2025 5:10 AM, David Brown wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    On 29/08/2025 16:54, Keith Thompson wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    yes this is the kind of sample I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system.  Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    Yes, it can be hard to achieve in some cases.  For things like integer arithmetic, it's no serious challenge - floating point is the biggie for
    the challenge of getting the details correct when the host and the
    target are different.  (And even if the compiler is native, different floating point options can lead to significantly different results.)

    Compilers have to make sure they can do compile-time evaluation that is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler.  I don't
    know about other compilers, but gcc has a /huge/ library that is used to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.




    Interesting.
    Yes, I think for integers it is not so difficult.
    If the compiler has the range int8_t ... int64_t, then it is just a
    matter of selecting the fixed-size type according to the
    abstract type for that platform.

    For floating points I think at least for "desktop" computers the result
    may be the same.
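
    A sketch of that selection idea on the host (assuming the host provides
    the exact-width types and simply borrows them to stand in for the
    target's unsigned int):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Model a 16-bit-unsigned-int target by folding in uint16_t,
           and a 32-bit one by folding in uint32_t. */
        uint16_t as_16bit_target = (uint16_t)(65535u + 1u);  /* 0     */
        uint32_t as_32bit_target = (uint32_t)(65535u + 1u);  /* 65536 */
        printf("%u %u\n", (unsigned)as_16bit_target, (unsigned)as_32bit_target);
        return 0;
    }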



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c on Mon Sep 1 12:48:28 2025
    From Newsgroup: comp.lang.c

    On 9/1/2025 4:14 AM, Thiago Adams wrote:
    On 9/1/2025 5:10 AM, David Brown wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    On 29/08/2025 16:54, Keith Thompson wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    yes this is the kind of sample I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system.  Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    Yes, it can be hard to achieve in some cases.  For things like integer
    arithmetic, it's no serious challenge - floating point is the biggie
    for the challenge of getting the details correct when the host and the
    target are different.  (And even if the compiler is native, different
    floating point options can lead to significantly different results.)

    Compilers have to make sure they can do compile-time evaluation that
    is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler.  I don't
    know about other compilers, but gcc has a /huge/ library that is used
    to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.




    Interesting.
    Yes, I think for integers it is not so difficult.
    If the compiler has the range int8_t ... int64_t, then it is just a matter of selecting the fixed-size type according to the
    abstract type for that platform.

    For floating points I think at least for "desktop" computers the result
    may be the same.

    Think of a program that is sensitive to floating point errors...
    Something like this crazy shit:

    https://groups.google.com/g/comp.lang.c++/c/bB1wA4wvoFc/m/GdzmMd41AQAJ



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Mon Sep 1 14:11:47 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation that
    is bit-perfect to run-time evaluation before they can use it as an optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used
    to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.

    What library are you referring to? Is it something internal to gcc?
    I know gcc uses GMP, but that's not really floating-point emulation.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Tue Sep 2 06:40:43 2025
    From Newsgroup: comp.lang.c

    On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation that
    is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used
    to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.

    What library are you referring to? Is it something internal to gcc?
    I know gcc uses GMP, but that's not really floating-point emulation.

    GCC uses not only GNU GMP but also GNU MPFR.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Tue Sep 2 13:06:09 2025
    From Newsgroup: comp.lang.c

    On 01/09/2025 23:11, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation that
    is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used
    to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.

    What library are you referring to? Is it something internal to gcc?
    I know gcc uses GMP, but that's not really floating-point emulation.


    I am afraid I don't know the details here, and to what extent it is
    internal to the GCC project or external. I /think/, but I could easily
    be wrong, that general libraries like GMP are used for the actual calculations, while there is GCC-specific stuff to make sure things
    match up with the target details.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Tue Sep 2 14:48:33 2025
    From Newsgroup: comp.lang.c

    On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use it
    as an optimisation - any risk of an error is a bug in the
    compiler. I don't know about other compilers, but gcc has a
    /huge/ library that is used to simulate floating point on a wide
    range of targets and options, precisely so that it can get this
    right.

    What library are you referring to? Is it something internal to gcc?
    I know gcc uses GMP, but that's not really floating-point
    emulation.

    GCC also uses not only GNU GMP but also GNU MPFR.


    MPFR is of no help when you want to emulate the exact behavior of a
    particular hardware format. Moreover, it cannot even emulate the exact
    behavior of IEEE-754 binary formats, because it uses a much wider range
    of exponent than any of them. Even IEEE binary256 has only 19
    exponent bits. OTOH, AFAIR, the range of exponent in MPFR cannot be
    reduced below 32 bits.

    The problem is not dissimilar to the inability of the x87 FPU to emulate
    the exact behavior of IEEE binary32 and binary64, because x87 arithmetic
    OPs always use a wider (16b) exponent.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Tue Sep 2 16:58:45 2025
    From Newsgroup: comp.lang.c

    On 2025-09-02, Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use it
    as an optimisation - any risk of an error is a bug in the
    compiler. I don't know about other compilers, but gcc has a
    /huge/ library that is used to simulate floating point on a wide
    range of targets and options, precisely so that it can get this
    right.

    What library are you referring to? Is it something internal to gcc?
    I know gcc uses GMP, but that's not really floating-point
    emulation.

    GCC also uses not only GNU GMP but also GNU MPFR.


    MPFR is of no help when you want to emulate the exact behavior of a
    particular hardware format.

    Then why is it there? I can easily see how such a library can be
    used as the underlying framework for that kind of computation.

    It provides the substrate in which an exact answer can be calculated in
    a platform-independent way, and could then be coerced into a
    platform-specific result according to the abstract rule by which the
    platform reduces the abstract result to the actual one.

    My hard-earned intuition (lovely oxymoron, ha!) says that this would be
    easier than ad-hoc methods not using a floating point calculation
    library.

    Disclaimer: the preceding remarks are just conjecture, not based on
    examining how MPFR is used in the GNU Compiler Collection.
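
    For what it's worth, a self-contained sketch of that "substrate" idea
    using MPFR directly (conjecture about the approach, not a description of
    GCC's internals; it assumes libmpfr and libgmp are installed, and it
    ignores the exponent-range caveat raised above, which would need
    mpfr_set_emin/mpfr_set_emax/mpfr_subnormalize to handle overflow and
    subnormals exactly):

    #include <stdio.h>
    #include <mpfr.h>

    int main(void)
    {
        mpfr_t a, b, sum;
        mpfr_inits2(53, a, b, sum, (mpfr_ptr) 0);  /* 53 bits: binary64 significand */
        mpfr_set_d(a, 0.1, MPFR_RNDN);
        mpfr_set_d(b, 0.2, MPFR_RNDN);
        mpfr_add(sum, a, b, MPFR_RNDN);            /* correctly rounded to 53 bits  */
        printf("%.17g\n", mpfr_get_d(sum, MPFR_RNDN));
        mpfr_clears(a, b, sum, (mpfr_ptr) 0);
        return 0;
    }

    Build with something like "cc file.c -lmpfr -lgmp"; within the normal
    range this matches what a correctly rounded binary64 addition on the
    target would produce.
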
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Tue Sep 2 17:32:56 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    On 29/08/2025 16:54, Keith Thompson wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    yes this is the kind of sample I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system.  Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    Yes, it can be hard to achieve in some cases. For things like integer arithmetic, it's no serious challenge - floating point is the biggie for
    the challenge of getting the details correct when the host and the
    target are different. (And even if the compiler is native, different floating point options can lead to significantly different results.)

    Compilers have to make sure they can do compile-time evaluation that is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.

    AFAIK in normal mode gcc does not consider differences between compile-time
    and run-time evaluation of floating point constants as a bug. And
    they may differ, with compile-time evaluation usually giving more
    accuracy. OTOH they care very much that the cross-compiler and the native
    compiler produce the same results. So they do not use native
    floating point arithmetic to evaluate constants. Rather, both the
    native compiler and the cross-compiler use the same portable library
    (that is, MPFR). One can probably request a more strict mode.
    If it is available, then I do not know how it is done. One
    possible approach is to delay anything non-trivial to runtime.
    Non-trivial for floating point likely means transcendental
    functions: different libraries almost surely will produce different
    results, and for legal reasons alone the compiler cannot assume access
    to the target library.

    The ordinary four arithmetic operations for IEEE are easy: rounding is
    handled by MPFR, and things like overflow, infinities, etc. are just
    a bunch of tedious special cases. But transcendental functions
    usually do not have well-specified rounding behaviour, so exact
    rounding in MPFR is of no help when trying to reproduce results
    from runtime libraries.

    Old (and possibly some new) embedded targets are in a sense more
    "interesting", as they implemented basic operations in software,
    frequently taking some shortcuts to gain speed.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rosario19@Ros@invalid.invalid to comp.lang.c on Tue Sep 2 21:22:13 2025
    From Newsgroup: comp.lang.c

    On Mon, 1 Sep 2025 10:10:17 +0200, David Brown wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    Em 29/08/2025 16:54, Keith Thompson escreveu:
    Thiago Adams writes:
    My curiosity is the following:

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)
    ...
    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    Yes, it can be hard to achieve in some cases. For things like integer
    arithmetic, it's no serious challenge - floating point is the biggie for
    the challenge of getting the details correct when the host and the
    target are different.

    floating point is not IEEE-standardized?

    (And even if the compiler is native, different
    floating point options can lead to significantly different results.)

    Compilers have to make sure they can do compile-time evaluation that is
    bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used to
    simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.



    I think there are few problems when one uses a fixed-size type such as
    u32 or uint32_t.
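
    A quick sketch of why the exact-width types sidestep the issue (assuming
    the implementation provides uint32_t at all):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* Wraps modulo 2^32 to 705032704 on every implementation that
           provides uint32_t, whatever the width of plain int. */
        uint32_t x = (uint32_t)(UINT32_C(4000000000) + UINT32_C(1000000000));
        printf("%" PRIu32 "\n", x);
        return 0;
    }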

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Tue Sep 2 22:40:57 2025
    From Newsgroup: comp.lang.c

    On Tue, 2 Sep 2025 16:58:45 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-09-02, Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com>
    wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use
    it as an optimisation - any risk of an error is a bug in the
    compiler. I don't know about other compilers, but gcc has a
    /huge/ library that is used to simulate floating point on a wide
    range of targets and options, precisely so that it can get this
    right.

    What library are you referring to? Is it something internal to
    gcc? I know gcc uses GMP, but that's not really floating-point
    emulation.

    GCC also uses not only GNU GMP but also GNU MPFR.


    MPFR is of no help when you want to emulate the exact behavior of a
    particular hardware format.

    Then why is it there?

    Most likely because people that think that compilers make extraordinary
    effort in order to match FP results evaluated at compile time with
    those evaluated at run time do not know what they are talking about.
    As suggested above by Waldek Hebisch, compilers are quite happy to do compile-time evaluation at higher (preferably much higher) precision
    than at run time.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Tue Sep 2 22:59:59 2025
    From Newsgroup: comp.lang.c

    On Tue, 2 Sep 2025 17:32:56 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    On 29/08/2025 16:54, Keith Thompson wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that
    runs the code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise. (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    yes this is the kind of sample I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system. Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work
    correctly.
    So in theory it has to be the same result. This may be hard to
    achieve.

    Yes, it can be hard to achieve in some cases. For things like
    integer arithmetic, it's no serious challenge - floating point is
    the biggie for the challenge of getting the details correct when
    the host and the target are different. (And even if the compiler
    is native, different floating point options can lead to
    significantly different results.)

    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use it
    as an optimisation - any risk of an error is a bug in the compiler.
    I don't know about other compilers, but gcc has a /huge/ library
    that is used to simulate floating point on a wide range of targets
    and options, precisely so that it can get this right.

    AFAIK in normal mode gcc does not consider differences between
    compile-time and run-time evaluation of floating point constants as a
    bug. And they may differ, with compile-time evaluation usually giving
    more accuracy. OTOH they care very much that the cross-compiler and the
    native compiler produce the same results.
    For the majority of "interesting" targets, native compilers do not exist.
    So they do not use native
    floating point arithmetic to evaluate constants. Rather, both
    the native compiler and the cross-compiler use the same portable library
    (that is, MPFR). One can probably request a more strict mode.
    If it is available, then I do not know how it is done. One
    possible approach is to delay anything non-trivial to runtime.
    I certainly would not be happy if a compiler that I am using for embedded
    targets, which typically do not have hardware support for 'double', would
    fail to evaluate DP constant expressions at compile time.
    Luckily, it never happens.
    Non-trivial for floating point likely means transcendental
    functions: different libraries almost surely will produce different
    results, and for legal reasons alone the compiler cannot assume access
    to the target library.

    Right now in C, including C23, transcendental functions cannot be part
    of a constant expression.
    The ordinary four arithmetic operations for IEEE are easy: rounding is
    handled by MPFR, and things like overflow, infinities, etc. are just
    a bunch of tedious special cases. But transcendental functions
    usually do not have well-specified rounding behaviour, so exact
    rounding in MPFR is of no help when trying to reproduce results
    from runtime libraries.

    Old (and possibly some new) embedded targets are in a sense more "interesting", as they implemented basic operations in software,
    frequently taking some shortcuts to gain speed.

    Why "some new"? Ovewhelming majority of microcontrollers, both old and
    new, do not implement double precision FP math in hardware.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Wed Sep 3 08:46:27 2025
    From Newsgroup: comp.lang.c

    On 02/09/2025 21:40, Michael S wrote:
    On Tue, 2 Sep 2025 16:58:45 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-09-02, Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com>
    wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use
    it as an optimisation - any risk of an error is a bug in the
    compiler. I don't know about other compilers, but gcc has a
    /huge/ library that is used to simulate floating point on a wide
    range of targets and options, precisely so that it can get this
    right.

    What library are you referring to? Is it something internal to
    gcc? I know gcc uses GMP, but that's not really floating-point
    emulation.

    GCC also uses not only GNU GMP but also GNU MPFR.


    MPFR is of no help when you want to emulate the exact behavior of a
    particular hardware format.

    Then why is it there?

    Most likely because people that think that compilers make extraordinary effort in order to match FP results evaluated at compile time with
    those evaluated at run time do not know what they are talking about.
    As suggested above by Waldek Hebisch, compilers are quite happy to do compile-time evaluation at higher (preferably much higher) precision
    than at run time.


    Doing the calculations at higher precision does not necessarily help.

    If a run-time calculation can give slightly inaccurate results because
    the rounding errors happen to build up that way, a compile-time
    calculation has to replicate that if it is a valid optimisation.

    I don't know the details of how GCC achieves this, but I /do/ know that
    a significant body of code is used to get this right - across widely
    different targets, and supporting a range of different floating point
    options. If the compiler can't see how to get it bit-perfect at compile
    time, it has to do the calculation at run-time.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From aph@aph@littlepinkcloud.invalid to comp.lang.c on Wed Sep 3 09:16:56 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> wrote:
    On 01/09/2025 23:11, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation that
    is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used
    to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.

    What library are you referring to? Is it something internal to gcc?
    I know gcc uses GMP, but that's not really floating-point emulation.


    I am afraid I don't know the details here, and to what extent it is
    internal to the GCC project or external. I /think/, but I could easily
    be wrong, that general libraries like GMP are used for the actual calculations, while there is GCC-specific stuff to make sure things
    match up with the target details.

    Indeed. There's emulation for everything, even decimal floating
    point.

    See https://github.com/gcc-mirror/gcc/blob/master/gcc/real.cc

    Andrew.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Thu Sep 4 12:35:08 2025
    From Newsgroup: comp.lang.c

    Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 2 Sep 2025 17:32:56 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wrote:

    David Brown <david.brown@hesbynett.no> wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    On 29/08/2025 16:54, Keith Thompson wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that
    runs the code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    yes this is the kind of sample I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system.  Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work
    correctly.
    So in theory it has to be the same result. This may be hard to
    achieve.

    Yes, it can be hard to achieve in some cases. For things like
    integer arithmetic, it's no serious challenge - floating point is
    the biggie for the challenge of getting the details correct when
    the host and the target are different. (And even if the compiler
    is native, different floating point options can lead to
    significantly different results.)

    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use it
    as an optimisation - any risk of an error is a bug in the compiler.
    I don't know about other compilers, but gcc has a /huge/ library
    that is used to simulate floating point on a wide range of targets
    and options, precisely so that it can get this right.

    AFAIK in normal mode gcc does not consider differences between
    compile-time and run-time evaluation of floating point constants as a
    bug. And they may differ, with compile-time evaluation usually giving
    more accuracy. OTOH they care very much that the cross-compiler and the
    native compiler produce the same results.

    For the majority of "interesting" targets, native compilers do not exist.

    So they do not use native
    floating point arithmetic to evaluate constants. Rather, both
    the native compiler and the cross-compiler use the same portable library
    (that is, MPFR). One can probably request a more strict mode.
    If it is available, then I do not know how it is done. One
    possible approach is to delay anything non-trivial to runtime.


    I certainly would not be happy if a compiler that I am using for embedded targets, which typically do not have hardware support for 'double', would
    fail to evaluate DP constant expressions at compile time.
    Luckily, it never happens.

    Non-trivial for floating point likely means transcendental
    functions: different libraries almost surely will produce different
    results, and for legal reasons alone the compiler cannot assume access
    to the target library.


    Right now in C, including C23, transcendental functions cannot be part
    of a constant expression.

    That is irrelevant to the current question. Computing constants at
    compile time is an optimization, and compilers do this also for
    transcendental constants.

    The ordinary four arithmetic operations for IEEE are easy: rounding is
    handled by MPFR, and things like overflow, infinities, etc. are just
    a bunch of tedious special cases. But transcendental functions
    usually do not have well-specified rounding behaviour, so exact
    rounding in MPFR is of no help when trying to reproduce results
    from runtime libraries.

    Old (and possibly some new) embedded targets are in a sense more
    "interesting", as they implemented basic operations in software,
    frequently taking some shortcuts to gain speed.


    Why "some new"? Ovewhelming majority of microcontrollers, both old and
    new, do not implement double precision FP math in hardware.

    Old libraries often took shortcuts to make them faster. For new
    targets there is pressure to implement precise rounding, so I do
    not know if new targets are doing "interesting" things. Also,
    while a minority of embedded targets has hardware floating point,
    the need for _fast_ floating point is limited, and when it is needed
    the application may use a processor with hardware floating point.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Thu Sep 4 17:21:24 2025
    From Newsgroup: comp.lang.c

    On Thu, 4 Sep 2025 12:35:08 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wrote:
    Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 2 Sep 2025 17:32:56 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wrote:

    David Brown <david.brown@hesbynett.no> wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    On 29/08/2025 16:54, Keith Thompson wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on
    the machine that compiles the code compared with the machine
    that runs the code.
    What happens in this case?

    The solution I can think of is emulation when evaluating
    constant expressions. But I don't know if any compiler does it
    this way.

    For example, 65535u + 1u will evaluate to 0u if the target
    system has 16-bit int, 65536u otherwise. (I picked an example
    that doesn't depend on UINT_MAX or any other macros defined in
    the standard library.)

    yes this is the kind of sample I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system. Whether they do
    so by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work
    correctly.
    So in theory it has to be the same result. This may be hard to
    achieve.

    Yes, it can be hard to achieve in some cases. For things like
    integer arithmetic, it's no serious challenge - floating point is
    the biggie for the challenge of getting the details correct when
    the host and the target are different. (And even if the compiler
    is native, different floating point options can lead to
    significantly different results.)

    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use it
    as an optimisation - any risk of an error is a bug in the
    compiler. I don't know about other compilers, but gcc has a
    /huge/ library that is used to simulate floating point on a wide
    range of targets and options, precisely so that it can get this
    right.

    AFAIK in normal mode gcc does not consider differences between
    compile-time and run-time evaluation of floating point constants
    as a bug. And they may differ, with compile-time evaluation
    usually giving more accuracy. OTOH they care very much that the
    cross-compiler and the native compiler produce the same results.

    For the majority of "interesting" targets, native compilers do not exist.

    So they do not use native
    floating point arithmetic to evaluate constants. Rather, both
    the native compiler and the cross-compiler use the same portable library
    (that is, MPFR). One can probably request a more strict mode.
    If it is available, then I do not know how it is done. One
    possible approach is to delay anything non-trivial to runtime.


    I certainly would not be happy if a compiler that I am using for
    embedded targets, which typically do not have hardware support for
    'double', would fail to evaluate DP constant expressions at compile
    time. Luckily, it never happens.

    Non-trivial for floating point likely means transcendental
    functions: different libraries almost surely will produce different
    results, and for legal reasons alone the compiler cannot assume access
    to the target library.


    Right now in C, including C23, transcendental functions cannot be
    part of a constant expression.

    That is irrelevant to the current question. Computing constants at
    compile time is an optimization, and compilers do this also for
    transcendental constants.

    Can you demonstrate it done by any compiler for any transcendental
    function other than sqrt() or a trivial case like
    sin(0)/tan(0)/asin(0)/atan(0)?
    The ordinary four arithmetic operations for IEEE are easy: rounding is
    handled by MPFR, and things like overflow, infinities, etc. are just
    a bunch of tedious special cases. But transcendental functions
    usually do not have well-specified rounding behaviour, so exact
    rounding in MPFR is of no help when trying to reproduce results
    from runtime libraries.

    Old (and possibly some new) embedded targets are in a sense more
    "interesting", as they implemented basic operations in software,
    frequently taking some shortcuts to gain speed.


    Why "some new"? Ovewhelming majority of microcontrollers, both old
    and new, do not implement double precision FP math in hardware.

    Old libraries often took shortcuts to make them faster. For new
    targets there is pressure to implement precise rounding, so I do
    not know if new targets are doing "interesting" things. Also,
    while a minority of embedded targets has hardware floating point,
    the need for _fast_ floating point is limited, and when it is needed
    the application may use a processor with hardware floating point.

    If you put it that way, then I agree. Newer tools rarely if ever cut
    corners in the implementation of IEEE binary64. That is, they can cut
    "other" corners, like generation of the INEXACT flag, but not
    precision/rounding.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Thiago Adams@thiago.adams@gmail.com to comp.lang.c on Thu Sep 4 11:24:24 2025
    From Newsgroup: comp.lang.c

    On 9/4/2025 9:35 AM, Waldek Hebisch wrote:
    Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 2 Sep 2025 17:32:56 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wrote:

    David Brown <david.brown@hesbynett.no> wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    On 29/08/2025 16:54, Keith Thompson wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that
    runs the code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    yes this is the kind of sample I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system.  Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work
    correctly.
    So in theory it has to be the same result. This may be hard to
    achieve.

    Yes, it can be hard to achieve in some cases. For things like
    integer arithmetic, it's no serious challenge - floating point is
    the biggie for the challenge of getting the details correct when
    the host and the target are different. (And even if the compiler
    is native, different floating point options can lead to
    significantly different results.)

    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use it
    as an optimisation - any risk of an error is a bug in the compiler.
    I don't know about other compilers, but gcc has a /huge/ library
    that is used to simulate floating point on a wide range of targets
    and options, precisely so that it can get this right.

    AFAIK in normal mode gcc does not consider differences between
    compile-time and run-time evaluation of floating point constants as a
    bug. And they may differ, with compile-time evaluation usually giving
    more accuracy. OTOH they care very much that the cross-compiler and the
    native compiler produce the same results.

    For the majority of "interesting" targets, native compilers do not exist.

    So they do not use native
    floating point arithmetic to evaluate constants. Rather, both
    the native compiler and the cross-compiler use the same portable library
    (that is, MPFR). One can probably request a more strict mode.
    If it is available, then I do not know how it is done. One
    possible approach is to delay anything non-trivial to runtime.


    I certainly would not be happy if a compiler that I am using for embedded
    targets, which typically do not have hardware support for 'double', would
    fail to evaluate DP constant expressions at compile time.
    Luckily, it never happens.

    Non-trivial for floating point likely means transcendental
    functions: different libraries almost surely will produce different
    results, and for legal reasons alone the compiler cannot assume access
    to the target library.


    Right now in C, including C23, transcendental functions cannot be part
    of a constant expression.

    That is irrelevant to the current question. Computing constants at
    compile time is an optimization, and compilers do this also for
    transcendental constants.

    The compiler requires constant expressions in places like enumerators,
    file-scope variables, and switch cases.

    To compute these values at runtime the compiler would have to use
    variables and initialize them before main.
    I don't think this happens in practice.
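
    A compilable sketch of those contexts, reusing the 65535u + 1u example
    from earlier in the thread (the enumerator and the case label must be
    pinned down during translation, with the target's semantics):

    #include <stdio.h>

    enum { SLOTS = (65535u + 1u == 0u) ? 4 : 8 };   /* enumerator             */

    static unsigned limit = 65535u + 1u;            /* file-scope initializer */

    static int classify(unsigned x)
    {
        switch (x) {
        case 65535u + 1u:                           /* case label             */
            return 1;
        default:
            return 0;
        }
    }

    int main(void)
    {
        printf("%d %u %d\n", SLOTS, limit, classify(0u));
        return 0;
    }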



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Fri Sep 5 14:37:30 2025
    From Newsgroup: comp.lang.c

    On Thu, 4 Sep 2025 17:21:24 +0300
    Michael S <already5chosen@yahoo.com> wrote:

    On Thu, 4 Sep 2025 12:35:08 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wrote:

    Michael S <already5chosen@yahoo.com> wrote:

    Non-trivial for floating point likely means transcendental
    functions: different libraries almost surely will produce
    different results, and for legal reasons alone the compiler cannot
    assume access to the target library.


    Right now in C, including C23, transcendental functions cannot be
    part of a constant expression.

    That is irrelevant to the current question. Computing constants at
    compile time is an optimization, and compilers do this also for transcendental constants.


    Can you demonstrate it done by any compiler for any transcendental
    function other than sqrt() or a trivial case like
    sin(0)/tan(0)/asin(0)/atan(0)?



    I tested it with 3 compilers that I have installed on my old home PC.

    #include <math.h>

    double foo(void)
    {
        return sin(1);
    }

    And indeed Waldek Hebisch is correct - both gcc 14.2.0 and clang 20.1.1
    evaluated sin() at compile time. Only MSVC 19.30.30706 (the default
    compiler of VS2022) generated a run-time call to sin().

    gcc went one step further and accepts the following code.

    double bar = sin(1); // at file scope

    My understanding is that this code is *not* legal C, but language
    lawyers among us would say that it is merely 'undefined' and as such
    does not require rejection.

    Then I used this non-standard feature of gcc to test the original
    hypothesis:


    test.c:

    #include <stdio.h>
    #include <math.h>

    double bar_x[8] = {
        5.233606290070041966e-01,
        8.168152180671991447e-01,
        9.888599151404926513e-01,
        8.049170023013325626e-01,
        1.818135817921265052e-01,
        4.700523654044072019e-01,
        8.323912344266561902e-01,
        8.561549002973436462e-01,
    };

    double bar_y[8] = {
        sin(5.233606290070041966e-01),
        sin(8.168152180671991447e-01),
        sin(9.888599151404926513e-01),
        sin(8.049170023013325626e-01),
        sin(1.818135817921265052e-01),
        sin(4.700523654044072019e-01),
        sin(8.323912344266561902e-01),
        sin(8.561549002973436462e-01),
    };

    int main()
    {
        double y[8];
        for (int i = 0; i < 8; ++i)
            y[i] = sin(bar_x[i]);

        for (int i = 0; i < 8; ++i)
            printf("%23a %s %23a\n",
                   y[i], y[i] == bar_y[i] ? "==" : "!=", bar_y[i]);
    }

    $ gcc -Wall -O test.c
    $ ./a.exe
    0x1.ffc9ee7315e4ep-2 != 0x1.ffc9ee7315e4fp-2
    0x1.753b7a26d5dd7p-1 != 0x1.753b7a26d5dd6p-1
    0x1.abb9888c016cep-1 != 0x1.abb9888c016cdp-1
    0x1.71092c6fe47e5p-1 != 0x1.71092c6fe47e4p-1
    0x1.724e6115ccc9fp-3 != 0x1.724e6115ccca0p-3
    0x1.cfcda939a9867p-2 != 0x1.cfcda939a9868p-2
    0x1.7aa562d8eaa82p-1 != 0x1.7aa562d8eaa81p-1
    0x1.82ba63661572fp-1 != 0x1.82ba636615730p-1


    Conclusion: Waldek Hebisch is correct.
    At least in the environment that I tested, gcc makes no special effort
    to match the results of compile-time evaluation of transcendental
    functions with the results of run-time evaluation.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Fri Sep 5 11:17:47 2025
    From Newsgroup: comp.lang.c

    Michael S <already5chosen@yahoo.com> writes:
    [...]
    gcc went one step further and accepts the following code.

    double bar = sin(1); // at file scope

    My understanding is that this code is *not* legal C, but language
    lawyers among us would say that it is merely 'undefined' and as such
    does not require rejection.
    [...]

    That declaration at file scope violates a constraint. It doesn't
    require rejection (nothing does, other than a #error directive), but it
    does require a diagnostic.

    gcc is not fully conforming by default. With "-pedantic", it produces
    the required diagnostic:

    c.c:2:14: warning: initializer element is not a constant expression [-Wpedantic]
    2 | double bar = sin(1);
    | ^~~
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Wuns Haerst@Wuns.Haerst@wurstfabrik.at to comp.lang.c on Sat Sep 6 04:57:26 2025
    From Newsgroup: comp.lang.c

    Am 29.08.2025 um 20:46 schrieb Thiago Adams:
    My curiosity is the following:

    Given a constant expression, the result maybe be different in the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant expressions. But I don't if any compiler is doing this way.




    Cross compilation compiles with the machine code of the local
    machine and embeds a runtime translator in the executable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Fri Sep 5 20:05:37 2025
    From Newsgroup: comp.lang.c

    Wuns Haerst <Wuns.Haerst@wurstfabrik.at> writes:
    Am 29.08.2025 um 20:46 schrieb Thiago Adams:
    My curiosity is the following:
    Given a constant expression, the result maybe be different in the
    machine that compiles the code compared with the machine that runs
    the code.
    What happens in this case?
    The solution I can think of is emulation when evaluating constant
    expressions. But I don't if any compiler is doing this way.

    Cross compilation compiles with the machine code of the local
    machine and embeds a runtime translator in the executable.

    Where did you get that idea?
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Sat Sep 6 03:35:13 2025
    From Newsgroup: comp.lang.c

    On 2025-09-06, Wuns Haerst <Wuns.Haerst@wurstfabrik.at> wrote:
    Am 29.08.2025 um 20:46 schrieb Thiago Adams:
    My curiosity is the following:

    Given a constant expression, the result maybe be different in the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't if any compiler is doing this way.

    Cross compilation compiles with the machine code of the local
    machine and embeds a runtime translator in the executable.

    While such a technology is possible, that isn't an example of cross-compilation.

    Cross-compilation simply means machine B is producing machine
    code for machine A. For instance an x86-64 build machine is
    producing code for ARM64.

    Compilers themselves can be cross-compiled, which brings in
    a third machine: the target machine of the compiler being
    compiled.

    When machine B is producing machine code that runs on A,
    and that machine code to be run on A is a compiler targeted to produce
    code for machine C, that is called a "Canadian cross-compile".

    E.g. x86-64 build machine cross compiles a compiler to run
    on PPC64, and that compiler will generate code for ARM64.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From BGB@cr88192@gmail.com to comp.lang.c on Sat Sep 6 02:15:59 2025
    From Newsgroup: comp.lang.c

    On 9/2/2025 2:22 PM, Rosario19 wrote:
    On Mon, 1 Sep 2025 10:10:17 +0200, David Brown wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    Em 29/08/2025 16:54, Keith Thompson escreveu:
    Thiago Adams writes:
    My curiosity is the following:

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)
    ...
    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    so in
    So in theory it has to be the same result. This may be hard do achieve.


    Yes, it can be hard to achieve in some cases. For things like integer
    arithmetic, it's no serious challenge - floating point is the biggie for
    the challenge of getting the details correct when the host and the
    target are different.

    floating point is not IEEE-standardized?


    It is... But...

    IEEE-754 aims pretty high.


    How well real-world systems achieve it is still subject to some level
    of variability.

    On many targets, while full IEEE-754 semantics are possible in theory, programmers disable some features to make the FPU "less slow".

    And, programmers may not always be entirely aware that they had done so
    (or, more often, that the compiler, or some other 3rd-party library they
    are using, had done so). Or, on some targets, it may be the default
    setting at the OS level or similar.


    Some behaviors specified in IEEE-754 don't quite always manifest on real hardware in practice (such as strict 0.5 ULP rounding).

    Even high-end systems, like desktop PCs, are not immune to this.

    I had before noted, when comparing:
      Double-precision values produced by my own FPU design
        (where I had cut a whole lot of corners to keep costs low);
      Double-precision values produced by the FPU in my PC (via SSE);
      Double-precision values calculated according to IEEE rules
        (internally using much wider calculations);

    that there were cases where my desktop PC's results agreed with my
    corner-cutting FPU, but *not* with the value calculated under strict
    adherence to IEEE-754 rules.

    Sometimes, it is subtle:
      In theory, the FPU can do the full version;
      But the hardware doesn't "really" do so directly,
        and instead relies on hidden traps or firmware/microcode.
      Often there is a flag to control these behaviors;
      Compiling with some optimizations may, quietly,
        set the FPU to not so strictly conform to IEEE rules...


    The more obvious and well-known issue is DAZ/FTZ:
    In IEEE-754, if the exponent field is 0, the value represents one of a
    range of denormalized (subnormal) values, and math may still be done in
    this "almost but not quite 0" state;
    But in hardware this is expensive, with the cheaper option being to
    just assume that, for exponent 0, the whole range is 0.

    Maybe less obvious is that, when this happens, the FPU might also stop
    giving 0.5 ULP rounding and instead give, say, 0.501 ULP (where there
    is roughly a 1/1000 chance that the result is not rounded as the standard
    would mandate). While it might not seem like much, the cost difference
    between an FPU that gives around 0.5 ULP and one that can actually give
    0.5 ULP in hardware is quite significant.
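
    To make the DAZ/FTZ point concrete, here is a hedged sketch of mine
    (assuming an x86-64 toolchain with the SSE intrinsics available; nothing
    here is from the original posts): the same division produces a subnormal
    by default but 0 once flush-to-zero is switched on.

        #include <stdio.h>
        #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */

        static double tiny_quotient(double x)
        {
            return x / 1e300;    /* subnormal result for x around 1e-10 */
        }

        int main(void)
        {
            volatile double x = 1e-10;   /* volatile: keep the math at run time */

            printf("default : %g\n", tiny_quotient(x));

            /* Flush subnormal results to zero, as "fast math" setups often do. */
            _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);

            printf("FTZ     : %g\n", tiny_quotient(x));
            return 0;
        }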


    If one assumes strict determinism between machines regarding floating
    point calculations, it is a headache.

    If one instead assumes "close enough, probably fine", then it is a lot
    easier.


    In practice, it isn't always a bad thing:
    You can't change fundamental implementation limits;
    In most use cases, you would much rather have fast but slightly
    inaccurate, than more strict accuracy but significantly slower.

    Perfection here is not always attainable.



    Like, we can just be happy enough:
      Pretty much everyone uses the same representations;
      The differences are usually so small as to be unnoticeable in practice;
      Pretty much any FPU one is likely to use in practice, if given values
        representing exact integers, with a result that can be represented as
        an exact integer (when the width of the integer value is less than the
        width of the mantissa), will give an exact result (this was not always
        true in the early days, *1).

    *1: I consider this one a limit for how much corner cutting is allowed
    in a scalar FPU. However, I feel worse precision can be considered
    acceptable for things like SIMD vector operations.
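
    As a small self-check of that point (my own example, not from the
    thread): integer-valued doubles stay exact as long as the values fit in
    the 53-bit mantissa, so a comparison like this should hold on any
    IEEE-754 binary64 FPU.

        #include <stdio.h>

        int main(void)
        {
            double x = 67108864.0;   /* 2^26, exactly representable */
            double y = x * x;        /* 2^52, still within the 53-bit mantissa */

            printf("%d\n", y == 4503599627370496.0);   /* expect 1 */
            return 0;
        }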

    But, if the compiler tries to auto vectorize floating point math on a
    target where the SIMD unit does not give results suitable for the
    acceptable accuracy for scalar operations, this is bad.


    Sometimes, the problem exists the other way as well:
    The ISA might provide a single-rounded FMA operator, and the compiler
    might implicitly merge "z=a*x+b;" into a single operation, which may in
    turn generate different results than if the FMUL and FADD had been
    performed separately.

    This is also bad, but some compilers (particularly GCC) are prone to try
    to do this whenever such an instruction is available in the ISA (forcing
    the programmer to realize that they need to use "-ffp-contract=off", or
    that such an option exists).
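
    To see the contraction issue in isolation, here is a hedged sketch of
    mine using the standard fma() from <math.h> to stand in for a hardware
    FMA (none of this is from the original posts):

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            volatile double a = 0.1, x = 10.0, b = -1.0;  /* keep it run-time */

            double separate = a * x + b;     /* FMUL then FADD: two roundings */
            double fused    = fma(a, x, b);  /* single rounding, like hardware FMA */

            /* With no contraction these differ: 'separate' comes out as zero,
               while 'fused' keeps the tiny residue of 0.1*10 that the
               intermediate rounding threw away.  With -ffp-contract=fast and
               an FMA-capable target, the compiler may fuse the first line
               too, which is exactly the behavior described above. */
            printf("%a\n%a\n", separate, fused);
            return 0;
        }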





    (And even if the compiler is native, different
    floating point options can lead to significantly different results.)

    Compilers have to make sure they can do compile-time evaluation that is
    bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used to
    simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.



    I think there are few problems when one uses a fixed-size type such as
    u32 or unsigned int32_t.


    Luckily, for the most part, integer rules are reasonably well defined.

    Some compilers "try to get too clever for their own good" in these areas.


    The model I took in my compiler is that, for expression evaluation, it
    also models the types and then models some behaviors (like integer
    overflow, etc.) as expected for the type.

    There are some cases where things may differ between machines, like if a
    shift is outside the allowed range. Though it is common for modern
    compilers to generate a warning or error for this.


    When trying to port some old code before, I had to figure out one such
    case. It had been using a negative constant shift, and I needed to work
    out what exactly this did.

    Turns out the program was expecting that when the shift count is
    negative, the shift direction flips and the value is shifted by that
    amount in the opposite direction (even though on actual machines the
    typical behavior is for the shift count simply to be masked to the width
    of the type).

    so, for example:
        uint32_t x, y;
        y = x << (-1);
    might be expected to turn into:
        y = x << 31;
    matching typical x86 behavior, but not:
        y = x >> 1;
    which was what the program seemingly expected to happen here.
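
    A portable way to get the behavior that the old program wanted (my own
    hedged sketch, with a made-up helper name) is to handle the sign of the
    count explicitly instead of relying on whatever the hardware does with
    out-of-range counts:

        #include <stdint.h>

        /* Hypothetical helper: shift left by 'n'; a negative 'n' means shift
           right by -n, and counts of 32 or more simply give 0. */
        uint32_t shl_signed(uint32_t x, int n)
        {
            if (n >= 32 || n <= -32)
                return 0;
            if (n >= 0)
                return x << n;
            return x >> -n;
        }

    With that, y = shl_signed(x, -1) gives x >> 1 on every target, instead
    of whatever y = x << (-1) happens to do (which is undefined behavior in
    C anyway).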


    Some other edge cases involved integer overflow, such as what exactly
    happens with:
        int x, y;
        long z;
        // assume x and y hold values whose sum overflows
        z = x + y;

    where the program depended on 'x+y' wrapping on overflow first, and then
    being promoted to long. In this case, the compiler had mistakenly promoted
    early (and just used a 64-bit ADD instruction), but this caused a bug in
    the program...
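
    For what it is worth, the intended "wrap in 32 bits, then widen" behavior
    can be spelled out explicitly; this is my own sketch of roughly what a
    fixed-up version of such code could do, not the original program's code:

        #include <stdint.h>

        /* Wrap the sum in 32 bits first (unsigned arithmetic wraps by
           definition), then widen the wrapped result to 64 bits.  The
           unsigned-to-signed conversion is implementation-defined for
           out-of-range values, but the usual two's-complement compilers
           give the wrapped value. */
        int64_t add_wrap32_then_widen(int32_t x, int32_t y)
        {
            uint32_t wrapped = (uint32_t)x + (uint32_t)y;   /* mod 2^32 */
            return (int64_t)(int32_t)wrapped;
        }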

    Granted, the original program was written in a time when 'long' was
    32 bits, and it just sorta casually mixes them a lot. So, the possibility
    of early promotion to 64 bits was likely not originally a consideration
    (and the program still had a non-zero amount of code with K&R-style
    declaration syntax floating around as well).

    Basically, the usual sort of stuff from code that was written 30+ years
    ago (one then needs to find the right mix of command-line options to get
    this sort of code to build and sorta work with GCC; often some
    combination of "-fwrapv -fno-strict-aliasing -std=gnu89 ...").


    With any code that is much older than this, the porting effort is often
    more involved: before one can port it to 64 bits, one first needs to
    semi-port it to 32, and then look over the code carefully to figure out,
    say:
      Which 'int' needs to be converted to 'short' (depends on being 16-bit);
      Which 'int' needs to be left as 'int';
      Which 'int' needs to be converted to 'long' (e.g., pointer difference).

    Then there is usually also the fun that, in those days, people liked
    using lots of inline assembler and tended to assume the ability to poke
    directly at the hardware.


    In a few cases, it has exceeded my effort limits.

    So, most stuff I had ended up porting for my own uses had been from the
    90s, as this was (usually) more reasonable.

    XT-era 16-bit CGA/EGA stuff mostly isn't worth the hassle even
    in cases where I have the code.


    Then again, I guess I could always go and look to see whether anyone has
    done a modern source port. Looking online for one of the games, it seems
    to be another case where people went and wrote a ground-up clone of the
    original game engine rather than trying to port the 16-bit DOS-era code.


    But sometimes the "modern source ports" don't work for me either, as
    they tend to assume a more full-featured OS and modern hardware
    resources (like the ability to casually burn through lots of RAM),
    because "modern" coding practices suck and people can't be bothered not
    to burn excessive amounts of RAM, it seems. Sometimes I would like
    something that sticks pretty close to the original in terms of memory
    use and resource requirements (but it seems more popular to mimic the
    original, yet somehow still burn through a lot of RAM).



    Well, and there was one case where I could have ported the code (e.g.,
    for Wolf3D), but it wasn't available under the GPL or some other similar
    license (the license terms sucked), so I couldn't release the code if I
    did so. In this case, it was easier to modify the engine of a descendant
    game (ROTT), which was released under the GPL, to mimic the original
    game. But it doesn't use the original game's data files, and I couldn't
    legally distribute the recreated data files, so, ...

    In that case, I did try to recreate something with mock-up assets
    (partly derived from FreeDoom), but it was incomplete and far inferior.

    So, "ROTT engine pretending to be Wolf3D with WADs and assets derived
    from FreeDoom and some crappy placeholder levels" just kinda sucked...

    Somehow, when messing with the asset data, I figured out that the enemies
    in Wolf3D were "actually saying stuff"; the audio quality was generally
    poor enough to be largely unintelligible as speech.

    Though, I had noted that I seemingly have a hard time understanding
    speech at sample rates much lower than 16kHz. 16kHz seems to be a good
    tradeoff between "passable quality" and "doesn't waste too much space".
    I am possibly kinda weird though, in that my preference has mostly ended
    up being 16kHz ADPCM. Wolf3D had used 8kHz 8-bit PCM.


    Though, say, 8kHz ADPCM is OK when one doesn't care about quality, and
    it seems like ADPCM somehow "sharpens" the intelligibility of speech at
    low sample rates (while also using less space than 8-bit PCM).

    Though, it depends some on the ADPCM encoder. It generally works better
    in this case if one does a naive "try to get as close as possible to the
    original sample". Any attempt to "smooth" the approximation renders the
    speech unintelligible (actually, in this case it is seemingly almost
    better to intentionally overshoot to create ringing artifacts).

    I had also noted that my hearing perception and RMSE values seem to
    somewhat disagree on this point (getting the best RMSE gives very muffled
    results, but higher intelligibility seems to give worse RMSE in this case).

    ...


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Wuns Haerst@Wuns.Haerst@wurstfabrik.at to comp.lang.c on Sun Sep 7 13:09:51 2025
    From Newsgroup: comp.lang.c

    Am 06.09.2025 um 05:05 schrieb Keith Thompson:
    Wuns Haerst <Wuns.Haerst@wurstfabrik.at> writes:
    Am 29.08.2025 um 20:46 schrieb Thiago Adams:
    My curiosity is the following:
    Given a constant expression, the result maybe be different in the
    machine that compiles the code compared with the machine that runs
    the code.
    What happens in this case?
    The solution I can think of is emulation when evaluating constant
    expressions. But I don't if any compiler is doing this way.

    Cross compilation compiles with the machine code of the local
    machine and embeds a runtime translator in the executable.

    Where did you get that idea?

    That has been done since the 70s, because cross-compiled code would
    often behave inconsistently compared with the source machine because of
    the different ISA.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Sun Sep 7 13:21:17 2025
    From Newsgroup: comp.lang.c

    Wuns Haerst <Wuns.Haerst@wurstfabrik.at> writes:
    Am 06.09.2025 um 05:05 schrieb Keith Thompson:
    Wuns Haerst <Wuns.Haerst@wurstfabrik.at> writes:
    Am 29.08.2025 um 20:46 schrieb Thiago Adams:
    My curiosity is the following:
    Given a constant expression, the result maybe be different in the
    machine that compiles the code compared with the machine that runs
    the code.
    What happens in this case?
    The solution I can think of is emulation when evaluating constant
    expressions. But I don't if any compiler is doing this way.

    Cross compilation compiles with the machine code of the local
    machine and embeds a runtime translator in the executable.
    Where did you get that idea?

    That has been done since the 70s, because cross-compiled code would
    often behave inconsistently compared with the source machine because of
    the different ISA.

    I've used a number of cross-compilers. I've never heard of one
    that embeds a runtime translator in the executable.

    For example, an ARM cross-compiler running on an x86-64 system will
    generate ARM machine code. Running the generated code on the ARM
    target does not involve any kind of translation (which would be
    impractical for small target systems anyway).

    Do you have a concrete example of the kind of cross-compiler you're
    talking about?
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Sun Sep 7 22:01:43 2025
    From Newsgroup: comp.lang.c

    On 2025-09-07, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Wuns Haerst <Wuns.Haerst@wurstfabrik.at> writes:
    Am 06.09.2025 um 05:05 schrieb Keith Thompson:
    Wuns Haerst <Wuns.Haerst@wurstfabrik.at> writes:
    Am 29.08.2025 um 20:46 schrieb Thiago Adams:
    My curiosity is the following:
    Given a constant expression, the result maybe be different in the
    machine that compiles the code compared with the machine that runs
    the code.
    What happens in this case?
    The solution I can think of is emulation when evaluating constant
    expressions. But I don't if any compiler is doing this way.

    Cross compilation compiles with the machine code of the local
    machine and embeds a runtime translator in the executable.
    Where did you get that idea?

    That has been done since the 70s, because cross-compiled code would
    often behave inconsistently compared with the source machine because of
    the different ISA.

    I've used a number of cross-compilers. I've never heard of one
    that embeds a runtime translator in the executable.

    Producing a bundle consisting of an executable and a VM that understands
    its instruction set is simply not called cross-compilation, whether
    you've seen it or not.

    For example, an ARM cross-compiler running on an x86-64 system will
    generate ARM machine code. Running the generated code on the ARM
    target does not involve any kind of translation (which would be
    impractical for small target systems anyway).

    Do you have a concrete example of the kind of cross-compiler you're
    talking about?

    I estimate that in about one week, I could produce a packaging system
    whereby we take, say, an ARM64 program, catenate it onto a qemu-arm
    executable (say on an Intel box) such that this resulting executable
    will run that program, as if qemu-arm were invoked on a stand-alone
    file. qemu-arm will just have to map the program out of the appended
    part of its own executable, rather than processing a path argument.

    This sort of thing has undoubtedly been done here and there.

    Come to think of it, when the Intel 4004 processor was used for its
    first product, the Busicom 141-PF calculator, the actual calculator
    firmware was written not in 4004 machine language but in a more expressive/capable virtual machine instruction set. The firmware image
    bundled the application code together with a 4004 implementation of the
    virtual machine.

    We wouldn't say that the VM program was cross-compiled to the 4004,
    though, haha!
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.c on Wed Sep 10 06:58:43 2025
    From Newsgroup: comp.lang.c

    On Thu, 4 Sep 2025 17:21:24 +0300, Michael S wrote:

    Newer tools rarely if ever cut corners in the implementation of IEEE
    binary64. That is, they can cut "other" corners, like generation of the
    INEXACT flag, but not precision/rounding.

    Haven’t the hardware/software engineers yet learned the importance of
    implementing *every* part of the IEEE 754 spec? That includes INEXACT.

    Back in the 1990s we had this attitude of “the whole spec’s too
    complicated, who needs it all anyway?” Bit by bit the lesson was learnt
    that every feature of the spec was there for a good reason. Maybe the
    lesson hasn’t been completely learnt yet?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rosario19@Ros@invalid.invalid to comp.lang.c on Thu Sep 11 00:15:11 2025
    From Newsgroup: comp.lang.c

    On Sat, 6 Sep 2025 02:15:59 -0500, BGB <> wrote:
    [...]

    *1: I consider this one a limit for how much corner cutting is allowed
    in a scalar FPU. However, I feel worse precision can be considered
    acceptable for things like SIMD vector operations.

    I'm not an expert, but I think rounding a number is an operation only
    for printing a number on the screen; in any other context, the way I see
    it, it is always an error. It adds error to errors.

    But, if the compiler tries to auto vectorize floating point math on a
    target where the SIMD unit does not give results suitable for the
    acceptable accuracy for scalar operations, this is bad.

    If an algorithm converges to a number then, the way I see it, it should
    converge right down to the last bit.

    [...]

    There are some cases where things may differ between machines, like if a
    shift is outside the allowed range. Though it is common for modern
    compilers to generate a warning or error for this.

    f(a)

    For this to be portable, the size in bits of "a" has to be the same, and
    the function f has to be the same.

    [...]

    --- Synchronet 3.21a-Linux NewsLink 1.2