• Re: Bare Metal C vs. libc: Is the overhead worth it on small MCUs?

    From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.c on Sat Apr 4 22:29:05 2026
    From Newsgroup: comp.lang.c

    On Tue, 17 Mar 2026 12:30:14 +0300, Oguz Kaan Ocal wrote:

    At 32KB, you want to spend your time fighting the memory
    constraints, not the toolchain.

    At 32kiB, I would write it all in assembler. Seems like it would be
    worth the trouble to hand-tune every instruction to get the size down
    and the speed up.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.c on Sat Apr 4 22:30:49 2026
    From Newsgroup: comp.lang.c

    On Tue, 17 Mar 2026 22:36:48 +0100, Bonita Montero wrote:

    Am 17.03.2026 um 19:34 schrieb Scott Lurndal:

    Startup performance (think BIOS and OS boot time, for example) is
    very important to real customers. It is particularly important for
    industrial and commercial microcontrollers in appliances and
    industrial applications.

    Yes, 1us vs 1ms.

    Or even better, 1µs.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Mon Apr 6 12:21:01 2026
    From Newsgroup: comp.lang.c

    On 05/04/2026 00:29, Lawrence D’Oliveiro wrote:
    On Tue, 17 Mar 2026 12:30:14 +0300, Oguz Kaan Ocal wrote:

    At 32KB, you want to spend your time fighting the memory
    constraints, not the toolchain.

    At 32kiB, I would write it all in assembler. Seems like it would be
    worth the trouble to hand-tune every instruction to get the size down
    and the speed up.

    No, it would not. The time when that made sense is decades past.
    People use C (and even C++) for microcontrollers with a lot less than
    32KB of flash. You need a decent compiler, you need to know how to
    use it properly, and you need to know how to write code appropriate
    for such devices - but you do it in C, not assembly.
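    To make that concrete, here is a minimal sketch - illustrative only,
    the polynomial 0x07 and the names are this example's own, not from
    any post in the thread - of the kind of size-conscious C a small
    flash part pushes you toward: a bit-wise CRC-8 computed in a loop
    rather than via the usual 256-byte lookup table, trading a few
    cycles per byte for flash space.

    ```c
    /* Illustrative sketch: size-conscious C for a small MCU.  A bit-wise
     * CRC-8 (polynomial 0x07, chosen for the example) computed in a loop
     * instead of a 256-byte lookup table - smaller flash footprint at
     * the cost of a few cycles per byte.  Fixed-width types throughout.
     */
    #include <stdint.h>
    #include <stdio.h>

    static uint8_t crc8(const uint8_t *p, uint32_t len)
    {
        uint8_t crc = 0;
        while (len--) {
            crc ^= *p++;                       /* fold in next byte      */
            for (int i = 0; i < 8; i++)        /* shift out 8 bits       */
                crc = (uint8_t)((crc & 0x80) ? (crc << 1) ^ 0x07
                                             : (crc << 1));
        }
        return crc;
    }

    int main(void)
    {
        static const uint8_t msg[] = "123456789";
        printf("%02X\n", crc8(msg, 9));
        return 0;
    }
    ```

    A decent compiler at -Os turns the inner loop into a handful of
    instructions; whether the table or the loop wins depends on how many
    bytes you hash per second, which is exactly the kind of judgement
    the post is talking about.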

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Bart@bc@freeuk.com to comp.lang.c on Mon Apr 6 12:31:20 2026
    From Newsgroup: comp.lang.c

    On 06/04/2026 11:21, David Brown wrote:
    On 05/04/2026 00:29, Lawrence D’Oliveiro wrote:
    On Tue, 17 Mar 2026 12:30:14 +0300, Oguz Kaan Ocal wrote:

    At 32KB, you want to spend your time fighting the memory
    constraints, not the toolchain.

    At 32kiB, I would write it all in assembler. Seems like it would be
    worth the trouble to hand-tune every instruction to get the size down
    and the speed up.

    No, it would not.  The time when that made sense is decades past.
    People use C (and even C++) for microcontrollers with a lot less
    than 32KB of flash.  You need a decent compiler, you need to know
    how to use it properly, and you need to know how to write code
    appropriate for such devices - but you do it in C, not assembly.


    I spent some years programming Z80 systems that had up to 64KB. While I
    was adept at writing decent assembly code, I still strived to create a
    HLL for it.

    Assembly was only used where necessary; implementing the HLL compiler
    for a start, then where there were bottlenecks, or working closely
    with hardware (and it was usually done from inline assembly within the HLL).

    Having limited memory was always going to be an issue on such systems; I
    don't remember that using a simple compiler made much difference.

    In any case, even if I'd used C at the time, the available compilers
    wouldn't have been that much better, in the size and speed of the code
    generated, but they'd've been considerably poorer in how they were
    deployed.

    (I'm agreeing with you BTW.)

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Mon Apr 6 14:52:24 2026
    From Newsgroup: comp.lang.c

    On Mon, 6 Apr 2026 12:31:20 +0100
    Bart <bc@freeuk.com> wrote:
    On 06/04/2026 11:21, David Brown wrote:
    On 05/04/2026 00:29, Lawrence D’Oliveiro wrote:
    On Tue, 17 Mar 2026 12:30:14 +0300, Oguz Kaan Ocal wrote:

    At 32KB, you want to spend your time fighting the memory
    constraints, not the toolchain.

    At 32kiB, I would write it all in assembler. Seems like it would be
    worth the trouble to hand-tune every instruction to get the size
    down and the speed up.

    No, it would not.  The time when that made sense is decades past.
    People use C (and even C++) for microcontrollers with a lot less
    than 32KB of flash.  You need a decent compiler, you need to know
    how to use it properly, and you need to know how to write code
    appropriate for such devices - but you do it in C, not assembly.


    I spent some years programming Z80 systems that had up to 64KB. While
    I was adept at writing decent assembly code, I still strived to
    create a HLL for it.

    Assembly was only used where necessary; implementing the HLL
    compiler for a start, then where there were bottlenecks, or working
    closely with hardware (and it was usually done from inline
    assembly within the HLL).

    Having limited memory was always going to be an issue on such systems; I
    don't remember that using a simple compiler made much difference.

    In any case, even if I'd used C at the time, the available compilers
    wouldn't have been that much better, in the size and speed of the
    code generated, but they'd've been considerably poorer in how they
    were deployed.

    (I'm agreeing with you BTW.)

    It sounds like you're talking about limitations of self-hosted C
    compilers.
    The rest of the thread is concerned with cross-development.
    Except for the post of Lawrence D’Oliveiro, for which I can't quite
    figure out what he had in mind.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Bart@bc@freeuk.com to comp.lang.c on Mon Apr 6 17:02:18 2026
    From Newsgroup: comp.lang.c

    On 06/04/2026 12:52, Michael S wrote:
    On Mon, 6 Apr 2026 12:31:20 +0100
    Bart <bc@freeuk.com> wrote:

    On 06/04/2026 11:21, David Brown wrote:
    On 05/04/2026 00:29, Lawrence D’Oliveiro wrote:
    On Tue, 17 Mar 2026 12:30:14 +0300, Oguz Kaan Ocal wrote:

    At 32KB, you want to spend your time fighting the memory
    constraints, not the toolchain.

    At 32kiB, I would write it all in assembler. Seems like it would be
    worth the trouble to hand-tune every instruction to get the size
    down and the speed up.

    No, it would not.  The time when that made sense is decades past.
    People use C (and even C++) for microcontrollers with a lot less
    than 32KB of flash.  You need a decent compiler, you need to know
    how to use it properly, and you need to know how to write code
    appropriate for such devices - but you do it in C, not assembly.


    I spent some years programming Z80 systems that had up to 64KB. While
    I was adept at writing decent assembly code, I still strived to
    create a HLL for it.

    Assembly was only used where necessary; implementing the HLL
    compiler for a start, then where there were bottlenecks, or working
    closely with hardware (and it was usually done from inline
    assembly within the HLL).

    Having limited memory was always going to be an issue on such systems; I
    don't remember that using a simple compiler made much difference.

    In any case, even if I'd used C at the time, the available compilers
    wouldn't have been that much better, in the size and speed of the
    code generated, but they'd've been considerably poorer in how they
    were deployed.

    (I'm agreeing with you BTW.)


    It sounds like you're talking about limitations of self-hosted C
    compilers.
    The rest of the thread is concerned with cross-development.
    Except for the post of Lawrence D’Oliveiro, for which I can't quite
    figure out what he had in mind.

    LD'O's suggestion was to use 100% assembly to get the best performance
    and code density, for systems with around 32KB of memory.

    My experience showed that even long ago, a HLL was preferable to
    using assembly. That is, at a time when compilers weren't so good at
    optimising, especially for small devices.

    That they also ran on the same hardware was another matter, but
    cross-compiling by running a compiler on a mini- or mainframe
    computer probably wasn't much better. However, that wasn't practical
    for our very small company anyway, for several reasons such as cost.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Mon Apr 6 16:52:28 2026
    From Newsgroup: comp.lang.c

    Lawrence D’Oliveiro <ldo@nz.invalid> wrote:
    On Tue, 17 Mar 2026 12:30:14 +0300, Oguz Kaan Ocal wrote:

    At 32KB, you want to spend your time fighting the memory
    constraints, not the toolchain.

    At 32kiB, I would write it all in assembler. Seems like it would be
    worth the trouble to hand-tune every instruction to get the size down
    and the speed up.

    In old times the usual trick was to compile to bytecode and use a
    bytecode interpreter. That could produce a much smaller program
    (6 times smaller was a reasonable estimate). With appropriate
    tuning, rewriting speed-critical parts in assembly, one could get
    quite decent speed.

    Now, this could still lead to savings, but smaller ones. Namely, a
    big part of the saving was due to the mismatch of the typical 8-bit
    instruction set with application needs. Modern 32-bit
    microcontrollers have a decent and compact instruction set, so the
    mismatch is limited. But there is still some possibility for savings.

    OTOH, while it is worth avoiding gross waste of resources, spending
    a lot of effort on making programs smaller is not worth it. So
    people just use C.
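    As a toy illustration of the trick - invented opcodes and encoding,
    not any particular historical system - a stack-based interpreter
    where each operation costs one byte (two with an immediate), versus
    the several bytes of native code many targets would need:

    ```c
    /* Toy stack-based bytecode interpreter.  The opcode set and
     * encoding are invented for illustration: one byte per operation,
     * plus a signed 8-bit immediate for OP_PUSH. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

    static int32_t run(const uint8_t *code)
    {
        int32_t stack[16];
        int sp = 0;                      /* next free slot */
        for (;;) {
            switch (*code++) {
            case OP_PUSH: stack[sp++] = (int8_t)*code++;      break;
            case OP_ADD:  sp--; stack[sp - 1] += stack[sp];   break;
            case OP_MUL:  sp--; stack[sp - 1] *= stack[sp];   break;
            case OP_HALT: return stack[sp - 1];
            }
        }
    }

    int main(void)
    {
        /* (2 + 3) * 4 in nine bytes of bytecode */
        static const uint8_t prog[] = {
            OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PUSH, 4, OP_MUL, OP_HALT
        };
        printf("%d\n", run(prog));       /* prints 20 */
        return 0;
    }
    ```

    The density comes from the encoding; the speed cost comes from the
    fetch-decode loop around every operation, which is exactly why the
    speed-critical parts got rewritten in assembly.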
    --
    Waldek Hebisch
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Mon Apr 6 16:57:03 2026
    From Newsgroup: comp.lang.c

    Bart <bc@freeuk.com> writes:
    On 06/04/2026 12:52, Michael S wrote:
    On Mon, 6 Apr 2026 12:31:20 +0100
    Bart <bc@freeuk.com> wrote:


    Assembly was only used where necessary; implementing the HLL
    compiler for a start, then where there were bottlenecks, or working
    closely with hardware (and it was usually done from inline
    assembly within the HLL).

    Having limited memory was always going to be an issue on such systems; I
    don't remember that using a simple compiler made much difference.

    In any case, even if I'd used C at the time, the available compilers
    wouldn't have been that much better, in the size and speed of the
    code generated, but they'd've been considerably poorer in how they
    were deployed.

    (I'm agreeing with you BTW.)


    It sounds like you're talking about limitations of self-hosted C
    compilers.
    The rest of the thread is concerned with cross-development.
    Except for the post of Lawrence D’Oliveiro, for which I can't quite
    figure out what he had in mind.

    LD'O's suggestion was to use 100% assembly to get the best performance
    and code density, for systems with around 32KB of memory.

    That hasn't been true for decades, if it ever was true. L d'o has
    a history of posting nonsense about computing history.


    My experience showed that even long ago, a HLL was preferable to
    using assembly. That is, at a time when compilers weren't so good at
    optimising, especially for small devices.

    Indeed, that was generally true even in the 1970s.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.lang.c on Tue Apr 7 00:17:11 2026
    From Newsgroup: comp.lang.c

    On Mon, 6 Apr 2026 16:52:28 -0000 (UTC), Waldek Hebisch wrote:

    Lawrence D’Oliveiro <ldo@nz.invalid> wrote:

    At 32kiB, I would write it all in assembler. Seems like it would be
    worth the trouble to hand-tune every instruction to get the size
    down and the speed up.

    In old times the usual trick was to compile to bytecode and use a
    bytecode interpreter.

    That trick still works for small code. But not necessarily for fast
    code.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Tue Apr 7 14:41:00 2026
    From Newsgroup: comp.lang.c

    On 06/04/2026 18:57, Scott Lurndal wrote:
    Bart <bc@freeuk.com> writes:
    On 06/04/2026 12:52, Michael S wrote:
    On Mon, 6 Apr 2026 12:31:20 +0100
    Bart <bc@freeuk.com> wrote:


    Assembly was only used where necessary; implementing the HLL
    compiler for a start, then where there were bottlenecks, or working
    closely with hardware (and it was usually done from inline
    assembly within the HLL).

    Having limited memory was always going to be an issue on such systems; I
    don't remember that using a simple compiler made much difference.

    In any case, even if I'd used C at the time, the available compilers
    wouldn't have been that much better, in the size and speed of the
    code generated, but they'd've been considerably poorer in how they
    were deployed.

    (I'm agreeing with you BTW.)


    It sounds like you're talking about limitations of self-hosted C
    compilers.
    The rest of the thread is concerned with cross-development.
    Except for the post of Lawrence D’Oliveiro, for which I can't quite
    figure out what he had in mind.

    LD'O's suggestion was to use 100% assembly to get the best performance
    and code density, for systems with around 32KB of memory.

    That hasn't been true for decades, if it ever was true. L d'o has
    a history of posting nonsense about computing history.


    My experience showed that even long ago, a HLL was preferable to using
    assembly. That is, at a time when compilers weren't so good at
    optimising, especially for small devices.

    Indeed, that was generally true even in the 1970s.

    To be fair, there are differences depending on the processor in
    question. For some of the older "brain-dead" 8-bit CISC
    microcontroller cores, C compilers often could not come close to the
    space or time efficiency of assembly - and those that were good were
    usually very expensive. And you typically had to program in "8051
    IAR C" rather than in "C" - your code was so specialised for the
    target compiler supplier and target processor that it was more like
    a C-assembly hybrid. HLLs were a lot better for better processors
    (including the Z80).

    But for a long time now, a "small microcontroller" has meant a
    32-bit ARM processor with limited flash and RAM, for which C works
    extremely well. The only 8-bit microcontrollers with any market
    significance outside of updates of old systems are AVR devices, and
    they are also good with C.



    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Tue Apr 7 15:46:31 2026
    From Newsgroup: comp.lang.c

    On Tue, 7 Apr 2026 14:41:00 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    On 06/04/2026 18:57, Scott Lurndal wrote:
    Bart <bc@freeuk.com> writes:
    On 06/04/2026 12:52, Michael S wrote:
    On Mon, 6 Apr 2026 12:31:20 +0100
    Bart <bc@freeuk.com> wrote:


    Assembly was only used where necessary; implementing the HLL
    compiler for a start, then where there were bottlenecks, or
    working closely with hardware (and it was usually done from
    inline assembly within the HLL).

    Having limited memory was always going to be an issue on such
    systems; I don't remember that using a simple compiler made much
    difference.

    In any case, even if I'd used C at the time, the available
    compilers wouldn't have been that much better, in the size and
    speed of the code generated, but they'd've been considerably
    poorer in how they were deployed.

    (I'm agreeing with you BTW.)


    It sounds like you're talking about limitations of self-hosted C
    compilers.
    The rest of the thread is concerned with cross-development.
    Except for the post of Lawrence D’Oliveiro, for which I can't quite
    figure out what he had in mind.

    LD'O's suggestion was to use 100% assembly to get the best
    performance and code density, for systems with around 32KB of
    memory.

    That hasn't been true for decades, if it ever was true. L d'o has
    a history of posting nonsense about computing history.


    My experience showed that even long ago, a HLL was preferable to
    using assembly. That is, at a time when compilers weren't so good
    at optimising, especially for small devices.

    Indeed, that was generally true even in the 1970s.

    To be fair, there are differences depending on the processor in
    question. For some of the older "brain-dead" 8-bit CISC
    microcontroller cores, C compilers often could not come close to the
    space or time efficiency of assembly - and those that were good were
    usually very expensive. And you typically had to program in "8051
    IAR C" rather than in "C" - your code was so specialised for the
    target compiler supplier and target processor that it was more like
    a C-assembly hybrid. HLLs were a lot better for better processors
    (including the Z80).

    The 8051 is a MUCH cleaner architecture than the 8080/Z80.

    But for a long time now, a "small microcontroller" has meant a
    32-bit ARM processor with limited flash and RAM, for which C works
    extremely well. The only 8-bit microcontrollers with any market
    significance outside of updates of old systems are AVR devices, and
    they are also good with C.



    The only 8-bit microcontrollers with any market significance are PIC.
    AVR may have a hobbyist following, but no market share.
    --- Synchronet 3.21f-Linux NewsLink 1.2