• New and improved version of cdecl

    From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c,comp.lang.c++ on Wed Oct 22 14:39:43 2025
    From Newsgroup: comp.lang.c

    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.

    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa. For example:

    $ cdecl
    Type `help' or `?' for help
    cdecl> explain const char *foo[42]
    declare foo as array 42 of pointer to const char
    cdecl> declare bar as pointer to function (void) returning int
    int (*bar)(void )

    It's also available via the web site <https://cdecl.org/>.
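
    For reference (an illustration added here, not part of the original
    post), those two example declarations as they would appear in C source:

        const char *foo[42];    /* array 42 of pointer to const char        */
        int (*bar)(void);       /* pointer to function (void) returning int */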

    The original version of cdecl was posted on comp.sources.unix, probably
    in the 1980s.

    "ridiculous_fish" added support for Apple's blocks syntax. That
    version, 2.5, is the one that's most widely available (it's provided by
    the "cdecl" package on Debian and Ubuntu) and used by the cdecl.org
    website.
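
    (As an added illustration, not from the original post: Apple's blocks
    use a caret where a function pointer would use a star, so a declaration
    like the one below, which needs Clang with -fblocks to compile, is the
    sort of thing that version can explain; the name "square" is made up.)

        int (^square)(int) = ^(int x) { return x * x; };  /* a block taking int, returning int */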

    There's a newer fork of cdecl, available in source code at

    https://github.com/paul-j-lucas/cdecl/

    It supports newer versions of C and C++ and adds a number of
    new features. See the README.md file, visible at the above URL,
    for more information. (It doesn't support Apple's block syntax.)

    There doesn't seem to be a binary distribution, but the latest
    source tarball is at

    https://github.com/paul-j-lucas/cdecl/releases/download/cdecl-18.5/cdecl-18.5.tar.gz

    It can be built on Unix-like systems with the usual "./configure;
    make; make install" sequence. To build from a copy of the git repo,
    run "./bootstrap" first to generate the "configure" script.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Thiago Adams@thiago.adams@gmail.com to comp.lang.c,comp.lang.c++ on Wed Oct 22 22:19:24 2025
    From Newsgroup: comp.lang.c

    On 22/10/2025 18:39, Keith Thompson wrote:
    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.

    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa. For example :

    $ cdecl
    Type `help' or `?' for help
    cdecl> explain const char *foo[42]
    declare foo as array 42 of pointer to const char
    cdecl> declare bar as pointer to function (void) returning int
    int (*bar)(void )

    It's also available via the web site <https://cdecl.org/>.

    This one does not work:

    void (*f(int i))(void)
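
    (That declares f as a function taking an int and returning a pointer to
    a function taking no arguments and returning void. A minimal compilable
    sketch, with a made-up helper "done", shows it in use:)

        #include <stdio.h>

        static void done(void) { puts("done"); }

        /* f: function (int) returning pointer to function (void) returning void */
        void (*f(int i))(void)
        {
            (void)i;            /* the parameter is unused in this sketch */
            return done;
        }

        int main(void)
        {
            f(42)();            /* calls done() */
            return 0;
        }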



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ben Bacarisse@ben@bsb.me.uk to comp.lang.c on Thu Oct 23 02:42:43 2025
    From Newsgroup: comp.lang.c

    Thiago Adams <thiago.adams@gmail.com> writes:

    On 22/10/2025 18:39, Keith Thompson wrote:
    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.
    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa. For example :
    $ cdecl
    Type `help' or `?' for help
    cdecl> explain const char *foo[42]
    declare foo as array 42 of pointer to const char
    cdecl> declare bar as pointer to function (void) returning int
    int (*bar)(void )
    It's also available via the web site <https://cdecl.org/>.

    This one does not work:

    void (*f(int i))(void)

    Right. But the new version Keith was posting about does work with that declaration.
    --
    Ben.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Thu Oct 23 03:04:00 2025
    From Newsgroup: comp.lang.c

    On 2025-10-23, Ben Bacarisse <ben@bsb.me.uk> wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:

    On 22/10/2025 18:39, Keith Thompson wrote:
    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.
    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa. For example :
    $ cdecl
    Type `help' or `?' for help
    cdecl> explain const char *foo[42]
    declare foo as array 42 of pointer to const char
    cdecl> declare bar as pointer to function (void) returning int
    int (*bar)(void )
    It's also available via the web site <https://cdecl.org/>.

    This one does not work:

    void (*f(int i))(void)

    Right. But the new version Keith was posting about does work with that declaration.

    A cdecl in Ubuntu, accompanied by a 1996-dated man page, handles this:

    cdecl> explain void (*signal(int, void (*)(int)))(int);
    declare signal as function (int, pointer to function (int) returning void)
    returning pointer to function (int) returning void

    But chokes if we add parameter names to the function being declared:

    cdecl> explain void (*signal(int sig, void (*)(int)))(int);
    syntax error
    cdecl> explain void (*signal(int, void (*handler)(int)))(int);
    syntax error

    Or to the function pointer being passed in:

    cdecl> explain void (*signal(int, void (*)(int sig)))(int);
    syntax error

    Or to the one being returned:

    cdecl> explain void (*signal(int, void (*)(int)))(int sig);
    syntax error

    I'm astonished that every cdecl out there would not have cases
    covering this: a function with a function pointer parameter,
    returning a function pointer, with and without parameter names.
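
    (An aside added for illustration, not taken from the post: in ordinary
    C code the usual way to keep this declaration readable is a typedef
    for the handler's function type. "handler_fn" is a made-up name; the
    second declaration below is equivalent to the spelled-out one above.)

        typedef void handler_fn(int);   /* function type: takes an int, returns void */

        /* equivalent to: void (*signal(int sig, void (*handler)(int)))(int); */
        handler_fn *signal(int sig, handler_fn *handler);
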
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Thu Oct 23 11:36:35 2025
    From Newsgroup: comp.lang.c

    On 23/10/2025 02:19, Thiago Adams wrote:
    On 22/10/2025 18:39, Keith Thompson wrote:
    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.

    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa.  For example :

         $ cdecl
         Type `help' or `?' for help
         cdecl> explain const char *foo[42]
         declare foo as array 42 of pointer to const char
         cdecl> declare bar as pointer to function (void) returning int
         int (*bar)(void )

    It's also available via the web site <https://cdecl.org/>.

    This one does not work:

    void (*f(int i))(void)

    KT said the newer version is only available by building from source
    code, which must be done under some Linux-compatible system.

    I've had a look: it comprises 32Kloc of configure script, and 68Kloc of
    C sources, so 100Kloc just to decode declarations! (A bit longer than
    the 2-page version in K&R2.)

    (There's a further 30Kloc of what looks like C library code. So is this
    a complete C compiler, or does it still only do declarations?)

    Regarding your example, my old C compiler (which is a fraction the size
    of this new Cdecl) 'explains' it as:

    'ref proc(int)ref proc()void'

    (Not quite English, more Algol68-ish.)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Thiago Adams@thiago.adams@gmail.com to comp.lang.c on Thu Oct 23 07:59:29 2025
    From Newsgroup: comp.lang.c

    On 10/23/2025 7:36 AM, bart wrote:
    On 23/10/2025 02:19, Thiago Adams wrote:
    On 22/10/2025 18:39, Keith Thompson wrote:
    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.

    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa.  For example :

         $ cdecl
         Type `help' or `?' for help
         cdecl> explain const char *foo[42]
         declare foo as array 42 of pointer to const char
         cdecl> declare bar as pointer to function (void) returning int
         int (*bar)(void )

    It's also available via the web site <https://cdecl.org/>.

    This one does not work:

    void (*f(int i))(void)

    KT said the newer version is only available by building from source
    code, which must be done under some Linux-compatible system.

    I've had a look: it comprises 32Kloc of configure script, and 68Kloc of
    C sources, so 100Kloc just to decode declarations! (A bit longer than
    the 2-page version in K&R2.)

    (There's a further 30Kloc of what looks like C library code. So is this
    a complete C compiler, or does it still only do declarations?)

    Regarding your example, my old C compiler (which is a fraction the size
    of this new Cdecl) 'explains' it as:

      'ref proc(int)ref proc()void'

    (Not quite English, more Algol68-ish.)


    This algorithm can be done in roughly 300 lines; it is presented in
    The C Programming Language, 2nd edition.
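
    (To give a flavour of that approach, here is a much-reduced sketch in
    the spirit of the K&R2 "dcl" program from section 5.12. It is not the
    book's code: it handles only '*', '[]' and '()', ignores whatever is
    inside the parameter parentheses, and omits qualifiers, base-type
    parsing and error handling.)

        #include <stdio.h>
        #include <ctype.h>
        #include <string.h>

        static const char *p;      /* cursor into the declarator text */
        static char name[64];      /* the declared identifier         */
        static char out[512];      /* the English translation so far  */

        static void dcl(void);
        static void skipws(void) { while (isspace((unsigned char)*p)) p++; }

        static void dirdcl(void)   /* direct declarator */
        {
            skipws();
            if (*p == '(') {                      /* parenthesised declarator */
                p++;
                dcl();
                skipws();
                if (*p == ')') p++;
            } else if (isalpha((unsigned char)*p) || *p == '_') {
                size_t n = 0;
                while (isalnum((unsigned char)*p) || *p == '_') {
                    if (n < sizeof name - 1) name[n++] = *p;
                    p++;
                }
                name[n] = '\0';
            }
            for (;;) {                            /* () and [] bind tightest */
                skipws();
                if (*p == '(') {                  /* function; parameters ignored */
                    while (*p && *p != ')') p++;
                    if (*p == ')') p++;
                    strcat(out, " function returning");
                } else if (*p == '[') {           /* array, with optional size */
                    strcat(out, " array ");
                    p++;
                    while (*p && *p != ']') { strncat(out, p, 1); p++; }
                    if (*p == ']') p++;
                    strcat(out, " of");
                } else
                    break;
            }
        }

        static void dcl(void)      /* zero or more '*', then a direct declarator */
        {
            int stars = 0;
            skipws();
            while (*p == '*') { stars++; p++; skipws(); }
            dirdcl();
            while (stars-- > 0)
                strcat(out, " pointer to");
        }

        int main(void)
        {
            p = "(*f(int i))(void)";   /* the declarator discussed above */
            out[0] = '\0';
            dcl();
            /* prints: f: function returning pointer to function returning void */
            printf("%s:%s void\n", name, out);
            return 0;
        }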

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ben Bacarisse@ben@bsb.me.uk to comp.lang.c on Thu Oct 23 15:05:05 2025
    From Newsgroup: comp.lang.c

    Kaz Kylheku <643-408-1753@kylheku.com> writes:

    On 2025-10-23, Ben Bacarisse <ben@bsb.me.uk> wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:

    On 22/10/2025 18:39, Keith Thompson wrote:
    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.
    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa. For example :
    $ cdecl
    Type `help' or `?' for help
    cdecl> explain const char *foo[42]
    declare foo as array 42 of pointer to const char
    cdecl> declare bar as pointer to function (void) returning int
    int (*bar)(void )
    It's also available via the web site <https://cdecl.org/>.

    This one does not work:

    void (*f(int i))(void)

    Right. But the new version Keith was posting about does work with that
    declaration.

    A cdecl in Unbuntu, accompanied by a 1996 dated man page, handles this:

    cdecl> explain void (*signal(int, void (*)(int)))(int);
    declare signal as function (int, pointer to function (int) returning void)
    returning pointer to function (int) returning void

    But chokes if we add parameter names to the function being declared:

    cdecl> explain void (*signal(int sig, void (*)(int)))(int);
    syntax error
    cdecl> explain void (*signal(int, void (*handler)(int)))(int);
    syntax error

    Thanks. Yes, it seems it's having "int i" rather than "int" that causes
    the issue.

    Or to the function pointer being passed in:

    cdecl> explain void (*signal(int, void (*)(int sig)))(int);
    syntax error

    Or to the one being returned:

    cdecl> explain void (*signal(int, void (*)(int)))(int sig);
    syntax error

    I'm astonished that every cdecl out there would not have cases
    covering this: function with a function pointer param, returning
    a function pointer param, with and without param names.

    Indeed. Although declarations might often omit the parameter names, if
    you copy and paste from a function definition the names will be there!
    --
    Ben.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Thu Oct 23 16:04:18 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 23/10/2025 02:19, Thiago Adams wrote:
    On 22/10/2025 18:39, Keith Thompson wrote:
    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.

    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa.  For example :

         $ cdecl
         Type `help' or `?' for help
         cdecl> explain const char *foo[42]
         declare foo as array 42 of pointer to const char
         cdecl> declare bar as pointer to function (void) returning int
         int (*bar)(void )

    It's also available via the web site <https://cdecl.org/>.
    This one does not work:
    void (*f(int i))(void)

    KT said the newer version is only available by building from source
    code, which must be done under some Linux-compatible system.

    As far as I know, it should build on just about any Unix-like system,
    not just ones that happen to use the Linux kernel. Perhaps that's what
    you mean by "Linux-compatible"? If so, I suggest "Unix-like" would be
    clearer. (I'm building it under Cygwin as I write this.)

    I've had a look: it comprises 32Kloc of configure script, and 68Kloc
    of C sources, so 100Kloc just to decode declarations! (A bit longer
    than the 2-page version in K&R2.)

    Yes, and neither you nor I had to write any of it. I cloned the repo,
    ran one command (my wrapper script for builds like this), and it works.

    I wonder how many lines of code are required for the specification of
    the x86_64 CPU in the computer I'm using to write this. But really,
    it doesn't matter to me, since that work has been done, and all I
    have to do is use it.

    The configure script is automatically generated (I mentioned the
    "bootstrap" script that generates it if you build from the git repo).

    I suppose building it under Windows (without some Unix-like layer
    like MinGW or Cygwin) would be more difficult. That's true of
    a lot of tools that are primarily used on Unix-like systems.
    It's likely that the author of the code doesn't care about Windows.

    I agree that it can be a problem that a lot of code developed for
    Unix-like systems is difficult to build on Windows. For a lot
    of users, an emulation layer like Cygwin, MinGW, or WSL is a good
    enough solution. If it isn't for you, perhaps you could help solve
    the problem. Perhaps the GNU autotools could be updated with better
    Windows support. I wouldn't know how to do that; perhaps you would.

    "Don't use autotools" is not a good solution, since there are so
    many software packages that depend on it, often maintained by people
    who don't care about Windows.

    (There's a further 30Kloc of what looks like C library code. So is
    this a complete C compiler, or does it still only do declarations?)

    I haven't looked at the source code (I haven't needed to), but the man
    page indicates that this version of cdecl recognizes a number of types
    defined in the standard library, such as FILE, clock_t, and std::partial_ordering (remember that it includes C++ support).

    I don't know whether all this could be done in fewer lines of code, and
    frankly I don't much care. The tool works and is useful, and I didn't
    have to write it.

    Have you tried using it? I'm sure you have some system where you could
    build it.

    Regarding your example, my old C compiler (which is a fraction the
    size of this new Cdecl) 'explains' it as:

    'ref proc(int)ref proc()void'

    (Not quite English, more Algol68-ish.)

    Can I run your old C compiler on my Ubuntu system?
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 24 01:44:45 2025
    From Newsgroup: comp.lang.c

    On 24/10/2025 00:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 23/10/2025 02:19, Thiago Adams wrote:
    On 22/10/2025 18:39, Keith Thompson wrote:
    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.

    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa.  For example :

         $ cdecl
         Type `help' or `?' for help
         cdecl> explain const char *foo[42]
         declare foo as array 42 of pointer to const char
         cdecl> declare bar as pointer to function (void) returning int
         int (*bar)(void )

    It's also available via the web site <https://cdecl.org/>.
    This one does not work:
    void (*f(int i))(void)

    KT said the newer version is only available by building from source
    code, which must be done under some Linux-compatible system.

    As far as I know, it should build on just about any Unix-like system,
    not just ones that happen to use the Linux kernel. Perhaps that's what
    you mean by "Linux-compatible"? If so, I suggest "Unix-like" would be clearer. (I'm building it under Cygwin as I write this.)

    I've had a look: it comprises 32Kloc of configure script, and 68Kloc
    of C sources, so 100Kloc just to decode declarations! (A bit longer
    than the 2-page version in K&R2.)

    Yes, and neither you nor I had to write any of it. I cloned the repo,
    ran one command (my wrapper script for builds like this), and it works.

    I wonder how many lines of code are required for the specification of
    the x86_64 CPU in the computer I'm using to write this. But really,
    it doesn't matter to me, since that work has been done, and all I
    have to do is use it.

    The configure script is automatically generated (I mentioned the
    "bootstrap" script that generates it if you build from the git repo).

    I suppose building it under Windows (without some Unix-like layer
    like MinGW or Cygwin) would be more difficult. That's true of
    a lot of tools that are primarily used on Unix-like systems.
    It's likely that the author of the code doesn't care about Windows.

    I agree that it can be a problem that a lot of code developed for
    Unix-like systems is difficult to build on Windows. For a lot
    of users, an emulation layer like Cygwin, MinGW, or WSL is a good
    enough solution. If it isn't for you, perhaps you could help solve
    the problem. Perhaps the GNU autotools could be updated with better
    Windows support. I wouldn't know how to do that; perhaps you would.

    "Don't use autotools" is not a good solution, since there are so
    many software packages that depend on it, often maintained by people
    who don't care about Windows.

    (There's a further 30Kloc of what looks like C library code. So is
    this a complete C compiler, or does it still only do declarations?)

    I haven't looked at the source code (I haven't needed to), but the man
    page indicates that this version of cdecl recognizes a number of types defined in the standard library, such as FILE, clock_t, and std::partial_ordering (remember that it includes C++ support).

    I don't know whether all this could be done in fewer lines of code, and frankly I don't much care. The tool works and is useful, and I didn't
    have to write it.

    Have you tried using it? I'm sure you have some system where you could
    build it.

    Regarding your example, my old C compiler (which is a fraction the
    size of this new Cdecl) 'explains' it as:

    'ref proc(int)ref proc()void'

    (Not quite English, more Algol68-ish.)

    Can I run your old C compiler on my Ubuntu system?


    The old one needed a tweak to bring it up-to-date for my newer C
    transpiler. So it was easier to port the feature to the newer product.

    Download https://github.com/sal55/langs/blob/master/ccu.c

    (Note: 86Kloc/2MB file; this is poor quality, linear C generated from intermediate language.)

    Build instructions are at the top. Although this targets Win64, it works enough to demonstrate the feature above. Create this C file (say test.c):

    int main(void) {
        void (*f(int i))(void);
        $showmode f;
    }

    Run as follows (if built as 'ccu'):

    ./ccu -s test

    It will display the type during compilation.

    Obviously this is not a dedicated product (and doing the reverse needs a separate program), but I only needed to add about 10 lines of code to
    support '$showmode'.

    Original source, omitting the unneeded output options, would be 2/3 the
    size of that configure script.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Thu Oct 23 19:00:29 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 24/10/2025 00:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]

    I note that you've ignored the vast majority of my previous article.

    Regarding your example, my old C compiler (which is a fraction the
    size of this new Cdecl) 'explains' it as:

    'ref proc(int)ref proc()void'

    (Not quite English, more Algol68-ish.)
    Can I run your old C compiler on my Ubuntu system?


    The old one needed a tweak to bring it up-to-date for my newer C
    transpiler. So it was easier to port the feature to the newer product.

    Download https://github.com/sal55/langs/blob/master/ccu.c

    (Note: 86Kloc/2MB file; this is poor quality, linear C generated from intermediate language.)

    Build instructions are at the top. Although this targets Win64, it
    works enough to demonstrate the feature above. Create this C file (say test.c):

    int main(void) {
    void (*f(int i))(void);
    $showmode f;
    }

    Run as follows (if built as 'ccu'):

    ./ccu -s test

    It will display the type during compilation.

    Obviously this is not a dedicated product (and doing the reverse needs
    a separate program), but I only needed to add about 10 lines of code
    to support '$showmode'.

    Original source, omitting the unneeded output options, would be 2/3
    the size of that configure script.

    OK, I was able to compile and run your ccu.c, and at least on this
    example it works as you've described it. It looks interesting,
    but I personally don't find it particularly useful, given that I
    already have cdecl, I prefer its syntax, and it's easier to use
    (and I almost literally could not care less about the number of
    lines of code needed to implement cdecl).
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 24 14:27:40 2025
    From Newsgroup: comp.lang.c

    On 24/10/2025 03:00, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 24/10/2025 00:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]

    I note that you've ignored the vast majority of my previous article.

    I've noted it, but chose not to reply. You have a point of view and
    attitude which I don't share.

    Mainly that you don't care how complicated a program for even a simple
    task is, and how laborious and OS-dependent its build process is, so
    long as it (eventually) works.

    That it favours your own OS, leaving users of others to jump
    through extra hoops, doesn't appear to bother you.

    The actual task in this case must be one of the most OS-agnostic tasks
    there is: read some text, write some text.

    (The ccu.c file I posted was configured for Linux, but an OS-agnostic
    version is possible too. The file mcc.c on the same site will build on
    both Windows and Linux, but it lacks the '$showmode' feature.)


    Regarding your example, my old C compiler (which is a fraction the
    size of this new Cdecl) 'explains' it as:

    'ref proc(int)ref proc()void'

    (Not quite English, more Algol68-ish.)
    Can I run your old C compiler on my Ubuntu system?


    The old one needed a tweak to bring it up-to-date for my newer C
    transpiler. So it was easier to port the feature to the newer product.

    Download https://github.com/sal55/langs/blob/master/ccu.c

    (Note: 86Kloc/2MB file; this is poor quality, linear C generated from
    intermediate language.)

    Build instructions are at the top. Although this targets Win64, it
    works enough to demonstrate the feature above. Create this C file (say
    test.c):

    int main(void) {
    void (*f(int i))(void);
    $showmode f;
    }

    Run as follows (if built as 'ccu'):

    ./ccu -s test

    It will display the type during compilation.

    Obviously this is not a dedicated product (and doing the reverse needs
    a separate program), but I only needed to add about 10 lines of code
    to support '$showmode'.

    Original source, omitting the unneeded output options, would be 2/3
    the size of that configure script.

    OK, I was able to compile and run your ccu.c, and at least on this
    example it works as you've described it. It looks interesting,
    but I personally don't find it particularly useful, given that I
    already have cdecl, I prefer its syntax, and it's easier to use
    (and I almost literally could not care less about the number of
    lines of code needed to implement cdecl).


    Well I built cdecl too, under WSL. Jesus, that looked like a lot of work!

    However, it took me a while to find where it put the executable, as the
    make process doesn't directly tell you that. It seems it puts it inside
    the src directory, which is unusual. It further appears that you have to
    do 'make install' to be able to run it without a path.

    (Yes, I did glance at the readme, but it is a .md file which I didn't
    notice, and in plain text it looked unreadable.)

    When I did run it, while it had a fair number of options, it didn't
    appear to do much beyond converting C declarations to and from an
    English description.

    That program is 2.8 MB (10 times the size of my C compiler).

    I guess you don't care about that either. But surely, you must be
    curious about WHY it is so big? You must surely know, with your decades
    of experience, that this is 100 times bigger than necessary for such a task?


    I decided to make my own mini-cdecl. It took 20 minutes and works like this:

    c:\cx>qq cdecl
    Mycdecl> explain void (*f(int i))(void);
    f = proc(i32)ref proc()void
    Mycdecl> q

    It relies on a new feature of my C compiler, an extra linkage kind to go
    with 'typedef' and 'static', called '$showtype'. When used, it displays
    the type of the name being declared, and then stops compilation.

    That allows me to invoke it from the script shown below. It doesn't do 'declare' though.


    -----------------------------------------
    do
        print "Mycdecl> "
        readln cmd:"n", rest:"l"

        case cmd
        when "explain" then
            explaintype(rest)
        when "x", "q", "exit" then
            stop
        else
            println "?"
        esac
    od

    proc explaintype(ctype)=
        writestrfile("$temp.c", "$showtype "+ctype+";")
        if system("cc $temp.c >result") = 0 then
            println readtextfile("result")[2]
        else
            println "Error"
        fi
    end







    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Fri Oct 24 19:35:02 2025
    From Newsgroup: comp.lang.c

    On 24/10/2025 15:27, bart wrote:
    On 24/10/2025 03:00, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 24/10/2025 00:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]

    I note that you've ignored the vast majority of my previous article.

    I've noted it, but chose not to reply. You have a point of view and
    attitude which I don't share.

    Mainly that you don't care how complicated a program for even a simple
    task is, and how laborious and OS-dependent its build process is, so
    long as it (eventually) works.

    That it favours your own OS, leaving users of other to have to jump
    through extra hoops, doesn't appear to bother you.


    Why would someone care how someone else writes their code, or what
    it does, or what systems it runs on? The guy who wrote cdecl gets to
    choose exactly how he wants to write it, and what systems it supports.
    We others get it for free - we can use it if we like and if it suits
    our needs. But neither Keith nor anyone else paid that guy to do the
    work, or contributed anything to the task, and we have no right to judge
    what he chooses to do, or how he chooses to do it.



    Well I built cdecl too, under WSL. Jesus, that looked a lot of work!

    I have no experience with WSL, so I can't comment on the effort there.
    For my own use on a Linux system, I had to install a package (apt-get
    install libreadline-dev), but that's neither difficult nor time-consuming, and it was not hard to see what was needed. Of course,
    a non-programmer might not have realised that was needed, but if you are stumped on a configure script error "readline.h header not found, use --without-readline" and can't figure out how to get "readline.h" or
    configure the program to avoid using it, and can't at least google for
    help, then you are probably not the target audience for cdecl.


    However, it took me a while to find where it put the executable, as the
    make process doesn't directly tell you that. It seems it puts it inside
    the src directory, which is unusual. It further appears that you have to
    do 'make install' to be able to run it without a path.


    I agree that putting the executable in "src" is a little odd. But
    running "make install" is hardly unusual - it is as standard as it gets.
    (And of course there are a dozen other different ways you can arrange
    to run the programs without a path if you don't like "make install".)

    (Yes, I did glance at the readme, but it is a .md file which I didn't notice, and in plain text it looked unreadable.)


    I agree that this README.md file is unusually clumsy when viewed as
    plain text - part of the point of Markdown is that it can be easily read
    and written as plain text. But github shows the readme in nicely
    rendered html, so it's hardly a big issue.

    When I did run it, then while it had a fair number of options, it didn't appear to do much beyond converting C declarations to and from an
    English description.


    It seems to be quite a flexible program, with a number of options.

    That program is 2.8 MB (10 times the size of my C compiler).

    First, as usual, nobody cares about a couple of megabytes. Secondly, if
    you /do/ care, then you might do at least a /tiny/ bit of investigation.
    First, run "strip" on it to remove debugging symbols - now it is a bit
    over 600 KB. By running "strings" on it, I can see that about 100 KB is strings - messages, rules, types, keywords, etc.

    Considering that it supports C from the dark ages up to C23, C++ up to
    C++26 (with each standard along the way), lots of extensions - including
    MS stuff - as well as macro expansion tracing, I don't think the program sounds at all excessive in size.


    I guess you don't care about that either. But surely, you must be
    curious about WHY it is so big? You must surely know, with your decades
    of experience, that this is 100 times bigger than necessary for such a
    task?

    Were you not curious, or did you just pull random sizes out of thin air
    as an excuse to complain again about any program written by anyone else
    but you?

    It is entirely possible that your little alternative is a useful program
    and does what you personally want and need with a smaller executable.
    But cdecl does a great deal more, doing things that other people need
    and want (like handling C++ declarations - surely 100 times more effort
    than handling C declarations, especially the limited older standards you
    use).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 24 19:50:13 2025
    From Newsgroup: comp.lang.c

    On 24/10/2025 18:35, David Brown wrote:
    On 24/10/2025 15:27, bart wrote:
    On 24/10/2025 03:00, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 24/10/2025 00:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]

    I note that you've ignored the vast majority of my previous article.

    I've noted it, but chose not to reply. You have a point of view and
    attitude which I don't share.

    Mainly that you don't care how complicated a program for even a simple
    task is, and how laborious and OS-dependent its build process is, so
    long as it (eventually) works.

    That it favours your own OS, leaving users of other to have to jump
    through extra hoops, doesn't appear to bother you.


    Why would someone care what how someone else writes their code, or what
    it does, or what systems it runs on?  They guy who wrote cdecl gets to choose exactly how he wants to write it, and what systems it supports.
    We others get it for free - we can use it if we like and it if it suits
    our needs.  But neither Keith nor anyone else paid that guy to do the
    work, or contributed anything to the task, and we have no right to judge what he choose to do, or how he choose to do it.

    This is a curious argument: it's free software so you don't care in the
    slightest how efficient it is or how user-friendly it might be to build?

    This is a program that reads lines of text from the terminal and
    translates them into another line of text. THAT needs thirty thousand
    lines of configure script?! And that's even before you start compiling
    the program itself.

    I'm thinking of making available some software that does even less, but
    wrapping so many extra and POINTLESS levels of complexity around it that you'd need
    to lease time on a super-computer to build it. But the software is free
    so that makes it alright?




    Well I built cdecl too, under WSL. Jesus, that looked a lot of work!

    I have no experience with WSL, so I can't comment on the effort there.

    I was talking about all the stuff scrolling endlessly up the screen
    for a minute and a half while running the configure script and then
    compiling the modules.

    That program is 2.8 MB (10 times the size of my C compiler).

    First, as usual, nobody cares about a couple of megabytes.  Secondly, if you /do/ care, then you might do at least a /tiny/ bit of investigation.
     First, run "strip" on it to remove debugging symbols - now it is a bit over 600 KB.  By running "strings" on it, I can see that about 100 KB is strings - messages, rules, types, keywords, etc.

    If I was directly building it myself, then I would use -s with gcc. But
    since the process is automatic via makefiles, I assumed it would give me
    a working, production version, not a version needing to be debugged!

    I guess you don't care about that either. But surely, you must be
    curious about WHY it is so big? You must surely know, with your
    decades of experience, that this is 100 times bigger than necessary
    for such a task?

    Were you not curious, or did you just pull random sizes out of thin air
    as an excuse to complain again about any program written by anyone else
    but you?

    I have a version of Algol68 Genie which is nearly 4MB on Windows.
    Apparently the Linux version is 1-2MB only. You may recall this coming
    up on comp.lang.c.

    The point is, this is an implementation of an entire language, not just printing some type info. And maybe it includes debugging info, and the
    true size is even smaller; who knows?

    While Tiny C, even if you don't care for its abilities, still HAS to be
    able to decode full C99 type declarations. So understanding such types
    has to be accomplished within its 200KB size.

    It is entirely possible that your little alternative is a useful program
    and does what you personally want and need with a smaller executable.
    But cdecl does a great deal more,

    CDECL translates a single C type specification into linear LTR form. Or
    vice versa. That's what nearly everyone needs it for, and why it exists.
    Why, what other stuff does it do?

    So, yes, anyone with an inquiring mind can form an idea of how much code
    might be needed for such a task, and how it ought to compare with
    complete language implementations.

    In fact, people have posted algorithms here for doing exactly the same.
    I don't recall that they took tens of thousands of lines to describe.

    doing things that other people need
    and want (like handling C++ declarations - surely 100 times more effort
    than handling C declarations, especially the limited older standards you use).

    So, it's a hundred times bigger than necessary due to C++. That explains
    that then. (Sorry, 20 times bigger because whoever provided the build
    system decided it should include debug info to make it 5 times as big
    for no reason.)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Fri Oct 24 18:59:33 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 24/10/2025 18:35, David Brown wrote:
    On 24/10/2025 15:27, bart wrote:
    On 24/10/2025 03:00, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 24/10/2025 00:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]

    I note that you've ignored the vast majority of my previous article.

    I've noted it, but chose not to reply. You have a point of view and
    attitude which I don't share.

    Mainly that you don't care how complicated a program for even a simple
    task is, and how laborious and OS-dependent its build process is, so
    long as it (eventually) works.

    That it favours your own OS, leaving users of other to have to jump
    through extra hoops, doesn't appear to bother you.


    Why would someone care what how someone else writes their code, or what
    it does, or what systems it runs on?


    This is a curious argument: it's free software so you don't care in the
    slightest how efficient it is or how user-friendly it might be to build?

    Your antique ideas of "efficient" rely on unsuitable metrics like
    executable size (which you apparently don't understand sufficiently
    to realize that the bulk of the content of the executable is optional
    and can easily be removed, if you are so tight on disk space that a
    couple of hundred kilobytes matters).

    $ man strip

    The strip command has been part of unix for a half century.

    But instead of accepting David's opinion as just that, David's
    opinion, you continue to criticize his opinion, the C language,
    the build system (make, autotools),
    and by proxy, the universe of developers and users that rely on
    that particular build system.


    This is a program that reads lines of text from the terminal and
    translates them into another line of text. THAT needs thirty thousand
    lines of configure script?! And that's even before you start compiling
    the program itself.

    You're repeating yourself. Your outrage won't change anyone's mind.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Fri Oct 24 21:36:50 2025
    From Newsgroup: comp.lang.c

    On 24.10.2025 19:35, David Brown wrote:
    On 24/10/2025 15:27, bart wrote:

    However, it took me a while to find where it put the executable, as
    the make process doesn't directly tell you that. It seems it puts it
    inside the src directory, which is unusual.

    Unusual for whom?

    It further appears that
    you have to do 'make install' to be able to run it without a path.

    Yes, sure; if you want to use an executable as a regular program
    you have to install it in the (or in some) appropriate place.

    I'd consider it a horror if compiling a source overwrote (or
    shadowed) any previously installed version; even more so on
    a multi-user system!


    I agree that putting the executable in "src" is a little odd. But
    running "make install" is hardly unusual - it is as standard as it gets.
    (And of course there are a dozen other different ways you can arrange
    to run the programs without a path if you don't like "make install".)

    It is quite common that the executable file of a software package
    is created in the directory where the sources reside. (I have just a
    handful of third-party packages on my system, but all do exactly
    that. In my own projects I also typically have a two-step process;
    some "make" (or similar) generation process (in the source directory)
    and some "install" (or similar) installation or upload process.)

    There are of course also larger projects that may have an organized
    hierarchy of directories for system components and/or libraries.
    Then the makefile hierarchies handle that, each in its scope.

    For the use of executables or libraries there's the install step,
    of course.

    In professional contexts you don't want these steps combined.
    After the build you want to pre-install it in a QA area for the
    tests. Another step is the packaging to deliver the software
    products. And the package can then be installed; first in the
    next level QA test, later, after approval, at the production site.
    For private, primitive, or toy projects these steps are usually
    not (or not all) necessary. But even there it's typically not
    the right thing to have the build and install step combined, as
    explained initially.

    But anyway I also wonder that bart couldn't find it; maybe he's
    expecting some Windows "convention" on Unix? If in doubt I just
    type 'ls -ltr' to see the latest files or directories created or
    updated, which gives a concrete hint on where the software
    products have been generated.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Fri Oct 24 13:01:55 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 24/10/2025 03:00, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 24/10/2025 00:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    I note that you've ignored the vast majority of my previous article.

    I've noted it, but chose not to reply. You have a point of view and
    attitude which I don't share.

    Mainly that you don't care how complicated a program for even a simple
    task is, and how laborious and OS-dependent its build process is, so
    long as it (eventually) works.

    Eventually? Once I cloned the repository, it took about 45 seconds to
    build and install from source on my system, and I was able to run it immediately.

    I happen to have a script that automates much of the process that could otherwise be done with about 3 commands. Writing that script was
    worthwhile *for me* because I happen to build a lot of autotools-based
    packages from source. Without that, it might have taken me several
    minutes.

    That it favours your own OS, leaving users of other to have to jump
    through extra hoops, doesn't appear to bother you.

    Why should it bother me? I can certainly see that an easier way to
    build cdecl on Windows would be a good thing, but it doesn't affect me.

    [...]

    Well I built cdecl too, under WSL. Jesus, that looked a lot of work!

    Really? I built it under WSL myself, using exactly the same method I
    used on my Ubuntu system. Cygwin too. The latter was a bit slow, but I
    did other things while it was building.

    Are you at all interested in learning how to build this kind of software package more easily? If so, email me. This isn't really about C, so
    I'd rather not dive into it too deeply here.

    There are certainly things to dislike about autotools, but there are
    thousands of software packages that use it. Once you learn how to build
    one such package, you can build most of them (at least in a Unix-like environment).

    No, it doesn't work well for Windows without a Unix-like emulation
    layer. Maybe it could be enhanced to work better on Windows. Maybe you
    could contribute to making that happen.

    However, it took me a while to find where it put the executable, as
    the make process doesn't directly tell you that. It seems it puts it
    inside the src directory, which is unusual. It further appears that
    you have to do 'make install' to be able to run it without a path.

    That's not unusual -- and the "make install" step is common to hundreds
    of software packages. I usually don't notice where the executable
    initially appears; "make install" puts it where I want it.

    (Yes, I did glance at the readme, but it is a .md file which I didn't
    notice, and in plain text it looked unreadable.)

    Sure, it would be nice if the README.md were more legible. Most of
    them are. Apparently the author was more interested in it being
    readable using a markdown viewer. But if you view the project's
    site on GitHub, the README.md is formatted for you.

    When I did run it, then while it had a fair number of options, it
    didn't appear to do much beyond converting C declarations to and from
    an English description.

    That program is 2.8 MB (10 times the size of my C compiler).

    As mentioned later in this thread, you didn't strip the executable. It
    might be nice if stripping the executable were the default.

    I guess you don't care about that either. But surely, you must be
    curious about WHY it is so big? You must surely know, with your
    decades of experience, that this is 100 times bigger than necessary
    for such a task?

    How many lines of assembly language were generated and discarded during compilation? I'm guessing you don't know or care. Why do you care so
    much about other things that you don't need?

    I decided to make my own mini-cdecl. It took 20 minutes and works like this:

    c:\cx>qq cdecl
    Mycdecl> explain void (*f(int i))(void);
    f = proc(i32)ref proc()void
    Mycdecl> q

    I doubt that that obscure syntax would be of interest to most people,
    though I'm sure it works for you. If you wanted to generate something
    more readable by more users, you might have an interesting competitor to
    cdecl.

    [...]
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Fri Oct 24 13:07:45 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> writes:
    On 24/10/2025 15:27, bart wrote:
    [...]
    Well I built cdecl too, under WSL. Jesus, that looked a lot of work!

    I have no experience with WSL, so I can't comment on the effort
    there. For my own use on a Linux system, I had to install a package
    (apt-get install libreadline-dev), but that's neither difficult to do,
    or time-consuming, and it was not hard to see what was needed. Of
    course, a non-programmer might not have realised that was needed, but
    if you are stumped on a configure script error "readline.h header not
    found, use --without-readline" and can't figure out how to get
    "readline.h" or configure the program to avoid using it, and can't at
    least google for help, then you are probably not the target audience
    for cdecl.

    WSL, "Windows Subsystem for Linux" (which should probably have been
    called "Linux Subsystem for Windows") provides something that looks just
    like a direct Linux desktop system. It supports several different
    Linux-based distributions. I use Ubuntu, and the build procedure under
    WSL is exactly the same as under Ubuntu.

    However, it took me a while to find where it put the executable, as
    the make process doesn't directly tell you that. It seems it puts it
    inside the src directory, which is unusual. It further appears that
    you have to do 'make install' to be able to run it without a path.

    I agree that putting the executable in "src" is a little odd. But
    running "make install" is hardly unusual - it is as standard as it
    gets. (And of course there are a dozen other different ways you can
    arrange to run the programs without a path if you don't like "make
    install".)

    Putting the executable in src is very common for this kind of package.
    I generally don't notice, since I always run "make install", which knows
    where to find the executable and where to copy it.

    [...]

    That program is 2.8 MB (10 times the size of my C compiler).

    First, as usual, nobody cares about a couple of megabytes. Secondly,
    if you /do/ care, then you might do at least a /tiny/ bit of
    investigation. First, run "strip" on it to remove debugging symbols
    - now it is a bit over 600 KB. By running "strings" on it, I can see
    that about 100 KB is strings - messages, rules, types, keywords, etc.

    It's easier than that. The Makefile provides an "install-strip" target
    that does the installation and strips the executable. A lot of packages
    like this support "make install-strip". For those that don't, just run
    "strip" manually after installation.

    [...]
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Fri Oct 24 13:20:45 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 24/10/2025 18:35, David Brown wrote:
    On 24/10/2025 15:27, bart wrote:
    On 24/10/2025 03:00, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 24/10/2025 00:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]

    I note that you've ignored the vast majority of my previous article.

    I've noted it, but chose not to reply. You have a point of view and
    attitude which I don't share.

    Mainly that you don't care how complicated a program for even a
    simple task is, and how laborious and OS-dependent its build
    process is, so long as it (eventually) works.

    That it favours your own OS, leaving users of other to have to jump
    through extra hoops, doesn't appear to bother you.

    Why would someone care what how someone else writes their code, or
    what it does, or what systems it runs on?  They guy who wrote cdecl
    gets to choose exactly how he wants to write it, and what systems it
    supports. We others get it for free - we can use it if we like and
    it if it suits our needs.  But neither Keith nor anyone else paid
    that guy to do the work, or contributed anything to the task, and we
    have no right to judge what he choose to do, or how he choose to do
    it.

    This a curious argument: it's free software so you don't care in the slightest how efficient it is or how user-friendly it might be to
    build?

    Its efficiency is not a great concern. I've seen no perceptible delay
    between issuing a command to cdecl and seeing the result. No, I don't
    much care what it does behind the scenes. If I did care, I might look
    through the sources and try to think of ways to improve it. But the
    effort to do so would vastly exceed any time I might save running it.

    The build and installation process for cdecl is very user-friendly. It
    matches the process for thousands of other software packages that are distributed in source. I can see that the process might be confusing if
    you're not accustomed to it. If you *asked* rather than just
    complaining, you might learn something.

    The stripped executable occupies about 0.000008% of my hard drive.

    This is a program that reads lines of text from the terminal and
    translates them into another line of text. THAT needs thirty thousand
    lines of configure script?! And that's even before you start compiling
    the program itself.

    The configure script is automatically generated from "configure.ac",
    which is 343 lines, 241 lines if comments and blank lines are
    deleted. I've never written a configure.ac file myself, but most
    of it looks like boilerplate. It would probably be fairly easy
    (with some experience) to create one by modifying an existing one
    from another project.

    I'm thinking of making available some software that does even less,
    but wrap enough extra and POINTLESS levels complexity around that
    you'd need to lease time on a super-computer to build it. But the
    software is free so that makes it alright?

    Free software still has to be usable. cdecl is usable for most of us.

    [...]

    I was talking about all the stuff scrolling endlessly up to the screen
    for a minute and a half while running the configure script and then
    compiling the modules.

    Why is that a problem? If you like, you can redirect the output of "./configure" and "make" to a file, and take a look at the output later
    if you need to (you probably won't).

    [...]
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c on Fri Oct 24 15:01:19 2025
    From Newsgroup: comp.lang.c

    On 10/23/2025 4:04 PM, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 23/10/2025 02:19, Thiago Adams wrote:
    On 22/10/2025 18:39, Keith Thompson wrote:
    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.

    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa.  For example :

         $ cdecl
         Type `help' or `?' for help
         cdecl> explain const char *foo[42]
         declare foo as array 42 of pointer to const char
         cdecl> declare bar as pointer to function (void) returning int
         int (*bar)(void )

    It's also available via the web site <https://cdecl.org/>.
    This one does not work:
    void (*f(int i))(void)

    KT said the newer version is only available by building from source
    code, which must be done under some Linux-compatible system.

    As far as I know, it should build on just about any Unix-like system,
    not just ones that happen to use the Linux kernel. Perhaps that's what
    you mean by "Linux-compatible"? If so, I suggest "Unix-like" would be clearer. (I'm building it under Cygwin as I write this.)

    I've had a look: it comprises 32Kloc of configure script, and 68Kloc
    of C sources, so 100Kloc just to decode declarations! (A bit longer
    than the 2-page version in K&R2.)

    Yes, and neither you nor I had to write any of it. I cloned the repo,
    ran one command (my wrapper script for builds like this), and it works.

    I wonder how many lines of code are required for the specification of
    the x86_64 CPU in the computer I'm using to write this. But really,
    it doesn't matter to me, since that work has been done, and all I
    have to do is use it.

    The configure script is automatically generated (I mentioned the
    "bootstrap" script that generates it if you build from the git repo).

    I suppose building it under Windows (without some Unix-like layer
    like MinGW or Cygwin) would be more difficult. That's true of
    a lot of tools that are primarily used on Unix-like systems.
    It's likely that the author of the code doesn't care about Windows.

    I agree that it can be a problem that a lot of code developed for
    Unix-like systems is difficult to build on Windows. For a lot
    of users, an emulation layer like Cygwin, MinGW, or WSL is a good
    enough solution. If it isn't for you, perhaps you could help solve
    the problem. Perhaps the GNU autotools could be updated with better
    Windows support. I wouldn't know how to do that; perhaps you would.

    Fwiw, if you can find the tool that you want to use here:

    https://vcpkg.io

    it can be automatically built and integrated into MSVC. So far, it works okay...




    [...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 24 23:18:40 2025
    From Newsgroup: comp.lang.c

    On 24/10/2025 21:20, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 24/10/2025 18:35, David Brown wrote:
    On 24/10/2025 15:27, bart wrote:
    On 24/10/2025 03:00, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 24/10/2025 00:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]

    I note that you've ignored the vast majority of my previous article.

    I've noted it, but chose not to reply. You have a point of view and
    attitude which I don't share.

    Mainly that you don't care how complicated a program for even a
    simple task is, and how laborious and OS-dependent its build
    process is, so long as it (eventually) works.

    That it favours your own OS, leaving users of other systems to jump
    through extra hoops, doesn't appear to bother you.

    Why would someone care how someone else writes their code, or
    what it does, or what systems it runs on?  The guy who wrote cdecl
    gets to choose exactly how he wants to write it, and what systems it
    supports. The rest of us get it for free - we can use it if we like
    and if it suits our needs.  But neither Keith nor anyone else paid
    that guy to do the work, or contributed anything to the task, and we
    have no right to judge what he chose to do, or how he chose to do
    it.

    This is a curious argument: it's free software so you don't care in the
    slightest how efficient it is or how user-friendly it might be to
    build?

    'Efficiency' is about lots of different aspects. If you use a metric
    which compares the scale of the actual task (compile a small,
    interactive console-based program into an executable) with what actually
    needs to be done here, it is wildly out of proportion.

    As one consequence of that, it is impossible to build on pure Windows.

    Its efficiency is not a great concern. I've seen no perceptible delay between issuing a command to cdecl and seeing the result. No, I don't
    much care what it does behind the scenes. If I did care, I might look through the sources and try to think of ways to improve it. But the
    effort to do so would vastly exceed any time I might save running it.

    The build and installation process for cdecl is very user-friendly.

    Unless you're on Windows, because it is designed for Linux and Unix
    systems ONLY. Even on the latter, nobody messes with it because it is so complicated. Just cross your fingers and hope nothing goes wrong
    otherwise you're f***ed.


    It
    matches the process for thousands of other software packages that are distributed in source. I can see that the process might be confusing if you're not accustomed to it. If you *asked* rather than just
    complaining, you might learn something.

    The stripped executable occupies about 0.000008% of my hard drive.

    This is a program that reads lines of text from the terminal and
    translates them into another line of text. THAT needs thirty thousand
    lines of configure script?! And that's even before you start compiling
    the program itself.

    The configure script is automatically generated from "configure.ac",
    which is 343 lines, 241 lines if comments and blank lines are
    deleted. I've never written a configure.ac file myself, but most
    of it looks like boilerplate. It would probably be fairly easy
    (with some experience) to create one by modifying an existing one
    from another project.

    Or you could scrap the configure script completely. How about that?

    I was talking about all the stuff scrolling endlessly up to the screen
    for a minute and a half while running the configure script and then
    compiling the modules.

    Why is that a problem? If you like, you can redirect the output of "./configure" and "make" to a file, and take a look at the output later
    if you need to (you probably won't).

    It's a problem because I can see all the stupid crap it wastes time
    doing. Does it really need to test whether 'stdio.h' is available, and
    what happens if it isn't? Wouldn't you find out as soon as you try to
    compile anything?
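
    As I understand it, each such check boils down to configure writing
    out a throwaway test program, compiling it, and recording whether the
    compile succeeded. A rough sketch of what the stdio.h check generates
    (the real generated conftest.c carries extra boilerplate):

        /* conftest-style probe: if this compiles, configure records
           "checking for stdio.h... yes" and moves on to the next test. */
        #include <stdio.h>

        int main(void)
        {
            return 0;
        }

    On any system from the last few decades the answer is, of course,
    always "yes".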

    (I remember trying to build A68G, an interpreter, on Windows, and the 'configure' step was a major obstacle. But I was willing to isolate the
    12 C source files involved, then it was built in one second.

    I did of course try building it in Linux too, and it took about 5
    minutes that I recall, using a spinning hard drive, mostly spent
    running through that configure script.

    Utterly pointless. And if I'd wanted to build another application on the
    same machine, it would have to do all the tests again!

    What on earth could have changed? If you say that the environment
    /could/ change between those two builds, then equally it could have
    changed during the minute or so between the configure tests, and
    starting to compile code.)



    [...]


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Sat Oct 25 13:04:36 2025
    From Newsgroup: comp.lang.c

    On 24/10/2025 20:50, bart wrote:
    On 24/10/2025 18:35, David Brown wrote:
    On 24/10/2025 15:27, bart wrote:
    On 24/10/2025 03:00, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 24/10/2025 00:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]

    I note that you've ignored the vast majority of my previous article.

    I've noted it, but chose not to reply. You have a point of view and
    attitude which I don't share.

    Mainly that you don't care how complicated a program for even a
    simple task is, and how laborious and OS-dependent its build process
    is, so long as it (eventually) works.

    That it favours your own OS, leaving users of other systems to jump
    through extra hoops, doesn't appear to bother you.


    Why would someone care how someone else writes their code, or
    what it does, or what systems it runs on?  The guy who wrote cdecl
    gets to choose exactly how he wants to write it, and what systems it
    supports. The rest of us get it for free - we can use it if we like
    and if it suits our needs.  But neither Keith nor anyone else paid that
    guy to do the work, or contributed anything to the task, and we have
    no right to judge what he chose to do, or how he chose to do it.

    This is a curious argument: it's free software so you don't care in the slightest how efficient it is or how user-friendly it might be to build?


    If I find that installing or using software is difficult, then I would
    look elsewhere for alternatives that worked better for me. But unless
    the software author is doing some kind of harm to other people (such as knowingly or intentionally giving incorrect results from his program, or violating other people's copyrights or "stealing" their work), who am I
    to judge how he chooses to write his software? He makes his own
    decisions on how to write it, and I make my own decisions on whether or
    not to use it. Equally, as he has chosen to publish the software under
    the GPL, he has no right to judge or care about how I use the software.

    Of course people can have opinions, and express them, but that's very different from having some sort of requirement to be bothered about
    something we don't feel is as good as it could be. (And in this case, I
    don't see that the software is too big, given what it does, and I don't
    see the build process as unduly difficult, given the author's
    preferences, interests and aims. And no doubt someone will publish an
    easy to install Windows build of it sooner or later.)

    This is a program that reads lines of text from the terminal and
    translates them into another line of text. THAT needs thirty thousand
    lines of configure script?! And that's even before you start compiling
    the program itself.

    I'm thinking of making available some software that does even less, but
    wrap enough extra and POINTLESS levels of complexity around it that you'd need
    to lease time on a super-computer to build it. But the software is free
    so that makes it alright?


    Feel free to do that if you like. If it is useful enough, people might
    use it - if it is not, then they won't. No one is going to complain if
    you publish such software.




    Well I built cdecl too, under WSL. Jesus, that looked like a lot of work!

    I have no experience with WSL, so I can't comment on the effort there.

    I was talking about all the stuff scrolling endlessly up to the screen
    for a minute and a half while running the configure script and then compiling the modules.


    You are getting worked up about some text output that scrolled
    "endlessly" for a minute and a half? (Do you spot the exaggeration
    here?) Of course it is less of an issue for me - "./configure" took a
    mere 10 seconds on my ten year old machine. But even at a minute and a
    half, it's just a task that the computer runs, once, and it is done
    without effort. Try relaxing a little more, and perhaps use that minute
    and a half to stretch your legs or drink some coffee, rather than to
    build up a pointless fury.

    That program is 2.8 MB (10 times the size of my C compiler).

    First, as usual, nobody cares about a couple of megabytes.  Secondly,
    if you /do/ care, then you might do at least a /tiny/ bit of
    investigation.   First, run "strip" on it to remove debugging symbols
    - now it is a bit over 600 KB.  By running "strings" on it, I can see
    that about 100 KB is strings - messages, rules, types, keywords, etc.

    If I was directly building it myself, then I would use -s with gcc. But since the process is automatic via makefiles, I assumed it would give me
    a working, production version, not a version needing to be debugged!


    Actually, on closer checking (not because /I/ care, but because /you/ apparently care) it was not debugging information, but all the linking
    and symbolic information that is a normal part of elf format files when
    they are built (allowing for incremental linking, using the files as
    static libraries for other programs, tracing the programs,
    fault-finding, etc.). Sometimes build processes strip these as part of
    their build or "make install" process (indeed there is a "make
    install-strip" option). Usually it doesn't matter much on *nix systems
    - after all, the extra disk space has a cost of about a microdollar and
    the OS won't even bother reading that part of the elf file into ram when running the program.


    It is entirely possible that your little alternative is a useful
    program and does what you personally want and need with a smaller
    executable. But cdecl does a great deal more,

    CDECL translates a single C type specification into linear LTR form. Or
    vice versa. That's what nearly everyone needs it for, and why it exists. Why, what other stuff does it do?


    RTFM.

    So, yes, anyone with an inquiring mind can form an idea of how much code might be needed for such a task, and how it ought to compare with a
    complete language implementation.

    In fact, people have posted algorithms here for doing exactly the same.
    I don't recall that they took tens of thousands of lines to describe.

    doing things that other people need and want (like handling C++
    declarations - surely 100 times more effort than handling C
    declarations, especially the limited older standards you use).

    So, it's a hundred times bigger than necessary due to C++. That explains that then. (Sorry, 20 times bigger because whoever provided the build
    system decided it should include debug info to make it 5 times as big
    for no reason.)

    It is not a program for expanding or explaining C declarations. It is a program for expanding or explaining C and C++ declarations. C++ is not "unnecessary", it is part of what it does.

    Feel free to imagine that you could do much better, in orders of
    magnitude less space. Feel free to say so. But don't feel free to
    berate others for "not caring", and don't feel free to post
    self-righteous crap that somehow blames people here for other people's code.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Sat Oct 25 13:15:45 2025
    From Newsgroup: comp.lang.c

    On 24/10/2025 22:07, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    On 24/10/2025 15:27, bart wrote:
    [...]
    Well I built cdecl too, under WSL. Jesus, that looked like a lot of work!

    I have no experience with WSL, so I can't comment on the effort
    there. For my own use on a Linux system, I had to install a package
    (apt-get install libreadline-dev), but that's neither difficult to do,
    or time-consuming, and it was not hard to see what was needed. Of
    course, a non-programmer might not have realised that was needed, but
    if you are stumped on a configure script error "readline.h header not
    found, use --without-readline" and can't figure out how to get
    "readline.h" or configure the program to avoid using it, and can't at
    least google for help, then you are probably not the target audience
    for cdecl.

    WSL, "Windows Subsystem for Linux" (which should probably have been
    called "Linux Subsystem for Windows") provides something that looks just
    like a direct Linux desktop system. It supports several different Linux-based distributions. I use Ubuntu, and the build procedure under
    WSL is exactly the same as under Ubuntu.


    Sure. I know what WSL is, I just haven't used it. (In my office I have
    a Windows machine and a Linux machine, and it's not often that I need to
    use more than a basic set of msys2 stuff on Windows or Wine on Linux,
    because I can usually run things on their "native" OS.)

    However, it took me a while to find where it put the executable, as
    the make process doesn't directly tell you that. It seems it puts it
    inside the src directory, which is unusual. It further appears that
    you have to do 'make install' to be able to run it without a path.

    I agree that putting the executable in "src" is a little odd. But
    running "make install" is hardly unusual - it is as standard as it
    gets. (And of course there are a dozen other different ways you can
    arrange to run the programs without a path if you don't like "make
    install".)

    Putting the executable in src is very common for this kind of package.
    I generally don't notice, since I always run "make install", which knows where to find the executable and where to copy it.


    Maybe I am coloured by my preferences - I prefer to keep the build in a
    tree adjacent to the source code, rather than in the source code
    directories, at least for projects of a certain size. Of course you
    (and others) are right that lots of builds /do/ make the executable in
    the source directory.

    [...]

    That program is 2.8 MB (10 times the size of my C compiler).

    First, as usual, nobody cares about a couple of megabytes. Secondly,
    if you /do/ care, then you might do at least a /tiny/ bit of
    investigation. First, run "strip" on it to remove debugging symbols
    - now it is a bit over 600 KB. By running "strings" on it, I can see
    that about 100 KB is strings - messages, rules, types, keywords, etc.

    It's easier than that. The Makefile provides an "install-strip" option
    that does the installation and strips the executable. A lot of packages
    like this support "make install-strip". For those that don't, just run "strip" manually after installation.


    Yes.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Sat Oct 25 13:51:42 2025
    From Newsgroup: comp.lang.c

    On 25/10/2025 12:04, David Brown wrote:
    On 24/10/2025 20:50, bart wrote:

    You are getting worked up about some text output that scrolled
    "endlessly" for a minute and a half?

    While it's happening, you don't know how long it will end up taking.

    In the past, and on a slower machine with a spinning hard drive, some
    builds have taken the best part of an hour (one for a binary that would
    have been only 0.5MB).

    But on the same machine, my stuff still worked more or less instantly.


    If I was directly building it myself, then I would use -s with gcc.
    But since the process is automatic via makefiles, I assumed it would
    give me a working, production version, not a version needing to be
    debugged!


    Actually, on closer checking (not because /I/ care, but because /you/ apparently care) it was not debugging information, but all the linking
    and symbolic information that is a normal part of elf format files when
    they are built (allowing for incremental linking, using the files as
    static libraries for other programs, tracing the programs, fault-
    finding, etc.).

    From looking at the C sources which are 50Kloc (68Kloc with headers),
    I'd expect an executable for x64 to be upwards of 500KB. I think 600KB
    was mentioned as the stripped size.

    This is one way of spotting if an executable is unreasonably large. This
    can be important, even if you have vast amounts of storage. For example,
    it might have been tampered with somehow. In any case, it is suspicious
    and worth investigating.

    CDECL translates a single C type specification into linear LTR form.
    Or vice versa. That's what nearly everyone needs it for, and why it
    exists. Why, what other stuff does it do?


    RTFM.

    OK, it does rather more than the --help summary suggests. It includes
    defining typedefs for example (I haven't tried it).

    For my purposes, CDECL has always been buggy in the past and not so
    useful (I used the online version).

    In any case, sometimes you want to 'explain' some complex type in
    someone else's code, which involves macros and/or typedefs defined 1000s
    of lines earlier, or in some nested header.

    Then you can't just extract the line and give it to CDECL.
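
    For example (a made-up fragment; the names are hypothetical):

        /* buried somewhere in a nested header: */
        typedef int (*handler_t)(void *ctx, int event);

        /* thousands of lines later, the line you actually want explained: */
        static handler_t dispatch_table[16];
        /* "array 16 of pointer to function (void *, int) returning int" -
           but only if the tool also gets to see the typedef */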

    This is why my older compiler included special directives that could be temporarily inserted into the code, and would generate the info during compilation.


    So, it's a hundred times bigger than necessary due to C++. That
    explains that then. (Sorry, 20 times bigger because whoever provided
    the build system decided it should include debug info to make it 5
    times as big for no reason.)

    It is not a program for expanding or explaining C declarations.  It is a program for expanding or explaining C and C++ declarations.  C++ is not "unnecessary", it is part of what it does.


    This is another matter. The CDECL docs talk about C and C++ type
    declarations being 'gibberish'.

    What do you feel about that, and the *need* for such a substantial tool
    to help understand or write such declarations?

    I would rather have put some effort into fixing the syntax so that such
    tools are not necessary!
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Sat Oct 25 17:18:30 2025
    From Newsgroup: comp.lang.c

    On 25/10/2025 14:51, bart wrote:

    This is another matter. The CDECL docs talk about C and C++ type declarations being 'gibberish'.

    What do you feel about that, and the *need* for such a substantial tool
    to help understand or write such declarations?

    I would rather have put some effort into fixing the syntax so that such tools are not necessary!

    Most C and C++ programmers don't need such tools - they are /not/
    necessary. They can sometimes save a little effort when you have to
    deal with poorly written code from elsewhere, or when you are faced with
    code written in a substantially different style from what you usually
    see. I don't think more than a very small proportion of C or C++
    programmers use cdecl (online or offline), at least not on a regular basis.

    Neither language requires that declarations be "gibberish" - but both
    language syntaxes allow people to write gibberish. The same applies to
    your language, and any other language. The sole reason why you think
    that your language's syntax is clear is because the only examples you
    ever see, you wrote yourself - and thus you understand them.

    That does not mean that I or anyone else thinks that C's syntax is
    "perfect" in any sense. But it is good enough, and we all understand
    that different programmers will have different ideas about what to
    "fix". The only possible way to get a language that any one person
    thinks is ideal and always clear and simple, is for that person to
    design their own language according to their own needs and preferences.
    And from your example, we know that even that doesn't always work - you
    have on multiple occasions said you don't know details of your own
    languages.

    And I'd love to hear your plan for "fixing" the syntax of C - noting
    that changing the syntax of C means getting the C standards committee to accept your suggestions, getting at least all major C compilers to
    support them, and getting the millions of C programmers to use them.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Sat Oct 25 11:40:29 2025
    From Newsgroup: comp.lang.c

    On 25/10/2025 14:51, bart wrote:

    This is another matter. The CDECL docs talk about C and C++ type declarations being 'gibberish'.

    What do you feel about that, and the *need* for such a substantial tool
    to help understand or write such declarations?

    I would rather have put some effort into fixing the syntax so that such tools are not necessary!

    They aren't. I've never needed them and have only rarely used them - and
    never found the results worth the trouble. For me, converting a C type
    into cdecl format has a feeling similar to what I feel when a C
    expression statement is translated into COBOL - lots of unnecessary
    extra verbiage that gets in the way of my understanding rather than aiding it.

    C declaration syntax builds upon a simple principle: declaration
    reflects use. It then adds some unavoidable complications on top of that
    principle. There are multiple ways that any given identifier can be used,
    but there's one particular one that is the model for the declaration. A
    given way of using an identifier can be used by identifiers of several different types, but at most one of those types is the one that is
    declared using a declaration that mirrors that particular usage.
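
    For example (a small illustration, with arbitrary names), a
    declaration mirrors one particular way the identifier is used:

        #include <stdio.h>

        int *a[10];   /* declared so that the expression *a[i] is an int */

        int main(void)
        {
            int x = 123;
            a[3] = &x;
            printf("%d\n", *a[3]);   /* the use that "int *a[10];" mirrors */
            return 0;
        }

    The same principle covers the declaration discussed earlier:
    "void (*f(int))(void)" declares f so that f(42) yields a pointer to
    a function, and (*f(42))() is a call returning void.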
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Sat Oct 25 16:48:01 2025
    From Newsgroup: comp.lang.c

    On 25/10/2025 12:04, David Brown wrote:
    On 24/10/2025 20:50, bart wrote:

    I was talking about all the stuff scrolling endlessly up to the screen
    for a minute and a half while running the configure script and then
    compiling the modules.


    You are getting worked up about some text output that scrolled
    "endlessly" for a minute and a half?  (Do you spot the exaggeration here?)  Of course it is less of an issue for me - "./configure" took a
    mere 10 seconds on my ten year old machine.  But even at a minute and a half, it's just a task that the computer runs, once, and it is done
    without effort.  Try relaxing a little more, and perhaps use that minute and a half to stretch your legs or drink some coffee, rather than to
    build up a pointless fury.

    The point about the minute and a half is that a fast compiler even on my machine could translate tens of millions of lines of source code in that
    time. If the app was actually that size (say, a web browser) then fine.

    But the C source is only 0.07Mloc. So what TF is going on?

    It appears that this is one of those apps that is superficially written
    in 'C' but it actually relies on a plethora of other languages, files,
    tools and myriad kinds of options. You can't just go into ./src and do
    'gcc *.c'.

    Even the makefile has to first be generated. There are files with .in,
    .am and .m4 extensions. The eventual 'makefile' has 2000 lines of gobbledygook, to build 49 C modules. (My projects are also around 40
    modules; the build info comprises, funnily enough, some 40 lines.)

    So this is a complicated build process! Unfortunately it is typical of
    such products originating from Unix-Linux (you really want one term that
    you can use for both).

    This is not specific to CDECL; it's nearly everything that comes out of Unix-Linux. But this came up and I had a look.

    But let me ask then about this particularly app (an interactive
    text-based program where performance is irrelevant; it could have been
    written in Python): do you think it would have been possible to
    distribute this as a set of 100% *standard* C source files, with the
    only dependency being *any* C compiler?



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Sat Oct 25 19:14:45 2025
    From Newsgroup: comp.lang.c

    On 25/10/2025 17:48, bart wrote:
    On 25/10/2025 12:04, David Brown wrote:
    On 24/10/2025 20:50, bart wrote:

    I was talking about all the stuff scrolling endlessly up to the
    screen for a minute and a half while running the configure script and
    then compiling the modules.


    You are getting worked up about some text output that scrolled
    "endlessly" for a minute and a half?  (Do you spot the exaggeration
    here?)  Of course it is less of an issue for me - "./configure" took a
    mere 10 seconds on my ten year old machine.  But even at a minute and
    a half, it's just a task that the computer runs, once, and it is done
    without effort.  Try relaxing a little more, and perhaps use that
    minute and a half to stretch your legs or drink some coffee, rather
    than to build up a pointless fury.

    The point about the minute and a half is that a fast compiler even on my machine could translate tens of millions of lines of source code in that time. If the app was actually that size (say, a web browser) then fine.

    But the C source is only 0.07Mloc. So what TF is going on?

    It appears that this is one of those apps that is superficially written
    in 'C' but it actually relies on a plethora of other languages, files,
    tools and myriad kinds of options. You can't just go into ./src and do
    'gcc *.c'.


    It uses yacc and lex for generating the parsing code. Fair enough -
    that's what those tools are for.

    Even the makefile has to first be generated. There are files
    with .in, .am and .m4 extensions. The eventual 'makefile' has 2000 lines
    of gobbledygook, to build 49 C modules. (My projects are also around 40 modules; the build info comprises, funnily enough, some 40 lines.)

    So this is a complicated build process! Unfortunately it is typical of
    such products originating from Unix-Linux (you really want one term that
    you can use for both).

    Lots of people use *nix, or POSIX, or unix-like as single terms. Most software that is for "big" systems (rather than small embedded ones),
    and is not Windows or Mac only, is *nix and works on various Unix
    systems, Solaris, AIX, BSD, Linux, and a wide variety of related systems
    - including *nix layers on Windows (msys2, cygwin, WSL, etc.). One of
    the reasons why such a lot of software is so widely portable is the
    widespread use of autotools - with the common "./configure && make -j &&
    make install" combination for building the software. The "configure"
    part handles the details and differences between the systems so that the software is portable.

    Now, everyone who has ever used this can see potential for improvement.
    There are large numbers of checks that autotools does that are always
    "yes" on most systems from the last decade. But there are also people
    who want to use software on their ancient Sun workstations, or their big-endian MIPS machines. And there are lots of people who want to
    compile software on their machines but haven't installed various
    libraries that are essential or potentially useful - configure will see
    that and tell them about it.

    So the autotools system is very useful, and helps keep things highly
    portable - but there is certainly a potential for improvement with
    caching the results of the tests. Maybe someday someone will feel
    bothered enough by the 10 second configure that they will make such improvements.


    Yes, the whole thing is complicated. But the complications are hidden,
    so for the people using autotools builds, and who have some basic
    familiarity with them, the process is simple and almost all automatic.
    We can all be curious about what is going on behind the scenes, but you
    can also just do the build and use the program without bothering about
    these details. Life is too short to try to understand /everything/.


    This is not specific to CDECL; it's nearly everything that comes out of Unix-Linux. But this came up and I had a look.

    But let me ask then about this particularly app (an interactive text-
    based program where performance is irrelevant; it could have been
    written in Python): do you think it would have been possible to
    distribute this as a set of 100% *standard* C source files, with the
    only dependency being *any* C compiler?


    Would it have been possible? Yes, of course. Would it have been the
    real source here? No - the author used source generator tools like yacc
    and lex to generate C code. Would it have made the build process easier
    or faster for normal users of the source code? No - the typical person
    who is interested in such software and is keen enough on getting the
    very latest version, can build it in a couple of minutes at most. Would
    it help people who just want a binary for normal usage? No, most would
    just get cdecl from their package manager. Generally speaking, if you
    want binaries for such a program for Windows, you can google for it - if
    an open source program is useful, someone will have made a windows
    build. Unfortunately in this case, googling for "windows cdecl binary"
    is going to give you results for the "cdecl" calling convention for
    MSVC, swamping any useful results. Still, googling for "msys2 cdecl"
    does fine.

    I can appreciate that the way this program is written, and the way the
    build process works, is awkward for /you/. But I think you are a very
    unique person, and the program and its build system are absolutely fine
    for the vast majority of people who would want to get the source from
    the github page.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Sat Oct 25 15:14:38 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 25/10/2025 12:04, David Brown wrote:
    On 24/10/2025 20:50, bart wrote:
    I was talking about all the stuff scrolling endlessly up to the
    screen for a minute and a half while running the configure script
    and then compiling the modules.

    You are getting worked up about some text output that scrolled
    "endlessly" for a minute and a half?  (Do you spot the exaggeration
    here?)  Of course it is less of an issue for me - "./configure" took
    a mere 10 seconds on my ten year old machine.  But even at a minute
    and a half, it's just a task that the computer runs, once, and it is
    done without effort.  Try relaxing a little more, and perhaps use
    that minute and a half to stretch your legs or drink some coffee,
    rather than to build up a pointless fury.

    The point about the minute and a half is that a fast compiler even on
    my machine could translate tens of millions of lines of source code in
    that time. If the app was actually that size (say, a web browser) then
    fine.

    But the C source is only 0.07Mloc. So what TF is going on?

    It appears that this is one of those apps that is superficially written
    in 'C' but it actually relies on a plethora of other languages, files,
    tools and myriad kinds of options. You can't just go into ./src and do
    'gcc *.c'.

    Even the makefile has to first be generated. There are files with .in,
    .am and .m4 extensions. The eventual 'makefile' has 2000 lines of gobbledygook, to build 49 C modules. (My projects are also around 40
    modules; the build info comprises, funnily enough, some 40 lines.)

    So this is a complicated build process! Unfortunately it is typical of
    such products originating from Unix-Linux (you really want one term
    that you can use for both).

    This is not specific to CDECL; it's nearly everything that comes out of Unix-Linux. But this came up and I had a look.

    Yes, the build process is *internally* complicated. When I type
    the three or so commands needed to build and install the tool from
    source, it uses a lot of non-C input files, and generates some large intermediate files. None of that bothers me when I install it on
    a Unix-like system. There's no particular reason for me to care.

    It obviously bothers you, particularly if you want to install it
    on pure Windows. So what are you going to do about it? So far,
    you've been complaining *for years* to a group of people who are
    not in a position to do anything about it. I am not a GNU autotools maintainer, and as far as I know nobody else here is either.

    Maybe GNU autotools could be modernized, not bothering to check for
    language and library features that are now universally supported.
    Maybe it could be updated to work better on pure Windows, without
    Cygwin or WSL or MSYS. Maybe that's something useful you could
    work on. That kind of work could make hundreds or thousands of
    existing software tools easier to build and install.

    Or if you only want to talk about it, maybe it would be more
    productive to do so on one of the GNU mailing lists. (If you do,
    be aware that a lot of the people you'll be talking to don't care
    about MS Windows.)

    But let me ask then about this particularly app (an interactive
    text-based program where performance is irrelevant; it could have been written in Python): do you think it would have been possible to
    distribute this as a set of 100% *standard* C source files, with the
    only dependency being *any* C compiler?

    I haven't studied the source. Maybe you're right. Maybe *you* could
    work on it. If you can adapt the existing cdecl program into, say,
    a single portable C source file, so it can easily be built either
    on Unix-like systems or on Windows, that could actually be useful.
    It's GPL licensed, so you can create your own forked version, just
    as the current maintainer has. (I would ask that you at least
    retain the option to use GNU readline, so there would have to be
    *some* configuration.)
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Sun Oct 26 07:25:23 2025
    From Newsgroup: comp.lang.c

    (This reply is not meant for bart, but rather for all interested
    folks who should not get repelled by his FUD posts.)

    On 25.10.2025 00:18, bart wrote:
    [...]

    (I remember trying to build A68G, an interpreter, on Windows, and the 'configure' step was a major obstacle. But I was willing to isolate the
    12 C source files involved, then it was built in one second.

    I did of course try building it in Linux too, and it took about 5
    minutes that I recall, using a spinning hard drive, mostly spent
    running through that configure script.

    (I don't know what system or system configuration the poster runs.
    I'm well aware that if you are using the Windows platform you may
    suffer from many things; but the platform choice is your decision!
    But maybe he's just misremembering; and nonetheless spreading FUD.)

    I have a quite old (~16+ years old) Linux system that was already at
    the _very low end of the performance range_ when I bought it.
    With this old system the ./configure needs less than 10 seconds,
    and the build process with make about _half a minute_ for the whole
    a68g Genie system. - The whole procedure, from software download,
    extraction, configure/make, and start an Algol application, needs
    one minute! (Make that two minutes if you are typing v_e_r_y slowly
    or have a slow download link. Or just put the necessary commands in
    a shell file; just did that and it needed (including the download)
    less than 45 seconds, and ready to run.)

    Janis

    [...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Sun Oct 26 07:44:06 2025
    From Newsgroup: comp.lang.c

    On 25.10.2025 17:18, David Brown wrote:
    On 25/10/2025 14:51, bart wrote:
    [...]
    [...]

    And I'd love to hear your plan for "fixing" the syntax of C - noting
    that changing the syntax of C means getting the C standards committee to accept your suggestions, getting at least all major C compilers to
    support them, and getting the millions of C programmers to use them.

    I don't think that works. - If I look at what they've done (to "C" and to
    "C++") to fix (or add) things, it seems it's rather getting worse.
    And, in that light, who would be served by such changes? If you
    want to stay widely compatible you can't really fix anything but details.
    Don't forget that bart's criticism had (at least partly) touched even
    the heart of "C" design. - There's a reason why so many new languages
    appear, and some even get established.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Sun Oct 26 08:07:15 2025
    From Newsgroup: comp.lang.c

    On 25.10.2025 17:48, bart wrote:

    [...] Unfortunately it is typical of such products originating from Unix-Linux (you really want one term that you can use for both).

    There are such terms. - For decades I've used (and it's meanwhile
    also widely used!) the term Unix for the family of UNIX-like systems.
    I suggest using that term. (But you will also find other terms, like *nix,
    which I personally don't like much, but which are also widely understood.) For complaining and bloviating about that OS-family any
    of these terms might suit you well enough (and will be understood).

    Janis

    [...]
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Sun Oct 26 13:15:15 2025
    From Newsgroup: comp.lang.c

    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    bart <bc@freeuk.com> writes:
    On 24/10/2025 18:35, David Brown wrote:
    On 24/10/2025 15:27, bart wrote:
    On 24/10/2025 03:00, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 24/10/2025 00:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]

    I note that you've ignored the vast majority of my previous
    article.

    I've noted it, but chose not to reply. You have a point of view
    and attitude which I don't share.

    Mainly that you don't care how complicated a program for even a
    simple task is, and how laborious and OS-dependent its build
    process is, so long as it (eventually) works.

    That it favours your own OS, leaving users of other systems to
    jump through extra hoops, doesn't appear to bother you.

    Why would someone care how someone else writes their code, or
    what it does, or what systems it runs on?  The guy who wrote cdecl
    gets to choose exactly how he wants to write it, and what systems
    it supports. The rest of us get it for free - we can use it if we like
    and if it suits our needs.  But neither Keith nor anyone else
    paid that guy to do the work, or contributed anything to the task,
    and we have no right to judge what he chose to do, or how he
    chose to do it.

    This is a curious argument: it's free software so you don't care in the slightest how efficient it is or how user-friendly it might be to
    build?

    Its efficiency is not a great concern. I've seen no perceptible delay between issuing a command to cdecl and seeing the result. No, I don't
    much care what it does behind the scenes. If I did care, I might look through the sources and try to think of ways to improve it. But the
    effort to do so would vastly exceed any time I might save running it.

    The build and installation process for cdecl is very user-friendly.
    It matches the process for thousands of other software packages that
    are distributed in source. I can see that the process might be
    confusing if you're not accustomed to it. If you *asked* rather than
    just complaining, you might learn something.

    The stripped executable occupies about 0.000008% of my hard drive.

    This is a program that reads lines of text from the terminal and
    translates them into another line of text. THAT needs thirty
    thousand lines of configure script?! And that's even before you
    start compiling the program itself.

    The configure script is automatically generated from "configure.ac",
    which is 343 lines, 241 lines if comments and blank lines are
    deleted. I've never written a configure.ac file myself, but most
    of it looks like boilerplate. It would probably be fairly easy
    (with some experience) to create one by modifying an existing one
    from another project.

    I'm thinking of making available some software that does even less,
    but wrap enough extra and POINTLESS levels of complexity around it that
    you'd need to lease time on a super-computer to build it. But the
    software is free so that makes it alright?

    Free software still has to be usable. cdecl is usable for most of us.

    [...]

    I'd say that it is not sufficiently usable for most of us to actually
    use it.
    I was talking about all the stuff scrolling endlessly up to the
    screen for a minute and a half while running the configure script
    and then compiling the modules.

    Why is that a problem? If you like, you can redirect the output of "./configure" and "make" to a file, and take a look at the output
    later if you need to (you probably won't).

    [...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Sun Oct 26 11:26:01 2025
    From Newsgroup: comp.lang.c

    On 26/10/2025 06:25, Janis Papanagnou wrote:
    (This reply is not meant for bart, but rather for all interested
    folks who should not get repelled by his FUD posts.)

    On 25.10.2025 00:18, bart wrote:
    [...]

    (I remember trying to build A68G, an interpreter, on Windows, and the
    'configure' step was a major obstacle. But I was willing to isolate the
    12 C source files involved, then it was built in one second.

    I did of course try building it in Linux too, and it took about 5
    minutes that I recall, using a spinning hard drive, mostly spent
    running through that configure script.

    (I don't know what system or system configuration the poster runs.
    I'm well aware that if you are using the Windows platform you may
    suffer from many things; but the platform choice is your decision!
    But maybe he's just misremembering; and nonetheless spreading FUD.)

    I have a quite old (~16+ years old) Linux system that was already at
    the _very low end of the performance range_ when I bought it.
    With this old system the ./configure needs less than 10 seconds,
    and the build process with make about _half a minute_ for the whole
    a68g Genie system. - The whole procedure, from software download,
    extraction, configure/make, and start an Algol application, needs
    one minute! (Make that two minutes if you are typing v_e_r_y slowly
    or have a slow download link. Or just put the necessary commands in
    a shell file; just did that and it needed (including the download)
    less than 45 seconds, and ready to run.)


    The 5 minutes I quoted may have been for CPython. It would be for some
    Linux running under VirtualBox on a 2010 cheapest-in-the-shop PC.

    If I try A68G now, under WSL, using a 2021 second-cheapest PC but with
    SSD, I get:

    ./configure 20 seconds
    make 90 seconds

    Trying CDECL again (I've done it several times after deleting the folder):

    ./configure 35 seconds
    make 49 seconds

    However the A68G configure script is 11000 lines; the CDECL one 31600 lines.

    (I wonder why the latter needs 20000 more lines? I guess nobody is
    curious - or they simply don't care. OK, let's make it 100,000 and see if
    anyone complains! Is it possible this is some elaborate joke on the part
    of auto-conf to discover just how trusting and tolerant people can be?)


    Anyway, I then tried this new 3.10 A68G on the Fannkuch(9) benchmark:

    a68g fann.a68 5 seconds
    ./fann 3+3 seconds (via a68g --compile -O3 fann.a68)

    I then tried it under my scripting language (not statically typed):

    qq fann 0.4 seconds (qq built with my non-optimising compiler)

    'qq' takes about 0.1 seconds to build - under Windows which is
    considered slow for development. So, 1000 times faster to build, and it
    runs this program at least, 10 times faster, despite being dynamically
    typed.

    This is the vast difference between my world and yours.

    The whole procedure, from software download,
    extraction, configure/make, and start an Algol application, needs
    one minute!

    Only one minute; impressive! How about this:

    c:\qx>tm mm -r \mx\mm -r qq hello
    Hello World
    TM: 0.21

    This runs my systems language /from source code/, which then runs my interpreter /from source code/ (i.e. compiles into memory and runs
    immediately) then runs that test program.

    In 1/5th of a second (or 1/300th of a minute). This is equivalent to
    first compiling gcc from source (and all those extra utilities you seem
    to need) before using it/them to build a68g. I guess that would take a
    bit more than a minute.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Sun Oct 26 13:26:10 2025
    From Newsgroup: comp.lang.c

    On 26/10/2025 11:26, bart wrote:
    On 26/10/2025 06:25, Janis Papanagnou wrote:
    (This reply is not meant for bart, but rather for all interested
    folks who should not get repelled by his FUD posts.)

    On 25.10.2025 00:18, bart wrote:
    [...]

    (I remember trying to build A68G, an interpreter, on Windows, and the
    'configure' step was a major obstacle. But I was willing to isolate the
    12 C source files involved, then it was built in one second.

    I did of course try building it in Linux too, and it took about 5
    minutes that I recall, using a spinning hard drive, mostly spent
    running through that configure script.

    (I don't know what system or system configuration the poster runs.
    I'm well aware that if you are using the Windows platform you may
    suffer from many things; but the platform choice is your decision!
    But maybe he's just misremembering; and nonetheless spreading FUD.)

    I have a quite old (~16+ years old) Linux system that was already at
    the _very low end of the performance range_ when I bought it.
    With this old system the ./configure needs less than 10 seconds,
    and the build process with make about _half a minute_ for the whole
    a68g Genie system. - The whole procedure, from software download,
    extraction, configure/make, and start an Algol application, needs
    one minute! (Make that two minutes if you are typing v_e_r_y slowly
    or have a slow download link. Or just put the necessary commands in
    a shell file; just did that and it needed (including the download)
    less than 45 seconds, and ready to run.)


    The 5 minutes I quoted may have been for CPython. It would be for some
    Linux running under VirtualBox on a 2010 cheapest-in-the-shop PC.

    If I try A68G now, under WSL, using a 2021 second-cheapest PC but with
    SSD, I get:

       ./configure   20 seconds
       make          90 seconds

    Only one minute; impressive! How about this:

      c:\qx>tm mm -r \mx\mm -r qq hello
      Hello World
      TM: 0.21

    This runs my systems language /from source code/, which then runs my interpreter /from source code/ (i.e. compiles into memory and runs immediately) then runs that test program.

    TBF, all such tests need to be from 'cold'. I don't normally do that,
    because in routine compilation, you are building something that was just edited, or just compiled, just generated, or even just downloaded and extracted. So any files involved will already be cached.

    So the following are after a restart of my PC:

    Build CDECL under WSL (files were extracted before the restart):
    60/56 seconds instead 35/49 seconds for configure/make

    My demo above running both compiler and interpreter from source:
    0.31 seconds instead of 0.21 seconds

    New test of gcc compiling hello.c:
    1 second, settling down to 0.23 seconds on subsequent builds

    So gcc on Windows still takes longer to build a 4-line C program
    than it takes my tools to build an entire compiler and interpreter.

    This further emphasises the mismatch between my own everyday experience
    and that of most others here.

    To be clear, the speed of my tools is 99% due to massive advances in
    hardware over decades, rather than my own efforts; I didn't have to do much!

    However all those other tools run on the exact same hardware....

    Sometimes it pays to be on the ball, to question everything, and to be intolerant. If something takes even 2 seconds to do some task, it's not a long
    time, but ... why IS it taking 10 times as long as necessary?

    Paradoxically, huge efforts are expended in getting the fastest possible
    code from these large compilers, and sometimes the smallest code.

    Well, perhaps it's to try and keep up with users writing ever more
    inefficient programs! Maybe that 2-second program would have taken 10
    seconds.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Sun Oct 26 15:12:42 2025
    From Newsgroup: comp.lang.c

    On 25/10/2025 16:18, David Brown wrote:
    On 25/10/2025 14:51, bart wrote:

    This is another matter. The CDECL docs talk about C and C++ type
    declarations being 'gibberish'.

    What do you feel about that, and the *need* for such a substantial
    tool to help understand or write such declarations?

    I would rather have put some effort into fixing the syntax so that
    such tools are not necessary!

    And I'd love to hear your plan for "fixing" the syntax of C - noting
    that changing the syntax of C means getting the C standards committee to accept your suggestions, getting at least all major C compilers to
    support them, and getting the millions of C programmers to use them.

    I have posted such proposals in the past (probably before 2010).

    I can't remember the exact details, but I think it is possible to
    superimpose LTR type syntax on top of the existing language.

    It's clear however that nobody would be interested in actually doing
    that. I might have done it as a proof of concept, but with little
    appetite (it only fixes a fraction of what I'd like to change).

    If creating such a proposal today, I think it would require two new
    keywords, for example:

    ref replaces '*' for pointer modifiers
    func marks functions otherwise there is too much ambiguity and
    confusion

    The above two could be type-spec starter symbols. In addition, a '['
    could also start a type-spec (we don't want a third 'array' keyword).

    So, in the new scheme, either of these symbols can start a new typespec:

    ref
    [

    A type starting with T (built-in or user-defined type) is a regular
    type-spec.

    'func' is normally used inside the type for a function pointer: 'ref
    func'. At the start, I suppose it could be used to declare regular
    non-pointer functions.

    Examples:

    ref int p, q, r; // int *p, *q, *r;
    [N]int a, b, c; // int a[N], b[N], c[N];
    ref []int p; // int (*p)[]; I think
    [N]ref int q; // int *q[N];
    ref func(int, int)float F // float (*F)(int,int);
    ref ref[M][N]double x // pointer to pointer to array ...

    If you wanted to make it CDECL-clear, then make these tweaks:

    - use 'pointer to' as an alias to 'ref'

    - Require 'array' before [ in my examples

    - Require 'returning' after ')'

    (In this scheme, parentheses only occur around parameter lists.)

    Old and new can be mixed, but they need to be distinct sequences:

    ref func(int*, ref int)float F

    int* G(ref func()void);

    So, anywhere where a type could start in the current syntax, that can be
    old or new.
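
    For comparison, the obvious reading of those two mixed examples in
    current C syntax (taking '()' as an empty parameter list) would be:

        float (*F)(int *, int *);     // ref func(int*, ref int)float F
        int *G(void (*)(void));       // int* G(ref func()void);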





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Sun Oct 26 16:04:28 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 26/10/2025 06:25, Janis Papanagnou wrote:


    However the A68G configure script is 11000 lines; the CDECL one 31600 lines.

    (I wonder why the latter needs 20000 more lines? I guess nobody is
    curious - or they simply don't care.)

    You should be able to figure that out yourself. You may actually
    learn something useful along the way.

    Start with reading the autoconf documentation, fully, until you
    understand the goals and the mechanisms used to meet those goals.

    www.gnu.org/software/autoconf/manual/autoconf.html
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Sun Oct 26 16:07:19 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 26/10/2025 11:26, bart wrote:
    On 26/10/2025 06:25, Janis Papanagnou wrote:


    So the following are after a restart of my PC:

    Build CDECL under WSL (files were extracted before the restart):
    60/56 seconds instead 35/49 seconds for configure/make

    My demo above running both compiler and interpreter from source:
    0.31 seconds instead of 0.21 seconds

    New test of gcc compiling hello.c:
    1 second, settling down to 0.23 seconds on subsequent builds

    Get back to us when your "build + compiler" system will successfully
    build all the software that currently builds with autoconf, make and gcc.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Sun Oct 26 16:58:19 2025
    From Newsgroup: comp.lang.c

    On 26/10/2025 16:04, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 26/10/2025 06:25, Janis Papanagnou wrote:


    However the A68G configure script is 11000 lines; the CDECL one 31600 lines.
    (I wonder why the latter needs 20000 more lines? I guess nobody is
    curious - or they simply don't care.)

    You should be able to figure that out yourself. You may actually
    learn something useful along the way.


    So you don't know.

    What special requirements does CDECL have (which has a task that is at
    least an order of magnitude simpler than A68G's), that requires those 20,000 extra lines?


    Start with reading the autoconf documentation, fully, until you
    understand the goals and the mechanisms used to meet those goals.

    www.gnu.org/software/autoconf/manual/autoconf.html
    Whatever the goals are, if they are even needed, the execution is poor.
    That is even acknowledged in your link:

    "(Before each check, they print a one-line message stating what they are checking for, so the user doesn’t get too bored while waiting for the
    script to finish.)"

    That document is a classic example of making a fantastically complicated mountain out of a molehill.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Sun Oct 26 17:03:57 2025
    From Newsgroup: comp.lang.c

    On 26/10/2025 16:07, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 26/10/2025 11:26, bart wrote:
    On 26/10/2025 06:25, Janis Papanagnou wrote:


    So the following are after a restart of my PC:

    Build CDECL under WSL (files were extracted before the restart):
    60/56 seconds instead 35/49 seconds for configure/make

    My demo above running both compiler and interpreter from source:
    0.31 seconds instead of 0.21 seconds

    New test of gcc compiling hello.c:
    1 second, settling down to 0.23 seconds on subsequent builds

    Get back to us when your "build + compiler" system will successfully
    build all the software that currently builds with autoconf, make and gcc.

    So you are telling me it is IMPOSSIBLE to build a product with the specification of CDECL, entirely in portable C? You HAVE to use all
    those extra utilities, macro languages and what-not?

    Obviously, 'all the software' that currently builds with those tools
    will have been designed and developed *with* those tools; they will be essential dependencies.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Sun Oct 26 17:27:06 2025
    From Newsgroup: comp.lang.c

    On 2025-10-26, bart <bc@freeuk.com> wrote:
    On 26/10/2025 16:04, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 26/10/2025 06:25, Janis Papanagnou wrote:


    However the A68G configure script is 11000 lines; the CDECL one 31600 lines.

    (I wonder why the latter needs 20000 more lines? I guess nobody is
    curious - or they simply don't care.)

    You should be able to figure that out yourself. You may actually
    learn something useful along the way.


    So you don't know.

    What special requirements does CDECL have (which has a task that is at
    least a magnitude simpler than A68G's), that requires those 20,000 extra lines?

    I can't imagine why anyone would write cdecl (if it is written in C)
    such that it's anything but a maximally conforming ISO C program, which
    can be built like this:

    make cdecl

    without any Makefile present, in a directory in which there is just
    one file: cdecl.c.

    An empty ./configure script can be provided so that downstream package maintainers are less confused by the simplicity:

    #!/bin/sh
    echo "cdecl successfully configured; run make"

    There may be additional material for testing, of course.

    www.gnu.org/software/autoconf/manual/autoconf.html

    Whatever the goals are, if they are even needed, the execution is poor.
    That is even acknowledged in your link:

    "(Before each check, they print a one-line message stating what they are checking for, so the user doesn’t get too bored while waiting for the script to finish.)"

    That document is a classic example of making a fantastically complicated mountain out of a molehill.

    It's a pile of crap developed by (and for) imbeciles, which made a
    certain small amount of sense 30+ years ago when the Unix landscape was
    a much more fragmented mess than it is now.

    When you write a file called Makefile.am, it's like taping a piece
    of paper to your ass saying "kick me with an ugly mountain of technical
    debt which doesn't contribute a fucking thing to my actual application
    logic".
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andrey Tarasevich@noone@noone.net to comp.lang.c,comp.lang.c++ on Sun Oct 26 12:09:22 2025
    From Newsgroup: comp.lang.c

    On Wed 10/22/2025 2:39 PM, Keith Thompson wrote:
    ...

    I believe I have already posted about it here... or maybe not?

    cdecl.org reports a "syntax error" for declarations with top-level
    `const` qualifiers on function parameters:

    void foo(char *const)

    Such declarations are perfectly valid. (And adding an explicit parameter
    name does not help).
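
    For illustration, a minimal sketch (the parameter name 'p' and the
    function body are made up) of what that top-level const actually
    constrains:

        /* hypothetical example, not taken from cdecl */
        void foo(char *const p)
        {
            *p = 'x';        /* modifying the pointed-to char is fine */
            /* p = NULL; */  /* error: p itself is const              */
        }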

    The current version of cdecl.org still complains about it.
    --
    Best regards,
    Andrey

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Sun Oct 26 22:49:26 2025
    From Newsgroup: comp.lang.c

    On Sun, 26 Oct 2025 17:27:06 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-10-26, bart <bc@freeuk.com> wrote:
    On 26/10/2025 16:04, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 26/10/2025 06:25, Janis Papanagnou wrote:


    However the A68G configure script is 11000 lines; the CDECL one
    31600 lines.

    (I wonder why the latter needs 20000 more lines? I guess nobody is
    curious - or they simply don't care.)

    You should be able to figure that out yourself. You may actually
    learn something useful along the way.


    So you don't know.

    What special requirements does CDECL have (which has a task that is
    at least a magnitude simpler than A68G's), that requires those
    20,000 extra lines?

    I can't imagine why anyone would write cdecl (if it is written in C)
    such that it's anything but a maximally conforming ISO C program,
    which can be built like this:

    make cdecl

    without any Makefile present, in a directory in which there is just
    one file: cdecl.c.

    An empty ./configure script can be provided so that downstream package maintainers are less confused by the simplicity:

    #!/bin/sh
    echo "cdecl successfully configured; run make"

    There may be additional material for testing, of course.

    www.gnu.org/software/autoconf/manual/autoconf.html

    Whatever the goals are, if they are even needed, the execution is
    poor. That is even acknowledged in your link:

    "(Before each check, they print a one-line message stating what
    they are checking for, so the user doesn’t get too bored while
    waiting for the script to finish.)"

    That document is a classic example of making a fantastically
    complicated mountain out of a molehill.

    It's a pile of crap developed by (and for) imbeciles, which made a
    certain small amount of sense 30+ years ago when the Unix landscape
    was a much more fragmented mess than it is now.

    When you write a file called Makefile.am, it's like taping a piece
    of paper to your ass saying "kick me with an ugly mountain of
    technical debt which doesn't contribute a fucking thing to my actual application logic".

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Sun Oct 26 23:07:11 2025
    From Newsgroup: comp.lang.c

    On Sun, 26 Oct 2025 17:27:06 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-10-26, bart <bc@freeuk.com> wrote:
    On 26/10/2025 16:04, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 26/10/2025 06:25, Janis Papanagnou wrote:


    However the A68G configure script is 11000 lines; the CDECL one
    31600 lines.

    (I wonder why the latter needs 20000 more lines? I guess nobody is
    curious - or they simply don't care.)

    You should be able to figure that out yourself. You may actually
    learn something useful along the way.


    So you don't know.

    What special requirements does CDECL have (which has a task that is
    at least a magnitude simpler than A68G's), that requires those
    20,000 extra lines?

    I can't imagine why anyone would write cdecl (if it is written in C)
    such that it's anything but a maximally conforming ISO C program,
    which can be built like this:

    make cdecl

    without any Makefile present, in a directory in which there is just
    one file: cdecl.c.

    You are exaggerating.
    There is nothing wrong with multiple files and a small, nicely
    hand-written Makefile, especially if you expect [a small percentage of]
    your users not just to compile your code but to modify it. Who knows,
    maybe even to contribute changes to the project.

    An empty ./configure script can be provided so that downstream package maintainers are less confused by the simplicity:

    #!/bin/sh
    echo "cdecl successfully configured; run make"

    There may be additional material for testing, of course.

    www.gnu.org/software/autoconf/manual/autoconf.html

    Whatever the goals are, if they are even needed, the execution is
    poor. That is even acknowledged in your link:

    "(Before each check, they print a one-line message stating what
    they are checking for, so the user doesn’t get too bored while
    waiting for the script to finish.)"

    That document is a classic example of making a fantastically
    complicated mountain out of a molehill.

    It's a pile of crap developed by (and for) imbeciles, which made a
    certain small amount of sense 30+ years ago when the Unix landscape
    was a much more fragmented mess than it is now.

    30 years ago things already were not THAT bad.
    32-33 years ago - maybe.
    I remember a Sun workstation with no C90 compiler in 1993.
    Even if sub-C90 compilers were still around 30 years ago, the correct
    behavior on the part of devs would have been to help those compilers
    and their vendors to die ASAP, instead of helping them continue to
    make the lives of poor programmers miserable.
    In that regard autotools resemble Postel's principle - the most harmful
    idea that ever happened to networking and one of the more harmful for
    computing at large.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c,comp.lang.c++ on Sun Oct 26 14:44:49 2025
    From Newsgroup: comp.lang.c

    On 10/22/2025 2:39 PM, Keith Thompson wrote:
    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.

    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa. For example :

    $ cdecl
    Type `help' or `?' for help
    cdecl> explain const char *foo[42]
    declare foo as array 42 of pointer to const char
    cdecl> declare bar as pointer to function (void) returning int
    int (*bar)(void )

    It's also available via the web site <https://cdecl.org/>.
    I must be doing something wrong:

    int (*fp_read) (void* const, void*, size_t)

    is a syntax error. It's from one of my older experiments:

    struct device_prv_vtable {
    int (*fp_read) (void* const, void*, size_t);
    int (*fp_write) (void* const, void const*, size_t);
    };

    https://groups.google.com/g/comp.lang.c/c/-BFbjYxcBQg/m/2uRErOV6AgAJ

    https://pastebin.com/raw/f52a443b1
    (link to raw text...)

    [...]
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Sun Oct 26 14:56:56 2025
    From Newsgroup: comp.lang.c

    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most of us.

    [...]


    I'd say that it is not sufficiently usable for most of us to actually
    use it.

    Why do you say that?
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Mon Oct 27 00:34:14 2025
    From Newsgroup: comp.lang.c

    On Sun, 26 Oct 2025 14:56:56 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most of
    us.

    [...]


    I'd say that it is not sufficiently usable for most of us to
    actually use it.

    Why do you say that?


    I would guess that less than 1 per cent of C programmers ever used it
    and less than 5% of those who used it once continued to use it
    regularly.
    All numbers pulled out of thin air...

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c,comp.lang.c++ on Sun Oct 26 15:36:07 2025
    From Newsgroup: comp.lang.c

    Andrey Tarasevich <noone@noone.net> writes:
    On Wed 10/22/2025 2:39 PM, Keith Thompson wrote:
    ...

    I believe I have already posted about it here... or maybe not?

    cdecl.org reports a "syntax error" for declarations with top-level
    `const` qualifiers on function parameters:

    void foo(char *const)

    Such declarations are perfectly valid. (And adding an explicit
    parameter name does not help).

    The current version of cdecl.org still complains about it.

    You're probably using 2.5, the version most commonly packaged with Linux distributions. The cdecl.org site uses that same old version.

    This entire thread is about the newer version available at <https://github.com/paul-j-lucas/cdecl/>. It doesn't have that bug.

    $ cdecl --version
    cdecl 18.6
    Copyright (C) 2025 Paul J. Lucas
    License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it.
    There is NO WARRANTY to the extent permitted by law.
    $ cdecl explain 'void foo(char *const)'
    declare foo as function (constant pointer to character) returning void
    $ cdecl explain 'void foo(char *const foo)'
    declare foo as function (foo as constant pointer to character) returning void
    $

    (I'm not entirely pleased that the newer version expands "char" to
    "character" and, worse, "int" to "integer", but I can live with it.)
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c,comp.lang.c++ on Sun Oct 26 15:38:18 2025
    From Newsgroup: comp.lang.c

    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 10/22/2025 2:39 PM, Keith Thompson wrote:
    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.
    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa. For example :
    $ cdecl
    Type `help' or `?' for help
    cdecl> explain const char *foo[42]
    declare foo as array 42 of pointer to const char
    cdecl> declare bar as pointer to function (void) returning int
    int (*bar)(void )
    It's also available via the web site <https://cdecl.org/>.
    I must be doing something wrong:

    Yes.

    int (*fp_read) (void* const, void*, size_t)

    is a syntax error. It's from one of my older experiments:

    You're using the old 2.5 version. The newer forked version handles that declaration correctly, but you have to build it from source. cdecl.org
    uses the old version.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Sun Oct 26 15:45:34 2025
    From Newsgroup: comp.lang.c

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 26 Oct 2025 14:56:56 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most of
    us.

    [...]


    I'd say that it is not sufficiently usable for most of us to
    actually use it.

    Why do you say that?

    I would guess that less than 1 per cent of C programmers ever used it
    and less than 5% of those who used it once continued to use it
    regularly.
    All numbers pulled out of thin air...

    So it's about usefulness, not usability. You're not saying that
    it works incorrectly or that it's difficult to use (which would be
    usability issues), but that the job it performs is not useful for
    most C programmers.

    (One data point: I use it occasionally.)
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Sun Oct 26 16:07:13 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    [...]
    However the A68G configure script is 11000 lines; the CDECL one 31600 lines.

    (I wonder why the latter needs 20000 more lines? I guess nobody is
    curious - or they simply don't care. OK, let's make 100,000 and see if
    anyone complains! Is it possible this is some elaborate joke on the
    part of auto-conf to discover just how trusting and tolerant people
    can be?)

    Yes, that's pretty much it. Most of us really don't care why
    one configure script is longer than another. I've run both, and
    they work. I went off and did other things while they were running,
    so I didn't even notice how long they took. (It was a few seconds
    for each.)

    On the other hand, you apparently do care about all this -- but
    you've done nothing useful to learn about it. You merely complain
    incessantly *for years* to people who are not in a position to do
    anything about it. And when we point you to forums where you could
    ask about it, or even make some useful contribution, you ignore us.

    What most of the people you're talking to have in common is that
    we know the C language and are interested in discussing it.
    We aren't GNU autotools maintainers. We didn't write cdecl
    (all I did was announce a new version written by someone else).
    I've never written anything that uses GNU autotools. I don't know
    what you're expecting to accomplish here.

    Do you disagree with any of the above?
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Mon Oct 27 01:12:39 2025
    From Newsgroup: comp.lang.c

    On Sun, 26 Oct 2025 15:45:34 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 26 Oct 2025 14:56:56 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most
    of us.

    [...]


    I'd say that it is not sufficiently usable for most of us to
    actually use it.

    Why do you say that?

    I would guess that less than 1 per cent of C programmers ever used
    it and less than 5% of those who used it once continued to use it regularly.
    All numbers pulled out of thin air...

    So it's about usefulness, not usability. You're not saying that
    it works incorrectly or that it's difficult to use (which would be
    usability issues), but that the job it performs is not useful for
    most C programmers.


    No, it's about usability.
    I'd imagine that, to be more usable, tools like that would be better
    integrated into the programmer's text editor/IDE.


    (One data point: I use it occasionally.)


    How about your co-workers?





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Sun Oct 26 16:15:17 2025
    From Newsgroup: comp.lang.c

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 26 Oct 2025 15:45:34 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 26 Oct 2025 14:56:56 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most
    of us.

    [...]


    I'd say that it is not sufficiently usable for most of us to
    actually use it.

    Why do you say that?

    I would guess that less than 1 per cent of C programmers ever used
    it and less than 5% of those who used it once continued to use it
    regularly.
    All numbers pulled out of thin air...

    So it's about usefulness, not usability. You're not saying that
    it works incorrectly or that it's difficult to use (which would be
    usability issues), but that the job it performs is not useful for
    most C programmers.

    No, it's about usability.
    I'd imagine that, to be more usable, tools like that would be better
    integrated into the programmer's text editor/IDE.

    Personally, integration into a text editor or IDE would not be useful
    for me. Of course I accept that it would be useful for you.

    (One data point: I use it occasionally.)

    How about your co-workers?

    I don't have any at the moment.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Mon Oct 27 03:08:38 2025
    From Newsgroup: comp.lang.c

    On 26.10.2025 12:26, bart wrote:
    On 26/10/2025 06:25, Janis Papanagnou wrote:
    (This reply is not meant for bart, but rather for all interested
    folks who should not get repelled by his FUD posts.)

    On 25.10.2025 00:18, bart wrote:
    [...]

    (I remember trying to build A68G, an interpreter, on Windows, and the
    'configure' step was a major obstacle. But I was willing to isolate the
    12 C source files involved, then it was built in one second.

    I did of course try building it in Linux too, and it took about 5
    minutes that I recall, using a spinnning hard drive, mostly spent
    running through that configure script.

    (I don't know what system or system configuration the poster runs.
    I'm well aware that if you are using the Windows platform you may
    suffer from many things; but the platform choice is your decision!
    But maybe he's just misremembering; and nonetheless spreading FUD.)

    I have a quite old (~16+ years) Linux system that was already at the
    _very low performance range_ back when I bought it.
    With this old system the ./configure needs less than 10 seconds,
    and the build process with make about _half a minute_ for the whole
    a68g Genie system. - The whole procedure, from software download,
    extraction, configure/make, and start an Algol application, needs
    one minute! (Make that two minutes if you are typing v_e_r_y slowly
    or have a slow download link. Or just put the necessary commands in
    a shell file; just did that and it needed (including the download)
    less than 45 seconds, and ready to run.)


    The 5 minutes I quoted may have been for CPython. It would be for some
    Linux running under VirtualBox on a 2010 cheapest-in-the-shop PC.

    If I try A68G now, under WSL, using a 2021 second-cheapest PC but with
    SSD, I get:

    ./configure 20 seconds
    make 90 seconds

    Have you examined what WSL and Windows is adding to your numbers?

    (As I've noted several times already I'd not be surprised if your
    platform contributes to your disappointment here.)

    And you've seen my numbers. (Older PC, no SSDs, etc. - but Unix.)

    (But I also don't think that SSDs would have any significant impact
    here. - But, yes, I know you're counting "quality" in microseconds
    (while completely ignoring other more important factors), so it may
    be important for you; I acknowledge that.)


    Trying CDECL again (I've done it several times after deleting the folder):

    ./configure 35 seconds
    make 49 seconds

    However the A68G configure script is 11000 lines; the CDECL one 31600
    lines.

    (I wonder why the latter needs 20000 more lines? I guess nobody is
    curious - or they simply don't care. OK, let's make 100,000 and see if
    anyone complains! Is it possible this is some elaborate joke on the part
    of auto-conf to discover just how trusting and tolerant people can be?)

    Frankly, I don't know what features 'cdecl' actually supports.
    And, honestly, I understand that you want no overhead _for a simple
    task_ in any case. From the posts here I've got the impression that
    'cdecl' might do a bit more than you expect; no? To judge any misuse
    of resources or any unjustified complexity we'd need to know the
    intention of the tool, its feature coverage, and the platforms
    supported. If there's something to enhance, you luckily have options:
    file bug/feature requests, or (since it's open source) change the
    things that you think are "obviously wrong".



    Anyway, I then tried this new 3.10 A68G on the Fannkuch(9) benchmark:

    a68g fann.a68 5 seconds
    ./fann 3+3 seconds (via a68g --compile -O3 fann.a68)

    I then tried it under my scripting language (not statically typed):

    qq fann 0.4 seconds (qq built with my non-optimising compiler)

    Your 'qq' is an Algol 68 implementation?

    (If not then you're comparing apples to oranges!)


    'qq' takes about 0.1 seconds to build - under Windows which is
    considered slow for development. So, 1000 times faster to build, and it
    runs this program at least, 10 times faster, despite being dynamically
    typed.

    This is the vast difference between my world and yours.

    If 'qq' is some language unrelated to Algol 68 this difference tells
    nothing. (So please clarify. - Or else stop vacuous comparisons.)


    The whole procedure, from software download,
    extraction, configure/make, and start an Algol application, needs
    one minute!

    Only one minute; impressive!

    Is that meant to be ironic/sarcastic? - Remember I was replying to your FUD
    and misinformation post that purported that the Genie compile process
    would have required five minutes!

    How about this:

    c:\qx>tm mm -r \mx\mm -r qq hello
    Hello World
    TM: 0.21

    (This doesn't tell me anything. But most likely it's anyway irrelevant
    on the Algol 68 topic - or rather non-topic - that you've made up.)


    This runs my systems language /from source code/, which then runs my interpreter /from source code/ (ie. compiles into memory and runs immediately) then runs that test program.

    In 1/5th of a second (or 1/300th of a minute). This is equivalent to
    first compiling gcc from source (and all those extra utilities you seem
    to need) before using it/them to build a68g. I guess that would take a
    bit more than a minute.

    So you're again advertising your personal language and tools. - I'm not interested in non-standard language (or Windows-) tools, as you've been
    told so many times (also by others).

    The tools I'm using for my personal purposes, and those that I had been
    using for professional purposes, all served the necessary requirements.
    Yours don't.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Mon Oct 27 10:44:10 2025
    From Newsgroup: comp.lang.c

    On 26/10/2025 16:12, bart wrote:
    On 25/10/2025 16:18, David Brown wrote:
    On 25/10/2025 14:51, bart wrote:

    This is another matter. The CDECL docs talk about C and C++ type
    declarations being 'gibberish'.

    What do you feel about that, and the *need* for such a substantial
    tool to help understand or write such declarations?

    I would rather have put some effort into fixing the syntax so that
    such tools are not necessary!

    And I'd love to hear your plan for "fixing" the syntax of C - noting
    that changing the syntax of C means getting the C standards committee
    to accept your suggestions, getting at least all major C compilers to
    support them, and getting the millions of C programmers to use them.

    I have posted such proposals in the past (probably before 2010).


    No, you have not.

    What you have proposed is a different way to write types in
    declarations, in a different language. That's fine if you are making a different language. (For the record, I like some of your suggestions,
    and dislike others - my own choice for an "ideal" syntax would be
    different from both your syntax and C's.)

    I asked you if you had a plan for /fixing/ the syntax of /C/. You don't.

    As an analogy, suppose I invited you - as an architect and builder - to
    see my house, and you said you didn't like the layout of the rooms, the kitchen was too small, and you thought the cellar was pointless
    complexity. I ask you if you can give me a plan to fix it, and you
    respond by telling me your own house is nicer.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Mon Oct 27 11:22:03 2025
    From Newsgroup: comp.lang.c

    On 27/10/2025 09:44, David Brown wrote:
    On 26/10/2025 16:12, bart wrote:
    On 25/10/2025 16:18, David Brown wrote:
    On 25/10/2025 14:51, bart wrote:

    This is another matter. The CDECL docs talk about C and C++ type
    declarations being 'gibberish'.

    What do you feel about that, and the *need* for such a substantial
    tool to help understand or write such declarations?

    I would rather have put some effort into fixing the syntax so that
    such tools are not necessary!

    And I'd love to hear your plan for "fixing" the syntax of C - noting
    that changing the syntax of C means getting the C standards committee
    to accept your suggestions, getting at least all major C compilers to
    support them, and getting the millions of C programmers to use them.

    I have posted such proposals in the past (probably before 2010).


    No, you have not.

    What you have proposed is a different way to write types in
    declarations, in a different language.  That's fine if you are making a different language.  (For the record, I like some of your suggestions,
    and dislike others - my own choice for an "ideal" syntax would be
    different from both your syntax and C's.)

    I asked you if you had a plan for /fixing/ the syntax of /C/.  You don't.

    As an analogy, suppose I invited you - as an architect and builder - to
    see my house, and you said you didn't like the layout of the rooms, the kitchen was too small, and you thought the cellar was pointless complexity.  I ask you if you can give me a plan to fix it, and you
    respond by telling me your own house is nicer.

    Where did I say anything about my own house?

    I added a scheme for LTR type declarations such as you find in many
    other languages (where do you think I copied mine from?).

    I had to use 'ref' instead of '*' to avoid grammar ambiguities since in
    C, '*' can also start an expression.
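
    A minimal sketch of the kind of ambiguity meant here (the names 'a'
    and 'b' are made up): the same token sequence is either a declaration
    or an expression, depending on whether 'a' names a type.

        typedef int a;
        void f(void)
        {
            a * b;   /* declares b as "pointer to a", i.e. int *b          */
        }            /* if a were a variable, "a * b;" would be a multiply */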

    Your analogy is poor, but I would set up additional kitchen facilities
    in another room, or in an extension. (My brother did exactly this.)

    If my scheme was actually added and became popular, the old one could eventually be deprecated.

    And yes it does 'fix' it by not requiring the use of tools like CDECL
    when writing new code: type-specs are already in LTR, more English-like
    form.

    CDECL might still be needed for decoding gibberish in legacy code, or
    where you have to maintain such code in the same style.

    - my own choice for an "ideal" syntax would be
    different from both your syntax and C's.)


    It sounds like it would also be different from CDECL. Perhaps you should contact the author to tell him what he's doing wrong!

    BTW on an old compiler for my language, I had a bit of fun by
    implementing those ideas I mentioned for making the syntax CDECL-like. So
    this was legal code:

    pointer to function(int, int) returning real fnptr

    What's missing is having the variable name on the left (I added that as
    an experiment - allowing it on left OR right - in a different version!)

    However it is too long-winded and verbose; it adds too much clutter and
    is too much typing. When displaying diags, the compiler represents that
    type like this:

    ref proc(i64 $1,i64 $2)r64

    which is close to the normal syntax.
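
    For comparison, a hedged sketch of roughly the same type written as a
    plain C declaration (assuming that language's 'int' maps to int64_t
    and 'real' to double):

        #include <stdint.h>

        /* pointer to function (int, int) returning real */
        double (*fnptr)(int64_t, int64_t);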

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Mon Oct 27 12:50:54 2025
    From Newsgroup: comp.lang.c

    On 27/10/2025 02:08, Janis Papanagnou wrote:
    On 26.10.2025 12:26, bart wrote:

    The 5 minutes I quoted may have been for CPython. It would be for some
    Linux running under VirtualBox on a 2010 cheapest-in-the-shop PC.

    If I try A68G now, under WSL, using a 2021 second-cheapest PC but with
    SSD, I get:

    ./configure 20 seconds
    make 90 seconds

    Have you examined what WSL and Windows is adding to your numbers?

    It could well be that Windows' file system is less efficient than pure
    Linux (and WSL presumably has to work on top of that). But that is my platform.

    If I try a pure Linux system (RPi4 with solid-state storage, which
    normally runs at 1/3 the speed of my PC), then I get:

    ./configure 16.5 seconds
    make 137 seconds

    There are a few extra checks made in WSL configure, but not many (195
    logged lines vs 187).

    (As I've noted several times already I'd not be surprised if your
    platform contributes to your disappointment here.)

    And you've seen my numbers. (Older PC, no SSDs, etc. - but Unix.)


    Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.

    So they are utterly dependent on them. So much so that it is pretty much impossible to build this stuff on any non-UNIX environment, unless that environment is emulated. That is what happens with WSL, MSYS2, CYGWIN.


    Anyway, I then tried this new 3.10 A68G on the Fannkuch(9) benchmark:

    a68g fann.a68 5 seconds
    ./fann 3+3 seconds (via a68g --compile -O3 fann.a68)

    I then tried it under my scripting language (not statically typed):

    qq fann 0.4 seconds (qq built with my non-optimising compiler)

    Your 'qq' is an Algol 68 implementation?

    (If not then you're comparing apples to oranges!)

    You've never, ever seen benchmarks comparing one language implementation
    with another?

    'qq' implements a pure interpreter for a dynamically typed language.

    Algol68 is statically typed, which ought to give it the edge. It can be interpreted (the 5s figure) or compiled to native code (the 3s figure,
    and it takes 3s to compile this 60-line program), which here makes
    little difference.

    So for all that trouble, A68G's performance is indifferent. If you don't
    care for my language, then here some other timings:

    A68G -O3/comp 6 seconds (3s to compile + 3s runtime)
    A68G 5
    CPython 3.14: 1.2
    Lua 5.4 0.65
    qq 0.4
    (qq/opt 0.3 Optimised via C transpilation and gcc-O2)
    PyPy 3.8: 0.2
    LuaJIT: 0.12

    The 0.2/0.12 timings are from JIT-accelerated versions.



    'qq' takes about 0.1 seconds to build - under Windows which is
    considered slow for development. So, 1000 times faster to build, and it
    runs this program at least, 10 times faster, despite being dynamically
    typed.

    This is the vast difference between my world and yours.

    If 'qq' is some language unrelated to Algol 68 this difference tells
    nothing. (So please clarify. - Or else stop vacuous comparisons.)

    The task here is evaluating 'fannkuch(9)', using the same algorithm in
    each case.
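
    For readers unfamiliar with the task, here is a minimal, non-optimised
    C sketch of fannkuch(n) (an illustration only; it is not the code that
    was actually benchmarked in any of the implementations above):

        #include <stdio.h>

        /* For each permutation of 1..n (n <= 16 assumed), repeatedly
           reverse the first p[0] elements until p[0] == 1, and report
           the maximum number of such flips over all permutations. */
        static int fannkuch(int n)
        {
            int p[16], q[16], c[16], maxflips = 0;

            for (int i = 0; i < n; i++) { p[i] = i + 1; c[i] = 0; }

            for (;;) {
                /* count flips for the current permutation */
                for (int i = 0; i < n; i++) q[i] = p[i];
                int flips = 0;
                while (q[0] != 1) {
                    for (int lo = 0, hi = q[0] - 1; lo < hi; lo++, hi--) {
                        int t = q[lo]; q[lo] = q[hi]; q[hi] = t;
                    }
                    flips++;
                }
                if (flips > maxflips) maxflips = flips;

                /* advance to the next permutation (counter-based rotation) */
                for (int i = 1; ; i++) {
                    if (i >= n) return maxflips;
                    int first = p[0];
                    for (int j = 0; j < i; j++) p[j] = p[j + 1];
                    p[i] = first;
                    if (++c[i] <= i) break;
                    c[i] = 0;
                }
            }
        }

        int main(void) { printf("%d\n", fannkuch(9)); return 0; }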

    A68G is poor on this benchmark. Other interpreted solutions are faster.

    It is disappointing after taking all that effort to build.


    So you're again advertising your personal language and tools. - I'm not interested in non-standard language (or Windows-) tools, as you've been
    told so many times (also by others).

    Here's an example not related to my stuff:

    c:\cx>tim tcc lua.c
    Time: 0.120

    This builds the Lua interpreter in 1/8th of a second. Now, Tiny C
    generally produces indifferent code (ie. slow). Still, I get this result
    from my benchmark:

    Lua 5.4 0.65 (lua.exe built using Tiny C)

    It's still at least FIVE TIMES FASTER than A68G!


    The tools I'm using for my personal purposes, and those that I had been
    using for professional purposes, all served the necessary requirements. Yours don't.

    I'm just showing how astonishingly fast modern hardware can be.
    Like at least a thousand times faster than a 1970s mainframe, and yet
    people are still waiting on compilers!

    But if you're happy with the performance of your tools, then that's fine.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Mon Oct 27 12:58:22 2025
    From Newsgroup: comp.lang.c

    On 27/10/2025 12:50, bart wrote:

       Lua 5.4         0.65

    Here's an example not related to my stuff:

      c:\cx>tim tcc lua.c
      Time: 0.120

    This builds the Lua interpreter in 1/8th of a second. Now, Tiny C
    generally produces indifferent code (ie. slow). Still, I get this result from my benchmark:

       Lua 5.4         0.65        (lua.exe built using Tiny C)

    It's still at least FIVE TIMES FASTER than A68G!

    Oops! I forgot to update the timing after copying that line. The proper
    figure should be:

    Lua 5.4 1.5 seconds


    So, sorry it's only 3 times as fast as A68G! And only twice as fast as compiled A68G code, if you forget about the latter's compilation time.

    Here, also, you can build the Lua interpreter from source each time
    (adding 0.12 seconds), and it would *still* be faster than A68G.

    Not impressed? I thought not.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Mon Oct 27 14:39:17 2025
    From Newsgroup: comp.lang.c

    On 27.10.2025 13:50, bart wrote:
    On 27/10/2025 02:08, Janis Papanagnou wrote:
    On 26.10.2025 12:26, bart wrote:

    [...]

    Anyway, I then tried this new 3.10 A68G on the Fannkuch(9) benchmark:

    a68g fann.a68 5 seconds
    ./fann 3+3 seconds (via a68g --compile -O3 fann.a68)

    I then tried it under my scripting language (not statically typed):

    qq fann 0.4 seconds (qq built with my non-optimising compiler)

    Your 'qq' is an Algol 68 implementation?

    (If not then you're comparing apples to oranges!)

    You've never, ever seen benchmarks comparing one language implementation
    with another?

    First of all, in communication with you here in Usenet I've seen you
    constantly switching goal posts. - Here again.


    'qq' implements a pure interpreter for a dynamically typed language.

    (Obviously completely useless to me.)


    Algol68 is statically typed, which ought to give it the edge. It can be interpreted (the 5s figure) or compiled to native code (the 3s figure,
    and it takes 3s to compile this 60-line program), which here makes
    little difference.

    So for all that trouble, A68G's performance is indifferent. If you don't
    care for my language, then here some other timings:

    A68G -O3/comp 6 seconds (3s to compile + 3s runtime)
    A68G 5
    CPython 3.14: 1.2
    Lua 5.4 0.65
    qq 0.4
    (qq/opt 0.3 Optimised via C transpilation and gcc-O2)
    PyPy 3.8: 0.2
    LuaJIT: 0.12

    The 0.2/0.12 timings are from JIT-accelerated versions.

    You are again switching goal posts. Here even twice; once for comparing
    a68g compile times of some program, and second for comparing arbitrary
    other languages. - The topic of the sub-thread was my correction of
    your misinformation about how long it takes to create a complete Genie
    runtime from scratch: 45 seconds. And let me add that you usually don't
    do that regularly but typically maybe only once or twice a year. Even
    your imaginary "5 minutes" would be okay for that.

    Speed is not an end in itself. It must be valued in comparison with
    all the other often more relevant factors (that you seem to completely
    miss, even when explained to you).

    I know your goals are space and speed. And that's fine in principle
    (unless you're ignoring other relevant factors).

    [...]

    A68G is poor on this benchmark. Other interpreted solutions are faster.

    It is disappointing after taking all that effort to build.

    Why do you care? You're using your own languages anyway, aren't you?
    And others do what fit their needs.



    So you're again advertising your personal language and tools. - I'm not
    interested in non-standard language (or Windows-) tools, as you've been
    told so many times (also by others).

    Here's an example not related to my stuff:

    c:\cx>tim tcc lua.c
    Time: 0.120

    This builds the Lua interpreter in 1/8th of a second. Now, Tiny C
    generally produces indifferent code (ie. slow). Still, I get this result
    from my benchmark:

    Lua 5.4 0.65 (lua.exe built using Tiny C)

    It's still at least FIVE TIMES FASTER than A68G!

    So what? - I don't need a Lua system. So why should I care.

    You are the one who seems to think that the speed factor is the most
    important factor to choose a language for a project. - You are wrong
    for the general case. (But it may be right for your personal universe,
    of course.)



    The tools I'm using for my personal purposes, and those that I had been
    using for professional purposes, all served the necessary requirements.
    Yours don't.

    I'm just showing how astonishingly fast modern hardware can be.
    Like at least a thousand times faster than a 1970s mainframe, and yet
    people are still waiting on compilers!

    It has been explained to you many times already, by many people, that
    differences in compile time may not outweigh other, more relevant factors.

    If you'd take a minimum time to think about that we could spare a lot
    of posts.


    But if you're happy with the performance of your tools, then that's fine.

    Generally that depends.

    If execution performance of a binary is crucial (and critical with
    a safer language) I'd switch to something else, like C++, or "C".

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Mon Oct 27 14:45:11 2025
    From Newsgroup: comp.lang.c

    On 27.10.2025 13:58, bart wrote:
    On 27/10/2025 12:50, bart wrote:

    Lua 5.4 0.65

    Here's an example not related to my stuff:

    c:\cx>tim tcc lua.c
    Time: 0.120

    This builds the Lua interpreter in 1/8th of a second. Now, Tiny C
    generally produces indifferent code (ie. slow). Still, I get this
    result from my benchmark:

    Lua 5.4 0.65 (lua.exe built using Tiny C)

    It's still at least FIVE TIMES FASTER than A68G!

    Oops! I forgot to update the timing after copying that line. The proper figure should be:

    Lua 5.4 1.5 seconds


    And what have you gained or lost in practice by this 0.85 seconds
    delta?

    (Clearly, you're wasting your time on marginalities! And thereby
    completely missing or ignoring the more important factors.)


    So, sorry it's only 3 times as fast as A68G! And only twice as fast as compiled A68G code, if you forget about the latter's compilation time.

    Here, also, you can build the Lua interpreter from source each time
    (adding 0.12 seconds), and it would *still* be faster than A68G.

    Lua is not Algol 68. (Again comparing apples to oranges! Or just
    another try of a red herring.)


    Not impressed? I thought not.

    I'm sure non-dimensionally thinking folks may be impressed.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Mon Oct 27 14:48:51 2025
    From Newsgroup: comp.lang.c

    On 27.10.2025 14:45, Janis Papanagnou wrote:
    On 27.10.2025 13:58, bart wrote:
    [...]

    Not impressed? I thought not.

    I'm sure non-dimensionally thinking folks may be impressed.

    Should have been:

    I'm sure one-dimensionally thinking folks may be impressed.


    Janis


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Mon Oct 27 15:11:46 2025
    From Newsgroup: comp.lang.c

    On 27/10/2025 13:39, Janis Papanagnou wrote:
    On 27.10.2025 13:50, bart wrote:
    On 27/10/2025 02:08, Janis Papanagnou wrote:
    On 26.10.2025 12:26, bart wrote:

    [...]

    Anyway, I then tried this new 3.10 A68G on the Fannkuch(9) benchmark:

    a68g fann.a68 5 seconds
    ./fann 3+3 seconds (via a68g --compile -O3 fann.a68)

    I then tried it under my scripting language (not statically typed):

    qq fann 0.4 seconds (qq built with my non-optimising compiler)

    Your 'qq' is an Algol 68 implementation?

    (If not then you're comparing apples to oranges!)

    You've never, ever seen benchmarks comparing one language implementation
    with another?

    First of all, in communication with you here in Usenet I've seen you constantly switching goal posts. - Here again.


    'qq' implements a pure interpreter for a dynamically typed language.

    (Obviously completely useless to me.)


    Algol68 is statically typed, which ought to give it the edge. It can be
    interpreted (the 5s figure) or compiled to native code (the 3s figure,
    and it takes 3s to compile this 60-line program), which here makes
    little difference.

    So for all that trouble, A68G's performance is indifferent. If you don't
    care for my language, then here some other timings:

    A68G -O3/comp 6 seconds (3s to compile + 3s runtime)
    A68G 5
    CPython 3.14: 1.2
    Lua 5.4 0.65
    qq 0.4
    (qq/opt 0.3 Optimised via C transpilation and gcc-O2)
    PyPy 3.8: 0.2
    LuaJIT: 0.12

    The 0.2/0.12 timings are from JIT-accelerated versions.

    You are again switching goal posts. Here even twice; once for comparing
    a68g compile times of some program, and second for comparing arbitrary
    other languages.

    Have a look at, for example:

    https://benchmarksgame-team.pages.debian.net/benchmarksgame/performance/fannkuchredux.html

    I guess you would call that all a waste of time. To me, it is useful,
    but flawed, since different implementations are allowed.


    - The topic of the sub-thread was my correction of
    your misinformation was how long it takes to create a complete Genie
    runtime from scratch; 45 seconds.

    I gave you actual measurements from my machine.

    Speed is not an end in itself. It must be valued in comparison with
    all the other often more relevant factors (that you seem to completely
    miss, even when explained to you).

    Speed seems to be important enough that huge efforts have gone into
    creating the best optimising compilers over decades.

    Fantastically complex products like LLVM exist, which take 100 times
    longer to compile code than a naive compiler, in order to eke out the
    last bit of performance.

    Similarly, massive investment has gone into making dynamic languages
    fast, like the state-of-the-art products used in running JavaScript, or
    the numerous JIT approaches used to accelerate languages like Python and
    Ruby.

    Build-speed is taken seriously enough, and most 'serious' compilers are
    slow enough, that complex build systems exist, which use dependencies in
    order to avoid compilation as much as possible.

    Or failing that, by parallelising builds across multiple cores, possibly
    even across distributed machines.

    So, fortunately some people take this stuff more seriously than you do.

    I am also involved in this field, and my experimental work takes the
    approach of simplicity to achieve results.



    I know your goals are space and speed. And that's fine in principle
    (unless you're ignoring other relevant factors).

    LLVM is a backend project which is massively bigger, more complex and
    slower (in build speed) than my stuff, by a number of magnitudes in each
    case.

    The resulting code however, might only be a fraction of a magnitude
    faster (for example the 0.3 vs 0.4 timings above, achieved via gcc, but
    LLVM would be similar).

    And that's if you apply the optimiser, which I would only use for
    production builds, or for benchmarking. Otherwise its code is just as
    poor as mine, or worse, but it still takes longer to build stuff!

    For me the trade-offs of a big, cumbersome product don't work. I like my near-zero builds and can work more spontaneously!

    It's still at least FIVE TIMES FASTER than A68G! [2-3 TIMES FASTER]

    So what? - I don't need a Lua system. So why should I care.

    You are the one who seems to think that the speed factor is the most important factor to choose a language for a project. - You are wrong
    for the general case. (But it may be right for your personal universe,
    of course.)

    You are wrong. What language do you use most? Let's say it is C
    (although you usually post about every other language except C!).

    Then, suppose your C compiler was written in Python rather than C++ or whatever and run under CPython. What do you think would happen to your build-times?

    Now imagine further if the CPython interpreter was itself written and
    executed with CPython.

    So, the 'speed' of a language (ie. of its typical implementation, which
    also depends on the language design) does matter.

    If speed wasn't an issue then we'd all be using easy dynamic languages
    for productivity. In reality those easy languages are far too slow in
    most cases.






    The tools I'm using for my personal purposes, and those that I had been
    using for professional purposes, all served the necessary requirements.
    Yours don't.

    I'm just showing how astonishingly fast modern hardware can be.
    Like at least a thousand times faster than a 1970s mainframe, and yet
    people are still waiting on compilers!

    It has been explained to you many times already, by many people, that differences in compile time may not outweigh other, more relevant factors.

    I've also explained that I work in very frequent edit-run cycles. Then
    as those don't have a discernible build step.

    But I can use my system language, *or* C via my compiler, just like a scripting language.

    You will now find various projects that apply JIT techniques to such
    languages in an effort to provide a similar experience. (I don't need
    such techniques as my AOT compilers already work near-instantly.)



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Mon Oct 27 15:23:00 2025
    From Newsgroup: comp.lang.c

    On 27/10/2025 13:45, Janis Papanagnou wrote:
    On 27.10.2025 13:58, bart wrote:
    On 27/10/2025 12:50, bart wrote:

    Lua 5.4 0.65

    Here's an example not related to my stuff:

    c:\cx>tim tcc lua.c
    Time: 0.120

    This builds the Lua interpreter in 1/8th of a second. Now, Tiny C
    generally produces indifferent code (ie. slow). Still, I get this
    result from my benchmark:

    Lua 5.4 0.65 (lua.exe built using Tiny C)

    It's still at least FIVE TIMES FASTER than A68G!

    Oops! I forgot to update the timing after copying that line. The proper
    figure should be:

    Lua 5.4 1.5 seconds


    And what have you gained or lost in practice by this 0.85 seconds
    delta?

    (Clearly, you're wasting your time on marginalities! And thereby
    completely missing or ignoring the more important factors.)

    You're subtracting 0.65 from 1.5 instead of dividing it? That's unusual,
    but OK, let's go with that!

    0.65 seconds represents the result of using gcc-O2, and 1.5 from using tcc.

    So you're saying there's little significant difference between them
    regarding the performance of the generated code.

    That's good to know.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Mon Oct 27 17:35:16 2025
    From Newsgroup: comp.lang.c

    On 27/10/2025 12:22, bart wrote:
    On 27/10/2025 09:44, David Brown wrote:
    On 26/10/2025 16:12, bart wrote:
    On 25/10/2025 16:18, David Brown wrote:
    On 25/10/2025 14:51, bart wrote:

    This is another matter. The CDECL docs talk about C and C++ type
    declarations being 'gibberish'.

    What do you feel about that, and the *need* for such a substantial
    tool to help understand or write such declarations?

    I would rather have put some effort into fixing the syntax so that
    such tools are not necessary!

    And I'd love to hear your plan for "fixing" the syntax of C - noting
    that changing the syntax of C means getting the C standards
    committee to accept your suggestions, getting at least all major C
    compilers to support them, and getting the millions of C programmers
    to use them.

    I have posted such proposals in the past (probably before 2010).


    No, you have not.

    What you have proposed is a different way to write types in
    declarations, in a different language.  That's fine if you are making
    a different language.  (For the record, I like some of your
    suggestions, and dislike others - my own choice for an "ideal" syntax
    would be different from both your syntax and C's.)

    I asked you if you had a plan for /fixing/ the syntax of /C/.  You don't. >>
    As an analogy, suppose I invited you - as an architect and builder -
    to see my house, and you said you didn't like the layout of the rooms,
    the kitchen was too small, and you thought the cellar was pointless
    complexity.  I ask you if you can give me a plan to fix it, and you
    respond by telling me your own house is nicer.

    Where did I say anything about my own house?


    In the analogy, that would be your own language, and/or your own
    declaration syntax that has nothing to do with C - both of which you
    have harped on about repeatedly. Sorry, I thought that was obvious.

    If my scheme was actually added and became popular, the old one could eventually be deprecated.

    Is that your "plan" ?


    And yes it does 'fix' it by not requiring the use of tools like CDECL
    when writing new code: type-specs are already in LTR, more English-like form.

    Most C programmers don't need cdecl. The only people that do need it,
    either have very little knowledge and experience of C, or are faced with
    code written by sadists (unfortunately that is not as rare as it should
    be). Some others might occasionally find such a tool /useful/, but
    finding it useful is not "needing". And with your bizarre syntax as an alternative, just the same would apply.

    So you have set up a straw man, claimed to "fix" this imaginary problem,
    while actually doing nothing of the sort.

    And even if your syntax was as great as you think (IMHO it is nicer in
    some ways, worse in others - and I think most C programmers would agree
    on that while not being able to agree on which parts are nicer or
    worse), you still haven't shown the slightest concept of your claimed
    "plan" to implement it.

    - my own choice for an "ideal" syntax would be
    different from both your syntax and C's.)


    It sounds like it would also be different from CDECL. Perhaps you should contact the author to tell him what he's doing wrong!

    Yes, my ideal would be different from the output of cdecl. No, the
    author is not doing something "wrong". I live in a world where
    programming languages are used by more than one person, and those people
    can have different opinions. I can have opinions on what syntax /I/
    like, and equally accept that other people have different opinions. You
    like your choice of syntax - that's fine. I like some of it, and
    dislike other bits, but I certainly can't say /your/ preferences are
    "wrong" !


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Mon Oct 27 17:44:33 2025
    From Newsgroup: comp.lang.c

    On 27/10/2025 16:35, David Brown wrote:
    On 27/10/2025 12:22, bart wrote:

    Where did I say anything about my own house?


    In the analogy, that would be your own language, and/or your own
    declaration syntax that has nothing to do with C

    It is C syntax flattened to LTR form, with modifiers all moved to the basetype, and shared by all variables declared. Function return types go
    after the parameter list.

    However, this will clash with existing C syntax and create ambiguities.
    So I suggested using, for example, the same keyword for a leading "*"
    /as is used in Algol68/; not my language, even if I borrowed it myself
    from there.

    If my scheme was actually added and become popular, the old one could
    eventually be deprecated.

    Is that your "plan" ?

    I don't have a plan. It was an idea for allowing a modern, saner syntax
    on top of C.

    And yes it does 'fix' it by not requiring the use of tools like CDECL
    when writing new code: type-specs are already in LTR, more English-
    like form.

    Most C programmers don't need cdecl.  The only people that do need it, either have very little knowledge and experience of C, or are faced with code written by sadists (unfortunately that is not as rare as it should be).  Some others might occasionally find such a tool /useful/, but
    finding it useful is not "needing".  And with your bizarre syntax

    /My syntax/ (as in my proposal) is bizarre, but actual C type syntax isn't?!

    The latter is possibly the worst-designed feature of any programming
    language ever, certainly of any mainstream language. This is the syntax
    for a pointer to an unbounded array of function pointers that return a
    pointer to int:

    int *(*(*)[])()

    This is not bizarre?! Even somebody just reading it has to figure out which * corresponds to which 'pointer to', and where the name might go when using
    it to declare a variable.

    In the LTR syntax I suggested, it would be:

    ref[]ref func()ref int

    The variable name goes on the right. For declaring three such variables,
    it would be:

    ref[]ref func()ref int a, b, c

    Meanwhile, in C as it is, it would need to be something like this:

    int *(*(*a)[])(), *(*(*b)[])(), *(*(*c)[])()

    Or you have to use a workaround and create a named alias for the type
    (what would you call it?):

    typedef int *(*(*T)[])();

    T a, b, c;

    It's a fucking joke. And yes, I needed to use a tool to get that first
    'int *(*(*)[])()'; otherwise I could spend forever in a trial-and-error
    process of figuring out where all those brackets and asterisks go.

    THIS IS WHY such tools are necessary, because the language syntax as it
    is is not fit for purpose.
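
    (Even the typedef workaround can at least be broken into readable
    steps - a sketch only, where the intermediate names pfn, arr and T
    are just illustrative:

        typedef int *pfn();     /* function returning pointer to int     */
        typedef pfn *arr[];     /* unbounded array of pointers to such   */
        typedef arr *T;         /* pointer to that array                 */

        T a, b, c;              /* same type as int *(*(*a)[])(), etc.   */
    )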

    So you have set up a straw man, claimed to "fix" this imaginary problem, while actually doing nothing of the sort.

    So what imaginary problem does CDECL fix? There's a reason it uses the
    word 'gibberish'. (I'm not even going into stuff like 'long const long unsigned'.)


    And even if your syntax was as great as you think (IMHO it is nicer in
    some ways, worse in others

    In which ways?

    - and I think most C programmers would agree
    on that while not being able to agree on which parts are nicer or
    worse), you still haven't shown the slightest concept of your claimed
    "plan" to implement it.

    What needs to be implemented? It would take some hours to add to my C compiler. Maybe it needs tweaks to fix things I hadn't foreseen. But I
    can't see that there is much of a problem.

    Getting anyone else (who is going to have the same attitudes as yours)
    to agree to it, and starting a process to add it to the language, is the
    big obstacle.

    But technically there is little to it.

    Yes, my ideal would be different from the output of cdecl.  No, the
    author is not doing something "wrong".  I live in a world where
    programming languages are used by more than one person, and those people
    can have different opinions.

    Find me one person who doesn't think that syntax like int *(*(*)[])()
    is a complete joke.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c,comp.lang.c++ on Mon Oct 27 11:51:25 2025
    From Newsgroup: comp.lang.c

    On 10/26/2025 3:38 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 10/22/2025 2:39 PM, Keith Thompson wrote:
    This is cross-posted to comp.lang.c and comp.lang.c++.
    Consider redirecting followups as appropriate.
    cdecl, along with c++decl, is a tool that translates C or C++
    declaration syntax into English, and vice versa. For example :
    $ cdecl
    Type `help' or `?' for help
    cdecl> explain const char *foo[42]
    declare foo as array 42 of pointer to const char
    cdecl> declare bar as pointer to function (void) returning int
    int (*bar)(void )
    It's also available via the web site <https://cdecl.org/>.
    I must be doing something wrong:

    Yes.

    int (*fp_read) (void* const, void*, size_t)

    is a syntax error. It's from one of my older experiments:

    You're using the old 2.5 version. The newer forked version handles that declaration correctly, but you have to build it from source. cdecl.org
    uses the old version.


    Ahh! Thanks Keith.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Mon Oct 27 13:31:41 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    [...]
    Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.

    So they are utterly dependent on them. So much so that it is pretty
    much impossible to build this stuff on any non-UNIX environment,
    unless that environment is emulated. That is what happens with WSL,
    MSYS2, CYGWIN.
    [...]

    **Yes, you're right**.

    The GNU autotools typically work smoothly when used on Unix-like
    systems. They can be made to work nearly as smoothly under Windows
    by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
    difficult to use them on pure Windows.

    Yes, that sucks for Windows users who want to build cdecl or
    coreutils from source without using an emulation layer.

    (A large part of the reason for this is that the folks who
    implemented the GNU autotools, for ideological reasons, do not
    care about supporting Windows. I won't go into their reasoning,
    and I will not express an opinion about it here. I am simply
    explaining it. Discuss it elsewhere if you like.)

    I don't believe that anyone reading your articles is a GNU autotools maintainer, so your incessant whining about them is futile.

    The fact that I can't easily build cdecl from source on pure Windows
    does not bother me. I am not motivated to do anything about it.

    If it were a problem for me, I might take a look at the GNU autotools
    and try to think of ways to make them work with MS Windows. I would
    consider doing so if someone paid me for the work. If the existing
    maintainers were not interested in any such changes, they could be
    maintained in a separate fork.

    Now I've acknowledged your concerns, explained that nobody here is
    likely to be able to do anything to address them, and suggested
    ways you might address them yourself.

    What now, bart?
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Mon Oct 27 22:39:47 2025
    From Newsgroup: comp.lang.c

    On Mon, 27 Oct 2025 14:45:11 +0100
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:


    Lua is not Algol 68.


    Correct.
    Lua is a useful programming language.
    Algol 68 is a great source of inspiration for designers of
    programming languages. Useful programming language it is not.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Mon Oct 27 13:48:23 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 25/10/2025 16:18, David Brown wrote:
    [...]
    And I'd love to hear your plan for "fixing" the syntax of C - noting
    that changing the syntax of C means getting the C standards
    committee to accept your suggestions, getting at least all major C
    compilers to support them, and getting the millions of C programmers
    to use them.

    I have posted such proposals in the past (probably before 2010).

    I can't remember the exact details, but I think it is possible to
    superimpose LTR type syntax on top of the existing language.
    [...]

    If I understand correctly (you haven't been entirely clear about it),
    your proposal is to create a new friendlier declaration syntax for C,
    and have a new version of C support both the existing syntax and your
    new syntax. So in your hypothetical future C, one might have:

    int *const *p1;
    p1: pointer to const pointer to int;

    in the same program.

    I haven't seen a *concrete* proposal. If I thought the whole thing
    were a good idea, I'd encourage you to write one.

    In my personal opinion, C's declaration syntax, cleverly based
    on a somewhat loose "declaration follows use" principle, is a not
    entirely successful experiment that has caught on extraordinarily
    well, probably due to C's other advantages as a systems programming
    language. I would have preferred a different syntax **if** it had
    been used in the original C **instead of** the current syntax.
    I do not think C's syntax is any kind of "joke", as you do.
    It's entirely consistent and well defined.
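
    (As a small illustration of that principle: given the declaration

        int *(*fp)(void);

    the expression *(*fp)() has type int - the form of the declaration
    mirrors the way the object is eventually used.)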

    All else being equal, I would prefer a C-like language with clear
    left-to-right declaration syntax to C as it's currently defined.
    But all else is not at all equal.

    And I think that a future C that supports *both* the existing
    syntax and your new syntax would be far worse than C as it is now.
    Programmers would have to learn both. Existing code would not
    be updated. Most new code, written by experienced C programmers,
    would continue to use the old syntax. Your plan to deprecate the
    existing syntax would fail.

    And that's why it will never happen. The ISO C committee would never
    consider this kind of radical change, even if it were shoehorned
    into the syntax in a way that somehow doesn't break existing code.

    You have spent years complaining here about C's syntax (to people
    who are not in a position to do anything about it). If you had
    spent half that time learning how it works, you'd be a world-class
    expert by now, and you could teach others how to use it rather than deliberately confusing them with contrived examples.

    You can of course create your own language with a cleaner syntax.
    Apparently you've done so, but not in a way that's useful to most
    of us -- not that you're obligated to make it useful to anyone else.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Mon Oct 27 20:52:43 2025
    From Newsgroup: comp.lang.c

    On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    bart <bc@freeuk.com> writes:
    [...]
    Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.
    So they are utterly dependent on them. So much so that it is pretty
    much impossible to build this stuff on any non-UNIX environment,
    unless that environment is emulated. That is what happens with WSL,
    MSYS2, CYGWIN.
    [...]

    **Yes, you're right**.

    The GNU autotools typically work smoothly when used on Unix-like
    systems. They can be made to work nearly as smoothly under Windows
    by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very difficult to use them on pure Windows.

    The way I see the status quo in this matter is this: cross-platform
    programs originating or mainly focusing on Unix-likes require effort
    /from their actual authors/ to have a native Windows port.

    Whereas when such programs are ported to Unix-likes which their
    authors do not use, it is often possible for the users to get them
    working without needing help from the authors. There may be some
    patch to send upstream, and that's about it.

    Also, a proper Windows port isn't just a way to build on Windows.
    Nobody does that. Windows doesn't have tools out of the box.

    When you seriously commit to a Windows port, you provide a binary build
    with a proper installer.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Mon Oct 27 22:33:14 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> wrote:
    On 26/10/2025 16:12, bart wrote:
    On 25/10/2025 16:18, David Brown wrote:
    On 25/10/2025 14:51, bart wrote:

    This is another matter. The CDECL docs talk about C and C++ type
    declarations being 'gibberish'.

    What do you feel about that, and the *need* for such a substantial
    tool to help understand or write such declarations?

    I would rather have put some effort into fixing the syntax so that
    such tools are not necessary!

    And I'd love to hear your plan for "fixing" the syntax of C - noting
    that changing the syntax of C means getting the C standards committee
    to accept your suggestions, getting at least all major C compilers to
    support them, and getting the millions of C programmers to use them.

    I have posted such proposals in the past (probably before 2010).


    No, you have not.

    What you have proposed is a different way to write types in
    declarations, in a different language. That's fine if you are making a different language. (For the record, I like some of your suggestions,
    and dislike others - my own choice for an "ideal" syntax would be
    different from both your syntax and C's.)

    I asked you if you had a plan for /fixing/ the syntax of /C/. You don't.

    As an analogy, suppose I invited you - as an architect and builder - to
    see my house, and you said you didn't like the layout of the rooms, the kitchen was too small, and you thought the cellar was pointless
    complexity. I ask you if you can give me a plan to fix it, and you
    respond by telling me your own house is nicer.

    Sorry, "proof by analogy" is usually wrong. If you insist on
    analogies, the right one would be function prototypes: old style
    function declarations were inherently unsafe and it was fixed
    by adding new syntax for function declarations and definitions,
    in parallel to the old syntax. Now old style declarations are
    officially retired. Bart proposed new syntax for all
    declarations to be used in parallel with the old ones, which is
    exactly the same fix as was used to solve the unsafety of old
    function declarations.

    IMO the worst C problem is the standards process. Basically, once
    a large vendor manages to subvert the language, it gets
    legitimized and becomes part of the standard. OTOH old warts are
    preserved for a long time. Worse, new warts are introduced.

    As an example, VMTs were a big opportunity to make array access
    safer. But the version which is in the standard skilfully
    sabotages potential compiler attempts to increase safety.

    If you look carefully, there are several places in the standard
    that effectively forbid static or dynamic error checks. Once
    you add extra safety checks your implementation is
    noncompliant.

    It is likely that any standardized language is eventually
    doomed to failure. This is pretty visible with Cobol,
    but C seems to be on a similar trajectory (though at a much
    earlier stage).
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Mon Oct 27 17:30:42 2025
    From Newsgroup: comp.lang.c

    Kaz Kylheku <643-408-1753@kylheku.com> writes:
    On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    bart <bc@freeuk.com> writes:
    [...]
    Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.
    So they are utterly dependent on them. So much so that it is pretty
    much impossible to build this stuff on any non-UNIX environment,
    unless that environment is emulated. That is what happens with WSL,
    MSYS2, CYGWIN.
    [...]

    **Yes, you're right**.

    The GNU autotools typically work smoothly when used on Unix-like
    systems. They can be made to work nearly as smoothly under Windows
    by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
    difficult to use them on pure Windows.

    The way I see the status quo in this matter is this: cross-platform
    programs originating or mainly focusing on Unix-likes require effort
    /from their actual authors/ to have a native Windows port.

    Whereas when such programs are ported to Unix-like which their
    authors do not use, it is often possible for the users to get it
    working without needing help from the authors. There may be some
    patch to upstream, and that's about it.

    Also, a proper Windows port isn't just a way to build on Windows.
    Nobody does that. Windows doens't have tools out of the box.

    When you seriously commit to a Windows port, you provide a binary build
    with a proper installer.

    I agree that that's the status quo.

    I can imagine either an enhanced version of the GNU autotools,
    or a new set of tools similar to it, that could support building
    software from source on Windows. It wouldn't work on Windows out
    of the box, which doesn't provide much in the way of development
    tools, but it could detect the presence of Visual Studio and/or
    other development systems and use them automatically.

    Ideally it would be a drop-in replacement for the GNU autotools,
    so that someone could take, say, a copy of cdecl-18.5.tar.gz,
    feed it to the tool, and it would build and install cdecl.exe in
    the right place without depending on a Unix-like emulation layer.
    It would probably have to work with the configure.ac file (which
    is fed to autoconf) rather than with the generated configure script
    (which requires a Bourne-like shell).

    I don't know the details of how this could be done, and I certainly
    don't have the motivation to implement it unless someone pays
    me a lot of money to do so. And if nobody does this, I won't be
    particularly inconvenienced. It's entirely possible that there
    isn't enough demand to justify the effort.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Tue Oct 28 01:22:19 2025
    From Newsgroup: comp.lang.c

    On 27/10/2025 20:52, Kaz Kylheku wrote:
    On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    bart <bc@freeuk.com> writes:
    [...]
    Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.
    So they are utterly dependent on them. So much so that it is pretty
    much impossible to build this stuff on any non-UNIX environment,
    unless that environment is emulated. That is what happens with WSL,
    MSYS2, CYGWIN.
    [...]

    **Yes, you're right**.

    The GNU autotools typically work smoothly when used on Unix-like
    systems. They can be made to work nearly as smoothly under Windows
    by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
    difficult to use them on pure Windows.

    The way I see the status quo in this matter is this: cross-platform
    programs originating or mainly focusing on Unix-likes require effort
    /from their actual authors/ to have a native Windows port.

    Whereas when such programs are ported to Unix-like which their
    authors do not use, it is often possible for the users to get it
    working without needing help from the authors. There may be some
    patch to upstream, and that's about it.

    Also, a proper Windows port isn't just a way to build on Windows.
    Nobody does that. Windows doens't have tools out of the box.

    When you seriously commit to a Windows port, you provide a binary build
    with a proper installer.

    The problem with a binary distribution is AV software on the user's
    machine which can block it.

    That's if the machine has been unlocked (something to do with 's-mode'), otherwise it can only run software from the Microsoft Store, and won't
    even run some of MS's own programs such as the command prompt.

    To get around that AV, you either need to have some clout, be
    'white-listed', or somehow know some tricks.

    In my case, rather than supply a monolithic executable (an EXE file, which
    is either the app itself or some sort of installer), I've played around
    with several alternatives, all of which are also single monolithic
    files, and are a step or two back from full binaries:

    * Provide an ASM file in AT&T format, which can be assembled and linked locally. This requires the user to be a programmer, who will already
    know how to bypass AV for their own programs

    * Provide a headerless C source file, which these days is very low level C.

    * Provide a binary but in my private format, which seems to evade AV.
    But this needs a launcher (an 800-line true-C program built locally).

    * If the user has managed to obtain a working version of my compiler via
    the above means, then further apps could be downloaded in original
    source code (not C), but again, this will be a single amalgamated file.

    In all cases, there is one file to be assembled, compiled etc. Not
    dozens of source files, headers, makefiles etc.

    The approach used here can also work for Linux, but there only the C
    format would be viable ATM, since I don't directly support the SYS V ABI
    needed by the other formats (assuming an x64 target; the Linux system may
    run on some other target anyway).


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Tue Oct 28 03:00:50 2025
    From Newsgroup: comp.lang.c

    On 27.10.2025 21:39, Michael S wrote:

    Lua is not Algol 68.

    Correct.
    Lua is a useful programming language.

    (I have no stakes here. Never used it.)

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as a valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c on Mon Oct 27 19:11:56 2025
    From Newsgroup: comp.lang.c

    On 10/27/2025 5:30 PM, Keith Thompson wrote:
    [...]

    I can imagine either an enhanced version of the GNU autotools,
    or a new set of tools similar to it, that could support building
    software from source on Windows.

    https://vcpkg.io/en/packages?query=

    Not bad, well for me, for now. Builds like a charm, so far.

    [...]
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Tue Oct 28 03:35:04 2025
    From Newsgroup: comp.lang.c

    On 27.10.2025 16:11, bart wrote:
    On 27/10/2025 13:39, Janis Papanagnou wrote:
    On 27.10.2025 13:50, bart wrote:

    It's still at least FIVE TIMES FASTER than A68G! [2-3 TIMES FASTER]

    So what? - I don't need a Lua system. So why should I care.

    You are the one who seems to think that the speed factor is the most
    important factor to choose a language for a project. - You are wrong
    for the general case. (But it may be right for your personal universe,
    of course.)

    You are wrong.

    With which part? - With how I think you value project requirements?
    I can only derive your mindset from the myriads of posts you emitted
    over time with basically always the same content and focus.

    What language do you use most?

    What shall that prove? - The projects' requirements are generally
    independent of my personal usage of programming languages.

    Let's say it is C
    (although you usually post about every other language except C!).

    That's meaningless, but if you're interested to know...
    Mostly (including my professional work) I've probably used C++.
    But also other languages, depending on either projects' requirements
    or, where there was a choice, what appeared to be fitting best (and
    "best" sadly includes also bad languages if there's no alternative).


    Then, suppose your C compiler was written in Python rather than C++ or whatever and run under CPython. What you think would happen to your build-times?

    The build-times have rarely been an issue; never in private context,
    and in professional contexts with MLOCS of code these things have
    been effectively addressed. (I recall you were unfamiliar with make
    files, or am I misremembering?)


    Now imagine further if the CPython interpreter was itself written and executed with CPython.

    So, the 'speed' of a language (ie. of its typical implementation, which
    also depends on the language design) does matter.

    If speed wasn't an issue then we'd all be using easy dynamic languages

    Huh? - Certainly not.

    Your mindset is really amazingly biased and restricted when it comes
    to speed as an argument for or against "dynamic languages". - Speed may
    for some cases be a factor for such (poorly founded) decisions, but
    the choice of language for a project (as I tried to explain to you so many
    times) depends on many more important factors.

    I'm still unsure whether you grasped that the programming world and
    its projects are not a personal affair. (If you're speaking only about
    one's personal context, the person can do what he likes.) - If you're
    not willing to accept that or try to understand it, I can't help you.

    for productivity. In reality those easy languages are far too slow in
    most cases.

    Speed is a topic, but as I wrote you have to put it in context

    Speed is not an end in itself. It must be valued in comparison
    with all the other often more relevant factors (that you seem to
    completely miss, even when explained to you).

    [...]

    It has been explained to you many times already, by many people, that
    differences in compile time may not beat other more relevant factors.

    I've also explained that I work in very frequent edit-run cycles. Then compile-times matter.

    And I regularly acknowledge that I see that it's the primary factor in
    your working context.

    This is why many like to use scripting languages
    as those don't have a discernible build step.

    I can't tell about the "many" that you have in mind, or about their
    mindset; I'm sure you can't tell either.

    I'm using "scripting languages" for very specific types of tasks -
    and keep in mind that there's no clean definition of that! - As far
    as I can tell there are various reasons for such decisions; certainly
    that's the case in my professional and private contexts.

    Janis

    [...]


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Mon Oct 27 19:59:03 2025
    From Newsgroup: comp.lang.c

    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 10/27/2025 5:30 PM, Keith Thompson wrote:
    [...]

    I can imagine either an enhanced version of the GNU autotools,
    or a new set of tools similar to it, that could support building
    software from source on Windows.

    https://vcpkg.io/en/packages?query=

    Not bad, well for me, for now. Builds like a charm, so far.

    [...]

    Looks interesting, but I don't think it's quite what I was talking about
    (based on about 5 minutes browsing the website).

    It seems to emphasize C and C++ *libraries* rather than applications.
    And I don't see that it can be used to build an existing autotools-based package (like, say, cdecl) on Windows.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Tue Oct 28 04:10:29 2025
    From Newsgroup: comp.lang.c

    On 27.10.2025 18:44, bart wrote:
    On 27/10/2025 16:35, David Brown wrote:
    On 27/10/2025 12:22, bart wrote:


    /My syntax/ (as in my proposal) is bizarre,

    What was your proposal? - Anyway, it shouldn't be "bizarre"; it's
    under your design-control!

    but actual C type syntax isn't?!

    There were reasons for that choice. And the authors have explained
    them. - This doesn't make their choice any better, though, IMO.


    The latter is possibly the worst-designed feature of any programming
    language ever, certainly of any mainstream language. This is the syntax
    for a pointer to an unbounded array of function pointers that return a pointer to int:

    int *(*(*)[])()

    This, is not bizarre?!

    You need to know the concept behind it. IOW, learn the language and
    you will get used to it. (As with other features or "monstrosities".)

    Even somebody reading has to figure out which *
    corresponds to which 'pointer to', and where the name might go if using
    it to declare a variable.

    In the LTR syntax I suggested, it would be:

    ref[]ref func()ref int

    The variable name goes on the right. For declaring three such variables,
    it would be:

    ref[]ref func()ref int a, b, c

    Meanwhile, in C as it is, it would need to be something like this:

    int *(*(*a)[])(), *(*(*b)[])(), *(*(*c)[])()

    Or you have to use a workaround and create a named alias for the type
    (what would you call it?):

    typedef int *(*(*T)[])();

    T a, b, c;

    It's a fucking joke.

    Actually, this is a way to (somewhat) control the declaration "mess"
    so that it doesn't propagate into the rest of the source code and
    muddy each occurrence. It's also a good design principle (also when
    programming in other languages) to use names for [complex] types.

    I take that option 'typedef' as a sensible solution of this specific
    problem with C's underlying declaration decisions.

    And yes, I needed to use a tool to get that first
    'int *(*(*)[])()', otherwise I can spend forever in a trial and error
    process of figuring where all those brackets and asterisks go.

    THIS IS WHY such tools are necessary, because the language syntax as it
    is is not fit for purpose.

    I never used 'cdecl' (as far as I recall). (I recall thinking
    sometimes that such a tool could be useful.) For myself it was sufficient
    to use a 'typedef' for complex cases. Constructing such expressions
    is often easier than reading them.

    [...]

    Yes, my ideal would be different from the output of cdecl. No, the
    author is not doing something "wrong". I live in a world where
    programming languages are used by more than one person, and those
    people can have different opinions.

    Find me one person who doesn't think that syntax like int *(*(*)[])()
    is a complete joke.

    Maybe the authors (and all the enthusiastic adherents) of "C"?

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Tue Oct 28 04:23:32 2025
    From Newsgroup: comp.lang.c

    On 27.10.2025 23:33, Waldek Hebisch wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    [...]

    Sorry, "proof by analogy" is usually wrong. If you insist on
    analogies the right one would be function prototypes: old style
    function declarations where inherently unsafe and it was fixed
    by adding new syntax for function declarations and definitions,
    in parallel to old syntax. Now old style declarations are
    officially retired. Bart proposed new syntax for all
    declarations to be used in parallel with old ones, that is
    exaclty the same fix as used to solve unsafety of old
    function declarations.

    As far as I recall, Dennis Ritchie has written about the practical
    problem with "C" compilers having to support two different versions
    [of the function declaration topic] for compatibility reasons.

    Early and central "misdesigns" are not easy to address; it hurts.

    (That's one difference between the often discredited "design by
    committee" and a more casual growing design from a single person
    or interest group.)

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Tue Oct 28 04:41:15 2025
    From Newsgroup: comp.lang.c

    On 27.10.2025 21:48, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    [...]

    In my personal opinion, C's declaration syntax, cleverly based
    on a somewhat loose "declaration follows use" principle,

    IMO that was the idea, and I would object to the word "cleverly".

    When I spoke with students, newbie "C" users, about that they were
    quite confused, not only by the "same" placement as in expressions
    but also by using the same symbol for conceptually different things.

    Personally I always found it better comprehensible where languages
    use something like, say,

    REF sometype x;

    and

    y = DEREF x

    in the first place.

    If you explain things that way, people understand it much more easily, as
    far as my experience goes.

    is a not
    entirely successful experiment that has caught on extraordinarily
    well, probably due to C's other advantages as a systems programming
    language. I would have preferred a different syntax **if** it had
    been used in the original C **instead of** the current syntax. [...]

    All else being equal, I would prefer a C-like language with clear left-to-right declaration syntax to C as it's currently defined.
    But all else is not at all equal.

    Indeed.


    And I think that a future C that supports *both* the existing
    syntax and your new syntax would be far worse than C as it is now. Programmers would have to learn both. Existing code would not
    be updated. Most new code, written by experienced C programmers,
    would continue to use the old syntax. Your plan to deprecate the
    existing syntax would fail.

    Yes.

    And that's why it will never happen. The ISO C committee would never consider this kind of radical change, even if it were shoehorned
    into the syntax in a way that somehow doesn't break existing code.

    But interestingly, as far as I recall, the C committee did exactly
    that with the function declaration syntax option (back then when
    going from K&R to a standard). Sure, they might handle that now
    differently since it's their standard to change (and not the K&R
    origin [or quasi standard]).

    Janis

    [...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c on Mon Oct 27 23:45:17 2025
    From Newsgroup: comp.lang.c

    On 10/27/2025 7:59 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 10/27/2025 5:30 PM, Keith Thompson wrote:
    [...]

    I can imagine either an enhanced version of the GNU autotools,
    or a new set of tools similar to it, that could support building
    software from source on Windows.

    https://vcpkg.io/en/packages?query=

    Not bad, well for me, for now. Builds like a charm, so far.

    [...]

    Looks interesting, but I don't think it's quite what I was talking about (based on about 5 minutes browsing the website).

    So far, it can be used to cure some "headaches" over in Windows land... ;^)


    It seems to emphasize C and C++ *libraries* rather than applications.
    And I don't see that it can be used to build an existing autotools-based package (like, say, cdecl) on Windows.


    Well, if what you want is not in that list, you are shit out of luck.
    ;^) It sure seems to build packages from source. For instance, I got
    Cairo compiled and up and fully integrated into MSVC. Pretty nice.

    At least it's there. Although if it took a while to build everything,
    Bart would be pulling his hair out. But it beats manually building
    something that is not meant to be built on Windows, uggg, sometimes,
    double uggg. MinGW, Cygwin, etc... vcpkg has all of them, and uses them
    to build certain things...

    I have built Cairo on Windows, and vcpkg is just oh so easy. Well, keep
    in mind, Windows... ;^o

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c on Mon Oct 27 23:47:49 2025
    From Newsgroup: comp.lang.c

    On 10/27/2025 8:10 PM, Janis Papanagnou wrote:
    On 27.10.2025 18:44, bart wrote:
    On 27/10/2025 16:35, David Brown wrote:
    On 27/10/2025 12:22, bart wrote:


    /My syntax/ (as in my proposal) is bizarre,

    What was your proposal? - Anyway, it shouldn't be "bizarre"; it's
    under your design-control!

    but actual C type syntax isn't?!

    There were reasons for that choice. And the authors have explained
    them. - This doesn't make their choice any better, though, IMO.


    The latter is possibly the worst-designed feature of any programming
    language ever, certainly of any mainstream language. This is the syntax
    for a pointer to an unbounded array of function pointers that return a
    pointer to int:

    int *(*(*)[])()

    This, is not bizarre?!

    You need to know the concept behind it. IOW, learn the language and
    you will get used to it. (As with other features or "monstrosities".)

    Even somebody reading has to figure out which *
    corresponds to which 'pointer to', and where the name might go if using
    it to declare a variable.

    In the LTR syntax I suggested, it would be:

    ref[]ref func()ref int

    The variable name goes on the right. For declaring three such variables,
    it would be:

    ref[]ref func()ref int a, b, c

    Meanwhile, in C as it is, it would need to be something like this:

    int *(*(*a)[])(), *(*(*b)[])(), *(*(*c)[])()

    Or you have to use a workaround and create a named alias for the type
    (what would you call it?):

    typedef int *(*(*T)[])();

    T a, b, c;

    It's a fucking joke.

    Actually, this is a way to (somewhat) control the declaration "mess"
    so that it doesn't propagate into the rest of the source code and
    muddy each occurrence. It's also a good design principle (also when programming in other language) to use names for [complex] types.

    I take that option 'typedef' as a sensible solution of this specific
    problem with C's underlying declaration decisions.

    And yes, I needed to use a tool to get that first
    'int *(*(*)[])()', otherwise I can spend forever in a trial and error
    process of figuring where all those brackets and asterisks go.

    THIS IS WHY such tools are necessary, because the language syntax as it
    is is not fit for purpose.

    I never used 'cdecl' (as far as I recall). (I recall I was thinking
    sometimes that such a tool could be useful.) Myself it was sufficient
    to use a 'typedef' for complex cases. Constructing such expressions
    is often easier than reading them.

    [...]

    Yes, my ideal would be different from the output of cdecl. No, the
    author is not doing something "wrong". I live in a world where
    programming languages are used by more than one person, and those
    people can have different opinions.

    Find me one person who doesn't think that syntax like int *(*(*)[])()
    is a complete joke.

    Maybe the authors (and all the enthusiastic adherents) of "C"?

    Does extern "C" tend to use cdecl?

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Tue Oct 28 10:27:06 2025
    From Newsgroup: comp.lang.c

    On 28/10/2025 06:45, Chris M. Thomasson wrote:
    On 10/27/2025 7:59 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 10/27/2025 5:30 PM, Keith Thompson wrote:
    [...]

    I can imagine either an enhanced version of the GNU autotools,
    or a new set of tools similar to it, that could support building
    software from source on Windows.

    https://vcpkg.io/en/packages?query=

    Not bad, well for me, for now. Builds like a charm, so far.

    [...]

    Looks interesting, but I don't think it's quite what I was talking about
    (based on about 5 minutes browsing the website).

    So far, it can be used to cure some "headaches" over in Windows land... ;^)


    It seems to emphasize C and C++ *libraries* rather than applications.
    And I don't see that it can be used to build an existing autotools-based
    package (like, say, cdecl) on Windows.


    Well, if what you want is not in that list, you are shit out of
    luck. ;^) It sure seems to build packages from source. For instance, I
    got Cairo compiled and up and fully integrated into MSVC. Pretty nice.

    At least its there. Although if it took a while to build everything,
    Bart would be pulling his hair out. But, beats manually building
    something that is not meant to be built on windows, uggg, sometimes,
    double uggg. Ming, cygwin, ect... vcpkg, has all of them, and used them
    to build certain things...

    I have built Cairo on Windows, and vcpkg is just oh so easy. Well, keep
    in mind, windows... ;^o


    PART I

    In the early days of testing my C compiler, I tried to build a
    hello-type test program using GTK2.

    GTK2 (I expect GTK4 is a lot worse!) was a complex library:

    * There were some 700 include files, spread over a dozen or two nested directories

    * Compiling my test involved over 1000 nested #include statements, 550
    unique header files, a dozen include search paths, and 330,000 lines of declarations to process

    * To link the result, GTK2 comes with 50 DLL files, totalling 50MB,
    although not all will be needed. All have version names, so it's not
    just a case of supplying a particular file name; it needs to have the
    correct version suffix.

    I managed this by trial and error. The input to the compiler needs to be:

    * A set of search paths to the needed include files

    * The exact names of the needed DLL files (their location is not needed, provided the location is part of the Windows 'Path' variable)

    Note we are not building anything from source; it is the simpler task of
    using a ready-built library! The test program might be two dozen lines of C.

    So, how does it all work normally? Apparently it's done with a program
    called 'pkg-config', which expands the needed compiler and linker flags
    from metadata ('.pc' files) shipped with the library.

    However this was of no interest to me: I wanted a bare-compiler solution
    with minimal meta-dependencies.
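
    (For comparison, the conventional route uses pkg-config to expand those
    search paths and library names - a sketch only, assuming GTK2 registers
    itself under the pkg-config name 'gtk+-2.0':

    $ gcc hello.c $(pkg-config --cflags --libs gtk+-2.0) -o hello

    pkg-config prints the -I paths and -l options for GTK2 and all of its dependencies.)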

    PART II

    At a different point, I wanted to try GTK2 from my own language. Here
    the sticking point is creating bindings, in my syntax, for the 10,000 functions, types, structs, enums, and macros exported by the library.

    My C compiler has an extension which could do some of that
    automatically: it processes the library headers (via the method in Part
    I), and generates a single, flattened interface file containing all the necessary information.

    For GTK2, this was a single 25Kloc file, which I called gtk2.m. In my language, I would compile the library by having 'import gtk2' in one
    place in the program.

    However, 4000 of those 25000 lines were C macros; simple #defines could
    be converted, but the rest needed manual translation: a big task. (The
    method has worked however for smaller libraries like OpenGL and SDL2.)

    But here's the interesting thing: what if, instead of generating bindings in
    my syntax, I generated them as C?

    Then, instead of 700 headers, 1/3 million lines and dozens of folders,
    the GTK2 API could be expressed in a single 25Kloc header file.

    Why isn't such a process done anyway by the suppliers of the library?

    (SDL2 would also reduce from 80 headers of 50Kloc, to one header of 3Kloc.)

    PART III

    This was an idea I had for my language, but it never got implemented.

    At this point, a simple external library involves one or more DLL files,
    and an interface file needed by the compiler, which gives the API info.

    My idea was, why not put that interface file inside the DLL? Then you
    submit that DLL name to the compiler, and it can extract the necessary
    info, either via some special function, or an exported set of variables.

    Where the DLL structure is complex, like GTK2, there could be an
    accompanying small DLL that replaces those 700 files of headers. One
    with an obvious name, like 'gtk2.dll'.

    (In my language, such input gets specified once inside the lead module. Building any app is always 'mm prog'.)

    However, one remaining problem is finding where the DLL is located.

    Again, the idea could work in C too.
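
    (A rough sketch of how a compiler might pull such an embedded interface
    back out on Windows - everything here, including the DLL name 'gtk2.dll'
    and the exported symbol 'interface_text', is hypothetical:

        #include <windows.h>
        #include <stdio.h>

        int main(void) {
            /* hypothetical DLL that carries its own interface description */
            HMODULE lib = LoadLibraryA("gtk2.dll");
            if (!lib) { fprintf(stderr, "cannot load gtk2.dll\n"); return 1; }

            /* assumed convention: the DLL exports a NUL-terminated char array
               named "interface_text" describing its API */
            const char *iface = (const char *)GetProcAddress(lib, "interface_text");
            if (iface)
                puts(iface);   /* a compiler would parse this instead of headers */

            FreeLibrary(lib);
            return 0;
        }
    )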
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Tue Oct 28 11:16:11 2025
    From Newsgroup: comp.lang.c

    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:

    That's meaningless, but if you're interested to know...
    Mostly (including my professional work) I've probably used C++.
    But also other languages, depending on either projects' requirements
    or, where there was a choice, what appeared to be fitting best (and
    "best" sadly includes also bad languages if there's no alternative).

    Which bad languages are these?

    The build-times have rarely been an issue; never in private context,
    and in professional contexts with MLOCS of code these things have
    been effectively addressed.

    Not really. There are the workarounds and compromises that I listed: compilation is avoided as much as possible. For that you need to use independent compilation, and require dependency graphs and external
    tools to manage the process.

    That CDECL took, what, 49 seconds on my machine, to process 68Kloc of C? That's a whopping 1400 lines per second!

    If we go back 45 years to machines that were 1000 times slower, the same process would only manage 1.4 lines per second, and it would take 13
    HOURS, to create an interactive program that explained what 'int (*(*(*)))[]()' (whatever it was) might mean.

    So, yeah, build-time is a problem, even on the ultra-fast hardware we
    have now.

    Bear in mind that CDECL (like every finished product you build from
    source) is a working, debugged program. You shouldn't need to do that
    much analysis of it. And here, its performance is not critical either:
    you don't even need fast code from it.



    (I recall you were unfamiliar with make
    files, or am I misremembering?)

    I know makefiles. Never used them, never will. You might recall that I
    create my own solutions.


    Now imagine further if the CPython interpreter was inself written and
    executed with CPython.

    So, the 'speed' of a language (ie. of its typical implementation, which
    also depends on the language design) does matter.

    If speed wasn't an issue then we'd all be using easy dynamic languages

    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and capable
    as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    Speed is a topic, but as I wrote you have to put it in context

    Actually, the real topic is slowness. I'm constantly coming across
    things which I know (from half a century working with computers) are far slower than they ought to be.

    But I'm also coming across people who seem to accept that slowness as
    just how things are. They should question things more!

    I can't tell about the "many" that you have in mind, and about their
    mindset; I'm sure you either can't tell.

    I'm pretty sure there are quite a few million users of scripting languages.


    I'm using for very specific types of tasks "scripting languages" -
    and keep in mind that there's no clean definition of that!

    They have typical characteristics as I'm quite sure you're aware. For
    example:

    * Dynamic typing
    * Run from source
    * Instant edit-run cycle
    * Possible REPL
    * Uncluttered syntax
    * Higher level features
    * Extensive libraries so that you can quickly 'script' most tasks

    So, interactivity and spontaneity. But they also have cons:

    * Slower execution
    * Little compile-time error checking
    * Less control (of data structures for example)


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Tue Oct 28 14:56:39 2025
    From Newsgroup: comp.lang.c

    On Sun, 26 Oct 2025 15:45:34 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 26 Oct 2025 14:56:56 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most
    of us.

    [...]


    I'd say that it is not sufficiently usable for most of us to
    actually use it.

    Why do you say that?

    I would guess that less than 1 per cent of C programmers ever used
    it and less than 5% of those who used it once continued to use it regularly.
    All numbers pulled out of thin air...

    So it's about usefulness, not usability. You're not saying that
    it works incorrectly or that it's difficult to use (which would be
    usability issues), but that the job it performs is not useful for
    most C programmers.

    (One data point: I use it occasionally.)


    A few minutes ago I typed 'pacman -S cdecl' at my msys2 command prompt.
    Then I hit Y at the suggestion to proceed with installation. After another
    second or three I had it installed. Then I tried it and even managed to
    get a couple of declarations properly explained.
    So now I also belong to the less than 1 per cent :-)

    In the process I finally understood why the build process is non-trivial.
    It's mostly because of the interactivity.
    It's very hard to build a decent interactive program in a portable subset
    of C. Or maybe even impossible rather than hard.
    Personally, I consider the interactivity of cdecl a UI mistake.
    For me, as a user, it's a minor mistake, because I can easily ignore the interactivity and use it as a normal command line utility:
    $ cdecl -e "FILE* uu"
    declare uu as pointer to FILE

    But if I were tasked with porting cdecl to a non-unixy environment,
    then the interactivity would be the biggest obstacle.




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Tue Oct 28 13:18:56 2025
    From Newsgroup: comp.lang.c

    On 28/10/2025 12:56, Michael S wrote:
    On Sun, 26 Oct 2025 15:45:34 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 26 Oct 2025 14:56:56 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most
    of us.

    [...]


    I'd say that it is not sufficiently usable for most of us to
    actually use it.

    Why do you say that?

    I would guess that less than 1 per cent of C programmers ever used
    it and less than 5% of those who used it once continued to use it
    regularly.
    All numbers pulled out of thin air...

    So it's about usefulness, not usability. You're not saying that
    it works incorrectly or that it's difficult to use (which would be
    usability issues), but that the job it performs is not useful for
    most C programmers.

    (One data point: I use it occasionally.)


    Few minutes ago I typed 'pacman -S cdecl' at my msys2 command prompt.
    Then I hit Y at suggestion to proceed with installation. After another
    second or three I got it installed. Then tried it and even managed to
    get couple of declarations properly explained.
    So, now I also belong to less than 1 per cent :-)

    In the process I finally understand why the build process is none-trivial. It's mostly because of interactivity.
    It's very hard to build decent interactive program in portable subset
    of C. Or, may be, even impossible rather than hard.


    I don't understand. What's hard about interactive programs?

    The program below, which is in standard C and runs on both Windows and
    Linux, should give you all the interactivity needed for a program like CDECL.

    It reads a line of input, and prints something based on that. In between
    would go all the non-interactive processing that it needs to do (parse
    the line and so on).

    So what's missing that could render this task impossible?

    (Obviously, it will need a keyboard and display!)

    ----------------------------------------
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buffer[1000];

        puts("Type q to quit:");

        while (1) {
            printf("Cdecl> ");
            fflush(stdout);                                    /* make sure the prompt appears */
            if (!fgets(buffer, sizeof(buffer), stdin)) break;  /* stop on end of input */
            if (buffer[0] == 'q') break;

            printf("Input was: %s\n", buffer);
        }
    }





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Tue Oct 28 15:59:29 2025
    From Newsgroup: comp.lang.c

    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    Lua is not Algol 68.

    Correct.
    Lua is a useful programming language.

    (I have no stakes here. Never used it.)


    Its usefulness is demonstrated by its widespread use. It is mostly
    used as a scripting or automation language integrated into other software, rather than as a stand-alone language. It is particularly popular in
    the gaming industry.

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the modern
    world is demonstrated by the almost total non-existence of serious tools
    and, more importantly, real-world code in the language. It certainly
    /was/ a useful programming language, long ago, but it has not been
    seriously used outside of historical hobby interest for half a century.
    And unlike other ancient languages (like Cobol or Fortran) there is no
    code of relevance today written in the language. Original Algol was
    mostly used in research, while Algol 68 was mostly not used at all. As
    C.A.R. Hoare said, "As a tool for the reliable creation of sophisticated programs, the language was a failure".

    I'm sure there are /some/ people who have or will write real code in
    Algol 68 in modern times (the folks behind the new gcc Algol 68
    front-end want to be able to write code in the language), but it is very
    much a niche language.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Tue Oct 28 15:03:41 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 28/10/2025 12:56, Michael S wrote:
    On Sun, 26 Oct 2025 15:45:34 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 26 Oct 2025 14:56:56 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most
    of us.

    [...]


    I'd say that it is not sufficiently usable for most of us to
    actually use it.

    Why do you say that?

    I would guess that less than 1 per cent of C programmers ever used
    it and less than 5% of those who used it once continued to use it
    regularly.
    All numbers pulled out of thin air...

    So it's about usefulness, not usability. You're not saying that
    it works incorrectly or that it's difficult to use (which would be
    usability issues), but that the job it performs is not useful for
    most C programmers.

    (One data point: I use it occasionally.)


    Few minutes ago I typed 'pacman -S cdecl' at my msys2 command prompt.
    Then I hit Y at suggestion to proceed with installation. After another
    second or three I got it installed. Then tried it and even managed to
    get couple of declarations properly explained.
    So, now I also belong to less than 1 per cent :-)

    In the process I finally understand why the build process is none-trivial. >> It's mostly because of interactivity.
    It's very hard to build decent interactive program in portable subset
    of C. Or, may be, even impossible rather than hard.


    I don't understand. What's hard about interactive programs?

    The problem below, which is in standard C and runs on both Windows and >Linux, should give you all the interativity needed for a program like CDECL.

    It reads a line of input, and prints something based on that. In between >would go all the non-interactive processing that it needs to do (parse
    the line and so on).

    So what's missing that could render this task impossible?

    (Obviously, it will need a keyboard and display!)

    ----------------------------------------
    #include <stdio.h>
    #include <string.h>

    int main() {
        char buffer[1000];

        puts("Type q to quit:");

        while (1) {
            printf("Cdecl> ");
            fgets(buffer, sizeof(buffer), stdin);
            if (buffer[0] == 'q') break;

            printf("Input was: %s\n", buffer);
        }
    }

    Where is the command line editing and history support
    in this trivial application?

    Use libreadline or libedit and you'll get command line
    history and editing; compatible with the standard
    unix/linux shells (useful shells, unlike the DOS
    command line or soi-disant powershell).
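
    A minimal sketch of that suggestion, assuming GNU readline is installed
    and the program is linked with -lreadline (libedit offers a similar
    interface); it redoes the fgets loop quoted above, and cdecl itself may
    wire this up differently:

        #include <stdio.h>
        #include <stdlib.h>
        #include <readline/readline.h>
        #include <readline/history.h>

        int main(void) {
            puts("Type q to quit:");
            for (;;) {
                char *line = readline("Cdecl> ");  /* line editing for free */
                if (!line)                         /* EOF (Ctrl-D) */
                    break;
                if (*line)
                    add_history(line);             /* Up/Down recalls it later */
                if (line[0] == 'q') {
                    free(line);
                    break;
                }
                printf("Input was: %s\n", line);
                free(line);                        /* readline() mallocs each line */
            }
        }

    With that, cursor movement and history behave the same way as in the
    usual Unix shells.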
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Tue Oct 28 16:20:48 2025
    From Newsgroup: comp.lang.c

    On 27/10/2025 23:33, Waldek Hebisch wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 26/10/2025 16:12, bart wrote:
    On 25/10/2025 16:18, David Brown wrote:
    On 25/10/2025 14:51, bart wrote:

    This is another matter. The CDECL docs talk about C and C++ type
    declarations being 'gibberish'.

    What do you feel about that, and the *need* for such a substantial
    tool to help understand or write such declarations?

    I would rather have put some effort into fixing the syntax so that
    such tools are not necessary!

    And I'd love to hear your plan for "fixing" the syntax of C - noting
    that changing the syntax of C means getting the C standards committee
    to accept your suggestions, getting at least all major C compilers to
    support them, and getting the millions of C programmers to use them.

    I have posted such proposals in the past (probably before 2010).


    No, you have not.

    What you have proposed is a different way to write types in
    declarations, in a different language. That's fine if you are making a
    different language. (For the record, I like some of your suggestions,
    and dislike others - my own choice for an "ideal" syntax would be
    different from both your syntax and C's.)

    I asked you if you had a plan for /fixing/ the syntax of /C/. You don't.

    As an analogy, suppose I invited you - as an architect and builder - to
    see my house, and you said you didn't like the layout of the rooms, the
    kitchen was too small, and you thought the cellar was pointless
    complexity. I ask you if you can give me a plan to fix it, and you
    respond by telling me your own house is nicer.

    Sorry, "proof by analogy" is usually wrong.

    I agree - I wasn't trying to "prove" anything. Analogies can be
    illustrative. Bart had claimed to have a "plan to fix C", without understanding what that could mean, and I was trying to find a way to
    show him how absurd that was. (That is, his claim to have a plan to fix
    C was absurd, not necessarily his alternative syntaxes for declarations.)

    If you insist on
    analogies the right one would be function prototypes: old style
    function declarations where inherently unsafe and it was fixed
    by adding new syntax for function declarations and definitions,
    in parallel to old syntax. Now old style declarations are
    officially retired. Bart proposed new syntax for all
    declarations to be used in parallel with old ones, that is
    exaclty the same fix as used to solve unsafety of old
    function declarations.


    The function prototype syntax was an enhancement to the existing syntax,
    and could be used happily in parallel with it. And it was developed
    within the community of the C language developers and implementers (it
    was before ANSI/ISO standardisation). Bart's suggestion turns existing
    C syntax upside down, is incompatible with everything - in particular, incompatible with the philosophy and intention behind C's syntax - and
    is the product of one person whose motivation seems to be hating C and
    whining about it. So it is a very different situation.
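
    For readers who have not seen the old syntax, the two declaration forms
    being discussed look roughly like this (pre-C23 rules; the function name
    add is just for illustration, and this is a sketch, not a complete
    program):

        /* Old-style declaration: parameter types are not part of the type,
           so calls like add(1.0) or add(1, 2, 3) need not be diagnosed. */
        int add();

        /* Prototype: argument count and types are checked at every call,
           and arguments are converted to the declared parameter types. */
        int add(int a, int b);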

    IMO the worst C problem is standard process. Basically, once
    a large vendor manages to subvert the language it gets
    legitimized and part of the standard. OTOH old warts are
    preserved for long time. Worse, new warts are introduced.


    Backwards compatibility is simultaneously the best part of C, and the
    worst part of C.

    As an example, VMT-s were big opportunity to make array access
    safer. But version which is in the standard skilfully
    sabotages potential compiler attempts to increase safety.

    If you look carefuly, there is several places in the standard
    that effectively forbid static or dynamic error checks. Once
    you add extra safety checks your implementation is
    noncompilant.
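
    (As a rough illustration of the quoted point: the "VMT-s" are C99's
    variably modified types. A bound can be written into a parameter's type,
    but the parameter still adjusts to a plain pointer; the names fill and
    small below are made up for the sketch.)

        #include <stddef.h>

        /* The bound n is part of the parameter's (variably modified) type. */
        static void fill(size_t n, int a[n])
        {
            for (size_t i = 0; i < n; i++)
                a[i] = 0;
        }

        int main(void)
        {
            int small[4];
            fill(4, small);       /* fine */
            /* fill(8, small); */ /* undefined behaviour, but a[n] adjusts to
                                     int *, so no diagnostic is required */
            return 0;
        }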


    I certainly have no problem finding countless things in C that I would
    have preferred to be done differently. I don't know any serious C
    programmer who could not do the same - but they would all come up with different points (with plenty of overlap).

    It is likely that any standarized language is eventually
    doomed to failure. This is pretty visible with Cobol,
    but C seem to be on similar trajectory (but in much earlier
    stage).


    It takes a /very/ broad definition of "failure" to encompass C!

    But I think Stroustrup was spot-on with his comment "There are two kinds
    of programming languages - the ones everyone complains about, and the
    ones nobody uses".


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Tue Oct 28 16:05:47 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> writes:
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    Lua is not Algol 68.

    Correct.
    Lua is a useful programming language.

    (I have no stakes here. Never used it.)


    It's usefulness is demonstrated by its widespread use. It is mostly
    used as a scripting or automation language integrated in other software, >rather than as a stand-alone language. It is particularly popular in
    the gaming industry.

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the modern
    world is demonstrated by the almost total non-existence of serious tools >and, more importantly, real-world code in the language. It certainly
    /was/ a useful programming language, long ago, but it has not been
    seriously used outside of historical hobby interest for half a century.
    And unlike other ancient languages (like Cobol or Fortran) there is no
    code of relevance today written in the language. Original Algol was
    mostly used in research, while Algol 68 was mostly not used at all. As >C.A.R. Hoare said, "As a tool for the reliable creation of sophisticated >programs, the language was a failure".

    There is still one computer system that uses Algol as both
    the system programming language, and for applications.

    Unisys Clearpath (descendants of the Burroughs B6500).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Tue Oct 28 16:16:24 2025
    From Newsgroup: comp.lang.c

    On 28/10/2025 15:03, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 12:56, Michael S wrote:
    On Sun, 26 Oct 2025 15:45:34 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 26 Oct 2025 14:56:56 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most >>>>>>>> of us.

    [...]


    I'd say that it is not sufficiently usable for most of us to
    actually use it.

    Why do you say that?

    I would guess that less than 1 per cent of C programmers ever used
    it and less than 5% of those who used it once continued to use it
    regularly.
    All numbers pulled out of thin air...

    So it's about usefulness, not usability. You're not saying that
    it works incorrectly or that it's difficult to use (which would be
    usability issues), but that the job it performs is not useful for
    most C programmers.

    (One data point: I use it occasionally.)


    Few minutes ago I typed 'pacman -S cdecl' at my msys2 command prompt.
    Then I hit Y at suggestion to proceed with installation. After another
    second or three I got it installed. Then tried it and even managed to
    get couple of declarations properly explained.
    So, now I also belong to less than 1 per cent :-)

    In the process I finally understand why the build process is none-trivial. >>> It's mostly because of interactivity.
    It's very hard to build decent interactive program in portable subset
    of C. Or, may be, even impossible rather than hard.


    I don't understand. What's hard about interactive programs?

    The problem below, which is in standard C and runs on both Windows and
    Linux, should give you all the interativity needed for a program like CDECL. >>
    It reads a line of input, and prints something based on that. In between
    would go all the non-interactive processing that it needs to do (parse
    the line and so on).

    So what's missing that could render this task impossible?

    (Obviously, it will need a keyboard and display!)

    ----------------------------------------
    #include <stdio.h>
    #include <string.h>

    int main() {
        char buffer[1000];

        puts("Type q to quit:");

        while (1) {
            printf("Cdecl> ");
            fgets(buffer, sizeof(buffer), stdin);
            if (buffer[0] == 'q') break;

            printf("Input was: %s\n", buffer);
        }
    }

    Where is the command line editing and history support
    in this trivial application?

    On Windows, that seems to work anyway: you can edit and navigate within
    a line, or use Up/Down to retrieve previous lines.

    On WSL, only backspace works; other keys show the escape sequences.
    It's the same with the RPi.

    I'd never noticed before that Linux line input doesn't provide these fundamentals.

    Still, it will suffice for the simple task that Cdecl has to do.
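
    (A sketch, not taken from cdecl, of about the only interactivity
    detection such a tool really needs: suppress the prompt when input is
    redirected. isatty()/_isatty() are POSIX/MSVCRT extensions, not ISO C.)

        #include <stdio.h>
        #ifdef _WIN32
        #include <io.h>
        #define INPUT_IS_TTY() _isatty(_fileno(stdin))
        #else
        #include <unistd.h>
        #define INPUT_IS_TTY() isatty(fileno(stdin))
        #endif

        int main(void) {
            char buffer[1000];
            int tty = INPUT_IS_TTY();

            for (;;) {
                if (tty) {
                    printf("Cdecl> ");
                    fflush(stdout);             /* make sure the prompt shows */
                }
                if (!fgets(buffer, sizeof buffer, stdin))
                    break;                      /* EOF or read error */
                if (buffer[0] == 'q')
                    break;
                printf("Input was: %s", buffer);
            }
        }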


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Tue Oct 28 17:03:33 2025
    From Newsgroup: comp.lang.c

    On 2025-10-28, bart <bc@freeuk.com> wrote:
    On 27/10/2025 20:52, Kaz Kylheku wrote:
    On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    bart <bc@freeuk.com> writes:
    [...]
    Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.

    So they are utterly dependent on them. So much so that it is pretty
    much impossible to build this stuff on any non-UNIX environment,
    unless that environment is emulated. That is what happens with WSL,
    MSYS2, CYGWIN.
    [...]

    **Yes, you're right**.

    The GNU autotools typically work smoothly when used on Unix-like
    systems. They can be made to work nearly as smoothly under Windows
    by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
    difficult to use them on pure Windows.

    The way I see the status quo in this matter is this: cross-platform
    programs originating or mainly focusing on Unix-likes require effort
    /from their actual authors/ to have a native Windows port.

    Whereas when such programs are ported to Unix-like which their
    authors do not use, it is often possible for the users to get it
    working without needing help from the authors. There may be some
    patch to upstream, and that's about it.

    Also, a proper Windows port isn't just a way to build on Windows.
    Nobody does that. Windows doens't have tools out of the box.

    When you seriously commit to a Windows port, you provide a binary build
    with a proper installer.

    The problem with a binary distribution is AV software on the user's
    machine which can block it.

    Well, then you're fucked. (Which, anyway, is a good general adjective
    for someone still depending on Microsoft Windows.)

    The problem with source distribution is that users on Windows don't
    have any tooling. To get tooling, they would need to install binaries.

    To get around that AV, you either need to have some clout, be

    The way you do that is by developing a compelling program that helps
    users get their work done and becomes popular, so users (and their
    managers) can then convince their IT that they need it.

    In my case, rather than supply a monolithic executable (EXE file, which either the app itself, or some sort of installer), I've played around

    You are perhaps too hastily skipping over the idea of "some sort of
    installer".

    Yes, use an installer for Windows if you're doing something
    serious that is offered to the public, rather than just to a handful of
    friends or customers.

    I use NSIS myself.

    Creating an installer is a PITA, but once you close the iteration loop
    on that, you hardly have to touch it, if the structure of your
    deliverables stays the same from release to release.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Tue Oct 28 20:00:57 2025
    From Newsgroup: comp.lang.c

    On Tue, 28 Oct 2025 16:05:47 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    David Brown <david.brown@hesbynett.no> writes:
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    Lua is not Algol 68.

    Correct.
    Lua is a useful programming language.

    (I have no stakes here. Never used it.)


    It's usefulness is demonstrated by its widespread use. It is mostly
    used as a scripting or automation language integrated in other
    software, rather than as a stand-alone language. It is particularly >popular in the gaming industry.

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the modern
    world is demonstrated by the almost total non-existence of serious
    tools and, more importantly, real-world code in the language. It
    certainly /was/ a useful programming language, long ago, but it has
    not been seriously used outside of historical hobby interest for
    half a century. And unlike other ancient languages (like Cobol or
    Fortran) there is no code of relevance today written in the
    language. Original Algol was mostly used in research, while Algol
    68 was mostly not used at all. As C.A.R. Hoare said, "As a tool for
    the reliable creation of sophisticated programs, the language was a >failure".

    There is still one computer system that uses Algol as both
    the system programming language, and for applications.

    Unisys Clearpath (descendents of the Burroughs B6500).


    Is B6500 ALGOL related to A68?
    My impression from the Wikipedia article is that B5000 ALGOL was a
    proprietary offspring of A60. Wikipedia says nothing about the sources of
    B6500 ALGOL, but considering that Burroughs was an American enterprise,
    and that back at the time ALGOL 68 was widely considered in the US a failed
    European experiment, I would guess that B6500 ALGOL is derived from
    B5000 ALGOL rather than from A68.




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Tue Oct 28 18:01:00 2025
    From Newsgroup: comp.lang.c

    On 2025-10-26, Michael S <already5chosen@yahoo.com> wrote:
    I can't imagine why anyone would write cdecl (if it is written in C)
    such that it's anything but a maximally conforming ISO C program,
    which can be built like this:

    make cdecl

    without any Makefile present, in a directory in which there is just
    one file: cdecl.c.


    You are exaggerating.
    There is nothing wrong with multiple files and small nice manually

    Yes, I'm exaggerating; of course I can imagine using more than
    one file for cdecl.

    I would say that if you need two files to write cdecl, and
    one of them is not an accurate grammar file for a parser generator
    (needing to be a separate file due to being in that notation),
    which handles things like int (*p)(int (*q)(void * const x)),
    you've massively fucked it up.

    In that regard autotools resemble Postel's principle - the most harmful

    Postel's principle is awful, requiring paragraphs of apologetic
    defense to explain what Postel really meant and how it made sense in his context, so that it wasn't actually idiotic.

    Programs should be conservative in what they generate, and loudly reject
    any input that is out of spec.

    Programs that accept crap are good for business, because naive
    customers just see that those programs "work" with some input
    that other programs "don't handle".

    They are harmful to the ecosystem, creating a race for the bottom
    competition in which specs fall by the wayside while programs struggle
    to handle buggy inputs, and nobody knows what is correct any more.
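
    A tiny sketch of "loudly reject any input that is out of spec" -- nothing
    to do with cdecl in particular, just strict parsing of one decimal integer
    with strtol, refusing empty input, trailing junk and overflow instead of
    guessing (parse_int is a made-up helper name):

        #include <errno.h>
        #include <stdbool.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Accept exactly one decimal integer; anything else is an error. */
        static bool parse_int(const char *s, long *out)
        {
            char *end;
            errno = 0;
            long v = strtol(s, &end, 10);
            if (end == s || *end != '\0')   /* no digits, or trailing junk */
                return false;
            if (errno == ERANGE)            /* out of range for long */
                return false;
            *out = v;
            return true;
        }

        int main(void)
        {
            long v;
            if (!parse_int("123abc", &v))
                fprintf(stderr, "error: not a valid integer, input rejected\n");
            return 0;
        }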
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Tue Oct 28 18:28:21 2025
    From Newsgroup: comp.lang.c

    Michael S <already5chosen@yahoo.com> writes:
    On Tue, 28 Oct 2025 16:05:47 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    David Brown <david.brown@hesbynett.no> writes:
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    Lua is not Algol 68.

    Correct.
    Lua is a useful programming language.

    (I have no stakes here. Never used it.)


    It's usefulness is demonstrated by its widespread use. It is mostly
    used as a scripting or automation language integrated in other
    software, rather than as a stand-alone language. It is particularly
    popular in the gaming industry.

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the modern
    world is demonstrated by the almost total non-existence of serious
    tools and, more importantly, real-world code in the language. It
    certainly /was/ a useful programming language, long ago, but it has
    not been seriously used outside of historical hobby interest for
    half a century. And unlike other ancient languages (like Cobol or
    Fortran) there is no code of relevance today written in the
    language. Original Algol was mostly used in research, while Algol
    68 was mostly not used at all. As C.A.R. Hoare said, "As a tool for
    the reliable creation of sophisticated programs, the language was a
    failure".

    There is still one computer system that uses Algol as both
    the system programming language, and for applications.

    Unisys Clearpath (descendents of the Burroughs B6500).


    Is B6500 ALGOL related to A68?

    A-series ALGOL has many extensions.

    DCAlgol, for example, is used to create applications
    for data communications (e.g. poll-select multidrop
    applications such as teller terminals, etc).

    NEWP is an algol dialect used for systems programming
    and the operating system itself.


    ALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000098-517/86000098-517.pdf
    DCALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000841-208.pdf
    NEWP: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-21.0/86002003-409.pdf
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Tue Oct 28 20:49:30 2025
    From Newsgroup: comp.lang.c

    On Tue, 28 Oct 2025 18:28:21 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Tue, 28 Oct 2025 16:05:47 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    David Brown <david.brown@hesbynett.no> writes:
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    Lua is not Algol 68.

    Correct.
    Lua is a useful programming language.

    (I have no stakes here. Never used it.)


    It's usefulness is demonstrated by its widespread use. It is
    mostly used as a scripting or automation language integrated in
    other software, rather than as a stand-alone language. It is
    particularly popular in the gaming industry.

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the
    modern world is demonstrated by the almost total non-existence of
    serious tools and, more importantly, real-world code in the
    language. It certainly /was/ a useful programming language, long
    ago, but it has not been seriously used outside of historical
    hobby interest for half a century. And unlike other ancient
    languages (like Cobol or Fortran) there is no code of relevance
    today written in the language. Original Algol was mostly used in
    research, while Algol 68 was mostly not used at all. As C.A.R.
    Hoare said, "As a tool for the reliable creation of sophisticated
    programs, the language was a failure".

    There is still one computer system that uses Algol as both
    the system programming language, and for applications.

    Unisys Clearpath (descendents of the Burroughs B6500).


    Is B6500 ALGOL related to A68?

    A-series ALGOL has many extensions.


    I read your answer as "I don't know. If you are interested then RTFM by yourself". Is that a correct interpretation?

    DCAlgol, for example, is used to create applications
    for data communications (e.g. poll-select multidrop
    applications such as teller terminals, etc).

    NEWP is an algol dialect used for systems programming
    and the operating system itself.


    ALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000098-517/86000098-517.pdf
    DCALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000841-208.pdf
    NEWP: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-21.0/86002003-409.pdf


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Tue Oct 28 20:14:51 2025
    From Newsgroup: comp.lang.c

    On 28.10.2025 15:59, David Brown wrote:
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    [ snip Lua statements ]

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the modern
    world is demonstrated by the almost total non-existence of serious tools
    and, more importantly, real-world code in the language.

    Obviously you are mixing the terms usefulness and dissemination
    (its actual use). Please accept that I'm differentiating here.

    There are quite a few [historic] languages that were very useful but
    never spread widely. (For another prominent example cf. Simula, which
    not only invented the object-oriented principles of classes and
    inheritance and served as a paragon for quite a few later OO languages,
    but also made many more technical and design inventions, some still
    unmatched even now.) It's a pathological historic phenomenon that
    programming languages from non-US locations had inherent problems
    spreading, especially back in those days!

    Reasons for dissemination of a language are multifold; back then
    (but to a degree also today) they were often determined by political
    and marketing factors... (you can read about that in various historic
    documents and also in later ruminations about computing history)

    It certainly /was/ a useful programming language, long ago,

    ...as you seem to basically agree to here. (At least as far as you
    couple usefulness with dissemination.)

    but it has not been
    seriously used outside of historical hobby interest for half a century.

    (Make that four decades. It's been used in the mid 1980's. - Later
    I didn't follow it anymore, so I cannot tell about the 1990's.)

    (I also disagree with your characterisation "hobby interest"; for "hobbies"
    people used more easily accessible languages, not systems that in those
    days were mainly available only on mainframes.)

    If you mean its use in programming software systems, that may be true;
    I cannot claim to have an overview of who did use it. I've read
    about various applications, though; among them that it's even been
    used as a systems programming language (which astonished me).

    And unlike other ancient languages (like Cobol or Fortran) there is no
    code of relevance today written in the language.

    Probably right. (That would certainly be also my guess.)

    Original Algol was
    mostly used in research, while Algol 68 was mostly not used at all. As C.A.R. Hoare said, "As a tool for the reliable creation of sophisticated programs, the language was a failure".

    I don't know the context of his statement. If you know the language
    you might admit that reliable software is exactly one strong property
    of that language. (Per se already, but especially so if compared to
    languages like "C", the language discussed in this newsgroup, with an
    extremely large dissemination and also impact.)


    I'm sure there are /some/ people who have or will write real code in
    Algol 68 in modern times

    The point was that the language per se was and is useful. But its
    actual usage for developing software systems seems to have been of
    little and more so it's currently of no importance, without doubt.

    (the folks behind the new gcc Algol 68
    front-end want to be able to write code in the language),

    There's more than the gcc folks. (I've heard that gcc has taken some substantial code from Genie, an Algol 68 "compiler-interpreter" that
    is still maintained. BTW; I'm for example using that one, not gcc's.)

    but it is very much a niche language.

    It's _functionally_ a general purpose language, not a niche language
    (in the sense of "special purpose language"). Its dissemination makes
    it to a "niche language", that's true. It's in practice just a dead
    language. It's rarely used by anyone. But it's a very useful language.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Tue Oct 28 20:32:14 2025
    From Newsgroup: comp.lang.c

    On 28.10.2025 19:00, Michael S wrote:
    On Tue, 28 Oct 2025 16:05:47 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    There is still one computer system that uses Algol as both
    the system programming language, and for applications.

    Unisys Clearpath (descendents of the Burroughs B6500).


    Is B6500 ALGOL related to A68?

    I would have to look that up myself, but in older literature I've
    seen the all-caps "ALGOL" mostly (only?) in context of Algol 60.

    I also wouldn't expect that Burroughs is of any relevance nowadays.

    IMO it anyway doesn't invalidate the fact that Algol 68 is a dead
    language nowadays, certainly in its practical use, and otherwise
    also mostly forgotten.

    Janis

    My impression from Wikipedia article is that B5000 ALGOL was a
    proprietary off-spring of A60. Wikipedia says nothing about sources of
    B6500 ALGOL, but considering that Burroughs was an American enterprise
    and that back at time in US ALGOL 68 was widely considered as a failed European experiment I would guess that B6500 ALGOL is derived from
    B5000 ALGOL rather than from A68.





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Tue Oct 28 17:34:27 2025
    From Newsgroup: comp.lang.c

    On 2025-10-27 22:35, Janis Papanagnou wrote:
    been effectively addressed. (I recall you were unfamiliar with make
    files, or am I misremembering?)

    He's heard of make files, and many people have tried to explain them to
    him, but his comments about them indicate that he completely
    misunderstands them, to a degree that I find hard to fathom. It's
    similar to the unbelievable degree of his misunderstandings of C. You
    don't have to like C - many don't - but if you're going to use it you
    should try to understand it, and his preferences make it impossible for
    him to do so.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Tue Oct 28 14:59:05 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
    capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    I'll give this one more try.

    This kind of thing makes it difficult to communicate with you.

    In this particular instance, you wrote that "we'd **all** be using easy dynamic languages" (emphasis added).

    Janis replied "Certainly not." -- meaning that we would not **all** be
    using easy dynamic languages. Janis is correct if there are only a few
    people, or even one person, who would not use easy dynamic languages.

    In reply to that, you wrote that **you** would use such languages --
    which is fine and dandy, but it doesn't refute what Janis wrote.

    Nobody at any time claimed that *nobody* would use easy dynamic
    languages. Obviously some people do and some people don't. If speed
    were not an issue, that would still be the case, though it would likely
    change the numbers. (There are valid reasons other than speed to use non-dynamic languages.)

    Are you with me so far?

    You then wrote:

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    That's wrong. I'll assume it was an honest mistake. If you suggested
    that even one other person might also have the same desire, I don't
    think anyone would dispute it. *Of course* there are plenty of people
    who want to use dynamic languages, and there would be more if speed were
    not an issue. As you have done before, you make incorrect assumptions
    about other people's thoughts and motives.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    The "certainly not" was in response to your claim that we would ALL
    be using dynamic languages, a claim that was at best hyperbole. Nobody
    has claimed to know everyone else's mindset.

    You misunderstood what Janis wrote. It happens to all of us. You just
    need to be aware that what Janis wrote was not what you thought Janis
    wrote, and you have reacted to something nobody said -- and not for the
    first time.

    This post is likely to be a waste of time, but I'm prepared to be
    pleasantly surprised.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Tue Oct 28 22:26:29 2025
    From Newsgroup: comp.lang.c

    On 28/10/2025 17:03, Kaz Kylheku wrote:
    On 2025-10-28, bart <bc@freeuk.com> wrote:
    On 27/10/2025 20:52, Kaz Kylheku wrote:
    On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    bart <bc@freeuk.com> writes:
    [...]
    Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.

    So they are utterly dependent on them. So much so that it is pretty
    much impossible to build this stuff on any non-UNIX environment,
    unless that environment is emulated. That is what happens with WSL,
    MSYS2, CYGWIN.
    [...]

    **Yes, you're right**.

    The GNU autotools typically work smoothly when used on Unix-like
    systems. They can be made to work nearly as smoothly under Windows
    by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
    difficult to use them on pure Windows.

    The way I see the status quo in this matter is this: cross-platform
    programs originating or mainly focusing on Unix-likes require effort
    /from their actual authors/ to have a native Windows port.

    Whereas when such programs are ported to Unix-like which their
    authors do not use, it is often possible for the users to get it
    working without needing help from the authors. There may be some
    patch to upstream, and that's about it.

    Also, a proper Windows port isn't just a way to build on Windows.
    Nobody does that. Windows doens't have tools out of the box.

    When you seriously commit to a Windows port, you provide a binary build
    with a proper installer.

    The problem with a binary distribution is AV software on the user's
    machine which can block it.

    Well, then you're fucked. (Which, anyway, is a good general adjective
    for someone still depending on Microsoft Windows.)

    The problem with source distribution is that users on Windows don't
    have any tooling. To get tooling, they would need to install binaries.

    There seems little problem with installing well-known compilers.

    Windows' AV seems to use AI methods to detect viruses which can give
    false positives (there is an 'ai' tag on the report code shown). So I
    guess 'gcc' etc must pass.

    Anyway these days I don't deal with non-technical endusers. People
    should know how to build programs. Or I had assumed they did.

    Although I'd gone to a lot of trouble to ensure my single-file C
    distributions are as easy to build as hello.c (on Windows, that is the
    case), I found out something interesting:

    Some people don't actually know how to compile hello.c! They know only
    how to type 'make', and some argue that is actually simpler in that you
    only type one thing instead of two or three.

    I was rather surprised: I'd reduced the job of installing a kitchen to hammering in just one nail so that you can trivially DIY it, but some
    people don't know how to use a hammer.


    To get around that AV, you either need to have some clout, be

    The way you do that is by developing a compelling program that helps
    users get their work done and becomes popular, so users (and their
    managers) can they convince their IT that they need it.

    In my case, rather than supply a monolithic executable (EXE file, which
    either the app itself, or some sort of installer), I've played around

    You are perhaps too hastily skipping over the idea of "some sort of installer".

    Yes, use an installer for Windows if you're doing something
    serious that is offered to the public, rather than just to a handful of friends or customers.

    An installer is just an executable like any other, at least if it has a
    .EXE extension.

    If you supply a one-file, self-contained ready-to-run application, then
    it doesn't really need installing. Wherever it happens to reside after downloading, it can happily be run from there!

    The only thing that's needed is to make it so that it can be run from
    anywhere without needing to type its path. But I can't remember any apps
    I've installed recently that seem to get that right, even with a
    long-winded installer:

    It might go through a long process of perhaps several minutes. It says
    it's installed, you type (what you assume to be) its name on the command
    line, and you get: File not found. It doesn't even tell where it
    installed it, or its actual EXE name.

    So my stuff is no worse. I just don't think anybody cares anymore; most
    people use GUI apps launched via Windows menus.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Tue Oct 28 23:14:48 2025
    From Newsgroup: comp.lang.c

    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
    capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    I'll give this one more try.

    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    He (I assume) always dismisses every single one of my arguments out of hand:

    Build speed is never a problem - ever. The speed of any language
    implemention is never a concern either.

    Despite describing all the work that has gone on with making
    optimisation compilers, faster linkers, tracing-JIT interpreters etc,
    all of which suggest that some people think these are very much a
    problem, that cuts no ice at all.

    When I gave the example of my language that was 1000 times faster to
    build than A68G, and which ran that test 10 times faster than A68G, that apparently doesn't count; he doesn't care; or I'm changing the goalposts.

    So I instead gave an example of Tiny C building Lua, and running the
    test under Lua, but that was no good either:

    "Lua is not Algol68".

    It is just impossible to get through. He is never going to admit that A68G
    is rather sluggish in its performance (I guess suggesting optimised C
    might be faster than A68G won't work either, since C isn't Algol68!)

    It's rather frustrating. It's even more frustrating when you take his
    side and think I'm the one who needs convincing about anything.

    I made this remark:

    This is why many like to use scripting languages
    as those don't have a discernible build step.

    On the face of it, it is uncontroversial: they do allow rapid
    development and instant feedback, as one of their several pros. Yet, JP
    feels the need to be contrary:

    I can't tell about the "many" that you have in mind, and about their
    mindset; I'm sure you either can't tell.

    And now you have joined in, to back him up!



    In this particular instances, you wrote that "we'd **all** be using easy dynamic languages" (emphasis added).

    Janis replied "Certainly not." -- meaning that we would not **all** be
    using easy dynamic languages. Janis is correct if there are only a few people, or even one person, who would not use easy dynamic languages.

    You're still on about the logic and trying to prove that JP was right
    and I was wrong.

    JP is trying to trash everything I say and everything I do.



    In reply to that, you wrote that **you** would use such languages --
    which is fine and dandy, but it doesn't refute what Janis wrote.

    Nobody at any time claimed that *nobody* would use easy dynamic
    languages. Obviously some people do and some people don't. If speed
    were not an issue, that would still be the case, though it would likely change the numbers. (There are valid reasons other than speed to use non-dynamic languages.)

    Are you with me so far?

    You then wrote:

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    That's wrong. I'll assume it was an honest mistake. If you suggested
    that even one other person might also have the same desire, I don't
    think anyone would dispute it. *Of course* there are plenty of people
    who want to use dynamic languages, and there would be more if speed were
    not an issue. As you have done before, you make incorrect assumptions
    about other people's thoughts and motives.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    The "certainly not" was in response to your claim that we would ALL
    be using dynamic languages, a claim that was at best hyberbole. Nobody
    has claimed to know everyone else's mindset.

    You misunderstood what Janis wrote.

    I understand what he's trying to do. He despises me; he thinks the
    projects I work on are worthless. And any results I get can be
    dismissed. Meanwhile he's a 'professional', as stated many times.

    Maybe you can make up your own mind: here's a survey of mostly
    interpreted languages, all running the same Fibonacci benchmark:

    https://www.reddit.com/r/Compilers/comments/1jyl98f/fibonacci_survey/

    My products are marked with "*". You can see that the fastest purely interpreted language is one of mine.
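
    (For reference, the usual doubly recursive Fibonacci that such surveys
    run looks like this in C; the exact variant and argument used in that
    thread may differ.)

        #include <stdio.h>

        /* Naive doubly recursive Fibonacci; the exponential call tree,
           not the result, is what exercises an interpreter or compiler. */
        static long fib(int n)
        {
            return n < 2 ? n : fib(n - 1) + fib(n - 2);
        }

        int main(void)
        {
            printf("%ld\n", fib(30));   /* 832040 */
            return 0;
        }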

    JP won't accept any of this, even if you took my stuff out, because he contends that you can't compare different languages.

    This post is likely to be a waste of time, but I'm prepared to be
    pleasantly surprised.

    *I'm* waiting to be pleasantly surprised by you agreeing with me for a change.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Wed Oct 29 00:04:13 2025
    From Newsgroup: comp.lang.c

    On 2025-10-28, bart <bc@freeuk.com> wrote:
    On 28/10/2025 17:03, Kaz Kylheku wrote:
    On 2025-10-28, bart <bc@freeuk.com> wrote:
    On 27/10/2025 20:52, Kaz Kylheku wrote:
    On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    bart <bc@freeuk.com> writes:
    [...]
    Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.

    So they are utterly dependent on them. So much so that it is pretty >>>>>> much impossible to build this stuff on any non-UNIX environment,
    unless that environment is emulated. That is what happens with WSL, >>>>>> MSYS2, CYGWIN.
    [...]

    **Yes, you're right**.

    The GNU autotools typically work smoothly when used on Unix-like
    systems. They can be made to work nearly as smoothly under Windows
    by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very >>>>> difficult to use them on pure Windows.

    The way I see the status quo in this matter is this: cross-platform
    programs originating or mainly focusing on Unix-likes require effort
    /from their actual authors/ to have a native Windows port.

    Whereas when such programs are ported to Unix-like which their
    authors do not use, it is often possible for the users to get it
    working without needing help from the authors. There may be some
    patch to upstream, and that's about it.

    Also, a proper Windows port isn't just a way to build on Windows.
    Nobody does that. Windows doens't have tools out of the box.

    When you seriously commit to a Windows port, you provide a binary build >>>> with a proper installer.

    The problem with a binary distribution is AV software on the user's
    machine which can block it.

    Well, then you're fucked. (Which, anyway, is a good general adjective
    for someone still depending on Microsoft Windows.)

    The problem with source distribution is that users on Windows don't
    have any tooling. To get tooling, they would need to install binaries.

    There seems little problem with installing well-known compilers.

    If you think that is the case, then you can make an installer which
    bundles some known compiler, and your source code ... and so it goes.

    At install time, it builds the program.

    The user doesn't care how the program came to be there.

    (But even programs you build on the Windows machine itself can trigger antivirus ...)

    An installer is just an executable like any other, at least if it as a
    .EXE extension.

    Yes and, similarly, "there seems little problem with installing
    well-known" installers.

    If you supply a one-file, self-contained ready-to-run application, then
    it doesn't really need installing. Wherever it happens to reside after downloading, it can happily be run from there!

    Yes; that would be nice. Many people get PuTTY.exe that way, for
    instance.

    The only thing that's needed is to make it so that it can be run from anywhere without needing to type its path. But I can't remember any apps I've installed recently that seem to get that right, even with a
    long-winded installer:

    I did that for the Windows port of the TXR language. The installer
    updates PATH and sends the Windows message to running apps about the environment change. IIRC, existing cmd.exe instances pick that up.

    The generated uninstall.exe will take it right out.

    I've not looked at this in ages. I seem to recall there is a check
    against inserting the same PATH entry multiple times.

    Anyway, once you have that working, it works.

    In my inst.nsi, in Section "TXR" it looks like this;

    ${If} $AccountType == "Admin"
        ${EnvVarUpdate} $0 "PATH" "A" "HKLM" "$INSTDIR\txr\bin"
    ${Else}
        ${EnvVarUpdate} $0 "PATH" "A" "HKCU" "$INSTDIR\txr\bin"
    ${Endif}

    And in Section "Uninstall" the removal looks like this:

    ${If} $AccountType == "Admin"
        ${un.EnvVarUpdate} $0 "PATH" "R" "HKLM" "$INSTDIR\bin"
    ${Else}
        ${un.EnvVarUpdate} $0 "PATH" "R" "HKCU" "$INSTDIR\bin"
    ${Endif}

    Thus everything is done by this EnvVarUpdate, and its un.EnvVarUpdate

    These two environment update functions come from an "env.nsh" file that
    is not part of NSIS; it is a utility developed by multiple authors: Cal
    Turney, Amir Szekely, Diego Pedroso, Kevin English, Hendri Adriaens and
    others.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Tue Oct 28 18:48:33 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages >>>> Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
    capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!
    I'll give this one more try.
    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    No, I'm talking to you. It turns out that was a mistake.

    My post was **only** about your apparent confusion about a single
    statement, quoted above. I wasn't talking about JP personally, or about
    any of his other interactions with you. I explained in great detail
    what I was referring to. You ignored it.

    You seem unwilling or unable to focus on one thing.

    He (I assume) always dismisses every single one of my arguments out of hand:

    Build speed is never a problem - ever. The speed of any language
    implemention is never a concern either.

    And here you are putting words in other people's mouths.

    I think your goal is to argue, not to do anything that might result in
    agreement or learning.

    [...]
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Wed Oct 29 06:57:10 2025
    From Newsgroup: comp.lang.c

    On 28.10.2025 12:16, bart wrote:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:

    That's meaningless, but if you're interested to know...
    Mostly (including my professional work) I've probably used C++.
    But also other languages, depending on either projects' requirements
    or, where there was a choice, what appeared to be fitting best (and
    "best" sadly includes also bad languages if there's no alternative).

    Which bad languages are these?

    Are you hunting for a language war discussion? - I won't start it here.
    If you want, please start an appropriate topic in comp.lang.misc or so.

    [...]

    That CDECL took, what, 49 seconds on my machine, to process 68Kloc of C? That's a whopping 1400 lines per second!

    If we go back 45 years to machines that were 1000 times slower,

    We are not in those days any more. Nowadays there's much more complex
    software; some of it inherently badly designed, and in other cases
    people might not care about squeezing the last second out of a process
    (for various reasons). So this comparison isn't really contributing
    anything here.

    the same
    process would only manage 1.4 lines per second, and it would take 13
    HOURS, to create an interactive program that explained what 'int (*(*(*)))[]()' (whatever it was) might mean.

    If that's the sole task of the program the speed is not very appealing.
    But I had not looked into the code, the algorithms implemented, or the
    features it supports. Criticism may be justified, maybe not.

    But you're creating a tool just once, and then using it arbitrarily often.
    This is as a user of the tool. So why you care so much is beyond me.
    As a developer of the tool the used algorithms and the build process
    is under your control.


    So, yeah, build-time is a problem, even on the ultra-fast hardware we
    have now.

    What problem? - That you don't want to wait a few seconds? - Or that
    you cannot use that tool when time-traveling "back 45 years"?


    Bear in mind that CDECL (like every finished product you build from
    source) is a working, debugged program. You shouldn't need to do that
    much analysis of it. And here, its performance is not critical either:
    you don't even need fast code from it.

    (Erm.. - so after the rant you're now agreeing?)


    (I recall you were unfamiliar with make
    files, or am I misremembering?)

    I know makefiles. Never used them, never will.

    (Do what you prefer. - After all you're not cooperating with others in
    your personal projects, as I understood, so there's no need to "learn"
    [or just use!] things you don't like. If you think it's a good idea to
    spend time writing your own code for already-solved tasks, I'm fine with
    that.)

    You might recall that I create my own solutions.

    I don't recall, to be honest. But let's rather say; I'm not astonished
    that you have "created your own solutions". (Where other folks would
    just use an already existing, flexibly and simply usable, working and
    supported solution.) - So that's your problem not anyone else's.



    Now imagine further if the CPython interpreter was inself written and
    executed with CPython.

    So, the 'speed' of a language (ie. of its typical implementation, which
    also depends on the language design) does matter.

    If speed wasn't an issue then we'd all be using easy dynamic languages

    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and capable
    as possible, so they could be used for more tasks.

    Sure, you would. Obviously. - You've never been the widely accepted
    standard source for sensible general purpose solutions, though.


    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know that.

    You had presented your statement as if there were a compelling logical
    chain behind it. There is not.


    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    No. I neither said nor implied that. (I suggest to re-read what you
    said and what I wrote.)


    Speed is a topic, but as I wrote you have to put it in context

    Actually, the real topic is slowness. I'm constantly coming across
    things which I know (from half a century working with computers) are far slower than they ought to be.

    Fair enough.


    But I'm also coming across people who seem to accept that slowness as
    just how things are. They should question things more!

    I also think that there are quite a few people who accept inferior quality;
    how else could the success of, say, DOS, Windows, and off-the-shelf
    MS office software be explained. Or some persistent deficiencies in
    some GNU/Linux tools and runtime system. Or services presented via a Web interface.

    Speed is one factor. (I said that before.)


    I can't tell about the "many" that you have in mind, and about their
    mindset; I'm sure you either can't tell.

    I'm pretty sure there are quite a few million users of scripting languages.

    This is of no doubt, I'd say.

    What was arguable was the made-up _decision step_ concerning speed
    and "scripting languages".



    I'm using "scripting languages" for very specific types of tasks -
    and keep in mind that there's no clean definition of the term!

    They have typical characteristics as I'm quite sure you're aware. For example:

    Yes, you're right; since I mentioned them, I'm aware of them. But they
    do not amount to a clear definition of "scripting languages"; they are
    basically just hints.


    * Dynamic typing

    Marcel van der Veer is advertising Genie (his Algol 68 interpreter) as
    a system usable for scripting. (With static rather than dynamic typing.)

    * Run from source

    How about JIT, how about intermediate languages?

    * Instant edit-run cycle
    * Possible REPL

    * Uncluttered syntax

    Have a look at the syntax of (e.g.) the Unix shell "scripting language".

    * Higher level features

    Not a distinguishing characteristic of scripting languages.

    * Extensive libraries so that you can quickly 'script' most tasks

    Awk (for example) is a stand-alone scripting language.


    So, interactivity and spontaneity. But they also have cons:

    * Slower execution

    Yes, but they can be rather fast (with intermediate code (GNU Awk),
    precompiled language elements (Genie), or other means). It very much
    depends on the languages, on "both types" of languages.

    * Little compile-time error checking

    (We already commented in your above point "Dynamic typing".)

    * Less control (of data structures for example)

    Not sure what you mean (control constructs, more data structures).
    But have a look into Unix shells for control constructs, and into
    Kornshell specifically for data structures.

    It's a very inhomogeneous area. Impossible to clearly classify.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Wed Oct 29 08:06:38 2025
    From Newsgroup: comp.lang.c

    On 29.10.2025 00:14, bart wrote:
    On 28/10/2025 21:59, Keith Thompson wrote:
    [...]

    He (I assume) always dismisses every single one of my arguments out of
    hand:

    No, I'm trying to speak about various things; basically my focus
    is the facts. Not the persons involved. But there are persons with
    specific mindsets (like you) that provoke reactions: to flaws in
    your logic, misrepresentations, limited perspectives, etc.


    Build speed is never a problem - ever.

    Like here. You're making things up. - For example I clearly said;
    "Speed is a topic". But you're so pathologically focused on that
    factor that you miss the important project contexts. So I then even
    quoted that (in case you missed it):
    Speed is not an end in itself. It must be valued in comparison
    with all the other often more relevant factors (that you seem to
    completely miss, even when explained to you).

    The speed of any language implementation is never a concern either.

    Nonsense.

    [...]

    When I gave the example of my language that was 1000 times faster to
    build than A68G, and which ran that test 10 times faster than A68G, that apparently doesn't count; he doesn't care; or I'm changing the goalposts.

    Exactly. Or comparing apples and oranges. - Sadly you do all that
    regularly.

    [...]

    On the face of it, it is uncontroversial: they do allow rapid
    development and instant feedback, as one of their several pros. Yet, JP
    feels the need to be contrary:

    I can't tell about the "many" that you have in mind, or about their
    mindset; I'm sure you can't tell either.

    And now you have joined in, to back him up!

    Bart, you should take Keith's words as benevolently meant; all he's
    trying to say is that you should not always assume that we want to hurt
    you if we criticize misconceptions in your thinking, or your considering
    a topic from only one isolated perspective. If you continue to assume
    that the "worst" was meant, and only against you, you won't get anywhere.

    Keith has explained in his posts exactly what was said and meant, and
    made your discussion maneuvers explicit. (I would have been happier
    if you, Bart, would have noticed yourself what was obvious to Keith.)

    [...]
    [...]

    You misunderstood what Janis wrote.

    I understand what he's trying to do. He despises me; he thinks the

    Obviously you don't understand, and certainly also don't know what
    I think; if you would understand it you wouldn't have written this
    nonsense.

    projects I work on are worthless.

    Actually, as far as I saw your projects, methods, and targets, yes;
    they are completely worthless _for me_. (Mind the emphasis.)

    I also doubt that they are of worth in typical professional contexts;
    since they seem to lack some basic properties needed in professional
    contexts. - But that is your problem, not mine. (I just don't care.)

    [...] Meanwhile he's a 'professional', as stated many times.

    Oh, my perception is that the regulars here are *all* professionals!
    And (typically) even to a high degree. - That's, I think, one reason
    why you sometimes (often?) get headwind from the audience.

    What I'm regularly trying to tell you is that your project setups
    and results might only rarely serve the requirements in professional
    _projects_ as you find them in _professional software companies_.

    You cannot seem to accept that.

    Personally I'm no longer working professionally. (I mentioned that
    occasionally.) But I still have the expertise from my professional work
    and education, and I share my experiences with those who are interested.

    You, personally, are of no interest to me; your presumptions are thus
    wrong. (I'm interested in CS and IT topics.)

    Janis

    [...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Wed Oct 29 11:20:47 2025
    From Newsgroup: comp.lang.c

    On 29/10/2025 07:06, Janis Papanagnou wrote:
    On 29.10.2025 00:14, bart wrote:
    On 28/10/2025 21:59, Keith Thompson wrote:
    [...]

    He (I assume) always dismisses every single one of my arguments out of
    hand:

    No, I'm trying to speak about various things; basically my focus
    is the facts. Not the persons involved. But there are persons with
    specific mindsets (like you) that provoke reactions: to flaws in
    your logic, misrepresentations, limited perspectives, etc.


    Build speed is never a problem - ever.

    Like here. You're making things up. - For example I clearly said;
    "Speed is a topic". But you're so pathologically focused on that
    factor that you miss the important project contexts. So I then even
    quoted that (in case you missed it):
    Speed is not an end in itself. It must be valued in comparison
    with all the other often more relevant factors (that you seem to
    completely miss, even when explained to you).

    The speed of any language implementation is never a concern either.

    Nonsense.

    [...]

    When I gave the example of my language that was 1000 times faster to
    build than A68G, and which ran that test 10 times faster than A68G, that
    apparently doesn't count; he doesn't care; or I'm changing the goalposts.

    Exactly. Or comparing apples and oranges. - Sadly you do all that
    regularly.

    [...]

    On the face of it, it is uncontroversial: they do allow rapid
    development and instant feedback, as one of their several pros. Yet, JP
    feels the need to be contrary:

    I can't tell about the "many" that you have in mind, or about their
    mindset; I'm sure you can't tell either.

    And now you have joined in, to back him up!

    Bart, you should take Keith's words as benevolently meant; all he's
    trying to say is that you should not always assume that we want to hurt
    you if we criticize misconceptions in your thinking, or your considering
    a topic from only one isolated perspective. If you continue to assume
    that the "worst" was meant, and only against you, you won't get anywhere.

    Keith has explained in his posts exactly what was said and meant, and
    made your discussion maneuvers explicit. (I would have been happier
    if you, Bart, would have noticed yourself what was obvious to Keith.)

    [...]
    [...]

    You misunderstood what Janis wrote.

    I understand what he's trying to do. He despises me; he thinks the

    Obviously you don't understand, and certainly also don't know what
    I think; if you would understand it you wouldn't have written this
    nonsense.

    projects I work on are worthless.

    Actually, as far as I saw your projects, methods, and targets, yes;
    they are completely worthless _for me_. (Mind the emphasis.)

    I also doubt that they are of worth in typical professional contexts;
    since they seem to lack some basic properties needed in professional contexts. - But that is your problem, not mine. (I just don't care.)

    [...] Meanwhile he's a 'professional', as stated many times.

    Oh, my perception is that the regulars here are *all* professionals!
    And (typically) even to a high degree. - That's, I think, one reason
    why you sometimes (often?) get headwind from the audience.

    What I'm regularly trying to tell you is that your project setups
    and results might only rarely serve the requirements in professional _projects_ as you find them in _professional software companies_.

    Everyone these days can do their own development on their own projects.
    The standards do not need to be that high, the scale need not be that huge.

    Yet the off-the-shelf tools available are still slow and cumbersome.


    You cannot seem to accept that.

    Personally I'm no longer working professionally. (I mentioned that
    occasionally.) But I still have the expertise from my professional work
    and education, and I share my experiences with those who are interested.

    You, personally, are of no interest to me; your presumptions are thus
    wrong. (I'm interested in CS and IT topics.)

    I'm interested in developing small, human-scale and *personal* projects
    around compilers, assemblers, linkers, interpreters and emulators. I
    also devise my own languages.

    That they were small, simple, fast, and self-contained with no
    dependencies (a necessity when I started out) was incidental.

    But those aspects are now deliberately cultivated as a stand against
    big, slow, complex tools and complex ecosystems.

    I also (I seem to be unique in this regard) understand the vast
    difference between building a WIP project from source during
    development, which may be done hundreds of times a day, and an end user
    building a finished product from source code, just once.


    And yet, most projects that you build from source are just a dump
    of the developer's source tree. No effort is put into making them
    streamlined, with few points of failure.

    So I am looking at that. And also at the problems of working with large
    libraries. I posted elsewhere about this: WHY isn't the provided API
    for a library supplied as one compact monolithic header instead of
    dozens or hundreds of separate headers? What possible benefit is that to
    the /user/ of the library?

    In short, I'm doing a lot of experimental work in finding tidy,
    efficient solutions to building personal software, ones that are mainly
    OS-agnostic too.

    Meanwhile everybody else is striving to do the exact opposite! And in
    this newsgroup, continuously shouting down my work and my views.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Wed Oct 29 14:17:14 2025
    From Newsgroup: comp.lang.c

    On Wed, 29 Oct 2025 06:57:10 +0100
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    On 28.10.2025 12:16, bart wrote:

    * Less control (of data structures for example)

    Not sure what you mean (control constructs, more data structures).
    But have a look into Unix shells for control constructs, and into
    Kornshell specifically for data structures.

    It's a very inhomogeneous area. Impossible to clearly classify.

    Janis


    Less control of data structures means less control of data structures.
    In some (not all) non-scripting languages we have either full control of
    the layout of records (Ada), or at least
    non-full-but-good-enough-in-practice-if-one-knows-what-he-is-doing
    control (C).
    In scripting languages the same effect often has to be achieved by
    coding a binary parser in an imperative manner. The imperative style in
    this case is less convenient and more error-prone than the declarative
    style available in Ada and C.
    However, there are many non-scripting languages, including a few of the
    most popular (Java, C#), that in this regard are no better than your
    typical scripting language.
    So maybe a better division here would be not "dynamic, scripting vs
    statically-typed, non-scripting", but "system-oriented languages vs
    application-oriented languages".
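
    (A minimal C sketch of the declarative style meant here: the record
    layout is stated once as a struct, and the raw bytes are then
    interpreted through it. The field names and the 4-byte format are
    invented for illustration; real code would also have to account for
    padding and byte order.)

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Hypothetical 4-byte wire record; the layout is declared once,
           not parsed field by field in imperative code. */
        struct wire_header {
            uint8_t  version;
            uint8_t  flags;
            uint16_t length;    /* this arrangement normally has no padding */
        };

        int main(void) {
            const unsigned char raw[4] = { 1, 0x80, 0x10, 0x00 };
            struct wire_header h;
            memcpy(&h, raw, sizeof raw);   /* reinterpret bytes via the layout */
            printf("version=%u flags=%#x length=%u\n",
                   (unsigned)h.version, (unsigned)h.flags, (unsigned)h.length);
            return 0;
        }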


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Wed Oct 29 14:40:46 2025
    From Newsgroup: comp.lang.c

    On 29/10/2025 05:57, Janis Papanagnou wrote:
    On 28.10.2025 12:16, bart wrote:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:

    That's meaningless, but if you're interested to know...
    Mostly (including my professional work) I've probably used C++.
    But also other languages, depending on either projects' requirements
    or, where there was a choice, what appeared to be fitting best (and
    "best" sadly includes also bad languages if there's no alternative).

    Which bad languages are these?

    Are you hunting for a language war discussion? - I won't start it here.
    If you want, please start an appropriate topic in comp.lang.misc or so.

    I'm just looking for /anything/ you don't like! Since you seem to be remarkably uncritical of everything - except all the stuff I do.



    [...]

    That CDECL took, what, 49 seconds on my machine, to process 68Kloc of C?
    That's a whopping 1400 lines per second!

    If we go back 45 years to machines that were 1000 times slower,

    We are not in these days any more.

    The point of the comparison to 1000-times slower hardware is to
    highlight how remarkably slow some modern toolsets are.

    Nowadays there's much more complex
    software; some inherently badly designed software, and in other cases
    they might not care about squeezing the last second out of a process
    (for various reasons). So this comparison isn't really contributing
    anything here.

    Software might be bigger, but that is why you use LPS (lines-per-second)
    figures rather than overall build-time.

    However, I also picked on this task since it wouldn't have changed
    significantly over those decades.



    So, yeah, build-time is a problem, even on the ultra-fast hardware we
    have now.

    What problem? - That you don't want to wait a few seconds?

    You KNOW compile- and build-times can be a serious bottleneck, and
    people are looking into ways to improve that, other than throwing extra hardware resources at it.

    Either you haven't experienced that, or you are remarkably tolerant and patient.

    The actual problem I picked up on is that the build-time was out of
    proportion to the scale of the task. In this case of a one-off build, it
    is not that consequential. But it suggests something is wrong.

    On current hardware we must surely be able to do better than 1-2K lines
    per second, even if optimising. And I know we can because some products,
    not just mine, can manage 500-1000 times faster.


    I know makefiles. Never used them, never will.

    (Do what you prefer. - After all you're not cooperating with others in
    your personal projects, as I understood, so there's no need to "learn"
    [or just use!] things you don't like. If you think it's a good idea to
    spend time in writing own code for already solved tasks, I'm fine with
    that.)

    You might recall that I create my own solutions.

    I don't recall, to be honest. But let's rather say; I'm not astonished
    that you have "created your own solutions". (Where other folks would
    just use an already existing, flexibly and simply usable, working and supported solution.) - So that's your problem not anyone else's.

    Because existing solutions DIDN'T EXIST in a practical form (remember I
    worked with 8-bit computers), or they were hopelessly slow and
    complicated on restricted hardware.

    I don't need a linker, I don't need a makefile, I don't need lists of dependencies between modules, I don't need independent compilation, I
    don't use object files.

    The generated makefile for the 49-module CDECL project is 2000 lines of gobbledygook; that's not really selling it to me!

    If *I* had a 49-module C project, the build info I'd supply you would basically be that list of files, plus the source files.

    With my language, you'd need exactly two files: a self-contained
    compiler, and a self-contained source file amalgamation. For a 600KB
    binary, it might take as much as 0.2 seconds to build.

    I consider that a more satisfactory solution than writing 2000 lines of
    garbage. YMMV.



    I also think that there are not a few people who accept inferior quality;
    how else could the success of, say, DOS, Windows, and off-the-shelf
    MS Office software be explained?

    MS products are fairly solid. They are superb at backwards compatibility
    and at compatibility across machines in general. That's why Windows apps
    can be supplied as binaries that will work on any Windows machine.

    However they tend to be absolutely huge, complicated and slow, even more
    so than any Linux tools. (It once took 90 minutes to install VS.
    Starting it - usually inadvertently due to file associations - took 90 seconds.)

    * Dynamic typing

    Marcel van der Veer is advertising Genie (his Algol 68 interpreter) as
    a system usable for scripting. (With static rather than dynamic typing.)

    This product is unusual, but then it's not clear where Algol 68 lies.
    It's not really a static language like C, Rust, Zig, Go, Java ... but
    it's also not as high-level as ones like Haskell or OCaml, which are
    static or type-inferred.

    The first group are usually compiled but may offer interpreted options.
    Such languages can be naturally converted to performant native code.

    However A68G prioritises interpretation. While there is a
    compile-to-native option, it's not very performant.

    So overall it's a curiosity. (An interesting one, because after several
    decades I was finally able to try out Algol68 for real. I wasn't
    impressed, and that had nothing to do with its speed either.)



    * Run from source

    How about JIT, how about intermediate languages?

    Intermediate languages (designed for compiler backends) are irrelevant. Whether they even have a textual source format is a detail.

    JIT-ing used in place of AOT-compilation for static languages is
    something new. I haven't come across examples, so I don't know how well
    it works out, or what latencies there might be.

    Personally, I can run both C (single file programs ATM) and my languages directly from source, with no discernible delay, via a VERY FAST AOT
    step. But I wouldn't class them as scripting languages for other reasons.


    * Little compile-time error checking

    (We already commented in your above point "Dynamic typing".)

    There's more that could be done. Take:

    F(x, y, z)

    F is a function in some imported module. In most dynamic languages, the
    import is done at runtime, so the number of arguments - or even whether
    F is a function at all - can't be checked at compile-time.

    In my dynamic language, the import is done at compile-time so there's
    more that can be checked in advance. It's less dynamic, but x, y, z can
    still be dynamically typed.
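
    (For contrast, the static end of this spectrum is familiar from C: once
    a prototype for the imported function is in scope, an arity mistake in a
    call is caught at compile time rather than when the call is eventually
    executed. A compile-only sketch of a caller; the names are invented, and
    F is assumed to be defined in another module.)

        /* Hypothetical imported interface: with this prototype in scope,
           the compiler checks every call's argument count and types. */
        int F(int x, int y, int z);

        int use_it(void) {
            /* F(1, 2);  -- would be rejected at compile time: too few arguments */
            return F(1, 2, 3);   /* arity checked against the prototype */
        }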

    * Less control (of data structures for example)

    Not sure what you mean (control constructs, more data structures).

    I mean things like layouts of structs, or even the exact form of an
    array. Again, mine has FFI abilities built-in, and directly supports
    C-like data types.

    So either of these user types can be defined:

    record date1 =
        var d, m, y         # can hold any types
    end

    type date2 = struct
        u8 d, m
        u16 y
    end

    An instance of the latter occupies 4 bytes; of the former, 48+32 bytes
    plus whatever big data the members may contain.

    Most dynamic languages don't natively support that latter kind of data
    type. Actually, many don't even directly have records with named fields
    like the first; they have to be emulated.
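
    (For comparison, a minimal C sketch of the second type above; the 4-byte
    size assumes the usual layout with no padding, and the member names just
    mirror the example.)

        #include <stdint.h>
        #include <stdio.h>

        struct date2 {          /* C counterpart of the 'date2' above */
            uint8_t  d, m;
            uint16_t y;
        };

        int main(void) {
            struct date2 today = { 29, 10, 2025 };
            printf("%u bytes\n", (unsigned)sizeof today);  /* typically "4 bytes" */
            printf("%u/%u/%u\n",
                   (unsigned)today.d, (unsigned)today.m, (unsigned)today.y);
            return 0;
        }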


    It's a very inhomogeneous area. Impossible to clearly classify.

    Ask some people for examples of what they think of as scripting
    languages. I'd be interested in what they say.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From tTh@tth@none.invalid to comp.lang.c on Wed Oct 29 16:09:20 2025
    From Newsgroup: comp.lang.c

    On 10/29/25 15:40, bart wrote:

    I don't need a linker, I don't need a makefile, I don't need lists of dependencies between modules, I don't need independent compilation, I
    don't use object files.


    s/don't need/refuse to use/
    --
    ** **
    * tTh des Bourtoulots *
    * http://maison.tth.netlib.re/ *
    ** **
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Wed Oct 29 16:36:26 2025
    From Newsgroup: comp.lang.c

    On 28/10/2025 20:14, Janis Papanagnou wrote:
    On 28.10.2025 15:59, David Brown wrote:
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    [ snip Lua statements ]

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the modern
    world is demonstrated by the almost total non-existence of serious tools
    and, more importantly, real-world code in the language.

    Obviously you are mixing the terms usefulness and dissemination
    (its actual use). Please accept that I'm differentiating here.

    There are quite a few [historic] languages that were very useful but
    never managed to disseminate. (For another prominent example cf. Simula,
    which not only invented the object-oriented principles of classes and
    inheritance, and was a paragon for quite a few later OO languages, but
    also made many more technical and design inventions, some even now still
    unmatched.) It's a pathological historical phenomenon that programming
    languages from non-US locations had inherent problems disseminating,
    especially back in those days!

    Reasons for dissemination of a language are multifold; back then
    (but to a degree also today) they were often determined by political
    and marketing factors... (you can read about that in various historic documents and also in later ruminations about computing history)

    I can certainly agree that some languages, including Algol, Algol 68 and Simula, have had very significant influence on the programming world and
    other programming languages, despite limited usage. I was interpreting "useful programming language" as meaning "a language useful for writing programs" - and neither Algol 68 nor Simula are sensible choices for
    writing code today. Neither of them were ever appropriate choices for
    many programming tasks (Algol and its derivatives were used a lot more
    than Algol 68). The lack of significant usage of these languages beyond
    a few niche cases is evidence (but not proof) that they were never particularly useful as programming languages.


    It certainly /was/ a useful programming language, long ago,

    ...as you seem to basically agree to here. (At least as far as you
    couple usefulness with dissemination.)

    I do couple these, yes. I agree with you that there are many reasons
    for the popularity of languages other than technical suitability, but
    many of these add up to the general "usefulness" of the language. When choosing the language to use for a particular task, the availability of programmers familiar with the language, the availability of tools,
    libraries, and existing code, can be just as important as the language's efficiency, expressibility, or any technical benefits. Consider Bart's language - if we believe him at face value, it is the fastest, clearest,
    most logical, most powerful, and generally best programming language
    ever conceived. But for almost every programmer on the planet, it is completely useless.

    Similarly, Algol 68 may have been the technically best language of its
    age, and highly influential on other languages, and yet still not a
    useful programming language. It could also have been a useful
    programming language in its day, and no longer be a useful programming language.


    but it has not been
    seriously used outside of historical hobby interest for half a century.

    (Make that four decades. It's been used in the mid 1980's. - Later
    I didn't follow it anymore, so I cannot tell about the 1990's.)

    (I also disagree with your valuation "hobby interest"; for "hobbies"
    there were more easily accessible languages in use, not systems that
    back in those days were mainly available only on mainframes.)

    I did not suggest that it is now, or ever has been, an appropriate
    language for hobby programmers - I don't know the language enough to
    judge. I suggested that anyone programming in Algol 68 today is likely
    to be doing so as a hobby or for historical interest. (There may be the occasional professional maintaining ancient Algol code for ancient
    mainframes that are still in use.)


    As far as you mean programming software systems, that may be true;
    I cannot claim to have an overview of who did use it. I've read
    about various applications, though; amongst them that it was even
    used as a systems programming language (which astonished me).


    My understanding - which may well be flawed - is that Algol 60 and many
    non-standard variants were used quite widely at the time. Algol 68, on
    the other hand, never really took off outside a few niche cases.

    And unlike other ancient languages (like Cobol or Fortran) there is no
    code of relevance today written in the language.

    Probably right. (That would certainly be also my guess.)

    Original Algol was
    mostly used in research, while Algol 68 was mostly not used at all. As
    C.A.R. Hoare said, "As a tool for the reliable creation of sophisticated
    programs, the language was a failure".

    I don't know the context of his statement. If you know the language
    you might admit that reliable software is exactly one strong property
    of that language. (Per se already, but especially so if compared to
    languages like "C", the language discussed in this newsgroup, with an extremely large dissemination and also impact.)


    I don't know the context either.


    I'm sure there are /some/ people who have or will write real code in
    Algol 68 in modern times

    The point was that the language per se was and is useful. But its
    actual usage for developing software systems seems to have been of
    little and more so it's currently of no importance, without doubt.

    (the folks behind the new gcc Algol 68
    front-end want to be able to write code in the language),

    There's more than the gcc folks. (I've heard, that gcc has taken some substantial code from Genie, an Algol 68 "compiler-interpreter" that
    is still maintained. BTW; I'm for example using that one, not gcc's.)

    but it is very much a niche language.

    It's _functionally_ a general purpose language, not a niche language
    (in the sense of "special purpose language"). Its dissemination makes
    it a "niche language", that's true. It's in practice just a dead
    language. It's rarely used by anyone. But it's a very useful language.


    Can you give any examples of situations where it might be reasonable to
    choose Algol 68 as a language /today/ for a piece of code, rather than a
    more mainstream language (C, Python, Java, Pascal, Visual Basic,
    whatever) ? If such situations are very rare or non-existent, then I do
    not see it as a useful language.

    But I think we are mostly disagreeing about what we consider the term
    "useful programming language" to mean.





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Wed Oct 29 17:12:44 2025
    From Newsgroup: comp.lang.c

    On 29/10/2025 00:14, bart wrote:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
      capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    I'll give this one more try.

    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    He (I assume) always dismisses every single one of my arguments out of
    hand:

    Build speed is never a problem - ever. The speed of any language implementation is never a concern either.


    Bart, I think this all comes down to some basic logic that you get wrong regularly :

    The opposite of "X is always true" is /not/ "X is always false" or that
    "(not X) is always true". It is that "X is /sometimes/ false", or that
    "(not X) is /sometimes/ true".

    You get this wrong repeatedly when you and I are in disagreement, and I
    see it again and again with other people - such as with both Janis and
    Keith.

    No one, in any of the posts I have read in c.l.c. in countless years,
    has ever claimed that "build speed is /never/ a problem". People have regularly said that it /often/ is not a problem, or it is not a problem
    in their own work, or that slow compile times can often be dealt with in various ways so that it is not a problem. People don't disagree that
    build speed can be an issue - they disagree with your claims that it is /always/ an issue (except when using /your/ tools, or perhaps tcc).

    When Janis disagrees with you, he is not trashing /everything/ you say,
    he is disagreeing with /some/ of what you say.

    No one disagrees that /some/ people would change to using dynamic
    scripting languages if they had no runtime speed penalty compared to
    compiled languages - but probably everyone would disagree with a claim
    that /all/ programmers would change. And no one here thinks that either
    you or anyone has a reasonable basis for judging how many that "some"
    would be.

    So please, stop making this kind of mistake. I am confident that you understand the logic here. But you regularly write as though you do
    not, setting up nonsensical straw man arguments as a result. And then
    you make claims about what other people think or said based on this.
    Yes, it is very much /you/ who is difficult to communicate with.

    And you should not be surprised if Keith agrees with you sometimes -
    like I do, like Janis does, and like most people here do, he judges your points as best he can and agrees with some and disagrees with others.
    These discussions are not black-or-white, all-or-nothing affairs. If
    you like to hear positive feedback and agreement on your comments (and
    who doesn't like that?), you need to pay attention to what people write
    and notice when people agree with you rather than focusing only on when
    they disagree. Cut the paranoia, drop the straw men and exaggerations,
    argue your case logically, listen to the replies and feedback you get,
    and the whole discussion will be a lot more enjoyable and productive.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Wed Oct 29 16:47:39 2025
    From Newsgroup: comp.lang.c

    On 29/10/2025 15:09, tTh wrote:
    On 10/29/25 15:40, bart wrote:

    I don't need a linker, I don't need a makefile, I don't need lists of
    dependencies between modules, I don't need independent compilation, I
    don't use object files.


         s/don't need/refuse to use/

    It looks like Python refuses to use all those things too!

    Think about that, then think about how it might be possible for a
    language and implementation to use an alternate path to get from source
    code to executable. One that is simpler.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Wed Oct 29 17:24:49 2025
    From Newsgroup: comp.lang.c

    On 29/10/2025 15:36, David Brown wrote:
    On 28/10/2025 20:14, Janis Papanagnou wrote:

    Reasons for dissemination of a language are multifold; back then
    (but to a degree also today) they were often determined by political
    and marketing factors... (you can read about that in various historic
    documents and also in later ruminations about computing history)

    I can certainly agree that some languages, including Algol, Algol 68 and Simula, have had very significant influence on the programming world and other programming languages, despite limited usage.  I was interpreting "useful programming language" as meaning "a language useful for writing programs" - and neither Algol 68 nor Simula are sensible choices for
    writing code today.  Neither of them were ever appropriate choices for
    many programming tasks (Algol and its derivatives was used a lot more
    than Algol 68).  The lack of significant usage of these languages beyond
    a few niche cases is evidence (but not proof) that they were never particularly useful as programming languages.

    Algol68, while refreshingly different when I came across it in the late
    70s, was a complex language.

    Its reference document, the Revised Report, its two-level van
    Wijngaarden grammar, suggested a language too much up its own arse.

    Its complexities tended to leak even into straightforward features that
    people are familiar with from other languages.

    Understanding it, and confidently using it, looked hard. Implementing it
    must have been a lot harder.

    Also, at the time I'd only ever seen examples of it in print, where it
    was beautifully typeset and looked gorgeous.

    The reality when I finally got to try it was very different. You spent
    half the time fighting with upper/lower case and trying to get
    semicolons right. And most of rest grappling with esoteric error
    messages couched in terms from the revised report (which has its own vocabulary).

    I borrowed some syntactic features I considered cool, but I had to
    produce a real, practical systems language for microprocessors, whose
    compiler had to run on the same machine.

    From this perspective, I consider it rather dreadful now, with lots of dubious-sounding aspects.

    Take this one: comments start with '#' (an alternative to COMMENT) and
    also end with '#'. Leave out '#' (or have a stray one) and everything
    now gets out of step.

    Or this one:

        print((2 + 3 * 4));

        BEGIN
            PRIO * = 5;
            print((2 + 3 * 4))
        END

    The first print shows 14. The second shows 20, as the precedence of '*'
    has been dropped below that of '+'.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Wed Oct 29 19:24:12 2025
    From Newsgroup: comp.lang.c

    On 29/10/2025 01:48, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
    capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!
    I'll give this one more try.
    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    No, I'm talking to you. It turns out that was a mistake.

    My post was **only** about your apparent confusion about a single
    statement, quoted above. I wasn't talking about JP personally, or about
    any of his other interactions with you. I explained in great detail
    what I was referring to. You ignored it.

    You seem unwilling or unable to focus on one thing.

    He (I assume) always dismisses every single one of my arguments out of hand:
    Build speed is never a problem - ever. The speed of any language
    implementation is never a concern either.

    And here you are putting words in other people's mouths.

    I think your goal is to argue, not to do anything that might result in agreement or learning.

    Again, I think you're mixing up me and JP, whose only goal is to
    contradict and refute everything I say.

    I say: X has some problem; Y doesn't have that problem. This is about approaches to building software.

    He refuses to acknowledge that X has any problem whatsoever, or he
    shrugs off its importance.

    He refuses to accept that Y is a solution, because I devised it and he
    looks down upon me because he considers himself superior.


    He refuses to accept Z (which I haven't devised) for other reasons (to
    avoid admitting that I might have a point).

    The problems with X are real, and I think you have acknowledged them.
    But I have decades of experience with viable alternatives, so I think I
    can offer an educated, alternative opinion.

    JP, as far as I'm aware, has neither offered any better alternatives
    nor devised any of his own. So he is just a user of such software and
    not a creator.

    This is rather frustrating to me. You seem to be on his side, and don't
    care about X versus Y either.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Wed Oct 29 20:33:07 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> wrote:
    On 26/10/2025 16:04, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 26/10/2025 06:25, Janis Papanagnou wrote:


    However the A68G configure script is 11000 lines; the CDECL one 31600 lines.

    (I wonder why the latter needs 20000 more lines? I guess nobody is
    curious - or they simply don't care.)

    You should be able to figure that out yourself. You may actually
    learn something useful along the way.


    So you don't know.

    What special requirements does CDECL have (which has a task that is at
    least a magnitude simpler than A68G's), that requires those 20,000 extra lines?

    I did not look deeply, but cdecl is using automake and related
    tools. IIUC you can have a small real source, and depend on
    autotools to provide tests. This is likely to bring tons of
    irrelevant tests into configure. Or you can specify precisely
    which tests are needed. In the second case you need to
    write more code, but the generated configure is smaller.

    My working hypothesis is that cdecl is a relatively simple program,
    so the autotools defaults lead to a working build. And nobody was
    motivated enough to select what is needed, so configure
    contains a lot of code which is useful sometimes, but probably
    not for cdecl.

    BTW: In one of "my" projects there is a hand-written configure.ac
    which selects the tests that are actually needed for the
    project. Automake is _not_ used. The generated configure
    has 8564 lines. But the project has rather complex
    requirements and the autotools defaults are unlikely to
    work, so one really has to explicitly handle various
    details.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Wed Oct 29 21:21:34 2025
    From Newsgroup: comp.lang.c

    On 29/10/2025 16:12, David Brown wrote:
    On 29/10/2025 00:14, bart wrote:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic
    languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
      capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    I'll give this one more try.

    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    He (I assume) always dismisses every single one of my arguments out of
    hand:

    Build speed is never a problem - ever. The speed of any language
    implementation is never a concern either.


    Bart, I think this all comes down to some basic logic that you get wrong regularly :

    The opposite of "X is always true" is /not/ "X is always false" or that "(not X) is always true".  It is that "X is /sometimes/ false", or that "(not X) is /sometimes/ true".

    You get this wrong repeatedly when you and I are in disagreement, and I
    see it again and again with other people - such as with both Janis and Keith.

    No one, in any of the posts I have read in c.l.c. in countless years,
    has ever claimed that "build speed is /never/ a problem".  People have regularly said that it /often/ is not a problem, or it is not a problem
    in their own work, or that slow compile times can often be dealt with in various ways so that it is not a problem.  People don't disagree that
    build speed can be an issue - they disagree with your claims that it
    is /always/ an issue (except when using /your/ tools, or perhaps tcc).

    It was certainly an issue here: I considered the 'make' part of building
    CDECL and A68G slow for the scale of the task, given that the apps
    are 68 and 78Kloc (static total of .c and .h files).

    A68G I know takes 90 seconds to build (since I've just tried it again;
    it took long enough that I had an ice-cream while waiting, so that's something).

    That's under 1Kloc per second; not great.

    But at least all the optimising would have produced a super-fast
    executable? Well, that's disappointing too; no-one can say that A68G is
    fast.

    I said that my equivalent product was 1000 times faster to build (don't
    forget the configure nonsense) and it ran 10 times faster on the same test.

    That is a quite remarkable difference. VERY remarkable. Only some of it
    is due to my product being smaller (but it's not 1000 times smaller!).

    This was stated to demonstrate how different my world was.

    My view is that there is something very wrong with the build systems
    everyone here uses. But I can understand that no one wants to admit that they're that bad.

    You find ways around it, you get inured to it, or you just use
    much more powerful machines than mine; but I would go round the bend if
    I had to work with something so unresponsive.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ben Bacarisse@ben@bsb.me.uk to comp.lang.c on Wed Oct 29 21:30:44 2025
    From Newsgroup: comp.lang.c

    scott@slp53.sl.home (Scott Lurndal) writes:

    Michael S <already5chosen@yahoo.com> writes:
    On Tue, 28 Oct 2025 16:05:47 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:
    ...
    There is still one computer system that uses Algol as both
    the system programming language, and for applications.

    Unisys Clearpath (descendents of the Burroughs B6500).


    Is B6500 ALGOL related to A68?

    A-series ALGOL has many extensions.

    DCAlgol, for example, is used to create applications
    for data communications (e.g. poll-select multidrop
    applications such as teller terminals, etc).

    NEWP is an algol dialect used for systems programming
    and the operating system itself.


    ALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000098-517/86000098-517.pdf
    DCALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000841-208.pdf
    NEWP: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-21.0/86002003-409.pdf

    None of these are related to Algol 68, any more than any other
    Algol-like language might be. None exhibit any of the key features that distinguish Algol 68 from Algol 60 or any of the many Algol-like
    languages such as Algol W or S-algol (sic).
    --
    Ben.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Wed Oct 29 15:10:41 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 29/10/2025 01:48, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages [...]

    Bart, is the above statement literally accurate? Do you believe that
    we would ALL be using "easy dynamic languages" if speed were not an
    issue, meaning that non-dynamic languages would die out completely?

    That's what this whole sub-argument is about.

    Maybe your statement was meant to be hyperbole, and what you
    really meant is that dynamic languages would be more popular than
    they are now if speed were not an issue. Possibly someone just took
    your figurative statement a little too literally. If that's the
    case, please just say so.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Thu Oct 30 00:04:43 2025
    From Newsgroup: comp.lang.c

    On 29/10/2025 22:21, bart wrote:
    On 29/10/2025 16:12, David Brown wrote:
    On 29/10/2025 00:14, bart wrote:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic
    languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
      capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    I'll give this one more try.

    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    He (I assume) always dismisses every single one of my arguments out
    of hand:

    Build speed is never a problem - ever. The speed of any language
    implementation is never a concern either.


    Bart, I think this all comes down to some basic logic that you get
    wrong regularly :

    The opposite of "X is always true" is /not/ "X is always false" or
    that "(not X) is always true".  It is that "X is /sometimes/ false",
    or that "(not X) is /sometimes/ true".

    You get this wrong repeatedly when you and I are in disagreement, and
    I see it again and again with other people - such as with both Janis
    and Keith.


    Bart, did you understand what I wrote here? Do you agree with it - or
    at least accept how your posts can be interpreted this way? If you
    can't change the way you express yourself, these threads will always end
    with you repeating wild exaggerations and generalisations on your
    favourite rants, no matter what the original topic, and you'll again get frustrated because you feel "everyone is against you". We get more than enough of that with Olcott - I know you can do better.

    No one, in any of the posts I have read in c.l.c. in countless years,
    has ever claimed that "build speed is /never/ a problem".  People have
    regularly said that it /often/ is not a problem, or it is not a
    problem in their own work, or that slow compile times can often be
    dealt with in various ways so that it is not a problem.  People don't
    disagree that build speed can be an issue - they disagree with your
    claims that it is /always/ an issue (except when using /your/ tools,
    or perhaps tcc).

    It was certainly an issue here: the 'make' part of building CDECL and
    A68G, I considered slow for the scale of the task given that the apps
    are 68 and 78Kloc (static total of .c and .h files).


    I have no interest in A68G. I have no stake in cdecl or knowledge (or particular interest) in how it was written, and how appropriate the
    number of lines of code are for the task in hand. I am confident it
    could have been written in a different way with less code - but not at
    all confident that doing so would be in any way better for the author of
    the program. I am also confident that you know far too little about
    what the program can do, or why it was written the way it was, to judge whether it has a "reasonable" number of lines of code, or not.

    However, it's easy to look at the facts. The "src" directory from the
    github clone has about 50,000 lines of code in .c files, and 18,000
    lines of code in .h files. The total is therefore about 68 kloc of
    source. This does not at all mean that compilation processes exactly 68 thousand lines of code - it will be significantly more than that as
    headers are included by multiple files, and lots of other headers from
    the C standard library and other libraries are included. Let's guess
    100 kloc.

    The build process takes 8 seconds on my decade-old machine, much of
    which is something other than running the compiler. (Don't ask me what
    it is doing - I did not write this software, design its build process,
    or determine how the program is structured and how it is generated by
    yacc or related tools. This is not my area of expertise.) If for some strange reason I choose to run "make" rather than "make -j", thus
    wasting much of my computer's power, it takes 16 seconds. Some of these non-compilation steps do not appear to be able to run in parallel, and a couple of the compilations (like "parser.c", which appears to be from a
    parser generator rather than specifically written) are large and take a
    couple of seconds to compile. My guess is that the actual compilations
    are perhaps 4 seconds. Overall, I make it 25 kloc per second. While I
    don't think that is a particularly relevant measure of anything useful,
    it does show that either you are measuring the wrong thing, using a
    wildly inappropriate or limited build environment, or are unaware of how
    to use your computer to build code. (And my computer cpu was about 30%
    busy doing other productive tasks, such as playing a game, while I was
    doing those builds.)


    So, you are exaggerating, mismeasuring or misusing your system to get
    build times that are well over an order of magnitude worse than
    expected. This follows your well-established practice.

    And you claim your own tools would be 1000 times faster. Maybe they
    would be. Certainly there have been tools in the past that are much
    smaller and faster than modern tools, and were useful at the time.
    Modern tools do so much more, however. A tool that doesn't do the job
    needed is of no use for a given task, even if it could handle other
    tasks quickly.

    But the crux of the matter, and I can't stress this enough as it never
    seems to get through to you, is that fast enough is fast enough. No one
    cares how long cdecl takes to build. Almost everyone who wants it will download a binary file - "apt-get install cdecl", or similar. The only
    people who bother to compile it are those who want the cutting edge
    version. And even if it takes a minute or two to build, so what? It
    does not matter. If it took an hour, that would be annoying if you
    wanted to run it /now/, but even then if it were a useful tool (to the
    user in question), all you need to do is start it running and then let
    it churn away in the background. Computers are really good at doing
    that kind of stuff, and don't get bored easily. Building is a one-time
    task. (If the edit-build-test cycle for the developers took an hour,
    that would be a totally different matter.)

    Of course everyone agrees that smaller and faster is better, all things
    being equal - but all things are usually /not/ equal, and once something
    is fast enough to be acceptable, making it faster is not a priority.

    You can view all this as "bad" if you want. But since the size of the
    source code for cdecl, the time it takes to build, the use of autotools,
    the out-the-box Windows experience, and the length of configure script
    have absolutely /zero/ influence on whether or not I would use cdecl, or
    how useful I would find it, why should I care about those things? I
    don't think they are relevant to the vast majority of other potential
    cdecl users either, and thus do not have to care for their experiences
    either.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From vallor@vallor@vallor.earth to comp.lang.c on Wed Oct 29 23:11:36 2025
    From Newsgroup: comp.lang.c

    At Wed, 29 Oct 2025 21:21:34 +0000, bart <bc@freeuk.com> wrote:

    On 29/10/2025 16:12, David Brown wrote:
    On 29/10/2025 00:14, bart wrote:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic
    languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
      capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    I'll give this one more try.

    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    He (I assume) always dismisses every single one of my arguments out of
    hand:

    Build speed is never a problem - ever. The speed of any language
    implementation is never a concern either.


    Bart, I think this all comes down to some basic logic that you get wrong regularly :

    The opposite of "X is always true" is /not/ "X is always false" or that "(not X) is always true".  It is that "X is /sometimes/ false", or that "(not X) is /sometimes/ true".

    You get this wrong repeatedly when you and I are in disagreement, and I see it again and again with other people - such as with both Janis and Keith.

    No one, in any of the posts I have read in c.l.c. in countless years,
    has ever claimed that "build speed is /never/ a problem".  People have regularly said that it /often/ is not a problem, or it is not a problem
    in their own work, or that slow compile times can often be dealt with in various ways so that it is not a problem.  People don't disagree that build speed can be an issue - they disagree with your claims that it
    is /always/ an issue (except when using /your/ tools, or perhaps tcc).

    It was certainly an issue here: the 'make' part of building CDECL and
    A68G, I considered slow for the scale of the task given that the apps
    are 68 and 78Kloc (static total of .c and .h files).

    Not sure if it's worth it, but my 2 cents:

    You can throw more processors at your "make" with the "-j" switch,
    something like:

    $ make -j $(nproc)

    Where $(nproc) substitutes the number of processors on your system
    for a parallel make.


    A68G I know takes 90 seconds to build (since I've just tried it again;
    it took long enough that I had an ice-cream while waiting, so that's something).

    That's under 1Kloc per second; not great.

    But at least all the optimising would have produced a super-fast
    executable? Well, that's disappointing too; no-one can say that A68G is fast.

    I said that my equivalent product was 1000 times faster to build (don't forget the configure nonsense) and it ran 10 times faster on the same test.

    That is a quite remarkable difference. VERY remarkable. Only some of it
    is due to my product being smaller (but it's not 1000 times smaller!).

    This was stated to demonstrate how different my world was.

    My view is that there is something very wrong with the build systems everyone here uses. But I can understand that no one wants to admit that they're that bad.

    You find ways around it, you get inured to it, or you just use much
    more powerful machines than mine; but I would go round the bend if
    I had to work with something so unresponsive.

    --
    -v System76 Thelio Mega v1.1 x86_64 NVIDIA RTX 3090Ti 24G
    OS: Linux 6.17.5 D: Mint 22.2 DE: Xfce 4.18
    NVIDIA: 580.95.05 Mem: 258G
    "Let's split up, we can do more damage that way."
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Wed Oct 29 23:19:10 2025
    From Newsgroup: comp.lang.c

    On 29/10/2025 22:10, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 29/10/2025 01:48, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    [...]

    Bart, is the above statement literally accurate?

    Literally as in all 8.x billion individuals on the planet, including
    infants and people in comas, would be using such languages?

    This is what you seem to be suggesting that I mean, and here you're both
    being overly pedantic. You could just agree with me you know!

    'If X then we'd all be doing Y' is a common English idiom, suggesting X
    was a no-brainer.


    Do you believe that
    we would ALL be using "easy dynamic languages" if speed were not an
    issue, meaning that non-dynamic languages would die out completely?

    Yes, I believe that if dynamic languages, however they are implemented,
    could always deliver native code speeds, then a huge number of people,
    and companies, would switch because of that and other benefits.

    Bear in mind that if that were the case, then new dynamic languages could emerge that help broaden their range of applications.




    That's what this whole sub-argument is about.

    Well I didn't start it. Somebody suggested the speed of a language implementation had little relevance (not willing to admit the
    shortcomings of A68G), and I suggested in light-hearted idiom that if
    dynamic languages were much faster, their take-up would be much greater.

    What should I have said, that it would increase by 54.91% over the next
    4 quarters?

    (Remind me to run my posts through a lawyer next time.)


    really meant is that dynamic languages would be more popular than
    they are now if speed were not an issue. Possibly someone just took
    your figurative statement a little too literally. If that's the
    case, please just say so.

    Oh, you finally got it! See, it wasn't hard.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Wed Oct 29 23:29:42 2025
    From Newsgroup: comp.lang.c

    On 29/10/2025 20:33, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:
    On 26/10/2025 16:04, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 26/10/2025 06:25, Janis Papanagnou wrote:


    However the A68G configure script is 11000 lines; the CDECL one 31600 lines.

    (I wonder why the latter needs 20000 more lines? I guess nobody is
    curious - or they simply don't care.)

    You should be able to figure that out yourself. You may actually
    learn something useful along the way.


    So you don't know.

    What special requirements does CDECL have (its task is at least an
    order of magnitude simpler than A68G's) that require those 20,000 extra
    lines?

    I did not look deeply, but cdecl is using automake and related
    tools. IIUC you can have a small real source and depend on
    autotools to provide tests. This is likely to bring tons of
    irrelevant tests into configure. Or you can specify precisely
    which tests are needed. In the second case you need to
    write more code, but the generated configure is smaller.

    My working hypothesis is that cdecl is a relatively simple program,
    so the autotools defaults lead to a working build. And nobody was
    motivated enough to select what is needed, so configure
    contains a lot of code which is useful sometimes, but probably
    not for cdecl.

    BTW: In one "my" project there is hand-written configure.ac
    which is select tests that are actually needed for the
    project. Automake in _not_ used. Generated configure
    has 8564 lines. But the project has rather complex
    requirements and autotools defaults are unlikely to
    work, so one really have to explicitly handle various
    details.



    I have a project coming up next month: a subset of my C compiler, which
    is not written in C, being ported to actual C.

    What I'm thinking of doing is taking part of that project, and creating
    a standalone program that does the 'explain' part of cdecl, and only for
    C, not C++. This would not be worth doing by itself.

    Then I can make that available to see how it looks and how it builds.

    But I do not expect it to need anything other than a C compiler, and it
    should work on any OS (it needs only a keyboard and a display).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Wed Oct 29 16:47:29 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> writes:
    [...]
    But the crux of the matter, and I can't stress this enough as it never
    seems to get through to you, is that fast enough is fast enough. No
    one cares how long cdecl takes to build.
    [...]

    Since the most recent argument here has been about the interpretation
    of an absolute statement, I think I should point out that your last
    statement above is not literally true. *Some* people do care how
    long cdecl takes to build. Most of us, I think, don't particularly
    care as long as it's no more than a few minutes.

    I understand what you meant, but in a discussion about hyperbolic
    statements being taken literally, I suggest it's good to be
    painfully precise.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Thu Oct 30 00:36:05 2025
    From Newsgroup: comp.lang.c

    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:

    It was certainly an issue here: the 'make' part of building CDECL and
    A68G, I considered slow for the scale of the task given that the apps
    are 68 and 78Kloc (static total of .c and .h files).


    I have no interest in A68G.  I have no stake in cdecl or knowledge (or particular interest) in how it was written, and how appropriate the
    number of lines of code are for the task in hand.  I am confident it
    could have been written in a different way with less code - but not at
    all confident that doing so would be in any way better for the author of
    the program.  I am also confident that you know far too little about
    what the program can do, or why it was written the way it was, to judge whether it has a "reasonable" number of lines of code, or not.

    However, it's easy to look at the facts.  The "src" directory from the github clone has about 50,000 lines of code in .c files, and 18,000
    lines of code in .h files.  The total is therefore about 68 kloc of source.  This does not at all mean that compilation processes exactly 68 thousand lines of code - it will be significantly more than that as
    headers are included by multiple files, and lots of other headers from
    the C standard library and other libraries are included.  Let's guess
    100 kloc.

    Yes, that's why I said the 'static' line counts are 68 and 78K. Maybe
    the slowdown is due to some large headers that lie outside the problem
    (not the standard headers), but so what? (That would be a shortcoming of
    the C language.)

    The A68G sources also contain lots of upper-case content, so perhaps
    macro expansion is going on too.

    The bottom line is this is an 80Kloc app that takes that long to build.


    The build process takes 8 seconds on my decade-old machine, much of
    which is something other than running the compiler.  (Don't ask me what
    it is doing - I did not write this software, design its build process,
    or determine how the program is structured and how it is generated by
    yacc or related tools.  This is not my area of expertise.)  If for some strange reason I choose to run "make" rather than "make -j", thus
    wasting much of my computer's power, it takes 16 seconds.  Some of these non-compilation steps do not appear to be able to run in parallel, and a couple of the compilations (like "parser.c", which appears to be from a parser generator rather than specifically written) are large and take a couple of seconds to compile.  My guess is that the actual compilations
    are perhaps 4 seconds.  Overall, I make it 25 kloc per second.  While I don't think that is a particularly relevant measure of anything useful,
    it does show that either you are measuring the wrong thing, using a
    wildly inappropriate or limited build environment, or are unaware of how
    to use your computer to build code.

    Tell me then how I should do it to get single-figure build times for a
    fresh build. But whatever it is, why doesn't it just do that anyway?!

    (And my computer cpu was about 30%
    busy doing other productive tasks, such as playing a game, while I was
    doing those builds.)


    So, you are exaggerating, mismeasuring or misusing your system to get
    build times that are well over an order of magnitude worse than
    expected.  This follows your well-established practice.

    So, what exactly did I do wrong here (for A68G):

    root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
    real 1m32.205s
    user 0m40.813s
    sys 0m7.269s

    This 90 seconds is the actual time I had to hang about waiting. I'd be interested in how I managed to manipulate those figures!

    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:

    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    On the RPi4 (usually 1/3 the speed of my PC), the make-time for A68G was
    137 seconds (using SD storage; the PC uses SSD), so perhaps 40 seconds
    on the PC, suggesting that the underlying Windows file system may be
    slowing things down, but I don't know.

    However the same PC, under actual Windows, manages this:

    c:\qx>tim mm qq
    Compiling qq.m to qq.exe (500KB but half is data; A68G is 1MB?)
    Time: 0.084

    And this:

    c:\cx>tim tcc lua.c (250-400KB)
    Time: 0.124

    And you claim your own tools would be 1000 times faster.

    In this case, yes. The figure is more typically around 100 if the other
    compiler is optimising, but that comparison would be between
    representations of the same program. A68G is somewhat bigger than my
    product.

      Maybe they
    would be.  Certainly there have been tools in the past that are much smaller and faster than modern tools, and were useful at the time.
    Modern tools do so much more, however.  A tool that doesn't do the job needed is of no use for a given task, even if it could handle other
    tasks quickly.

    It ran my test program; that's what counts!





    But the crux of the matter, and I can't stress this enough as it never
    seems to get through to you, is that fast enough is fast enough.  No one cares how long cdecl takes to build.

    I don't care either; I just wanted to try it.

    But I pick up things that nobody else seems to: this particular build
    was unusually slow; why was that? Perhaps there's a bottleneck in the
    process that needs to be fixed, or a bug, that would give benefits when
    it does matter.

    (An article posted on Reddit detailed how a small change in how Clang
    worked made a 5-7% difference in build times for large projects.

    You'd probably dismiss it as irrelevant, but lots of such improvements
    build up. At least it is good that some people are looking at such aspects.

    https://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster.html)


    Of course everyone agrees that smaller and faster is better, all things being equal - but all things are usually /not/ equal, and once something
    is fast enough to be acceptable, making it faster is not a priority.

    My compilers have already reached that threshold (most stuff builds in
    the time it takes to take my finger off the Enter button). But most
    mainstream compilers are a LONG way off.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Wed Oct 29 18:03:02 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 29/10/2025 22:10, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 29/10/2025 01:48, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    [...]
    Bart, is the above statement literally accurate?

    Literally as in all 8.x billion individuals on the planet, including
    infants and people in comas, would be using such languages?

    This is what you seem to be suggesting that I mean, and here you're
    both being overly pedantic. You could just agree with me you know!

    I have agreed with a significant number of your statements in the recent
    past. I would not consider agreeing with this particular statement
    without understanding just what you meant by it. (That would be a
    necessary but not sufficient prerequisite for my agreement.)

    'If X then we'd all be doing Y' is a common English idiom, suggesting
    X was a no-brainer.

    So you were being figurative, not literal. That's what I thought.
    Thank you for confirming it.

    Do you believe that
    we would ALL be using "easy dynamic languages" if speed were not an
    issue, meaning that non-dynamic languages would die out completely?

    Yes, I believe that if dynamic languages, however they are
    implemented, could always deliver native code speeds, then a huge
    number of people, and companies, would switch because of that and
    other benefits.

    You are conflating "a huge number of people" with "ALL". I suppose this
    is meant to be hyperbole.

    You wrote :

    If speed wasn't an issue then we'd all be using easy dynamic
    languages

    Janis replied :

    Huh? - Certainly not.

    Your reply to that was :

    *I* would! That's why I made my scripting languages as fast and
    capable as possible, so they could be used for more tasks.

    That is not responsive to what Janis wrote. I'm 99% sure that
    Janis's stated opinion is that *some but not all* programmers would
    switch to "easy dynamic langauges" if speed were not an issue.
    Telling us that you would does not contradict what Janis wrote
    or meant.

    However, if I dare to suggest that even one other person in the
    world might also have the same desire, you'd say that I can't
    possibly know that.

    No. If you suggested that one or more other people would switch to
    dynamic languages if speed were not an issue, I probably wouldn't even
    reply, because that statement would be so obviously true that it
    wouldn't be worth discussing. Your ideas about what other people think
    are so distorted that you assume we would disagree.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    And that's just nonsense, and *completely* nonresponsive to what Janis
    wrote.

    Your position is that, if speed were not an issue, "a huge
    number of people, and companies, would switch" to "easy dynamic
    languages". My position, and I believe Janis's position, is that *many*
    people and companies would likely switch to such languages in those
    circumstances, but probably not "a huge number". (I'm not interested in
    debating what "a huge number" means.) (I acknowledge the possibility that
    you're right and Janis and I are wrong, but we'll never know, because
    speed will never not be an issue. In any case, the point of this reply
    is to establish what was actually said, not who is right or wrong.)

    When Janis expressed skepticism about your claim that either "all"
    or "a huge number" of people would switch, you reacted exactly as
    if Janis had said that *nobody* would switch. You were offended by
    something that neither Janis nor anyone else wrote or suggested.
    I don't care who started the argument, but your misinterpretation
    of what Janis wrote is what has caused it to continue.

    This kind of thing keeps happening.

    Do you understand what I'm saying?
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Thu Oct 30 03:37:48 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:

    It was certainly an issue here: the 'make' part of building CDECL and
    A68G, I considered slow for the scale of the task given that the apps
    are 68 and 78Kloc (static total of .c and .h files).


    I have no interest in A68G.  I have no stake in cdecl or knowledge (or
    particular interest) in how it was written, and how appropriate the
    number of lines of code are for the task in hand.  I am confident it
    could have been written in a different way with less code - but not at
    all confident that doing so would be in any way better for the author of
    the program.  I am also confident that you know far too little about
    what the program can do, or why it was written the way it was, to judge
    whether it has a "reasonable" number of lines of code, or not.

    However, it's easy to look at the facts.  The "src" directory from the
    github clone has about 50,000 lines of code in .c files, and 18,000
    lines of code in .h files.  The total is therefore about 68 kloc of
    source.  This does not at all mean that compilation processes exactly 68 thousand lines of code - it will be significantly more than that as
    headers are included by multiple files, and lots of other headers from
    the C standard library and other libraries are included.  Let's guess
    100 kloc.

    Yes, that's why I said the 'static' line counts are 68 and 78K. Maybe
    the slowdown is due to some large headers that lie outside the problem
    (not the standard headers), but so what? (That would be a shortcoming of
    the C language.)

    The A68G sources also contain lots of upper-case content, so perhaps
    macro expansion is going on too.

    The bottom line is this is an 80Kloc app that takes that long to buidld.


    The build process takes 8 seconds on my decade-old machine, much of
    which is something other than running the compiler.  (Don't ask me what
    it is doing - I did not write this software, design its build process,
    or determine how the program is structured and how it is generated by
    yacc or related tools.  This is not my area of expertise.)  If for some strange reason I choose to run "make" rather than "make -j", thus
    wasting much of my computer's power, it takes 16 seconds.  Some of these non-compilation steps do not appear to be able to run in parallel, and a
    couple of the compilations (like "parser.c", which appears to be from a
    parser generator rather than specifically written) are large and take a
    couple of seconds to compile.  My guess is that the actual compilations
    are perhaps 4 seconds.  Overall, I make it 25 kloc per second.  While I don't think that is a particularly relevant measure of anything useful,
    it does show that either you are measuring the wrong thing, using a
    wildly inappropriate or limited build environment, or are unaware of how
    to use your computer to build code.

    Tell me then how I should do it to get single-figure build times for a
    fresh build. But whatever it is, why doesn't it just do that anyway?!

    (And my computer cpu was about 30%
    busy doing other productive tasks, such as playing a game, while I was
    doing those builds.)


    So, you are exaggerating, mismeasuring or misusing your system to get
    build times that are well over an order of magnitude worse than
    expected.  This follows your well-established practice.

    So, what exactly did I do wrong here (for A68G):

    root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
    real 1m32.205s
    user 0m40.813s
    sys 0m7.269s

    This 90 seconds is the actual time I had to hang about waiting. I'd be interested in how I managed to manipulate those figures!

    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:

    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    Those numbers indicate that there is something wrong with your
    machine. The sum of the second and third lines above gives CPU time.
    Real time is twice as large, so something is slowing things down.
    One possible trouble is having too little RAM, so the OS is swapping
    data to/from disc. Some programs do a lot of random I/O, which
    can be slow on a spinning disc, but SSDs usually are much
    faster at random I/O.

    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    FYI, reasonably typical report for normal make (without -j
    option) on my machine is:

    real 0m4.981s
    user 0m3.712s
    sys 0m0.963s

    that is real time is bigger than CPU time, but the difference is
    reasonably small. On bigger project using 'make -j 20' I get
    results like:

    real 1m1.840s
    user 6m5.335s
    sys 0m34.691s

    In the second case some steps are serial and use only one core,
    but on average the parallel build is much faster. Both cases are
    full builds, using NVMe storage (which has much higher throughput than
    a SATA SSD).
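
    As a rough, hedged illustration of this comparison (the job count is
    arbitrary), the effect shows up directly in the "time" output:

    $ make clean
    $ time make           # serial build: real should be close to user+sys
    $ make clean
    $ time make -j 3      # parallel build: real should drop well below user+sys

    If "real" is much larger than user+sys even for the serial build, the
    bottleneck is likely I/O or swapping rather than the compiler itself.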

    On the RPi4 (usually 1/3 the speed of my PC), the make-time for A68G was
    137 seconds (using SD storage; the PC uses SSD),

    AFAIK RPi4 is quad core machine, if yours has enough RAM you could
    try 'make -j 5'.

    so perhaps 40 seconds
    on the PC, suggesting that the underlying Windows file system may be
    slowing things down, but I don't know.

    As I wrote, something is wrong. Build should be mostly CPU time,
    so it should be almost twice as fast compared to real time that
    you gave.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From tTh@tth@none.invalid to comp.lang.c on Thu Oct 30 05:00:15 2025
    From Newsgroup: comp.lang.c

    On 10/30/25 01:36, bart wrote:

    You'd probably dismiss it as irrelevant, but lots of such improvements
    build up. At least it is good that some people are looking at such aspects.

    https://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster.html)


    This page is about C++, not C. It was irrelevant in
    this newsgroup. Try again, Bart.
    --
    ** **
    * tTh des Bourtoulots *
    * http://maison.tth.netlib.re/ *
    ** **
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Wed Oct 29 21:24:34 2025
    From Newsgroup: comp.lang.c

    antispam@fricas.org (Waldek Hebisch) writes:
    [...]
    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    I initially tried replacing "make" by "make -j", with no numeric
    argument. The result was that my system nearly froze (the load
    average went up to nearly 200). It even invoked the infamous OOM
    killer. "make -j" tells make to use as many parallel processes
    as possible.

    "make -j $(nproc)" is much better. The "nproc" command reports the
    number of available processing units. Experiments with a fairly
    large build show that arguments to "-j" larger than $(nproc) do
    not speed things up (on a fairly old machine with nproc=4). I had
    speculated that "make -n 5" might be worthwhile of some processes
    were I/O-bound, but that doesn't appear to be the case.

    This applies to GNU make. There are other "make" implementations
    which may or may not have a similar feature.
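
    A minimal sketch of the kind of wrapper described above (the script
    name and its contents are a guess, not the actual script):

    #!/bin/sh
    # mk: run GNU make with one job per available processing unit
    exec make -j "$(nproc)" "$@"

    Saved somewhere on $PATH and made executable, it can be invoked
    exactly like make itself.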

    [...]
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From vallor@vallor@vallor.earth to comp.lang.c on Thu Oct 30 04:52:50 2025
    From Newsgroup: comp.lang.c

    At Wed, 29 Oct 2025 21:24:34 -0700, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    antispam@fricas.org (Waldek Hebisch) writes:
    [...]
    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    I initially tried replacing "make" by "make -j", with no numeric
    argument. The result was that my system nearly froze (the load
    average went up to nearly 200). It even invoked the infamous OOM
    killer. "make -j" tells make to use as many parallel processes
    as possible.

    "make -j $(nproc)" is much better. The "nproc" command reports the
    number of available processing units. Experiments with a fairly
    large build show that arguments to "-j" larger than $(nproc) do
    not speed things up (on a fairly old machine with nproc=4). I had
    speculated that "make -n 5" might be worthwhile of some processes
    were I/O-bound, but that doesn't appear to be the case.

    This applies to GNU make. There are other "make" implementations
    which may or may not have a similar feature.

    [...]

    I cloned the cdecl archive to ramdisk and timed the installation commands:

    $ time -p ./bootstrap
    [...]
    real 6.13
    user 4.59
    sys 0.54

    $ time -p ./configure
    [...]
    real 11.94
    user 5.24
    sys 6.13

    $ time -p make -j$(nproc)
    [...]
    real 3.57
    user 11.01
    sys 2.74

    $ time -p sudo make install
    [...]
    real 0.35
    user 0.00
    sys 0.01

    On this system:

    $ nproc
    64

    $ grep 'model name' /proc/cpuinfo | uniq
    model name : AMD Ryzen Threadripper 3970X 32-Core Processor

    This workstation is a few years old, but I don't see any need to replace
    it at this point.

    The numbers above will hopefully give naysayers of autoconf and
    make pause for thought...
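
    For anyone wanting to reproduce that setup, a hedged sketch on a
    typical Linux box (the /dev/shm path is an assumption; any tmpfs or
    dedicated ramdisk mount would do):

    $ git clone https://github.com/paul-j-lucas/cdecl.git /dev/shm/cdecl
    $ cd /dev/shm/cdecl
    $ ./bootstrap && ./configure && time make -j"$(nproc)"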
    --
    -v System76 Thelio Mega v1.1 x86_64 NVIDIA RTX 3090Ti 24G
    OS: Linux 6.17.6 D: Mint 22.2 DE: Xfce 4.18
    NVIDIA: 580.95.05 Mem: 258G
    "It's deja vu all over again."
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Thu Oct 30 05:11:18 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> wrote:

    Because existing solutions DIDN'T EXIST in a practical form (remember I worked with 8-bit computers), or they were hopelessly slow and
    complicated on restricted hardware.

    I don't need a linker, I don't need a makefile, I don't need lists of dependencies between modules, I don't need independent compilation, I
    don't use object files.

    The generated makefile for the 49-module CDECL project is 2000 lines of gobbledygook; that's not really selling it to me!

    If *I* had a 49-module C project, the build info I'd supply you would basically be that list of files, plus the source files.

    I sometimes work with 8-bit microcontrollers. More frequently I work
    with 32-bit microcontrollers of a size comparable to 8-bit
    microcontrollers. One target has 4 kB RAM (plus 16 kB flash for
    storing programs). On such targets I care about program size.
    I found it convenient during development to run programs from
    RAM, so ideally program + data should fit in 4 kB. And frequently
    it fits. I have separate modules. For example, usually before
    doing anything else I need to configure the clock. The needed clock
    speed depends on the program. I could use a general clock-setting
    routine that can set "any" clock speed. But such a routine would
    be more complicated and consequently bigger than a more specialized
    one. So I have a few versions, each of which sets a single
    clock speed and does only what is necessary for that speed.
    Microcontrollers contain several built-in devices, which need
    drivers. But it is almost impossible to use all devices, and a
    given program usually uses only a few of them. So in programs
    I just include what is needed.

    My development process is a work in progress; there are some
    things which I would like to improve. But I need to organize
    things, for which I use files. There are compiler options,
    paths to tools and libraries. In other words, there is
    essential info outside the C files. I use Makefiles to record
    this info. It is quite likely that in the future I will
    have a tool to create specialized C code from higher-level
    information. In that case my dependencies will get more
    complex.
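
    As a rough illustration of this kind of per-project selection (the
    file names, the arm-none-eabi target and the linker script are all
    hypothetical; only the compiler/binutils options are standard):

    $ arm-none-eabi-gcc -Os -c clock_48mhz.c uart.c blink.c
    $ arm-none-eabi-gcc -nostartfiles -T ram.ld -o blink.elf clock_48mhz.o uart.o blink.o
    $ arm-none-eabi-size blink.elf    # check that code + data still fit in RAM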

    Modern microcontrollers are quite fast compared to their
    typical tasks, so most of the time the speed of the code is not
    critical. But I write interrupt handlers, and typically an
    interrupt handler should be as fast as possible, so speed
    matters here. And, as I wrote, the size of the compiled code is
    important. So a compiler that quickly generates slow and big
    code is of limited use to me. Given that files are usually
    rather small, I find gcc's speed reasonable (during development
    I usually do not need to wait for compilation; it is fast
    enough).

    Certainly a better compiler is possible. But given the need to
    generate reasonably good code for several different CPUs
    (there are a few major families, and within a family there are
    variations affecting generated code), this is a big task.

    One could have a better language than C. But currently it
    seems that I will be able to get the features that I want by
    generating code. Of course, if you look at the whole toolchain
    and development process, this is much more complicated than a
    specialized compiler for a specialized language. But creating a
    whole environment with the features that I want is a big task.
    By using gcc I reduce the amount of work that _I_ need to do.
    I wrote several pieces of code that are available in existing
    libraries (because I wanted to have smaller specialized
    versions), so I probably do more work than a typical developer.
    But life is finite, so one needs to choose what is worth
    (re)doing as opposed to reusing existing code.

    BTW: Using the usual recipes frequently gives much bigger programs;
    for example, a program blinking an LED (the embedded equivalent of
    "Hello world") may take 20-30 kB (with my approach it is
    552 bytes, most of which is essentially forced by the MCU
    architecture).

    So, gcc and make _I_ find useful. For microcontroller
    projects I currently do not need 'configure' and related
    machinery, but I do not exclude needing it in the future.

    Note that while I am developing programs, my focus is on
    providing a library and a development process. That is, a
    potential user is supposed to write code which should
    integrate with code that I wrote. So I need either
    some amalgamation at the source level, or linking. ATM linking
    works better. So I need linking, in the sense that if
    I were forbidden to use linking, I would have to develop
    some replacement, and that could be substantial work and
    inconvenience; for example, textual amalgamation would
    increase the build time from rather satisfactory now to
    a probably noticeable delay.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From vallor@vallor@vallor.earth to comp.lang.c on Thu Oct 30 05:38:17 2025
    From Newsgroup: comp.lang.c

    At Thu, 30 Oct 2025 04:52:50 +0000, vallor <vallor@vallor.earth> wrote:

    $ grep 'model name' /proc/cpuinfo | uniq
    model name : AMD Ryzen Threadripper 3970X 32-Core Processor

    That was on Linux. Now in a virt running Cygwin on Windows 11 Pro for Workstations...C drive image is on my NAS, connected with 10G-base-T.
    nproc is 4.

    CYGWIN_NT-10.0-26100 w11 3.6.5-1.x86_64 2025-10-09 17:21 UTC x86_64 Cygwin

    $ time -p ./bootstrap
    [...]
    real 14.29
    user 6.55
    sys 3.39

    $ time -p ./configure
    [...]
    real 106.75
    user 38.89
    sys 46.26

    $ time -p make -j$(nproc)
    [...]
    real 31.40
    user 50.76
    sys 15.83

    $ time -p make install
    [...]
    real 3.28
    user 1.24
    sys 1.52

    So configure took 1:47. Also, that's a bit misleading, because
    I had to run ./configure multiple times, and use the cygwin package
    manager to install dependencies: flex, bison, and libreadline-dev.

    I could have run it on a RAMdisk, but it wasn't worth my time to figure
    out how to set one up in Windows...which probably would have taken
    more than 107 seconds to do anyway.

    Seems like ./configure could be made faster, though, but one
    only runs it occasionally...
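
    One hedged aside on that point: autoconf-generated configure scripts
    can at least cache their test results, which helps when configure has
    to be re-run after installing a missing dependency:

    $ ./configure -C      # same as --config-cache; re-runs reuse config.cache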
    --
    -v System76 Thelio Mega v1.1 x86_64 NVIDIA RTX 3090Ti 24G
    OS: Linux 6.17.6 D: Mint 22.2 DE: Xfce 4.18
    NVIDIA: 580.95.05 Mem: 258G
    "Honey, PLEASE don't pick up the PH$@#*&$^(#@&$^%(*NO CARRIER"
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.lang.c on Thu Oct 30 07:45:15 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 04:24, Keith Thompson wrote:
    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    Well, let's see, on approximately 10,000 lines of code:

    $ make clean
    $ time make

    real 0m2.391s
    user 0m2.076s
    sys 0m0.286s

    $ make clean
    $ time make -j $(nproc)

    real 0m0.041s
    user 0m0.021s
    sys 0m0.029s

    That's a reduction in wall clock time from 4 minutes per MLOC to 4
    *seconds* per MLOC. I can't deny I'm impressed.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Thu Oct 30 09:02:19 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 00:19, bart wrote:
    On 29/10/2025 22:10, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 29/10/2025 01:48, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    [...]

    Bart, is the above statement literally accurate?

    Literally as in all 8.x billion individuals on the planet, including
    infants and people in comas, would be using such languages?

    This is what you seem to be suggesting that I mean, and here you're both being overly pedantic. You could just agree with me you know!

    'If X then we'd all be doing Y' is a common English idiom, suggesting X
    was a no-brainer.


     Do you believe that
    we would ALL be using "easy dynamic languages" if speed were not an
    issue, meaning that non-dynamic languages would die out completely?

    Yes, I believe that if dynamic languages, however they are implemented, could always deliver native code speeds, then a huge number of people,
    and companies, would switch because of that and other benefits.


    This would all be /so/ much easier if you just wrote what you meant in
    the first place. You don't need to use exaggerations and hyperbole, and
    you don't need to extrapolate your own opinions as though they apply to everyone. And it doesn't help when you write with the assumption that
    your gut feelings (with no objective information to back them up) are "no-brainers" or somehow obvious, and then you get in a fluster when
    others disagree.

    On the particular point here, would more people use "dynamic languages"
    (a somewhat vague term, but we are speaking vaguely here anyway) if
    speed were not an issue? I think if languages like Python or Javascript
    were faster, we'd see a /little/ more use of them - but not much more.
    After all, dynamic languages are already massively popular in particular fields with today's speeds. And while I doubt if anyone would complain
    if they were faster (unless the speed increase cost in other ways), they
    are apparently fast enough for a very wide range of uses.

    Of course there are situations where people have thought "Python is too
    slow for this, so I will have to use C even though I hate that
    language". But I personally do not think that will be the case for a
    "huge number of people and companies".

    Bear in mind that if that was the case, then new dynamic languages could emerge that help broad their range of applications.


    New dynamic languages pop up regularly, and there are many ways in which
    their speed is being improved (such as JIT, or better byte compiling and better VM's, as well as language design targeting speed). But sure, new
    ones could emerge that cover different use-cases better. The same
    applies to static languages.

    Whether the speed of any /particular/ language - such as Algol 68 -
    affected its uptake, is another matter.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Thu Oct 30 11:15:22 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 01:36, bart wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:

    It was certainly an issue here: the 'make' part of building CDECL and
    A68G, I considered slow for the scale of the task given that the apps
    are 68 and 78Kloc (static total of .c and .h files).


    I have no interest in A68G.  I have no stake in cdecl or knowledge (or
    particular interest) in how it was written, and how appropriate the
    number of lines of code are for the task in hand.  I am confident it
    could have been written in a different way with less code - but not at
    all confident that doing so would be in any way better for the author
    of the program.  I am also confident that you know far too little
    about what the program can do, or why it was written the way it was,
    to judge whether it has a "reasonable" number of lines of code, or not.

    However, it's easy to look at the facts.  The "src" directory from the
    github clone has about 50,000 lines of code in .c files, and 18,000
    lines of code in .h files.  The total is therefore about 68 kloc of
    source.  This does not at all mean that compilation processes exactly
    68 thousand lines of code - it will be significantly more than that as
    headers are included by multiple files, and lots of other headers from
    the C standard library and other libraries are included.  Let's guess
    100 kloc.

    Yes, that's why I said the 'static' line counts are 68 and 78K. Maybe
    the slowdown is due to some large headers that lie outside the problem
    (not the standard headers), but so what? (That would be a shortcoming of
    the C language.)

    The A68G sources also contain lots of upper-case content, so perhaps
    macro expansion is going on too.

    The bottom line is this is an 80Kloc app that takes that long to buidld.


    No, the bottom line is that this program took longer to build than you expected or wanted.

    Did the build time affect whether or not you use A68G ? If not, then it
    does /not/ take too long to build, even on your system.

    Of course you might feel it takes longer than you expect, or
    frustratingly long - that's up to you, your opinions, and your expectations.



    The build process takes 8 seconds on my decade-old machine, much of
    which is something other than running the compiler.  (Don't ask me
    what it is doing - I did not write this software, design its build
    process, or determine how the program is structured and how it is
    generated by yacc or related tools.  This is not my area of
    expertise.)  If for some strange reason I choose to run "make" rather
    than "make -j", thus wasting much of my computer's power, it takes 16
    seconds.  Some of these non-compilation steps do not appear to be able
    to run in parallel, and a couple of the compilations (like "parser.c",
    which appears to be from a parser generator rather than specifically
    written) are large and take a couple of seconds to compile.  My guess
    is that the actual compilations are perhaps 4 seconds.  Overall, I
    make it 25 kloc per second.  While I don't think that is a
    particularly relevant measure of anything useful, it does show that
    either you are measuring the wrong thing, using a wildly inappropriate
    or limited build environment, or are unaware of how to use your
    computer to build code.

    Tell me then how I should do it to get single-figure build times for a
    fresh build. But whatever it is, why doesn't it just do that anyway?!


    Try "make -j" rather than "make" to build in parallel. That is not the default mode for make, because you don't lightly change the default
    behaviour of a program that millions use regularly and have used over
    many decades. Some build setups (especially very old ones) are not
    designed to work well with parallel building, so having the "safe"
    single task build as the default for make is a good idea.

    I would also, of course, recommend Linux for these things. Or get a
    cheap second-hand machine and install Linux on that - you don't need
    anything fancy. As you enjoy comparative benchmarks, the ideal would be duplicate hardware with one system running Windows, the other Linux.
    (Dual boot is a PITA, and I am not suggesting you mess up your normal
    daily use system.)

    Raspberry Pi's are great for lots of things, but they are not fast for building software - most models have too little memory to support all
    the cores in big parallel builds, they can overheat when pushed too far,
    and their "disks" are very slow. If you have a Pi 5 with lots of ram,
    and use a tmpfs filesystem for the build, it can be a good deal faster.

    (And my computer cpu was about 30% busy doing other productive tasks,
    such as playing a game, while I was doing those builds.)


    So, you are exaggerating, mismeasuring or misusing your system to get
    build times that are well over an order of magnitude worse than
    expected.  This follows your well-established practice.

    So, what exactly did I do wrong here (for A68G):

      root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
      real    1m32.205s
      user    0m40.813s
      sys     0m7.269s

    This 90 seconds is the actual time I had to hang about waiting. I'd be interested in how I managed to manipulate those figures!

    Try "time make -j" as a simple step.


    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:

      root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make
    output
      <warnings>
      real    0m49.512s
      user    0m19.033s
      sys     0m3.911s

    On the RPi4 (usually 1/3 the speed of my PC), the make-time for A68G was
    137 seconds (using SD storage; the PC uses SSD), so perhaps 40 seconds
    on the PC, suggesting that the underlying Windows file system may be
    slowing things down, but I don't know.

    However the same PC, under actual Windows, manages this:

      c:\qx>tim mm qq
      Compiling qq.m to qq.exe      (500KB but half is data; A68G is 1MB?)
      Time: 0.084

    And this:

      c:\cx>tim tcc lua.c           (250-400KB)
      Time: 0.124


    Windows is a fine system in some ways, but it has different strengths
    and weaknesses compared to Linux. There are plenty of things Windows
    handles better than Linux in a very general sense. Here, however, there
    are two things that Linux (and all *nix style OS's) does significantly
    better than Windows - it has much more efficient filesystems, especially
    when dealing with lots of files at once, and it is much more efficient
    at starting and stopping processes and running lots of processes at once.

    gcc, make, and other tools used in the build of ccdecl (again, I have
    not looked at A68G) come from a world where big tasks are broken down
    into many little tasks. When you run a "gcc" command, even just for a
    compile (without linking), it will run a number of different programs - starting and stopping multiple processes. That is cheap on Linux, but a significant overhead on Windows. They communicate with temporary files
    - cheap on Linux (they are never written to a disk), but expensive on
    Windows. Similarly, the typical C libraries on Linux are happy to use multiple files because doing so is cheap on Linux - but much more
    expensive on Windows. (A single "#include <stdio.h>" C file on my Linux system uses 20 headers, totalling 3536 lines.) There are good reasons
    for breaking things into small parts like this, for better
    maintainability, scalability, portability and flexibility. However, it
    means that these things are all slower on Windows systems.
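
    A small, hedged way to see the header effect on a Linux system (the
    file name is arbitrary; -H, -E and -fsyntax-only are standard gcc
    options):

    $ echo '#include <stdio.h>' > t.c
    $ gcc -fsyntax-only -H t.c 2>&1 | grep -c '^\.'   # number of headers actually pulled in
    $ gcc -E t.c | wc -l                              # rough size of the preprocessed translation unit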

    Software that originates in the Windows world tends to be more
    monolithic - you make one big program that does everything, you make C
    library headers that are combined to avoid extra includes, and so on. Portability and scalability don't matter so much in a monoculture, and flexibility and reuse don't matter when toolchain developers are closed companies. (By that I mean that in the *nix world, some of the headers
    will be shared across multiple different C standard libraries, different
    C compilers, different OS's, and different target architectures in any combination.)

    I am not saying that one way is "right" and the other way is "wrong" - I
    am saying they are significantly different, and this can be a reason why certain kinds of big software systems can have very different
    performance characteristics on *nix systems and Windows.


    And you claim your own tools would be 1000 times faster.

    In this case, yes. The figure is more typically around 100 if the other compiler is optimising, however that would be representations of the
    same program. A68G is somewhat bigger than my product.

      Maybe they would be.  Certainly there have been tools in the past
    that are much smaller and faster than modern tools, and were useful at
    the time. Modern tools do so much more, however.  A tool that doesn't
    do the job needed is of no use for a given task, even if it could
    handle other tasks quickly.

    It ran my test program; that's what counts!

    If a tool does the job you need, and does so efficiently, that's great.






    But the crux of the matter, and I can't stress this enough as it never
    seems to get through to you, is that fast enough is fast enough.  No
    one cares how long cdecl takes to build.

    I don't care either; I just wanted to try it.

    But I pick up things that nobody else seems to: this particular build
    was unusually slow; why was that? Perhaps there's a bottleneck in the process that needs to be fixed, or a bug, that would give benefits when
    it does matter.

    Do you think there is a reason why /you/ get fixated on these things,
    and no one else in this group appears to be particularly bothered?
    Could it be that these things are not actually a problem to other
    people? You have never given any indication that you are interested in identifying bottlenecks or slowdowns, and have certainly shown no
    interest in fixing them or even just reporting them to anyone of
    relevance (like the guy who wrote cdecl, or the authors of autotools, or
    the gcc developers, or whoever might be at least vaguely connected with
    the process). I am sure there are lots of people here who - if they
    bothered to build cdecl at all - might think the build took longer than
    they would have guessed. But no one else has whined about it.

    Usually when a person thinks that they are seeing something no one else
    sees, they are wrong. (Look at Olcott for an extreme example.) And
    if there had ever been a regular in comp.lang.c who was once unaware
    that there are C compilers that can compile faster than gcc, or that
    autotools is outdated and probably unnecessary in most cases, you can be
    sure they have heard your message enough times already.


    (An article posted in Reddit detailed how a small change in how Clang
    worked made a 5-7% difference in build times for large projects.

    You'd probably dismiss it as irrelevant, but lots of such improvements
    build up. At least it is good that some people are looking at such aspects.

    https://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster.html)


    I am very happy that people make compilers faster. For me, personally,
    the biggest benefit clang has brought to my work is that the competition
    and cooperation with gcc has encouraged improvements to gcc - functional improvements such as better static warnings, and faster compilation.

    And I fully understand that build times for large projects are
    important, especially during development.

    But I do not share your obsession that compile and build times are the critical factor or the defining feature for a compiler (or toolchain in general). In my experience - /my/ experience - compile times for C code
    have never been an issue.  I have never felt the urge to use a different compiler because the one I am using is too slow.  I have never felt it
    made sense to use -O0 rather than -O2 (or whatever I choose as
    appropriate for the task in hand) because of compiler speed. I have
    never felt that I won't use a particular piece of software because the
    build step took too long.

    I have certainly found that it can be /nicer/ to have faster compiles or builds. I have certainly found it worth the effort to do builds
    efficiently - if I had to recompile all code for all files in my
    projects every time I made a small change, then build speed would become
    a problem. And I have occasionally done builds (such as full builds of embedded Linux systems) that take a long time - these would be
    frustrating if I had to do them regularly.

    And again, I am always glad when my tools run faster - but that does not
    mean I have a problem with them being too slow. I know you find it very difficult to understand that concept.



    Of course everyone agrees that smaller and faster is better, all
    things being equal - but all things are usually /not/ equal, and once
    something is fast enough to be acceptable, making it faster is not a
    priority.

    My compilers have already reached that threshold (most stuff builds in
    the time it takes to take my finger off the Enter button). But most mainstream compilers are a LONG way off.


    This is not a goal most compiler vendors have. When people are not particularly bothered about the speed of compilation for their files,
    the speed is good enough - people are more interested in other things.
    They are more interested in features like better checks, more helpful
    warnings or information, support for newer standards, better
    optimisation, and so on.

    Mainstream compiler vendors do care about speed - but not about the
    speed of the little C programs you write and compile. They put a huge
    amount of effort into the speed for situations where it matters, such as
    for building very large projects, or building big projects with advanced optimisations (like link-time optimisations across large numbers of
    files and modules), or working with code that is inherently slow to
    compile (like C++ code with complex templates or significant
    compile-time computation).


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Thu Oct 30 12:50:33 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 05:24, Keith Thompson wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    [...]
    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    I initially tried replacing "make" by "make -j", with no numeric
    argument. The result was that my system nearly froze (the load
    average went up to nearly 200). It even invoked the infamous OOM
    killer. "make -j" tells make to use as many parallel processes
    as possible.

    "make -j $(nproc)" is much better. The "nproc" command reports the
    number of available processing units. Experiments with a fairly
    large build show that arguments to "-j" larger than $(nproc) do
    not speed things up (on a fairly old machine with nproc=4). I had
    speculated that "make -n 5" might be worthwhile of some processes
    were I/O-bound, but that doesn't appear to be the case.

    This applies to GNU make. There are other "make" implementations
    which may or may not have a similar feature.


    Sometimes "make -j" can be problematic, yes. I don't know if newer
    versions of GNU make have got better at avoiding being too enthusiastic
    about starting jobs, but certainly if you have a project where a very
    large number of compile tasks could be started in parallel, but you
    don't have the ram to handle them all, things can go badly wrong. I've
    seen that myself too on occasion. (In the case of cdecl, there are not
    that many parallel compiles for it to be a risk, at least not on my
    machine.)

    Using "make -j ${nproc}" - or using "make -j 4" or "make -j 8" if you
    know your core count - can be a safer starting point. The ideal number
    for a given build can vary quite a lot, however. More parallel
    processes take more ram - great up to a point, but it can mean less ram
    for disk and file caching and thus slower results overall. And often
    cores are not all created equal - with SMT, half your cores might not be "real" cores, and on some processors you have a mix of fast cores and
    slow low-power cores. On my work machine with 4 "real" cores and 4 SMT
    cores, "make -j 6" is usually optimal for bigger builds. And then you
    have to consider that sometimes builds require significant other work
    than just compiling, and the ideal balance for those tasks may be
    different. Of course, such fine-tuning only really matters if you are
    doing the builds a lot.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Thu Oct 30 12:07:48 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 10:15, David Brown wrote:
    On 30/10/2025 01:36, bart wrote:

    Try "make -j" rather than "make" to build in parallel.  That is not the default mode for make, because you don't lightly change the default behaviour of a program that millions use regularly and have used over
    many decades.  Some build setups (especially very old ones) are not designed to work well with parallel building, so having the "safe"
    single task build as the default for make is a good idea.

    I would also, of course, recommend Linux for these things.  Or get a
    cheap second-hand machine and install Linux on that - you don't need anything fancy.  As you enjoy comparative benchmarks, the ideal would be duplicate hardware with one system running Windows, the other Linux.
    (Dual boot is a PITA, and I am not suggesting you mess up your normal
    daily use system.)

    Raspberry Pi's are great for lots of things, but they are not fast for building software - most models have too little memory to support all
    the cores in big parallel builds, they can overheat when pushed too far,
    and their "disks" are very slow.  If you have a Pi 5 with lots of ram,
    and use a tmpfs filesystem for the build, it can be a good deal faster.

    (And my computer cpu was about 30% busy doing other productive tasks,
    such as playing a game, while I was doing those builds.)


    So, you are exaggerating, mismeasuring or misusing your system to get
    build times that are well over an order of magnitude worse than
    expected.  This follows your well-established practice.

    So, what exactly did I do wrong here (for A68G):

       root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
       real    1m32.205s
       user    0m40.813s
       sys     0m7.269s

    This 90 seconds is the actual time I had to hang about waiting. I'd be
    interested in how I managed to manipulate those figures!

    Try "time make -j" as a simple step.


    OK, "make -j" gave a real time of 30s, about three times faster. (Not
    quite sure how that works, given that my machine has only two cores.)

    However, I don't view "-j", and parallelisation, as a solution to slow compilation. It is just a workaround, something you do when you've
    exhausted other possibilities.

    You have to get raw compilation fast enough first.

    Suppose I had the task of transporting N people from A to B in my car,
    but I can only take four at a time and have to get them there by a
    certain time.

    One way of helping out is to use "-j": get multiple drivers with their
    own cars to transport them in parallel.

    Imagine however that my car and all those others can only go at walking
    pace: 3mph instead of 30mph. Then sure, you can recruit enough
    volunteers to get the task done in the necessary time (putting aside the practical details).

    But can you see a fundamental problem that really ought to be fixed first?


    But I pick up things that nobody else seems to: this particular build
    was unusually slow; why was that? Perhaps there's a bottleneck in the
    process that needs to be fixed, or a bug, that would give benefits
    when it does matter.

    Do you think there is a reason why /you/ get fixated on these things,
    and no one else in this group appears to be particularly bothered?

    Usually when a person thinks that they are seeing something no one else sees, they are wrong.

    Quite a few people have suggested that there is something amiss about my
    1:32 and 0:49 timings. One has even said there is something wrong with
    my machine.

    You have even suggested I have manipulated the figures!

    So was I right in sensing something was off, or not?

    And I fully understand that build times for large projects are
    important, especially during development.

    But I do not share your obsession that compile and build times are the critical factor or the defining feature for a compiler (or toolchain in general).

    I find fast compile-times useful for several reasons:

    *I develop whole-program compilers* This means all sources have to be
    compiled at the same time, as there is no independent compilation at the module level.

    The advantage is that I don't need the complexity of makefiles to help
    decide which dependent modules need recompiling.

    *It can allow programs to be run directly from source* This is something
    that is being explored via complex JIT approaches. But my AOT compiler
    is fast enough that that is not necessary

    *It also allows programs to be interpreted* This is like run from source,
    but the compilation is faster as it can stop at the IL. (Eg. sqlite3
    compiles in 150ms instead of 250ms.)

    *It can allow whole-program optimisation* This is not something I take advantage of much yet. But it allows a simpler approach than either LTO
    or somehow figuring out how to create a one-file amalgamation.

    So it enables interesting new approaches. Imagine if you download the
    CDECL bundle and then just run it without needing to configure anything,
    or having to do 'make', or 'make -j'.

    This is a demo which runs my C compiler instead of CDECL. The C
    compiler source bundle is the file cc.ma (created using 'mm -ma cc'):

    c:\demo>dir
    30/10/2025 11:31 648,000 cc.ma
    26/09/2025 14:44 60 hello.c

    Now I run my C compiler from source:

    c:\demo>mm -r cc hello
    Compiling cc.m to cc.(run)
    Compiling hello.c to hello.exe

    Magic! Or, since 'cc' also shares the same backend as 'mm', it can also
    run stuff from source (but is limited to single file C programs):

    c:\demo>mm -r cc -r hello
    Compiling cc.m to cc.(run)
    Compiling hello.c to hello.(run)
    Hello, World!

    Forget ./configure, forget make. Of course you can do the same thing
    (maybe there is a 'make -run'), but the difference is that the above is instant.

    This is not a goal most compiler vendors have.  When people are not particularly bothered about the speed of compilation for their files,
    the speed is good enough - people are more interested in other things.
    They are more interested in features like better checks, more helpful warnings or information, support for newer standards, better
    optimisation, and so on.

    See the post from Richard Heathfield where he is pleasantly surprised
    that he can get a 60x speedup in build-time.

    People like fast tools!

    Mainstream compiler vendors do care about speed - but not about the
    speed of the little C programs you write and compile.  They put a huge amount of effort into the speed for situations where it matters, such as
    for building very large projects, or building big projects with advanced optimisations (like link-time optimisations across large numbers of
    files and modules), or working with code that is inherently slow to
    compile (like C++ code with complex templates or significant compile-
    time compilation).

    I think some 90% at least of the EXE/DLL files in my Windows\System32
    folder are under 1MB in size. That would be approx 100Kloc of C, or under.

    We've seen how long programs of 1MB and 0.6MB (apparent stripped sizes
    of A68G and CDECL) can take to build. Or do those count as 'little'?

    Anyway, the approaches used to speed up compilation of smaller programs
    can also help larger ones.

    (A few years ago, my main compiler was written in my interpreted
    scripting language, so it was very slow IMV. However, it was still double
    the speed of gcc -O0, while generating equally indifferent code.

    So I say something is wrong.)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Thu Oct 30 12:56:40 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 05:11, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:

    Because existing solutions DIDN'T EXIST in a practical form (remember I
    worked with 8-bit computers), or they were hopelessly slow and
    complicated on restricted hardware.

    I don't need a linker, I don't need a makefile, I don't need lists of
    dependencies between modules, I don't need independent compilation, I
    don't use object files.

    The generated makefile for the 49-module CDECL project is 2000 lines of
    gobbledygook; that's not really selling it to me!

    If *I* had a 49-module C project, the build info I'd supply you would
    basically be that list of files, plus the source files.

    I sometimes work with 8-bit microcontrollers. More frequently I work
    with 32-bit microcontrollers of a size comparable to 8-bit
    microcontrollers. One target has 4 kB RAM (plus 16 kB flash for
    storing programs). On such targets I care about program size.
    I found it convenient during development to run programs from
    RAM, so ideally program + data should fit in 4 kB. And frequently
    it fits. I have separate modules. For example, usually before
    doing anything else I need to configure the clock. The needed clock
    speed depends on the program. I could use a general clock-setting
    routine that can set "any" clock speed. But such a routine would
    be more complicated and consequently bigger than a more specialized
    one. So I have a few versions, each of which sets a single
    clock speed and does only what is necessary for that speed.
    Microcontrollers contain several built-in devices, and they need
    drivers. But it is almost impossible to use all the devices, and a
    given program usually uses only a few of them. So in programs
    I just include what is needed.
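
    As a rough C sketch of that idea (the register names, addresses and
    values below are made-up placeholders, not taken from any real part;
    a real version would use the vendor's device header):

        #include <stdint.h>

        /* Hypothetical clock control/status registers. */
        #define CLK_CTRL   (*(volatile uint32_t *)0x40001000u)
        #define CLK_STATUS (*(volatile uint32_t *)0x40001004u)

        /* Specialized routine: sets exactly one clock speed, so it only
           writes the one setting that matters and compiles to a handful
           of instructions. */
        void clock_init_48mhz(void)
        {
            CLK_CTRL = 0x00000523u;            /* placeholder PLL/divider value */
            while ((CLK_STATUS & 1u) == 0) {   /* wait until the clock is ready */
            }
        }

    A general clock_init(uint32_t hz) would have to compute dividers and
    handle every case at run time; that flexibility is exactly what costs
    extra bytes on a 16 kB part.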

    My development process is a work in progress; there are some
    things which I would like to improve. But I need to organize
    things, for which I use files. There are compiler options and
    paths to tools and libraries. In other words, there is
    essential info outside the C files. I use Makefiles to record
    this info. It is quite likely that in the future I will
    have a tool to create specialized C code from higher-level
    information. In that case my dependencies will get more
    complex.

    Modern microcontrollers are quite fast compared to their
    typical tasks, so most of the time the speed of the code is not
    critical. But I write interrupt handlers, and typically an
    interrupt handler should be as fast as possible, so speed
    matters there. And, as I wrote, the size of the compiled code is
    important. So a compiler that quickly generates slow and big
    code is of limited use to me. Given that the files are usually
    rather small, I find gcc's speed reasonable (during development
    I usually do not need to wait for compilation; it is fast
    enough).

    Certainly a better compiler is possible. But given the need to
    generate reasonably good code for several different CPUs
    (there are a few major families, and within a family there are
    variations affecting generated code), this is a big task.

    One could have a better language than C. But currently it
    seems that I will be able to get the features that I want by
    generating code. Of course, if you look at the whole toolchain
    and development process, this is much more complicated than a
    specialized compiler for a specialized language. But creating a
    whole environment with the features that I want is a big task.
    By using gcc I reduce the amount of work that _I_ need to do.
    I wrote several pieces of code that are available in existing
    libraries (because I wanted to have a smaller specialized
    version), so I probably do more work than the typical developer.
    But life is finite, so one needs to choose what is worth
    (re)doing as opposed to reusing existing code.

    BTW: Using the usual recipes frequently gives much bigger programs;
    for example, a program blinking a LED (the embedded equivalent of
    "Hello world") may take 20-30 kB (with my approach it is
    552 bytes, most of which is essentially forced by the MCU
    architecture).
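
    For a sense of scale, a minimal bare-metal blink in C looks roughly
    like this sketch (again, the register addresses and pin mask are
    placeholders, not those of any real MCU):

        #include <stdint.h>

        #define GPIO_ENABLE (*(volatile uint32_t *)0x40000000u)  /* placeholder */
        #define GPIO_OUT    (*(volatile uint32_t *)0x40000004u)  /* placeholder */
        #define LED_MASK    (1u << 5)                            /* placeholder pin */

        static void delay(volatile uint32_t n)
        {
            while (n--) { }    /* crude busy-wait; a real program would use a timer */
        }

        int main(void)
        {
            GPIO_ENABLE |= LED_MASK;       /* make the LED pin an output */
            for (;;) {
                GPIO_OUT ^= LED_MASK;      /* toggle the LED */
                delay(100000u);
            }
        }

    Most of a few-hundred-byte image like that is things the MCU
    architecture forces (vector table, startup code); the usual recipes
    typically link in generic vendor startup files and library code, which
    is where figures like 20-30 kB tend to come from.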

    So, gcc and make _I_ find useful. For microcontroller
    projects I currently do not need 'configure' and related
    machinery, but I do not exclude that in the future.

    Note that while I am developing programs, my focus is on
    providing a library and a development process. That is, a
    potential user is supposed to write code which should
    integrate with the code that I wrote. So I need either
    some amalgamation at the source level, or linking. ATM linking
    works better. So I need linking, in the sense that if
    I were forbidden to use linking, I would have to develop
    some replacement, and that could be substantial work and
    inconvenience; for example, textual amalgamation would
    increase the build time from rather satisfactory now to
    a probably noticeable delay.


    My background is unusual. I started off in hardware, and developed a
    small language and tools to help with my job as test and development
    engineer, something done on the side.

    Those tools evolved, and I got used to creating my own solutions, ones
    that were very productive compared to the (expensive and slow) compilers
    that were available then.

    Linking existed, in the form of a 'loader' program that combined
    multiple object files into one executable; a trivial task IMO, but other people's linkers seemed to make a big deal of it (they still do!).

    I didn't use makefiles: I had a crude IDE which used a project file,
    listing my source modules. So the IDE already knew which files
    needed to be submitted for compilation, on the occasions I needed to
    compile everything.

    I was also familiar enough with my projects to know when I only needed to recompile the one module. In any case, compilation was quite fast even
    on the early 80s home and business computers I used (and used to help design!).

    I only use linking now for my C compiler, but that task is done within
    my assembler; there are no object files.

    My main language uses a whole-program compiler so linking is not
    relevant. External libraries are accessed dynamically only.

    When I wrote commercial apps, where users wanted to add their own
    content, I provided a scripting language for that. Developing add-ons
    was done within the running application.

    Now, if someone wanted to statically link native code from my compiler
    into their program, or vice versa, I can generate object files in
    standard format.

    Then a normal linker is used, but *they* are using the linker; not me!

    There are other solutions too: others can create libraries that are then
    used via runtime dynamic linking. And I also have facilities within my backend to generate executable code in memory, which could be made available as a library to user programs.

    In short, there are lots of alternatives when you are not limited to traditional tools, but you may have to write them yourself. For most
    people that is not feasible or practical: they will already be
    heavily invested in dependencies, and it cannot be done overnight anyway.

    But in my case it allows me to truthfully say:

    I don't need a linker, I don't need a makefile, I don't need lists of dependencies between modules, I don't need independent compilation, I
    don't use object files.

    However ... I still believe that the build process for lots of C
    programs, for when a user needs to compile a working program, can be
    vastly simplified. That means makefiles at least are not needed.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Thu Oct 30 14:13:53 2025
    From Newsgroup: comp.lang.c

    antispam@fricas.org (Waldek Hebisch) writes:
    bart <bc@freeuk.com> wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:


    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:

    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    Those numbers indicate that there is something wrong with your
    machine. Sum of second and third line above give CPU time.
    Real time is twice as large, so something is slowing down things.
    One possible trouble is having too small RAM, then OS is swapping
    data to/from disc. Some programs do a lot of random I/O, that
    can be slow on spinning disc, but SSD-s usually are much
    faster at random I/O.

    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    FYI, reasonably typical report for normal make (without -j
    option) on my machine is:

    real 0m4.981s
    user 0m3.712s
    sys 0m0.963s


    Just for grins, here's a report for a full rebuild of a real-world project
    that I build regularly. Granted most builds are partial (e.g. one or
    two source files touched) and take far less time (15 seconds or so,
    most of which is make calling stat(2) on a few hundred source files
    on an NFS filesystem). Close to three million SLOC, mostly in header
    files. C++.

    $ time make -s -j96
    real 9m10.38s
    user 3h50m15.59s
    sys 9m58.20s

    I'd challenge Bart to match that with a similarly sized project using
    his compiler and toolset, but I seriously doubt that this project could
    be effectively implemented using his personal language and toolset.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Thu Oct 30 14:32:36 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 14:13, Scott Lurndal wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    bart <bc@freeuk.com> wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:


    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:

    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    Those numbers indicate that there is something wrong with your
    machine. Sum of second and third line above give CPU time.
    Real time is twice as large, so something is slowing down things.
    One possible trouble is having too small RAM, then OS is swapping
    data to/from disc. Some programs do a lot of random I/O, that
    can be slow on spinning disc, but SSD-s usually are much
    faster at random I/O.

    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    FYI, reasonably typical report for normal make (without -j
    option) on my machine is:

    real 0m4.981s
    user 0m3.712s
    sys 0m0.963s


    Just for grins, here's a report for a full rebuild of a real-world project that I build regularly. Granted most builds are partial (e.g. one or
    two source files touched) and take far less time (15 seconds or so,
    most of which is make calling stat(2) on a few hundred source files
    on an NFS filesystem). Close to three million SLOC, mostly in header
    files. C++.


    What is the total size of the produced binaries? That will give me an idea of
    the true LoC for the project.

    How many source files (can include headers) does it involve? How many
    binaries does it actually produce?

    $ time make -s -j96
    real 9m10.38s
    user 3h50m15.59s
    sys 9m58.20s

    I'd challenge Bart to match that with a similarly sized project using
    his compiler and toolset, but I seriously doubt that this project could
    be effectively implemented using his personal language and toolset.

    If what you are asking is how my toolset can cope with a project on this scale, then I can have a go at emulating it, given the information above.

    I can tell you that over 4 hours, and working at generating 3-5MB per
    second, my compiler could produce 40-70GB of binary code in that time, although not in one file due to memory. I guess the size is somewhat
    smaller than that.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Thu Oct 30 16:41:49 2025
    From Newsgroup: comp.lang.c

    On Thu, 30 Oct 2025 07:45:15 +0000
    Richard Heathfield <rjh@cpax.org.uk> wrote:

    On 30/10/2025 04:24, Keith Thompson wrote:
    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    Well, let's see, on approximately 10,000 lines of code:

    $ make clean
    $time make

    real 0m2.391s
    user 0m2.076s
    sys 0m0.286s

    $ make clean
    $time make -j $(nproc)

    real 0m0.041s
    user 0m0.021s
    sys 0m0.029s

    That's a reduction in wall clock time of 4 minutes per MLOC to 4
    *seconds* per MLOC. I can't deny I'm impressed.


    Something is wrong here.
    Most likely you compared a "cold" build vs a "hot" build.
    Or your 'make clean' failed to clean the majority of objects.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Thu Oct 30 16:04:51 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 13:07, bart wrote:
    On 30/10/2025 10:15, David Brown wrote:
    On 30/10/2025 01:36, bart wrote:

    Try "make -j" rather than "make" to build in parallel.  That is not
    the default mode for make, because you don't lightly change the
    default behaviour of a program that millions use regularly and have
    used over many decades.  Some build setups (especially very old ones)
    are not designed to work well with parallel building, so having the
    "safe" single task build as the default for make is a good idea.

    I would also, of course, recommend Linux for these things.  Or get a
    cheap second-hand machine and install Linux on that - you don't need
    anything fancy.  As you enjoy comparative benchmarks, the ideal would
    be duplicate hardware with one system running Windows, the other
    Linux. (Dual boot is a PITA, and I am not suggesting you mess up your
    normal daily use system.)

    Raspberry Pi's are great for lots of things, but they are not fast for
    building software - most models have too little memory to support all
    the cores in big parallel builds, they can overheat when pushed too
    far, and their "disks" are very slow.  If you have a Pi 5 with lots of
    ram, and use a tmpfs filesystem for the build, it can be a good deal
    faster.

    (And my computer cpu was about 30% busy doing other productive
    tasks, such as playing a game, while I was doing those builds.)


    So, you are exaggerating, mismeasuring or misusing your system to
    get build times that are well over an order of magnitude worse than
    expected.  This follows your well-established practice.

    So, what exactly did I do wrong here (for A68G):

       root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
       real    1m32.205s
       user    0m40.813s
       sys     0m7.269s

    This 90 seconds is the actual time I had to hang about waiting. I'd
    be interested in how I managed to manipulate those figures!

    Try "time make -j" as a simple step.


    OK, "make -j" gave a real time of 30s, about three times faster. (Not
    quite sure how that works, given that my machine has only two cores.)

    You presumably understand how multi-tasking works when there are more processes than there are cores to run them. Sometimes you have more
    processes ready to run, in which case some have to wait. But sometimes processes are already waiting for something else (typically disk I/O
    here, but it could be networking or other things). So while one compile
    task is waiting for the disk, another one can be running. It's not
    common for the speedup from "make -j" or "make -j N" for some number N
    to be greater than the number of cores, but it can happen for small
    numbers of cores and slow disk.


    However, I don't view "-j", and parallelisation, as a solution to slow compilation. It is just a workaround, something you do when you've
    exhausted other possibilities.

    You moan that compiles are too slow. Yet doing them in parallel is a "workaround". Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround". Using a computer from this century is a "workaround". Using a decent OS is a "workaround". Is /everything/ that would reduce your scope for complaining loudly to the
    wrong people a workaround?

    Of course this kind of thing does not change the fundamental speed of
    the compiler, but it is very much a solution to problems, frustration or issues that people might have from compilers being slower than they
    might want. "make -j" does not make the compiler faster, but it does
    mean that the speed of the compiler is less of an issue.


    You have to get raw compilation fast enough first.

    Why? And - again - the "raw" compilation of gcc on C code, for my
    usage, is already more than fast enough for my needs. If it were
    faster, I would still use make. If it ran at 1 MLOC per second, I'd
    still use make, and I'd still structure my code the same way, and I'd
    still run on Linux. I would be happy to see gcc run at that speed, but
    it would not change how I work.


    Suppose I had the task of transporting N people from A to B in my car,
    but I can only take four at a time and have to get them there by a
    certain time.

    One way of helping out is to use "-j": get multiple drivers with their
    own cars to transport them in parallel.

    Imagine however that my car and all those others can only go at walking pace: 3mph instead of 30mph. Then sure, you can recruit enough
    volunteers to get the task done in the necessary time (putting aside the practical details).

    But can you see a fundamental problem that really ought to be fixed
    first?

    Sure - if that were realistic. But a more accurate model is that the
    cars go at 30 mph - the people will all get there safely, comfortably
    and in a reasonable time, and if there are lots of people you can scale
    by using more cars in parallel so that the real-world time taken is not
    much different. Your alternative is an electric scooter trimmed to go
    at 600 mph. Yes, it is faster for an individual, but is it really
    /better/? I'm sure we'd all be pleased if the car went at 60 mph rather
    than 30 mph, but the speed of the vehicle is not the only thing that
    affects the throughput of your transport system.

    There is no logical reason to focus solely on speed of one individual
    part of a large process when there are other ways to improve the speed
    of the process as a whole.



    But I pick up things that nobody else seems to: this particular build
    was unusually slow; why was that? Perhaps there's a bottleneck in the
    process that needs to be fixed, or a bug, that would give benefits
    when it does matter.

    Do you think there is a reason why /you/ get fixated on these things,
    and no one else in this group appears to be particularly bothered?

    Usually when a person thinks that they are seeing something no one
    else sees, they are wrong.

    Quite a few people have suggested that there is something amiss about my 1:32 and 0:49 timings. One has even said there is something wrong with
    my machine.


    Maybe there /is/ something wrong with your machine or setup. If you
    have a 2 core machine, it is presumably a low-end budget machine from
    perhaps 15 years ago. I'm all in favour of keeping working systems and
    I strongly disapprove of some people's two or three year cycles for
    swapping out computers, but there is a balance somewhere. With such an
    old system, I presume you also have old Windows (my office Windows
    machine is Windows 7), and thus the old and very slow style of WSL.
    That, I think, could explain the oddities in your timings.

    You have even suggested I have manipulated the figures!

    No, I did not. I have at various times suggested that you cherry-pick,
    that you might have poor methodology and that you sometimes benchmark in
    an unrealistic way in order to give yourself a bigger windmill for your tilting. (Timing a build on an old slow WSL layer on Windows on old
    slow hardware is an example of this - the typical user who would compile something like cdecl from source will be using some flavour of *nix and
    a computer suitable for software development.)


    So was I right in sensing something was off, or not?


    You were wrong in thinking something was off about cdecl or its build.
    And it should not be news to you that there is something very suboptimal
    about your computer environment, as this is not exactly the first time
    it has been discussed.

    And I fully understand that build times for large projects are
    important, especially during development.

    But I do not share your obsession that compile and build times are the
    critical factor or the defining feature for a compiler (or toolchain
    in general).

    I find fast compile-times useful for several reasons:

    Everyone who compiles code finds faster compile times nicer than slower compile times. That is not the point. The issue is about fast /enough/ compiles, and fast /enough/ builds.

    But of course I am quite happy to accept that fast compile times are
    important to you - your preferences and opinions are your own. The
    issue is that you can't accept other people have different priorities
    and experiences.


    *I develop whole-program compilers* This means all sources have to be compiled at the same time, as there is no independent compilation at the module level.

    OK. I have sometimes used whole-program compilation. It is naturally
    slower, but is helped by good tools (such as toolchains that support
    so-called "link-time optimisation"). And improving the speed of LTO - particularly by improving the parallelisation of the task across
    multiple cores - is a key focus for gcc and clang/llvm for speed.


    The advantage is that I don't need the complexity of makefiles to help decide which dependent modules need recompiling.

    People use make for many reasons - incremental building and dependency management is just one (albeit important) aspect. You mentioned in
    another post that "Python does not need make" - I have Python projects
    that are organised by makefiles. And honestly, if you had taken 1% of
    the time and effort you have spend complaining in c.l.c. about "make"
    and instead learned about it, you'd be writing makefiles in your sleep.
    It really is not that hard, and you will never convince me you are not
    smart enough to understand it quickly and easily.


    *It can allow programs to be run directly from source* This is something that is being explored via complex JIT approaches. But my AOT compiler
    is fast enough that that is not necessary

    I don't see what that is at all important for C programming. Why would someone want to use C for scripting? If I had a C file "test.c" that
    was short enough to be realistic for use as a script, and did not care
    about optimisation or static checking, I could just type "make test &&
    ./test" to run it pretty much instantly.


    *It also allows programs to be interpreted* This is like run from source,
    but the compilation is faster as it can stop at the IL. (Eg. sqlite3 compiles in 150ms instead of 250ms.)

    Faster compiles do not change anything fundamental about a language.
    They do not mean that C programs are interpreted, they mean that C
    programs compile faster.


    *It can allow whole-program optimisation* This is not something I take advantage of much yet. But it allows a simpler approach than either LTO
    or somehow figuring out how to create a one-file amalgamation.


    I can fully appreciate that as a compiler /writer/, you want a simpler
    system than LTO. As a compiler /user/, like the vast majority of
    programmers, I don't really care how complicated the compiler is. That
    is someone else's job.

    So it enables interesting new approaches. Imagine if you download the
    CDECL bundle and then just run it without needing to configure anything,
    or having to do 'make', or 'make -j'.

    Almost everyone who uses cdecl does that already. Enthusiasts living on
    the cutting edge need to spend a couple of minutes downloading and
    building the latest versions, but other people will use pre-built
    binaries. And those people are already very familiar with the
    "./configure && make -j 8 && sudo make install" sequence.


    Forget ./configure, forget make. Of course you can do the same thing,
    maybe there is 'make -run', the difference is that the above is instant.

    To be clear - I do think autotools is usually unnecessary, overly
    complex, slow, and long outdated. There are some kinds of projects
    where it could be a definite benefit - typically those for which there
    are a lot of configuration options that people might want in their
    builds, and it gives a lot of them out of the box. But I think there's
    a lot of potential at least for skipping almost all ./configure tests on almost all systems without losing the advantages and features of
    autotools. However, it's up to the project authors to decide if they
    want to use autotools or not, and the cost of ten seconds of my time
    does not bother me here.


    This is not a goal most compiler vendors have.  When people are not
    particularly bothered about the speed of compilation for their files,
    the speed is good enough - people are more interested in other things.
    They are more interested in features like better checks, more helpful
    warnings or information, support for newer standards, better
    optimisation, and so on.

    See the post from Richard Heathfield where he is pleasantly surprised
    that he can get a 60x speedup in build-time.


    There were no details in that post - I suspect it was not /entirely/
    serious.

    People like fast tools!

    Sure. I haven't seen anyone suggest otherwise.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Thu Oct 30 16:22:57 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 30/10/2025 14:13, Scott Lurndal wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    bart <bc@freeuk.com> wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:


    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:

    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    Those numbers indicate that there is something wrong with your
    machine. Sum of second and third line above give CPU time.
    Real time is twice as large, so something is slowing down things.
    One possible trouble is having too small RAM, then OS is swapping
    data to/from disc. Some programs do a lot of random I/O, that
    can be slow on spinning disc, but SSD-s usually are much
    faster at random I/O.

    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    FYI, reasonably typical report for normal make (without -j
    option) on my machine is:

    real 0m4.981s
    user 0m3.712s
    sys 0m0.963s


    Just for grins, here's a report for a full rebuild of a real-world project that I build regularly. Granted most builds are partial (e.g. one or
    two source files touched) and take far less time (15 seconds or so,
    most of which is make calling stat(2) on a few hundred source files
    on an NFS filesystem). Close to three million SLOC, mostly in header
    files. C++.


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so


    That will give me an idea of
    the true LoC for the project.

    There is really no relationship between SLoC and binary size.

    There are about 16 million SLOC (it's been a while since I
    last run sloccount against this codebase).

    $ sloccount .
    Totals grouped by language (dominant language first):
    ansic: 11905053 (72.22%)
    python: 2506984 (15.21%)
    cpp: 1922112 (11.66%)
    tcl: 87725 (0.53%)
    asm: 42745 (0.26%)
    sh: 14333 (0.09%)

    Total Physical Source Lines of Code (SLOC) = 16,484,351
    Development Effort Estimate, Person-Years (Person-Months) = 5,357.42 (64,289.00)
    (Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
    Schedule Estimate, Years (Months) = 13.99 (167.89)
    (Basic COCOMO model, Months = 2.5 * (person-months**0.38))
    Estimated Average Number of Developers (Effort/Schedule) = 382.92
    Total Estimated Cost to Develop = $ 723,714,160
    (average salary = $56,286/year, overhead = 2.40).

    The bulk of the ANSI C code are header files generated from
    YAML, likewise most of the python code (used for unit testing).
    The primary functionality is in the C++ (cpp) code.
    The application is highly multithreaded (circa 100 threads in
    an average run).


    How many source files (can include headers) does it involve? How many binaries does it actually produce?

    $ time make -s -j96
    real 9m10.38s
    user 3h50m15.59s
    sys 9m58.20s

    I'd challenge Bart to match that with a similarly sized project using
    his compiler and toolset, but I seriously doubt that this project could
    be effectively implemented using his personal language and toolset.

    If what you are asking is how my toolset can cope with a project on this scale, then I can have a go at emulating it, given the information above.

    I can tell you that over 4 hours, and working at generating 3-5MB per second, my compiler could produce 40-70GB of binary code in that time,

    That's a completely irrelevant metric.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From richard@richard@cogsci.ed.ac.uk (Richard Tobin) to comp.lang.c on Thu Oct 30 16:26:37 2025
    From Newsgroup: comp.lang.c

    In article <10dv52b$3gq3j$1@dont-email.me>,
    Richard Heathfield <rjh@cpax.org.uk> wrote:

    $time make -j $(nproc)

    Eww. How does make distinguish between j with an argument and
    j with no argument and a target?

    $ make -j a
    make: *** No rule to make target 'a'. Stop.
    $ make -j 3
    make: *** No targets specified and no makefile found. Stop.
    $ make 3
    cc 3.c -o 3

    That's a really bad idea.

    -- Richard
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Thu Oct 30 18:30:01 2025
    From Newsgroup: comp.lang.c

    On Thu, 30 Oct 2025 16:04:51 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 30/10/2025 13:07, bart wrote:


    OK, "make -j" gave a real time of 30s, about three times faster.
    (Not quite sure how that works, given that my machine has only two
    cores.)

    You presumably understand how multi-tasking works when there are more processes than there are cores to run them. Sometimes you have more processes ready to run, in which case some have to wait. But
    sometimes processes are already waiting for something else (typically
    disk I/O here, but it could be networking or other things). So while
    one compile task is waiting for the disk, another one can be running.
    It's not common for the speedup from "make -j" or "make -j N" for
    some number N to be greater than the number of cores, but it can
    happen for small numbers of cores and slow disk.


    It *can* give much higher speedup than the number of cores.
    Measurements taken on a relatively small MCU project: 33 modules,
    size:
    text data bss dec hex filename
    26953 156 28028 55137 d761

    Compiled on my corporate desktop.
    Good hardware (Intel i7-17700, 8 P cores, 12 E cores, 28 logical CPUs, competent SSD : Samsung PM9F1).
    Bad software environment - very aggressive antivirus + 2 other
    "management" crapware agents.

    msys2, arm-none-eabi-gcc 13.3.0

    2nd column: execution time with all cores enabled.
    3rd column: execution time with compilation locked to single
    logical CPU (P-core).
    4th column: execution time with compilation locked to single
    logical CPU (E-core).

    flags tm-all tm-one-P tm-one-E
    none 0m20.689s 0m21.162s 0m44.608s
    -j 2 0m9.464s 0m11.199s 0m34.154s
    -j 3 0m6.855s 0m8.695s
    -j 4 0m4.970s 0m7.992s 0m21.895s
    -j 5 0m4.429s 0m7.632s
    -j 6 0m4.016s 0m7.340s
    -j 7 0m3.766s 0m7.296s
    -j 8 0m3.564s 0m7.248s
    -j 9 0m3.439s 0m7.245s 0m20.323s
    -j 10 0m3.562s 0m7.324s
    -j 28 0m3.741s 0m7.295s
    -j 33 0m3.623s 0m7.128s 0m18.098s
    -j 0m3.843s 0m7.187s 0m19.365s

    So, on P-core I see almost 3x speed up from simultaneity even with no
    actual parallelism.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Thu Oct 30 17:30:55 2025
    From Newsgroup: comp.lang.c

    richard@cogsci.ed.ac.uk (Richard Tobin) writes:
    In article <10dv52b$3gq3j$1@dont-email.me>,
    Richard Heathfield <rjh@cpax.org.uk> wrote:

    $time make -j $(nproc)

    Eww. How does make distinguish between j with an argument and
    j with no argument and a target?

    $ man 3 getopt

    Standard unix semantics since, well, forever. 'j' with
    no argument is an error.

    $ man 1 make


    https://pubs.opengroup.org/onlinepubs/9799919799/basedefs/V1_chap12.html
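
    For illustration, here is a minimal C sketch of the standard getopt()
    behaviour described above, with a 'j' option that requires an argument
    (option string "j:"): "-j 3" is accepted, while a bare "-j" is reported
    as an error. (GNU getopt also supports "j::" for an optional argument,
    which must then be attached, as in "-j3"; GNU make's -j goes further
    and accepts a separate numeric word, which is presumably why it has to
    guess whether the next word is a job count or a target.)

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
            int jobs = 1;
            int opt;

            /* "j:" means -j takes a required argument; getopt() consumes
               the following word, and "-j" with nothing after it yields '?'. */
            while ((opt = getopt(argc, argv, "j:")) != -1) {
                switch (opt) {
                case 'j':
                    jobs = atoi(optarg);
                    break;
                default:   /* unknown option or missing argument */
                    fprintf(stderr, "usage: %s [-j jobs] [target...]\n", argv[0]);
                    return EXIT_FAILURE;
                }
            }

            printf("jobs = %d\n", jobs);
            for (int i = optind; i < argc; i++)
                printf("target: %s\n", argv[i]);
            return 0;
        }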
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Thu Oct 30 17:40:01 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 16:22, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 14:13, Scott Lurndal wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    bart <bc@freeuk.com> wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:


    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:
    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    Those numbers indicate that there is something wrong with your
    machine. Sum of second and third line above give CPU time.
    Real time is twice as large, so something is slowing down things.
    One possible trouble is having too small RAM, then OS is swapping
    data to/from disc. Some programs do a lot of random I/O, that
    can be slow on spinning disc, but SSD-s usually are much
    faster at random I/O.

    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    FYI, reasonably typical report for normal make (without -j
    option) on my machine is:

    real 0m4.981s
    user 0m3.712s
    sys 0m0.963s


    Just for grins, here's a report for a full rebuild of a real-world project that I build regularly. Granted most builds are partial (e.g. one or
    two source files touched) and take far less time (15 seconds or so,
    most of which is make calling stat(2) on a few hundred source files
    on an NFS filesystem). Close to three million SLOC, mostly in header
    files. C++.


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so


    That will give me an idea of
    the true LoC for the project.

    There is really no relationship between SLoC and binary size.

    Yes, there is: a rule of thumb for x64 is 10 bytes of code per line of C source. But disproportionate use of header files may affect that.


    There are about 16 million SLOC (it's been a while since I
    last run sloccount against this codebase).

    $ sloccount .
    Totals grouped by language (dominant language first):
    ansic: 11905053 (72.22%)
    python: 2506984 (15.21%)
    cpp: 1922112 (11.66%)
    tcl: 87725 (0.53%)
    asm: 42745 (0.26%)
    sh: 14333 (0.09%)

    Total Physical Source Lines of Code (SLOC) = 16,484,351
    Development Effort Estimate, Person-Years (Person-Months) = 5,357.42 (64,289.00)
    (Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
    Schedule Estimate, Years (Months) = 13.99 (167.89)
    (Basic COCOMO model, Months = 2.5 * (person-months**0.38))
    Estimated Average Number of Developers (Effort/Schedule) = 382.92
    Total Estimated Cost to Develop = $ 723,714,160
    (average salary = $56,286/year, overhead = 2.40).

    The bulk of the ANSI C code are header files generated from
    YAML, likewise most of the python code (used for unit testing).
    The primary functionality is in the C++ (cpp) code.
    The application is highly multithreaded (circa 100 threads in
    an average run).


    How many source files (can include headers) does it involve? How many
    binaries does it actually produce?

    $ time make -s -j96
    real 9m10.38s
    user 3h50m15.59s
    sys 9m58.20s

    I'd challenge Bart to match that with a similarly sized project using
    his compiler and toolset, but I seriously doubt that this project could
    be effectively implemented using his personal language and toolset.

    If what you are asking is how my toolset can cope with a project on this
    scale, then I can have a go at emulating it, given the information above.

    I can tell you that over 4 hours, and working at generating 3-5MB per
    second, my compiler could produce 40-70GB of binary code in that time,

    That's a completely irrelevant metric.


    For me it is entirely relevant, as the tools I use are linear. If my car averages 60mph, then after 4 hours I expect to do 240 miles.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Thu Oct 30 17:49:31 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    You moan that compiles are too slow.  Yet doing them in parallel is a "workaround".  Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround".  Using a computer from this century is a "workaround".  Using a decent OS is a "workaround".  Is /everything/ that would reduce your scope for complaining loudly to the
    wrong people a workaround?

    Yes, they are all workarounds to cope with unreasonably slow compilers.
    They in fact all come across as excuses for your favorite compiler being
    slow.

    Which one of these methods would you use to advertise the LPS throughput
    of a compiler that you develop?


    Of course this kind of thing does not change the fundamental speed of
    the compiler, but it is very much a solution to problems, frustration or issues that people might have from compilers being slower than they
    might want.  "make -j" does not make the compiler faster, but it does
    mean that the speed of the compiler is less of an issue.


    You have to get raw compilation fast enough first.

    Why?  And - again - the "raw" compilation of gcc on C code, for my
    usage, is already more than fast enough for my needs.

    Not for mine, sorry.

      If it were
    faster, I would still use make.  If it ran at 1 MLOC per second, I'd
    still use make, and I'd still structure my code the same way, and I'd
    still run on Linux.

    If it ran 1Mlps, then half of make would be pointless.

    However, with C, it would run into other problems, like heavy include
    files, which would normally be repeatedly processed per-module. (This is something my language solves, but I also suggested, elsewhere in the
    thread, a way it could be mitigated in C.)

    But can you a see a fundamental problem that really ought to be fixed
    first?

    Sure - if that were realistic.  But a more accurate model is that the
    cars go at 30 mph

    No, I contend that big compilers do seem to go at 3mph, or worse.

    We can argue about how much extra work your compilers do than mine, so
    let's look at a slightly different tool: assemblers.

    Assembly is a straightforward task: there is no deep analysis, no optimisation, so it should be very quick, yes? Well have a look this
    survey I did from a couple of years ago:

    https://www.reddit.com/r/Compilers/comments/1c41y6d/assembler_survey/

    There are quite a range of speeds! So what are those slow products up to
    that take so long?

    People use make for many reasons - incremental building and dependency management is just one (albeit important) aspect.  You mentioned in
    another post that "Python does not need make" - I have Python projects
    that are organised by makefiles.

    Makefiles sound to me like your 'hammer' then.

      And honestly, if you had taken 1% of
    the time and effort you have spend complaining in c.l.c. about "make"
    and instead learned about it, you'd be writing makefiles in your sleep.
    It really is not that hard, and you will never convince me you are not
    smart enough to understand it quickly and easily.

    I simply don't like them; sorry. Everything they might do is taken care
    of by language design, or by my compiler, or by scripting in a proper scripting language.

    And they are ugly.


    *It can allow programs to be run directly from source* This is
    something that is being explored via complex JIT approaches. But my
    AOT compiler is fast enough that that is not necessary

    I don't see what that is at all important for C programming.  Why would someone want to use C for scripting?  If I had a C file "test.c" that
    was short enough to be realistic for use as a script, and did not care
    about optimisation or static checking, I could just type "make test
    && ./test" to run it pretty much instantly.

    By 'scripting' people have certain expectations. Here is my example of C
    run like a script:

    c:\cx>cs sql
    SQLite version 3.25.3/MCC 2018-11-05 20:37:38
    Enter ".help" for usage hints.
    Connected to a transient in-memory database.
    Use ".open FILENAME" to reopen on a persistent database.
    sqlite>

    Here, there is 1/4 second delay as it compiles sql.c (some 250Kloc), so
    a bit heavy for scripting. But another option is:

    c:\cx>ci sql
    SQLite version 3.25.3/MCC 2018-11-05 20:37:38
    ...

    'ci' will interpret from source, and 'cs' will run from source as native
    code. (ci/cs are the same EXE with a different name. The compiler looks
    at the name to apply different default options, eg. -r -q for 'cs'.)

    So, there is little start-up delay; there is no discernible build step;
    there is no unreasonable limit on size; there are no messy files left
    lying around; no files are written, so it could run from read-only media;
    and for C, it can run at native-code speed where possible.

    Otherwise we would have had 'scripting' for C forever, if your
    definition of it is simply being able to invoke a program on the same
    line that you've just built it!

    But I accept that using a 'shebang' line, plus the use of tcc, will work
    in many cases.
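
    For example, with tcc installed, a C file can be made directly
    executable like this (a minimal sketch; the path on the first line is
    whatever your system uses for tcc):

        #!/usr/bin/tcc -run
        #include <stdio.h>

        int main(void)
        {
            printf("Hello from a C \"script\"\n");
            return 0;
        }

    Then "chmod +x hello.c" and "./hello.c" compiles and runs it in one
    step; tcc skips the leading "#!" line, and the same file still builds
    with any other C compiler if you delete that line.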


    Almost everyone who uses cdecl does that already.  Enthusiasts living on the cutting edge need to spend a couple of minutes downloading and
    building the latest versions, but other people will use pre-built binaries.  And those people are already very familiar with the "./configure && make -j 8 && sudo make install" sequence.

    This is all Unix-Linux specific. There are other ways of building
    programs. I've used some of those over the course of some 49 years.


    Forget ./configure, forget make. Of course you can do the same thing,
    maybe there is 'make -run', the difference is that the above is instant.

    To be clear - I do think autotools is usually unnecessary, overly
    complex, slow, and long outdated.

    What?!

    After being accused of baseless moaning, you now also agree that
    something might be pointlessly slow?!

    What about the argument that 'you only have to run it once'?


    There were no details in that post - I suspect it was not /entirely/ serious.

    He wouldn't have made up the figures, but someone said they may have
    been erroneous.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From richard@richard@cogsci.ed.ac.uk (Richard Tobin) to comp.lang.c on Thu Oct 30 18:29:25 2025
    From Newsgroup: comp.lang.c

    In article <jhNMQ.1338175$Jgh9.1030888@fx15.iad>,
    Scott Lurndal <slp53@pacbell.net> wrote:

    Eww. How does make distinguish between j with an argument and
    j with no argument and a target?

    $ man 3 getopt

    Standard unix semantics since, well, forever. 'j' with
    no argument is an error.

    The upstream articles refer to Gnu make, which evidently does not
    conform to that.

    -- Richard
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Thu Oct 30 18:37:27 2025
    From Newsgroup: comp.lang.c

    richard@cogsci.ed.ac.uk (Richard Tobin) writes:
    In article <jhNMQ.1338175$Jgh9.1030888@fx15.iad>,
    Scott Lurndal <slp53@pacbell.net> wrote:

    Eww. How does make distinguish between j with an argument and
    j with no argument and a target?

    $ man 3 getopt

    Standard unix semantics since, well, forever. 'j' with
    no argument is an error.

    The upstream articles refer to Gnu make, which evidently does not
    conform to that.

    Yes, unfortunately the GNU people totally screwed up
    the option rules. Particularly with word options rather than
    simple single letters. If a utility requires more
    than 52 options, it should be split into multiple utilities.

    Then there are the application programmers with a
    Windows background who never learned the rules.
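
    For anyone unfamiliar with the convention, a minimal sketch (mine, not
    make's own parsing) of the getopt(3) semantics being referred to: with
    an option string of "j:", a lone -j is an error rather than an option
    with an optional argument:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int opt, jobs = 1;

        while ((opt = getopt(argc, argv, "j:")) != -1) {
            switch (opt) {
            case 'j':
                jobs = atoi(optarg);   /* optarg is the word following -j */
                break;
            default:                   /* missing argument or unknown option */
                fprintf(stderr, "usage: %s [-j jobs] target...\n", argv[0]);
                return EXIT_FAILURE;
            }
        }
        printf("jobs = %d, first target = %s\n",
               jobs, optind < argc ? argv[optind] : "(none)");
        return 0;
    }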
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Thu Oct 30 18:59:23 2025
    From Newsgroup: comp.lang.c

    On 2025-10-30, bart <bc@freeuk.com> wrote:
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    You moan that compiles are too slow.  Yet doing them in parallel is a
    "workaround".  Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround".  Using a computer from this
    century is a "workaround".  Using a decent OS is a "workaround".  Is /
    everything/ that would reduce your scope for complaining loudly to the
    wrong people a workaround?

    Yes, they are all workarounds to cope with unreasonably slow compilers.

    The idea of incremental rebuilding goes back to a time when compilers
    were fast, but machines were slow.

    If you had /those/ exact compilers today, and used them for even a pretty
    large project, you could likely do a full rebuild every time.

    But incremental building didn't go away because we already had it,
    and we took that into account when maintaining compilers.

    Basically, decades ago, we accepted the idea that it can take several
    seconds to compile the average file, and that we have incremental
    building to help with that.

    And so, unsurprisingly, as machines got several orders of magnitude
    faster, we have made compilers do more and become more bloated,
    so that it can still take seconds to do one file, and you use make to
    avoid doing it.

    A lot of it is the optimization. Disable optimization and GCC is
    something like 15X faster.

    Optimization exhibits diminishing returns. It takes more and more
    work for less and less gain. It's really easy to make optimization
    take 10X longer for a fraction of a percent increase in speed.

    Yet, it tends to be done because of the reasoning that the program is
    compiled once, and then millions of instances of the program are run
    all over the world.

    One problem in optimization is that it is expensive to look for the
    conditions that enable a certain optimization. It is more expensive
    than doing the optimization, because the optimization is often
    a conceptually simple code transformation that can be done quickly,
    when the conditions are identified. But the compiler has to look for
    those conditions everywhere, in every segment of code, every basic block.
    But it may turn out that there is a "hit" for those conditions in
    something like one file out of every hundred, or even more rarely.

    When there is no "hit" for the optimization's conditions, then it
    doesn't take place, and all that time spent looking for it is just
    making the compiler slower.

    The problem is that to get the best possible optimization, you have to
    look for numerous such rare conditions. When one of them doesn't "hit",
    one of the others might. The costs of these add up. Over time,
    compiler developers tend to add optimizations much more than remove them.

    They in fact all come across as excuses for your favorite compiler being slow.

    Well, yes. Since we've had incremental rebuilding since the time VLSI
    machines were measured in single-digit MHz, we've taken it for granted
    that it will be used and so, to reiterate, that excuses the idea of
    a compiler taking several seconds to do one file.

    Which one of these methods would you use to advertise the LPS throughput
    of a compiler that you develop?

    It would be a lie to measure lines per second on anything but
    a single-core, complete rebuild of the benchmark program.

    High LPS compilers are somehow not winning in the programming
    marketplace, or at least some segments.

    That field is open!

    Once upon a time it seemed that GCC would remain unchallenged. Then
    Clang came along: but it too got huge, fat and slow within a bunch of
    years. This is mainly due to trying to have good optimizations.

    You will never get a C compiler that has very high LPS throughput, but
    doesn't optimize as well as the "leading brand", to make inroads into
    the ecosystem dominated by the "leading brand".
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Thu Oct 30 13:21:39 2025
    From Newsgroup: comp.lang.c

    richard@cogsci.ed.ac.uk (Richard Tobin) writes:
    In article <10dv52b$3gq3j$1@dont-email.me>,
    Richard Heathfield <rjh@cpax.org.uk> wrote:

    $time make -j $(nproc)

    Eww. How does make distinguish between j with an argument and
    j with no argument and a target?

    $ make -j a
    make: *** No rule to make target 'a'. Stop.
    $ make -j 3
    make: *** No targets specified and no makefile found. Stop.
    $ make 3
    cc 3.c -o 3

    That's a really bad idea.

    Meh.

    The data structure that defines the '-j' option in the GNU make
    source is:

    static struct command_switch switches[] =
    {
        // ...
        { 'j', positive_int, &arg_job_slots, 1, 1, 0, 0, &inf_jobs,
          &default_job_slots, "jobs", 0 },
        // ...
    };

    Yes, it's odd that "-j" may or may not be followed by an argument.
    The way it works is that if the following argument exists and is
    (a string representing) a positive integer, it's taken as "-j N",
    otherwise it's taken as just "-j".
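
    The heuristic, expressed as a small stand-alone C function (this is my
    sketch of the behaviour just described, not GNU make's actual code):

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Return the job count if the word after "-j" looks like a positive
       integer, or 0 (meaning "unlimited") if it is absent or is a target. */
    static int job_slots_from(const char *next)
    {
        if (next == NULL || *next == '\0')
            return 0;
        for (const char *p = next; *p; p++)
            if (!isdigit((unsigned char)*p))
                return 0;              /* e.g. "foo" stays a target */
        return atoi(next);
    }

    int main(void)
    {
        printf("%d\n", job_slots_from("4"));    /* 4: behaves as -j 4 */
        printf("%d\n", job_slots_from("foo"));  /* 0: plain -j, "foo" is a target */
        printf("%d\n", job_slots_from(NULL));   /* 0: plain -j at end of line */
        return 0;
    }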

    A make argument that's not an option is called a "target"; for
    example in "make -j 4 foo", "foo" is the target. A target whose name
    is a positive integer is rare enough that the potential ambiguity
    is almost never an issue. If it is, you can use the long form:
    "make --jobs" or "make --jobs=N".

    I think it would have been cleaner if the argument to "-j" had
    been mandatory, with an argument of "0", "-1", or "max" having
    some special meaning. But changing it could break existing scripts
    that invoke "make -j" (though as I've written elsethread, "make -j"
    can cause problems).

    It would also have been nice if the "make -j $(nproc)" functionality
    had been built into make.

    The existing behavior is a bit messy, but it works, and I've never
    run into any actual problems with the way the options are parsed.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Thu Oct 30 13:37:41 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Try "time make -j" as a simple step.
    [...]

    In my recent testing, "make -j" without a numeric argument (which
    tells make to run as many parallel steps as possible) caused my
    system to bog down badly. This was on a fairly large project (I used
    vim); it might not be as much of a problem with a smaller project.

    I've found that "make -j $(nproc)" is safer. The "nproc" command
    is likely to be available on any system that has a "make" command.

    It occurs to me that "make -j N" can fail if the Makefile does
    not correctly reflect all the dependencies. I suspect this is
    less likely to be a problem if the Makefile is generated rather
    than hand-written.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Thu Oct 30 23:01:33 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 18:49, bart wrote:
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    You moan that compiles are too slow.  Yet doing them in parallel is a
    "workaround".  Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround".  Using a computer from
    this century is a "workaround".  Using a decent OS is a "workaround".
    Is / everything/ that would reduce your scope for complaining loudly
    to the wrong people a workaround?

    Yes, they are all workarounds to cope with unreasonably slow compilers.
    They in fact all come across as excuses for your favorite compiler being slow.

    Which one of these methods would you use to advertise the LPS throughput
    of a compiler that you develop?


    If I were developing a compiler, I would not advertise any kind of lines-per-second value. It is a totally useless metric - as useless as measuring developer performance on the lines of code he/she writes per day.


    Of course this kind of thing does not change the fundamental speed of
    the compiler, but it is very much a solution to problems, frustration
    or issues that people might have from compilers being slower than they
    might want.  "make -j" does not make the compiler faster, but it does
    mean that the speed of the compiler is less of an issue.


    You have to get raw compilation fast enough first.

    Why?  And - again - the "raw" compilation of gcc on C code, for my
    usage, is already more than fast enough for my needs.

    Not for mine, sorry.

    OK. I realise that's how you feel.


      If it were faster, I would still use make.  If it ran at 1 MLOC per
    second, I'd still use make, and I'd still structure my code the same
    way, and I'd still run on Linux.

    If it ran 1Mlps, then half of make would be pointless.

    If gcc ran at 1 Mlps, the developers would be doing something wrong -
    there are optimisations already understood that could give significant benefits to generated code but are impractical to implement or use
    because they scale badly and become too slow in practice. It would be
    better to prioritise these than meaningless speeds.


    However, with C, it would run into other problems, like heavy include
    files, which would normally be repeatedly processed per-module. (This is something my language solves, but I also suggested, elsewhere in the
    thread, a way it could be mitigated in C.)


    No method of avoiding headers has been found to be worth the effort in
    C. In C++, it's a different matter, and one of the key motivators for
    the development of C++ modules is build times.

    But can you see a fundamental problem that really ought to be fixed
    first?

    Sure - if that were realistic.  But a more accurate model is that the
    cars go at 30 mph
    No, I contend that big compilers do seem to go at 3mph, or worse.

    We can argue about how much extra work your compilers do compared with
    mine, so let's look at a slightly different tool: assemblers.

    Assembly is a straightforward task: there is no deep analysis, no
    optimisation, so it should be very quick, yes? Well, have a look at this
    survey I did from a couple of years ago:

    https://www.reddit.com/r/Compilers/comments/1c41y6d/assembler_survey/

    There are quite a range of speeds! So what are those slow products up to that take so long?

    People use make for many reasons - incremental building and dependency
    management is just one (albeit important) aspect.  You mentioned in
    another post that "Python does not need make" - I have Python projects
    that are organised by makefiles.

    Makefiles sound to me like your 'hammer' then.

    It's a Swiss army knife, not a hammer.


      And honestly, if you had taken 1% of
    the time and effort you have spent complaining in c.l.c. about "make"
    and instead learned about it, you'd be writing makefiles in your
    sleep. It really is not that hard, and you will never convince me you
    are not smart enough to understand it quickly and easily.

    I simply don't like them; sorry. Everything they might do is taken care
    of by language design, or by my compiler, or by scripting in a proper
    scripting language.

    And they are ugly.


    You haven't a clue about make and makefiles, but you insist on judging
    them - and on judging people who use the tool. It's okay for you not to
    use make, but it is not okay to be self-righteous about it as though
    your prejudice from ignorance is a good thing.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Thu Oct 30 23:37:15 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 21:37, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Try "time make -j" as a simple step.
    [...]

    In my recent testing, "make -j" without a numeric argument (which
    tells make to run as many parallel steps as possible) caused my
    system to bog down badly. This was on a fairly large project (I used
    vim); it might not be as much of a problem with a smaller project.

    I've found that "make -j $(nproc)" is safer. The "nproc" command
    is likely to be available on any system that has a "make" command.

    It occurs to me that "make -j N" can fail if the Makefile does
    not correctly reflect all the dependencies. I suspect this is
    less likely to be a problem if the Makefile is generated rather
    than hand-written.


    There certainly are makefile builds that might not work correctly with
    parallel builds. And I think you are right that this is typically a
    dependency specification issue, and that generating dependencies
    automatically in some way should carry a lower risk of problems. I think
    such issues are also found mostly in older makefiles - from the days of
    single-core machines, when "make -j N" was not a consideration.
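
    As an illustration (my example, not one from the thread; the header
    generator command is made up), this is the classic shape of a makefile
    that happens to work serially but can fail under "make -j N", because
    one user of a generated header does not list it as a prerequisite
    (recipe lines must be indented with a tab in a real makefile):

    prog: lexer.o parser.o
            cc -o prog lexer.o parser.o

    lexer.o: lexer.c gen.h
            cc -c lexer.c

    parser.o: parser.c            # missing dependency on gen.h
            cc -c parser.c

    gen.h: gen.def
            ./mkheader gen.def > gen.h

    A serial build only succeeds because lexer.o, which does depend on
    gen.h, happens to be built first; with -j, parser.c may be compiled
    before gen.h exists. Adding gen.h to parser.o's prerequisites (or
    generating the dependencies automatically) makes it safe at any -j
    level.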

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Thu Oct 30 23:11:43 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> wrote:
    On 30/10/2025 13:07, bart wrote:

    Quite a few people have suggested that there is something amiss about my
    1:32 and 0:49 timings. One has even said there is something wrong with
    my machine.


    Maybe there /is/ something wrong with your machine or setup. If you
    have a 2 core machine, it is presumably a low-end budget machine from perhaps 15 years ago. I'm all in favour of keeping working systems and
    I strongly disapprove of some people's two or three year cycles for
    swapping out computers, but there is a balance somewhere. With such an
    old system, I presume you also have old Windows (my office Windows
    machine is Windows 7), and thus the old and very slow style of WSL.
    That, I think, could explain the oddities in your timings.

    My laptop, which is 6 years old now, has two rather slow cores. I
    bought it because I use it when I am "on the move", that is, I carry
    it to places where I use it. I wanted a light laptop, hence a
    small battery. And compute power needs corresponding electric
    power; faster or more cores would drain the battery faster.

    I have a rather new mini-PC (I bought it a year ago). It has two cores,
    significantly faster than my laptop's, but only two. Again, its advantage
    is low power use; I did not do exact measurements, but its power use
    should be comparable with the newest Raspberry Pi. It is fine as a
    low-end personal machine.

    More generally, my impression is that 15 years ago there was limited
    choice in x86 cores. Now there are chips optimized for low power,
    and CPU speed can vary quite a lot. In my desktop I have 12
    fast cores (24 logical cores with hyperthreading). Some people
    here mentioned machines with a mixture of fast and slow cores.
    There are machines having a lot of slower cores. And low-end
    machines like my laptop or mini-PC. I have not met any new 1-core
    x86 CPU in the last several years, but 2-core ones are reasonably
    popular.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Thu Oct 30 23:23:15 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 18:59, Kaz Kylheku wrote:
    On 2025-10-30, bart <bc@freeuk.com> wrote:
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    You moan that compiles are too slow.  Yet doing them in parallel is a
    "workaround".  Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround".  Using a computer from this
    century is a "workaround".  Using a decent OS is a "workaround".  Is
    /everything/ that would reduce your scope for complaining loudly to the
    wrong people a workaround?

    Yes, they are all workarounds to cope with unreasonably slow compilers.

    The idea of incremental rebuilding goes back to a time when compilers
    were fast, but machines were slow.

    What do you mean by incremental rebuilding? I usually talk about
    /independent/ compilation.

    Then incremental builds might be about deciding which modules to
    recompile, except that that is so obvious, you didn't give it a name.

    Compile the one file you've just edited. If it might impact on any
    others (you work on a project for months, you will know it intimately),
    then you just compile the lot.


    If you had /those/ exact compilers today, and used them for even a pretty large project, you could likely do a full rebuild every time.

    But incremental building didn't go away because we already had it,
    and we took that into account when maintaining compilers.

    Basically, decades ago, we accepted the idea that it can take several
    seconds to compile the average file, and that we have incremental
    building to help with that.

    And so, unsurprisingly, as machines got several orders of magnitude
    faster, we have made compilers do more and become more bloated,
    so that it can still take seconds to do one file, and you use make to
    avoid doing it.

    A lot of it is the optimization. Disable optimization and GCC is
    something like 15X faster.

    I don't think so. Not for C anyway, or that level of language. It's
    usually about 3-5 times between -O0 and -O3, and even less between -O0
    and -O2.

    (The difference tends to be greater for compiling bigger modules, but you
    also get more global optimisations.)

    Optimization exhibits diminishing returns. It takes more and more
    work for less and less gain. It's really easy to make optimization
    take 10X longer for a fraction of a percent increase in speed.

    Yet, it tends to be done because of the reasoning that the program is compiled once, and then millions of instances of the program are run
    all over the world.

    One problem in optimization is that it is expensive to look for the conditions that enable a certain optimization. It is more expensive
    than doing the optimization, because the optimization is often
    a conceptually simple code transformation that can be done quickly,
    when the conditions are identified. But the compiler has to look for
    those conditions everywhere, in every segment of code, every basic block.
    But it may turn out that there is a "hit" for those conditions in
    something like one file out of every hundred, or even more rarely.

    When there is no "hit" for the optimization's conditions, then it
    doesn't take place, and all that time spent looking for it is just
    making the compiler slower.

    The problem is that to get the best possible optimization, you have to
    look for numerous such rare conditions. When one of them doesn't "hit",
    one of the others might. The costs of these add up. Over time,
    compiler developers tend to add optimizations much more than remove them.

    They in fact all come across as excuses for your favorite compiler being
    slow.

    The problem is that there is no fast path for -O0:

    c:\cx>tim gcc -O2 -s sql.c
    Time: 39.685

    c:\cx>tim gcc -O0 -s sql.c
    Time: 7.819 **

    That 8s vs 40s is welcome, but it can be also be:

    c:\cx>tim bcc sql
    Compiling sql.c to sql.exe
    Time: 0.245

    (** Note that this test uses windows.h, and gcc's version is much bigger
    than mine, and accounts for 1.3s of that timing.)

    So -O0 is still some 25 times slower than my product.

    (Tcc would be even faster, but it's not working for this app ATM. I've
    sometimes considered whether gcc should just secretly bundle tcc.exe,
    and run it for O-1.)


    Well, yes. Since we've had incremental rebuilding since the time VLSI machines were measured in single digit Mhz, we've taken it for granted
    that it will be used and so, to reiterate, that excuses the idea of
    a compiler taking several seconds to do one file.

    Which one of these methods would you use to advertise the LPS throughput
    of a compiler that you develop?

    It would be a lie to measure lines per second on anything but
    a single-core, complete rebuild of the benchmark program.

    Exactly. But also, you really need to do comparisons with other products
    on the same hardware, as LPS will be tied to the machine.

    (My friend's ordinary laptop, used for ordinary consumer stuff, is 70%
    faster than my PC. But I'm happy to give benchmark results on the PC.)

    High LPS compilers are somehow not winning in the programming
    marketplace, or at least some segments.

    That field is open!

    Once upon a time it seemed that GCC would remain unchallenged. Then
    Clang came along: but it too got huge, fat and slow within a bunch of
    years. This is mainly due to trying to have good optimizations.

    It had to keep up with gcc. But it is not helped by being based around
    LLVM which has grown into a monstrosity.

    You will never get a C compiler that has very high LPS throughput, but
    the ecosystem dominated by the "leading brand".

    People into compilers are obsessed with optimisation. It can be a
    necessity for languages that generate lots of redundant code that needs
    to be cleaned up, but not so much for C.

    Typical differences of between -O0 and -O2 compiled code can be 2:1.

    However even the most terrible native code will be a magnitude faster
    than interpreted code.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Thu Oct 30 16:44:38 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 30/10/2025 18:59, Kaz Kylheku wrote:
    On 2025-10-30, bart <bc@freeuk.com> wrote:
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    You moan that compiles are too slow.  Yet doing them in parallel is a
    "workaround".  Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround".  Using a computer from this
    century is a "workaround".  Using a decent OS is a "workaround".  Is
    /everything/ that would reduce your scope for complaining loudly to the
    wrong people a workaround?

    Yes, they are all workarounds to cope with unreasonably slow compilers.
    The idea of incremental rebuilding goes back to a time when
    compilers
    were fast, but machines were slow.

    What do you mean by incremental rebuilding? I usually talk about /independent/ compilation.

    Then incremental builds might be about deciding which modules to
    recompile, except that that is so obvious, you didn't give it a name.

    Compile the one file you've just edited. If it might impact on any
    others (you work on a project for months, you will know it
    intimately), then you just compile the lot.

    I'll assume that was a serious question. Even if you don't care,
    others might.

    Let's say I'm working on a project that has a bunch of *.c and
    *.h files.

    If I modify just foo.c, then type "make", it will (if everything
    is set up correctly) recompile "foo.c" generating "foo.o", and
    then run a link step to recreate any executable that depends on
    "foo.o". It knows it doesn't have to recompile "bar.c" because
    "bar.o" sill exists and is newer than "bar.c".

    Perhaps the project provides several executable programs, and
    only two of them rely on foo.o. Then it can relink just those
    two executables.

    This is likely to give you working executables substantially
    faster than if you did a full rebuild. It's more useful while
    you're developing and updating a project than when you download
    the source and build it once.
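
    As a concrete (if simplified) illustration, a makefile for that scenario
    might look like this - "common.h" standing in for whatever headers the
    two files share, and only one executable shown:

    prog: foo.o bar.o
            cc -o prog foo.o bar.o

    foo.o: foo.c common.h
            cc -c foo.c

    bar.o: bar.c common.h
            cc -c bar.c

    After editing only foo.c, "make" recompiles foo.o (now older than foo.c)
    and relinks prog; bar.o is untouched because it is still newer than both
    bar.c and common.h.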

    (I often tend to do full rebuilds anyway, for vague reasons I won't
    get into.)

    This depends on all relevant dependencies being reflected in the
    Makefile, and on file timestamps being updated correctly when files
    are edited. (In the distant past, I've run into problems with the
    latter when the files are on an NFS server and the server and client
    have their clocks set differently.)

    (I'll just go ahead and acknowledge, so you don't have to, that
    this might not be necessary if the build tools are infinitely fast.)

    If I've done a "make clean" or "git clean", or started from scratch
    by cloning a git repo or unpacking a .tar.gz file, then any generated
    files will not be present, and typing "make" will have to rebuild
    everything.

    [...]
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Thu Oct 30 23:49:53 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    Maybe there /is/ something wrong with your machine or setup.  If you
    have a 2 core machine, it is presumably a low-end budget machine from perhaps 15 years ago.  I'm all in favour of keeping working systems and
    I strongly disapprove of some people's two or three year cycles for
    swapping out computers, but there is a balance somewhere.  With such an
    old system, I presume you also have old Windows (my office Windows
    machine is Windows 7), and thus the old and very slow style of WSL.
    That, I think, could explain the oddities in your timings.

    The machine is from 2021. It has an SSD, 8GB, and runs Windows 11. It
    uses WSL version 2.

    It is fast enough for my 40Kloc compiler to self-host itself repeatedly
    at about 15Hz (ie. produce 15 new generations per second). And that is
    using unoptimised x64 code:

    c:\mx2>tim ms ms ms ms ms ms ms ms ms ms ms ms ms ms ms hello
    Hello, World
    Time: 1.017

    Hmm, I'm only counting 14 'ms' after the first. So apologies, it is only
    14Hz!


    You have even suggested I have manipulated the figures!

    No, I did not.  I have at various times suggested that you cherry-pick, that you might have poor methodology and that you sometimes benchmark in
    an unrealistic way in order to give yourself a bigger windmill for your tilting.

    You said this:

    DB:
    So, you are exaggerating, mismeasuring or misusing your system to get
    build times that are well over an order of magnitude worse than
    expected. This follows your well-established practice.

    But this is also very interesting: right from the start, I've been
    making the point that the figures I got were far slower than expected
    for the task.

    Here it seems you are saying the same thing. Yet I'm the one who gets repeatedly castigated.

    So was I right in sensing something was off, or not?


    You were wrong in thinking something was off about cdecl or its build.
    And it should not be news to you that there is something very suboptimal about your computer environment, as this is not exactly the first time
    it has been discussed.

    There's nothing wrong with my environment. My PC is a supercomputer
    compared with even 1970s mainframes and certainly compared to 1980s PCs.





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 31 00:15:45 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 23:44, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 18:59, Kaz Kylheku wrote:
    On 2025-10-30, bart <bc@freeuk.com> wrote:
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    You moan that compiles are too slow.  Yet doing them in parallel is a
    "workaround".  Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround".  Using a computer from this
    century is a "workaround".  Using a decent OS is a "workaround".  Is
    /everything/ that would reduce your scope for complaining loudly to the
    wrong people a workaround?

    Yes, they are all workarounds to cope with unreasonably slow compilers.
    The idea of incremental rebuilding goes back to a time when
    compilers
    were fast, but machines were slow.

    What do you mean by incremental rebuilding? I usually talk about
    /independent/ compilation.

    Then incremental builds might be about deciding which modules to
    recompile, except that that is so obvious, you didn't give it a name.

    Compile the one file you've just edited. If it might impact on any
    others (you work on a project for months, you will know it
    intimately), then you just compile the lot.

    I'll assume that was a serious question. Even if you don't care,
    others might.

    Let's say I'm working on a project that has a bunch of *.c and
    *.h files.

    If I modify just foo.c, then type "make", it will (if everything
    is set up correctly) recompile "foo.c" generating "foo.o", and
    then run a link step to recreate any executable that depends on
    "foo.o". It knows it doesn't have to recompile "bar.c" because
    "bar.o" sill exists and is newer than "bar.c".

    Perhaps the project provides several executable programs, and
    only two of them rely on foo.o. Then it can relink just those
    two executables.

    This is likely to give you working executables substantially
    faster than if you did a full rebuild. It's more useful while
    you're developing and updating a project than when you download
    the source and build it once.

    I never came across any version of 'make' in the DEC OSes I used in the
    1970s, nor did I see it in the 1980s.

    In any case it wouldn't have worked with my compiler, as it was not a
    discrete program: it was memory-resident together with an editor, as
    part of my IDE.

    This helped to get fast turnarounds even on floppy-based 8-bit systems.

    Plus, I wouldn't have felt the issue was of any great importance:

    When you're working intensely on a project for weeks or months, you will
    be dealing with a thousand functions, variables and constants that you
    have to keep organised in your mind.

    Keeping track of which modules needed recompiling was child's play (and
    I don't mean that literally!).

    Anyway, with the language I was using at that time, modules had a
    particular organisation:

    * Most were modules containing code
    * Some were classed as headers (only vaguely related to C headers),
    which contained shared, project-wide declarations
    * All modules shared the same set of headers (on compilation, all the
    headers were treated as one, via an IDE-synthesised header that
    included the rest)

    Edits to code modules only needed that module recompiled. A change to
    any header could require all to be recompiled, but that was at your discretion.



    (I often tend to do full rebuilds anyway, for vague reasons I won't
    get into.)

    This depends on all relevant dependencies being reflected in the
    Makefile, and on file timestamps being updated correctly when files
    are edited. (In the distant past, I've run into problems with the
    latter when the files are on an NFS server and the server and client
    have their clocks set differently.)

    (I'll just go ahead and acknowledge, so you don't have to, that
    this might not be necessary if the build tools are infinitely fast.)

    If I've done a "make clean" or "git clean", or started from scratch
    by cloning a git repo or unpacking a .tar.gz file, then any generated
    files will not be present, and typing "make" will have to rebuild
    everything.

    [...]


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Fri Oct 31 00:27:36 2025
    From Newsgroup: comp.lang.c

    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    [...]
    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    I initially tried replacing "make" by "make -j", with no numeric
    argument. The result was that my system nearly froze (the load
    average went up to nearly 200). It even invoked the infamous OOM
    killer. "make -j" tells make to use as many parallel processes
    as possible.

    "make -j $(nproc)" is much better. The "nproc" command reports the
    number of available processing units. Experiments with a fairly
    large build show that arguments to "-j" larger than $(nproc) do
    not speed things up (on a fairly old machine with nproc=4). I had
    speculated that "make -n 5" might be worthwhile of some processes
    were I/O-bound, but that doesn't appear to be the case.

    I frequently build my project on a few different machines. My
    machines typically are generously (compared to compiler need)
    equipped with RAM. Measuring several builds '-j 3' gave me
    fastest build on 2 core machine (no hyperthreading), '-j 7'
    gave me fastest build on old 4 core machine with hyperthreading
    (so 'nproc' reported 8 cores). In general, increasing number
    of jobs I see increasing total CPU time, but real time may go
    down because more jobs can use time where CPU(s) would be
    otherwise idle. At some number of jobs I get best real time
    and with larger number of jobs overheads due to multiple jobs
    seem to dominate leading to increase in real time. If number
    of jobs is too high I get slowdown due to lack of real memory.

    On a 12-core machine (24 logical cores) I use '-j 20'. Increasing the
    number of jobs gives a slightly faster build, but the difference is
    small, so I prefer to have more cores available for interactive
    use.

    Of course, that is balancing tradeoffs; your builds may have
    different characteristics than mine. I just wanted to say
    that _sometimes_ going beyond the number of cores is useful.
    IIUC what Bart wrote, he got a 3 times speedup using '-j 3'
    on a two-core machine, which is an unusually good speedup. IME
    normally 3 jobs on a 2-core machine is neutral or gives a small
    speedup. OTOH with hyperthreading, activating a logical core
    may slow down its twin. Consequently, using fewer jobs than
    logical cores may be better.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Fri Oct 31 00:28:12 2025
    From Newsgroup: comp.lang.c

    On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
    If I were developing a compiler, I would not advertise any kind of lines-per-second value. It is a totally useless metric - as useless as measuring developer performance on the lines of code he/she writes per day.

    If that were your only advantage, you'd have to flaunt it.

    "[[ Our compiler emits lousy code, emits only half the required ISO
    diagnostics (and those are all there are), and is compatible with only
    75% of your system's header files, and 80% of the ABI, but ...]] have you
    seen the raw speed in lines per second?"
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c on Thu Oct 30 17:35:31 2025
    From Newsgroup: comp.lang.c

    On 10/30/2025 5:27 PM, Waldek Hebisch wrote:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    [...]
    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    I initially tried replacing "make" by "make -j", with no numeric
    argument. The result was that my system nearly froze (the load
    average went up to nearly 200). It even invoked the infamous OOM
    killer. "make -j" tells make to use as many parallel processes
    as possible.

    "make -j $(nproc)" is much better. The "nproc" command reports the
    number of available processing units. Experiments with a fairly
    large build show that arguments to "-j" larger than $(nproc) do
    not speed things up (on a fairly old machine with nproc=4). I had
    speculated that "make -n 5" might be worthwhile of some processes
    were I/O-bound, but that doesn't appear to be the case.

    I frequently build my project on a few different machines. My
    machines typically are generously (compared to compiler need)
    equipped with RAM. Measuring several builds '-j 3' gave me
    fastest build on 2 core machine (no hyperthreading), '-j 7'
    gave me fastest build on old 4 core machine with hyperthreading
    (so 'nproc' reported 8 cores). In general, increasing number
    of jobs I see increasing total CPU time, but real time may go
    down because more jobs can use time where CPU(s) would be
    otherwise idle. At some number of jobs I get best real time
    and with larger number of jobs overheads due to multiple jobs
    seem to dominate leading to increase in real time. If number
    of jobs is too high I get slowdown due to lack of real memory.

    On a 12-core machine (24 logical cores) I use '-j 20'. Increasing the
    number of jobs gives a slightly faster build, but the difference is
    small, so I prefer to have more cores available for interactive
    use.

    Of course, that is balancing tradeoffs; your builds may have
    different characteristics than mine. I just wanted to say
    that _sometimes_ going beyond the number of cores is useful.
    IIUC what Bart wrote, he got a 3 times speedup using '-j 3'
    on a two-core machine, which is an unusually good speedup. IME
    normally 3 jobs on a 2-core machine is neutral or gives a small
    speedup. OTOH with hyperthreading, activating a logical core

    Make sure to avoid false sharing when using hyperthreading... :^o


    may slow down its twin. Consequently, using fewer jobs than
    logical cores may be better.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Thu Oct 30 18:16:43 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 30/10/2025 23:44, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    What do you mean by incremental rebuilding? I usually talk about
    /independent/ compilation.

    Then incremental builds might be about deciding which modules to
    recompile, except that that is so obvious, you didn't give it a name.

    Compile the one file you've just edited. If it might impact on any
    others (you work on a project for months, you will know it
    intimately), then you just compile the lot.
    I'll assume that was a serious question. Even if you don't care,
    others might.
    [...]

    I never came across any version of 'make' in the DEC OSes I used in
    the 1970s, nor did I see it in the 1980s.

    In any case it wouldn't have worked with my compiler, as it was not a discrete program: it was memory-resident together with an editor, as
    part of my IDE.

    This helped to get fast turnarounds even on floppy-based 8-bit systems.

    Plus, I wouldn't have felt the issue was of any great importance:
    [...]

    You asked what incremental building means. I told you. Your only
    response is to let us all know that you don't find it useful.

    I think we all already knew that.

    I assumed (a) that you didn't already know what incremental building
    means and (b) that you wanted to know. That's why I posted my answer to
    your question.

    I don't recall ever seeing you react positively to someone giving you information that you've asked for. Instead, you tend to use the answer
    as an opportunity to tell us all that whatever concept you were asking
    about is not useful to you.

    Did you ask what incremental building means because you wanted to know?

    Should I assume that every question you ask is rhetorical?

    And a minor point: In the quoted text in your followup, the blank lines
    between paragraphs in what I wrote were deleted. Please don't do that.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 31 01:22:30 2025
    From Newsgroup: comp.lang.c

    On 31/10/2025 00:28, Kaz Kylheku wrote:
    On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
    If I were developing a compiler, I would not advertise any kind of
    lines-per-second value. It is a totally useless metric - as useless as
    measuring developer performance on the lines of code he/she writes per day.

    If that were your only advantage, you'd have to flaunt it.

    "[[ Our compiler emits lousy code, emits only half the required ISO diagnostics (and those are all there are), and is compatible with only
    75% of your system's header files, and 80% of the ABI, but ...]] have you seen the raw speed in lines per second?"


    How would Turbo C compare then?

    Anyway, C is often used as a target for compilers of other languages.

    There, it should be validated code, and so needs little error checking.
    It might not even use any headers (my generated C doesn't).

    The main requirement is that after the front-end compiler has generated
    the C, taking some fraction of a second, it doesn't immediately hit a
    brick wall if it tries to use a substantial product like gcc for the
    next stage.

    Here, optimisation is less important (unless the generated code is hopelessly
    poor). But it's quite possible to choose between a fast backend compiler
    for routine builds, and a slower optimising one for production.

    In fact, you can use this approach anyway even if directly coding in C:
    use a fast compiler most of the time, and a slower one for a periodic
    check or when you need the better code.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 31 01:36:36 2025
    From Newsgroup: comp.lang.c

    On 31/10/2025 01:16, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 23:44, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    What do you mean by incremental rebuilding? I usually talk about
    /independent/ compilation.

    Then incremental builds might be about deciding which modules to
    recompile, except that that is so obvious, you didn't give it a name.

    Compile the one file you've just edited. If it might impact on any
    others (you work on a project for months, you will know it
    intimately), then you just compile the lot.
    I'll assume that was a serious question. Even if you don't care,
    others might.
    [...]

    I never came across any version of 'make' in the DEC OSes I used in
    the 1970s, nor did I see it in the 1980s.

    In any case it wouldn't have worked with my compiler, as it was not a
    discrete program: it was memory-resident together with an editor, as
    part of my IDE.

    This helped to get fast turnarounds even on floppy-based 8-bit systems.

    Plus, I wouldn't have felt the issue was of any great importance:
    [...]

    You asked what incremental building means. I told you. Your only
    response is to let us all know that you don't find it useful.

    Actually I didn't mention 'make'. I said what I thought it meant, and I expanded on that in my reply to you.

    You mentioned 'make', and I also explained why it wouldn't have been any
    good to me.

    In any case, you still have to give that dependency information to
    'make', and maintain it, as well as all info about the constituent files
    of the project.

    Since I used project files from a very early stage, much of that
    information is already present (and is used to browse the source files
    and to do full compiles and linking).

    If I wanted automatic dependency handling, then it would have made sense
    to add that to the project file, rather than use an external tool with
    arcane syntax.
    syntax.

    The project file also had the task of doing test runs of the
    application, applying suitable inputs, and at one point, also dealing
    with overlays.

    Sometimes, the generated program was downloaded to a separate
    microprocessor in order to test on bare hardware.

    The picture I'm giving is that there was lots going on, centrally
    controlled, compared with the minor aspects that a makefile could help
    with, but which would have needed a duplicate lot of information.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Thu Oct 30 19:13:17 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 31/10/2025 01:16, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 23:44, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    What do you mean by incremental rebuilding? I usually talk about
    /independent/ compilation.

    Then incremental builds might be about deciding which modules to
    recompile, except that that is so obvious, you didn't give it a name.
    Compile the one file you've just edited. If it might impact on any
    others (you work on a project for months, you will know it
    intimately), then you just compile the lot.
    I'll assume that was a serious question. Even if you don't care,
    others might.
    [...]

    I never came across any version of 'make' in the DEC OSes I used in
    the 1970s, nor did I see it in the 1980s.

    In any case it wouldn't have worked with my compiler, as it was not a
    discrete program: it was memory-resident together with an editor, as
    part of my IDE.

    This helped to get fast turnarounds even on floppy-based 8-bit systems.

    Plus, I wouldn't have felt the issue was of any great importance:
    [...]
    You asked what incremental building means. I told you. Your only
    response is to let us all know that you don't find it useful.

    Actually I didn't mention 'make'. I said what I thought it meant, and
    I expanded on that in my reply to you.

    You mentioned 'make', and I also explained why it wouldn't have been
    any good to me.

    "make" is probably the most common tool that supports incremental
    building, and certainly the one I'm most familiar with. There
    are other tools that have similar support (many of them are built on top
    of "make"). The idea of incremental building isn't as tightly tied to
    "make" as I might have suggested.

    In any case, you still have to give that dependency information to
    'make', and maintain it, as well as all info about the constituent
    files of the project.

    Makefiles are commonly generated automatically.

    I asked you several questions, that you quietly snipped. I'll assume
    you refuse to answer them.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 31 02:14:00 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 23:49, bart wrote:
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    Maybe there /is/ something wrong with your machine or setup.  If you
    have a 2 core machine, it is presumably a low-end budget machine from
    perhaps 15 years ago.  I'm all in favour of keeping working systems
    and I strongly disapprove of some people's two or three year cycles
    for swapping out computers, but there is a balance somewhere.  With
    such an old system, I presume you also have old Windows (my office
    Windows machine is Windows 7), and thus the old and very slow style of
    WSL. That, I think, could explain the oddities in your timings.

    The machine is from 2021. It has an SSD, 8GB, and runs Windows 11. It
    uses WSL version 2.

    It is fast enough for my 40Kloc compiler to self-host itself repeatedly
    at about 15Hz (ie. produce 15 new generations per second). And that is
    using unoptimised x64 code:

      c:\mx2>tim ms ms ms ms ms ms ms ms ms ms ms ms ms ms ms hello
      Hello, World
      Time: 1.017

    Hmm, I'm only counting 14 'ms' after the first. So apologies, it is only 14Hz!

    That timing is from the current compiler. The more streamlined one I'm
    working on now (where the IL plays a smaller role) can manage 16Hz; 14% faster.

    There are a few sluggish areas I want to look at.

    And yes it is more of a sport now than a real need.

    My compilers ought to be slow as they have so many passes. Tcc
    supposedly has only one. So another project I might have a go at is a single-pass C compiler that is faster than Tcc.

    Just to see how fast I can go at producing native code. However, if the
    code is too poor, there will be lots of it, and it will slow down the
    latter stages. I'll have to see.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Fri Oct 31 04:37:47 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> wrote:
    On 27/10/2025 13:39, Janis Papanagnou wrote:
    On 27.10.2025 13:50, bart wrote:
    On 27/10/2025 02:08, Janis Papanagnou wrote:
    On 26.10.2025 12:26, bart wrote:

    Speed is not an end in itself. It must be valued in comparison with
    all the other often more relevant factors (that you seem to completely
    miss, even when explained to you).
    Speed seems to be important enough that huge efforts have gone into
    creating the best optimising compilers over decades.

    Fantastically complex products like LLVM exist, which take 100 times
    longer to compile code than a naive compiler, in order to eke out the
    last bit of performance.

    Similarly, massive investment has gone into making dynamic languages
    fast, like the state-of-the-art products used in running JavaScript, or
    the numerous JIT approaches used to accelerate languages like Python and Ruby.

    Build-speed is taken seriously enough, and most 'serious' compilers are
    slow enough, that complex build systems exist, which use dependencies in order to avoid compilation as much as possible.

    Or failing that, by parallelising builds across multiple cores, possibly even across distributed machines.

    So, fortunately some people take this stuff more seriously than you do.

    I am also involved in this field, and my experimental work takes the approach of simplicity to achieve results.



    I know your goals are space and speed. And that's fine in principle
    (unless you're ignoring other relevant factors).

    LLVM is a backend project which is massively bigger, more complex and
    slower (in build speed) than my stuff, by a number of magnitudes in each case.

    The resulting code however, might only be a fraction of a magnitude
    faster (for example the 0.3 vs 0.4 timings above, achieved via gcc, but
    LLVM would be similar).

    And that's if you apply the optimiser, which I would only use for
    production builds, or for benchmarking. Otherwise its code is just as
    poor as mine, or worse, but it still takes longer to build stuff!

    For me the trade-offs of a big, cumbersome product don't work. I like my near-zero builds and can work more spontaneously!

    It's still at least FIVE TIMES FASTER than A68G! [2-3 TIMES FASTER]

    So what? - I don't need a Lua system. So why should I care.

    You are the one who seems to think that the speed factor is the most
    important factor to choose a language for a project. - You are wrong
    for the general case. (But it may be right for your personal universe,
    of course.)

    You are wrong. What language do you use most? Let's say it is C
    (although you usually post about every other language except C!).

    Then, suppose your C compiler was written in Python rather than C++ or
    whatever, and run under CPython. What do you think would happen to your
    build times?

    Now imagine further if the CPython interpreter was itself written and
    executed with CPython.

    So, the 'speed' of a language (ie. of its typical implementation, which
    also depends on the language design) does matter.

    If speed wasn't an issue then we'd all be using easy dynamic languages
    for productivity. In reality those easy languages are far too slow in
    most cases.

    It is not clear what you mean by "easy dynamic languages". Take
    Objective Caml as an example. It has a read-eval-print loop, so
    you can do interactive development, so it is quite "dynamic".
    It seems that most development is done using the bytecode interpreter,
    but there is also an optimizing compiler which can boast about
    its benchmark results. OTOH, it has a strict type system and
    you need to stick to its rules. So not entirely easy, but
    if you write correct code the compiler will automatically assign
    types, so you can avoid writing any type declarations.
    Do you consider Objective Caml an "easy dynamic language"?

    Concerning choice of language: much code is in big programs, and
    it is quite expensive to rewrite such a program in a different
    language. So, simply, a lot of coding uses the same language
    in which the program is written. It is also common to write new
    parts in a different language, but then the new language must
    be link-compatible with the old one. For example, classic
    algorithmic languages like Fortran, Cobol, Algol, Pascal, C,
    Modula 2 or Ada used to have link-compatible implementations:
    with modest effort one can call routines in one language from
    the other.

    For me correctness of programs is important and I find
    static typing quite helpful in improving correctness.
    First, compile time type checks catch a lot of errors
    that would otherwise require extensive tests to catch.
    In other words, errors are caught earlier than say with
    dynamic typing. Second, well defined types serve as
    documentation of used data structures and interfaces.
    This makes my thinking about the program clearer, which
    tends to reduce the mistakes that I make.

    A lot of folks consider types to be hard things and
    associate "easy language" with one which is dynamically
    typed, or, worse, essentially untyped.

    My experience with dynamically typed languages is that in a
    largish program there may be essentially nonsense code which
    is not executed in normal operation. But rarely it does get
    executed, leading to crashes or nonsense results. If you have a
    small project with good developers then this is less of a
    problem. Also, practices like "you add tests first and can
    only add code to fix a failing test" help. But for a large
    project with average or below-average abilities and a
    requirement of high-quality code, management typically wants
    every possible way to monitor and increase quality. Which
    frequently means using a statically typed language.

    So, compatibility with existing code may lead to use of an
    otherwise suboptimal language. Quality is usually the opposite
    of "easy" and may lead to use of a statically typed language.

    OTOH, for me the biggest productivity boost comes from garbage
    collection. But AFAIK the problem of cooperation between garbage
    collectors does not have a satisfactory solution. More precisely,
    if memory use were not a concern, then one could simply
    turn C 'free' into a no-op (and do a similar thing in other
    languages). Consequently, there would be no garbage
    collection and no problem of incompatibility between garbage
    collectors. But with an easy style of programming, a program
    can easily allocate, say, a gigabyte per second. If the program
    needs to run for a long time one would get ridiculously large
    memory use. So for short-running programs one can sometimes
    tolerate the lack of deallocation and garbage collection;
    such a program simply uses a few times more memory than it should.
    But for longer-running programs you can get unbounded growth
    of memory use, which is usually unacceptable.
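
    (A minimal sketch of the "never deallocate" idea above, with made-up
    wrapper names, purely for illustration: route calls through a wrapper
    whose free does nothing. Fine for short-lived programs; a long-running
    one grows without bound, as noted.)

        #include <stdlib.h>

        void *xmalloc(size_t n) { return malloc(n); }  /* hypothetical wrapper */
        void  xfree(void *p)    { (void)p; }           /* deliberately a no-op */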

    Also, a big attraction of popular mainstream languages is their
    large libraries, so that many standard tasks are just
    a library call or a combination of a small number of calls.
    But somebody must write those libraries and deal with the
    difficulties. And frequently "easy" languages use libraries
    where the "hard" parts are written in "hard" languages.

    The tools I'm using for my personal purposes, and those that I had been using for professional purposes, all served the necessary requirements. Yours don't.

    I'm just showing just how astonishingly fast modern hardware can be.
    Like at least a thousand times faster than a 1970s mainframe, and yet
    people are still waiting on compilers!

    It has been explained to you many times already, by many people, that
    differences in compile time may not outweigh other, more relevant factors.

    I've also explained that I work in very frequent edit-run cycles. Then compile-times matter. This is why many people like to use scripting languages,
    as those don't have a discernible build step.

    But I can use my system language, *or* C via my compiler, just like a scripting language.

    You will now find various projects that apply JIT techniques to such languages in an effort to provide a similar experience. (I don't need
    such techniques as my AOT compilers already work near-instantly.)

    Good that this works for you. I recently worked on an old version
    of binutils for a niche target. To make it work I needed to
    resolve some problems. The first problem was that somebody thought
    that adding '-Werror' to distributed code is a good idea. But I
    used a new version of gcc which implemented new warnings, and the
    old code failed to compile because it triggered new warnings which
    '-Werror' turned into errors. I tried tcc, and compilation
    using tcc worked fine. But there were bugs in binutils.
    Trying to fix/work around them I got nonsense results. After
    wasting some time I realized that the debug info generated by tcc
    was wrong, or at least incompatible with the debugger (that is,
    gdb). So I used a wrapper around gcc which discarded '-Werror' and
    then compilation worked fine and I was able to debug binutils,
    locate the problematic lines and implement a workaround.
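
    (A minimal sketch of that kind of wrapper, shown in C rather than
    shell and with made-up names - not the actual wrapper used: it just
    drops any bare '-Werror' argument and execs the real compiler.)

        /* gcc-nowerror.c - hypothetical wrapper: strip -Werror, run gcc */
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            char **args = malloc((argc + 1) * sizeof *args);
            int n = 0;

            args[n++] = "gcc";
            for (int i = 1; i < argc; i++)
                if (strcmp(argv[i], "-Werror") != 0)  /* doesn't catch -Werror=... */
                    args[n++] = argv[i];
            args[n] = NULL;
            execvp("gcc", args);   /* returns only on failure */
            return 127;
        }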

    In all this I had to build binutils several times, so a faster
    build would help. But I probably wasted more time due to the tcc
    debug-info problem than waiting for builds. Simply, if
    I need to debug a program, then the speed difference between
    gcc and tcc is less important to me than debug info.
    gcc would have to be much slower than it is to make a difference.
    Debugging without proper support simply takes time, and
    at its current speed gcc with working debug info gives
    me a net saving compared to a faster compiler without working
    debug info.

    Note, if needed I can debug without a debugger. But to
    get information which is almost immediate using a debugger
    I would usually need several edit-run cycles. Even with
    compile time reduced to 0 the edit-run part is likely
    to take more time than using a debugger.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Fri Oct 31 07:44:43 2025
    From Newsgroup: comp.lang.c

    On 30.10.2025 21:21, Keith Thompson wrote:
    [...]

    The data structure that defines the '-j' option in the GNU make
    source is:

    static struct command_switch switches[] =
      {
        // ...
        { 'j', positive_int, &arg_job_slots, 1, 1, 0, 0, &inf_jobs,
          &default_job_slots, "jobs", 0 },
        //...
      };

    Yes, it's odd that "-j" may or may not be followed by an argument.
    The way it works is that if the following argument exists and is
    (a string representing) a positive integer, it's taken as "-j N",
    otherwise it's taken as just "-j".

    Incidentally, in a recent (< 2 years old) "C" program I needed
    a lot of options to control the software. When I looked into the
    man page of getopt(3) in my GNU/Linux environment I noticed the
    "optional optarg" capability of this 'getopt' version and I used
    it deliberately, for good reasons. - The opt-string specification
    for this feature was done with a double colon, as defined in
        "s::d:f:r:g::u:a::m::kt::lqj::p::nci:o:"
    for the program syntax
        [-s[wxh]] [-d density] [-f pattern] [-r seed] [-g[ngen]] [-u rule]
        [-a[gen]] [-m[rate]] [-k|-t[sec]|-l|-q] [-j[n]] [-p[symbol]|-n|-c]
        [-i infile] [-o outfile]
    The disambiguation with program arguments or other options was done
    by writing _no space_ between the option letter and the optional
    option-argument. So you could write, e.g., -j, or -j1, but not -j 1
    (for those options that could have optional arguments).

    I cannot tell, though, whether GNU make did use this getopt feature
    similarly (or whether it had coded some ad hoc heuristic parsing).
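
    (As a minimal sketch - not the actual program, and the names and the
    "0 means maximum" convention are made up - an optional option-argument
    handled with GNU getopt's "::" extension could look like this:)

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>    /* getopt, optarg; "j::" is a GNU extension */

        int main(int argc, char **argv)
        {
            int jobs = 1;                         /* default: one job */
            int opt;
            while ((opt = getopt(argc, argv, "j::")) != -1) {
                if (opt == 'j')
                    /* "-jN" (no space) sets optarg; a bare "-j" leaves it NULL */
                    jobs = optarg ? atoi(optarg) : 0;  /* 0 = "use maximum" here */
                else {
                    fprintf(stderr, "usage: %s [-j[n]]\n", argv[0]);
                    return 1;
                }
            }
            printf("jobs = %d\n", jobs);
            return 0;
        }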


    A make argument that's not an option is called a "target"; for
    example in "make -j 4 foo", "foo" is the target. A target whose name
    is a positive integer is rare enough that the potential ambiguity
    is almost never an issue. If it is, you can use the long form:
    "make --jobs" or "make --jobs=N".

    I think it would have been cleaner if the argument to "-j" had
    been mandatory, with an argument of "0", "-1", or "max" having
    some special meaning. But changing it could break existing scripts
    that invoke "make -j" (though as I've written elsethread, "make -j"
    can cause problems).

    I agree that having an explicit option argument would be clearer.

    In my case above (and I don't know about the 'make' case discussed
    in this thread) the -j had different semantics from -j0 (or such); I
    needed both possibilities. So the alternative would have been (for me)
    to add another (unrelated) option name from the very few remaining
    letters, and the choice would then have been arbitrary/non-mnemonic.
    (For reasons I also didn't want to introduce long option names.)


    It would also have been nice if the "make -j $(nproc)" functionality
    had been built into make.

    Yes. - This is actually how I'd have (with GNU 'getopt') designed it;
    make (one instance), make -j (use max. available), make -j N (use N).

    (Personally I dislike using the "C" programming pattern '-1' on the
    user interface level to indicate "maximum" or some such.)

    The existing behavior is a bit messy, but it works, and I've never
    run into any actual problems with the way the options are parsed.

    (I've never had any speed issues with make, so I've never used -j;
    even though it comes "for free". - But I've also no 64 kernel CPUs or
    MLOC-sized projects at home.)

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Fri Oct 31 07:49:25 2025
    From Newsgroup: comp.lang.c

    On 31.10.2025 07:44, Janis Papanagnou wrote:

    (I've never had any speed issues with make, so I've never used -j;
    even though it comes "for free". - But I've also no 64 kernel CPUs or
    MLOC-sized projects at home.)

    Oops! - s/kernel/core/

    Janis


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Fri Oct 31 09:31:29 2025
    From Newsgroup: comp.lang.c

    On 30.10.2025 23:01, David Brown wrote:
    On 30/10/2025 18:49, bart wrote:
    [...]

    If I were developing a compiler, I would not advertise any kind of lines-per-second value. It is a totally useless metric -

    It's good enough for marketing.

    as useless as measuring developer performance on the lines of code
    he/she writes per day.

    Which sadly had been done (maybe still?) by the less enlightened
    instances of management.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From tTh@tth@none.invalid to comp.lang.c on Fri Oct 31 10:29:12 2025
    From Newsgroup: comp.lang.c

    On 10/31/25 02:22, bart wrote:

    Anyway, C is often used as a target for compilers of other languages.

    There, it should be validated code, and so needs little error checking.
    It might not even use any headers (my generated C doesn't).

    s/should/MUST/
    --
    ** **
    * tTh des Bourtoulots *
    * http://maison.tth.netlib.re/ *
    ** **
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From tTh@tth@none.invalid to comp.lang.c on Fri Oct 31 11:52:57 2025
    From Newsgroup: comp.lang.c

    On 10/30/25 23:37, David Brown wrote:

    There certainly are makefile builds that might not work correctly with parallel builds.  And I think you are right that this is typically a dependency specification issue, and that generating dependencies automatically in some way should have lower risk of problems.

    I have encountered a case where two actions run in parallel
    overwrote a badly named temp file; the same file for two processes
    is definitely wrong :(
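
    (A common fix - not from the post - is to let each process pick its
    own unique temporary file name, e.g. with mkstemp(3), instead of
    hard-coding one shared name.)

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
            char name[] = "/tmp/buildstep-XXXXXX"; /* XXXXXX replaced per process */
            int fd = mkstemp(name);
            if (fd == -1) { perror("mkstemp"); return 1; }
            dprintf(fd, "intermediate data\n");
            printf("wrote %s\n", name);
            close(fd);
            return 0;
        }
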
    --
    ** **
    * tTh des Bourtoulots *
    * http://maison.tth.netlib.re/ *
    ** **
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Fri Oct 31 13:15:05 2025
    From Newsgroup: comp.lang.c

    On Fri, 31 Oct 2025 01:22:30 +0000
    bart <bc@freeuk.com> wrote:

    On 31/10/2025 00:28, Kaz Kylheku wrote:
    On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
    If I were developing a compiler, I would not advertise any kind of
    lines-per-second value. It is a totally useless metric - as
    useless as measuring developer performance on the lines of code
    he/she writes per day.

    If that were your only advantage, you'd have to flout it.

    "[[ Our compiler emits lousy code, emits only half the required ISO diagnostics (and those are all there are), and is compatible with
    only 75% of your system's header files, and 80% of the ABI, but
    ...]] have you seen the raw speed in lines per second?"


    How would Turbo C compare then?


    Turbo C implemented the majority of C89/C90 years (like 3-5 years in some
    cases) ahead of many so-called "serious" C compilers.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From richard@richard@cogsci.ed.ac.uk (Richard Tobin) to comp.lang.c on Fri Oct 31 11:43:46 2025
    From Newsgroup: comp.lang.c

    In article <20251030172415.416@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    If that were your only advantage, you'd have to flout it.

    Flaunt.

    -- Richard
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Fri Oct 31 13:10:38 2025
    From Newsgroup: comp.lang.c

    On 31/10/2025 00:23, bart wrote:

    People into compilers are obsessed with optimisation. It can be a
    necessity for languages that generate lots of redundant code that needs
    to be cleaned up, but not so much for C.

    Typical differences between -O0 and -O2 compiled code can be 2:1.

    However even the most terrible native code will be a magnitude faster
    than interpreted code.


    You live in a world of x86 (with brief visits to 64-bit ARM). You used
    to work with smaller processors and lower level code, but seem to have forgotten that long ago.

    A prime characteristic of modern x86 processors is that they are
    extremely good at running extremely bad code. They are targeted at
    systems where being able to run old binaries is essential. A great deal
    of the hardware in an x86 cpu core is there to handle poorly optimised
    code - lots of jumps and function calls get predicted and speculated,
    data that is pushed onto and pulled off the stack gets all kinds of fast
    paths and short-circuits, and so on. And then there is the memory - if
    code has to wait for data from ram, the cpu can happily execute hundreds
    of cycles of unnecessary unoptimised code without making any difference
    to the final speed.

    Big ARM processors - such as on Pi's - have the same effects, though to
    a somewhat lesser extent.

    A prime characteristic of user programs on PC's and other "big" systems
    is that a lot of the time is spent doing things other than running the
    user code - file I/O, screen display, OS calls, or code in static
    libraries, DLLs (or SOs), etc. That stuff is completely unaffected by
    the efficiency of the user code - that's why interpreted or VM code is
    fast enough for a very wide range of use-cases.

    And if you are working with Windows systems with an MS DLL for the C
    runtime library (as used by some C toolchains on Windows, but not all),
    then you can get more distortions. If you have a call to memcpy that
    uses an external DLL, that is going to take perhaps 500 clock cycles
    even for a small fixed size of memcpy (assuming all code and data is in cache). The user code for the call might be 10 cycles or 20 cycles
    depending on the optimisation - compiler optimisation makes no
    measurable difference here. But if the toolchain uses a static library
    for memcpy and can optimise locally to replace the call, the static call
    to general memcpy code might take 200 cycles while the local code takes
    10 cycles. Suddenly the difference between optimising and
    non-optimising is huge.
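
    (To make that concrete - a generic illustration, not code from this
    thread: a small fixed-size copy like the one below is the kind of call
    an optimising toolchain can expand inline instead of routing through a
    general-purpose memcpy in a DLL or static library.)

        #include <string.h>

        struct point { double x, y; };

        void copy_point(struct point *dst, const struct point *src)
        {
            /* the size is known at compile time (16 bytes here), so gcc or
               clang will typically inline this as a couple of moves when
               optimising, rather than calling the library routine */
            memcpy(dst, src, sizeof *dst);
        }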

    Then there is the type of code you are dealing with. Some code is very
    cpu intensive and can benefit from optimisations, other code is not.

    And optimisation is not just a matter of choosing -O0 or -O2 flags. It
    can mean thought and changes in the source code (some standard C
    changes, like use of "restrict" parameters, some compiler-specific
    changes like gcc attributes or builtins, and some target specific like organising data to fit cache usage). And it can mean careful flag
    choices - different specific optimisations suitable for the code at
    hand, and target related flags for enabling more target features. I am entirely confident that you have done none of these things when
    testing. That's not necessarily a bad thing in itself, when looking at
    widely portable source compiled to generic binaries, but it gives a very unrealistic picture of compiler optimisations and what can be achieved
    by someone who knows how to work with their compiler.
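
    (For instance, "restrict" is a purely source-level change of this kind -
    a generic sketch, not code from the thread: it tells the compiler the
    two arrays cannot overlap, which can allow wider loads and stores
    without runtime overlap checks.)

        void scale(float *restrict dst, const float *restrict src,
                   int n, float k)
        {
            /* with restrict the compiler may vectorise freely; without it,
               it must allow for dst and src aliasing each other */
            for (int i = 0; i < n; i++)
                dst[i] = k * src[i];
        }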


    All this conspires to give you this 2:1 ratio that you regularly state
    for the difference between optimised code and unoptimised code - gcc -O2
    and gcc -O0.


    In reality, people can often achieve far greater ratios for the type of
    code where performance matters and where it is achievable.  Someone
    working on game engines on an x86 would probably expect at least 10
    times difference between the flags they use, and no optimisation flags.
    For the targets I use, which are (generally) not super-scalar,
    out-of-order, etc., five to ten times difference is not uncommon. And
    when you throw C++ or other modern languages into the mix (remember, gcc
    and clang/llvm are not simple C compilers), the benefits of inlining and
    other inter-procedural optimisations can easily be an order of
    magnitude. (This is one reason why gcc and clang enable a number of optimisations, including at least inlining of functions marked
    appropriately, even with no optimisation flags specified.)


    You can continue to believe that high-end toolchains are no more than
    twice as good as your own compiler or tcc, if you like. (And they give
    you all the performance and features that you need, fine.) Those of us
    who want more from our tools, and know how to get it, know better.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Fri Oct 31 13:16:44 2025
    From Newsgroup: comp.lang.c

    On 31/10/2025 01:28, Kaz Kylheku wrote:
    On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
    If I were developing a compiler, I would not advertise any kind of
    lines-per-second value. It is a totally useless metric - as useless as
    measuring developer performance on the lines of code he/she writes per day.

    If that were your only advantage, you'd have to flout it.

    "[[ Our compiler emits lousy code, emits only half the required ISO diagnostics (and those are all there are), and is compatible with only
    75% of your system's header files, and 80% of the ABI, but ...]] have you seen the raw speed in lines per second?"


    I have seen, and even used, compilers that would fit that description
    quite well :-( Usually, however, the flouted advantage is not the raw
    speed, but support for a microcontroller target that no one else
    supports. Oh, and generally they could add "costs a ridiculous price"
    to the list.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 31 12:39:59 2025
    From Newsgroup: comp.lang.c

    On 30/10/2025 16:22, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 14:13, Scott Lurndal wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    bart <bc@freeuk.com> wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:


    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:
    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    Those numbers indicate that there is something wrong with your
    machine. The sum of the second and third lines above gives CPU time.
    Real time is twice as large, so something is slowing things down.
    One possible trouble is having too little RAM; then the OS is swapping
    data to/from disc. Some programs do a lot of random I/O, that
    can be slow on spinning disc, but SSD-s usually are much
    faster at random I/O.

    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    FYI, reasonably typical report for normal make (without -j
    option) on my machine is:

    real 0m4.981s
    user 0m3.712s
    sys 0m0.963s


    Just for grins, here's a report for a full rebuild of a real-world project that I build regularly. Granted most builds are partial (e.g. one or
    two source files touched) and take far less time (15 seconds or so,
    most of which is make calling stat(2) on a few hundred source files
    on an NFS filesystem). Close to three million SLOC, mostly in header
    files. C++.


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so

    Well, I've done a couple of small tests.

    The first was in generating 200 'small' DLLs - duplicates of the same
    library. This took 6 seconds to produce 200 libraries of 50KB each (10MB total). Each library is 5KB as it includes my language's standard libs.

    The second was to compile a single program of 7.5MB. This was done by
    taking one 300KB project and duplicating one of the bigger source
    modules a large number of times (130 copies for the 4.5MB result).

    However that ran into some problems; possibly, running out of memory (I
    have 6GB available), or something. In any case it's not worth my time
    looking at it right now.

    I did manage to produce a 4.5MB executable, and that took about 1
    second. The total source code was 500K (about 9 bytes per source line;
    how about that!)

    To summarise:

    Generate 200 x 50KB DLLS: 6 seconds (1.7MB/s) (1000Kloc so 170Klps)
    Generate 1 x 4.5MB EXE: 1 second (4.5MB/s) (500Kloc so 500Klps)

    This is on a machine that David Brown suggested was hopelessly old and
    slow. All source code compiled was in my language.

    I then did the same test using an existing C port of that library, with:

    gcc -O0 -s -shared libnnn.c -o libnnn.dll

    It took 72 seconds, with each DLL now being 100KB. Source code is the
    bare library so only 1.7Kloc, giving a throughput of 4.7Klps.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Fri Oct 31 13:43:20 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 30/10/2025 23:44, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:

    <snip accurate description of make(1) semantics>

    This is likely to give you working executables substantially
    faster than if you did a full rebuild. It's more useful while
    you're developing and updating a project than when you download
    the source and build it once.

    I never came across any version of 'make' in the DEC OSes I used in the 1970s, nor did I see it in the 1980s either.

    Unix provided make in the 1970s, on DEC hardware.


    In any case it wouldn't have worked with my compiler, as it was not a discrete program: it was memory-resident together with an editor, as
    part of my IDE.

    This helped to get fast turnarounds even on floppy-based 8-bit systems.

    The programs[*] I worked on in the 70s and 80s couldn't have been compiled
    on floppy-based 8-bit systems.

    [*] Master Control Program (MCP), for example.

    We had a program called WFL (Work Flow Language) which could be used
    to automate MCP builds, which would rebuild only the modules that
    changed then run the binder (linker) to create the MCP binary.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Fri Oct 31 13:48:21 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> writes:
    On 30/10/2025 21:37, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Try "time make -j" as a simple step.
    [...]

    In my recent testing, "make -j" without a numeric argument (which
    tells make to run as many parallel steps as possible) caused my
    system to bog down badly. This was on a fairly large project (I used
    vim); it might not be as much of a problem with a smaller project.

    I've found that "make -j $(nproc)" is safer. The "nproc" command
    is likely to be available on any system that has a "make" command.

    It occurs to me that "make -j N" can fail if the Makefile does
    not correctly reflect all the dependencies. I suspect this is
    less likely to be a problem if the Makefile is generated rather
    than hand-written.


    There certainly are makefile builds that might not work correctly with parallel builds. And I think you are right that this is typically a dependency specification issue, and that generating dependencies automatically in some way should have lower risk of problems. I think
    it is also typically on older makefiles - from the days of single core machines where "make -j N" was not considered - that had such issues.


    Hence the development of tools like 'mkdepend' (from X11, IIRC).

    Modern gcc includes all the support necessary to generate
    dependency files used by make(1) to reduce [re-]build times.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Fri Oct 31 13:57:20 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 30/10/2025 16:22, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so

    Well, I've done a couple of small tests.

    Pointlessly.


    The first was in generating 200 'small' DLLs - duplicates of the same library. This took 6 seconds to produce 200 libraries of 50KB each (10MB total). Each library is 5KB as it includes my language's standard libs.

    The shared object 'text' size ranges from 500KB to 14MB.

    Your toy projects aren't representative of real world application
    development. Can you not understand that?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 31 14:55:49 2025
    From Newsgroup: comp.lang.c

    On 31/10/2025 13:57, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 16:22, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since
    they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so

    Well, I've done a couple of small tests.

    Pointlessly.


    The first was in generating 200 'small' DLLs - duplicates of the same
    library. This took 6 seconds to produce 200 libraries of 50KB each (10MB
    total). Each library is 5KB as it includes my language's standard libs.

    The shared object 'text' size ranges from 500KB to 14MB.

    Well, I asked for some figures, and they were lacking. And here, the
    14MB figure contradicts the 7.5MB you mentioned above as the largest object.


    Your toy projects aren't representative of real world application development. Can you not understand that?

    I don't believe you. Clearly my tests show that basic conversion of HLL
    code to native code can be easily done at several MB per second even on
    my low-end hardware - per core.

    If your tests have an effective throughput far below that, then either
    you have very slow compilers, or are doing a mountain of work unrelated
    to compiling, or the orchestration of the whole process is poor, or some combination.

    (You mentioned there are nearly 400 developers involved? It sounds like
    a management problem.

    Perhaps you should employ someone whose job it is to look at the big
    picture, and to get those iteration times down.)

    In any case, the tasks I want to build are nothing like that, yet there
    is at least 2 magnitudes difference in build-time between my 'toy'
    tools, and all that Unix stuff that you are all trying to force down my throat.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 31 16:34:22 2025
    From Newsgroup: comp.lang.c

    On 31/10/2025 12:10, David Brown wrote:
    On 31/10/2025 00:23, bart wrote:

    People into compilers are obsessed with optimisation. It can be a
    necessity for languages that generate lots of redundant code that
    needs to be cleaned up, but not so much for C.

    Typical differences between -O0 and -O2 compiled code can be 2:1.

    However even the most terrible native code will be a magnitude faster
    than interpreted code.


    You live in a world of x86 (with brief visits to 64-bit ARM).  You used
    to work with smaller processors and lower level code, but seem to have forgotten that long ago.

    A prime characteristic of modern x86 processors is that they are
    extremely good at running extremely bad code.

    Yes. And? That means compilers don't need to be so clever!


    They are targeted at
    systems where being able to run old binaries is essential.  A great deal
    of the hardware in an x86 cpu core is there to handle poorly optimised
    code - lots of jumps and function calls get predicted and speculated,
    data that is pushed onto and pulled off the stack gets all kinds of fast paths and short-circuits, and so on.  And then there is the memory - if code has to wait for data from ram, the cpu can happily execute hundreds
    of cycles of unnecessary unoptimised code without making any difference
    to the final speed.

    Big ARM processors - such as on Pi's - have the same effects, though to
    a somewhat lesser extent.

    A prime characteristic of user programs on PC's and other "big" systems
    is that a lot of the time is spent doing things other than running the
    user code - file I/O, screen display, OS calls, or code in static
    libraries, DLLs (or SOs), etc.  That stuff is completely unaffected by
    the efficiency of the user code - that's why interpreted or VM code is
    fast enough for a very wide range of use-cases.


    Yes. That's why interpreted/dynamic languages (those usually go
    together) are viable.

    When I first introduced interpreted scripting to my apps (35 years ago),
    I had a rough guideline that an interpreted version of a task should ideally be no worse than half the speed of 100% native code.

    My everyday text-editor is interpreted, and I routinely edit 1-million
    line files without noticing any lag.


    And if you are working with Windows systems with an MS DLL for the C
    runtime library (as used by some C toolchains on Windows, but not all),
    then you can get more distortions.  If you have a call to memcpy that
    uses an external DLL, that is going to take perhaps 500 clock cycles
    even for a small fixed size of memcpy (assuming all code and data is in cache).  The user code for the call might be 10 cycles or 20 cycles depending on the optimisation - compiler optimisation makes no
    measurable difference here.  But if the toolchain uses a static library
    for memcpy and can optimise locally to replace the call, the static call
    to general memcpy code might take 200 cycles while the local code takes
    10 cycles.  Suddenly the difference between optimising and non-
    optimising is huge.

    (My language has a 'clear' operator. Inline code is then generated for fixed-size objects.)

    Then there is the type of code you are dealing with.  Some code is very
    cpu intensive and can benefit from optimisations, other code is not.

    And optimisation is not just a matter of choosing -O0 or -O2 flags.

    To me, 'compiler'-optimisation means getting my program faster /without changing the source/. All I want to do is either enable or disable the
    option.

    A lot of my optimisations are to do with design choices in my language, special features it might provide, and design choices in the application.

    Anything that can be done in the compiler is a bonus, but I don't rely
    on it (other than the special case of generated C, see below).



      It
    can mean thought and changes in the source code (some standard C
    changes, like use of "restrict" parameters, some compiler-specific
    changes like gcc attributes or builtins, and some target specific like organising data to fit cache usage).


      And it can mean careful flag
    choices - different specific optimisations suitable for the code at
    hand, and target related flags for enabling more target features.

    It sounds like a lot of work. I used to just use inline assembly and be done
    with it!

      I am
    entirely confident that you have done none of these things when testing.  That's not necessarily a bad thing in itself, when looking at widely portable source compiled to generic binaries, but it gives a very unrealistic picture of compiler optimisations and what can be achieved
    by someone who knows how to work with their compiler.


    All this conspires to give you this 2:1 ratio that you regularly state
    for the difference between optimised code and unoptimised code - gcc -O2
    and gcc -O0.

    If I'm giving figures that compare gcc-O0 to gcc-O2, then clearly,
    everything else must remain the same. Otherwise why not compare two
    entirely different algorithms while we're at it.

    Anyway, I assume all that stuff you've mentioned has been incorporated
    into the A68G makefiles, and it's still a pretty slow interpreter!
    (Although probably the advanced features of the language don't help.)

    However, one thing I did try the other day was to take the generated
    makefile, and change the -O2 flag to -O0. Building it was a little
    faster (60s instead of 90s), but my benchmark ran in 13s instead of 5s,
    so 2.6:1.

    You seem to be suggesting the difference should be greater, but this is someone else's codebase, and someone else's set of compiler flags, other
    than the choice of -O0/-O2.

    So, while I understand what you're saying, that doesn't apply if you are building, running and measuring an existing codebase created by someone
    else.

    I *am* seeing figures of 2:1, or sometimes 3:1 or 4:1; the latter
    usually when someone is trying to be too clever with intensive use of
    macros that may hide too many nested functions, so that it needs inlining
    to get a respectable speed.



    In reality, people can often achieve far greater ratios for the type of
    code where performance matters and where it is achievable.  Someone working on game engines on an x86 would probably expect at least 10
    times difference between the flags they use, and no optimisation flags.
    For the targets I use, which are (generally) not super-scalar, out-of-order, etc., five to ten times difference is not uncommon.

    For the /applications/ I write (not silly benchmarks), and for x64, 2:1
    is typical, but this is comparing my compilers (a little better than
    gcc-O0), with gcc-O2.

    These are apps like compilers, assemblers and interpreters, which are computationally intensive (most code executed is within the program I've generated). On those, I usually get better than 2:1 for /programs I've written/, such as 1.5:1.

    It can be worse than 2:1 for C programs, especially other people's.

    But I have also seen up to 10:1 for my generated C code (18:1 below),
    which currently is very poor, where I /require/ optimisation to clean up redundancies.


      And when you
    throw C++ or other modern languages into the mix (remember, gcc and clang/llvm are not simple C compilers), the benefits of inlining and
    other inter-procedural optimisations can easily be an order of
    magnitude.  (This is one reason why gcc and clang enable a number of optimisations, including at least inlining of functions marked appropriately, even with no optimisation flags specified.)


    You can continue to believe that high-end toolchains are no more than
    twice as good as your own compiler or tcc, if you like.

    Here are examples of two C libraries:

    Jpeg decoder on 94MB image:

                               Ratio
    gcc -O2    4.4 seconds
    bcc        6.4 seconds     1.45 : 1
    tcc       10.6 seconds     2.41 : 1

    (The input file has been cached. Stopping after loading via fread takes
    0.08 seconds.)

    Calculate N digits of pi via my bignum library:

    gcc -O2    0.7 seconds
    bcc        1.6 seconds     2.3 : 1   (C version ported from M)
    mm         1.2 seconds     1.7 : 1   (using version in my language)
    tcc        1.9 seconds     2.7 : 1

    And here is the Lua interpreter running Fibonacci:

    gcc -O2    3.2 seconds
    gcc -O0   11.4 seconds     3.6 : 1
    bcc        7.3 seconds     2.3 : 1
    tcc       10.2 seconds     3.2 : 1

    This one is my interpreter also running the same Fibonacci test:

    gcc -O2    1.2 seconds     (from low-level transpiled C)
    gcc -O0   22.3 seconds    18.6 : 1
    mm         1.3 seconds     1.1 : 1

    Here, gcc's optimiser is earning its keep.

    The ratios involving my own products are 1.45, 2.3, 1.7, 2.3, 1.1. The
    average is 1.77:1 slowdown compared to gcc-O2.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Fri Oct 31 17:18:53 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    On 31/10/2025 13:57, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 16:22, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since
    they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so

    Well, I've done a couple of small tests.

    Pointlessly.


    The first was in generating 200 'small' DLLs - duplicates of the same
    library. This took 6 seconds to produce 200 libraries of 50KB each (10MB total). Each library is 5KB as it includes my language's standard libs.

    The shared object 'text' size ranges from 500KB to 14MB.

    Well, I asked for some figures, and they were lacking. And here, the
    14MB figure contradicts the 7.5MB you mentioned above as the largest object.

    The 7.5MB was the shared object containing the main code. 14MB
    was one outlier that I hadn't expected to be so large a text region (am actually looking into that now, I suspect the gcc optimizer doesn't handle
    a particular bit of generated data structure initialization sequence very well).

    $ size lib/*.so | cut -f 1

    text
    367395
    8053916
    8053916
    8053916
    22385
    134993
    6902921
    719346
    33698635
    36084944
    19501560
    3869694
    73570
    211384
    126472
    44610
    90992
    69081
    287447
    5308581
    12213437
    11228898
    6166468
    116563
    63242
    71842
    480359
    30823
    315595
    552362
    111956
    111956
    951445
    1457999
    29053
    2388204
    348969
    150472
    219346
    49420
    750129
    120295
    138622
    868002
    117492
    142438
    489431
    595478
    151900
    265009
    112371
    234140
    52977
    1152928
    567153
    614616
    151578
    181964
    14798814
    657231
    29984
    145595
    90394
    46204
    276076
    38248
    25649
    81913
    93313
    328478
    70278
    31539
    387492
    1885298
    144763
    51537
    37037
    44668
    167946
    4726570
    2472426
    95714
    29547
    24790
    55887
    76059
    47813
    78769
    136931
    65500
    323558
    2757388
    465288
    707782
    240259
    69803
    109695
    91664
    47862
    629404
    738060
    155033
    281246
    397902
    66721
    49279
    124507
    148506
    320033
    81491
    131769
    252140
    156101
    118933
    1777033
    353799
    534605
    96492
    143886
    254192
    26850
    54655
    106790
    56512
    87201
    230382
    792823
    314391
    37951
    274781
    1149389
    25851
    131519
    108052
    96303
    338036
    175900
    61630
    138460
    189483
    116789
    340759
    31324
    25293
    32149
    26870
    78069
    1494212
    427356
    237699
    30062440
    577998
    14611
    57346
    8724
    12007
    16053
    429021
    25367738
    35760664
    593138
    30982
    10087
    6552
    20032
    6539
    6738
    6738
    15262923
    145335
    4997
    42188
    11129
    11321
    7671
    8521
    8521
    11756
    15872
    11076
    23053

    A couple are third-party libraries distributed
    in binary form (e.g. the ones with 30+Mbytes of text).




    Your toy projects aren't representative of real world application
    development. Can you not understand that?

    I don't believe you. Clearly my tests show that basic conversion of HLL
    code to native code can be easily done at several MB per second even on
    my low-end hardware - per core.


    If your tests have an effective throughput far below that, then either
    you have very slow compilers, or are doing a mountain of work unrelated
    to compiling, or the orchestration of the whole process is poor, or some combination.

    Or your tools are not capable of building a project of this size
    and complexity. If they were, they'd likely take even _more_ time
    to run.


    (You mentioned there are nearly 400 developers involved? It sounds like
    a management problem.

    I said nothing about the number of developers (perhaps you were looking
    at the output of the 'sloccount' command?)

    Between 2 and 8 developers have worked on this project
    at any one time over the last 15 years.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 31 17:52:24 2025
    From Newsgroup: comp.lang.c

    On 31/10/2025 17:18, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 31/10/2025 13:57, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 16:22, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since
    they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so

    Well, I've done a couple of small tests.

    Pointlessly.


    The first was in generating 200 'small' DLLs - duplicates of the same
    library. This took 6 seconds to produce 200 libraries of 50KB each (10MB total). Each library is 5KB as it includes my language's standard libs.
    The shared object 'text' size ranges from 500KB to 14MB.

    Well, I asked for some figures, and they were lacking. And here, the
    14MB figure contradicts the 7.5MB you mentioned above as the largest object.

    The 7.5MB was the shared object containing the main code. 14MB
    was one outlier that I hadn't expected to be so large a text region (am actually looking into that now, I suspect the gcc optimizer doesn't handle
    a particular bit of generated data structure initialization sequence very well).

    $ size lib/*.so | cut -f 1

    text
    367395
    8053916


    A couple are third-party libraries distributed
    in binary form (e.g. the ones with 30+Mbytes of text).

    In sorted form:

    1 4,997 bytes
    2 6,539
    3 6,552
    ...
    178 30,062,440
    179 33,698,635
    180 35,760,664
    181 36,084,944

    About 330MB, or 260MB if disregarding the two biggest.

    That's quite substantial, but still, going with my test which built
    4.5MB in one second, 60 such builds would take a minute, totalling
    260MB. Say add a bit more if split into 180 separate builds.

    And that is if done one at a time.

    So I still contend that the basic translation can be done in a reasonable time, /if/ you really had to rebuild everything.

    (When I rebuild everything, it's because a module is part of one
    executable, so that whole binary must be rebuilt.)

    If your tests have an effective throughput far below that, then either
    you have very slow compilers, or are doing a mountain of work unrelated
    to compiling, or the orchestration of the whole process is poor, or some
    combination.

    Or your tools are not capable of building a project of this size
    and complexity. If they were, they'd likely take even _more_ time
    to run.

    Perhaps not, but so what? I've always developed tools according to the
    tasks and circumstances that were relevant to me.

    And usually, for building my own software.

    They just happen to also be a great deal zippier in operation when
    compared with other tools for building the same codebases.

    I'm pretty certain they have inefficiencies that someone could address if
    they wanted to, or could choose to find streamlined paths if a fast
    turnaround was desirable.

    That's why I said it should be somebody's job to do that, in the same
    way that I considered it part of my job to ensure my development process wasn't slow enough to slow me down. If I'm twiddling my thumbs, then something's wrong!


    (You mentioned there are nearly 400 developers involved? It sounds like
    a management problem.

    I said nothing about the number of developers (perhaps you were looking
    at the output of the 'sloccount' command?)

    Yes. (I'm not sure what that was about.)

    Between 2 and 8 developers have worked on this project
    at any one time over the last 15 years.

    You might want to clear out some cruft then.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Fri Oct 31 21:39:39 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> wrote:
    On 31/10/2025 00:28, Kaz Kylheku wrote:
    On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
    If I were developing a compiler, I would not advertise any kind of
    lines-per-second value. It is a totally useless metric - as useless as
    measuring developer performance on the lines of code he/she writes per day.
    If that were your only advantage, you'd have to flout it.

    "[[ Our compiler emits lousy code, emits only half the required ISO
    diagnostics (and those are all there are), and is compatible with only
    75% of your system's header files, and 80% of the ABI, but ...]] have you
    seen the raw speed in lines per second?"


    How would Turbo C compare then?

    I probably used Turbo C once (to compile a C program fetched from
    the net). But I used Turbo Pascal and later Borland C (which was
    supposed to be an optimizing compiler). AFAICS the main attraction
    of the Turbo family in general was fast compilation. But the
    generated code was poor, much bigger and slower than code from
    optimizing compilers. I used Borland C to deliver a few programs
    for Windows (I developed them using gcc on Linux; Borland C was
    just for final tests and delivery). But later I set up a
    Mingw cross-compiler and testing showed that gcc-compiled code
    was significantly faster than the output from Borland C.

    It seems that "professionals" preferred other compilers, like
    Microsoft's or Watcom (or possibly others; there were quite
    a lot of different compilers in this period).
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Fri Oct 31 22:01:24 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> wrote:
    On 30/10/2025 10:15, David Brown wrote:
    On 30/10/2025 01:36, bart wrote:

    So, what exactly did I do wrong here (for A68G):

       root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
       real    1m32.205s
       user    0m40.813s
       sys     0m7.269s

    This 90 seconds is the actual time I had to hang about waiting. I'd be
    interested in how I managed to manipulate those figures!

    Try "time make -j" as a simple step.


    OK, "make -j" gave a real time of 30s, about three times faster. (Not
    quite sure how that works, given that my machine has only two cores.)

    However, I don't view "-j", and parallelisation, as a solution to slow compilation. It is just a workaround, something you do when you've
    exhausted other possibilities.

    You have to get raw compilation fast enough first.
    <snip>

    Quite a few people have suggested that there is something amiss about my 1:32 and 0:49 timings. One has even said there is something wrong with
    my machine.

    Yes, I wrote this. 90 seconds in itself could be OK; your machine
    just could be slow. But the numbers you gave clearly show that
    only about 50% of the time on _one_ core is used to do the build.
    So something is slowing down your machine. And this is specific to
    your setup, as other people running the build on Linux get better
    than 90% CPU utilization. You apparently get offended by this
    statement. If you are really interested in fast tools you should
    investigate what is causing this.

    Anyway, there could be a lot of different reasons for the slowdown.
    The fact that you get a 3 times faster build using 'make -j'
    suggests that some other program is competing for the CPU and using
    more jobs allows getting a higher share of the CPU. If that affects
    only programs running under WSL, then your numbers may or may not
    be relevant to the WSL experience, but are incomparable to Linux
    timings. If the slowdown affects all programs on your machine, then
    you should be interested in eliminating it, because it would also
    make your compiler faster. But that is your machine; if you are not
    curious about what happens, that is OK.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Fri Oct 31 22:47:27 2025
    From Newsgroup: comp.lang.c

    On 2025-10-31, Richard Tobin <richard@cogsci.ed.ac.uk> wrote:
    In article <20251030172415.416@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    If that were your only advantage, you'd have to flout it.

    Flaunt.

    *rubeyes* I can't believe I wrote that!
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Fri Oct 31 23:40:03 2025
    From Newsgroup: comp.lang.c

    On 31/10/2025 00:28, Kaz Kylheku wrote:
    On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
    If I were developing a compiler, I would not advertise any kind of
    lines-per-second value. It is a totally useless metric - as useless as
    measuring developer performance on the lines of code he/she writes per day.

    If that were your only advantage, you'd have to flout it.

    "[[ Our compiler emits lousy code, emits only half the required ISO diagnostics (and those are all there are), and is compatible with only
    75% of your system's header files, and 80% of the ABI, but ...]]

    Those incompatibilities exist anyway, even on big compilers, and people are
    tolerant of them.

    How many headers have you seen with multiple conditional blocks which
    pander to different compilers, for example (from SDL2):

    # if defined(HAVE_ALLOCA_H)
    # include <alloca.h>
    # elif defined(__GNUC__)
    # define alloca __builtin_alloca
    # elif defined(_MSC_VER)
    # include <malloc.h>
    # define alloca _alloca
    # elif defined(__WATCOMC__)
    # include <malloc.h>
    # elif defined(__BORLANDC__)
    # include <malloc.h>
    # elif defined(__DMC__)
    # include <stdlib.h>
    # elif defined(__AIX__)
    #pragma alloca
    # elif defined(__MRC__)
    void *alloca(unsigned);
    # else
    char *alloca();
    # endif
    #endif

    (If you are writing your own compiler, where is it going to fit in?)

    In fact, half of configure scripts seem to be about testing the
    capabilities of the C compiler, so it is apparently expected that any of
    those features can be missing.

    And as for diagnostics, it seems that you have to actively know about them
    and explicitly enable checking for them.




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Fri Oct 31 17:14:42 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> writes:
    [...]
    And as for diagnostics, it seems that you have to actively know about
    them and explicitly enable checking for them.

    It "seems"?

    Yes. Most C compilers, and gcc in particular, are not fully
    conforming by default, and do not produce all the diagnostics
    required by the ISO C standard. Most C compilers have options
    that tell them to attempt to do so. For gcc or clang, you can use
    "-std=c17 -pedantic". Replace "c17" by whatever edition of the
    standard you prefer to use. Replace "-pedantic" by "-pedantic-errors"
    if you want fatal diagnostics. Replace "-pedantic" by "-Wpedantic"
    if you're fond of the letter 'W'.

    I've been telling you this for well over a decade, and it still only
    "seems" to be the case? How does that work?
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Sat Nov 1 11:57:57 2025
    From Newsgroup: comp.lang.c

    On 31/10/2025 22:01, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:
    On 30/10/2025 10:15, David Brown wrote:
    On 30/10/2025 01:36, bart wrote:

    So, what exactly did I do wrong here (for A68G):

       root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
       real    1m32.205s
       user    0m40.813s
       sys     0m7.269s

    This 90 seconds is the actual time I had to hang about waiting. I'd be interested in how I managed to manipulate those figures!

    Try "time make -j" as a simple step.


    OK, "make -j" gave a real time of 30s, about three times faster. (Not
    quite sure how that works, given that my machine has only two cores.)

    However, I don't view "-j", and parallelisation, as a solution to slow
    compilation. It is just a workaround, something you do when you've
    exhausted other possibilities.

    You have to get raw compilation fast enough first.
    <snip>

    Quite a few people have suggested that there is something amiss about my
    1:32 and 0:49 timings. One has even said there is something wrong with
    my machine.

    Yes, I wrote this. 90 seconds in itself could be OK; your machine
    just could be slow. But the numbers you gave clearly show that
    only about 50% of the time on _one_ core is used to do the build.
    So something is slowing down your machine. And this is specific to
    your setup, as other people running the build on Linux get better
    than 90% CPU utilization. You apparently get offended by this
    statement. If you are really interested in fast tools you should
    investigate what is causing this.

    Anyway, there could be a lot of different reasons for the slowdown.
    The fact that you get a 3 times faster build using 'make -j'
    suggests that some other program is competing for the CPU and using
    more jobs allows getting a higher share of the CPU. If that affects
    only programs running under WSL, then your numbers may or may not
    be relevant to the WSL experience, but are incomparable to Linux
    timings. If the slowdown affects all programs on your machine, then
    you should be interested in eliminating it, because it would also
    make your compiler faster. But that is your machine; if you are not
    curious about what happens, that is OK.


    I'm really not interested in finding out the ins and outs of my Linux
    system or messing about with it.

    All I know is that I followed the instructions and the build time for
    a particular project WAS 90 seconds elapsed, after that configure
    stuff. It shouldn't be my job to fix any shortcomings.

    I wasn't that happy either with using '-j'. Yes I got a faster time, but
    that looks to me like brushing things under the carpet. What is really
    going on? It's hard to tell because it's all so complicated.

    I had a go anyway. I logged the output of a full 'make'. The output
    (sans some make-lines at each end) was 213 lines: 107 invocations of
    gcc, and 106 uses of 'mv'.

    I was able to use that output file as a script (and I didn't need
    'clean' before each run).

    It still took 92 seconds. I got rid of the 'mv' lines, and it was then
    85 seconds. I added some commands, an 'echo' of a file number before
    each compile, and 'time', to track each invocation.
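
    (A rough sketch of that kind of transformation, assuming the logged
    make output is in a file called 'output'; the file names here are
    hypothetical:

        grep '^gcc ' output > build.sh       # keep only the gcc lines
        awk '{print "echo " NR "; time " $0}' build.sh > timed.sh
        bash timed.sh > /dev/null 2> timings.txt

    The per-invocation times end up in timings.txt, mixed with any
    compiler warnings that also go to stderr.)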

    It looks like there are 106 files compiled, and the last use of gcc is
    the link step, which took a little over 3 seconds. Most compiles were
    0.5-0.8 seconds, with a few taking 1-2 seconds, all elapsed 'real' time.

    In each case, the user time was a fraction of the real time. One that
    caught my eye was file # 4: 0.450s real, 0.08s user.

    I tried to extract the invocation and simplify it, but it was too
    complicated. It looks like this (line breaks added):

    gcc -DHAVE_CONFIG_H -I. -I./src/include -D_GNU_SOURCE
    -DBINDIR='"/usr/local/bin"' -DINCLUDEDIR='"/usr/local/include"'
    -g -O2 --std=c17 -Wall -Wshadow -Wunused-variable -Wunused-parameter
    -Wno-long-long -MT ./src/a68g/a68g-a68g-conversion.o
    -MD -MP -MF ./src/a68g/.deps/a68g-a68g-conversion.Tpo -c
    -o ./src/a68g/a68g-a68g-conversion.o
    `test -f './src/a68g/a68g-conversion.c' ||
    echo './'`./src/a68g/a68g-conversion.c


    I've no idea what this is up to. But here, I managed to compile that
    file my way (I copied it to a place where the relevant headers were all
    in one place):

    gcc -O2 -c a68g-conversion.c

    Now real time is 0.14 seconds (recall it was 0.45). User time is still
    0.08s.

    So, what is all that crap that is making it 3 times slower? And do we
    need all those -Wall checks, given that this is a working, debugged program?
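
    For what it's worth, most of the extra baggage appears to be
    dependency tracking: the -MT/-MD/-MP/-MF options make gcc write a
    makefile fragment (the .Tpo file) as a side effect of the compile,
    which is presumably what the 106 'mv' lines were doing, renaming
    each .Tpo to its final name after a successful compile. The backtick
    `test -f ... || echo './'` part handles building from a separate
    source directory. A trimmed in-tree equivalent might look like this
    (a sketch, not the project's actual recipe):

        gcc -DHAVE_CONFIG_H -I. -I./src/include -D_GNU_SOURCE \
            -g -O2 -std=c17 -c ./src/a68g/a68g-conversion.c \
            -o ./src/a68g/a68g-a68g-conversion.o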

    I suggest a better approach would be to get rid of that rubbish and
    simplify it, rather than keeping it in and having to call in
    reinforcements by employing extra cores, don't you think?

    If slowdown
    affects all programs on your machine, then you should be interested
    in eliminating it, because it would also make your compiler faster.

    That would be interesting. My already heavy 6-pass compiler can manage
    a sustained 0.5 Mlps (million lines per second) on the same machine,
    /and/ under Windows. How much faster can it be?

    OK, I have a way to run my C compiler under Linux. It would be a
    cross-compiler for Windows, and wouldn't be able to generate EXEs
    (that needs access to actual Windows DLLs), but it can generate
    OBJ files.

    It's done via C transpilation, and I compared such versions on both
    Windows and WSL:

    c:\cx>tim cc -c sql
    Compiling sql.c to sql.obj
    Time: 0.187

    root@DESKTOP-11:/mnt/c/cx# time ./cu -c sql.c
    Compiling sql.c to sql.obj

    real 0m0.316s
    user 0m0.170s
    sys 0m0.075s

    The 'user' time looks about the same as what I get on Windows. I just
    get a longer elapsed time on Linux!
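
    One possible factor, though I haven't checked: these runs are under
    /mnt/c, and file access through /mnt/c goes via WSL's translation
    layer. Copying the test into the WSL-native file system and timing it
    there would show whether that accounts for the gap, for example:

        cp -r /mnt/c/cx ~/cx && cd ~/cx && time ./cu -c sql.c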

    (Note: the 'tim' utility on Windows is written to exclude the shell
    process start overheads, since I want actual compile time. Normally my
    compilers are invoked from an IDE program - not using 'system' - so
    that overhead is not relevant.

    If included, the Windows timing would be 0.21 seconds.)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Sat Nov 1 14:56:58 2025
    From Newsgroup: comp.lang.c

    On 01/11/2025 11:57, bart wrote:
    On 31/10/2025 22:01, Waldek Hebisch wrote:

    Anyway, there could be a lot of different reasons for the slowdown.
    The fact that you get a 3 times faster build using 'make -j' suggests
    that some other program is competing for CPU and using more jobs
    allows getting a higher share of CPU.  If that affects only programs
    running under WSL, then your numbers may or may not be relevant to
    the WSL experience, but are incomparable to Linux timings.  If slowdown
    affects all programs on your machine, then you should be interested
    in eliminating it, because it would also make your compiler faster.
    But that is your machine; if you're not curious what happens, that
    is OK.

    I've no idea what this is up to. But here, I managed to compile that
    file my way (I copied it to a place where the relevant headers were all
    in one place):

       gcc -O2 -c a68g-conversion.c

    Now real time is 0.14 seconds (recall it was 0.45). User time is still 0.08s.

    So, what is all that crap that is making it 3 times slower? And do we
    need all those -Wall checks, given that this is a working, debugged
    program?

    I suggest a better approach would be to get rid of that rubbish and
    simplify it, rather than keeping it in and having to call in
    reinforcements by employing extra cores, don't you think?

    I can now compile and link the 106 C modules of A68G into an executable,
    using my simple approach.

    The @ file below is invoked as 'gcc -O2 @file'. For this test, all
    relevant files are in one place for simplicity. Only a single invocation
    of gcc is used (multiple invocations would be needed to parallelise,
    assuming gcc doesn't have such abilities itself; a rough sketch of one
    way to do that is shown after the file list).

    It took 38 seconds (30 seconds user) on a single core. Using -O0, it
    took 18 seconds (10 seconds user).

    The generated A68 binary is 1.7MB. If I use -Os instead of -O2, the size
    is just 1MB, and build time is 35s elapsed. The benchmark is only
    slightly slower.

    It appears that the purpose of './configure' is to generate a 440-line
    header called 'a68g-config.h'.

    The BINDIR macro is needed only for plugin-script.c.

    -----------------------------
    -o a68 -s
    -DBINDIR='"/usr/local/bin"'
    --std=c17
    a68g-apropos.c
    a68g-bits.c
    a68g-conversion.c
    a68g-diagnostics.c
    a68g-io.c
    a68g-keywords.c
    a68g-listing.c
    a68g-mem.c
    a68g-non-terminal.c
    a68g-options.c
    a68g-path.c
    a68g-postulates.c
    a68g-pretty.c
    a68g.c
    double-gamic.c
    double-math.c
    double.c
    genie-assign.c
    genie-call.c
    genie-coerce.c
    genie-declaration.c
    genie-denotation.c
    genie-enclosed.c
    genie-formula.c
    genie-hip.c
    genie-identifier.c
    genie-misc.c
    genie-regex.c
    genie-rows.c
    genie-stowed.c
    genie-unix.c
    genie.c
    moids-diagnostics.c
    moids-misc.c
    moids-size.c
    moids-to-string.c
    mp-bits.c
    mp-complex.c
    mp-gamic.c
    mp-gamma.c
    mp-genie.c
    mp-math.c
    mp-mpfr.c
    mp-pi.c
    mp.c
    parser-annotate.c
    parser-bottom-up.c
    parser-brackets.c
    parser-extract.c
    parser-modes.c
    parser-moids-check.c
    parser-moids-coerce.c
    parser-moids-equivalence.c
    parser-refinement.c
    parser-scanner.c
    parser-scope.c
    parser-taxes.c
    parser-top-down.c
    parser-victal.c
    parser.c
    plugin-basic.c
    plugin-driver.c
    plugin-folder.c
    plugin-gen.c
    plugin-inline.c
    plugin-script.c
    plugin-tables.c
    plugin.c
    prelude-bits.c
    prelude-gsl.c
    prelude-mathlib.c
    prelude.c
    rts-bool.c
    rts-char.c
    rts-curl.c
    rts-curses.c
    rts-enquiries.c
    rts-formatted.c
    rts-heap.c
    rts-int128.c
    rts-internal.c
    rts-mach.c
    rts-monitor.c
    rts-parallel.c
    rts-plotutils.c
    rts-postgresql.c
    rts-sounds.c
    rts-stowed.c
    rts-transput.c
    rts-unformatted.c
    single-blas.c
    single-decomposition.c
    single-fft.c
    single-gamic.c
    single-gsl.c
    single-laplace.c
    single-math.c
    single-multivariate.c
    single-physics.c
    single-python.c
    single-r-math.c
    single-rnd.c
    single-svd.c
    single-torrix-gsl.c
    single-torrix.c
    single.c
    -lncursesw -ldl -lpthread -lgmp -lquadmath -lrt -lm
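
    As mentioned above, one way to parallelise this without make would be
    to compile the same list with several gcc processes and then link.
    A rough sketch, assuming all the .c files sit in the current directory
    as in this test (not the project's actual build):

        printf '%s\n' *.c | xargs -P "$(nproc)" -n 8 \
            gcc -O2 -std=c17 -DBINDIR='"/usr/local/bin"' -c
        gcc -o a68 -s *.o -lncursesw -ldl -lpthread -lgmp -lquadmath -lrt -lm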
    --- Synchronet 3.21a-Linux NewsLink 1.2