This is cross-posted to comp.lang.c and comp.lang.c++.
Consider redirecting followups as appropriate.
cdecl, along with c++decl, is a tool that translates C or C++
declaration syntax into English, and vice versa. For example:
$ cdecl
Type `help' or `?' for help
cdecl> explain const char *foo[42]
declare foo as array 42 of pointer to const char
cdecl> declare bar as pointer to function (void) returning int
int (*bar)(void )
It's also available via the web site <https://cdecl.org/>.
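For concreteness, here is a small compilable C sketch (not part of the
announcement; the helper function 'answer' is made up for illustration)
showing both example declarations in use:

    #include <stdio.h>

    static int answer(void) { return 42; }

    int main(void) {
        /* "array 42 of pointer to const char" */
        const char *foo[42] = { "hello", "world" };

        /* "pointer to function (void) returning int" */
        int (*bar)(void) = answer;

        printf("%s %s %d\n", foo[0], foo[1], bar());
        return 0;
    }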
On 22/10/2025 18:39, Keith Thompson wrote:
[...]
This one does not work:
void (*f(int i))(void)
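In case it helps, a small sketch (my reading of that declaration, with
made-up helper names 'done' and 'g') of what it means: f is a function
taking an int and returning a pointer to a function taking no arguments
and returning void, shown alongside the typedef form most people would
write instead:

    #include <stdio.h>

    static void done(void) { puts("done"); }

    /* f: function (int) returning pointer to function (void) returning void */
    static void (*f(int i))(void) {
        (void)i;
        return done;
    }

    /* The same type expressed through a typedef. */
    typedef void handler(void);
    static handler *g(int i) { (void)i; return done; }

    int main(void) {
        f(1)();   /* prints "done" */
        g(2)();   /* prints "done" */
        return 0;
    }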
Thiago Adams <thiago.adams@gmail.com> writes:
On 22/10/2025 18:39, Keith Thompson wrote:
[...]
This one does not work:
void (*f(int i))(void)
Right. But the new version Keith was posting about does work with that declaration.
On 23/10/2025 02:19, Thiago Adams wrote:
On 22/10/2025 18:39, Keith Thompson wrote:
[...]
This one does not work:
void (*f(int i))(void)
KT said the newer version is only available by building from source
code, which must be done under some Linux-compatible system.
I've had a look: it comprises 32Kloc of configure script, and 68Kloc of
C sources, so 100Kloc just to decode declarations! (A bit longer than
the 2-page version in K&R2.)
(There's a further 30Kloc of what looks like C library code. So is this
a complete C compiler, or does it still only do declarations?)
Regarding your example, my old C compiler (which is a fraction the size
of this new Cdecl) 'explains' it as:
 'ref proc(int)ref proc()void'
(Not quite English, more Algol68-ish.)
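For reference, a compressed sketch in the spirit of the K&R2 "dcl"
example (my own code, not the actual K&R listing, and nowhere near
cdecl's coverage): it handles only pointers, arrays and parameterless
functions, which is roughly what the two-page version does:

    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>

    enum { NAME = 256, PARENS, BRACKETS };

    static const char *p;                 /* input cursor */
    static char token[64], name[64], out[512];
    static int tokentype;

    static int gettoken(void) {
        char *t = token;
        while (*p == ' ' || *p == '\t') p++;
        if (*p == '(') {
            p++;
            if (*p == ')') { p++; strcpy(token, "()"); return tokentype = PARENS; }
            strcpy(token, "(");
            return tokentype = '(';
        }
        if (*p == '[') {                  /* copy the brackets and contents */
            while ((*t++ = *p++) != ']') ;
            *t = '\0';
            return tokentype = BRACKETS;
        }
        if (isalpha((unsigned char)*p)) {
            while (isalnum((unsigned char)*p) || *p == '_') *t++ = *p++;
            *t = '\0';
            return tokentype = NAME;
        }
        token[0] = *p ? *p++ : '\0';
        token[1] = '\0';
        return tokentype = token[0];
    }

    static void dcl(void);

    static void dirdcl(void) {            /* parse a direct declarator */
        if (tokentype == '(')             /* ( dcl ) */
            dcl();                        /* tokentype is now ')' */
        else if (tokentype == NAME)
            strcpy(name, token);
        while (gettoken() == PARENS || tokentype == BRACKETS)
            strcat(out, tokentype == PARENS ? " function returning"
                                            : " array of");
    }

    static void dcl(void) {               /* declarator: zero or more '*', then dirdcl */
        int np = 0;
        while (gettoken() == '*')
            np++;
        dirdcl();
        while (np-- > 0)
            strcat(out, " pointer to");
    }

    int main(void) {
        char basetype[64];
        p = "char *argv[]";               /* base type followed by one declarator */
        gettoken();
        strcpy(basetype, token);
        dcl();
        printf("%s:%s %s\n", name, out, basetype);  /* argv: array of pointer to char */
        return 0;
    }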
On 2025-10-23, Ben Bacarisse <ben@bsb.me.uk> wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
On 22/10/2025 18:39, Keith Thompson wrote:
[...]
This one does not work:
void (*f(int i))(void)
Right. But the new version Keith was posting about does work with that
declaration.
A cdecl on Ubuntu, accompanied by a man page dated 1996, handles this:
cdecl> explain void (*signal(int, void (*)(int)))(int);
declare signal as function (int, pointer to function (int) returning void)
returning pointer to function (int) returning void
But chokes if we add parameter names to the function being declared:
cdecl> explain void (*signal(int sig, void (*)(int)))(int);
syntax error
cdecl> explain void (*signal(int, void (*handler)(int)))(int);
syntax error
Or to the function pointer being passed in:
cdecl> explain void (*signal(int, void (*)(int sig)))(int);
syntax error
Or to the one being returned:
cdecl> explain void (*signal(int, void (*)(int)))(int sig);
syntax error
I'm astonished that every cdecl out there would not have cases
covering this: a function with a function-pointer param, returning
a function pointer, with and without param names.
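For what it's worth, all of those spellings are valid C. A small sketch
(with made-up names my_signal / my_signal2 / on_sig, purely for
illustration) showing the named-parameter form next to the typedef form
that most people find readable:

    #include <stdio.h>

    typedef void sighandler(int);

    static void on_sig(int sig) { printf("got %d\n", sig); }

    /* Same shape as the standard signal() declaration, with parameter
       names added; the names change nothing about the type. */
    static void (*my_signal(int sig, void (*handler)(int)))(int) {
        (void)sig;
        return handler;            /* toy body: just hand the handler back */
    }

    /* The equivalent written with the typedef. */
    static sighandler *my_signal2(int sig, sighandler *handler) {
        (void)sig;
        return handler;
    }

    int main(void) {
        my_signal(1, on_sig)(1);   /* prints "got 1" */
        my_signal2(2, on_sig)(2);  /* prints "got 2" */
        return 0;
    }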
bart <bc@freeuk.com> writes:
On 23/10/2025 02:19, Thiago Adams wrote:
On 22/10/2025 18:39, Keith Thompson wrote:
[...]
This one does not work:
void (*f(int i))(void)
KT said the newer version is only available by building from source
code, which must be done under some Linux-compatible system.
As far as I know, it should build on just about any Unix-like system,
not just ones that happen to use the Linux kernel. Perhaps that's what
you mean by "Linux-compatible"? If so, I suggest "Unix-like" would be clearer. (I'm building it under Cygwin as I write this.)
I've had a look: it comprises 32Kloc of configure script, and 68Kloc
of C sources, so 100Kloc just to decode declarations! (A bit longer
than the 2-page version in K&R2.)
Yes, and neither you nor I had to write any of it. I cloned the repo,
ran one command (my wrapper script for builds like this), and it works.
I wonder how many lines of code are required for the specification of
the x86_64 CPU in the computer I'm using to write this. But really,
it doesn't matter to me, since that work has been done, and all I
have to do is use it.
The configure script is automatically generated (I mentioned the
"bootstrap" script that generates it if you build from the git repo).
I suppose building it under Windows (without some Unix-like layer
like MinGW or Cygwin) would be more difficult. That's true of
a lot of tools that are primarily used on Unix-like systems.
It's likely that the author of the code doesn't care about Windows.
I agree that it can be a problem that a lot of code developed for
Unix-like systems is difficult to build on Windows. For a lot
of users, an emulation layer like Cygwin, MinGW, or WSL is a good
enough solution. If it isn't for you, perhaps you could help solve
the problem. Perhaps the GNU autotools could be updated with better
Windows support. I wouldn't know how to do that; perhaps you would.
"Don't use autotools" is not a good solution, since there are so
many software packages that depend on it, often maintained by people
who don't care about Windows.
(There's a further 30Kloc of what looks like C library code. So is
this a complete C compiler, or does it still only do declarations?)
I haven't looked at the source code (I haven't needed to), but the man
page indicates that this version of cdecl recognizes a number of types defined in the standard library, such as FILE, clock_t, and std::partial_ordering (remember that it includes C++ support).
I don't know whether all this could be done in fewer lines of code, and frankly I don't much care. The tool works and is useful, and I didn't
have to write it.
Have you tried using it? I'm sure you have some system where you could
build it.
Regarding your example, my old C compiler (which is a fraction the
size of this new Cdecl) 'explains' it as:
'ref proc(int)ref proc()void'
(Not quite English, more Algol68-ish.)
Can I run your old C compiler on my Ubuntu system?
On 24/10/2025 00:04, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
Regarding your example, my old C compiler (which is a fraction the
size of this new Cdecl) 'explains' it as:
'ref proc(int)ref proc()void'
(Not quite English, more Algol68-ish.)
Can I run your old C compiler on my Ubuntu system?
The old one needed a tweak to bring it up-to-date for my newer C
transpiler. So it was easier to port the feature to the newer product.
Download https://github.com/sal55/langs/blob/master/ccu.c
(Note: 86Kloc/2MB file; this is poor quality, linear C generated from intermediate language.)
Build instructions are at the top. Although this targets Win64, it
works enough to demonstrate the feature above. Create this C file (say test.c):
    int main(void) {
        void (*f(int i))(void);
        $showmode f;
    }
Run as follows (if built as 'ccu'):
./ccu -s test
It will display the type during compilation.
Obviously this is not a dedicated product (and doing the reverse needs
a separate program), but I only needed to add about 10 lines of code
to support '$showmode'.
Original source, omitting the unneeded output options, would be 2/3
the size of that configure script.
bart <bc@freeuk.com> writes:
On 24/10/2025 00:04, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
I note that you've ignored the vast majority of my previous article.
Regarding your example, my old C compiler (which is a fraction the
size of this new Cdecl) 'explains' it as:
'ref proc(int)ref proc()void'
(Not quite English, more Algol68-ish.)
Can I run your old C compiler on my Ubuntu system?
The old one needed a tweak to bring it up-to-date for my newer C
transpiler. So it was easier to port the feature to the newer product.
Download https://github.com/sal55/langs/blob/master/ccu.c
(Note: 86Kloc/2MB file; this is poor quality, linear C generated from
intermediate language.)
Build instructions are at the top. Although this targets Win64, it
works enough to demonstrate the feature above. Create this C file (say
test.c):
int main(void) {
void (*f(int i))(void);
$showmode f;
}
Run as follows (if built as 'ccu'):
./ccu -s test
It will display the type during compilation.
Obviously this is not a dedicated product (and doing the reverse needs
a separate program), but I only needed to add about 10 lines of code
to support '$showmode'.
Original source, omitting the unneeded output options, would be 2/3
the size of that configure script.
OK, I was able to compile and run your ccu.c, and at least on this
example it works as you've described it. It looks interesting,
but I personally don't find it particularly useful, given that I
already have cdecl, I prefer its syntax, and it's easier to use
(and I almost literally could not care less about the number of
lines of code needed to implement cdecl).
On 24/10/2025 03:00, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 24/10/2025 00:04, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
I note that you've ignored the vast majority of my previous article.
I've noted it, but chose not to reply. You have a point of view and
attitude which I don't share.
Mainly that you don't care how complicated a program for even a simple
task is, and how laborious and OS-dependent its build process is, so
long as it (eventually) works.
That it favours your own OS, leaving users of others to have to jump
through extra hoops, doesn't appear to bother you.
Well I built cdecl too, under WSL. Jesus, that looked a lot of work!
However, it took me a while to find where it put the executable, as the
make process doesn't directly tell you that. It seems it puts it inside
the src directory, which is unusual. It further appears that you have to
do 'make install' to be able to run it without a path.
(Yes, I did glance at the readme, but I didn't notice it was a .md file, and in plain text it looked unreadable.)
When I did run it, while it had a fair number of options, it didn't appear to do much beyond converting C declarations to and from an
English description.
That program is 2.8 MB (10 times the size of my C compiler).
I guess you don't care about that either. But surely, you must be
curious about WHY it is so big? You must surely know, with your decades
of experience, that this is 100 times bigger than necessary for such a
task?
On 24/10/2025 15:27, bart wrote:
On 24/10/2025 03:00, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 24/10/2025 00:04, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
I note that you've ignored the vast majority of my previous article.
I've noted it, but chose not to reply. You have a point of view and
attitude which I don't share.
Mainly that you don't care how complicated a program for even a simple
task is, and how laborious and OS-dependent its build process is, so
long as it (eventually) works.
That it favours your own OS, leaving users of other to have to jump
through extra hoops, doesn't appear to bother you.
Why would someone care how someone else writes their code, or what
it does, or what systems it runs on? The guy who wrote cdecl gets to choose exactly how he wants to write it, and what systems it supports.
We others get it for free - we can use it if we like and if it suits
our needs. But neither Keith nor anyone else paid that guy to do the
work, or contributed anything to the task, and we have no right to judge what he chose to do, or how he chose to do it.
Well I built cdecl too, under WSL. Jesus, that looked a lot of work!
I have no experience with WSL, so I can't comment on the effort there.
That program is 2.8 MB (10 times the size of my C compiler).
First, as usual, nobody cares about a couple of megabytes. Secondly, if you /do/ care, then you might do at least a /tiny/ bit of investigation.
 First, run "strip" on it to remove debugging symbols - now it is a bit over 600 KB. By running "strings" on it, I can see that about 100 KB is strings - messages, rules, types, keywords, etc.
I guess you don't care about that either. But surely, you must be
curious about WHY it is so big? You must surely know, with your
decades of experience, that this is 100 times bigger than necessary
for such a task?
Were you not curious, or did you just pull random sizes out of thin air
as an excuse to complain again about any program written by anyone else
but you?
It is entirely possible that your little alternative is a useful program
and does what you personally want and need with a smaller executable.
But cdecl does a great deal more,
doing things that other people need
and want (like handling C++ declarations - surely 100 times more effort
than handling C declarations, especially the limited older standards you use).
On 24/10/2025 18:35, David Brown wrote:
On 24/10/2025 15:27, bart wrote:
On 24/10/2025 03:00, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 24/10/2025 00:04, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
I note that you've ignored the vast majority of my previous article.
I've noted it, but chose not to reply. You have a point of view and
attitude which I don't share.
Mainly that you don't care how complicated a program for even a simple
task is, and how laborious and OS-dependent its build process is, so
long as it (eventually) works.
That it favours your own OS, leaving users of other to have to jump
through extra hoops, doesn't appear to bother you.
Why would someone care how someone else writes their code, or what
it does, or what systems it runs on?
This is a curious argument: it's free software so you don't care in the slightest how efficient it is or how user-friendly it might be to build?
This is a program that reads lines of text from the terminal and
translates them into another line of text. THAT needs thirty thousand
lines of configure script?! And that's even before you start compiling
the program itself.
On 24/10/2025 15:27, bart wrote:
However, it took me a while to find where it put the executable, as
the make process doesn't directly tell you that. It seems it puts it
inside the src directory, which is unusual.
It further appears that
you have to do 'make install' to be able to run it without a path.
I agree that putting the executable in "src" is a little odd. But
running "make install" is hardly unusual - it is as standard as it gets.
(And of course there are a dozen other different ways you can arrange
to run the programs without a path if you don't like "make install".)
On 24/10/2025 03:00, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 24/10/2025 00:04, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
I note that you've ignored the vast majority of my previous article.
I've noted it, but chose not to reply. You have a point of view and
attitude which I don't share.
Mainly that you don't care how complicated a program for even a simple
task is, and how laborious and OS-dependent its build process is, so
long as it (eventually) works.
That it favours your own OS, leaving users of other to have to jump
through extra hoops, doesn't appear to bother you.
Well I built cdecl too, under WSL. Jesus, that looked a lot of work!
However, it took me a while to find where it put the executable, as
the make process doesn't directly tell you that. It seems it puts it
inside the src directory, which is unusual. It further appears that
you have to do 'make install' to be able to run it without a path.
(Yes, I did glance at the readme, but it is a .md file which I didn't
notice, and in plain text it looked unreadable.)
When I did run it, then while it had a fair number of options, it
didn't appear to do much beyond converting C declarations to and from
an English description.
That program is 2.8 MB (10 times the size of my C compiler).
I guess you don't care about that either. But surely, you must be
curious about WHY it is so big? You must surely know, with your
decades of experience, that this is 100 times bigger than necessary
for such a task?
I decided to make my own mini-cdecl. It took 20 minutes and works like this:
c:\cx>qq cdecl
Mycdecl> explain void (*f(int i))(void);
f = proc(i32)ref proc()void
Mycdecl> q
On 24/10/2025 15:27, bart wrote:[...]
Well I built cdecl too, under WSL. Jesus, that looked a lot of work!
I have no experience with WSL, so I can't comment on the effort
there. For my own use on a Linux system, I had to install a package
(apt-get install libreadline-dev), but that's neither difficult to do,
or time-consuming, and it was not hard to see what was needed. Of
course, a non-programmer might not have realised that was needed, but
if you are stumped on a configure script error "readline.h header not
found, use --without-readline" and can't figure out how to get
"readline.h" or configure the program to avoid using it, and can't at
least google for help, then you are probably not the target audience
for cdecl.
However, it took me a while to find where it put the executable, as
the make process doesn't directly tell you that. It seems it puts it
inside the src directory, which is unusual. It further appears that
you have to do 'make install' to be able to run it without a path.
I agree that putting the executable in "src" is a little odd. But
running "make install" is hardly unusual - it is as standard as it
gets. (And of course there are a dozen other different ways you can
arrange to run the programs without a path if you don't like "make
install".)
That program is 2.8 MB (10 times the size of my C compiler).
First, as usual, nobody cares about a couple of megabytes. Secondly,
if you /do/ care, then you might do at least a /tiny/ bit of
investigation. First, run "strip" on it to remove debugging symbols
- now it is a bit over 600 KB. By running "strings" on it, I can see
that about 100 KB is strings - messages, rules, types, keywords, etc.
On 24/10/2025 18:35, David Brown wrote:
On 24/10/2025 15:27, bart wrote:
On 24/10/2025 03:00, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 24/10/2025 00:04, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
I note that you've ignored the vast majority of my previous article.
I've noted it, but chose not to reply. You have a point of view and
attitude which I don't share.
Mainly that you don't care how complicated a program for even a
simple task is, and how laborious and OS-dependent its build
process is, so long as it (eventually) works.
That it favours your own OS, leaving users of other to have to jump
through extra hoops, doesn't appear to bother you.
Why would someone care how someone else writes their code, or
what it does, or what systems it runs on? The guy who wrote cdecl
gets to choose exactly how he wants to write it, and what systems it
supports. We others get it for free - we can use it if we like and
it if it suits our needs. But neither Keith nor anyone else paid
that guy to do the work, or contributed anything to the task, and we
have no right to judge what he choose to do, or how he choose to do
it.
This is a curious argument: it's free software so you don't care in the slightest how efficient it is or how user-friendly it might be to
build?
This is a program that reads lines of text from the terminal and
translates them into another line of text. THAT needs thirty thousand
lines of configure script?! And that's even before you start compiling
the program itself.
I'm thinking of making available some software that does even less,
but wrapping enough extra and POINTLESS levels of complexity around it
that you'd need to lease time on a super-computer to build it. But the
software is free so that makes it alright?
I was talking about all the stuff scrolling endlessly up to the screen
for a minute and a half while running the configure script and then
compiling the modules.
bart <bc@freeuk.com> writes:
On 24/10/2025 18:35, David Brown wrote:
On 24/10/2025 15:27, bart wrote:
On 24/10/2025 03:00, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 24/10/2025 00:04, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
I note that you've ignored the vast majority of my previous article.
I've noted it, but chose not to reply. You have a point of view and
attitude which I don't share.
Mainly that you don't care how complicated a program for even a
simple task is, and how laborious and OS-dependent its build
process is, so long as it (eventually) works.
That it favours your own OS, leaving users of other to have to jump
through extra hoops, doesn't appear to bother you.
Why would someone care how someone else writes their code, or
what it does, or what systems it runs on? The guy who wrote cdecl
gets to choose exactly how he wants to write it, and what systems it
supports. We others get it for free - we can use it if we like and
it if it suits our needs. But neither Keith nor anyone else paid
that guy to do the work, or contributed anything to the task, and we
have no right to judge what he choose to do, or how he choose to do
it.
This is a curious argument: it's free software so you don't care in the
slightest how efficient it is or how user-friendly it might be to
build?
Its efficiency is not a great concern. I've seen no perceptible delay between issuing a command to cdecl and seeing the result. No, I don't
much care what it does behind the scenes. If I did care, I might look through the sources and try to think of ways to improve it. But the
effort to do so would vastly exceed any time I might save running it.
The build and installation process for cdecl is very user-friendly.
It
matches the process for thousands of other software packages that are distributed in source. I can see that the process might be confusing if you're not accustomed to it. If you *asked* rather than just
complaining, you might learn something.
The stripped executable occupies about 0.000008% of my hard drive.
This is a program that reads lines of text from the terminal and
translates them into another line of text. THAT needs thirty thousand
lines of configure script?! And that's even before you start compiling
the program itself.
The configure script is automatically generated from "configure.ac",
which is 343 lines, 241 lines if comments and blank lines are
deleted. I've never written a configure.ac file myself, but most
of it looks like boilerplate. It would probably be fairly easy
(with some experience) to create one by modifying an existing one
from another project.
I was talking about all the stuff scrolling endlessly up to the screen
for a minute and a half while running the configure script and then
compiling the modules.
Why is that a problem? If you like, you can redirect the output of "./configure" and "make" to a file, and take a look at the output later
if you need to (you probably won't).
[...]
On 24/10/2025 18:35, David Brown wrote:
On 24/10/2025 15:27, bart wrote:
On 24/10/2025 03:00, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 24/10/2025 00:04, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
I note that you've ignored the vast majority of my previous article.
I've noted it, but chose not to reply. You have a point of view and
attitude which I don't share.
Mainly that you don't care how complicated a program for even a
simple task is, and how laborious and OS-dependent its build process
is, so long as it (eventually) works.
That it favours your own OS, leaving users of other to have to jump
through extra hoops, doesn't appear to bother you.
Why would someone care how someone else writes their code, or
what it does, or what systems it runs on? The guy who wrote cdecl
gets to choose exactly how he wants to write it, and what systems it
supports. We others get it for free - we can use it if we like and it
if it suits our needs. But neither Keith nor anyone else paid that
guy to do the work, or contributed anything to the task, and we have
no right to judge what he choose to do, or how he choose to do it.
This is a curious argument: it's free software so you don't care in the slightest how efficient it is or how user-friendly it might be to build?
This is a program that reads lines of text from the terminal and
translates them into another line of text. THAT needs thirty thousand
lines of configure script?! And that's even before you start compiling
the program itself.
I'm thinking of making available some software that does even less, but
wrapping enough extra and POINTLESS levels of complexity around it that you'd need
to lease time on a super-computer to build it. But the software is free
so that makes it alright?
Well I built cdecl too, under WSL. Jesus, that looked a lot of work!
I have no experience with WSL, so I can't comment on the effort there.
I was talking about all the stuff scrolling endlessly up to the screen
for a minute and a half while running the configure script and then compiling the modules.
That program is 2.8 MB (10 times the size of my C compiler).
First, as usual, nobody cares about a couple of megabytes. Secondly,
if you /do/ care, then you might do at least a /tiny/ bit of
investigation. First, run "strip" on it to remove debugging symbols
- now it is a bit over 600 KB. By running "strings" on it, I can see
that about 100 KB is strings - messages, rules, types, keywords, etc.
If I was directly building it myself, then I would use -s with gcc. But since the process is automatic via makefiles, I assumed it would give me
a working, production version, not a version needing to be debugged!
It is entirely possible that your little alternative is a useful
program and does what you personally want and need with a smaller
executable. But cdecl does a great deal more,
CDECL translates a single C type specification into linear LTR form. Or
vice versa. That's what nearly everyone needs it for, and why it exists. Why, what other stuff does it do?
So, yes, anyone with an inquiring mind can form an idea of how much code might be needed for such a task, and how it ought to compare with a
complete language implementation.
In fact, people have posted algorithms here for doing exactly the same.
I don't recall that they took tens of thousands of lines to describe.
doing things that other people need and want (like handling C++
declarations - surely 100 times more effort than handling C
declarations, especially the limited older standards you use).
So, it's a hundred times bigger than necessary due to C++. That explains that then. (Sorry, 20 times bigger because whoever provided the build
system decided it should include debug info to make it 5 times as big
for no reason.)
David Brown <david.brown@hesbynett.no> writes:
On 24/10/2025 15:27, bart wrote:[...]
Well I built cdecl too, under WSL. Jesus, that looked a lot of work!
I have no experience with WSL, so I can't comment on the effort
there. For my own use on a Linux system, I had to install a package
(apt-get install libreadline-dev), but that's neither difficult to do,
or time-consuming, and it was not hard to see what was needed. Of
course, a non-programmer might not have realised that was needed, but
if you are stumped on a configure script error "readline.h header not
found, use --without-readline" and can't figure out how to get
"readline.h" or configure the program to avoid using it, and can't at
least google for help, then you are probably not the target audience
for cdecl.
WSL, "Windows Subsystem for Linux" (which should probably have been
called "Linux Subsystem for Windows") provides something that looks just
like a direct Linux desktop system. It supports several different Linux-based distributions. I use Ubuntu, and the build procedure under
WSL is exactly the same as under Ubuntu.
However, it took me a while to find where it put the executable, as
the make process doesn't directly tell you that. It seems it puts it
inside the src directory, which is unusual. It further appears that
you have to do 'make install' to be able to run it without a path.
I agree that putting the executable in "src" is a little odd. But
running "make install" is hardly unusual - it is as standard as it
gets. (And of course there are a dozen other different ways you can
arrange to run the programs without a path if you don't like "make
install".)
Putting the executable in src is very common for this kind of package.
I generally don't notice, since I always run "make install", which knows where to find the executable and where to copy it.
[...]
That program is 2.8 MB (10 times the size of my C compiler).
First, as usual, nobody cares about a couple of megabytes. Secondly,
if you /do/ care, then you might do at least a /tiny/ bit of
investigation. First, run "strip" on it to remove debugging symbols
- now it is a bit over 600 KB. By running "strings" on it, I can see
that about 100 KB is strings - messages, rules, types, keywords, etc.
It's easier than that. The Makefile provides an "install-strip" option
that does the installation and strips the executable. A lot of packages
like this support "make install-strip". For those that don't, just run "strip" manually after installation.
On 24/10/2025 20:50, bart wrote:
You are getting worked up about some text output that scrolled
"endlessly" for a minute and a half?
If I was directly building it myself, then I would use -s with gcc.
But since the process is automatic via makefiles, I assumed it would
give me a working, production version, not a version needing to be
debugged!
Actually, on closer checking (not because /I/ care, but because /you/ apparently care) it was not debugging information, but all the linking
and symbolic information that is a normal part of elf format files when
they are built (allowing for incremental linking, using the files as
static libraries for other programs, tracing the programs, fault-finding, etc.).
CDECL translates a single C type specification into linear LTR form.
Or vice versa. That's what nearly everyone needs it for, and why it
exists. Why, what other stuff does it do?
RTFM.
So, it's a hundred times bigger than necessary due to C++. That
explains that then. (Sorry, 20 times bigger because whoever provided
the build system decided it should include debug info to make it 5
times as big for no reason.)
It is not a program for expanding or explaining C declarations. It is a program for expanding or explaining C and C++ declarations. C++ is not "unnecessary", it is part of what it does.
This is another matter. The CDECL docs talk about C and C++ type declarations being 'gibberish'.
What do you feel about that, and the *need* for such a substantial tool
to help understand or write such declarations?
I would rather have put some effort into fixing the syntax so that such tools are not necessary!
On 24/10/2025 20:50, bart wrote:
I was talking about all the stuff scrolling endlessly up to the screen
for a minute and a half while running the configure script and then
compiling the modules.
You are getting worked up about some text output that scrolled
"endlessly" for a minute and a half? (Do you spot the exaggeration here?) Of course it is less of an issue for me - "./configure" took a
mere 10 seconds on my ten year old machine. But even at a minute and a half, it's just a task that the computer runs, once, and it is done
without effort. Try relaxing a little more, and perhaps use that minute and a half to stretch your legs or drink some coffee, rather than to
build up a pointless fury.
On 25/10/2025 12:04, David Brown wrote:
On 24/10/2025 20:50, bart wrote:
I was talking about all the stuff scrolling endlessly up to the
screen for a minute and a half while running the configure script and
then compiling the modules.
You are getting worked up about some text output that scrolled
"endlessly" for a minute and a half? (Do you spot the exaggeration
here?) Of course it is less of an issue for me - "./configure" took a
mere 10 seconds on my ten year old machine. But even at a minute and
a half, it's just a task that the computer runs, once, and it is done
without effort. Try relaxing a little more, and perhaps use that
minute and a half to stretch your legs or drink some coffee, rather
than to build up a pointless fury.
The point about the minute and a half is that a fast compiler even on my machine could translate tens of millions of lines of source code in that time. If the app was actually that size (say, a web browser) then fine.
But the C source is only 0.07Mloc. So what TF is going on?
It appears that this is one of those apps that is superficially written
in 'C' but it actually relies on a plethora of other languages, files,
tools and myriad kinds of options. You can't just go into ./src and do
'gcc *.c'.
Even the makefile has to first be generated. There are files
with .in, .am and .m4 extensions. The eventual 'makefile' has 2000 lines
of gobbledygook, to build 49 C modules. (My projects are also around 40 modules; the build info comprises, funnily enough, some 40 lines.)
So this is a complicated build process! Unfortunately it is typical of
such products originating from Unix-Linux (you really want one term that
you can use for both).
This is not specific to CDECL; it's nearly everything that comes out of Unix-Linux. But this came up and I had a look.
But let me ask then about this particular app (an interactive text-
based program where performance is irrelevant; it could have been
written in Python): do you think it would have been possible to
distribute this as a set of 100% *standard* C source files, with the
only dependency being *any* C compiler?
On 25/10/2025 12:04, David Brown wrote:
On 24/10/2025 20:50, bart wrote:
I was talking about all the stuff scrolling endlessly up to the
screen for a minute and a half while running the configure script
and then compiling the modules.
You are getting worked up about some text output that scrolled
"endlessly" for a minute and a half? (Do you spot the exaggeration
here?) Of course it is less of an issue for me - "./configure" took
a mere 10 seconds on my ten year old machine. But even at a minute
and a half, it's just a task that the computer runs, once, and it is
done without effort. Try relaxing a little more, and perhaps use
that minute and a half to stretch your legs or drink some coffee,
rather than to build up a pointless fury.
The point about the minute and a half is that a fast compiler even on
my machine could translate tens of millions of lines of source code in
that time. If the app was actually that size (say, a web browser) then
fine.
But the C source is only 0.07Mloc. So what TF is going on?
It appears that this is one of those apps that is superficially written
in 'C' but it actually relies on a plethora of other languages, files,
tools and myriad kinds of options. You can't just go into ./src and do
'gcc *.c'.
Even the makefile has to first be generated. There are files with .in,
.am and .m4 extensions. The eventual 'makefile' has 2000 lines of gobbledygook, to build 49 C modules. (My projects are also around 40
modules; the build info comprises, funnily enough, some 40 lines.)
So this is a complicated build process! Unfortunately it is typical of
such products originating from Unix-Linux (you really want one term
that you can use for both).
This is not specific to CDECL; it's nearly everything that comes out of Unix-Linux. But this came up and I had a look.
But let me ask then about this particular app (an interactive
text-based program where performance is irrelevant; it could have been written in Python): do you think it would have been possible to
distribute this as a set of 100% *standard* C source files, with the
only dependency being *any* C compiler?
[...]
(I remember trying to build A68G, an interpreter, on Windows, and the 'configure' step was a major obstacle. But I was willing to isolate the
12 C source files involved, then it was built in one second.
I did of course try building it in Linux too, and it took about 5
minutes as I recall, using a spinning hard drive, mostly spent
running through that configure script.
[...]
On 25/10/2025 14:51, bart wrote:
[...][...]
And I'd love to hear your plan for "fixing" the syntax of C - noting
that changing the syntax of C means getting the C standards committee to accept your suggestions, getting at least all major C compilers to
support them, and getting the millions of C programmers to use them.
[...] Unfortunately it is typical of such products originating from Unix-Linux (you really want one term that you can use for both).
[...]
bart <bc@freeuk.com> writes:
On 24/10/2025 18:35, David Brown wrote:
On 24/10/2025 15:27, bart wrote:
On 24/10/2025 03:00, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 24/10/2025 00:04, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
I note that you've ignored the vast majority of my previous
article.
I've noted it, but chose not to reply. You have a point of view
and attitude which I don't share.
Mainly that you don't care how complicated a program for even a
simple task is, and how laborious and OS-dependent its build
process is, so long as it (eventually) works.
That it favours your own OS, leaving users of other to have to
jump through extra hoops, doesn't appear to bother you.
Why would someone care how someone else writes their code, or
what it does, or what systems it runs on? The guy who wrote cdecl
gets to choose exactly how he wants to write it, and what systems
it supports. We others get it for free - we can use it if we like
and it if it suits our needs. But neither Keith nor anyone else
paid that guy to do the work, or contributed anything to the task,
and we have no right to judge what he choose to do, or how he
choose to do it.
This is a curious argument: it's free software so you don't care in the slightest how efficient it is or how user-friendly it might be to
build?
Its efficiency is not a great concern. I've seen no perceptible delay between issuing a command to cdecl and seeing the result. No, I don't
much care what it does behind the scenes. If I did care, I might look through the sources and try to think of ways to improve it. But the
effort to do so would vastly exceed any time I might save running it.
The build and installation process for cdecl is very user-friendly.
It matches the process for thousands of other software packages that
are distributed in source. I can see that the process might be
confusing if you're not accustomed to it. If you *asked* rather than
just complaining, you might learn something.
The stripped executable occupies about 0.000008% of my hard drive.
This is a program that reads lines of text from the terminal and
translates them into another line of text. THAT needs thirty
thousand lines of configure script?! And that's even before you
start compiling the program itself.
The configure script is automatically generated from "configure.ac",
which is 343 lines, 241 lines if comments and blank lines are
deleted. I've never written a configure.ac file myself, but most
of it looks like boilerplate. It would probably be fairly easy
(with some experience) to create one by modifying an existing one
from another project.
I'm thinking of making available some software that does even less,
but wrapping enough extra and POINTLESS levels of complexity around it that
you'd need to lease time on a super-computer to build it. But the
software is free so that makes it alright?
Free software still has to be usable. cdecl is usable for most of us.
[...]
I was talking about all the stuff scrolling endlessly up to the
screen for a minute and a half while running the configure script
and then compiling the modules.
Why is that a problem? If you like, you can redirect the output of "./configure" and "make" to a file, and take a look at the output
later if you need to (you probably won't).
[...]
(This reply is not meant for bart, but rather for all interested
folks who should not get repelled by his FUD posts.)
On 25.10.2025 00:18, bart wrote:
[...]
(I remember trying to build A68G, an interpreter, on Windows, and the
'configure' step was a major obstacle. But I was willing to isolate the
12 C source files involved, then it was built in one second.
I did of course try building it in Linux too, and it took about 5
minutes as I recall, using a spinning hard drive, mostly spent
running through that configure script.
(I don't know what system or system configuration the poster runs.
I'm well aware that if you are using the Windows platform you may
suffer from many things; but the platform choice is your decision!
But maybe he's just misremembering; and nonetheless spreading FUD.)
I've a quite old (~16+ years old) Linux system that was, back when I
bought it, already at the _very low performance range_.
With this old system the ./configure needs less than 10 seconds,
and the build process with make about _half a minute_ for the whole
a68g Genie system. - The whole procedure, from software download,
extraction, configure/make, and start an Algol application, needs
one minute! (Make that two minutes if you are typing v_e_r_y slowly
or have a slow download link. Or just put the necessary commands in
a shell file; just did that and it needed (including the download)
less than 45 seconds, and ready to run.)
On 26/10/2025 06:25, Janis Papanagnou wrote:
(This reply is not meant for bart, but rather for all interested
folks who should not get repelled by his FUD posts.)
On 25.10.2025 00:18, bart wrote:
[...]
(I remember trying to build A68G, an interpreter, on Windows, and the
'configure' step was a major obstacle. But I was willing to isolate the
12 C source files involved, then it was built in one second.
I did of course try building it in Linux too, and it took about 5
minutes as I recall, using a spinning hard drive, mostly spent
running through that configure script.
(I don't know what system or system configuration the poster runs.
I'm well aware that if you are using the Windows platform you may
suffer from many things; but the platform choice is your decision!
But maybe he's just misremembering; and nonetheless spreading FUD.)
I've a quite old (~16+ years old) Linux system that was back these
days when I bought it already at the _very low performance range_.
With this old system the ./configure needs less than 10 seconds,
and the build process with make about _half a minute_ for the whole
a68g Genie system. - The whole procedure, from software download,
extraction, configure/make, and start an Algol application, needs
one minute! (Make that two minutes if you are typing v_e_r_y slowly
or have a slow download link. Or just put the necessary commands in
a shell file; just did that and it needed (including the download)
less than 45 seconds, and ready to run.)
The 5 minutes I quoted may have been for CPython. It would be for some
Linux running under VirtualBox on a 2010 cheapest-in-the-shop PC.
If I try A68G now, under WSL, using a 2021 second-cheapest PC but with
SSD, I get:
  ./configure  20 seconds
  make         90 seconds
Only one minute; impressive! How about this:
 c:\qx>tm mm -r \mx\mm -r qq hello
 Hello World
 TM: 0.21
This runs my systems language /from source code/, which then runs my interpreter /from source code/ (i.e. compiles into memory and runs immediately), then runs that test program.
On 25/10/2025 14:51, bart wrote:
This is another matter. The CDECL docs talk about C and C++ type
declarations being 'gibberish'.
What do you feel about that, and the *need* for such a substantial
tool to help understand or write such declarations?
I would rather have put some effort into fixing the syntax so that
such tools are not necessary!
And I'd love to hear your plan for "fixing" the syntax of C - noting
that changing the syntax of C means getting the C standards committee to accept your suggestions, getting at least all major C compilers to
support them, and getting the millions of C programmers to use them.
On 26/10/2025 06:25, Janis Papanagnou wrote:
However the A68G configure script is 11000 lines; the CDECL one 31600 lines.
(I wonder why the latter needs 20000 more lines? I guess nobody is
curious - or they simply don't care.)
On 26/10/2025 11:26, bart wrote:
On 26/10/2025 06:25, Janis Papanagnou wrote:
So the following are after a restart of my PC:
Build CDECL under WSL (files were extracted before the restart):
60/56 seconds instead of 35/49 seconds for configure/make
My demo above running both compiler and interpreter from source:
0.31 seconds instead of 0.21 seconds
New test of gcc compiling hello.c:
1 second, settling down to 0.23 seconds on subsequent builds
bart <bc@freeuk.com> writes:
On 26/10/2025 06:25, Janis Papanagnou wrote:
However the A68G configure script is 11000 lines; the CDECL one 31600 lines.
(I wonder why the latter needs 20000 more lines? I guess nobody is
curious - or they simply don't care.)
You should be able to figure that out yourself. You may actually
learn something useful along the way.
Start with reading the autoconf documentation, fully, until you
understand the goals and the mechanisms used to meet those goals.
www.gnu.org/software/autoconf/manual/autoconf.html
bart <bc@freeuk.com> writes:
On 26/10/2025 11:26, bart wrote:
On 26/10/2025 06:25, Janis Papanagnou wrote:
So the following are after a restart of my PC:
Build CDECL under WSL (files were extracted before the restart):
60/56 seconds instead of 35/49 seconds for configure/make
My demo above running both compiler and interpreter from source:
0.31 seconds instead of 0.21 seconds
New test of gcc compiling hello.c:
1 second, settling down to 0.23 seconds on subsequent builds
Get back to us when your "build + compiler" system will successfully
build all the software that currently builds with autoconf, make and gcc.
On 26/10/2025 16:04, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
On 26/10/2025 06:25, Janis Papanagnou wrote:
However the A68G configure script is 11000 lines; the CDECL one 31600 lines.
(I wonder why the latter needs 20000 more lines? I guess nobody is
curious - or they simply don't care.)
You should be able to figure that out yourself. You may actually
learn something useful along the way.
So you don't know.
What special requirements does CDECL have (which has a task that is at
least a magnitude simpler than A68G's), that requires those 20,000 extra lines?
www.gnu.org/software/autoconf/manual/autoconf.html
Whatever the goals are, if they are even needed, the execution is poor.
That is even acknowledged in your link:
"(Before each check, they print a one-line message stating what they are checking for, so the user doesn’t get too bored while waiting for the script to finish.)"
That document is a classic example of making a fantastically complicated mountain out of a molehill.
...
On 2025-10-26, bart <bc@freeuk.com> wrote:
On 26/10/2025 16:04, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
On 26/10/2025 06:25, Janis Papanagnou wrote:
However the A68G configure script is 11000 lines; the CDECL one
31600 lines.
(I wonder why the latter needs 20000 more lines? I guess nobody is
curious - or they simply don't care.)
You should be able to figure that out yourself. You may actually
learn something useful along the way.
So you don't know.
What special requirements does CDECL have (which has a task that is
at least a magnitude simpler than A68G's), that requires those
20,000 extra lines?
I can't imagine why anyone would write cdecl (if it is written in C)
such that it's anything but a maximally conforming ISO C program,
which can be built like this:
    make cdecl
without any Makefile present, in a directory in which there is just
one file: cdecl.c.
An empty ./configure script can be provided so that downstream package maintainers are less confused by the simplicity:
    #!/bin/sh
    echo "cdecl successfully configured; run make"
There may be additional material for testing, of course.
www.gnu.org/software/autoconf/manual/autoconf.html
Whatever the goals are, if they are even needed, the execution is
poor. That is even acknowledged in your link:
"(Before each check, they print a one-line message stating what
they are checking for, so the user doesn’t get too bored while
waiting for the script to finish.)"
That document is a classic example of making a fantastically
complicated mountain out of a molehill.
It's a pile of crap developed by (and for) imbeciles, which made a
certain small amount of sense 30+ years ago when the Unix landscape
was a much more fragmented mess than it is now.
When you write a file called Makefile.am, it's like taping a piece
of paper to your ass saying "kick me with an ugly mountain of
technical debt which doesn't contribute a fucking thing to my actual application logic".
On Fri, 24 Oct 2025 13:20:45 -0700[...]
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Free software still has to be usable. cdecl is usable for most of us.
[...]
I'd say that it is not sufficiently usable for most of us to actually
use it.
Michael S <already5chosen@yahoo.com> writes:
On Fri, 24 Oct 2025 13:20:45 -0700[...]
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Free software still has to be usable. cdecl is usable for most of
us.
[...]
I'd say that it is not sufficiently usable for most of us to
actually use it.
Why do you say that?
On Wed 10/22/2025 2:39 PM, Keith Thompson wrote:
...
I believe I have already posted about it here... or maybe not?
cdecl.org reports a "syntax error" for declarations with top-level
`const` qualifiers on function parameters:
void foo(char *const)
Such declarations are perfectly valid. (And adding an explicit
parameter name does not help.)
The current version of cdecl.org still complains about it.
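(For reference, a minimal translation unit that a conforming C compiler
accepts, illustrating the point; the names foo and p are placeholders,
and the fp_read line is the one from the post further down:)

    #include <stddef.h>

    void foo(char *const);                        /* valid prototype; cdecl.org rejects it */
    int (*fp_read)(void *const, void *, size_t);  /* also valid (see the post further down) */

    void foo(char *const p)   /* the const only stops p itself being reassigned in here */
    {
        (void)p;
    }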
On 10/22/2025 2:39 PM, Keith Thompson wrote:
This is cross-posted to comp.lang.c and comp.lang.c++.
Consider redirecting followups as appropriate.
cdecl, along with c++decl, is a tool that translates C or C++
declaration syntax into English, and vice versa. For example :
$ cdecl
Type `help' or `?' for help
cdecl> explain const char *foo[42]
declare foo as array 42 of pointer to const char
cdecl> declare bar as pointer to function (void) returning int
int (*bar)(void )
It's also available via the web site <https://cdecl.org/>.
I must be doing something wrong:
int (*fp_read) (void* const, void*, size_t)
is a syntax error. It's from one of my older experiments:
On Sun, 26 Oct 2025 14:56:56 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Fri, 24 Oct 2025 13:20:45 -0700[...]
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Free software still has to be usable. cdecl is usable for most of
us.
[...]
I'd say that it is not sufficiently usable for most of us to
actually use it.
Why do you say that?
I would guess that less than 1 per cent of C programmers ever used it
and less than 5% of those who used it once continued to use it
regularly.
All numbers pulled out of thin air...
However the A68G configure script is 11000 lines; the CDECL one 31600 lines.
(I wonder why the latter needs 20000 more lines? I guess nobody is
curious - or they simply don't care. OK, let's make 100,000 and see if
anyone complains! Is it possible this is some elaborate joke on the
part of auto-conf to discover just how trusting and tolerant people
can be?)
Michael S <already5chosen@yahoo.com> writes:
On Sun, 26 Oct 2025 14:56:56 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Fri, 24 Oct 2025 13:20:45 -0700[...]
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Free software still has to be usable. cdecl is usable for most
of us.
[...]
I'd say that it is not sufficiently usable for most of us to
actually use it.
Why do you say that?
I would guess that less than 1 per cent of C programmers ever used
it and less than 5% of those who used it once continued to use it regularly.
All numbers pulled out of thin air...
So it's about usefulness, not usability. You're not saying that
it works incorrectly or that it's difficult to use (which would be
usability issues), but that the job it performs is not useful for
most C programmers.
(One data point: I use it occasionally.)
On Sun, 26 Oct 2025 15:45:34 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sun, 26 Oct 2025 14:56:56 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Fri, 24 Oct 2025 13:20:45 -0700[...]
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Free software still has to be usable. cdecl is usable for most
of us.
[...]
I'd say that it is not sufficiently usable for most of us to
actually use it.
Why do you say that?
I would guess that less than 1 per cent of C programmers ever used
it and less than 5% of those who used it once continued to use it
regularly.
All numbers pulled out of thin air...
So it's about usefulness, not usability. You're not saying that
it works incorrectly or that it's difficult to use (which would be
usability issues), but that the job it performs is not useful for
most C programmers.
No, it's about usability.
I'd imagine that, in order to be more usable, tools like that would
be better integrated into the programmer's text editor/IDE.
(One data point: I use it occasionally.)
How about your co-workers?
On 26/10/2025 06:25, Janis Papanagnou wrote:
(This reply is not meant for bart, but rather for all interested
folks who should not get repelled by his FUD posts.)
On 25.10.2025 00:18, bart wrote:
[...]
(I remember trying to build A68G, an interpreter, on Windows, and the
'configure' step was a major obstacle. But I was willing to isolate the
12 C source files involved, then it was built in one second.
I did of course try building it in Linux too, and it took about 5
minutes that I recall, using a spinning hard drive, mostly spent
running through that configure script.
(I don't know what system or system configuration the poster runs.
I'm well aware that if you are using the Windows platform you may
suffer from many things; but the platform choice is your decision!
But maybe he's just misremembering; and nonetheless spreading FUD.)
I've a quite old (~16+ years old) Linux system that was back these
days when I bought it already at the _very low performance range_.
With this old system the ./configure needs less than 10 seconds,
and the build process with make about _half a minute_ for the whole
a68g Genie system. - The whole procedure, from software download,
extraction, configure/make, and start an Algol application, needs
one minute! (Make that two minutes if you are typing v_e_r_y slowly
or have a slow download link. Or just put the necessary commands in
a shell file; just did that and it needed (including the download)
less than 45 seconds, and ready to run.)
The 5 minutes I quoted may have been for CPython. It would be for some
Linux running under VirtualBox on a 2010 cheapest-in-the-shop PC.
If I try A68G now, under WSL, using a 2021 second-cheapest PC but with
SSD, I get:
./configure 20 seconds
make 90 seconds
Trying CDECL again (I've done it several times after deleting the folder):
./configure 35 seconds
make 49 seconds
However the A68G configure script is 11000 lines; the CDECL one 31600
lines.
(I wonder why the latter needs 20000 more lines? I guess nobody is
curious - or they simply don't care. OK, let's make 100,000 and see if
anyone complains! Is it possible this is some elaborate joke on the part
of auto-conf to discover just how trusting and tolerant people can be?)
Anyway, I then tried this new 3.10 A68G on the Fannkuch(9) benchmark:
a68g fann.a68 5 seconds
./fann 3+3 seconds (via a68g --compile -O3 fann.a68)
I then tried it under my scripting language (not statically typed):
qq fann 0.4 seconds (qq built with my non-optimising compiler)
'qq' takes about 0.1 seconds to build - under Windows which is
considered slow for development. So, 1000 times faster to build, and it
runs this program at least 10 times faster, despite being dynamically
typed.
This is the vast difference between my world and yours.
The whole procedure, from software download,
extraction, configure/make, and start an Algol application, needs
one minute!
Only one minute; impressive!
How about this:
c:\qx>tm mm -r \mx\mm -r qq hello
Hello World
TM: 0.21
This runs my systems language /from source code/, which then runs my interpreter /from source code/ (ie. compiles into memory and runs immediately) then runs that test program.
In 1/5th of a second (or 1/300th of a minute). This is equivalent to
first compiling gcc from source (and all those extra utilities you seem
to need) before using it/them to build a68g. I guess that would take a
bit more than a minute.
On 25/10/2025 16:18, David Brown wrote:
On 25/10/2025 14:51, bart wrote:
This is another matter. The CDECL docs talk about C and C++ type
declarations being 'gibberish'.
What do you feel about that, and the *need* for such a substantial
tool to help understand or write such declarations?
I would rather have put some effort into fixing the syntax so that
such tools are not necessary!
And I'd love to hear your plan for "fixing" the syntax of C - noting
that changing the syntax of C means getting the C standards committee
to accept your suggestions, getting at least all major C compilers to
support them, and getting the millions of C programmers to use them.
I have posted such proposals in the past (probably before 2010).
On 26/10/2025 16:12, bart wrote:
On 25/10/2025 16:18, David Brown wrote:
On 25/10/2025 14:51, bart wrote:
This is another matter. The CDECL docs talk about C and C++ type
declarations being 'gibberish'.
What do you feel about that, and the *need* for such a substantial
tool to help understand or write such declarations?
I would rather have put some effort into fixing the syntax so that
such tools are not necessary!
And I'd love to hear your plan for "fixing" the syntax of C - noting
that changing the syntax of C means getting the C standards committee
to accept your suggestions, getting at least all major C compilers to
support them, and getting the millions of C programmers to use them.
I have posted such proposals in the past (probably before 2010).
No, you have not.
What you have proposed is a different way to write types in
declarations, in a different language. That's fine if you are making a different language. (For the record, I like some of your suggestions,
and dislike others - my own choice for an "ideal" syntax would be
different from both your syntax and C's.)
I asked you if you had a plan for /fixing/ the syntax of /C/. You don't.
As an analogy, suppose I invited you - as an architect and builder - to
see my house, and you said you didn't like the layout of the rooms, the kitchen was too small, and you thought the cellar was pointless complexity. I ask you if you can give me a plan to fix it, and you
respond by telling me your own house is nicer.
- my own choice for an "ideal" syntax would be
different from both your syntax and C's.)
On 26.10.2025 12:26, bart wrote:
The 5 minutes I quoted may have been for CPython. It would be for some
Linux running under VirtualBox on a 2010 cheapest-in-the-shop PC.
If I try A68G now, under WSL, using a 2021 second-cheapest PC but with
SSD, I get:
./configure 20 seconds
make 90 seconds
Have you examined what WSL and Windows is adding to your numbers?
(As I've noted several times already I'd not be surprised if your
platform contributes to your disappointment here.)
And you've seen my numbers. (Older PC, no SSDs, etc. - but Unix.)
Anyway, I then tried this new 3.10 A68G on the Fannkuch(9) benchmark:
a68g fann.a68 5 seconds
./fann 3+3 seconds (via a68g --compile -O3 fann.a68)
I then tried it under my scripting language (not statically typed):
qq fann 0.4 seconds (qq built with my non-optimising compiler)
Your 'qq' is an Algol 68 implementation?
(If not then you're comparing apples to oranges!)
'qq' takes about 0.1 seconds to build - under Windows which is
considered slow for development. So, 1000 times faster to build, and it
runs this program at least, 10 times faster, despite being dynamically
typed.
This is the vast difference between my world and yours.
If 'qq' is some language unrelated to Algol 68 this difference tells
nothing. (So please clarify. - Or else stop vacuous comparisons.)
So you're again advertising your personal language and tools. - I'm not interested in non-standard language (or Windows-) tools, as you've been
told so many times (also by others).
The tools I'm using for my personal purposes, and those that I had been
using for professional purposes, all served the necessary requirements. Yours don't.
On 27/10/2025 02:08, Janis Papanagnou wrote:
On 26.10.2025 12:26, bart wrote:
Anyway, I then tried this new 3.10 A68G on the Fannkuch(9) benchmark:
a68g fann.a68 5 seconds
./fann 3+3 seconds (via a68g --compile -O3 fann.a68)
I then tried it under my scripting language (not statically typed):
qq fann 0.4 seconds (qq built with my non-optimising compiler)
Your 'qq' is an Algol 68 implementation?
(If not then you're comparing apples to oranges!)
You've never, ever seen benchmarks comparing one language implementation
with another?
'qq' implements a pure interpreter for a dynamically typed language.
Algol68 is statically typed, which ought to give it the edge. It can be interpreted (the 5s figure) or compiled to native code (the 3s figure,
and it takes 3s to compile this 60-line program), which here makes
little difference.
So for all that trouble, A68G's performance is indifferent. If you don't
care for my language, then here some other timings:
A68G -O3/comp 6 seconds (3s to compile + 3s runtime)
A68G 5
CPython 3.14: 1.2
Lua 5.4 0.65
qq 0.4
(qq/opt 0.3 Optimised via C transpilation and gcc-O2)
PyPy 3.8: 0.2
LuaJIT: 0.12
The 0.2/0.12 timings are from JIT-accelerated versions.
[...]
A68G is poor on this benchmark. Other interpreted solutions are faster.
It is disappointing after taking all that effort to build.
So you're again advertising your personal language and tools. - I'm not
interested in non-standard language (or Windows-) tools, as you've been
told so many times (also by others).
Here's an example not related to my stuff:
c:\cx>tim tcc lua.c
Time: 0.120
This builds the Lua intepreter in 1/8th of a second. Now, Tiny C
generally produces indifferent code (ie. slow). Still, I get this result
from my benchmark:
Lua 5.4 0.65 (lua.exe built using Tiny C)
It's still at least FIVE TIMES FASTER than A68G!
The tools I'm using for my personal purposes, and those that I had been
using for professional purposes, all served the necessary requirements.
Yours don't.
I'm just showing just how astonishingly fast modern hardware can be.
Like at least a thousand times faster than a 1970s mainframe, and yet
people are still waiting on compilers!
But if you're happy with the performance of your tools, then that's fine.
On 27/10/2025 12:50, bart wrote:
Lua 5.4 0.65
Here's an example not related to my stuff:
c:\cx>tim tcc lua.c
Time: 0.120
This builds the Lua intepreter in 1/8th of a second. Now, Tiny C
generally produces indifferent code (ie. slow). Still, I get this
result from my benchmark:
Lua 5.4 0.65 (lua.exe built using Tiny C)
It's still at least FIVE TIMES FASTER than A68G!
Oops! I forgot to update the timing after copying that line. The proper figure should be:
Lua 5.4 1.5 seconds
So, sorry it's only 3 times as fast as A68G! And only twice as fast as compiled A68G code, if you forget about the latter's compilation time.
Here, also, you can build the Lua interpreter from source each time
(adding 0.12 seconds), and it would *still* be faster than A68G.
Not impressed? I thought not.
On 27.10.2025 13:58, bart wrote:
[...]
Not impressed? I thought not.
I'm sure non-dimensionally thinking folks may be impressed.
Janis
On 27.10.2025 13:50, bart wrote:
On 27/10/2025 02:08, Janis Papanagnou wrote:[...]
On 26.10.2025 12:26, bart wrote:
Anyway, I then tried this new 3.10 A68G on the Fannkuch(9) benchmark:
a68g fann.a68 5 seconds
./fann 3+3 seconds (via a68g --compile -O3 fann.a68)
I then tried it under my scripting language (not statically typed):
qq fann 0.4 seconds (qq built with my non-optimising compiler)
Your 'qq' is an Algol 68 implementation?
(If not then you're comparing apples to oranges!)
You've never, ever seen benchmarks comparing one language implementation
with another?
First of all, in communication with you here in Usenet I've seen you constantly switching goal posts. - Here again.
'qq' implements a pure interpreter for a dynamically typed language.
(Obviously completely useless to me.)
Algol68 is statically typed, which ought to give it the edge. It can be
interpreted (the 5s figure) or compiled to native code (the 3s figure,
and it takes 3s to compile this 60-line program), which here makes
little difference.
So for all that trouble, A68G's performance is indifferent. If you don't
care for my language, then here some other timings:
A68G -O3/comp 6 seconds (3s to compile + 3s runtime)
A68G 5
CPython 3.14: 1.2
Lua 5.4 0.65
qq 0.4
(qq/opt 0.3 Optimised via C transpilation and gcc-O2)
PyPy 3.8: 0.2
LuaJIT: 0.12
The 0.2/0.12 timings are from JIT-accelerated versions.
You are again switching goal posts. Here even twice; once for comparing
a68g compile times of some program, and second for comparing arbitrary
other languages.
- The topic of the sub-thread was my correction of your misinformation
about how long it takes to create a complete Genie runtime from
scratch: 45 seconds.
Speed is not an end in itself. It must be valued in comparison with
all the other often more relevant factors (that you seem to completely
miss, even when explained to you).
Speed seems to be important enough that huge efforts have gone into
I know your goals are space and speed. And that's fine in principle
(unless you're ignoring other relevant factors).
It's still at least FIVE TIMES FASTER than A68G! [2-3 TIMES FASTER]
So what? - I don't need a Lua system. So why should I care.
You are the one who seems to think that the speed factor is the most important factor to choose a language for a project. - You are wrong
for the general case. (But it may be right for your personal universe,
of course.)
The tools I'm using for my personal purposes, and those that I had been
using for professional purposes, all served the necessary requirements.
Yours don't.
I'm just showing just how astonishingly fast modern hardware can be.
Like at least a thousand times faster than a 1970s mainframe, and yet
people are still waiting on compilers!
It has been explained to you many times already, by many people, that differences in compile time may not outweigh other, more relevant factors.
On 27.10.2025 13:58, bart wrote:
On 27/10/2025 12:50, bart wrote:
Lua 5.4 0.65
Here's an example not related to my stuff:
c:\cx>tim tcc lua.c
Time: 0.120
This builds the Lua intepreter in 1/8th of a second. Now, Tiny C
generally produces indifferent code (ie. slow). Still, I get this
result from my benchmark:
Lua 5.4 0.65 (lua.exe built using Tiny C)
It's still at least FIVE TIMES FASTER than A68G!
Oops! I forgot to update the timing after copying that line. The proper
figure should be:
Lua 5.4 1.5 seconds
And what have you gained or lost in practice by this 0.85 seconds
delta?
(Clearly, you're wasting your time on marginalities! And thereby
completely missing or ignoring the more important factors.)
On 27/10/2025 09:44, David Brown wrote:
On 26/10/2025 16:12, bart wrote:
On 25/10/2025 16:18, David Brown wrote:
On 25/10/2025 14:51, bart wrote:
This is another matter. The CDECL docs talk about C and C++ type
declarations being 'gibberish'.
What do you feel about that, and the *need* for such a substantial
tool to help understand or write such declarations?
I would rather have put some effort into fixing the syntax so that
such tools are not necessary!
And I'd love to hear your plan for "fixing" the syntax of C - noting
that changing the syntax of C means getting the C standards
committee to accept your suggestions, getting at least all major C
compilers to support them, and getting the millions of C programmers
to use them.
I have posted such proposals in the past (probably before 2010).
No, you have not.
What you have proposed is a different way to write types in
declarations, in a different language. That's fine if you are making
a different language. (For the record, I like some of your
suggestions, and dislike others - my own choice for an "ideal" syntax
would be different from both your syntax and C's.)
I asked you if you had a plan for /fixing/ the syntax of /C/. You don't.
As an analogy, suppose I invited you - as an architect and builder -
to see my house, and you said you didn't like the layout of the rooms,
the kitchen was too small, and you thought the cellar was pointless
complexity. I ask you if you can give me a plan to fix it, and you
respond by telling me your own house is nicer.
Where did I say anything about my own house?
If my scheme was actually added and became popular, the old one could eventually be deprecated.
And yes it does 'fix' it by not requiring the use of tools like CDECL
when writing new code: type-specs are already in LTR, more English-like form.
- my own choice for an "ideal" syntax would be
different from both your syntax and C's.)
It sounds like it would also be different from CDECL. Perhaps you should contact the author to tell him what he's doing wrong!
On 27/10/2025 12:22, bart wrote:
Where did I say anything about my own house?
In the analogy, that would be your own language, and/or your own
declaration syntax that has nothing to do with C
If my scheme was actually added and became popular, the old one could
eventually be deprecated.
Is that your "plan" ?
And yes it does 'fix' it by not requiring the use of tools like CDECL
when writing new code: type-specs are already in LTR, more English-
like form.
Most C programmers don't need cdecl. The only people who do need it
either have very little knowledge and experience of C, or are faced
with code written by sadists (unfortunately that is not as rare as it
should be). Some others might occasionally find such a tool /useful/, but
finding it useful is not "needing". And with your bizarre syntax
So you have set up a straw man, claimed to "fix" this imaginary problem, while actually doing nothing of the sort.
And even if your syntax was as great as you think (IMHO it is nicer in
some ways, worse in others
- and I think most C programmers would agree
on that while not being able to agree on which parts are nicer or
worse), you still haven't shown the slightest concept of your claimed
"plan" to implement it.
Yes, my ideal would be different from the output of cdecl. No, the
author is not doing something "wrong". I live in a world where
programming languages are used by more than one person, and those people
can have different opinions.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
On 10/22/2025 2:39 PM, Keith Thompson wrote:
This is cross-posted to comp.lang.c and comp.lang.c++.
Consider redirecting followups as appropriate.
cdecl, along with c++decl, is a tool that translates C or C++
declaration syntax into English, and vice versa. For example :
$ cdecl
Type `help' or `?' for help
cdecl> explain const char *foo[42]
declare foo as array 42 of pointer to const char
cdecl> declare bar as pointer to function (void) returning int
int (*bar)(void )
It's also available via the web site <https://cdecl.org/>.
Yes.
I must be doing something wrong:
int (*fp_read) (void* const, void*, size_t)
is a syntax error. It's from one of my older experiments:
You're using the old 2.5 version. The newer forked version handles that declaration correctly, but you have to build it from source. cdecl.org
uses the old version.
Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.[...]
So they are utterly dependent on them. So much so that it is pretty
much impossible to build this stuff on any non-UNIX environment,
unless that environment is emulated. That is what happens with WSL,
MSYS2, CYGWIN.
Lua is not Algol 68.
On 25/10/2025 16:18, David Brown wrote:[...]
[...]And I'd love to hear your plan for "fixing" the syntax of C - noting
that changing the syntax of C means getting the C standards
committee to accept your suggestions, getting at least all major C
compilers to support them, and getting the millions of C programmers
to use them.
I have posted such proposals in the past (probably before 2010).
I can't remember the exact details, but I think it is possible to
superimpose LTR type syntax on top of the existing language.
bart <bc@freeuk.com> writes:
[...]
Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX. [...]
So they are utterly dependent on them. So much so that it is pretty
much impossible to build this stuff on any non-UNIX environment,
unless that environment is emulated. That is what happens with WSL,
MSYS2, CYGWIN.
**Yes, you're right**.
The GNU autotools typically work smoothly when used on Unix-like
systems. They can be made to work nearly as smoothly under Windows
by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very difficult to use them on pure Windows.
On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
bart <bc@freeuk.com> writes:
[...]
Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX. [...]
So they are utterly dependent on them. So much so that it is pretty
much impossible to build this stuff on any non-UNIX environment,
unless that environment is emulated. That is what happens with WSL,
MSYS2, CYGWIN.
**Yes, you're right**.
The GNU autotools typically work smoothly when used on Unix-like
systems. They can be made to work nearly as smoothly under Windows
by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
difficult to use them on pure Windows.
The way I see the status quo in this matter is this: cross-platform
programs originating or mainly focusing on Unix-likes require effort
/from their actual authors/ to have a native Windows port.
Whereas when such programs are ported to Unix-like which their
authors do not use, it is often possible for the users to get it
working without needing help from the authors. There may be some
patch to upstream, and that's about it.
Also, a proper Windows port isn't just a way to build on Windows.
Nobody does that. Windows doesn't have tools out of the box.
When you seriously commit to a Windows port, you provide a binary build
with a proper installer.
On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
bart <bc@freeuk.com> writes:
[...]
Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX. [...]
So they are utterly dependent on them. So much so that it is pretty
much impossible to build this stuff on any non-UNIX environment,
unless that environment is emulated. That is what happens with WSL,
MSYS2, CYGWIN.
**Yes, you're right**.
The GNU autotools typically work smoothly when used on Unix-like
systems. They can be made to work nearly as smoothly under Windows
by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
difficult to use them on pure Windows.
The way I see the status quo in this matter is this: cross-platform
programs originating or mainly focusing on Unix-likes require effort
/from their actual authors/ to have a native Windows port.
Whereas when such programs are ported to Unix-like which their
authors do not use, it is often possible for the users to get it
working without needing help from the authors. There may be some
patch to upstream, and that's about it.
Also, a proper Windows port isn't just a way to build on Windows.
Nobody does that. Windows doesn't have tools out of the box.
When you seriously commit to a Windows port, you provide a binary build
with a proper installer.
Lua is not Algol 68.
Correct.
Lua is a useful programming language.
Algol 68 is a great source of inspiration for designers of
programming languages.
Useful programming language it is not.
I can imagine either an enhanced version of the GNU autotools,
or a new set of tools similar to it, that could support building
software from source on Windows.
On 27/10/2025 13:39, Janis Papanagnou wrote:
On 27.10.2025 13:50, bart wrote:
It's still at least FIVE TIMES FASTER than A68G! [2-3 TIMES FASTER]
So what? - I don't need a Lua system. So why should I care.
You are the one who seems to think that the speed factor is the most
important factor to choose a language for a project. - You are wrong
for the general case. (But it may be right for your personal universe,
of course.)
You are wrong.
What language do you use most?
Let's say it is C
(although you usually post about every other language except C!).
Then, suppose your C compiler was written in Python rather than C++ or whatever and run under CPython. What do you think would happen to your build times?
Now imagine further if the CPython interpreter was itself written and executed with CPython.
So, the 'speed' of a language (ie. of its typical implementation, which
also depends on the language design) does matter.
If speed wasn't an issue then we'd all be using easy dynamic languages
for productivity. In reality those easy languages are far too slow in
most cases.
Speed is not an end in itself. It must be valued in comparison
with all the other often more relevant factors (that you seem to
completely miss, even when explained to you).
[...]
It has been explained to you many times already, by many people, that
differences in compile time may not outweigh other, more relevant factors.
I've also explained that I work in very frequent edit-run cycles. Then compile times matter.
This is why many like to use scripting languages
as those don't have a discernible build step.
[...]
On 10/27/2025 5:30 PM, Keith Thompson wrote:
[...]
I can imagine either an enhanced version of the GNU autotools,
or a new set of tools similar to it, that could support building
software from source on Windows.
https://vcpkg.io/en/packages?query=
Not bad, well for me, for now. Builds like a charm, so far.
[...]
On 27/10/2025 16:35, David Brown wrote:
On 27/10/2025 12:22, bart wrote:
/My syntax/ (as in my proposal) is bizarre,
but actual C type syntax isn't?!
The latter is possibly the worst-designed feature of any programming
language ever, certainly of any mainstream language. This is the syntax
for a pointer to an unbounded array of function pointers that return a pointer to int:
int *(*(*)[])()
This, is not bizarre?!
Even somebody reading it has to figure out which *
corresponds to which 'pointer to', and where the name might go if using
it to declare a variable.
In the LTR syntax I suggested, it would be:
ref[]ref func()ref int
The variable name goes on the right. For declaring three such variables,
it would be:
ref[]ref func()ref int a, b, c
Meanwhile, in C as it is, it would need to be something like this:
int *(*(*a)[])(), *(*(*b)[])(), *(*(*c)[])()
Or you have to use a workaround and create a named alias for the type
(what would you call it?):
typedef int *(*(*T)[])();
T a, b, c;
It's a fucking joke.
And yes, I needed to use a tool to get that first
'int *(*(*)[])()', otherwise I can spend forever in a trial and error
process of figuring where all those brackets and asterisks go.
THIS IS WHY such tools are necessary, because the language syntax as it
is is not fit for purpose.
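(For what it's worth, the declaration can be read mechanically by
working outward from the identifier; a name p is added below purely so
there is somewhere to start:)

    int *(*(*p)[])();
    /*
          p             p is
         *p             a pointer to
        (*p)[]          an array (unspecified size) of
       *(*p)[]          pointers to
      (*(*p)[])()       functions (unspecified parameters) returning
     *(*(*p)[])()       pointer to
     int                int
    */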
[...]
Yes, my ideal would be different from the output of cdecl. No, the
author is not doing something "wrong". I live in a world where
programming languages are used by more than one person, and those
people can have different opinions.
Find me one person who doesn't think that syntax like int *(*(*)[])()
is a complete joke.
David Brown <david.brown@hesbynett.no> wrote:
[...]
Sorry, "proof by analogy" is usually wrong. If you insist on
analogies the right one would be function prototypes: old style
function declarations where inherently unsafe and it was fixed
by adding new syntax for function declarations and definitions,
in parallel to old syntax. Now old style declarations are
officially retired. Bart proposed new syntax for all
declarations to be used in parallel with old ones, that is
exactly the same fix as used to solve the unsafety of old
function declarations.
bart <bc@freeuk.com> writes:
[...][...]
In my personal opinion, C's declaration syntax, cleverly based
on a somewhat loose "declaration follows use" principle,
is a not
entirely successful experiment that has caught on extraordinarily
well, probably due to C's other advantages as a systems programming
language. I would have preferred a different syntax **if** it had
been used in the original C **instead of** the current syntax. [...]
All else being equal, I would prefer a C-like language with clear left-to-right declaration syntax to C as it's currently defined.
But all else is not at all equal.
And I think that a future C that supports *both* the existing
syntax and your new syntax would be far worse than C as it is now. Programmers would have to learn both. Existing code would not
be updated. Most new code, written by experienced C programmers,
would continue to use the old syntax. Your plan to deprecate the
existing syntax would fail.
And that's why it will never happen. The ISO C committee would never consider this kind of radical change, even if it were shoehorned
into the syntax in a way that somehow doesn't break existing code.
[...]
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
On 10/27/2025 5:30 PM, Keith Thompson wrote:
[...]
I can imagine either an enhanced version of the GNU autotools,
or a new set of tools similar to it, that could support building
software from source on Windows.
https://vcpkg.io/en/packages?query=
Not bad, well for me, for now. Builds like a charm, so far.
[...]
Looks interesting, but I don't think it's quite what I was talking about (based on about 5 minutes browsing the website).
It seems to emphasize C and C++ *libraries* rather than applications.
And I don't see that it can be used to build an existing autotools-based package (like, say, cdecl) on Windows.
On 27.10.2025 18:44, bart wrote:
On 27/10/2025 16:35, David Brown wrote:
On 27/10/2025 12:22, bart wrote:
/My syntax/ (as in my proposal) is bizarre,
What was your proposal? - Anyway, it shouldn't be "bizarre"; it's
under your design-control!
but actual C type syntax isn't?!
There were reasons for that choice. And the authors have explained
them. - This doesn't make their choice any better, though, IMO.
The latter is possibly the worst-designed feature of any programming
language ever, certainly of any mainstream language. This is the syntax
for a pointer to an unbounded array of function pointers that return a
pointer to int:
int *(*(*)[])()
This, is not bizarre?!
You need to know the concept behind it. IOW, learn the language and
you will get used to it. (As with other features or "monstrosities".)
Even somebody reading has to figure out which *
corresponds to which 'pointer to', and where the name might go if using
it to declare a variable.
In the LTR syntax I suggested, it would be:
ref[]ref func()ref int
The variable name goes on the right. For declaring three such variables,
it would be:
ref[]ref func()ref int a, b, c
Meanwhile, in C as it is, it would need to be something like this:
int *(*(*a)[])(), *(*(*b)[])(), *(*(*c)[])()
Or you have to use a workaround and create a named alias for the type
(what would you call it?):
typedef int *(*(*T)[])();
T a, b, c;
It's a fucking joke.
Actually, this is a way to (somewhat) control the declaration "mess"
so that it doesn't propagate into the rest of the source code and
muddy each occurrence. It's also a good design principle (also when
programming in other languages) to use names for [complex] types.
I take that option 'typedef' as a sensible solution of this specific
problem with C's underlying declaration decisions.
And yes, I needed to use a tool to get that first
'int *(*(*)[])()', otherwise I can spend forever in a trial and error
process of figuring where all those brackets and asterisks go.
THIS IS WHY such tools are necessary, because the language syntax as it
is is not fit for purpose.
I never used 'cdecl' (as far as I recall). (I recall thinking
sometimes that such a tool could be useful.) For myself it was
sufficient to use a 'typedef' for complex cases. Constructing such
expressions is often easier than reading them.
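(A sketch of that typedef approach applied to the declaration above;
the intermediate names are invented here, and (void) is used where the
original had empty parameter lists:)

    typedef int *ret_int_ptr_fn(void);       /* function returning pointer to int            */
    typedef ret_int_ptr_fn *fn_ptr_array[];  /* unbounded array of pointers to such functions */
    typedef fn_ptr_array *array_ptr;         /* pointer to that array                         */

    array_ptr a, b, c;                       /* same type as int *(*(*a)[])(void), etc.       */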
[...]
Yes, my ideal would be different from the output of cdecl. No, the
author is not doing something "wrong". I live in a world where
programming languages are used by more than one person, and those
people can have different opinions.
Find me one person who doesn't think that syntax like int *(*(*)[])()
is a complete joke.
Maybe the authors (and all the enthusiastic adherents) of "C"?
On 10/27/2025 7:59 PM, Keith Thompson wrote:
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
On 10/27/2025 5:30 PM, Keith Thompson wrote:
[...]
I can imagine either an enhanced version of the GNU autotools,
or a new set of tools similar to it, that could support building
software from source on Windows.
https://vcpkg.io/en/packages?query=
Not bad, well for me, for now. Builds like a charm, so far.
[...]
Looks interesting, but I don't think it's quite what I was talking about
(based on about 5 minutes browsing the website).
So far, it can be used to cure some "headaches" over in Windows land... ;^)
It seems to emphasize C and C++ *libraries* rather than applications.
And I don't see that it can be used to build an existing autotools-based
package (like, say, cdecl) on Windows.
Well, if what you want is not in that list, you are shit out of
luck. ;^) It sure seems to build packages from source. For instance, I
got Cairo compiled and up and fully integrated into MSVC. Pretty nice.
At least it's there. Although, if it took a while to build everything,
Bart would be pulling his hair out. But it beats manually building
something that is not meant to be built on Windows, uggg, sometimes,
double uggg. MinGW, Cygwin, etc... vcpkg has all of them, and used them
to build certain things...
I have built Cairo on Windows, and vcpkg is just oh so easy. Well, keep
in mind, windows... ;^o
On 27.10.2025 16:11, bart wrote:
That's meaningless, but if you're interested to know...
Mostly (including my professional work) I've probably used C++.
But also other languages, depending on either projects' requirements
or, where there was a choice, what appeared to be fitting best (and
"best" sadly includes also bad languages if there's no alternative).
The build-times have rarely been an issue; never in private context,
and in professional contexts with MLOCS of code these things have
been effectively addressed.
files, or am I misremembering?)
Now imagine further if the CPython interpreter was inself written and
executed with CPython.
So, the 'speed' of a language (ie. of its typical implementation, which
also depends on the language design) does matter.
If speed wasn't an issue then we'd all be using easy dynamic languages
Huh? - Certainly not.
Speed is a topic, but as I wrote you have to put it in context
I can't tell about the "many" that you have in mind, and about their
mindset; I'm sure you either can't tell.
I'm using for very specific types of tasks "scripting languages" -
and keep in mind that there's no clean definition of that!
Michael S <already5chosen@yahoo.com> writes:
On Sun, 26 Oct 2025 14:56:56 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Fri, 24 Oct 2025 13:20:45 -0700[...]
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Free software still has to be usable. cdecl is usable for most
of us.
[...]
I'd say that it is not sufficiently usable for most of us to
actually use it.
Why do you say that?
I would guess that less than 1 per cent of C programmers ever used
it and less than 5% of those who used it once continued to use it regularly.
All numbers pulled out of thin air...
So it's about usefulness, not usability. You're not saying that
it works incorrectly or that it's difficult to use (which would be
usability issues), but that the job it performs is not useful for
most C programmers.
(One data point: I use it occasionally.)
On Sun, 26 Oct 2025 15:45:34 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sun, 26 Oct 2025 14:56:56 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Fri, 24 Oct 2025 13:20:45 -0700[...]
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Free software still has to be usable. cdecl is usable for most
of us.
[...]
I'd say that it is not sufficiently usable for most of us to
actually use it.
Why do you say that?
I would guess that less than 1 per cent of C programmers ever used
it and less than 5% of those who used it once continued to use it
regularly.
All numbers pulled out of thin air...
So it's about usefulness, not usability. You're not saying that
it works incorrectly or that it's difficult to use (which would be
usability issues), but that the job it performs is not useful for
most C programmers.
(One data point: I use it occasionally.)
A few minutes ago I typed 'pacman -S cdecl' at my msys2 command prompt.
Then I hit Y at the suggestion to proceed with installation. After another
second or three I had it installed. Then I tried it and even managed to
get a couple of declarations properly explained.
So, now I also belong to the less than 1 per cent :-)
In the process I finally understood why the build process is non-trivial.
It's mostly because of interactivity.
It's very hard to build a decent interactive program in a portable subset
of C. Or maybe even impossible rather than hard.
On 27.10.2025 21:39, Michael S wrote:
Lua is not Algol 68.
Correct.
Lua is a useful programming language.
(I have no stakes here. Never used it.)
Algol 68 is a great source of inspiration for designers of
programming languages.
Obviously.
Useful programming language it is not.
I have to read that as valuation of its usefulness for you.
(Otherwise, if you're speaking generally, you'd be just wrong.)
On 28/10/2025 12:56, Michael S wrote:
On Sun, 26 Oct 2025 15:45:34 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sun, 26 Oct 2025 14:56:56 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Fri, 24 Oct 2025 13:20:45 -0700[...]
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Free software still has to be usable. cdecl is usable for most
of us.
[...]
I'd say that it is not sufficiently usable for most of us to
actually use it.
Why do you say that?
I would guess that less than 1 per cent of C programmers ever used
it and less than 5% of those who used it once continued to use it
regularly.
All numbers pulled out of thin air...
So it's about usefulness, not usability. You're not saying that
it works incorrectly or that it's difficult to use (which would be
usability issues), but that the job it performs is not useful for
most C programmers.
(One data point: I use it occasionally.)
A few minutes ago I typed 'pacman -S cdecl' at my msys2 command prompt.
Then I hit Y at the suggestion to proceed with installation. After another
second or three I had it installed. Then I tried it and even managed to
get a couple of declarations properly explained.
So, now I also belong to the less than 1 per cent :-)
In the process I finally understood why the build process is non-trivial.
It's mostly because of interactivity.
It's very hard to build a decent interactive program in a portable subset
of C. Or maybe even impossible rather than hard.
I don't understand. What's hard about interactive programs?
The program below, which is in standard C and runs on both Windows and
Linux, should give you all the interactivity needed for a program like CDECL.
It reads a line of input, and prints something based on that. In between
would go all the non-interactive processing that it needs to do (parse
the line and so on).
So what's missing that could render this task impossible?
(Obviously, it will need a keyboard and display!)
----------------------------------------
#include <stdio.h>

int main(void) {
    char buffer[1000];
    puts("Type q to quit:");
    while (1) {
        printf("Cdecl> ");
        fflush(stdout);               /* make sure the prompt appears */
        if (fgets(buffer, sizeof(buffer), stdin) == NULL)
            break;                    /* stop on end of input         */
        if (buffer[0] == 'q') break;
        printf("Input was: %s\n", buffer);
    }
}
David Brown <david.brown@hesbynett.no> wrote:
On 26/10/2025 16:12, bart wrote:
On 25/10/2025 16:18, David Brown wrote:
On 25/10/2025 14:51, bart wrote:
This is another matter. The CDECL docs talk about C and C++ type
declarations being 'gibberish'.
What do you feel about that, and the *need* for such a substantial
tool to help understand or write such declarations?
I would rather have put some effort into fixing the syntax so that
such tools are not necessary!
And I'd love to hear your plan for "fixing" the syntax of C - noting
that changing the syntax of C means getting the C standards committee
to accept your suggestions, getting at least all major C compilers to
support them, and getting the millions of C programmers to use them.
I have posted such proposals in the past (probably before 2010).
No, you have not.
What you have proposed is a different way to write types in
declarations, in a different language. That's fine if you are making a
different language. (For the record, I like some of your suggestions,
and dislike others - my own choice for an "ideal" syntax would be
different from both your syntax and C's.)
I asked you if you had a plan for /fixing/ the syntax of /C/. You don't.
As an analogy, suppose I invited you - as an architect and builder - to
see my house, and you said you didn't like the layout of the rooms, the
kitchen was too small, and you thought the cellar was pointless
complexity. I ask you if you can give me a plan to fix it, and you
respond by telling me your own house is nicer.
Sorry, "proof by analogy" is usually wrong.
If you insist on
analogies the right one would be function prototypes: old style
function declarations were inherently unsafe and it was fixed
by adding new syntax for function declarations and definitions,
in parallel to old syntax. Now old style declarations are
officially retired. Bart proposed new syntax for all
declarations to be used in parallel with old ones, that is
exactly the same fix as used to solve the unsafety of old
function declarations.
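(A hedged illustration of that precedent, under pre-C23 rules where an
empty parameter list still means "unspecified parameters"; the function
names are placeholders:)

    double half_old();            /* old-style declaration: arguments are not checked */
    double half_new(double x);    /* prototype: arguments are checked and converted   */

    int main(void)
    {
        double a = half_old(2);   /* int passed where double is expected: accepted
                                     silently, but undefined behaviour at run time    */
        double b = half_new(2);   /* 2 is quietly converted to 2.0 - well defined     */
        return (int)(a + b);
    }

    double half_old(double x) { return x / 2.0; }
    double half_new(double x) { return x / 2.0; }

(The old form was only retired in C23, decades after prototypes were
added alongside it.)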
IMO the worst C problem is the standards process. Basically, once
a large vendor manages to subvert the language it gets
legitimized and becomes part of the standard. OTOH old warts are
preserved for a long time. Worse, new warts are introduced.
As an example, VMT-s (variably modified types) were a big opportunity
to make array access safer. But the version which is in the standard
skilfully sabotages potential compiler attempts to increase safety.
If you look carefully, there are several places in the standard
that effectively forbid static or dynamic error checks. Once
you add extra safety checks your implementation is
noncompliant.
It is likely that any standardized language is eventually
doomed to failure. This is pretty visible with Cobol,
but C seems to be on a similar trajectory (but at a much earlier
stage).
On 28/10/2025 03:00, Janis Papanagnou wrote:
On 27.10.2025 21:39, Michael S wrote:
Lua is not Algol 68.
Correct.
Lua is a useful programming language.
(I have no stakes here. Never used it.)
Its usefulness is demonstrated by its widespread use. It is mostly
used as a scripting or automation language integrated in other software,
rather than as a stand-alone language. It is particularly popular in
the gaming industry.
Algol 68 is a great source of inspiration for designers of
programming languages.
Obviously.
Useful programming language it is not.
I have to read that as valuation of its usefulness for you.
(Otherwise, if you're speaking generally, you'd be just wrong.)
The uselessness of Algol 68 as a programming language in the modern
world is demonstrated by the almost total non-existence of serious tools and, more importantly, real-world code in the language. It certainly
/was/ a useful programming language, long ago, but it has not been
seriously used outside of historical hobby interest for half a century.
And unlike other ancient languages (like Cobol or Fortran) there is no
code of relevance today written in the language. Original Algol was
mostly used in research, while Algol 68 was mostly not used at all. As C.A.R. Hoare said, "As a tool for the reliable creation of sophisticated programs, the language was a failure".
bart <bc@freeuk.com> writes:
On 28/10/2025 12:56, Michael S wrote:
On Sun, 26 Oct 2025 15:45:34 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sun, 26 Oct 2025 14:56:56 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Fri, 24 Oct 2025 13:20:45 -0700[...]
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Free software still has to be usable. cdecl is usable for most of us.
[...]
I'd say that it is not sufficiently usable for most of us to
actually use it.
Why do you say that?
I would guess that less than 1 per cent of C programmers ever used
it and less than 5% of those who used it once continued to use it
regularly.
All numbers pulled out of thin air...
So it's about usefulness, not usability. You're not saying that
it works incorrectly or that it's difficult to use (which would be
usability issues), but that the job it performs is not useful for
most C programmers.
(One data point: I use it occasionally.)
A few minutes ago I typed 'pacman -S cdecl' at my msys2 command prompt.
Then I hit Y at the suggestion to proceed with installation. After another
second or three I had it installed. Then I tried it and even managed to
get a couple of declarations properly explained.
So, now I also belong to the less than 1 per cent :-)
In the process I finally understood why the build process is non-trivial.
It's mostly because of interactivity.
It's very hard to build a decent interactive program in a portable subset
of C. Or maybe even impossible rather than hard.
I don't understand. What's hard about interactive programs?
The program below, which is in standard C and runs on both Windows and
Linux, should give you all the interactivity needed for a program like CDECL.
It reads a line of input, and prints something based on that. In between
would go all the non-interactive processing that it needs to do (parse
the line and so on).
So what's missing that could render this task impossible?
(Obviously, it will need a keyboard and display!)
----------------------------------------
#include <stdio.h>

int main(void) {
    char buffer[1000];
    puts("Type q to quit:");
    while (1) {
        printf("Cdecl> ");
        fflush(stdout);               /* make sure the prompt appears */
        if (fgets(buffer, sizeof(buffer), stdin) == NULL)
            break;                    /* stop on end of input         */
        if (buffer[0] == 'q') break;
        printf("Input was: %s\n", buffer);
    }
}
Where is the command line editing and history support
in this trivial application?
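(A minimal sketch of what the loop grows into once line editing and
history are added with GNU readline - precisely the kind of optional
dependency a configure script probes for; this assumes libreadline and
its headers are installed and the program is linked with -lreadline:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <readline/readline.h>
    #include <readline/history.h>

    int main(void)
    {
        char *line;
        while ((line = readline("Cdecl> ")) != NULL) {  /* cursor keys, editing */
            if (line[0] == 'q') { free(line); break; }
            add_history(line);                          /* enables up-arrow recall */
            printf("Input was: %s\n", line);
            free(line);                                 /* readline allocates each line */
        }
        return 0;
    }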
On 27/10/2025 20:52, Kaz Kylheku wrote:
On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
bart <bc@freeuk.com> writes:
[...]
Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.[...]
So they are utterly dependent on them. So much so that it is pretty
much impossible to build this stuff on any non-UNIX environment,
unless that environment is emulated. That is what happens with WSL,
MSYS2, CYGWIN.
**Yes, you're right**.
The GNU autotools typically work smoothly when used on Unix-like
systems. They can be made to work nearly as smoothly under Windows
by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
difficult to use them on pure Windows.
The way I see the status quo in this matter is this: cross-platform
programs originating or mainly focusing on Unix-likes require effort
/from their actual authors/ to have a native Windows port.
Whereas when such programs are ported to Unix-like which their
authors do not use, it is often possible for the users to get it
working without needing help from the authors. There may be some
patch to upstream, and that's about it.
Also, a proper Windows port isn't just a way to build on Windows.
Nobody does that. Windows doesn't have tools out of the box.
When you seriously commit to a Windows port, you provide a binary build
with a proper installer.
The problem with a binary distribution is AV software on the user's
machine which can block it.
To get around that AV, you either need to have some clout, be
In my case, rather than supply a monolithic executable (EXE file, which either the app itself, or some sort of installer), I've played around
David Brown <david.brown@hesbynett.no> writes:
On 28/10/2025 03:00, Janis Papanagnou wrote:
On 27.10.2025 21:39, Michael S wrote:
Lua is not Algol 68.
Correct.
Lua is a useful programming language.
(I have no stakes here. Never used it.)
Its usefulness is demonstrated by its widespread use. It is mostly
used as a scripting or automation language integrated in other
software, rather than as a stand-alone language. It is particularly
popular in the gaming industry.
Algol 68 is a great source of inspiration for designers of
programming languages.
Obviously.
Useful programming language it is not.
I have to read that as valuation of its usefulness for you.
(Otherwise, if you're speaking generally, you'd be just wrong.)
The uselessness of Algol 68 as a programming language in the modern
world is demonstrated by the almost total non-existence of serious
tools and, more importantly, real-world code in the language. It
certainly /was/ a useful programming language, long ago, but it has
not been seriously used outside of historical hobby interest for
half a century. And unlike other ancient languages (like Cobol or
Fortran) there is no code of relevance today written in the
language. Original Algol was mostly used in research, while Algol
68 was mostly not used at all. As C.A.R. Hoare said, "As a tool for
the reliable creation of sophisticated programs, the language was a failure".
There is still one computer system that uses Algol as both
the system programming language, and for applications.
Unisys Clearpath (descendents of the Burroughs B6500).
I can't imagine why anyone would write cdecl (if it is written in C)
such that it's anything but a maximally conforming ISO C program,
which can be built like this:
make cdecl
without any Makefile present, in a directory in which there is just
one file: cdecl.c.
You are exaggerating.
There is nothing wrong with multiple files and small nice manually
In that regard autotools resemble Postel's principle - the most harmful
On Tue, 28 Oct 2025 16:05:47 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
David Brown <david.brown@hesbynett.no> writes:
On 28/10/2025 03:00, Janis Papanagnou wrote:
On 27.10.2025 21:39, Michael S wrote:
Lua is not Algol 68.
Correct.
Lua is a useful programming language.
(I have no stakes here. Never used it.)
Its usefulness is demonstrated by its widespread use. It is mostly
used as a scripting or automation language integrated in other
software, rather than as a stand-alone language. It is particularly
popular in the gaming industry.
Algol 68 is a great source of inspiration for designers of
programming languages.
Obviously.
Useful programming language it is not.
I have to read that as valuation of its usefulness for you.
(Otherwise, if you're speaking generally, you'd be just wrong.)
The uselessness of Algol 68 as a programming language in the modern
world is demonstrated by the almost total non-existence of serious
tools and, more importantly, real-world code in the language. It
certainly /was/ a useful programming language, long ago, but it has
not been seriously used outside of historical hobby interest for
half a century. And unlike other ancient languages (like Cobol or
Fortran) there is no code of relevance today written in the
language. Original Algol was mostly used in research, while Algol
68 was mostly not used at all. As C.A.R. Hoare said, "As a tool for
the reliable creation of sophisticated programs, the language was a
failure".
There is still one computer system that uses Algol as both
the system programming language, and for applications.
Unisys Clearpath (descendants of the Burroughs B6500).
Is B6500 ALGOL related to A68?
Michael S <already5chosen@yahoo.com> writes:
On Tue, 28 Oct 2025 16:05:47 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
David Brown <david.brown@hesbynett.no> writes:
On 28/10/2025 03:00, Janis Papanagnou wrote:
On 27.10.2025 21:39, Michael S wrote:
Lua is not Algol 68.
Correct.
Lua is a useful programming language.
(I have no stakes here. Never used it.)
Its usefulness is demonstrated by its widespread use. It is
mostly used as a scripting or automation language integrated in
other software, rather than as a stand-alone language. It is
particularly popular in the gaming industry.
Algol 68 is a great source of inspiration for designers of
programming languages.
Obviously.
Useful programming language it is not.
I have to read that as valuation of its usefulness for you.
(Otherwise, if you're speaking generally, you'd be just wrong.)
The uselessness of Algol 68 as a programming language in the
modern world is demonstrated by the almost total non-existence of
serious tools and, more importantly, real-world code in the
language. It certainly /was/ a useful programming language, long
ago, but it has not been seriously used outside of historical
hobby interest for half a century. And unlike other ancient
languages (like Cobol or Fortran) there is no code of relevance
today written in the language. Original Algol was mostly used in
research, while Algol 68 was mostly not used at all. As C.A.R.
Hoare said, "As a tool for the reliable creation of sophisticated
programs, the language was a failure".
There is still one computer system that uses Algol as both
the system programming language, and for applications.
Unisys Clearpath (descendants of the Burroughs B6500).
Is B6500 ALGOL related to A68?
A-series ALGOL has many extensions.
DCAlgol, for example, is used to create applications
for data communications (e.g. poll-select multidrop
applications such as teller terminals, etc).
NEWP is an algol dialect used for systems programming
and the operating system itself.
ALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000098-517/86000098-517.pdf
DCALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000841-208.pdf
NEWP: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-21.0/86002003-409.pdf
On 28/10/2025 03:00, Janis Papanagnou wrote:
On 27.10.2025 21:39, Michael S wrote:
[ snip Lua statements ]
Algol 68 is a great source of inspiration for designers of
programming languages.
Obviously.
Useful programming language it is not.
I have to read that as valuation of its usefulness for you.
(Otherwise, if you're speaking generally, you'd be just wrong.)
The uselessness of Algol 68 as a programming language in the modern
world is demonstrated by the almost total non-existence of serious tools
and, more importantly, real-world code in the language.
It certainly /was/ a useful programming language, long ago,
but it has not been
seriously used outside of historical hobby interest for half a century.
And unlike other ancient languages (like Cobol or Fortran) there is no
code of relevance today written in the language.
Original Algol was
mostly used in research, while Algol 68 was mostly not used at all. As C.A.R. Hoare said, "As a tool for the reliable creation of sophisticated programs, the language was a failure".
I'm sure there are /some/ people who have or will write real code in
Algol 68 in modern times
(the folks behind the new gcc Algol 68
front-end want to be able to write code in the language),
but it is very much a niche language.
On Tue, 28 Oct 2025 16:05:47 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
There is still one computer system that uses Algol as both
the system programming language, and for applications.
Unisys Clearpath (descendants of the Burroughs B6500).
Is B6500 ALGOL related to A68?
My impression from the Wikipedia article is that B5000 ALGOL was a
proprietary offspring of A60. Wikipedia says nothing about the sources
of B6500 ALGOL, but considering that Burroughs was an American
enterprise and that back at the time ALGOL 68 was widely considered in
the US to be a failed European experiment, I would guess that B6500
ALGOL is derived from B5000 ALGOL rather than from A68.
been effectively addressed. (I recall you were unfamiliar with make
files, or am I misremembering?)
On 28/10/2025 02:35, Janis Papanagnou wrote:[...]
On 27.10.2025 16:11, bart wrote:
If speed wasn't an issue then we'd all be using easy dynamic languages
Huh? - Certainly not.
*I* would! That's why I made my scripting languages as fast and
capable as possible, so they could be used for more tasks.
However, if I dare to suggest that even one other person in the world
might also have the same desire, you'd say that I can't possibly know
that.
And yet here you are: you say 'certainly not'. Obviously *you* know
everyone else's mindset!
On 2025-10-28, bart <bc@freeuk.com> wrote:
On 27/10/2025 20:52, Kaz Kylheku wrote:
On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
bart <bc@freeuk.com> writes:
[...]
Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.[...]
So they are utterly dependent on them. So much so that it is pretty
much impossible to build this stuff on any non-UNIX environment,
unless that environment is emulated. That is what happens with WSL,
MSYS2, CYGWIN.
**Yes, you're right**.
The GNU autotools typically work smoothly when used on Unix-like
systems. They can be made to work nearly as smoothly under Windows
by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
difficult to use them on pure Windows.
The way I see the status quo in this matter is this: cross-platform
programs originating or mainly focusing on Unix-likes require effort
/from their actual authors/ to have a native Windows port.
Whereas when such programs are ported to a Unix-like system which their
authors do not use, it is often possible for the users to get it
working without needing help from the authors. There may be some
patch to upstream, and that's about it.
Also, a proper Windows port isn't just a way to build on Windows.
Nobody does that. Windows doesn't have tools out of the box.
When you seriously commit to a Windows port, you provide a binary build
with a proper installer.
The problem with a binary distribution is AV software on the user's
machine which can block it.
Well, then you're fucked. (Which, anyway, is a good general adjective
for someone still depending on Microsoft Windows.)
The problem with source distribution is that users on Windows don't
have any tooling. To get tooling, they would need to install binaries.
To get around that AV, you either need to have some clout, be
The way you do that is by developing a compelling program that helps
users get their work done and becomes popular, so users (and their
managers) can then convince their IT that they need it.
In my case, rather than supply a monolithic executable (EXE file, which
is either the app itself, or some sort of installer), I've played around
You are perhaps too hastily skipping over the idea of "some sort of installer".
Yes, use an installer for Windows if you're doing something
serious that is offered to the public, rather than just to a handful of friends or customers.
bart <bc@freeuk.com> writes:
On 28/10/2025 02:35, Janis Papanagnou wrote:[...]
On 27.10.2025 16:11, bart wrote:
If speed wasn't an issue then we'd all be using easy dynamic languages
Huh? - Certainly not.
*I* would! That's why I made my scripting languages as fast and
capable as possible, so they could be used for more tasks.
However, if I dare to suggest that even one other person in the world
might also have the same desire, you'd say that I can't possibly know
that.
And yet here you are: you say 'certainly not'. Obviously *you* know
everyone else's mindset!
I'll give this one more try.
This kind of thing makes it difficult to communicate with you.
This is why many like to use scripting languages
as those don't have a discernible build step.
I can't tell about the "many" that you have in mind, and about their mindset; I'm sure you can't tell either.
In this particular instance, you wrote that "we'd **all** be using easy dynamic languages" (emphasis added).
Janis replied "Certainly not." -- meaning that we would not **all** be
using easy dynamic languages. Janis is correct if there are only a few people, or even one person, who would not use easy dynamic languages.
In reply to that, you wrote that **you** would use such languages --
which is fine and dandy, but it doesn't refute what Janis wrote.
Nobody at any time claimed that *nobody* would use easy dynamic
languages. Obviously some people do and some people don't. If speed
were not an issue, that would still be the case, though it would likely change the numbers. (There are valid reasons other than speed to use non-dynamic languages.)
Are you with me so far?
You then wrote:
However, if I dare to suggest that even one other person in the world
might also have the same desire, you'd say that I can't possibly know
that.
That's wrong. I'll assume it was an honest mistake. If you suggested
that even one other person might also have the same desire, I don't
think anyone would dispute it. *Of course* there are plenty of people
who want to use dynamic languages, and there would be more if speed were
not an issue. As you have done before, you make incorrect assumptions
about other people's thoughts and motives.
And yet here you are: you say 'certainly not'. Obviously *you* know
everyone else's mindset!
The "certainly not" was in response to your claim that we would ALL
be using dynamic languages, a claim that was at best hyperbole. Nobody
has claimed to know everyone else's mindset.
You misunderstood what Janis wrote.
This post is likely to be a waste of time, but I'm prepared to be
pleasantly surprised.
On 28/10/2025 17:03, Kaz Kylheku wrote:
On 2025-10-28, bart <bc@freeuk.com> wrote:
On 27/10/2025 20:52, Kaz Kylheku wrote:
On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
bart <bc@freeuk.com> writes:
[...]
Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.[...]
So they are utterly dependent on them. So much so that it is pretty
much impossible to build this stuff on any non-UNIX environment,
unless that environment is emulated. That is what happens with WSL,
MSYS2, CYGWIN.
**Yes, you're right**.
The GNU autotools typically work smoothly when used on Unix-like
systems. They can be made to work nearly as smoothly under Windows
by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
difficult to use them on pure Windows.
The way I see the status quo in this matter is this: cross-platform
programs originating or mainly focusing on Unix-likes require effort
/from their actual authors/ to have a native Windows port.
Whereas when such programs are ported to a Unix-like system which their
authors do not use, it is often possible for the users to get it
working without needing help from the authors. There may be some
patch to upstream, and that's about it.
Also, a proper Windows port isn't just a way to build on Windows.
Nobody does that. Windows doesn't have tools out of the box.
When you seriously commit to a Windows port, you provide a binary build
with a proper installer.
The problem with a binary distribution is AV software on the user's
machine which can block it.
Well, then you're fucked. (Which, anyway, is a good general adjective
for someone still depending on Microsoft Windows.)
The problem with source distribution is that users on Windows don't
have any tooling. To get tooling, they would need to install binaries.
There seems little problem with installing well-known compilers.
An installer is just an executable like any other, at least if it has a
.EXE extension.
If you supply a one-file, self-contained ready-to-run application, then
it doesn't really need installing. Wherever it happens to reside after downloading, it can happily be run from there!
The only thing that's needed is to make it so that it can be run from anywhere without needing to type its path. But I can't remember any apps I've installed recently that seem to get that right, even with a
long-winded installer:
On 28/10/2025 21:59, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 28/10/2025 02:35, Janis Papanagnou wrote:[...]
On 27.10.2025 16:11, bart wrote:
I'll give this one more try.
If speed wasn't an issue then we'd all be using easy dynamic languages
Huh? - Certainly not.
*I* would! That's why I made my scripting languages as fast and
capable as possible, so they could be used for more tasks.
However, if I dare to suggest that even one other person in the world
might also have the same desire, you'd say that I can't possibly know
that.
And yet here you are: you say 'certainly not'. Obviously *you* know
everyone else's mindset!
This kind of thing makes it difficult to communicate with you.
You're talking to the wrong guy. It's JP who's difficult to talk to.
He (I assume) always dismisses every single one of my arguments out of hand:
Build speed is never a problem - ever. The speed of any language
implementation is never a concern either.
On 28/10/2025 02:35, Janis Papanagnou wrote:
On 27.10.2025 16:11, bart wrote:
That's meaningless, but if you're interested to know...
Mostly (including my professional work) I've probably used C++.
But also other languages, depending on either projects' requirements
or, where there was a choice, what appeared to be fitting best (and
"best" sadly includes also bad languages if there's no alternative).
Which bad languages are these?
[...]
That CDECL took, what, 49 seconds on my machine, to process 68Kloc of C? That's a whopping 1400 lines per second!
If we go back 45 years to machines that were 1000 times slower,
the same
process would only manage 1.4 lines per second, and it would take 13
HOURS, to create an interactive program that explained what 'int (*(*(*)))[]()' (whatever it was) might mean.
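(Checking the arithmetic: 68,000 lines / 49 s is roughly 1,400 lines
per second; at a thousandth of that, 1.4 lines per second, the same
68,000 lines would take about 48,600 seconds, i.e. roughly 13.5 hours.)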
So, yeah, build-time is a problem, even on the ultra-fast hardware we
have now.
Bear in mind that CDECL (like every finished product you build from
source) is a working, debugged program. You shouldn't need to do that
much analysis of it. And here, its performance is not critical either:
you don't even need fast code from it.
(I recall you were unfamiliar with make
files, or am I misremembering?)
I know makefiles. Never used them, never will.
You might recall that I create my own solutions.
Now imagine further if the CPython interpreter was itself written and
executed with CPython.
So, the 'speed' of a language (ie. of its typical implementation, which
also depends on the language design) does matter.
If speed wasn't an issue then we'd all be using easy dynamic languages
Huh? - Certainly not.
*I* would! That's why I made my scripting languages as fast and capable
as possible, so they could be used for more tasks.
However, if I dare to suggest that even one other person in the world
might also have the same desire, you'd say that I can't possibly know that.
And yet here you are: you say 'certainly not'. Obviously *you* know
everyone else's mindset!
Speed is a topic, but as I wrote you have to put it in context
Actually, the real topic is slowness. I'm constantly coming across
things which I know (from half a century working with computers) are far slower than they ought to be.
But I'm also coming across people who seem to accept that slowness as
just how things are. They should question things more!
I can't tell about the "many" that you have in mind, and about their
mindset; I'm sure you can't tell either.
I'm pretty sure there are quite a few million users of scripting languages.
I'm using "scripting languages" for very specific types of tasks -
and keep in mind that there's no clean definition of that!
They have typical characteristics as I'm quite sure you're aware. For example:
* Dynamic typing
* Run from source
* Instant edit-run cycle
* Possible REPL
* Uncluttered syntax
* Higher level features
* Extensive libraries so that you can quickly 'script' most tasks
So, interactivity and spontaneity. But they also have cons:
* Slower execution
* Little compile-time error checking
* Less control (of data structures for example)
On 28/10/2025 21:59, Keith Thompson wrote:
[...]
He (I assume) always dismisses every single one of my arguments out of
hand:
Build speed is never a problem - ever.
The speed of any language implementation is never a concern either.
[...]
When I gave the example of my language that was 1000 times faster to
build than A68G, and which ran that test 10 times faster than A68G, that apparently doesn't count; he doesn't care; or I'm changing the goalposts.
[...]
On the face of it, it is uncontroversial: they do allow rapid
development and instant feedback, as one of their several pros. Yet, JP
feels the need to be contrary:
I can't tell about the "many" that you have in mind, and about their mindset; I'm sure you can't tell either.
And now you have joined in, to back him up!
[...]
[...]
You misunderstood what Janis wrote.
I understand what he's trying to do. He despises me; he thinks the
projects I work on are worthless.
[...] Meanwhile he's a 'professional', as stated many times.
[...]
On 29.10.2025 00:14, bart wrote:
On 28/10/2025 21:59, Keith Thompson wrote:
[...]
He (I assume) always dismisses every single one of my arguments out of
hand:
No, I'm trying to speak about various things; basically my focus
is the facts. Not the persons involved. But there are persons with
specific mindsets (like you) that provoke reactions - to flaws in
your logic, misrepresentations, limited perspectives, etc.
Build speed is never a problem - ever.
Like here. You're making things up. - For example I clearly said:
"Speed is a topic". But you're so pathologically focused on
that factor that you miss the important project contexts. So I
then even quoted that (in case you missed it):
Speed is not an end in itself. It must be valued in comparison
with all the other often more relevant factors (that you seem to
completely miss, even when explained to you).
The speed of any language implemention is never a concern either.
Nonsense.
[...]
When I gave the example of my language that was 1000 times faster to
build than A68G, and which ran that test 10 times faster than A68G, that
apparently doesn't count; he doesn't care; or I'm changing the goalposts.
Exactly. Or comparing apples and oranges. - Sadly you do all that
regularly.
[...]
On the face of it, it is uncontroversial: they do allow rapid
development and instant feedback, as one of their several pros. Yet, JP
feels the need to be contrary:
I can't tell about the "many" that you have in mind, and about their mindset; I'm sure you can't tell either.
And now you have joined in, to back him up!
Bart, you should take Keith's words as benevolently meant; all he's
trying to do is get you to stop assuming that we want to hurt you when
we criticize misconceptions in your thinking, or when you consider a
topic only from one isolated perspective. If you continue to assume
that the "worst" was meant, and only against you, you won't get anywhere.
Keith has explained in his posts exactly what was said and meant, and
made your discussion maneuvers explicit. (I would have been happier
if you, Bart, had noticed yourself what was obvious to Keith.)
[...]
[...]
You misunderstood what Janis wrote.
I understand what he's trying to do. He despises me; he thinks the
Obviously you don't understand, and certainly also don't know what
I think; if you understood it you wouldn't have written this
nonsense.
projects I work on are worthless.
Actually, as far as I saw your projects, methods, and targets, yes;
they are completely worthless _for me_. (Mind the emphasis.)
I also doubt that they are of worth in typical professional contexts;
since they seem to lack some basic properties needed in professional contexts. - But that is your problem, not mine. (I just don't care.)
[...] Meanwhile he's a 'professional', as stated many times.
Oh, my perception is that the regulars here are *all* professionals!
And (typically) even to a high degree. - That's, I think, one reason
why you sometimes (often?) get headwind from the audience.
What I'm regularly trying to tell you is that your project setups
and results might only rarely serve the requirements in professional _projects_ as you find them in _professional software companies_.
You cannot seem to accept that.
Personally I'm not working anymore professionally. (I mentioned that occasionally.) But I've still the expertise from my professional work
and education, and I share my experiences with those who are interested.
You, personally, are of no interest to me; your presumptions are thus
wrong. (I'm interested in CS and IT topics.)
On 28.10.2025 12:16, bart wrote:
* Less control (of data structures for example)
Not sure what you mean (control constructs, more data structures).
But have a look into Unix shells for control constructs, and into
Kornshell specifically for data structures.
It's a very inhomogeneous area. Impossible to clearly classify.
Janis
On 28.10.2025 12:16, bart wrote:
On 28/10/2025 02:35, Janis Papanagnou wrote:
On 27.10.2025 16:11, bart wrote:
That's meaningless, but if you're interested to know...
Mostly (including my professional work) I've probably used C++.
But also other languages, depending on either projects' requirements
or, where there was a choice, what appeared to be fitting best (and
"best" sadly includes also bad languages if there's no alternative).
Which bad languages are these?
Are you hunting for a language war discussion? - I won't start it here.
If you want, please start an appropriate topic in comp.lang.misc or so.
[...]
That CDECL took, what, 49 seconds on my machine, to process 68Kloc of C?
That's a whopping 1400 lines per second!
If we go back 45 years to machines that were 1000 times slower,
We are not in those days any more.
Nowadays there's much more complex
software; some inherently badly designed software, and in other cases
they might not care about tweaking the last second out of a process
(for various reasons). So this comparison isn't really contributing
anything here.
So, yeah, build-time is a problem, even on the ultra-fast hardware we
have now.
What problem? - That you don't want to wait a few seconds?
I know makefiles. Never used them, never will.
(Do what you prefer. - After all you're not cooperating with others in
your personal projects, as I understood, so there's no need to "learn"
[or just use!] things you don't like. If you think it's a good idea to
spend time writing your own code for already solved tasks, I'm fine with
that.)
You might recall that I create my own solutions.
I don't recall, to be honest. But let's rather say; I'm not astonished
that you have "created your own solutions". (Where other folks would
just use an already existing, flexibly and simply usable, working and supported solution.) - So that's your problem not anyone else's.
I also think that there are not a few people who accept inferior quality;
how else could the success of, say, DOS, Windows, and off-the-shelf
MS office software be explained?
* Dynamic typing
Marcel van der Veer is advertising Genie (his Algol 68 interpreter) as
a system usable for scripting. (With static rather than dynamic typing.)
* Run from source
How about JIT, how about intermediate languages?
* Little compile-time error checking
(We already commented in your above point "Dynamic typing".)
* Less control (of data structures for example)
Not sure what you mean (control constructs, more data structures).
It's a very inhomogeneous area. Impossible to clearly classify.
I don't need a linker, I don't need a makefile, I don't need lists of dependencies between modules, I don't need independent compilation, I
don't use object files.
On 28.10.2025 15:59, David Brown wrote:
On 28/10/2025 03:00, Janis Papanagnou wrote:
On 27.10.2025 21:39, Michael S wrote:
[ snip Lua statements ]
Algol 68 is a great source of inspiration for designers of
programming languages.
Obviously.
Useful programming language it is not.
I have to read that as valuation of its usefulness for you.
(Otherwise, if you're speaking generally, you'd be just wrong.)
The uselessness of Algol 68 as a programming language in the modern
world is demonstrated by the almost total non-existence of serious tools
and, more importantly, real-world code in the language.
Obviously you are mixing the terms usefulness and dissemination
(its actual use). Please accept that I'm differentiating here.
There are quite a few [historic] languages that were very useful but
never spread. (For another prominent example cf. Simula, which not
only invented the object-oriented principles of classes and
inheritance and served as a paragon for quite a few later OO languages,
but also made many more technical and design innovations, some even
now still unmatched.) It's a pathological historical phenomenon
that programming languages from non-US locations had inherent
problems spreading, especially back in those days!
Reasons for dissemination of a language are multifold; back then
(but to a degree also today) they were often determined by political
and marketing factors... (you can read about that in various historic documents and also in later ruminations about computing history)
It certainly /was/ a useful programming language, long ago,
...as you seem to basically agree to here. (At least as far as you
couple usefulness with dissemination.)
but it has not been
seriously used outside of historical hobby interest for half a century.
(Make that four decades. It was still being used in the mid-1980s. - Later
I didn't follow it anymore, so I cannot tell about the 1990s.)
(I also disagree with your valuation "hobby interest"; for "hobbies"
more easily accessible languages were used, not systems that were
back in those days mainly available only on mainframes.)
As far as you mean programming software systems, that may be true;
I can't claim to have an overview of who did use it. I've read
about various applications, though; amongst them that it's even been
used as a systems programming language (which astonished me).
And unlike other ancient languages (like Cobol or Fortran) there is no
code of relevance today written in the language.
Probably right. (That would certainly be also my guess.)
Original Algol was
mostly used in research, while Algol 68 was mostly not used at all. As
C.A.R. Hoare said, "As a tool for the reliable creation of sophisticated
programs, the language was a failure".
I don't know the context of his statement. If you know the language
you might admit that reliable software is exactly one strong property
of that language. (Per se already, but especially so if compared to
languages like "C", the language discussed in this newsgroup, with an extremely large dissemination and also impact.)
I'm sure there are /some/ people who have or will write real code in
Algol 68 in modern times
The point was that the language per se was and is useful. But its
actual usage for developing software systems seems to have been of
little importance, and today it is without doubt of none.
(the folks behind the new gcc Algol 68
front-end want to be able to write code in the language),
There's more than the gcc folks. (I've heard that gcc has taken some
substantial code from Genie, an Algol 68 "compiler-interpreter" that
is still maintained. BTW, I for example use that one, not gcc's.)
but it is very much a niche language.
It's _functionally_ a general purpose language, not a niche language
(in the sense of "special purpose language"). Its dissemination makes
it to a "niche language", that's true. It's in practice just a dead
language. It's rarely used by anyone. But it's a very useful language.
On 28/10/2025 21:59, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 28/10/2025 02:35, Janis Papanagnou wrote:[...]
On 27.10.2025 16:11, bart wrote:
If speed wasn't an issue then we'd all be using easy dynamic languages
Huh? - Certainly not.
*I* would! That's why I made my scripting languages as fast and
 capable as possible, so they could be used for more tasks.
However, if I dare to suggest that even one other person in the world
might also have the same desire, you'd say that I can't possibly know
that.
And yet here you are: you say 'certainly not'. Obviously *you* know
everyone else's mindset!
I'll give this one more try.
This kind of thing makes it difficult to communicate with you.
You're talking to the wrong guy. It's JP who's difficult to talk to.
He (I assume) always dismisses every single one of my arguments out of
hand:
Build speed is never a problem - ever. The speed of any language implementation is never a concern either.
On 10/29/25 15:40, bart wrote:
I don't need a linker, I don't need a makefile, I don't need lists of
dependencies between modules, I don't need independent compilation, I
don't use object files.
    s/don't need/refuse to use/
On 28/10/2025 20:14, Janis Papanagnou wrote:
Reasons for dissemination of a language are multifold; back then
(but to a degree also today) they were often determined by political
and marketing factors... (you can read about that in various historic
documents and also in later ruminations about computing history)
I can certainly agree that some languages, including Algol, Algol 68 and
Simula, have had very significant influence on the programming world and
other programming languages, despite limited usage. I was interpreting
"useful programming language" as meaning "a language useful for writing
programs" - and neither Algol 68 nor Simula are sensible choices for
writing code today. Neither of them were ever appropriate choices for
many programming tasks (Algol and its derivatives were used a lot more
than Algol 68). The lack of significant usage of these languages beyond
a few niche cases is evidence (but not proof) that they were never
particularly useful as programming languages.
bart <bc@freeuk.com> writes:
On 28/10/2025 21:59, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 28/10/2025 02:35, Janis Papanagnou wrote:[...]
On 27.10.2025 16:11, bart wrote:
I'll give this one more try.
If speed wasn't an issue then we'd all be using easy dynamic languages
Huh? - Certainly not.
*I* would! That's why I made my scripting languages as fast and
capable as possible, so they could be used for more tasks.
However, if I dare to suggest that even one other person in the world
might also have the same desire, you'd say that I can't possibly know
that.
And yet here you are: you say 'certainly not'. Obviously *you* know
everyone else's mindset!
This kind of thing makes it difficult to communicate with you.
You're talking to the wrong guy. It's JP who's difficult to talk to.
No, I'm talking to you. It turns out that was a mistake.
My post was **only** about your apparent confusion about a single
statement, quoted above. I wasn't talking about JP personally, or about
any of his other interactions with you. I explained in great detail
what I was referring to. You ignored it.
You seem unwilling or unable to focus on one thing.
He (I assume) always dismisses every single one of my arguments out of hand:
Build speed is never a problem - ever. The speed of any language
implementation is never a concern either.
And here you are putting words in other people's mouths.
I think your goal is to argue, not to do anything that might result in agreement or learning.
On 26/10/2025 16:04, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
On 26/10/2025 06:25, Janis Papanagnou wrote:
However the A68G configure script is 11000 lines; the CDECL one 31600 lines.
(I wonder why the latter needs 20000 more lines? I guess nobody is
curious - or they simply don't care.)
You should be able to figure that out yourself. You may actually
learn something useful along the way.
So you don't know.
What special requirements does CDECL have (which has a task that is at
least a magnitude simpler than A68G's), that requires those 20,000 extra lines?
On 29/10/2025 00:14, bart wrote:
On 28/10/2025 21:59, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 28/10/2025 02:35, Janis Papanagnou wrote:[...]
On 27.10.2025 16:11, bart wrote:
If speed wasn't an issue then we'd all be using easy dynamic languages
Huh? - Certainly not.
*I* would! That's why I made my scripting languages as fast and
 capable as possible, so they could be used for more tasks.
However, if I dare to suggest that even one other person in the world
might also have the same desire, you'd say that I can't possibly know
that.
And yet here you are: you say 'certainly not'. Obviously *you* know
everyone else's mindset!
I'll give this one more try.
This kind of thing makes it difficult to communicate with you.
You're talking to the wrong guy. It's JP who's difficult to talk to.
He (I assume) always dismisses every single one of my arguments out of
hand:
Build speed is never a problem - ever. The speed of any language
implementation is never a concern either.
Bart, I think this all comes down to some basic logic that you get wrong regularly :
The opposite of "X is always true" is /not/ "X is always false" or that "(not X) is always true". It is that "X is /sometimes/ false", or that "(not X) is /sometimes/ true".
You get this wrong repeatedly when you and I are in disagreement, and I
see it again and again with other people - such as with both Janis and Keith.
No one, in any of the posts I have read in c.l.c. in countless years,
has ever claimed that "build speed is /never/ a problem". People have regularly said that it /often/ is not a problem, or it is not a problem
in their own work, or that slow compile times can often be dealt with in various ways so that it is not a problem. People don't disagree that
build speed can be an issue - they disagree with your claims that it
is /always/ an issue (except when using /your/ tools, or perhaps tcc).
Michael S <already5chosen@yahoo.com> writes:...
On Tue, 28 Oct 2025 16:05:47 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
There is still one computer system that uses Algol as both
the system programming language, and for applications.
Unisys Clearpath (descendants of the Burroughs B6500).
Is B6500 ALGOL related to A68?
A-series ALGOL has many extensions.
DCAlgol, for example, is used to create applications
for data communications (e.g. poll-select multidrop
applications such as teller terminals, etc).
NEWP is an algol dialect used for systems programming
and the operating system itself.
ALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000098-517/86000098-517.pdf
DCALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000841-208.pdf
NEWP: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-21.0/86002003-409.pdf
On 29/10/2025 01:48, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 28/10/2025 21:59, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 28/10/2025 02:35, Janis Papanagnou wrote:[...]
On 27.10.2025 16:11, bart wrote:
If speed wasn't an issue then we'd all be using easy dynamic languages [...]
On 29/10/2025 16:12, David Brown wrote:
On 29/10/2025 00:14, bart wrote:
On 28/10/2025 21:59, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 28/10/2025 02:35, Janis Papanagnou wrote:[...]
On 27.10.2025 16:11, bart wrote:
If speed wasn't an issue then we'd all be using easy dynamic languages
Huh? - Certainly not.
*I* would! That's why I made my scripting languages as fast and
 capable as possible, so they could be used for more tasks.
However, if I dare to suggest that even one other person in the world
might also have the same desire, you'd say that I can't possibly know
that.
And yet here you are: you say 'certainly not'. Obviously *you* know
everyone else's mindset!
I'll give this one more try.
This kind of thing makes it difficult to communicate with you.
You're talking to the wrong guy. It's JP who's difficult to talk to.
He (I assume) always dismisses every single one of my arguments out
of hand:
Build speed is never a problem - ever. The speed of any language
implementation is never a concern either.
Bart, I think this all comes down to some basic logic that you get
wrong regularly :
The opposite of "X is always true" is /not/ "X is always false" or
that "(not X) is always true". It is that "X is /sometimes/ false",
or that "(not X) is /sometimes/ true".
You get this wrong repeatedly when you and I are in disagreement, and
I see it again and again with other people - such as with both Janis
and Keith.
No one, in any of the posts I have read in c.l.c. in countless years,
has ever claimed that "build speed is /never/ a problem". People have
regularly said that it /often/ is not a problem, or it is not a
problem in their own work, or that slow compile times can often be
dealt with in various ways so that it is not a problem. People don't
disagree that build speed can be an issue - they disagree with your
claims that it is /always/ an issue (except when using /your/ tools,
or perhaps tcc).
It was certainly an issue here: the 'make' part of building CDECL and
A68G, I considered slow for the scale of the task given that the apps
are 68 and 78Kloc (static total of .c and .h files).
On 29/10/2025 16:12, David Brown wrote:
On 29/10/2025 00:14, bart wrote:
On 28/10/2025 21:59, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 28/10/2025 02:35, Janis Papanagnou wrote:[...]
On 27.10.2025 16:11, bart wrote:
If speed wasn't an issue then we'd all be using easy dynamic languages
Huh? - Certainly not.
*I* would! That's why I made my scripting languages as fast and
 capable as possible, so they could be used for more tasks.
However, if I dare to suggest that even one other person in the world
might also have the same desire, you'd say that I can't possibly know
that.
And yet here you are: you say 'certainly not'. Obviously *you* know
everyone else's mindset!
I'll give this one more try.
This kind of thing makes it difficult to communicate with you.
You're talking to the wrong guy. It's JP who's difficult to talk to.
He (I assume) always dismisses every single one of my arguments out of
hand:
Build speed is never a problem - ever. The speed of any language
implementation is never a concern either.
Bart, I think this all comes down to some basic logic that you get wrong regularly :
The opposite of "X is always true" is /not/ "X is always false" or that "(not X) is always true". It is that "X is /sometimes/ false", or that "(not X) is /sometimes/ true".
You get this wrong repeatedly when you and I are in disagreement, and I see it again and again with other people - such as with both Janis and Keith.
No one, in any of the posts I have read in c.l.c. in countless years,
has ever claimed that "build speed is /never/ a problem". People have regularly said that it /often/ is not a problem, or it is not a problem
in their own work, or that slow compile times can often be dealt with in various ways so that it is not a problem. People don't disagree that build speed can be an issue - they disagree with your claims that it
is /always/ an issue (except when using /your/ tools, or perhaps tcc).
It was certainly an issue here: the 'make' part of building CDECL and
A68G, I considered slow for the scale of the task given that the apps
are 68 and 78Kloc (static total of .c and .h files).
A68G I know takes 90 seconds to build (since I've just tried it again;
it took long enough that I had an ice-cream while waiting, so that's something).
That's under 1Kloc per second; not great.
But at least all the optimising would have produced a super-fast
executable? Well, that's disappointing too; no-one can say that A68G is fast.
I said that my equivalent product was 1000 times faster to build (don't forget the configure nonsense) and it ran 10 times faster on the same test.
That is a quite remarkable difference. VERY remarkable. Only some of it
is due to my product being smaller (but it's not 1000 times smaller!).
This was stated to demonstrate how different my world was.
My view is that there is something very wrong with the build systems everyone here uses. But I can understand that no one wants to admit that they're that bad.
You find ways around it, you get inured to it, or you just use much
more powerful machines than mine; but I would go round the bend if
I had to work with something so unresponsive.
bart <bc@freeuk.com> writes:
On 29/10/2025 01:48, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
On 28/10/2025 21:59, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 28/10/2025 02:35, Janis Papanagnou wrote:[...]
On 27.10.2025 16:11, bart wrote:
If speed wasn't an issue then we'd all be using easy dynamic languages
Bart, is the above statement literally accurate?
Do you believe that
we would ALL be using "easy dynamic languages" if speed were not an
issue, meaning that non-dynamic languages would die out completely?
That's what this whole sub-argument is about. Or perhaps what you
really meant is that dynamic languages would be more popular than
they are now if speed were not an issue. Possibly someone just took
your figurative statement a little too literally. If that's the
case, please just say so.
bart <bc@freeuk.com> wrote:
On 26/10/2025 16:04, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
On 26/10/2025 06:25, Janis Papanagnou wrote:
However the A68G configure script is 11000 lines; the CDECL one 31600 lines.
(I wonder why the latter needs 20000 more lines? I guess nobody is
curious - or they simply don't care.)
You should be able to figure that out yourself. You may actually
learn something useful along the way.
So you don't know.
What special requirements does CDECL have (which has a task that is at
least a magnitude simpler than A68G's), that requires those 20,000 extra
lines?
I did not look deeply, but cdecl is using automake and related
tools. IIUC you can have a small real source and depend on
autotools to provide tests. This is likely to bring tons of
irrelevant tests into configure. Or you can specify precisely
which tests are needed. In the second case you need to
write more code, but the generated configure is smaller.
My working hypothesis is that cdecl is a relatively simple program,
so the autotools defaults lead to a working build. And nobody was
motivated enough to select what is needed, so configure
contains a lot of code which is useful sometimes, but probably
not for cdecl.
BTW: In one "my" project there is hand-written configure.ac
which is select tests that are actually needed for the
project. Automake in _not_ used. Generated configure
has 8564 lines. But the project has rather complex
requirements and autotools defaults are unlikely to
work, so one really have to explicitly handle various
details.
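For illustration, a hand-written configure.ac that asks only for the
checks a project actually needs can be very short; a minimal sketch
(the project name and checks below are invented, not cdecl's or the
above project's):
    AC_INIT([myprog], [1.0])
    AC_PROG_CC
    AC_CHECK_HEADERS([unistd.h])
    AC_CHECK_FUNCS([getline])
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT
The configure that autoconf generates from something like this contains
only the machinery for those checks, typically a few thousand lines
rather than tens of thousands.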
But the crux of the matter, and I can't stress this enough as it never
seems to get through to you, is that fast enough is fast enough. No
one cares how long cdecl takes to build.
On 29/10/2025 22:21, bart wrote:
It was certainly an issue here: the 'make' part of building CDECL and
A68G, I considered slow for the scale of the task given that the apps
are 68 and 78Kloc (static total of .c and .h files).
I have no interest in A68G. I have no stake in cdecl or knowledge (or particular interest) in how it was written, and how appropriate the
number of lines of code are for the task in hand. I am confident it
could have been written in a different way with less code - but not at
all confident that doing so would be in any way better for the author of
the program. I am also confident that you know far too little about
what the program can do, or why it was written the way it was, to judge whether it has a "reasonable" number of lines of code, or not.
However, it's easy to look at the facts. The "src" directory from the github clone has about 50,000 lines of code in .c files, and 18,000
lines of code in .h files. The total is therefore about 68 kloc of source. This does not at all mean that compilation processes exactly 68 thousand lines of code - it will be significantly more than that as
headers are included by multiple files, and lots of other headers from
the C standard library and other libraries are included. Let's guess
100 kloc.
The build process takes 8 seconds on my decade-old machine, much of
which is something other than running the compiler. (Don't ask me what
it is doing - I did not write this software, design its build process,
or determine how the program is structured and how it is generated by
yacc or related tools. This is not my area of expertise.) If for some strange reason I choose to run "make" rather than "make -j", thus
wasting much of my computer's power, it takes 16 seconds. Some of these non-compilation steps do not appear to be able to run in parallel, and a couple of the compilations (like "parser.c", which appears to be from a parser generator rather than specifically written) are large and take a couple of seconds to compile. My guess is that the actual compilations
are perhaps 4 seconds. Overall, I make it 25 kloc per second. While I don't think that is a particularly relevant measure of anything useful,
it does show that either you are measuring the wrong thing, using a
wildly inappropriate or limited build environment, or are unaware of how
to use your computer to build code.
(And my computer cpu was about 30%
busy doing other productive tasks, such as playing a game, while I was
doing those builds.)
So, you are exaggerating, mismeasuring or misusing your system to get
build times that are well over an order of magnitude worse than
expected. This follows your well-established practice.
And you claim your own tools would be 1000 times faster.
 Maybe they
would be. Certainly there have been tools in the past that are much smaller and faster than modern tools, and were useful at the time.
Modern tools do so much more, however. A tool that doesn't do the job needed is of no use for a given task, even if it could handle other
tasks quickly.
But the crux of the matter, and I can't stress this enough as it never
seems to get through to you, is that fast enough is fast enough. No one cares how long cdecl takes to build.
Of course everyone agrees that smaller and faster is better, all things being equal - but all things are usually /not/ equal, and once something
is fast enough to be acceptable, making it faster is not a priority.
On 29/10/2025 22:10, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 29/10/2025 01:48, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
On 28/10/2025 21:59, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 28/10/2025 02:35, Janis Papanagnou wrote:[...]
On 27.10.2025 16:11, bart wrote:
If speed wasn't an issue then we'd all be using easy dynamic languages
Bart, is the above statement literally accurate?
Literally as in all 8.x billion individuals on the planet, including
infants and people in comas, would be using such languages?
This is what you seem to be suggesting that I mean, and here you're
both being overly pedantic. You could just agree with me you know!
'If X then we'd all be doing Y' is a common English idiom, suggesting
X was a no-brainer.
Do you believe that
we would ALL be using "easy dynamic languages" if speed were not an
issue, meaning that non-dynamic languages would die out completely?
Yes, I believe that if dynamic languages, however they are
implemented, could always deliver native code speeds, then a huge
number of people, and companies, would switch because of that and
other benefits.
On 29/10/2025 23:04, David Brown wrote:
On 29/10/2025 22:21, bart wrote:
It was certainly an issue here: the 'make' part of building CDECL and
A68G, I considered slow for the scale of the task given that the apps
are 68 and 78Kloc (static total of .c and .h files).
I have no interest in A68G. I have no stake in cdecl or knowledge (or
particular interest) in how it was written, and how appropriate the
number of lines of code are for the task in hand. I am confident it
could have been written in a different way with less code - but not at
all confident that doing so would be in any way better for the author of
the program. I am also confident that you know far too little about
what the program can do, or why it was written the way it was, to judge
whether it has a "reasonable" number of lines of code, or not.
However, it's easy to look at the facts. The "src" directory from the
github clone has about 50,000 lines of code in .c files, and 18,000
lines of code in .h files. The total is therefore about 68 kloc of
source. This does not at all mean that compilation processes exactly 68
thousand lines of code - it will be significantly more than that as
headers are included by multiple files, and lots of other headers from
the C standard library and other libraries are included. Let's guess
100 kloc.
Yes, that's why I said the 'static' line counts are 68 and 78K. Maybe
the slowdown is due to some large headers that lie outside the problem
(not the standard headers), but so what? (That would be a shortcoming of
the C language.)
The A68G sources also contain lots of upper-case content, so perhaps
macro expansion is going on too.
The bottom line is this is an 80Kloc app that takes that long to build.
The build process takes 8 seconds on my decade-old machine, much of
which is something other than running the compiler. (Don't ask me what
it is doing - I did not write this software, design its build process,
or determine how the program is structured and how it is generated by
yacc or related tools. This is not my area of expertise.) If for some
strange reason I choose to run "make" rather than "make -j", thus
wasting much of my computer's power, it takes 16 seconds. Some of these
non-compilation steps do not appear to be able to run in parallel, and a
couple of the compilations (like "parser.c", which appears to be from a
parser generator rather than specifically written) are large and take a
couple of seconds to compile. My guess is that the actual compilations
are perhaps 4 seconds. Overall, I make it 25 kloc per second. While I
don't think that is a particularly relevant measure of anything useful,
it does show that either you are measuring the wrong thing, using a
wildly inappropriate or limited build environment, or are unaware of how
to use your computer to build code.
Tell me then how I should do it to get single-figure build times for a
fresh build. But whatever it is, why doesn't it just do that anyway?!
(And my computer cpu was about 30%
busy doing other productive tasks, such as playing a game, while I was
doing those builds.)
So, you are exaggerating, mismeasuring or misusing your system to get
build times that are well over an order of magnitude worse than
expected. This follows your well-established practice.
So, what exactly did I do wrong here (for A68G):
root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
real 1m32.205s
user 0m40.813s
sys 0m7.269s
This 90 seconds is the actual time I had to hang about waiting. I'd be interested in how I managed to manipulate those figures!
BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:
root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output  <warnings>
real 0m49.512s
user 0m19.033s
sys 0m3.911s
On the RPi4 (usually 1/3 the speed of my PC), the make-time for A68G was
137 seconds (using SD storage; the PC uses SSD),
so perhaps 40 seconds
on the PC, suggesting that the underlying Windows file system may be
slowing things down, but I don't know.
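(For reference, 137 s at roughly a third of the PC's speed would scale
to about 45 s on the PC, against the 92 s actually measured under WSL.)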
You'd probably dismiss it as irrelevant, but lots of such improvements
build up. At least it is good that some people are looking at such aspects.
https://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster.html
Assuming that you have enough RAM you should try at least using
'make -j 3', that is, allow make to use up to 3 jobs. I wrote
'at least' because AFAIK even the cheapest PC CPUs of reasonable age
have at least 2 cores, so to fully utilize the machine you
need at least 2 jobs. 3 is better, because some jobs may wait
for I/O.
antispam@fricas.org (Waldek Hebisch) writes:
[...]
Assuming that you have enough RAM you should try at least using
'make -j 3', that is, allow make to use up to 3 jobs. I wrote
'at least' because AFAIK even the cheapest PC CPUs of reasonable age
have at least 2 cores, so to fully utilize the machine you
need at least 2 jobs. 3 is better, because some jobs may wait
for I/O.
I haven't been using make's "-j" option for most of my builds.
I'm going to start doing so now (updating my wrapper script).
I initially tried replacing "make" by "make -j", with no numeric
argument. The result was that my system nearly froze (the load
average went up to nearly 200). It even invoked the infamous OOM
killer. "make -j" tells make to use as many parallel processes
as possible.
"make -j $(nproc)" is much better. The "nproc" command reports the
number of available processing units. Experiments with a fairly
large build show that arguments to "-j" larger than $(nproc) do
not speed things up (on a fairly old machine with nproc=4). I had
speculated that "make -j 5" might be worthwhile if some processes
were I/O-bound, but that doesn't appear to be the case.
This applies to GNU make. There are other "make" implementations
which may or may not have a similar feature.
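For what it's worth, such a wrapper can be tiny; a hypothetical sketch
(assuming GNU make and coreutils' nproc, and an invented script name):
    #!/bin/sh
    # mk: run make with one job per available processing unit
    exec make -j "$(nproc)" "$@"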
[...]
Because existing solutions DIDN'T EXIST in a practical form (remember I worked with 8-bit computers), or they were hopelessly slow and
complicated on restricted hardware.
I don't need a linker, I don't need a makefile, I don't need lists of dependencies between modules, I don't need independent compilation, I
don't use object files.
The generated makefile for the 49-module CDECL project is 2000 lines of gobbledygook; that's not really selling it to me!
If *I* had a 49-module C project, the build info I'd supply you would basically be that list of files, plus the source files.
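For a project of that shape, the whole build recipe can indeed be a
single compiler invocation over the file list; a hypothetical sketch
(module names invented):
    cc -O2 -o app mod01.c mod02.c mod03.c    # ...and so on for all 49 modules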
$ grep 'model name' /proc/cpuinfo | uniq
model name : AMD Ryzen Threadripper 3970X 32-Core Processor
I haven't been using make's "-j" option for most of my builds.
I'm going to start doing so now (updating my wrapper script).
On 29/10/2025 22:10, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 29/10/2025 01:48, Keith Thompson wrote:[...]
bart <bc@freeuk.com> writes:
On 28/10/2025 21:59, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 28/10/2025 02:35, Janis Papanagnou wrote:[...]
On 27.10.2025 16:11, bart wrote:
If speed wasn't an issue then we'd all be using easy dynamic languages
Bart, is the above statement literally accurate?
Literally as in all 8.x billion individuals on the planet, including
infants and people in comas, would be using such languages?
This is what you seem to be suggesting that I mean, and here you're both being overly pedantic. You could just agree with me you know!
'If X then we'd all be doing Y' is a common English idiom, suggesting X
was a no-brainer.
 Do you believe that
we would ALL be using "easy dynamic languages" if speed were not an
issue, meaning that non-dynamic languages would die out completely?
Yes, I believe that if dynamic languages, however they are implemented, could always deliver native code speeds, then a huge number of people,
and companies, would switch because of that and other benefits.
Bear in mind that if that were the case, then new dynamic languages could emerge that help broaden their range of applications.
On 29/10/2025 23:04, David Brown wrote:
On 29/10/2025 22:21, bart wrote:
It was certainly an issue here: the 'make' part of building CDECL and
A68G, I considered slow for the scale of the task given that the apps
are 68 and 78Kloc (static total of .c and .h files).
I have no interest in A68G. I have no stake in cdecl or knowledge (or
particular interest) in how it was written, and how appropriate the
number of lines of code are for the task in hand. I am confident it
could have been written in a different way with less code - but not at
all confident that doing so would be in any way better for the author
of the program. I am also confident that you know far too little
about what the program can do, or why it was written the way it was,
to judge whether it has a "reasonable" number of lines of code, or not.
However, it's easy to look at the facts. The "src" directory from the
github clone has about 50,000 lines of code in .c files, and 18,000
lines of code in .h files. The total is therefore about 68 kloc of
source. This does not at all mean that compilation processes exactly
68 thousand lines of code - it will be significantly more than that as
headers are included by multiple files, and lots of other headers from
the C standard library and other libraries are included. Let's guess
100 kloc.
Yes, that's why I said the 'static' line counts are 68 and 78K. Maybe
the slowdown is due to some large headers that lie outside the problem
(not the standard headers), but so what? (That would be a shortcoming of
the C language.)
The A68G sources also contain lots of upper-case content, so perhaps
macro expansion is going on too.
The bottom line is that this is an 80Kloc app that takes that long to build.
The build process takes 8 seconds on my decade-old machine, much of
which is something other than running the compiler. (Don't ask me
what it is doing - I did not write this software, design its build
process, or determine how the program is structured and how it is
generated by yacc or related tools. This is not my area of
expertise.) If for some strange reason I choose to run "make" rather
than "make -j", thus wasting much of my computer's power, it takes 16
seconds. Some of these non-compilation steps do not appear to be able
to run in parallel, and a couple of the compilations (like "parser.c",
which appears to be from a parser generator rather than specifically
written) are large and take a couple of seconds to compile. My guess
is that the actual compilations are perhaps 4 seconds. Overall, I
make it 25 kloc per second. While I don't think that is a
particularly relevant measure of anything useful, it does show that
either you are measuring the wrong thing, using a wildly inappropriate
or limited build environment, or are unaware of how to use your
computer to build code.
Tell me then how I should do it to get single-figure build times for a
fresh build. But whatever it is, why doesn't it just do that anyway?!
(And my computer cpu was about 30% busy doing other productive tasks,
such as playing a game, while I was doing those builds.)
So, you are exaggerating, mismeasuring or misusing your system to get
build times that are well over an order of magnitude worse than
expected. This follows your well-established practice.
So, what exactly did I do wrong here (for A68G):
 root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
 real   1m32.205s
 user   0m40.813s
 sys    0m7.269s
This 90 seconds is the actual time I had to hang about waiting. I'd be interested in how I managed to manipulate those figures!
BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:
 root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
 <warnings>
 real   0m49.512s
 user   0m19.033s
 sys    0m3.911s
On the RPi4 (usually 1/3 the speed of my PC), the make-time for A68G was
137 seconds (using SD storage; the PC uses SSD), so perhaps 40 seconds
on the PC, suggesting that the underlying Windows file system may be
slowing things down, but I don't know.
However the same PC, under actual Windows, manages this:
 c:\qx>tim mm qq
 Compiling qq.m to qq.exe     (500KB but half is data; A68G is 1MB?)
 Time: 0.084
And this:
 c:\cx>tim tcc lua.c          (250-400KB)
 Time: 0.124
And you claim your own tools would be 1000 times faster.
In this case, yes. The figure is more typically around 100 if the other compiler is optimising; however, that comparison is between builds of the
same program. A68G is somewhat bigger than my product.
 Maybe they would be. Certainly there have been tools in the past
that are much smaller and faster than modern tools, and were useful at
the time. Modern tools do so much more, however. A tool that doesn't
do the job needed is of no use for a given task, even if it could
handle other tasks quickly.
It ran my test program; that's what counts!
But the crux of the matter, and I can't stress this enough as it never
seems to get through to you, is that fast enough is fast enough. No
one cares how long cdecl takes to build.
I don't care either; I just wanted to try it.
But I pick up things that nobody else seems to: this particular build
was unusually slow; why was that? Perhaps there's a bottleneck in the process that needs to be fixed, or a bug, that would give benefits when
it does matter.
(An article posted in Reddit detailed how a small change in how Clang
worked made a 5-7% difference in build times for large projects.
You'd probably dismiss it as irrelevant, but lots of such improvements
build up. At least it is good that some people are looking at such aspects.
https://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster.html)
Of course everyone agrees that smaller and faster is better, all
things being equal - but all things are usually /not/ equal, and once
something is fast enough to be acceptable, making it faster is not a
priority.
My compilers have already reached that threshold (most stuff builds in
the time it takes to take my finger off the Enter button). But most mainstream compilers are a LONG way off.
antispam@fricas.org (Waldek Hebisch) writes:
[...]
Assuming that you have enough RAM you should try at least using
'make -j 3', that is allow make to use up to 3 jobs. I wrote
at least, because AFAIK cheapest PC CPU-s of reasonable age
have at least 2 cores, so to fully utilize the machine you
need at least 2 jobs. 3 is better, because some jobs may wait
for I/O.
I haven't been using make's "-j" option for most of my builds.
I'm going to start doing so now (updating my wrapper script).
I initially tried replacing "make" by "make -j", with no numeric
argument. The result was that my system nearly froze (the load
average went up to nearly 200). It even invoked the infamous OOM
killer. "make -j" tells make to use as many parallel processes
as possible.
"make -j $(nproc)" is much better. The "nproc" command reports the
number of available processing units. Experiments with a fairly
large build show that arguments to "-j" larger than $(nproc) do
not speed things up (on a fairly old machine with nproc=4). I had
speculated that "make -j 5" might be worthwhile if some processes
were I/O-bound, but that doesn't appear to be the case.
This applies to GNU make. There are other "make" implementations
which may or may not have a similar feature.
On 30/10/2025 01:36, bart wrote:
Try "make -j" rather than "make" to build in parallel. That is not the default mode for make, because you don't lightly change the default behaviour of a program that millions use regularly and have used over
many decades. Some build setups (especially very old ones) are not designed to work well with parallel building, so having the "safe"
single task build as the default for make is a good idea.
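(As an aside, a project whose makefile is known to be parallel-safe can
opt in to parallel builds itself, so users get them without typing -j.
A sketch, assuming GNU make; I believe recent versions accept -j added
to MAKEFLAGS from within the makefile, but support has varied between
versions, so check the manual before relying on it:

    # Near the top of the Makefile: default to one job per CPU.
    # NPROC is just an illustrative variable name.
    NPROC := $(shell nproc 2>/dev/null || echo 2)
    MAKEFLAGS += -j$(NPROC)

An explicit -j given on the command line should still take precedence.)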
I would also, of course, recommend Linux for these things. Or get a
cheap second-hand machine and install Linux on that - you don't need anything fancy. As you enjoy comparative benchmarks, the ideal would be duplicate hardware with one system running Windows, the other Linux.
(Dual boot is a PITA, and I am not suggesting you mess up your normal
daily use system.)
Raspberry Pi's are great for lots of things, but they are not fast for building software - most models have too little memory to support all
the cores in big parallel builds, they can overheat when pushed too far,
and their "disks" are very slow. If you have a Pi 5 with lots of ram,
and use a tmpfs filesystem for the build, it can be a good deal faster.
(And my computer cpu was about 30% busy doing other productive tasks,
such as playing a game, while I was doing those builds.)
So, you are exaggerating, mismeasuring or misusing your system to get
build times that are well over an order of magnitude worse than
expected. This follows your well-established practice.
So, what exactly did I do wrong here (for A68G):
  root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
  real   1m32.205s
  user   0m40.813s
  sys    0m7.269s
This 90 seconds is the actual time I had to hang about waiting. I'd be
interested in how I managed to manipulate those figures!
Try "time make -j" as a simple step.
But I pick up things that nobody else seems to: this particular build
was unusually slow; why was that? Perhaps there's a bottleneck in the
process that needs to be fixed, or a bug, that would give benefits
when it does matter.
Do you think there is a reason why /you/ get fixated on these things,
and no one else in this group appears to be particularly bothered?
Usually when a person thinks that they are seeing something no one else sees, they are wrong.
And I fully understand that build times for large projects are
important, especially during development.
But I do not share your obsession that compile and build times are the critical factor or the defining feature for a compiler (or toolchain in general).
This is not a goal most compiler vendors have. When people are not particularly bothered about the speed of compilation for their files,
the speed is good enough - people are more interested in other things.
They are more interested in features like better checks, more helpful warnings or information, support for newer standards, better
optimisation, and so on.
Mainstream compiler vendors do care about speed - but not about the
speed of the little C programs you write and compile. They put a huge amount of effort into the speed for situations where it matters, such as
for building very large projects, or building big projects with advanced optimisations (like link-time optimisations across large numbers of
files and modules), or working with code that is inherently slow to
compile (like C++ code with complex templates or significant
compile-time computation).
bart <bc@freeuk.com> wrote:
Because existing solutions DIDN'T EXIST in a practical form (remember I
worked with 8-bit computers), or they were hopelessly slow and
complicated on restricted hardware.
I don't need a linker, I don't need a makefile, I don't need lists of
dependencies between modules, I don't need independent compilation, I
don't use object files.
The generated makefile for the 49-module CDECL project is 2000 lines of
gobbledygook; that's not really selling it to me!
If *I* had a 49-module C project, the build info I'd supply you would
basically be that list of files, plus the source files.
I sometime work with 8-bit microcontrollers. More frequently I work
with 32-bit microcontrollers of size comparable to 8-bit
microcontrollers. One target has 4 kB RAM (plus 16 kB flash for
storing programs). On such targets I care about program size.
I found it convenient during development to run programs from
RAM, so ideally program + data should fit in 4 kB. And frequently
it fits. I have separate modules. For example, usually before
doing anything else I need to configure the clock. Needed clock
speed depends on program. I could use a general clock setting
routine that can set "any" clock speed. But such routine would
be more complicated and consequently bigger than a more specialized
one. So I have a few versions so that each version sets a single
clock speed and is doing only what is necessary for this speed. Microcontrollers contain several built-in devices, they need
drivers. But it is almost impossible to use all devices and
given program usually uses only a few devices. So in programs
I just include what is needed.
My development process is a work in progress; there are some
things which I would like to improve. But I need to organize
things, for which I use files. There are compiler options,
paths to tools and libraries. In other words, there is
essential info outside C files. I use Makefile-s to record
this info. It is quite likely that in the future I will
have a tool to create specialized C code from higher level
information. In such a case my dependencies will get more
complex.
Modern microcontrollers are quite fast compared to their
typical tasks, so most of the time speed of code is not
critical. But I write interrupt handlers and typically
interrupt handler should be as fast as possible, so speed
matters here. And as I wrote size of compiled code is
important. So compiler that quickly generates slow and big
code is of limited use to me. Given that files are usually
rather small I find gcc speed reasonable (during development
I usually do not need to wait for compilation, it is fast
enough).
Certainly a better compiler is possible. But given the need to
generate reasonably good code for several different CPU-s
(there are a few major families and within a family there are
variations affecting generated code) this is a big task.
One could have a better language than C. But currently it
seems that I will be able to get the features that I want by
generating code. Of course, if you look at the whole toolchain
and development process this is much more complicated than a
specialized compiler for a specialized language. But creating a
whole environment with the features that I want is a big task.
By using gcc I reduce the amount of work that _I_ need to do.
I wrote several pieces of code that are available in existing
libraries (because I wanted to have a smaller specialized
version), so I probably do more work than a typical developer.
But life is finite so one needs to choose what is worth
(re)doing as opposed to reusing existing code.
BTW: Using the usual recipes frequently gives much bigger programs;
for example a program blinking an LED (the embedded equivalent of
"Hello world") may take 20-30 kB (with my approach it is
552 bytes, most of which is essentially forced by the MCU
architecture).
So, gcc and make _I_ find useful. For microcontroller
projects I currently do not need 'configure' and related
machinery, but do not exclude that in the future.
Note that while I am developing programs, my focus is on
providing a library and development process. That is, a
potential user is supposed to write code which should
integrate with code that I wrote. So I either need
some amalgamation at source level or linking. ATM linking
works better. So I need linking, in the sense that if
I were forbidden to use linking, I would have to develop
some replacement and that could be substantial work and
inconvenience, for example textual amalgamation would
increase build time from rather satisfactory now to a
probably noticeable delay.
I don't need a linker, I don't need a makefile, I don't need lists of dependencies between modules, I don't need independent compilation, I don't use object files.
bart <bc@freeuk.com> wrote:
On 29/10/2025 23:04, David Brown wrote:
On 29/10/2025 22:21, bart wrote:
BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:
root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
<warnings>
real 0m49.512s
user 0m19.033s
sys 0m3.911s
Those numbers indicate that there is something wrong with your
machine. The sum of the second and third lines above gives CPU time.
Real time is twice as large, so something is slowing things down.
One possible trouble is having too little RAM, in which case the OS is swapping
data to/from disc. Some programs do a lot of random I/O, which
can be slow on spinning disc, but SSD-s usually are much
faster at random I/O.
Assuming that you have enough RAM you should try at least using
'make -j 3', that is allow make to use up to 3 jobs. I wrote
at least, because AFAIK cheapest PC CPU-s of reasonable age
have at least 2 cores, so to fully utilize the machine you
need at least 2 jobs. 3 is better, because some jobs may wait
for I/O.
FYI, reasonably typical report for normal make (without -j
option) on my machine is:
real 0m4.981s
user 0m3.712s
sys 0m0.963s
antispam@fricas.org (Waldek Hebisch) writes:
bart <bc@freeuk.com> wrote:
On 29/10/2025 23:04, David Brown wrote:
On 29/10/2025 22:21, bart wrote:
BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:
root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
<warnings>
real 0m49.512s
user 0m19.033s
sys 0m3.911s
Those numbers indicate that there is something wrong with your
machine. The sum of the second and third lines above gives CPU time.
Real time is twice as large, so something is slowing things down.
One possible trouble is having too little RAM, in which case the OS is swapping
data to/from disc. Some programs do a lot of random I/O, which
can be slow on spinning disc, but SSD-s usually are much
faster at random I/O.
Assuming that you have enough RAM you should try at least using
'make -j 3', that is allow make to use up to 3 jobs. I wrote
at least, because AFAIK cheapest PC CPU-s of reasonable age
have at least 2 cores, so to fully utilize the machine you
need at least 2 jobs. 3 is better, because some jobs may wait
for I/O.
FYI, reasonably typical report for normal make (without -j
option) on my machine is:
real 0m4.981s
user 0m3.712s
sys 0m0.963s
Just for grins, here's a report for a full rebuild of a real-world project that I build regularly. Granted most builds are partial (e.g. one or
two source files touched) and take far less time (15 seconds or so,
most of which is make calling stat(2) on a few hundred source files
on an NFS filesystem). Close to three million SLOC, mostly in header
files. C++.
$ time make -s -j96
real 9m10.38s
user 3h50m15.59s
sys 9m58.20s
I'd challenge Bart to match that with a similarly sized project using
his compiler and toolset, but I seriously doubt that this project could
be effectively implemented using his personal language and toolset.
On 30/10/2025 04:24, Keith Thompson wrote:
I haven't been using make's "-j" option for most of my builds.
I'm going to start doing so now (updating my wrapper script).
Well, let's see, on approximately 10,000 lines of code:
$ make clean
$time make
real 0m2.391s
user 0m2.076s
sys 0m0.286s
$ make clean
$time make -j $(nproc)
real 0m0.041s
user 0m0.021s
sys 0m0.029s
That's a reduction in wall clock time of 4 minutes per MLOC to 4
*seconds* per MLOC. I can't deny I'm impressed.
On 30/10/2025 10:15, David Brown wrote:
On 30/10/2025 01:36, bart wrote:
Try "make -j" rather than "make" to build in parallel. That is not
the default mode for make, because you don't lightly change the
default behaviour of a program that millions use regularly and have
used over many decades. Some build setups (especially very old ones)
are not designed to work well with parallel building, so having the
"safe" single task build as the default for make is a good idea.
I would also, of course, recommend Linux for these things. Or get a
cheap second-hand machine and install Linux on that - you don't need
anything fancy. As you enjoy comparative benchmarks, the ideal would
be duplicate hardware with one system running Windows, the other
Linux. (Dual boot is a PITA, and I am not suggesting you mess up your
normal daily use system.)
Raspberry Pi's are great for lots of things, but they are not fast for
building software - most models have too little memory to support all
the cores in big parallel builds, they can overheat when pushed too
far, and their "disks" are very slow. If you have a Pi 5 with lots of
ram, and use a tmpfs filesystem for the build, it can be a good deal
faster.
(And my computer cpu was about 30% busy doing other productive
tasks, such as playing a game, while I was doing those builds.)
So, you are exaggerating, mismeasuring or misusing your system to
get build times that are well over an order of magnitude worse than
expected. This follows your well-established practice.
So, what exactly did I do wrong here (for A68G):
  root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
  real   1m32.205s
  user   0m40.813s
  sys    0m7.269s
This 90 seconds is the actual time I had to hang about waiting. I'd
be interested in how I managed to manipulate those figures!
Try "time make -j" as a simple step.
OK, "make -j" gave a real time of 30s, about three times faster. (Not
quite sure how that works, given that my machine has only two cores.)
However, I don't view "-j", and parallelisation, as a solution to slow compilation. It is just a workaround, something you do when you've
exhausted other possibilities.
You have to get raw compilation fast enough first.
Suppose I had the task of transporting N people from A to B in my car,
but I can only take four at a time and have to get them there by a
certain time.
One way of helping out is to use "-j": get multiple drivers with their
own cars to transport them in parallel.
Imagine however that my car and all those others can only go at walking pace: 3mph instead of 30mph. Then sure, you can recruit enough
volunteers to get the task done in the necessary time (putting aside the practical details).
But can you see a fundamental problem that really ought to be fixed
first?
But I pick up things that nobody else seems to: this particular build
was unusually slow; why was that? Perhaps there's a bottleneck in the
process that needs to be fixed, or a bug, that would give benefits
when it does matter.
Do you think there is a reason why /you/ get fixated on these things,
and no one else in this group appears to be particularly bothered?
Usually when a person thinks that they are seeing something no one
else sees, they are wrong.
Quite a few people have suggested that there is something amiss about my 1:32 and 0:49 timings. One has even said there is something wrong with
my machine.
You have even suggested I have manipulated the figures!
So was I right in sensing something was off, or not?
And I fully understand that build times for large projects are
important, especially during development.
But I do not share your obsession that compile and build times are the
critical factor or the defining feature for a compiler (or toolchain
in general).
I find fast compile-times useful for several reasons:
*I develop whole-program compilers* This means all sources have to be compiled at the same time, as there is no independent compilation at the module level.
The advantage is that I don't need the complexity of makefiles to help decide which dependent modules need recompiling.
*It can allow programs to be run directly from source* This is something that is being explored via complex JIT approaches. But my AOT compiler
is fast enough that that is not necessary
*It also allow programs to be interpreted* This is like run from source,
but the compilation is faster as it can stop at the IL. (Eg. sqlite3 compiles in 150ms instead of 250ms.)
*It can allow whole-program optimisation* This is not something I take advantage of much yet. But it allows a simpler approach than either LTO,
or somehow figuring out how to create a one-file amalgamation.
So it enables interesting new approaches. Imagine if you download the
CDECL bundle and then just run it without needing to configure anything,
or having to do 'make', or 'make -j'.
Forget ./configure, forget make. Of course you can do the same thing,
maybe there is 'make -run', the difference is that the above is instant.
This is not a goal most compiler vendors have. When people are not
particularly bothered about the speed of compilation for their files,
the speed is good enough - people are more interested in other things.
They are more interested in features like better checks, more helpful
warnings or information, support for newer standards, better
optimisation, and so on.
See the post from Richard Heathfield where he is pleasantly surprised
that he can get a 60x speedup in build-time.
People like fast tools!
On 30/10/2025 14:13, Scott Lurndal wrote:
antispam@fricas.org (Waldek Hebisch) writes:
bart <bc@freeuk.com> wrote:
On 29/10/2025 23:04, David Brown wrote:
On 29/10/2025 22:21, bart wrote:
BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:
root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
<warnings>
real 0m49.512s
user 0m19.033s
sys 0m3.911s
Those numbers indicate that there is something wrong with your
machine. The sum of the second and third lines above gives CPU time.
Real time is twice as large, so something is slowing things down.
One possible trouble is having too little RAM, in which case the OS is swapping
data to/from disc. Some programs do a lot of random I/O, which
can be slow on spinning disc, but SSD-s usually are much
faster at random I/O.
Assuming that you have enough RAM you should try at least using
'make -j 3', that is allow make to use up to 3 jobs. I wrote
at least, because AFAIK cheapest PC CPU-s of reasonable age
have at least 2 cores, so to fully utilize the machine you
need at least 2 jobs. 3 is better, because some jobs may wait
for I/O.
FYI, reasonably typical report for normal make (without -j
option) on my machine is:
real 0m4.981s
user 0m3.712s
sys 0m0.963s
Just for grins, here's a report for a full rebuild of a real-world project that I build regularly. Granted most builds are partial (e.g. one or
two source files touched) and take far less time (15 seconds or so,
most of which is make calling stat(2) on a few hundred source files
on an NFS filesystem). Close to three million SLOC, mostly in header
files. C++.
What is the total size of the produced binaries?
That will give me an idea of
the true LoC for the project.
How many source files (can include headers) does it involve? How many binaries does it actually produce?
$ time make -s -j96
real 9m10.38s
user 3h50m15.59s
sys 9m58.20s
I'd challenge Bart to match that with a similarly sized project using
his compiler and toolset, but I seriously doubt that this project could
be effectively implemented using his personal language and toolset.
If what you are asking is how my toolset can cope with a project on this scale, then I can have a go at emulating it, given the information above.
I can tell you that over 4 hours, and working at generating 3-5MB per second, my compiler could produce 40-70GB of binary code in that time,
$time make -j $(nproc)
On 30/10/2025 13:07, bart wrote:
OK, "make -j" gave a real time of 30s, about three times faster.
(Not quite sure how that works, given that my machine has only two
cores.)
You presumably understand how multi-tasking works when there are more processes than there are cores to run them. Sometimes you have more processes ready to run, in which case some have to wait. But
sometimes processes are already waiting for something else (typically
disk I/O here, but it could be networking or other things). So while
one compile task is waiting for the disk, another one can be running.
It's not common for the speedup from "make -j" or "make -j N" for
some number N to be greater than the number of cores, but it can
happen for small numbers of cores and slow disk.
In article <10dv52b$3gq3j$1@dont-email.me>,
Richard Heathfield <rjh@cpax.org.uk> wrote:
$time make -j $(nproc)
Eww. How does make distinguish between j with an argument and
j with no argument and a target?
bart <bc@freeuk.com> writes:
On 30/10/2025 14:13, Scott Lurndal wrote:
antispam@fricas.org (Waldek Hebisch) writes:
bart <bc@freeuk.com> wrote:
On 29/10/2025 23:04, David Brown wrote:
On 29/10/2025 22:21, bart wrote:
BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:
root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
<warnings>
real 0m49.512s
user 0m19.033s
sys 0m3.911s
Those numbers indicate that there is something wrong with your
machine. The sum of the second and third lines above gives CPU time.
Real time is twice as large, so something is slowing things down.
One possible trouble is having too little RAM, in which case the OS is swapping
data to/from disc. Some programs do a lot of random I/O, which
can be slow on spinning disc, but SSD-s usually are much
faster at random I/O.
Assuming that you have enough RAM you should try at least using
'make -j 3', that is allow make to use up to 3 jobs. I wrote
at least, because AFAIK cheapest PC CPU-s of reasonable age
have at least 2 cores, so to fully utilize the machine you
need at least 2 jobs. 3 is better, because some jobs may wait
for I/O.
FYI, reasonably typical report for normal make (without -j
option) on my machine is:
real 0m4.981s
user 0m3.712s
sys 0m0.963s
Just for grins, here's a report for a full rebuild of a real-world project that I build regularly. Granted most builds are partial (e.g. one or
two source files touched) and take far less time (15 seconds or so,
most of which is make calling stat(2) on a few hundred source files
on an NFS filesystem). Close to three million SLOC, mostly in header
files. C++.
What is the total size of the produced binaries?
There are 181 shared objects (DLL in windows speak) and
six binaries produced by the build. The binaries are all quite small since they dynamically link at runtime with the necessary
shared objects, the set of which can vary from run-to-run.
The largest shared object is 7.5MB.
text data bss dec hex filename
6902921 109640 1861744 8874305 876941 lib/libXXX.so
That will give me an idea of
the true LoC for the project.
There is really no relationship between SLoC and binary size.
There are about 16 million SLOC (it's been a while since I
last ran sloccount against this codebase).
$ sloccount .
Totals grouped by language (dominant language first):
ansic: 11905053 (72.22%)
python: 2506984 (15.21%)
cpp: 1922112 (11.66%)
tcl: 87725 (0.53%)
asm: 42745 (0.26%)
sh: 14333 (0.09%)
Total Physical Source Lines of Code (SLOC) = 16,484,351
Development Effort Estimate, Person-Years (Person-Months) = 5,357.42 (64,289.00)
(Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
Schedule Estimate, Years (Months) = 13.99 (167.89)
(Basic COCOMO model, Months = 2.5 * (person-months**0.38))
Estimated Average Number of Developers (Effort/Schedule) = 382.92
Total Estimated Cost to Develop = $ 723,714,160
(average salary = $56,286/year, overhead = 2.40).
The bulk of the ANSI C code are header files generated from
YAML, likewise most of the python code (used for unit testing).
The primary functionality is in the C++ (cpp) code.
The application is highly multithreaded (circa 100 threads in
an average run).
How many source files (can include headers) does it involve? How many
binaries does it actually produce?
$ time make -s -j96
real 9m10.38s
user 3h50m15.59s
sys 9m58.20s
I'd challenge Bart to match that with a similarly sized project using
his compiler and toolset, but I seriously doubt that this project could
be effectively implemented using his personal language and toolset.
If what you are asking is how my toolset can cope with a project on this
scale, then I can have a go at emulating it, given the information above.
I can tell you that over 4 hours, and working at generating 3-5MB per
second, my compiler could produce 40-70GB of binary code in that time,
That's a completely irrelevant metric.
On 30/10/2025 13:07, bart wrote:
You moan that compiles are too slow. Yet doing them in parallel is a "workaround". Avoiding compiling unnecessarily is a "workaround".
Caching compilation work is a "workaround". Using a computer from this century is a "workaround". Using a decent OS is a "workaround". Is /everything/ that would reduce your scope for complaining loudly to the
wrong people a workaround?
Of course this kind of thing does not change the fundamental speed of
the compiler, but it is very much a solution to problems, frustration or issues that people might have from compilers being slower than they
might want. "make -j" does not make the compiler faster, but it does
mean that the speed of the compiler is less of an issue.
You have to get raw compilation fast enough first.
Why? And - again - the "raw" compilation of gcc on C code, for my
usage, is already more than fast enough for my needs.
 If it were
faster, I would still use make. If it ran at 1 MLOC per second, I'd
still use make, and I'd still structure my code the same way, and I'd
still run on Linux.
No, I contend that big compilers do seem to go at 3mph, or worse.
But can you see a fundamental problem that really ought to be fixed
first?
Sure - if that were realistic. But a more accurate model is that the
cars go at 30 mph
People use make for many reasons - incremental building and dependency management is just one (albeit important) aspect. You mentioned in
another post that "Python does not need make" - I have Python projects
that are organised by makefiles.
the time and effort you have spend complaining in c.l.c. about "make"
and instead learned about it, you'd be writing makefiles in your sleep.
It really is not that hard, and you will never convince me you are not
smart enough to understand it quickly and easily.
*It can allow programs to be run directly from source* This is
something that is being explored via complex JIT approaches. But my
AOT compiler is fast enough that that is not necessary
I don't see why that is at all important for C programming. Why would someone want to use C for scripting? If I had a C file "test.c" that
was short enough to be realistic for use as a script, and did not care
about optimisation or static checking, I could just type "make test
&& ./test" to run it pretty much instantly.
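(For anyone who hasn't seen it: that works with no makefile at all,
because make has a built-in rule that builds an executable from a .c
file of the same name. Roughly - the file name is invented, the program
is assumed to print the usual greeting, and the exact command echoed
depends on your CC/CFLAGS settings:

    $ make hello
    cc     hello.c   -o hello
    $ ./hello
    hello, world

So for throwaway single-file programs, "make foo && ./foo" is about as
close to "running C from source" as most people need.)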
Almost everyone who uses cdecl does that already. Enthusiasts living on the cutting edge need to spend a couple of minutes downloading and
building the latest versions, but other people will use pre-built binaries. And those people are already very familiar with the "./configure && make -j 8 && sudo make install" sequence.
Forget ./configure, forget make. Of course you can do the same thing,
maybe there is 'make -run', the difference is that the above is instant.
To be clear - I do think autotools is usually unnecessary, overly
complex, slow, and long outdated.
There were no details in that post - I suspect it was not /entirely/ serious.
Eww. How does make distinguish between j with an argument and
j with no argument and a target?
$ man 3 getopt
Standard unix semantics since, well, forever. 'j' with
no argument is an error.
In article <jhNMQ.1338175$Jgh9.1030888@fx15.iad>,
Scott Lurndal <slp53@pacbell.net> wrote:
Eww. How does make distinguish between j with an argument and
j with no argument and a target?
$ man 3 getopt
Standard unix semantics since, well, forever. 'j' with
no argument is an error.
The upstream articles refer to Gnu make, which evidently does not
conform to that.
On 30/10/2025 15:04, David Brown wrote:
On 30/10/2025 13:07, bart wrote:
You moan that compiles are too slow. Yet doing them in parallel is a
"workaround". Avoiding compiling unnecessarily is a "workaround".
Caching compilation work is a "workaround". Using a computer from this
century is a "workaround". Using a decent OS is a "workaround". Is /
everything/ that would reduce your scope for complaining loudly to the
wrong people a workaround?
Yes, they are all workarounds to cope with unreasonably slow compilers.
They in fact all come across as excuses for your favorite compiler being slow.
Which one of these methods would you use to advertise the LPS throughput
of a compiler that you develop?
In article <10dv52b$3gq3j$1@dont-email.me>,
Richard Heathfield <rjh@cpax.org.uk> wrote:
$time make -j $(nproc)
Eww. How does make distinguish between j with an argument and
j with no argument and a target?
$ make -j a
make: *** No rule to make target 'a'. Stop.
$ make -j 3
make: *** No targets specified and no makefile found. Stop.
$ make 3
cc 3.c -o 3
That's a really bad idea.
Try "time make -j" as a simple step.[...]
On 30/10/2025 15:04, David Brown wrote:
On 30/10/2025 13:07, bart wrote:
You moan that compiles are too slow. Yet doing them in parallel is a
"workaround". Avoiding compiling unnecessarily is a "workaround".
Caching compilation work is a "workaround". Using a computer from
this century is a "workaround". Using a decent OS is a "workaround".
Is / everything/ that would reduce your scope for complaining loudly
to the wrong people a workaround?
Yes, they are all workarounds to cope with unreasonably slow compilers.
They in fact all come across as excuses for your favorite compiler being slow.
Which one of these methods would you use to advertise the LPS throughput
of a compiler that you develop?
Of course this kind of thing does not change the fundamental speed of
the compiler, but it is very much a solution to problems, frustration
or issues that people might have from compilers being slower than they
might want. "make -j" does not make the compiler faster, but it does
mean that the speed of the compiler is less of an issue.
You have to get raw compilation fast enough first.
Why? And - again - the "raw" compilation of gcc on C code, for my
usage, is already more than fast enough for my needs.
Not for mine, sorry.
 If it were faster, I would still use make. If it ran at 1 MLOC per
second, I'd still use make, and I'd still structure my code the same
way, and I'd still run on Linux.
If it ran at 1Mlps, then half of make would be pointless.
However, with C, it would run into other problems, like heavy include
files, which would normally be repeatedly processed per-module. (This is something my language solves, but I also suggested, elsewhere in the
thread, a way it could be mitigated in C.)
But can you see a fundamental problem that really ought to be fixed
first?
Sure - if that were realistic. But a more accurate model is that the
cars go at 30 mph
No, I contend that big compilers do seem to go at 3mph, or worse.
We can argue about how much extra work your compilers do compared with mine, so
let's look at a slightly different tool: assemblers.
Assembly is a straightforward task: there is no deep analysis, no optimisation, so it should be very quick, yes? Well, have a look at this
survey I did a couple of years ago:
https://www.reddit.com/r/Compilers/comments/1c41y6d/assembler_survey/
There is quite a range of speeds! So what are those slow products up to, that they take so long?
People use make for many reasons - incremental building and dependency
management is just one (albeit important) aspect. You mentioned in
another post that "Python does not need make" - I have Python projects
that are organised by makefiles.
Makefiles sound to me like your 'hammer' then.
 And honestly, if you had taken 1% of
the time and effort you have spend complaining in c.l.c. about "make"
and instead learned about it, you'd be writing makefiles in your
sleep. It really is not that hard, and you will never convince me you
are not smart enough to understand it quickly and easily.
I simply don't like them; sorry. Everything they might do is taken care
of by language design, or by my compiler, or by scripting in a proper scripting language.
And they are ugly.
David Brown <david.brown@hesbynett.no> writes:
[...]
Try "time make -j" as a simple step.[...]
In my recent testing, "make -j" without a numeric argument (which
tells make to run as many parallel steps as possible) caused my
system to bog down badly. This was on a fairly large project (I used
vim); it might not be as much of a problem with a smaller project.
I've found that "make -j $(nproc)" is safer. The "nproc" command
is likely to be available on any system that has a "make" command.
It occurs to me that "make -j N" can fail if the Makefile does
not correctly reflect all the dependencies. I suspect this is
less likely to be a problem if the Makefile is generated rather
than hand-written.
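(A contrived sketch of that failure mode, with invented names; recipe
lines are tab-indented. A serial make happens to build the generated
header first because prerequisites are handled left to right, but with
-j the two rules can run at the same time and the compile of main.c may
not find config.h:

    prog: config.h main.o
            cc -o prog main.o

    main.o: main.c            # bug: should also depend on config.h
            cc -c main.c

    config.h:
            ./genconfig > config.h

Adding config.h to main.o's prerequisites makes it safe for any -j
value.)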
On 30/10/2025 13:07, bart wrote:
Quite a few people have suggested that there is something amiss about my
1:32 and 0:49 timings. One has even said there is something wrong with
my machine.
Maybe there /is/ something wrong with your machine or setup. If you
have a 2 core machine, it is presumably a low-end budget machine from perhaps 15 years ago. I'm all in favour of keeping working systems and
I strongly disapprove of some people's two or three year cycles for
swapping out computers, but there is a balance somewhere. With such an
old system, I presume you also have old Windows (my office Windows
machine is Windows 7), and thus the old and very slow style of WSL.
That, I think, could explain the oddities in your timings.
On 2025-10-30, bart <bc@freeuk.com> wrote:
On 30/10/2025 15:04, David Brown wrote:
On 30/10/2025 13:07, bart wrote:
You moan that compiles are too slow. Yet doing them in parallel is a
"workaround". Avoiding compiling unnecessarily is a "workaround".
Caching compilation work is a "workaround". Using a computer from this century is a "workaround". Using a decent OS is a "workaround". Is /everything/ that would reduce your scope for complaining loudly to the
wrong people a workaround?
Yes, they are all workarounds to cope with unreasonably slow compilers.
The idea of incremental rebuilding goes back to a time when compilers
were fast, but machines were slow.
If you had /those/ exact compilers today, and used them for even a pretty large project, you could likely do a full rebuild every time.
But incremental building didn't go away because we already had it,
and we took that into account when maintaining compilers.
Basically, decades ago, we accepted the idea that it can take several
seconds to compile the average file, and that we have incremental
building to help with that.
And so, unsurprisingly, as machines got several orders of magnitude
faster, we have made compilers do more and become more bloated,
so that it can still take seconds to do one file, and you use make to
avoid doing it.
A lot of it is the optimization. Disable optimization and GCC is
something like 15X faster.
Optimization exhibits diminishing returns. It takes more and more
work for less and less gain. It's really easy to make optimization
take 10X longer for a fraction of a percent increase in speed.
Yet, it tends to be done because of the reasoning that the program is compiled once, and then millions of instances of the program are run
all over the world.
One problem in optimization is that it is expensive to look for the conditions that enable a certain optimization. It is more expensive
than doing the optimization, because the optimization is often
a conceptually simple code transformation that can be done quickly,
when the conditions are identified. But the compiler has to look for those conditions everywhere, in every segment of code, every basic block.
But it may turn out that there is a "hit" for those conditions in
something like one file out of every hundred, or even more rarely.
When there is no "hit" for the optimization's conditions, then it
doesn't take place, and all that time spent looking for it is just
making the compiler slower.
The problem is that to get the best possible optimization, you have to
look for numerous such rare conditions. When one of them doesn't "hit",
one of the others might. The costs of these add up. Over time,
compiler developers tend to add optimizations much more than remove them.
They in fact all come across as excuses for your favorite compiler being
slow.
Well, yes. Since we've had incremental rebuilding since the time VLSI machines were measured in single-digit MHz, we've taken it for granted
that it will be used, and so, to reiterate, that excuses the idea of
a compiler taking several seconds to do one file.
Which one of these methods would you use to advertise the LPS throughput
of a compiler that you develop?
It would be a lie to measure lines per second on anything but
a single-core, complete rebuild of the benchmark program.
High LPS compilers are somehow not winning in the programming
marketplace, or at least some segments.
That field is open!
Once upon a time it seemed that GCC would remain unchallenged. Then
Clang came along: but it too got huge, fat and slow within a bunch of
years. This is mainly due to trying to have good optimizations.
You will never get a C compiler that has very high LPS throughput, but doesn't optimize as well as the "leading brand", to make inroads into
the ecosystem dominated by the "leading brand".
On 30/10/2025 18:59, Kaz Kylheku wrote:
On 2025-10-30, bart <bc@freeuk.com> wrote:
On 30/10/2025 15:04, David Brown wrote:
On 30/10/2025 13:07, bart wrote:
You moan that compiles are too slow. Yet doing them in parallel is a "workaround". Avoiding compiling unnecessarily is a "workaround".
Caching compilation work is a "workaround". Using a computer from this century is a "workaround". Using a decent OS is a "workaround". Is /everything/ that would reduce your scope for complaining loudly to the wrong people a workaround?
Yes, they are all workarounds to cope with unreasonably slow compilers.
The idea of incremental rebuilding goes back to a time when compilers
were fast, but machines were slow.
What do you mean by incremental rebuilding? I usually talk about /independent/ compilation.
Then incremental builds might be about deciding which modules to
recompile, except that that is so obvious, you didn't give it a name.
Compile the one file you've just edited. If it might impact on any
others (you work on a project for months, you will know it
intimately), then you just compile the lot.
On 30/10/2025 13:07, bart wrote:
Maybe there /is/ something wrong with your machine or setup. If you
have a 2 core machine, it is presumably a low-end budget machine from perhaps 15 years ago. I'm all in favour of keeping working systems and
I strongly disapprove of some people's two or three year cycles for
swapping out computers, but there is a balance somewhere. With such an
old system, I presume you also have old Windows (my office Windows
machine is Windows 7), and thus the old and very slow style of WSL.
That, I think, could explain the oddities in your timings.
You have even suggested I have manipulated the figures!
No, I did not. I have at various times suggested that you cherry-pick, that you might have poor methodology and that you sometimes benchmark in
an unrealistic way in order to give yourself a bigger windmill for your tilting.
So, you are exaggerating, mismeasuring or misusing your system to get build times that are well over an order of magnitude worse than expected.
So was I right in sensing something was off, or not?
You were wrong in thinking something was off about cdecl or its build.
And it should not be news to you that there is something very suboptimal about your computer environment, as this is not exactly the first time
it has been discussed.
bart <bc@freeuk.com> writes:
On 30/10/2025 18:59, Kaz Kylheku wrote:
On 2025-10-30, bart <bc@freeuk.com> wrote:
On 30/10/2025 15:04, David Brown wrote:
On 30/10/2025 13:07, bart wrote:
You moan that compiles are too slow. Yet doing them in parallel is a "workaround". Avoiding compiling unnecessarily is a "workaround".
Caching compilation work is a "workaround". Using a computer from this century is a "workaround". Using a decent OS is a "workaround". Is /everything/ that would reduce your scope for complaining loudly to the wrong people a workaround?
Yes, they are all workarounds to cope with unreasonably slow compilers.
The idea of incremental rebuilding goes back to a time when compilers
were fast, but machines were slow.
What do you mean by incremental rebuilding? I usually talk about
/independent/ compilation.
Then incremental builds might be about deciding which modules to
recompile, except that that is so obvious, you didn't give it a name.
Compile the one file you've just edited. If it might impact on any
others (you work on a project for months, you will know it
intimately), then you just compile the lot.
I'll assume that was a serious question. Even if you don't care,
others might.
Let's say I'm working on a project that has a bunch of *.c and
*.h files.
If I modify just foo.c, then type "make", it will (if everything
is set up correctly) recompile "foo.c" generating "foo.o", and
then run a link step to recreate any executable that depends on
"foo.o". It knows it doesn't have to recompile "bar.c" because
"bar.o" sill exists and is newer than "bar.c".
Perhaps the project provides several executable programs, and
only two of them rely on foo.o. Then it can relink just those
two executables.
This is likely to give you working executables substantially
faster than if you did a full rebuild. It's more useful while
you're developing and updating a project than when you download
the source and build it once.
(I often tend to do full rebuilds anyway, for vague reasons I won't
get into.)
This depends on all relevant dependencies being reflected in the
Makefile, and on file timestamps being updated correctly when files
are edited. (In the distant past, I've run into problems with the
latter when the files are on an NFS server and the server and client
have their clocks set differently.)
(I'll just go ahead and acknowledge, so you don't have to, that
this might not be necessary if the build tools are infinitely fast.)
If I've done a "make clean" or "git clean", or started from scratch
by cloning a git repo or unpacking a .tar.gz file, then any generated
files will not be present, and typing "make" will have to rebuild
everything.
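(A minimal makefile matching that description might look like this,
with invented names; recipe lines are tab-indented:

    all: prog1 prog2 prog3

    prog1: foo.o bar.o
            cc -o prog1 foo.o bar.o
    prog2: foo.o
            cc -o prog2 foo.o
    prog3: bar.o
            cc -o prog3 bar.o

    foo.o: foo.c common.h
            cc -c foo.c
    bar.o: bar.c common.h
            cc -c bar.c

Touch foo.c and "make" recompiles only foo.o, then relinks prog1 and
prog2; bar.o and prog3 are left alone.)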
[...]
antispam@fricas.org (Waldek Hebisch) writes:
[...]
Assuming that you have enough RAM you should try at least using
'make -j 3', that is allow make to use up to 3 jobs. I wrote
at least, because AFAIK cheapest PC CPU-s of reasonable age
have at least 2 cores, so to fully utilize the machine you
need at least 2 jobs. 3 is better, because some jobs may wait
for I/O.
I haven't been using make's "-j" option for most of my builds.
I'm going to start doing so now (updating my wrapper script).
I initially tried replacing "make" by "make -j", with no numeric
argument. The result was that my system nearly froze (the load
average went up to nearly 200). It even invoked the infamous OOM
killer. "make -j" tells make to use as many parallel processes
as possible.
"make -j $(nproc)" is much better. The "nproc" command reports the
number of available processing units. Experiments with a fairly
large build show that arguments to "-j" larger than $(nproc) do
not speed things up (on a fairly old machine with nproc=4). I had
speculated that "make -j 5" might be worthwhile if some processes
were I/O-bound, but that doesn't appear to be the case.
If I were developing a compiler, I would not advertise any kind of lines-per-second value. It is a totally useless metric - as useless as measuring developer performance on the lines of code he/she writes per day.
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
antispam@fricas.org (Waldek Hebisch) writes:
[...]
Assuming that you have enough RAM you should try at least using
'make -j 3', that is allow make to use up to 3 jobs. I wrote
at least, because AFAIK cheapest PC CPU-s of reasonable age
have at least 2 cores, so to fully utilize the machine you
need at least 2 jobs. 3 is better, because some jobs may wait
for I/O.
I haven't been using make's "-j" option for most of my builds.
I'm going to start doing so now (updating my wrapper script).
I initially tried replacing "make" by "make -j", with no numeric
argument. The result was that my system nearly froze (the load
average went up to nearly 200). It even invoked the infamous OOM
killer. "make -j" tells make to use as many parallel processes
as possible.
"make -j $(nproc)" is much better. The "nproc" command reports the
number of available processing units. Experiments with a fairly
large build show that arguments to "-j" larger than $(nproc) do
not speed things up (on a fairly old machine with nproc=4). I had
speculated that "make -j 5" might be worthwhile if some processes
were I/O-bound, but that doesn't appear to be the case.
I frequently build my project on a few different machines. My
machines typically are generously (compared to compiler need)
equipped with RAM. Measuring several builds, '-j 3' gave me the
fastest build on a 2-core machine (no hyperthreading), and '-j 7'
gave me the fastest build on an old 4-core machine with hyperthreading
(so 'nproc' reported 8 cores). In general, increasing number
of jobs I see increasing total CPU time, but real time may go
down because more jobs can use time where CPU(s) would be
otherwise idle. At some number of jobs I get best real time
and with larger number of jobs overheads due to multiple jobs
seem to dominate leading to increase in real time. If number
of jobs is too high I get slowdown due to lack of real memory.
On a 12-core machine (24 logical cores) I use '-j 20'. Increasing
the number of jobs gives a slightly faster build, but the difference is
small, so I prefer to have more cores available for interactive
use.
Of course, that is balancing tradeoffs, your builds may have
different characteristics than mine. I just wanted to say
that _sometimes_ going beyond number of cores is useful.
IIUC what Bart wrote, he got a 3-times speedup using '-j 3'
on a two-core machine, which is an unusually good speedup. IME
normally 3 jobs on a 2-core machine is neutral or gives a small
speedup. OTOH with hyperthreading, activating a logical core
may slow down its twin. Consequently using fewer jobs than
logical cores may be better.
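(If anyone wants to find the sweet spot on their own machine, a crude
sketch; it assumes the tree rebuilds cleanly, that a few full rebuilds
are acceptable, and a shell such as bash whose 'time' keyword works in
a loop:

    for j in 1 2 3 4 6 8; do
        make clean >/dev/null
        echo "== -j $j =="
        time make -j "$j" >/dev/null
    done

Pick the smallest job count after which the real time stops improving.)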
On 30/10/2025 23:44, Keith Thompson wrote:
[...]
bart <bc@freeuk.com> writes:
[...]
What do you mean by incremental rebuilding? I usually talk about
/independent/ compilation.
Then incremental builds might be about deciding which modules to
recompile, except that that is so obvious, you didn't give it a name.
Compile the one file you've just edited. If it might impact on any
others (you work on a project for months, you will know it
intimately), then you just compile the lot.
I'll assume that was a serious question. Even if you don't care,
others might.
[...]
I never came across any version of 'make' in the DEC OSes I used in
the 1970s, nor did I see it in the 1980s either.
In any case it wouldn't have worked with my compiler, as it was not a discrete program: it was memory-resident together with an editor, as
part of my IDE.
This helped to get fast turnarounds even on floppy-based 8-bit systems.
Plus, I wouldn't have felt the issue was of any great importance:
On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
If I were developing a compiler, I would not advertise any kind of
lines-per-second value. It is a totally useless metric - as useless as
measuring developer performance on the lines of code he/she writes per day.
If that were your only advantage, you'd have to flaunt it.
"[[ Our compiler emits lousy code, emits only half the required ISO diagnostics (and those are all there are), and is compatible with only
75% of your system's header files, and 80% of the ABI, but ...]] have you seen the raw speed in lines per second?"
bart <bc@freeuk.com> writes:
On 30/10/2025 23:44, Keith Thompson wrote:
[...]
bart <bc@freeuk.com> writes:
[...]
What do you mean by incremental rebuilding? I usually talk about
/independent/ compilation.
Then incremental builds might be about deciding which modules to
recompile, except that that is so obvious, you didn't give it a name.
Compile the one file you've just edited. If it might impact on any
others (you work on a project for months, you will know it
intimately), then you just compile the lot.
I'll assume that was a serious question. Even if you don't care,
others might.
[...]
I never came across any version of 'make' in the DEC OSes I used in
the 1970s, nor did I see it in the 1980s either.
In any case it wouldn't have worked with my compiler, as it was not a
discrete program: it was memory-resident together with an editor, as
part of my IDE.
This helped to get fast turnarounds even on floppy-based 8-bit systems.
Plus, I wouldn't have felt the issue was of any great importance:
You asked what incremental building means. I told you. Your only
response is to let us all know that you don't find it useful.
On 31/10/2025 01:16, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
On 30/10/2025 23:44, Keith Thompson wrote:
[...]
bart <bc@freeuk.com> writes:
[...]
What do you mean by incremental rebuilding? I usually talk about
/independent/ compilation.
Then incremental builds might be about deciding which modules to
recompile, except that that is so obvious, you didn't give it a name.
Compile the one file you've just edited. If it might impact on any
others (you work on a project for months, you will know it
intimately), then you just compile the lot.
I'll assume that was a serious question. Even if you don't care,
others might.
[...]
I never came across any version of 'make' in the DEC OSes I used in
the 1970s, nor in the 1980s did I see it either.
In any case it wouldn't have worked with my compiler, as it was not a
discrete program: it was memory-resident together with an editor, as
part of my IDE.
This helped to get fast turnarounds even on floppy-based 8-bit systems.
Plus, I wouldn't have felt the issue was of any great importance:
You asked what incremental building means. I told you. Your only
response is to let us all know that you don't find it useful.
Actually I didn't mention 'make'. I said what I thought it meant, and
I expanded on that in my reply to you.
You mentioned 'make', and I also explained why it wouldn't have been
any good to me.
In any case, you still have to give that dependency information to
'make', and maintain it, as well as all info about the constituent
files of the project.
On 30/10/2025 15:04, David Brown wrote:
On 30/10/2025 13:07, bart wrote:
Maybe there /is/ something wrong with your machine or setup. If you
have a 2 core machine, it is presumably a low-end budget machine from
perhaps 15 years ago. I'm all in favour of keeping working systems
and I strongly disapprove of some people's two or three year cycles
for swapping out computers, but there is a balance somewhere. With
such an old system, I presume you also have old Windows (my office
Windows machine is Windows 7), and thus the old and very slow style of
WSL. That, I think, could explain the oddities in your timings.
The machine is from 2021. It has an SSD, 8GB, and runs Windows 11. It
uses WSL version 2.
It is fast enough for my 40Kloc compiler to self-host itself repeatedly
at about 15Hz (ie. produce 15 new generations per second). And that is
using unoptimised x64 code:
 c:\mx2>tim ms ms ms ms ms ms ms ms ms ms ms ms ms ms ms hello
 Hello, World
 Time: 1.017
Hmm, I'm only counting 14 'ms' after the first. So apologies, it is only 14Hz!
On 27/10/2025 13:39, Janis Papanagnou wrote:
On 27.10.2025 13:50, bart wrote:
On 27/10/2025 02:08, Janis Papanagnou wrote:
On 26.10.2025 12:26, bart wrote:
Speed is not an end in itself. It must be valued in comparison with
all the other often more relevant factors (that you seem to completely
miss, even when explained to you).
Speed seems to be important enough that huge efforts have gone into
creating the best optimising compilers over decades.
Fantastically complex products like LLVM exist, which take 100 times
longer to compile code than a naive compiler, in order to eke out the
last bit of performance.
Similarly, massive investment has gone into making dynamic languages
fast, like the state-of-the-art products used in running JavaScript, or
the numerous JIT approaches used to accelerate languages like Python and Ruby.
Build-speed is taken seriously enough, and most 'serious' compilers are
slow enough, that complex build systems exist, which use dependencies in order to avoid compilation as much as possible.
Or failing that, by parallelising builds across multiple cores, possibly even across distributed machines.
So, fortunately some people take this stuff more seriously than you do.
I am also involved in this field, and my experimental work takes the approach of simplicity to achieve results.
I know your goals are space and speed. And that's fine in principle
(unless you're ignoring other relevant factors).
LLVM is a backend project which is massively bigger, more complex and
slower (in build speed) than my stuff, by a number of magnitudes in each case.
The resulting code however, might only be a fraction of a magnitude
faster (for example the 0.3 vs 0.4 timings above, achieved via gcc, but
LLVM would be similar).
And that's if you apply the optimiser, which I would only use for
production builds, or for benchmarking. Otherwise its code is just as
poor as mine, or worse, but it still takes longer to build stuff!
For me the trade-offs of a big, cumbersome product don't work. I like my near-zero builds and can work more spontaneously!
It's still at least FIVE TIMES FASTER than A68G! [2-3 TIMES FASTER]
So what? - I don't need a Lua system. So why should I care.
You are the one who seems to think that the speed factor is the most
important factor to choose a language for a project. - You are wrong
for the general case. (But it may be right for your personal universe,
of course.)
You are wrong. What language do you use most? Let's say it is C
(although you usually post about every other language except C!).
Then, suppose your C compiler was written in Python rather than C++ or whatever and run under CPython. What do you think would happen to your build-times?
Now imagine further if the CPython interpreter was itself written and executed with CPython.
So, the 'speed' of a language (ie. of its typical implementation, which
also depends on the language design) does matter.
If speed wasn't an issue then we'd all be using easy dynamic languages
for productivity. In reality those easy languages are far too slow in
most cases.
The tools I'm using for my personal purposes, and those that I had been using for professional purposes, all served the necessary requirements. Yours don't.
I'm just showing just how astonishingly fast modern hardware can be.
Like at least a thousand times faster than a 1970s mainframe, and yet
people are still waiting on compilers!
You've been explained before many times already by many people that
differences in compile time may not beat other more relevant factors.
I've also explained that I work by very frequent edit-run cycles. Then compile-times matter. This is why many like to use scripting languages
as those don't have a discernible build step.
But I can use my system language, *or* C via my compiler, just like a scripting language.
You will now find various projects that apply JIT techniques to such languages in an effort to provide a similar experience. (I don't need
such techniques as my AOT compilers already work near-instantly.)
[...]
The data structure that defines the '-j' option in the GNU make
source is:
static struct command_switch switches[] =
{
// ...
{ 'j', positive_int, &arg_job_slots, 1, 1, 0, 0, &inf_jobs, &default_job_slots,
"jobs", 0 },
//...
};
Yes, it's odd that "-j" may or may not be followed by an argument.
The way it works is that if the following argument exists and is
(a string representing) a positive integer, it's taken as "-j N",
otherwise it's taken as just "-j".
A make argument that's not an option is called a "target"; for
example in "make -j 4 foo", "foo" is the target. A target whose name
is a positive integer is rare enough that the potential ambiguity
is almost never an issue. If it is, you can use the long form:
"make --jobs" or "make --jobs=N".
I think it would have been cleaner if the argument to "-j" had
been mandatory, with an argument of "0", "-1", or "max" having
some special meaning. But changing it could break existing scripts
that invoke "make -j" (though as I've written elsethread, "make -j"
can cause problems).
It would also have been nice if the "make -j $(nproc)" functionality
had been built into make.
The existing behavior is a bit messy, but it works, and I've never
run into any actual problems with the way the options are parsed.
(I've never had any speed issues with make, so I've never used -j;
even though it comes "for free". - But I also have no 64-core CPUs or
MLOC-sized projects at home.)
Janis
On 30/10/2025 18:49, bart wrote:
[...]
If I were developing a compiler, I would not advertise any kind of lines-per-second value. It is a totally useless metric -
as useless as measuring developer performance on the lines of code
he/she writes per day.
Anyway, C is often used as a target for compilers of other languages.
There, it should be validated code, and so needs little error checking.
It might not even use any headers (my generated C doesn't).
On 31/10/2025 00:28, Kaz Kylheku wrote:
On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
If I were developing a compiler, I would not advertise any kind of
lines-per-second value. It is a totally useless metric - as
useless as measuring developer performance on the lines of code
he/she writes per day.
If that were your only advantage, you'd have to flout it.
"[[ Our compiler emits lousy code, emits only half the required ISO diagnostics (and those are all there are), and is compatible with
only 75% of your system's header files, and 80% of the ABI, but
...]] have you seen the raw speed in lines per second?"
How would Turbo C compare then?
If that were your only advantage, you'd have to flout it.
People into compilers are obsessed with optimisation. It can be a
necessity for languages that generate lots of redundant code that needs
to be cleaned up, but not so much for C.
Typical differences between -O0 and -O2 compiled code can be 2:1.
However even the most terrible native code will be a magnitude faster
than interpreted code.
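For anyone who wants to check such a ratio on their own machine, here is
a minimal timing sketch (a hypothetical example, not code from the
thread; the result only says something about this particular loop).
Build it once as "gcc -O0 bench.c" and once as "gcc -O2 bench.c" and
compare the reported CPU times:
  #include <stdio.h>
  #include <time.h>
  int main(void)
  {
      double s = 0.0;
      clock_t t0 = clock();
      for (long i = 1; i <= 100000000L; i++)   /* compute-bound work */
          s += 1.0 / (double)i;
      clock_t t1 = clock();
      /* printing s keeps the loop from being optimised away entirely */
      printf("sum=%f  cpu=%.2fs\n", s, (double)(t1 - t0) / CLOCKS_PER_SEC);
      return 0;
  }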
bart <bc@freeuk.com> writes:
On 30/10/2025 14:13, Scott Lurndal wrote:
antispam@fricas.org (Waldek Hebisch) writes:
bart <bc@freeuk.com> wrote:
On 29/10/2025 23:04, David Brown wrote:
On 29/10/2025 22:21, bart wrote:
BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:
root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
<warnings>
real 0m49.512s
user 0m19.033s
sys 0m3.911s
Those numbers indicate that there is something wrong with your
machine. Sum of second and third line above give CPU time.
Real time is twice as large, so something is slowing down things.
One possible trouble is having too small RAM, then OS is swaping
data to/from disc. Some programs do a lot of random I/O, that
can be slow on spinning disc, but SSD-s usually are much
faster at random I/O.
Assuming that you have enough RAM you should try at least using
'make -j 3', that is allow make to use up to 3 jobs. I wrote
at least, because AFAIK cheapest PC CPU-s of reasonable age
have at least 2 cores, so to fully utilize the machine you
need at least 2 jobs. 3 is better, because some jobs may wait
for I/O.
FYI, reasonably typical report for normal make (without -j
option) on my machine is:
real 0m4.981s
user 0m3.712s
sys 0m0.963s
Just for grins, here's a report for a full rebuild of a real-world project that I build regularly. Granted most builds are partial (e.g. one or
two source files touched) and take far less time (15 seconds or so,
most of which is make calling stat(2) on a few hundred source files
on an NFS filesystem). Close to three million SLOC, mostly in header
files. C++.
What is the total size of the produced binaries?
There are 181 shared objects (DLL in windows speak) and
six binaries produced by the build. The binaries are all quite small since they dynamically link at runtime with the necessary
shared objects, the set of which can vary from run-to-run.
The largest shared object is 7.5MB.
text data bss dec hex filename
6902921 109640 1861744 8874305 876941 lib/libXXX.so
On 30/10/2025 23:44, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
This is likely to give you working executables substantially
faster than if you did a full rebuild. It's more useful while
you're developing and updating a project than when you download
the source and build it once.
I never came across any version of 'make' in the DEC OSes I used in the
1970s, nor in the 1980s did I see it either.
In any case it wouldn't have worked with my compiler, as it was not a
discrete program: it was memory-resident together with an editor, as
part of my IDE.
This helped to get fast turnarounds even on floppy-based 8-bit systems.
On 30/10/2025 21:37, Keith Thompson wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Try "time make -j" as a simple step.[...]
In my recent testing, "make -j" without a numeric argument (which
tells make to run as many parallel steps as possible) caused my
system to bog down badly. This was on a fairly large project (I used
vim); it might not be as much of a problem with a smaller project.
I've found that "make -j $(nproc)" is safer. The "nproc" command
is likely to be available on any system that has a "make" command.
It occurs to me that "make -j N" can fail if the Makefile does
not correctly reflect all the dependencies. I suspect this is
less likely to be a problem if the Makefile is generated rather
than hand-written.
There certainly are makefile builds that might not work correctly with
parallel builds. And I think you are right that this is typically a
dependency specification issue, and that generating dependencies
automatically in some way should have lower risk of problems. I think
it is also typically older makefiles - from the days of single-core
machines where "make -j N" was not considered - that had such issues.
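For what it's worth, here is a sketch of what "generating dependencies
automatically" commonly looks like with gcc and GNU make (a hypothetical
makefile fragment, not taken from any project mentioned here), using
gcc's -MMD/-MP options to emit per-file dependency lists that make then
reads back in:
  # hypothetical fragment: gcc writes foo.d alongside foo.o
  OBJS   := main.o util.o
  DEPS   := $(OBJS:.o=.d)
  CFLAGS += -MMD -MP        # emit .d files with phony targets for headers

  prog: $(OBJS)
  	$(CC) $(CFLAGS) -o $@ $^

  -include $(DEPS)          # pull in generated dependencies if present
A change to a header listed in one of the .d files then rebuilds only
the objects that actually include it, which is also what makes
"make -j N" safe for such a makefile.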
On 30/10/2025 16:22, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
What is the total size of the produced binaries?
There are 181 shared objects (DLL in windows speak) and
six binaries produced by the build. The binaries are all quite small since they dynamically link at runtime with the necessary
shared objects, the set of which can vary from run-to-run.
The largest shared object is 7.5MB.
text data bss dec hex filename
6902921 109640 1861744 8874305 876941 lib/libXXX.so
Well, I've done a couple of small tests.
The first was in generating 200 'small' DLLs - duplicates of the same library. This took 6 seconds to produce 200 libraries of 50KB each (10MB total). Each library is 5KB as it includes my language's standard libs.
bart <bc@freeuk.com> writes:
On 30/10/2025 16:22, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
What is the total size of the produced binaries?
There are 181 shared objects (DLL in windows speak) and
six binaries produced by the build. The binaries are all quite small since
they dynamically link at runtime with the necessary
shared objects, the set of which can vary from run-to-run.
The largest shared object is 7.5MB.
text data bss dec hex filename
6902921 109640 1861744 8874305 876941 lib/libXXX.so
Well, I've done a couple of small tests.
Pointlessly.
The first was in generating 200 'small' DLLs - duplicates of the same
library. This took 6 seconds to produce 200 libraries of 50KB each (10MB
total). Each library is 5KB as it includes my language's standard libs.
The shared object 'text' size ranges from 500KB to 14MB.
Your toy projects aren't representative of real world application development. Can you not understand that?
On 31/10/2025 00:23, bart wrote:
People into compilers are obsessed with optimisation. It can be a
necessity for languages that generate lots of redundant code that
needs to be cleaned up, but not so much for C.
Typical differences between -O0 and -O2 compiled code can be 2:1.
However even the most terrible native code will be a magnitude faster
than interpreted code.
You live in a world of x86 (with brief visits to 64-bit ARM). You used
to work with smaller processors and lower level code, but seem to have forgotten that long ago.
A prime characteristic of modern x86 processors is that they are
extremely good at running extremely bad code.
They are targeted at
systems where being able to run old binaries is essential. A great deal
of the hardware in an x86 cpu core is there to handle poorly optimised
code - lots of jumps and function calls get predicted and speculated,
data that is pushed onto and pulled off the stack gets all kinds of fast paths and short-circuits, and so on. And then there is the memory - if code has to wait for data from ram, the cpu can happily execute hundreds
of cycles of unnecessary unoptimised code without making any difference
to the final speed.
Big ARM processors - such as on Pi's - have the same effects, though to
a somewhat lesser extent.
A prime characteristic of user programs on PC's and other "big" systems
is that a lot of the time is spent doing things other than running the
user code - file I/O, screen display, OS calls, or code in static
libraries, DLLs (or SOs), etc. That stuff is completely unaffected by
the efficiency of the user code - that's why interpreted or VM code is
fast enough for a very wide range of use-cases.
And if you are working with Windows systems with an MS DLL for the C
runtime library (as used by some C toolchains on Windows, but not all),
then you can get more distortions. If you have a call to memcpy that
uses an external DLL, that is going to take perhaps 500 clock cycles
even for a small fixed size of memcpy (assuming all code and data is in cache). The user code for the call might be 10 cycles or 20 cycles depending on the optimisation - compiler optimisation makes no
measurable difference here. But if the toolchain uses a static library
for memcpy and can optimise locally to replace the call, the static call
to general memcpy code might take 200 cycles while the local code takes
10 cycles. Suddenly the difference between optimising and non-
optimising is huge.
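To illustrate the kind of call being described, here is a minimal sketch
(not code from the thread): with the size known at compile time, a
compiler that treats memcpy as a builtin can drop the call entirely,
whereas a call routed through an external runtime DLL pays the full call
cost at any optimisation level.
  #include <string.h>
  #include <stdint.h>

  struct point { int32_t x, y; };

  /* Small, fixed-size copy: optimising compilers commonly replace this
     memcpy with one or two register moves; an out-of-line call into a
     separate C runtime DLL cannot be shortened that way. */
  void copy_point(struct point *dst, const struct point *src)
  {
      memcpy(dst, src, sizeof *dst);
  }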
Then there is the type of code you are dealing with. Some code is very
cpu intensive and can benefit from optimisations, other code is not.
And optimisation is not just a matter of choosing -O0 or -O2 flags. It
can mean thought and changes in the source code (some standard C
changes, like use of "restrict" parameters, some compiler-specific
changes like gcc attributes or builtins, and some target specific like
organising data to fit cache usage). And it can mean careful flag
choices - different specific optimisations suitable for the code at
hand, and target related flags for enabling more target features. I am
entirely confident that you have done none of these things when
testing. That's not necessarily a bad thing in itself, when looking at
widely portable source compiled to generic binaries, but it gives a very
unrealistic picture of compiler optimisations and what can be achieved
by someone who knows how to work with their compiler.
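As one concrete example of the source-level changes mentioned above (a
sketch, not anyone's actual code): "restrict" promises the compiler that
the two pointers never alias, so it can keep values in registers or
vectorise the loop instead of reloading after every store.
  /* With restrict, the compiler may assume dst and src do not overlap. */
  void scale(float *restrict dst, const float *restrict src,
             float k, int n)
  {
      for (int i = 0; i < n; i++)
          dst[i] = k * src[i];
  }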
All this conspires to give you this 2:1 ratio that you regularly state
for the difference between optimised code and unoptimised code - gcc -O2
and gcc -O0.
In reality, people can often achieve far greater ratios for the type of
code where performance matters and where it is achievable. Someone
working on game engines on an x86 would probably expect at least 10
times difference between the flags they use, and no optimisation flags.
For the targets I use, which are (generally) not super-scalar,
out-of-order, etc., five to ten times difference is not uncommon. And
when you throw C++ or other modern languages into the mix (remember, gcc
and clang/llvm are not simple C compilers), the benefits of inlining and
other inter-procedural optimisations can easily be an order of
magnitude. (This is one reason why gcc and clang enable a number of
optimisations, including at least inlining of functions marked
appropriately, even with no optimisation flags specified.)
You can continue to believe that high-end toolchains are no more than
twice as good as your own compiler or tcc, if you like.
On 31/10/2025 13:57, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
On 30/10/2025 16:22, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
What is the total size of the produced binaries?
There are 181 shared objects (DLL in windows speak) and
six binaries produced by the build. The binaries are all quite small since
they dynamically link at runtime with the necessary
shared objects, the set of which can vary from run-to-run.
The largest shared object is 7.5MB.
text data bss dec hex filename
6902921 109640 1861744 8874305 876941 lib/libXXX.so
Well, I've done a couple of small tests.
Pointlessly.
The first was in generating 200 'small' DLLs - duplicates of the same
library. This took 6 seconds to produce 200 libraries of 50KB each (10MB total). Each library is 5KB as it includes my language's standard libs.
The shared object 'text' size ranges from 500KB to 14MB.
Well, I asked for some figures, and they were lacking. And here, the
14MB figure contradicts the 7.5MB you mentioned above as the largest object.
Your toy projects aren't representative of real world application
development. Can you not understand that?
I don't believe you. Clearly my tests show that basic conversion of HLL
code to native code can be easily done at several MB per second even on
my low-end hardware - per core.
If your tests have an effective throughput far below that, then either
you have very slow compilers, or are doing a mountain of work unrelated
to compiling, or the orchestration of the whole process is poor, or some combination.
(You mentioned there are nearly 400 developers involved? It sounds like
a management problem.)
bart <bc@freeuk.com> writes:
On 31/10/2025 13:57, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
On 30/10/2025 16:22, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
What is the total size of the produced binaries?
There are 181 shared objects (DLL in windows speak) and
six binaries produced by the build. The binaries are all quite small since
they dynamically link at runtime with the necessary
shared objects, the set of which can vary from run-to-run.
The largest shared object is 7.5MB.
text data bss dec hex filename
6902921 109640 1861744 8874305 876941 lib/libXXX.so
Well, I've done a couple of small tests.
Pointlessly.
The shared object 'text' size ranges from 500KB to 14MB.
The first was in generating 200 'small' DLLs - duplicates of the same
library. This took 6 seconds to produce 200 libraries of 50KB each (10MB total). Each library is 5KB as it includes my language's standard libs.
Well, I asked for some figures, and they were lacking. And here, the
14MB figure contradicts the 7.5MB you mentioned above as the largest object.
The 7.5MB was the shared object containing the main code. 14MB
was one outlier that I hadn't expected to be so large a text region (am actually looking into that now, I suspect the gcc optimizer doesn't handle
a particular bit of generated data structure initialization sequence very well).
$ size lib/*.so | cut -f 1
text
367395
8053916
A couple are third-party libraries distributed
in binary form (e.g. the ones with 30+Mbytes of text).
If your tests have an effective throughput far below that, then either
you have very slow compilers, or are doing a mountain of work unrelated
to compiling, or the orchestration of the whole process is poor, or some
combination.
Or your tools are not capable of building a project of this size
and complexity. If they were, they'd likely take even _more_ time
to run.
(You mentioned there are nearly 400 developers involved? It sounds like
a management problem.)
I said nothing about the number of developers (perhaps you were looking
at the output of the 'sloccount' command?)
Between 2 and 8 developers have worked on this project
at any one time over the last 15 years.
On 31/10/2025 00:28, Kaz Kylheku wrote:
On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
If I were developing a compiler, I would not advertise any kind of
lines-per-second value. It is a totally useless metric - as useless as
measuring developer performance on the lines of code he/she writes per day.
If that were your only advantage, you'd have to flout it.
"[[ Our compiler emits lousy code, emits only half the required ISO
diagnostics (and those are all there are), and is compatible with only
75% of your system's header files, and 80% of the ABI, but ...]] have you
seen the raw speed in lines per second?"
How would Turbo C compare then?
On 30/10/2025 10:15, David Brown wrote:
On 30/10/2025 01:36, bart wrote:
So, what exactly did I do wrong here (for A68G):
  root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
  real   1m32.205s
  user   0m40.813s
  sys    0m7.269s
This 90 seconds is the actual time I had to hang about waiting. I'd be
interested in how I managed to manipulate those figures!
Try "time make -j" as a simple step.
OK, "make -j" gave a real time of 30s, about three times faster. (Not<snip>
quite sure how that works, given that my machine has only two cores.)
However, I don't view "-j", and parallelisation, as a solution to slow compilation. It is just a workaround, something you do when you've
exhausted other possibilities.
You have to get raw compilation fast enough first.
Quite a few people have suggested that there is something amiss about my 1:32 and 0:49 timings. One has even said there is something wrong with
my machine.
In article <20251030172415.416@kylheku.com>,
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
If that were your only advantage, you'd have to flout it.
Flaunt.
On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
If I were developing a compiler, I would not advertise any kind of
lines-per-second value. It is a totally useless metric - as useless as
measuring developer performance on the lines of code he/she writes per day.
If that were your only advantage, you'd have to flout it.
"[[ Our compiler emits lousy code, emits only half the required ISO diagnostics (and those are all there are), and is compatible with only
75% of your system's header files, and 80% of the ABI, but ...]]
And as for diagnostics, it seems that you have to actively know about
them and explicitly enable checking for them.
bart <bc@freeuk.com> wrote:
On 30/10/2025 10:15, David Brown wrote:<snip>
On 30/10/2025 01:36, bart wrote:
So, what exactly did I do wrong here (for A68G):
  root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
  real   1m32.205s
  user   0m40.813s
  sys    0m7.269s
This 90 seconds is the actual time I had to hang about waiting. I'd be >>>> interested in how I managed to manipulate those figures!
Try "time make -j" as a simple step.
OK, "make -j" gave a real time of 30s, about three times faster. (Not
quite sure how that works, given that my machine has only two cores.)
However, I don't view "-j", and parallelisation, as a solution to slow
compilation. It is just a workaround, something you do when you've
exhausted other possibilities.
You have to get raw compilation fast enough first.
Quite a few people have suggested that there is something amiss about my
1:32 and 0:49 timings. One has even said there is something wrong with
my machine.
Yes, I wrote this. 90 seconds in itself could be OK, your machine
just could be slow. But the numbers you gave clearly show that
only about 50% of the time of _one_ core is used to do the build.
So something is slowing down your machine. And this is specific to
your setup, as other people running builds on Linux get better than
90% CPU utilization. You apparently get offended by this statement.
If you are really interested in fast tools you should investigate
what is causing this.
Anyway, there could be a lot of different reasons for slowdown.
The fact that you get a 3 times faster build using 'make -j' suggests
that some other program is competing for CPU and using more jobs
allows getting higher share of CPU. If that affects only programs
running under WSL, then your numbers may or may not be relevant to
WSL experience, but are incomparable to Linux timings. If slowdown
affects all programs on your machine, then you should be interested
in eliminating it, because it would also make your compiler faster.
But that is your machine; if you are not curious what happens, that
is OK.
On 31/10/2025 22:01, Waldek Hebisch wrote:
Anyway, there could be a lot of different reasons for slowdown.
The fact that you get a 3 times faster build using 'make -j' suggests
that some other program is competing for CPU and using more jobs
allows getting higher share of CPU. If that affects only programs
running under WSL, then your numbers may or may not be relevant to
WSL experience, but are incomparable to Linux timings. If slowdown
affects all programs on your machine, then you should be interested
in eliminating it, because it would also make your compiler faster.
But that is your machine; if you are not curious what happens, that
is OK.
I've no idea what this is up to. But here, I managed to compile that
file my way (I copied it to a place where the relevant headers were all
in one place):
  gcc -O2 -c a68g-conversion.c
Now real time is 0.14 seconds (recall it was 0.45). User time is still 0.08s.
So, what is all that crap that is making it 3 times slower? And do we
need all those -Wall checks, given that this is a working, debugged
program?
I suggest a better approach would be to get rid of that rubbish and
simplify it, rather than keeping it in and having to call in reinforcements
by employing extra cores, don't you think?