• Algol 68 / Genie - opinions on local procedures?

    From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Mon Aug 18 04:52:24 2025
    From Newsgroup: comp.lang.misc

    In a library source for rational numbers I'm using a GCD function
    to normalize the rational numbers. This function is called
    regularly by the other rational operations, since all numbers are
    always stored in their normalized form.

    I defined the 'PROC rat_gcd' in global space but with Algol 68 it
    could also be defined locally (to not pollute the global namespace)
    like

    ELSE # normalize #
       PROC rat_gcd = ... ;

       INT nom = ABS a, den = ABS b;
       INT sign = SIGN a * SIGN b;
       INT q = rat_gcd (nom, den);
       ( sign * nom OVER q, den OVER q )
    FI

    though performance measurements showed some noticeable degradation
    with a local function definition as depicted.
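
    For reference, a self-contained sketch of the construct under
    discussion, with a hypothetical Euclidean body filled in for the
    elided 'rat_gcd'; the RAT mode and the surrounding rat_make
    procedure are illustrative inventions, not the library's actual
    code:

    BEGIN
       MODE RAT = STRUCT (INT num, den);

       PROC rat_make = (INT a, b) RAT:
          IF b = 0
          THEN (SIGN a, 0) # degenerate; a real library would flag an error #
          ELSE # normalize #
             PROC rat_gcd = (INT m, n) INT:
                # iterative Euclid; one plausible shape of the elided body #
                BEGIN
                   INT x := m, y := n;
                   WHILE y /= 0 DO INT t = y; y := x MOD y; x := t OD;
                   x
                END;
             INT nom = ABS a, den = ABS b;
             INT sign = SIGN a * SIGN b;
             INT q = rat_gcd (nom, den);
             ( sign * nom OVER q, den OVER q )
          FI;

       RAT r = rat_make (6, -4);
       print ((num OF r, "/", den OF r, newline)) # prints -3/2 #
    END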

    I'd prefer it to be local but since it's ubiquitously used in that
    library the performance degradation (about 15% on avg) annoys me.

    Opinions on that?

    Janis
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Mon Aug 18 16:54:53 2025
    From Newsgroup: comp.lang.misc

    On 18/08/2025 03:52, Janis Papanagnou wrote:
    I defined the 'PROC rat_gcd' in global space but with Algol 68 it
    could also be defined locally (to not pollute the global namespace)
    like
    [... snip ...]
    though performance measurements showed some noticeable degradation
    with a local function definition as depicted.

    I can't /quite/ reproduce your problem. If I run just the
    interpreter ["a68g myprog.a68g"] then on my machine the timings are
    identical. If I optimise ["a68g -O3 myprog.a68g"], then /first time/
    through, I get a noticeable degradation [about 10% on my machine],
    but the timings converge if I run them repeatedly. YMMV. I suspect
    it's to do with storage management, and later runs are able to re-use
    heap storage that had to be grabbed first time. But that could be
    completely up the pole. Marcel would probably know.

    If you see the same, then I suggest you don't run programs
    for a first time. [:-)]
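
    [A minimal way to repeat the two measurements, for anyone wanting
    to replicate this; "myprog.a68g" stands in, as above, for the
    actual benchmark source:

    $ for i in 1 2 3; do time a68g myprog.a68g > /dev/null; done
    $ for i in 1 2 3; do time a68g -O3 myprog.a68g > /dev/null; done

    Repeating the runs back to back is what shows whether the timings
    converge after the first time through.]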

    I'd prefer it to be local but since it's ubiquitously used in that
    library the performance degradation (about 15% on avg) annoys me.
    Opinions on that?

    Personally, I'd always go for the version that looks nicer
    [ie, in keeping with your own inclinations, with the spirit of A68,
    with the One True (A68) indentation policy, and so on]. If you're
    worried about 15%, that will be more than compensated for by your
    next computer! If you're Really Worried about 15%, then I fear it's
    back to C [or whatever]; but that will cost you more than 15% in
    development time.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/West
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Mon Aug 18 18:30:54 2025
    From Newsgroup: comp.lang.misc

    On 18.08.2025 17:54, Andy Walker wrote:
    On 18/08/2025 03:52, Janis Papanagnou wrote:
    I defined the 'PROC rat_gcd' in global space but with Algol 68 it
    could also be defined locally (to not pollute the global namespace)
    like
    [... snip ...]
    though performance measurements showed some noticeable degradation
    with a local function definition as depicted.

    I can't /quite/ reproduce your problem. If I run just the
    interpreter ["a68g myprog.a68g"] then on my machine the timings are identical. If I optimise ["a68g -O3 myprog.a68g"], then /first time/ through, I get a noticeable degradation [about 10% on my machine],
    but the timings converge if I run them repeatedly. YMMV.

    Actually, with more tests, the variance got even greater; from 10%
    to 45% degradation. The variances, though, did not converge [in my
    environment].

    I suspect
    it's to do with storage management, and later runs are able to re-use
    heap storage that had to be grabbed first time.

    I also suspected some storage management effect; maybe that the GC
    got active at various stages. (But the code did not use anything
    that would require GC; to be honest, I'm puzzled.)

    But that could be
    completely up the pole. Marcel would probably know.

    If you see the same, then I suggest you don't run programs
    for a first time. [:-)]

    :-)


    I'd prefer it to be local but since it's ubiquitously used in that
    library the performance degradation (about 15% on avg) annoys me.
    Opinions on that?

    Personally, I'd always go for the version that looks nicer
    [ie, in keeping with your own inclinations, with the spirit of A68,
    with the One True (A68) indentation policy, and so on].

    That's what I'm tending towards. I think I'll put the GCD function
    in local scope to keep it away from the interface.

    If you're
    worried about 15%, that will be more than compensated for by your
    next computer!

    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an update
    here it's not my priority. ;-)

    If you're Really Worried about 15%, then I fear it's

    Not really. It's not the 10-45%, it's more the feeling that a library
    function should not only conform to the spirit of good software design
    but also be efficiently implemented (also in Algol 68).

    The "problem" (my "problem") here is that the effect should not appear
    in the first place since static scoping should not cost performance; I
    suppose it's an effect of Genie being effectively an interpreter here.

    But my Algol 68 programming is anyway just recreational, for fun, so
    I'll go with the cleaner (slower) implementation.

    back to C [or whatever]; but that will cost you more than 15% in
    development time.

    Uh-oh! - But no, that's not my intention here. ;-)

    Thanks!

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Tue Aug 19 00:45:00 2025
    From Newsgroup: comp.lang.misc

    On 18/08/2025 17:30, Janis Papanagnou wrote:
    Actually, with more tests, the variance got even greater; from 10%
    to 45% degradation. The variances, though, did not converge [in my
    environment].

    Ah. Then I backtrack from my previous explanation to an
    alternative, that your 15yo computer has insufficient cache, so
    every new run chews up more and more real storage. Or something.
    You may get some improvement by running "sweep heap" or similar
    from time to time, or using pragmats to allocate more storage.
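
    [To make those two suggestions concrete in a68g terms - assuming
    the "sweep heap" prelude procedure and the option pragmats behave
    as Marcel's documentation describes, and with a heap size picked
    purely for illustration:

    PR heap=256M PR  CO request a larger heap at start-up; the exact
                        pragmat spelling is worth checking against the
                        a68g manual CO

    BEGIN
       # ... allocation-heavy phase of the program ... #
       sweep heap # explicitly collect garbage between phases #
    END

    The same heap size can alternatively be requested as a command-line
    option.]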

    I also suspected some storage management effect; maybe that the GC
    got active at various stages. (But the code did not use anything
    that would require GC; to be honest, I'm puzzled.)

    ISTR that A68G uses heap storage rather more than you might
    expect. I think Marcel's documentation has more info.

    [...]
    If you're
    worried about 15%, that will be more than compensated for by your
    next computer!
    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an update
    here it's not my priority. ;-)

    Ah. I thought I was bad, keeping computers 10 years or so!
    I got a new one a couple of years back, and the difference in speed
    and storage was just ridiculous.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Soler
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Tue Aug 19 02:44:58 2025
    From Newsgroup: comp.lang.misc

    On 19.08.2025 01:45, Andy Walker wrote:
    [...]
    If you're
    worried about 15%, that will be more than compensated for by your
    next computer!
    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an update
    here it's not my priority. ;-)

    Ah. I thought I was bad, keeping computers 10 years or so!
    I got a new one a couple of years back, and the difference in speed
    and storage was just ridiculous.

    Well, used software tools (and their updates) required me to at least
    upgrade memory! (That's actually one point that annoys me in "modern"
    software development; rarely anyone seems to care economizing resource
    requirements.) But all the rest, especially the things that influence
    performance (CPU [speed, cores], graphic card, HDs/Cache, whatever)
    is comparably old stuff in my computer; but it works for me.[*]

    And, by the way, thanks for your suggestions and helpful information
    on my questions in all my recent Algol posts! It's also very pleasant
    being able to substantially exchange ideas on this (IMO) interesting
    legacy topic.

    Janis

    [*] If anything I'd probably only need an ASCII accelerating graphics
    card; see https://www.bbspot.com/News/2003/02/ati_ascii.html ;-)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Wed Aug 20 00:47:31 2025
    From Newsgroup: comp.lang.misc

    On 19/08/2025 01:44, Janis Papanagnou wrote:
    [...] (That's actually one point that annoys me in "modern"
    software development; rarely anyone seems to care economizing resource
    requirements.) [...]

    Yeah. From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer. Sadly, I have to admit that
    I too am rather careless of resources; if you have terabytes of SSD,
    it seems to be a waste of time worrying about a few megabytes.

    And, by the way, thanks for your suggestions and helpful information
    on my questions in all my recent Algol posts! It's also very pleasant
    being able to substantially exchange ideas on this (IMO) interesting
    legacy topic.

    You're very welcome, and I reciprocate your pleasure.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Peerson
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.misc on Wed Aug 20 00:43:22 2025
    From Newsgroup: comp.lang.misc

    On Wed, 20 Aug 2025 00:47:31 +0100, Andy Walker wrote:

    From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer.

    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    If you were to run an old OS on new hardware, that would need drivers for
    that new hardware, too.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Wed Aug 20 23:58:58 2025
    From Newsgroup: comp.lang.misc

    On 20/08/2025 01:43, Lawrence D’Oliveiro wrote:
    [I wrote:]
    From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer.
    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    You seem to be making Janis's point, but that doesn't seem to
    be your intention?

    If you were to run an old OS on new hardware, that would need drivers for that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines
    more than its equivalent for a PDP-11? Does this not again make
    Janis's point?

    Granted that the advent of 32- and 64-bit integers and addresses
    makes some programming much easier, and that we can no longer expect
    browsers and other major tools to fit into 64+64K bytes, is the actual
    bloat in any way justified? It's not just kernels and user software --
    it's also the documentation. In V7, "man cc" generates just under two
    pages of output; on my current computer, it generates over 27000 lines,
    call it 450 pages, and is thereby effectively unprintable and unreadable,
    so it is largely wasted.

    For V7, the entire documentation fits comfortably into two box
    files, and the entire source code is a modest pile of lineprinter output.
    Most of the commands on my current computer are undocumented and unused,
    and I have no idea at all what they do.

    Yes, I know how that "just happens", and I'm observing rather
    than complaining [I'd rather write programs, browse and send/read
    e-mails on my current computer than on the PDP-11]. But it does
    all give food for thought.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Peerson
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.misc on Thu Aug 21 02:59:32 2025
    From Newsgroup: comp.lang.misc

    On Wed, 20 Aug 2025 23:58:58 +0100, Andy Walker wrote:

    On 20/08/2025 01:43, Lawrence D’Oliveiro wrote:

    If you were to run an old OS on new hardware, that would need
    drivers for that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines
    more than its equivalent for a PDP-11?

    Keyboard and mouse -- USB.

    Disk drive -- that might connect via SCSI or SATA. Either one requires
    common SCSI-handling code. Plus you want a filesystem, don’t you?
    Preferably a modern one with better performance and reliability than
    Bell Labs was able to offer, back in the day. That requires caching
    and journalling support. Plus modern drives have monitoring built-in,
    which you will want to access. And you want RAID, which didn’t exist
    back then?

    Monitor -- video in the Linux kernel goes through the DRM (“Direct
    Rendering Manager”) layer. Unix didn’t have GUIs back then, but you
    will likely want them now. The PDP-11 back then accessed its console
    (and other terminals) through serial ports. You might still want
    drivers for those, too.

    Both video and disk handling in turn would be built on the common
    PCI-handling code.

    Remember there is also hot-plugging support for these devices, which
    was unheard of back in the day.

    The CPU+support chipset itself will need some drivers, beyond what was conceived back then: for example, control of the various levels of
    caching, power saving, sensor monitoring, and of course memory
    management needs to be much more sophisticated nowadays.

    And what about networking? Would you really want to run a machine in a
    modern environment without networking?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Thu Aug 21 21:02:55 2025
    From Newsgroup: comp.lang.misc

    On 19/08/2025 01:44, Janis Papanagnou wrote:
    [...] (That's actually one point that annoys me in "modern"
    software development; rarely anyone seems to care economizing resource
    requirements.) [...]

    On 21.08.2025 00:58, Andy Walker wrote:
    On 20/08/2025 01:43, Lawrence D’Oliveiro wrote:
    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    You seem to be making Janis's point, but that doesn't seem to
    be your intention?

    If you were to run an old OS on new hardware, that would need drivers for
    that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
    than its equivalent for a PDP-11? [...]

    This was actually what I was also thinking when I read Lawrence's
    statement. (And even given his later, more thorough list of modern
    functionalities, this still doesn't quite explain the need for *so*
    many resources, IMO. I mean, didn't they fly to the moon in capsules
    whose computers had mere kilobytes of memory? Yes, nowadays we have
    more features to support. But in previous days they *had* to
    economize; they had to "squeeze" algorithms to fit into 1 kiB of
    memory. Nowadays no one cares. And the computers running that
    software are an "externality"; there's no incentive, it seems, to
    write the software in an economical, ergonomic way.)

    But that was (in my initial complaint; see above) anyway just one
    aspect of many.

    You already mentioned documentation. There we see not only extremely
    huge and often badly structured, unclear texts, but also an
    information-to-text-size ratio that is often in extreme imbalance;
    to mention a few keywords: DOC, HTML, XML, JSON - where the problem
    is not (only) that one or the other of these formats is absolutely
    huge, but also that it's relatively huge compared to an equally or
    better fitting use of a more primitive format.

    Related to that: some HTML pages you load contain text payloads of
    just a few kiB, yet carry not only the HTML overhead but also load
    mebibytes (or gibibytes?) through dozens of JS libraries - which are
    not even used! And I haven't yet mentioned pages that add further
    storage and performance demands due to advertisement logic (with
    more delays, and "of course" without considering data privacy); but
    that, of course, is intentional (it's your choice).

    Economy is also related to GUI ergonomics, in configurability and
    usability. You can configure all sorts of GUI properties like
    schemes/appearance, you can adjust buttons left or right, but you
    cannot get a button with a necessary function, or one function in
    an easily accessible way. GUIs are overloaded with all sorts of
    trash, which inevitably leads to uneconomic use, while necessary
    features are unsupported or cannot be configured. (But providing
    such [only] fancy features also contributes to the code size.)

    Then there are the unnecessary dependencies. Just recently there
    was a discussion about (I think) the ffmpeg tool; it was shown
    that it includes hundreds of external libraries! Worse yet, many
    of them serve not its main task (video processing/converting) but
    things like LDAP, and *tons* of libraries concern Samba; the
    latter is also a problem of bad software organization, given that
    so many libraries have to be added for SMB "support" (whether
    that should be part of a video converter or not).

    But also the performance, or the system/application design. If
    you start, e.g., a picture viewer, you may have to wait a long
    time because the software designer thought it a good idea to
    present the directory tree in a separate part of the window; to
    achieve that, the program needs to recursively parse a huge
    subdirectory structure, and until you finally see that single
    picture that you wanted to see - and whose file name you already
    provided as argument! - half a minute has passed.

    Or the use of bad algorithms. Like a graphics processing program
    that doesn't terminate when trying to rotate a large image by 90°,
    because it tries to do the rotation unsophisticatedly, with a copy
    of the huge memory and with bit-wise operations, instead of using
    fast and lossless in-place algorithms (which have been commonly
    known for half a century).
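
    To make the contrast concrete: for the square case, the classic
    in-place rotation is a transpose followed by a row reversal, which
    touches each pixel a constant number of times and allocates
    nothing. A sketch in Algol 68, assuming a 1-based square array of
    pixel values (rectangular images need the cycle-following variant):

    PROC rotate90 = (REF [, ] INT a) VOID:
    BEGIN
       INT n = 1 UPB a;
       FOR i TO n DO # transpose in place #
          FOR j FROM i + 1 TO n DO
             INT t = a[i, j]; a[i, j] := a[j, i]; a[j, i] := t
          OD
       OD;
       FOR i TO n DO # then reverse each row: net effect, 90° clockwise #
          FOR j TO n OVER 2 DO
             INT t = a[i, j];
             a[i, j] := a[i, n + 1 - j]; a[i, n + 1 - j] := t
          OD
       OD
    END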

    Etc. etc. - Above just off the top of my head; there's surely
    much more to say about economy and software development.

    And an important consequence is that bad design and bloat usually
    also make systems less stable and less reliable. And it's often
    hard (or even impossible) to fix such monstrosities.

    <end of rant>

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Sat Aug 23 00:42:01 2025
    From Newsgroup: comp.lang.misc

    On 21/08/2025 03:59, Lawrence D’Oliveiro wrote:
    On Wed, 20 Aug 2025 23:58:58 +0100, Andy Walker wrote:
    On 20/08/2025 01:43, Lawrence D’Oliveiro wrote:
    If you were to run an old OS on new hardware, that would need
    drivers for that new hardware, too.
    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines
    more than its equivalent for a PDP-11?
    Keyboard and mouse -- USB. [...]

    You've given us a list of 20-odd features of modern systems
    that have been developed since 7th Edition Unix, and could no doubt
    think of another 20. What you didn't attempt was to explain why all
    these nice things need to occupy 40M lines of code. That's, give or
    take, 600k pages of code, call it 2000 books. That's, on your
    figures, just the kernel source; specifications [assuming there are
    such!] and documentation no doubt double that, and it's already
    more than normal people can read and understand. There is similar
    bloat in the commands and in the manual entries. It's out of
    control, witness the updates that come in every few days. It's
    fatally easy to say of "sh" or "cc" or "firefox" or ... "Wouldn't
    it be nice if it did X?", and fatally hard to say "It shouldn't
    really be doing X.", as there's always the possibility of someone
    somewhere who might perhaps be using it.

    See also Janis's nearby article.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Kinross
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Sat Aug 23 02:29:54 2025
    From Newsgroup: comp.lang.misc

    On 23.08.2025 01:42, Andy Walker wrote:
    On 21/08/2025 03:59, Lawrence D’Oliveiro wrote:
    [...]

    You've given us a list of 20-odd features of modern systems
    that have been developed since 7th Edition Unix, and could no doubt
    think of another 20. What you didn't attempt was to explain why all
    these nice things need to occupy 40M lines of code. That's, give or
    take, 600k pages of code, call it 2000 books. That's, on your figures,
    just the kernel source; [...]

    That was a point that I, too, found very disturbing; I recall
    the kernel was designed to be small, and the period of stay in
    kernel routines should generally also be short! - And now we have
    millions of lines that are either just idle or used against Unix's
    design and operating principles?

    Meanwhile - I think probably since AIX? - we no longer need to
    compile the drivers into the kernel (as formerly with SunOS, for
    example). But does that really mean that all the drivers now bloat
    the kernel [as external modules] as well? - Sounds horrible.

    But I'm no expert on this topic, so interested to be enlightened
    if the situation is really as bad as Lawrence sketched it.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.misc on Sat Aug 23 02:36:45 2025
    From Newsgroup: comp.lang.misc

    On Sat, 23 Aug 2025 00:42:01 +0100, Andy Walker wrote:

    What you didn't attempt was to explain why all these nice things
    need to occupy 40M lines of code.

    Go look at the code itself.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Tue Aug 26 18:42:05 2025
    From Newsgroup: comp.lang.misc

    On 2025-08-19, Janis Papanagnou wrote:
    On 19.08.2025 01:45, Andy Walker wrote:

    If you're worried about 15%, that will be more than compensated
    for by your next computer!

    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an
    update here it's not my priority. ;-)

    Ah. I thought I was bad, keeping computers 10 years or so! I got
    a new one a couple of years back, and the difference in speed and
    storage was just ridiculous.

    Reading http://en.wikipedia.org/wiki/E-waste , I'm inclined to
    think that keeping computers for a decade might be not so bad
    a thing after all.

    Well, used software tools (and their updates) required me to at
    least upgrade memory! (That's actually one point that annoys me
    in "modern" software development; rarely anyone seems to care
    economizing resource requirements.)

    I doubt it's so much lack of care as it is simply being not a
    priority. Still, all the more reason to direct attention to
    the cases where such care /is/ given. Thankfully, the problem
    /is/ a known one (say, [1]), and IME, there still /are/ lean
    programs to choose from.

    By the by, I've been looking for "simple" self-hosting compilers
    recently - something with source that a semi-dedicated person
    can read through in reasonable time. What I've found so far is
    Pygmy Forth [2] (naturally, I guess) and the T3X family of
    languages [3]. Are there perhaps other such compilers worthy of
    mention?

    [1] http://spectrum.ieee.org/lean-software-development
    [2] http://pygmy.utoh.org/pygmyforth.html
    [3] http://t3x.org/t3x/

    I'll also try to address here specific points raised elsewhere
    in this thread, particularly news:1087qgv$14ret$1@dont-email.me .

    First, the 4e7 lines of Linux code is somewhat unfair a measure.
    On my system, less than 5% of individual modules built from the
    Linux source are loaded right now:

    $ lsmod | wc -l
    175
    $ find /lib/modules/6.1.0-37-amd64/ -xdev -type f -name \*.ko | wc -l
    4024
    $
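
    (For the record, the ratio from the two counts above, computed the
    same way:

    $ echo 'scale = 3; 175 / 4024' | bc
    .043

    i.e. roughly 4%, hence "less than 5%".)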

    That value would of course vary from system to system, but I'd
    think it's safe to say that in at least 90% of all deployments,
    less than 10% of Linux code will be loaded at any given time.

    For those who are looking for a system with more "comprehensible"
    sources, I would recommend NetBSD. And if anything, I personally
    find its list of supported platforms, http://netbsd.org/ports/ ,
    fairly impressive.

    Don't get me wrong: NetBSD won't fit for every use case Linux-based
    systems cover - the complexity of the Linux kernel isn't there
    for nothing - but just in case you /can/ live with a "limited"
    OS (say, one that doesn't support Docker), thanks to NetBSD, you
    /do/ have that option.

    With regards to applications, while binary distributions tend to
    opt to have the most "fully functional" build of any given
    package - from whence come lots of dependencies - a source-based
    one allows /you/ to choose what you need. And pkgsrc for NetBSD
    is such a distribution. Gentoo is a Linux-based distribution
    along the same lines.

    As to websites and JS libraries, for the past 25 years I've been
    using as my primary one a browser, Lynx, that never had support
    for JS, and likely never will have. IME, an /awful lot/ of
    websites are usable and useful entirely without JS. For those
    interested, I've recently made several comments in defense of
    "JS-free" web and web browsers, such as [4, 5, 6].

    [4] news:ID351XcOrll9pkb7@violet.siamics.net
    [5] news:6brTAD5tWnddeHXd@violet.siamics.net
    [6] news:ii6tqUtTe0Vi-Fnh@violet.siamics.net
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.misc on Wed Aug 27 00:28:20 2025
    From Newsgroup: comp.lang.misc

    On Tue, 26 Aug 2025 18:42:05 +0000, Ivan Shmakov wrote:

    First, the 4e7 lines of Linux code is somewhat unfair a measure. On
    my system, less than 5% of individual modules built from the Linux
    source are loaded right now ...

    Greg Kroah-Hartman is reported to have said that a typical
    workstation/server Linux kernel build only needs about 1½ million
    lines of source code. A more complex build, like an Android kernel,
    needs something like 3× that.

    For those who are looking for a system with more "comprehensible"
    sources, I would recommend NetBSD. And if anything, I personally
    find its list of supported platforms, http://netbsd.org/ports/ ,
    fairly impressive.

    Bit misleading, though. Note it counts “Xen” (a Linux-based
    hypervisor) as a separate platform. Also, look at all the different
    68k, MIPS, ARM and PowerPC-based machines that are individually
    listed.

    Linux counts platform support based solely on CPU architecture (not
    surprising, since it’s just a kernel, not the userland as well). It
    covers all those CPUs listed (except maybe VAX), and a bunch of others
    as well.

    Each directory here
    <https://github.com/torvalds/linux/tree/master/arch>
    represents a separate supported architecture. Note extras like
    arm64, riscv, loongarch and s390.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Wed Aug 27 07:53:00 2025
    From Newsgroup: comp.lang.misc

    On 26.08.2025 20:42, Ivan Shmakov wrote:
    On 2025-08-19, Janis Papanagnou wrote:

    Well, used software tools (and their updates) required me to at
    least upgrade memory! (That's actually one point that annoys me
    in "modern" software development; rarely anyone seems to care
    economizing resource requirements.)

    I doubt it's so much lack of care as it is simply being not a
    priority. [...]

    But those depend on each other. - Quoting from your link below...

    Wirth:
    Time pressure is probably the foremost reason behind the emergence
    of bulky software. The time pressure that designers endure discourages
    careful planning. It also discourages improving acceptable solutions;
    instead, it encourages quickly conceived software additions and
    corrections. Time pressure gradually corrupts an engineer's standard
    of quality and perfection. It has a detrimental effect on people as
    well as products.

    And, to be yet more clear; I also think it's [widely] just ignorance!
    (The mere existence of the article you quoted below is per se already
    a strong sign for that. But also other experiences, like talks with
    many IT-folks of various age and background reinforced my opinion on
    that.)

    [...]

    [1] http://spectrum.ieee.org/lean-software-development

    Thanks for the link; worth reading.

    (And also learned BTW that I missed that N. Wirth deceased last year.)

    [...]

    As to websites and JS libraries, for the past 25 years I've been
    using as my primary one a browser, Lynx, that never had support
    for JS, and likely never will have. IME, an /awful lot/ of
    websites are usable and useful entirely without JS. [...]

    Lynx. This is great. - I recall that in the 1990s I had a student
    in my team who had to provide some HTML information; I asked him to
    test his data in two common browsers (back in those days I think
    Netscape and the MS IE), and (for obvious reasons) also with Lynx!

    (Privately I had later written HTML/JS to create applications (with
    dynamic content) since otherwise that would not have been possible;
    I had no server of my own with application servers available. But I
    didn't use any frameworks or external libraries. Already bad enough.)

    But even with browsers and JS activated, with my old Firefox I
    cannot use or read many websites nowadays, because they demand
    newer browser versions.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sat Aug 30 19:10:42 2025
    From Newsgroup: comp.lang.misc

    On 2025-08-27, Lawrence D'Oliveiro wrote:
    On Tue, 26 Aug 2025 18:42:05 +0000, Ivan Shmakov wrote:

    For those who are looking for a system with more "comprehensible"
    sources, I would recommend NetBSD. And if anything, I personally
    find its list of supported platforms, http://netbsd.org/ports/ ,
    fairly impressive.

    Bit misleading, though. Note it counts "Xen" (a Linux-based
    hypervisor) as a separate platform.

    What do you mean by "Linux-based"? NetBSD supports running
    as both Xen domU (unprivileged) /and/ dom0 (privileged.)
    AIUI, it's possible to run Linux domUs when NetBSD is dom0,
    and vice versa.

    Also, look at all the different 68k, MIPS, ARM and PowerPC-based
    machines that are individually listed.

    Linux counts platform support based solely on CPU architecture (not
    surprising, since it's just a kernel, not the userland as well).

    There's a "Ports by CPU architecture" section down the NetBSD
    ports page; it lists 16 individual CPU architectures.

    My point was that GNU/Linux distributions typically support
    less, and so do other BSDs (IIRC.) For instance, [1] lists 8:

    Architectures: all amd64 arm64 armel armhf i386 ppc64el riscv64 s390x

    [1] http://cdn-fastly.deb.debian.org/debian/dists/trixie/InRelease

    (And I'm pretty certain I saw ones that only support one or two.)

    The way I see it, it's the /kernel/ that takes the most
    effort to port to a new platform - as it's where the support
    for peripherals lives, including platform-specific ones.

    No idea why Debian doesn't support other architectures supported
    by Linux. I'm going to guess it's lack of volunteers.

    It covers all those CPUs listed (except maybe VAX), and a bunch of
    others as well.

    Each directory here
    <https://github.com/torvalds/linux/tree/master/arch>
    represents a separate supported architecture. Note extras like arm64,

    Getting actual data out of Microsoft Github pages is a bit more
    involved than I'd prefer. Still:

    $ curl -- https://github.com/torvalds/linux/tree/master/arch \
    | pcregrep -ao1 -- "\"path\":\"arch/([/0-9a-z_.-]+)\"" | nl -ba
    1 alpha
    2 arc
    3 arm
    4 arm64
    5 csky
    6 hexagon
    7 loongarch
    8 m68k
    9 microblaze
    10 mips
    11 nios2
    12 openrisc
    13 parisc
    14 powerpc
    15 riscv
    16 s390
    17 sh
    18 sparc
    19 um
    20 x86
    21 xtensa
    22 .gitignore
    $

    So, yes, I guess it does beat NetBSD in that respect. But I
    still think that if you're interested in understanding how your
    OS works - at the source code level - you'd be better with
    NetBSD than with a Linux-based OS. (Not /quite/ a priority
    for me personally, TBH, but I appreciate it being an option.)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sat Aug 30 19:39:49 2025
    From Newsgroup: comp.lang.misc

    On 2025-08-27, Janis Papanagnou wrote:
    On 26.08.2025 20:42, Ivan Shmakov wrote:
    On 2025-08-19, Janis Papanagnou wrote:

    Well, used software tools (and their updates) required me to at
    least upgrade memory! (That's actually one point that annoys me
    in "modern" software development; rarely anyone seems to care
    economizing resource requirements.)

    I doubt it's so much lack of care as it is simply being not a
    priority.

    But those depend on each other.

    I guess I should've expressed myself better: engineering is
    all about trade-offs, and there're often other things to care
    about once the program runs "fast enough" on the hardware that
    the customers are /assumed/ to have.

    Not to mention that taking too long to 'polish' your product,
    you risk ending up lagging behind your competitors.

    I could only hope that environmental concerns will eventually
    make resource usage a more important issue for code writers.

    And, to be yet more clear; I also think it's [widely] just ignorance!
    (The mere existence of the article you quoted below is per se already
    a strong sign for that. But also other experiences, like talks with
    many IT-folks of various age and background reinforced my opinion on
    that.)

    I suppose it might be the case of people involved with computers
    professionally not seeing much point in acquiring the skills that
    aren't in demand by employers.

    (Privately I had later written HTML/JS to create applications (with
    dynamic content) since otherwise that would not have been possible;
    I had no server of my own with application servers available. But I
    didn't use any frameworks or external libraries. Already bad enough.)

    I can't say I'm a big fan of JS or ES, yet there're certainly
    languages I like even less. FWIW, I prefer to stick to ES 5.1,
    http://262.ecma-international.org/5.1/ specifically, as then I
    can use http://duktape.org/ or http://mujs.com/ to test the
    bulk of my code, rather than running it in Chromium or Firefox.
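
    (By way of illustration - and assuming the duk and mujs command-line
    interpreters, both of which provide a print() built-in - testing a
    strictly ES 5.1 file outside any browser can look like this; the
    file name and its contents are made up for the example:

    $ cat gcd.js
    function gcd(a, b) {
        while (b !== 0) { var t = b; b = a % b; a = t; }
        return a;
    }
    print(gcd(12, 18));
    $ duk gcd.js
    6
    $ mujs gcd.js
    6

    No DOM and no post-5.1 syntax; just the language core.)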

    Like I've mentioned elsewhere, it's not the language, or even
    its use to create web applications, that irks me: it's that
    often enough when I want some data, what I get instead is some
    application that I /must/ use to access that same data - in a
    manner predefined by its developer (say, one record at a time),
    and not particularly conducive to the task /I/ have at hand.

    As to frameworks, my /impression/ is that it makes sense to
    familiarize oneself with them only when there're actually
    /lots/ of similar programming problems that need to be solved,
    particularly when writing code as part of a team. As it never
    was the case for me personally, I've never seen much sense in
    investing effort into learning any framework, JS or otherwise.

    But even with browsers and JS activated, with my old Firefox I
    cannot use or read many websites nowadays, because they demand
    newer browser versions.

    "Demand" how?

    Server-side code can of course make arbitrary decisions based
    on the User-Agent: string, but that's a poor practice in general,
    and typically such restrictions can be bypassed by reading the
    archived copy of the webpage from http://web.archive.org/ .

    That also works when it's not a browser but a /TLS/ version issue.

    Alternatively, associated JS code can test browser's capabilities,
    but that can be circumvented by disabling JS altogether.

    Also to mention is that many websites these days rely on some
    sort of "DDoS protection service" external to them. (I run my
    own servers, so I /do/ know some of the pain of mitigating heaps
    of junk requests originating from botnets - mainly compromised
    "wireless routers" I believe.)

    Such services employ captchas, and those in turn require JS,
    and might require recent browser versions as well. If that's
    the case, http://web.archive.org/ might or might not help.

    Other than using Wayback Machine, I believe there's no easy
    solution to this problem: should the operator disable "protection
    service," they risk the site becoming bogged down by junk requests
    and no longer available to legitimate users. Conversely, by
    employing such a service, they inconvenience their users, for
    even those who /do/ run modern browsers, will presumably have
    better things to do than solving captchas.

    So, personally, when encountering such behavior, I try Wayback
    Machine first. If it doesn't get me a version of the webpage
    as recent as I need, I consider contacting the website operator
    so that they might check and possibly tweak their "protection"
    settings to allow archival. If they can't, or won't, fix it,
    well, as mTCP HTTPSERV.EXE puts it, "countless more exist."
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.misc on Sat Aug 30 22:43:12 2025
    From Newsgroup: comp.lang.misc

    On Sat, 30 Aug 2025 19:10:42 +0000, Ivan Shmakov wrote:

    On Wed, 27 Aug 2025 00:28:20 -0000 (UTC), Lawrence D’Oliveiro
    wrote:

    Bit misleading, though. Note it counts "Xen" (a Linux-based
    hypervisor) as a separate platform.

    What do you mean by "Linux-based"?

    I mean that Xen runs an actual Linux kernel in the hypervisor, and
    supports regular Linux distros as guests -- they don’t need to be
    modified to specially support Xen, or any other hypervisor. It’s
    Linux above, and Linux below -- Linux at every layer.

    NetBSD supports running as both Xen domU (unprivileged) /and/ dom0
    (privileged.)

    Linux doesn’t count these as separate platforms. They’re just
    considered a standard part of regular platform support.

    Linux counts platform support based solely on CPU architecture (not
    surprising, since it's just a kernel, not the userland as well).

    There's a "Ports by CPU architecture" section down the NetBSD
    ports page; it lists 16 individual CPU architectures.

    That’s not as many as Linux.

    My point was that GNU/Linux distributions typically support
    less ...

    But that’s an issue with the various distributions, not with the Linux
    kernel itself. In the BSD world, there is no separate of “kernel” from “distribution”. That makes things less flexible than the Linux world.

    For example, while base Debian itself may support something under a
    dozen architectures, there are offshoots of Debian that cover others.

    The way I see it, it's the /kernel/ that takes the most
    effort to port to a new platform - as it's where the support
    for peripherals lives, including platform-specific ones.

    Given that the Linux kernel supports more of these different platforms
    than any BSD can manage, I think you’re just reinforcing my point.

    But I still think that if you're interested in understanding how
    your OS works - at the source code level - you'd be better with
    NetBSD than with a Linux-based OS.

    Linux separates the kernel from the userland. That makes things
    simpler than running everything together, as the BSDs do.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.misc on Sat Aug 30 22:45:27 2025
    From Newsgroup: comp.lang.misc

    On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:

    Not to mention that taking too long to 'polish' your product, you
    risk ending up lagging behind your competitors.

    I would say, the open-source world is a counterexample to this.
    Look at how long it took GNU and Linux to end up dominating the
    entire computing landscape -- it didn’t happen overnight.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Sun Aug 31 08:32:20 2025
    From Newsgroup: comp.lang.misc

    On 30.08.2025 21:39, Ivan Shmakov wrote:
    On 2025-08-27, Janis Papanagnou wrote:
    [...]

    But even with browsers and JS activated, with my old Firefox I
    cannot use or read many websites nowadays, because they demand
    newer browser versions.

    "Demand" how?

    All sorts of "defunct": from annoying notes telling me to upgrade
    my browser (while I can still see content and operate the page),
    to that same message with the dynamic content completely
    dysfunctional, and/or pages mis-formatted (to the degree of being
    unusable), or no text information displayed at all. And so on.

    If there's an issue with pages/services like reddit or sourceforge,
    or (in the past; they seem to have fixed something) stackoverflow,
    or free services (news, weather, tv-program, etc.), I can just skip
    and ignore those services. But there are also commercial pages
    (like banks, tax/gov, or free mail providers, etc.) that I have or
    need to use; then I must switch to another system or I'm out of
    luck. (Luckily I have systems available to choose from.)

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Sun Aug 31 08:34:59 2025
    From Newsgroup: comp.lang.misc

    On 30.08.2025 21:39, Ivan Shmakov wrote:
    [...]

    Not to mention that taking too long to 'polish' your product,
    you risk ending up lagging behind your competitors.

    It's not "polishing" that I was speaking about.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sun Aug 31 13:35:51 2025
    From Newsgroup: comp.lang.misc

    On 2025-08-30, Lawrence D'Oliveiro wrote:
    On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:

    Not to mention that taking too long to 'polish' your product, you
    risk ending up lagging behind your competitors.

    I would say, the open-source world is a counterexample to this.
    Look at how long it took GNU and Linux to end up dominating the
    entire computing landscape -- it didn't happen overnight.

    Indeed, one good thing about free software is that when one
    company closes down, another can pick up and go on from there.
    Such as how Netscape is no more, yet the legacy of its Navigator
    still survives in Firefox.

    I'm not sure how much of a consolation it is to the people
    who owned the companies that failed, though.

    Also, what indication is there that GNU is 'dominating' the
    landscape? Sure, Linux is everywhere (such as in now ubiquitous
    Android phones and TVs and whatnot), but I don't quite see GNU
    being adopted as widely.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.misc on Sun Aug 31 22:40:49 2025
    From Newsgroup: comp.lang.misc

    On Sun, 31 Aug 2025 13:35:51 +0000, Ivan Shmakov wrote:

    I'm not sure how much of a consolation it is to the people who owned
    the companies that failed, though.

    Companies fail all the time, open source or no open source. When a
    company that has developed a piece of proprietary software fails, then
    the software dies with the company. With open source, the software
    stands a chance of living on.

    E.g. Loki was an early attempt at developing games on Linux. They
    failed. But the SDL framework that they created for low-latency
    multimedia graphics lives on.

    Also, what indication is there that GNU is 'dominating' the
    landscape? Sure, Linux is everywhere (such as in now ubiquitous
    Android phones and TVs and whatnot), but I don't quite see GNU
    being adopted as widely.

    Look at all the markets that Linux has taken away from Microsoft --
    Windows Media Center, Windows Home Server -- all defunct. Windows
    Server too is in slow decline. And now handheld gaming with the Steam
    Deck. You will find GNU there.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Thu Sep 4 18:25:44 2025
    From Newsgroup: comp.lang.misc

    On 2025-08-31, Lawrence D'Oliveiro wrote:
    On Sun, 31 Aug 2025 13:35:51 +0000, Ivan Shmakov wrote:

    Indeed, one good thing about free software is that when one company
    closes down, another can pick up and go on from there. Such as how
    Netscape is no more, yet the legacy of its Navigator still survives
    in Firefox.

    I'm not sure how much of a consolation it is to the people who owned
    the companies that failed, though.

    Companies fail all the time, open source or no open source. When
    a company that has developed a piece of proprietary software fails,
    then the software dies with the company. With open source, the
    software stands a chance of living on.

    It sounds like we're in agreement on this point, no?

    My other point, however, is this: when you do run a business,
    shouldn't you be more concerned that said /business/ succeeds,
    rather than the products it delivers, whatever they might be?

    And from where I stand, releasing software targeting tomorrow's
    computers is, as a rule, a better business practice than
    targeting decade-old ones.

    E.g. Loki was an early attempt at developing games on Linux. They
    failed. But the SDL framework that they created for low-latency
    multimedia graphics lives on.

    Yes, that too. (Though I like my Firefox example better.)

    Also, what indication is there that GNU is 'dominating' the
    landscape? Sure, Linux is everywhere (such as in now ubiquitous
    Android phones and TVs and whatnot), but I don't quite see GNU
    being adopted as widely.

    Look at all the markets that Linux has taken away from Microsoft --
    Windows Media Center, Windows Home Server -- all defunct. Windows
    Server too is in slow decline.

    I've had very little interest in Microsoft since the 1990s.
    About the only Microsoft-related news I've since paid attention
    to were that Microsoft contributed a fair chunk of code to Linux;
    that Microsoft acquired Github; and that Windows now has WSL.

    I have no idea what Windows Media Center is (or was), and what
    alternatives to it the GNU project, http://gnu.org/ , now offers.

    (I'd guess VLC and FFmpeg might be such alternatives, but last
    I've checked, they were not part of GNU.)

    And now handheld gaming with the Steam Deck. You will find GNU there.

    So I've read http://en.wikipedia.org/wiki/Steam_Deck and found
    out that the device runs SteamOS which, as of version 3.0, is
    based on Arch Linux, thus presumably retaining a fair chunk of
    GNU within. (Bash, Coreutils, Libc, to guess a few packages.
    I doubt it includes GNU Emacs or GNU Chess, though.)

    That said, I'm not sure Steam Deck can /itself/ be said to
    dominate the market:

    Market research firm International Data Corporation estimated that
    between 3.7 and 4 million Steam Decks had been sold by the third
    anniversary of the device in February 2025.

    How big a market share of handheld gaming computers is 4e6?

    Also, I gather it's not a direct competitor to Android and
    Android-based mobile computers, right?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Thu Sep 4 18:50:29 2025
    From Newsgroup: comp.lang.misc

    On 2025-08-30, Lawrence D'Oliveiro wrote:
    On Sat, 30 Aug 2025 19:10:42 +0000, Ivan Shmakov wrote:
    On Wed, 27 Aug 2025 00:28:20 -0000 (UTC), Lawrence D'Oliveiro wrote:

    I think it makes sense to restate the point I'm arguing for in
    this subthread (see news:y2C3FavstjxdDZ-_@violet.siamics.net ):

    For those who are looking for a system with more "comprehensible"
    sources, I would recommend NetBSD. And if anything, I personally
    find its list of supported platforms, http://netbsd.org/ports/ ,
    fairly impressive.

    Don't get me wrong: NetBSD won't fit for every use case Linux-based
    systems cover - the complexity of the Linux kernel isn't there
    for nothing - but just in case you /can/ live with a "limited"
    OS (say, one that doesn't support Docker), thanks to NetBSD, you
    /do/ have that option.

    To be clear, it wasn't my intent to compare NetBSD as
    a whole to the Linux kernel (as that's just silly.) Neither
    was it my intent to compare the NetBSD kernel to Linux, as:
    a. I don't have any use cases for a /kernel/ outside of an OS
    distribution; and b. I mostly use Linux-based systems myself.
    (And hence arguing that way would be a case of failing to
    practice what I preach.)

    All the same, should I ever encounter a problem that requires
    kernel-mode coding, NetBSD would be at the top of my list of
    options - because of code readability.

    Bit misleading, though. Note it counts "Xen" (a Linux-based
    hypervisor) as a separate platform.

    What do you mean by "Linux-based"?

    I mean that Xen runs an actual Linux kernel in the hypervisor,
    and supports regular Linux distros as guests -- they don't need to
    be modified to specially support Xen, or any other hypervisor.

    It's been well over a decade since I've last used Xen, so I'm
    going more by http://en.wikipedia.org/wiki/Xen than experience.

    But just to be sure, I've checked the sources [1], and while
    I do see portions of Linux code reused here and there - such as,
    say, [2] below - I'd hesitate to call Xen at large "Linux-based."
    If anything, there's way more of Linux in the GNU Mach microkernel
    (consider the linux/src/drivers subtree in [3], for instance)
    than in the Xen hypervisor. (And I don't recall GNU Mach being
    called "Linux-based.")

    To note is that there seems to be no mention in CHANGELOG.md of
    anything suggesting that Xen uses Linux as its upstream project.

    * common/notifier.c
    *
    * Routines to manage notifier chains for passing status changes to any
    * interested routines.
    *
    * Original code from Linux kernel 2.6.27 (Alan Cox [...])

    [1] http://downloads.xenproject.org/release/xen/4.20.1/xen-4.20.1.tar.gz
    [2] xen-4.20.1/xen/common/notifier.c
    [3] git://git.sv.gnu.org/hurd/gnumach.git rev. 8d456cd9e417 from 2025-09-03

    It's Linux above, and Linux below -- Linux at every layer.

    Sure, if you want to run it that way. You can also run Xen
    with NetBSD at every layer, or, apparently, OpenSolaris.

    A GNU/Linux distribution AFAICR needs to provide a Xen-capable
    kernel for it to be usable as dom0 - as well as Xen user-mode
    tools. Niche / lightweight distributions might omit such support.
    (There're a few build-time options related to Xen in Linux.)

    Also, Xen supports both hardware-assisted virtualization /and/
    paravirtualization. On x86-32, the former is not available, so
    the Linux build /must/ support paravirtualization in order to be
    usable with Xen, dom0 or domU.

    When hardware-assisted virtualization /is/ available, the things
    certainly get easier: pretty much anything that can run under,
    say, Qemu, can be run under Xen HVM. The performance may suffer,
    though, should your domU system happen to lack virtio drivers and
    thus need to resort to using emulated peripherals instead.

    NetBSD supports running as both Xen domU (unprivileged) /and/
    dom0 (privileged.)

    Linux doesn't count these as separate platforms. They're just
    considered a standard part of regular platform support.

    Which means one needs to be careful when comparing architecture
    support between different kernels.

    My point was that GNU/Linux distributions typically support less

    But that's an issue with the various distributions, not with the
    Linux kernel itself.

    True. That, however, doesn't mean you can use Linux /by itself/
    outside of a distribution. (Unless, of course, you're looking
    for a kernel for a new distribution, but I doubt that undermines
    my point.) So architecture support /you/ will have /will/ be
    limited by the distribution you choose, regardless of what Linux
    itself might offer.

    In the BSD world, there is no separation of "kernel" from
    "distribution". That makes things less flexible than the Linux
    world.

    That's debatable. Debian for a while had a kFreeBSD port (with
    a variant of the FreeBSD kernel separate from FreeBSD proper), and
    from what I recall, it was discontinued due to lack of volunteers,
    not lack of flexibility.

    For example, while base Debian itself may support something under a
    dozen architectures, there are offshoots of Debian that cover others.

    How is this observation helpful?

    Suppose someone asks, "what OS would you recommend for running
    on loongarch?" and the best answer we here on Usenet can give
    is along the lines of "NetBSD won't work, but there're dozens
    of Debian offshoots around - be sure to check them all, as one
    might happen to support it." Really?

    If you know of Debian offshoots that support architectures
    that Debian itself doesn't, could you please list them here?
    Or, if there's already a list somewhere, share a pointer.

    The way I see it, it's the /kernel/ that takes the most effort
    to port to a new platform - as it's where the support for
    peripherals lives, including platform-specific ones.

    Given that the Linux kernel supports more of these different
    platforms than any BSD can manage, I think you're just reinforcing
    my point.

    Certainly - if your point is that way more effort went into
    Linux over the past two to three decades than in any of BSDs.
    (And perhaps into /all/ of free BSDs combined, I'd guess.)

    But I still think that if you're interested in understanding how
    your OS works - at the source code level - you'd be better with
    NetBSD than with a Linux-based OS.

    Linux separates the kernel from the userland. That makes things
    simpler than running everything together, as the BSDs do.

    I fail to see why developing the kernel and an OS based on it
    as subprojects to one "umbrella" project would in any way hinder
    code readability.

    Just in case it somehow matters, there're separate tarballs under
    rsync://rsync.netbsd.org/NetBSD/NetBSD-10.1/source/sets/ for the
    kernel (syssrc.tgz) and userland (src, gnusrc, sharesrc, xsrc.)
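
    For instance - a sketch, assuming rsync and a POSIX shell - one
    could fetch and unpack just the kernel sources:

    rsync -av rsync://rsync.netbsd.org/NetBSD/NetBSD-10.1/source/sets/syssrc.tgz .
    tar -xzf syssrc.tgz   # unpacks into usr/src/sys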

    That said, I've last tinkered with Linux around the days of
    2.0.36 (IIRC), and I don't recall reading any Linux sources
    newer than version 4. If you have experience patching newer
    Linux kernels, and in particular if you find the code easy to
    follow - please share your observations.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.misc on Fri Sep 5 00:03:17 2025
    From Newsgroup: comp.lang.misc

    On Thu, 04 Sep 2025 18:50:29 +0000, Ivan Shmakov wrote:

    - I'd hesitate to call Xen at large "Linux-based." If anything,
    there's way more of Linux in the GNU Mach microkernel (consider
    the linux/src/drivers subtree in [3], for instance) than in the
    Xen hypervisor.

    Call it what you like, the fact is, Linux supports it without having
    to list it as a separate platform.

    You could argue equally well that NetBSD is not “BSD” any more,
    because it has diverged too far from the original BSD kernel.

    That, however, doesn't mean you can use Linux /by itself/ outside of
    a distribution.

    How do you think distributions get created in the first place?

    <https://linuxfromscratch.org/>

    Suppose someone asks, "what OS would you recommend for running on
    loongarch?" and the best answer we here on Usenet can give is

    <https://distrowatch.com/search.php?ostype=All&category=All&origin=All&basedon=All&notbasedon=None&desktop=All&architecture=loongarch64&package=All&rolling=All&isosize=All&netinstall=All&language=All&defaultinit=All&status=Active#simpleresults>
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.lang.misc on Fri Sep 5 12:02:09 2025
    From Newsgroup: comp.lang.misc

    In article <KKx97WvtTkldzxgb@violet.siamics.net>,
    Ivan Shmakov <ivan@siamics.netREMOVE.invalid> wrote:
    On 2025-08-30, Lawrence D'Oliveiro wrote:

    FYI, you are arguing with a known troll. It is unlikely to turn
    into a productive exercise, so caveat emptor.

    [snip]
    I mean that Xen runs an actual Linux kernel in the hypervisor,
    and supports regular Linux distros as guests -- they don't need to
    be modified to specially support Xen, or any other hypervisor.

    It's been well over a decade since I've last used Xen, so I'm
    going more by http://en.wikipedia.org/wiki/Xen than experience.

    But just to be sure, I've checked the sources [1], and while
    I do see portions of Linux code reused here and there - such as,
    say, [2] below - I'd hesitate to call Xen at large "Linux-based."
    If anything, there's way more of Linux in the GNU Mach microkernel
    (consider the linux/src/drivers subtree in [3], for instance)
    than in the Xen hypervisor. (And I don't recall GNU Mach being
    called "Linux-based.")

    Of note is that there seems to be no mention in CHANGELOG.md of
    anything suggesting that Xen uses Linux as its upstream project.

    This is basically correct. Xen falls into the broad category
    known as "Type-1" hypervisors: meaning that Xen runs directly
    on the bare metal, outside of the context of an existing
    OS (versus, say, KVM, Bhyve, etc). It is true that Xen was
    centered on Linux initially, and pulled in a lot of the code; I
    think it's fair to say that early versions largely started with
    (and in many ways were based on) the Linux kernel, but it has
    clearly gone its own way.

    In the Type-1 model, you still need some software component that
    lets you do stuff like configure virtual machines, provide
    device models to guests, and so on. It's common to provide a
    specially blessed VM instance (Dom0 in Xen; a "root VM" in
    Hyper-V) to do this.

    * common/notifier.c
    *
    * Routines to manage notifier chains for passing status changes to any
    * interested routines.
    *
    * Original code from Linux kernel 2.6.27 (Alan Cox [...])

    [1] http://downloads.xenproject.org/release/xen/4.20.1/xen-4.20.1.tar.gz
    [2] xen-4.20.1/xen/common/notifier.c
    [3] git://git.sv.gnu.org/hurd/gnumach.git rev. 8d456cd9e417 from 2025-09-03

    It's Linux above, and Linux below -- Linux at every layer.

    Sure, if you want to run it that way. You can also run Xen
    with NetBSD at every layer, or, apparently, OpenSolaris.

    A GNU/Linux distribution AFAICR needs to provide a Xen-capable
    kernel for it to be usable as dom0 - as well as the Xen user-mode
    tools. Niche / lightweight distributions might omit such support.
    (There're a few build-time options related to Xen in Linux.)

    Also, Xen supports both hardware-assisted virtualization /and/
    paravirtualization. On x86-32, the former is not available, so
    the Linux build /must/ support paravirtualization in order to be
    usable with Xen, dom0 or domU.

    When hardware-assisted virtualization /is/ available, things
    certainly get easier: pretty much anything that can run under,
    say, Qemu, can be run under Xen HVM. The performance may suffer,
    though, should your domU system happen to lack virtio drivers and
    thus need to resort to emulated peripherals instead.

    Yes. With Xen, you've got the Xen VMM running on the bare metal
    and then any OS capable of supporting Xen's Dom0 requirements
    running as Dom0, and essentially any OS running as a DomU guest.

    So to summarize, you've got a hypervisor that descended from an
    old version of Linux, but was heavily modified, running a gaggle
    of other systems, none of which necessarily needs to be Linux.

    NetBSD supports running as both Xen domU (unprivileged) /and/
    dom0 (privileged.)

    Linux doesn't count these as separate platforms. They're just
    considered a standard part of regular platform support.

    Which means one needs to be careful when comparing architecture
    support between different kernels.

    I gathered your point was that neither Dom0 nor DomU _had_ to be
    Linux, and that's true. Note that the troll likes to subtly
    change the point that he's arguing.

    My point was that GNU/Linux distributions typically support less

    But that's an issue with the various distributions, not with the
    Linux kernel itself.

    True. That, however, doesn't mean you can use Linux /by itself/
    outside of a distribution. (Unless, of course, you're looking
    for a kernel for a new distribution, but I doubt that undermines
    my point.) So architecture support /you/ will have /will/ be
    limited by the distribution you choose, regardless of what Linux
    itself might offer.

    In the BSD world, there is no separation of "kernel" from
    "distribution". That makes things less flexible than the Linux world.

    That's debatable. Debian for a while had a kFreeBSD port (with
    a variant of the FreeBSD kernel separate from FreeBSD proper), and
    from what I recall, it was discontinued due to lack of volunteers,
    not lack of flexibility.

    For example, while base Debian itself may support something under a
    dozen architectures, there are offshoots of Debian that cover others.

    How is this observation helpful?

    Suppose someone asks, "what OS would you recommend for running
    on loongarch?" and the best answer we here on Usenet can give
    is along the lines of "NetBSD won't work, but there're dozens
    of Debian offshoots around - be sure to check them all, as one
    might happen to support it." Really?

    If you know of Debian offshoots that support architectures
    that Debian itself doesn't, could you please list them here?
    Or, if there's already a list somewhere, share a pointer.

    The way I see it, it's the /kernel/ that takes the most effort
    to port to a new platform - as it's where the support for peripherals
    lives, including platform-specific ones.

    Given that the Linux kernel supports more of these different
    platforms than any BSD can manage, I think you're just reinforcing
    my point.

    Certainly - if your point is that way more effort went into
    Linux over the past two to three decades than in any of BSDs.
    (And perhaps into /all/ of free BSDs combined, I'd guess.)

    But I still think that if you're interested in understanding how
    your OS works - at the source code level - you'd be better with
    NetBSD than with a Linux-based OS.

    Linux separates the kernel from the userland. That makes things
    simpler than running everything together, as the BSDs do.

    I fail to see why developing the kernel and an OS based on it
    as subprojects to one "umbrella" project would in any way hinder
    code readability.

    Just in case it somehow matters, there're separate tarballs under
    rsync://rsync.netbsd.org/NetBSD/NetBSD-10.1/source/sets/ for the
    kernel (syssrc.tgz) and userland (src, gnusrc, sharesrc, xsrc.)

    That said, I've last tinkered with Linux around the days of
    2.0.36 (IIRC), and I don't recall reading any Linux sources
    newer than version 4. If you have experience patching newer
    Linux kernels, and in particular if you find the code easy to
    follow - please share your observations.

    He doesn't.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sun Sep 7 15:55:43 2025
    From Newsgroup: comp.lang.misc

    On 2025-09-05, Lawrence D'Oliveiro wrote:
    On Thu, 04 Sep 2025 18:50:29 +0000, Ivan Shmakov wrote:

    I'd hesitate to call Xen at large "Linux-based." If anything,
    there's way more of Linux in the GNU Mach microkernel (consider
    the linux/src/drivers subtree in [3], for instance) than in the
    Xen hypervisor.

    Call it what you like, the fact is, Linux supports it without
    having to list it as a separate platform.

    I can't say I can quite grasp the importance of doing it one
    way or another, but well, I've been loosely working on my own
    "Debian offshoot" over the past few years, and should it ever
    come to a release, I'll be sure to test it with Xen and then
    list "Xen on amd64" alongside "amd64 on bare metal" in its list
    of supported platforms - NetBSD-style.

    You could argue equally well that NetBSD is not "BSD" any more,
    because it has diverged too far from the original BSD kernel.

    That's a good point, actually: as originally defined, "BSD" meant
    "Berkeley Software Distribution," and given that little (if any)
    work on NetBSD is (AIUI) currently being done at UCB, I'd say
    that yes, NetBSD is not "BSD" - and likely never has been.

    (Similarly, I find claims that "Debian is a free Unix" to be
    misleading: "GNU's Not Unix" is right on the cover, after all.)

    NetBSD is a descendant of 386BSD (as, AIUI, are all current
    "BSDs"), itself a descendant of 4.3BSD, so there /is/ a kind
    of continuity. (And likely bits of actual 4.3BSD code within
    NetBSD sources.) No idea if it's of much importance to anyone
    but OS historians.

    That, however, doesn't mean you can use Linux /by itself/ outside
    of a distribution. (Unless, of course, you're looking for a kernel
    for a new distribution, but I doubt that undermines my point.)

    How do you think distributions get created in the first place?

    <https://linuxfromscratch.org/>

    Like I've said, I doubt that undermines my point: you /still/
    choose among distributions rather than kernels, even if one
    (or more) of those distributions is of your own creation.

    When, two decades ago, I put together my own "distribution"
    (I never actually /distributed/ it, hence the quotes), the
    only CPU architecture it supported was "i386" - as that was the
    only one I had at hand and could test on. How many others
    Linux supported at the time, I had no idea - nor any reason
    to look into it: they were simply out of my reach - and thus
    out of my concern - at the time.

    The aforementioned Debian derivative I'm working on currently
    only supports amd64, though I hope to add riscv64 and (or) arm64
    support eventually. From where I stand, adding support for
    anything beyond that (and especially architectures that aren't
    in Debian, and for which I thus cannot reuse Debian packages)
    is too much effort for too uncertain a gain.

    (Reportedly "i386" support is important for running Steam on
    Debian, but guess what? I use GOG.)

    Sure, it'd be nice to have a Debian derivative to run on my i586
    boxes (not supported after Jessie), but that's lots of effort,
    too - and then there's NetBSD that's already "486DX or better."

    With the above in mind, well, I'm willing to bet that if you
    ever put together your own distribution, it won't support every
    architecture Linux itself claims to support, either.

    Suppose someone asks, "what OS would you recommend for running on
    loongarch?" and the best answer we here on Usenet can give is

    <https://distrowatch.com/search.php?ostype=All&[...]>

    ... Or, in other words: "don't ask for recommendations here on
    Usenet, ask a website instead." What is Usenet even here for,
    then? Rants?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sun Sep 7 16:30:42 2025
    From Newsgroup: comp.lang.misc

    On 2025-09-05, Dan Cross wrote:
    In article <KKx97WvtTkldzxgb@violet.siamics.net>, Ivan Shmakov wrote:
    On 2025-08-30, Lawrence D'Oliveiro wrote:

    FYI, you are arguing with a known troll. It is unlikely to turn
    into a productive exercise, so caveat emptor.

    I'm inclined to define productive public discussion as one
    that's informative and interesting to read. Given that I've
    actually ended up learning a couple of things along the way,
    I'd say it /was/ productive, in a way.

    With no "views" and "likes" counts here on Usenet, I have no way
    of measuring how interesting the subthread was to others (being
    ill-suited for the group as it is), so I kinda hope for the best.

    When hardware-assisted virtualization /is/ available, things
    certainly get easier: pretty much anything that can run under,
    say, Qemu, can be run under Xen HVM. The performance may suffer,
    though, should your domU system happen to lack virtio drivers and
    thus need to resort to emulated peripherals instead.

    Yes. With Xen, you've got the Xen VMM running on the bare metal and
    then any OS capable of supporting Xen's Dom0 requirements running as
    Dom0, and essentially any OS running as a DomU guest.

    So to summarize, you've got a hypervisor that descended from an
    old version of Linux, but was heavily modified, running a gaggle
    of other systems, none of which necessarily needs to be Linux.

    Glad to know I wasn't too off the mark in this case.

    Linux doesn't count these as separate platforms. They're just
    considered a standard part of regular platform support.

    Which means one needs to be careful when comparing architecture
    support between different kernels.

    I gathered your point was that neither Dom0 nor DomU _had_ to be
    Linux, and that's true.

    More to the point here is that my opponent took offense at
    http://netbsd.org/ports/ listing "Xen" as one of the supported
    "platforms" - apparently for the sole reason that Linux does
    it differently.

    Note that the troll likes to subtly change the point that he's
    arguing.

    Well, in a properly set up public debate, there ought to be
    a prior agreement on who's arguing what. This is Usenet, however,
    so we all figure out what points we do and do not want to argue
    along the way. I doubt I can rightfully blame a person for not
    sharing my preferences about what to argue about - especially
    as I don't pay them for having an argument with me.

    That said, I've last tinkered with Linux around the days of 2.0.36
    (IIRC), and I don't recall reading any Linux sources newer than
    version 4. If you have experience patching newer Linux kernels, and
    in particular if you find the code easy to follow - please share.

    He doesn't.

    That's what I suspect as well. I'd still be delighted to be
    proven wrong.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.misc on Sun Sep 7 21:17:02 2025
    From Newsgroup: comp.lang.misc

    On Sun, 07 Sep 2025 15:55:43 +0000, Ivan Shmakov wrote:

    On Fri, 5 Sep 2025 00:03:17 -0000 (UTC), Lawrence D’Oliveiro wrote:

    On Thu, 04 Sep 2025 18:50:29 +0000, Ivan Shmakov wrote:

    I'd hesitate to call Xen at large "Linux-based."

    Call it what you like, the fact is, Linux supports it without
    having to list it as a separate platform.

    I can't say I can quite grasp the importance of doing it one way or
    another ...

    The fact that NetBSD has to list it as a separate platform to get its
    count up.

    Also:

    [09:10 xcp-ng-126 ~]# uname -a
    Linux xcp-ng-126 4.19.0+1 #1 SMP Tue May 6 15:24:43 CEST 2025 x86_64 x86_64 x86_64 GNU/Linux

    ... you /still/ choose among distributions rather than kernels ...

    The fact that all Linux distros share essentially the same kernel
    makes it much easier to interoperate and also to switch between
    them: “distro-hopping” is a common activity in the Linux world;
    it’s not something that can be encouraged in the BSD world.

    ... Or, in other words: "don't ask for recommendations here on
    Usenet, ask a website instead."

    You asked for information, clearly in the expectation that it would
    not be forthcoming. I gave you the information, now you find another
    reason to complain?
    --- Synchronet 3.21a-Linux NewsLink 1.2