• Re: Windows, was Arm to run within IBM z

    From jgd@jgd@cix.co.uk (John Dallman) to comp.arch on Wed Apr 15 12:23:40 2026
    From Newsgroup: comp.arch

    In article <10rao08$jv9$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Systems that desire to have one copy of a library installed in
    one place suffer from the obvious problem of needing to expose
    the union of functionality expected of all programs that depend
    on them, either directly or indirectly.

    There are at least three related, but different, use cases for shared
    libraries on Unix-like systems:

    1) Breaking up the operating system functionality into manageable-size
    chunks of sensibly related material. This is the case that shared
    library system designers are usually dealing with. In some cases,
    notably macOS, there are unspoken assumptions that this is _always_
    the case, and shared libraries have strings embedded in them that are
    copied into programs built against them. The purpose of these strings
    is to tell the loader where to find them. This works fine for
    libraries that have a canonical location in the filesystem, but see
    below for ones that don't.

    2) Breaking up applications into related chunks of functionality. This
    can be helpful for organisation of code, for producing commercial
    applications with subsets of "full" functionality, and so on. The
    important point here is that the shared libraries are used by a single
    application, or a suite of related applications. On macOS, this is
    tackled by some special values in the embedded strings that tell the
    loader to look in a filesystem location relative to the application.
    That avoids the need for an application to have a fixed installation
    directory, which would otherwise prevent having two different versions
    installed.

    3) The fairly rare case of shared libraries intended as "software
    components," to be used in many different applications, but which are
    _not_ extensions to the operating system. Different applications may
    (and likely do) have different versions of such libraries. On macOS,
    this requires the application developer to modify the embedded strings
    in the shared library before linking against it. Apple provide a tool
    for doing this, but it's fairly obscure. The stuff I work on is in
    this category.

    John
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.arch on Wed Apr 15 12:23:40 2026
    From Newsgroup: comp.arch

    In article <10r5eor$b6q$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    Containers were designed to make it easy to run lots of different
    applications on the same cloud servers. The companies that offer cloud
    services don't want to solve such problems in the applications - it's
    hard to blame them - and the SaaS companies have learned that their
    customers want cheap, not good, software.

    John
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Wed Apr 15 13:51:41 2026
    From Newsgroup: comp.arch

    In article <memo.20260415122259.25212A@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10rao08$jv9$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Systems that desire to have one copy of a library installed in
    one place suffer from the obvious problem of needing to expose
    the union of functionality expected of all programs that depend
    on them, either directly or indirectly.

    There are at least three related, but different, use cases for shared
    libraries on Unix-like systems:

    I don't know what this has to do with what I wrote that you
    quoted, but I'm afraid it's mostly incorrect.

    The usual use cases for shared objects are a) sharing of text
    and r/o data between processes linked against the same image,
    b) providing fixes to libraries without having to relink
    programs, and c) providing extensibility via the ability to
    dynamically load shared objects into the address space of a
    running process, find e.g. callable functions in those objects
    by looking up entries in their symbol tables, and accessing
    functionality provided by those objects by calling them (using
    a well-defined ABI).

    The last bit is a particularly powerful thing, and is how a
    language interpreter can so easily take advantage of advanced
    functionality that is not built-in or written in that language.
    C.f. Python and its use in the data processing ecosystem, which
    relies heavily on FFI calls to numerical analysis libraries
    written in FORTRAN and C.
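    That FFI pattern can be sketched with Python's `ctypes`, which wraps
    the dlopen/dlsym mechanism Dan describes. This is a minimal sketch,
    assuming a Unix system; the library name is looked up with
    `ctypes.util.find_library`, falling back to the common Linux soname
    `libm.so.6`:

```python
import ctypes
import ctypes.util

# Locate and load the shared math library at runtime (dlopen under
# the hood); the exact filename is platform-dependent.
libm_name = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_name)

# Look up the `cos` symbol in the loaded object and describe its ABI
# so ctypes can marshal arguments and the return value correctly.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # → 1.0, computed by the C library through the FFI
```

    NumPy and friends do essentially this, at scale, against compiled
    numerical kernels.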

    1) Breaking up the operating system functionality into manageable-size
    chunks of sensibly related material. This is the case that shared
    library system designers are usually dealing with.

    This doesn't make much sense to me. It is not at all clear what
    you mean by, "operating system functionality." What do you mean
    by "manageable-size chunks of sensibly related material"?

    These statements are so vague I can only speculate what they may
    mean. You write that this is, "the case that shared library
    system designers are usually dealing with", and I confess that I
    have no idea what that might possibly mean.

    Do you mean something like `libc`? If so, bear in mind that
    Unix-style systems have provided libraries for the use of user
    space code since approximately the beginning; they did so with
    static libraries (".a" files) for at least a decade before Unix
    grew support for shared libraries in anything resembling the
    modern sense.

    In some cases, notably macOS,
    there are unspoken assumptions that this is _always_ the case, and
    shared libraries have strings embedded in them that are copied into
    programs built against them. The purpose of these strings is to tell
    the loader where to find them. This works fine for libraries that have
    a canonical location in the filesystem, but see below for ones that
    don't.

    Pretty much every dynamic executable has a list of libraries
    that must be loaded by the runtime linker; macOS isn't
    particularly notable in this regard, though they chose to stick
    with Mach-O and .dylibs as the file format, and not something
    like ELF; that makes them a bit of an outlier, but comparing
    `otool -L` on my Mac workstation to `ldd` on Linux doesn't show
    much that is conceptually different:

    ```
    mac% otool -L /bin/ls
    /bin/ls:
    /usr/lib/libutil.dylib (compatibility version 1.0.0, current version 1.0.0)
    /usr/lib/libncurses.5.4.dylib (compatibility version 5.4.0, current version 5.4.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1356.0.0)
    mac%
    ```

    (Aside: I presume the dependency on `ncurses` is so that `ls`
    can figure out how wide the terminal window is for columnar
    output; possibly for handling colors or something as well. I
    dunno; I try to turn as much of that off as I can.)

    ```
    linux% ldd /bin/ls
    linux-vdso.so.1 (0x00007fc9cc29c000)
    libcap.so.2 => /usr/lib/libcap.so.2 (0x00007fc9cc23f000)
    libc.so.6 => /usr/lib/libc.so.6 (0x00007fc9cc04e000)
    /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fc9cc29e000)
    linux%
    ```

    Of course, there are _some_ variations: Linux has the vDSO,
    and macOS has libSystem (it's probably worth noting that the
    system call interface on macOS is opaque, versus Linux where the
    KBI is rigidly defined; perhaps this is what you meant when you
    wrote, "unspoken assumptions that this is _always_ the case"
    above).

    2) Breaking up applications into related chunks of functionality. This
    can be helpful for organisation of code, for producing commercial
    applications with subsets of "full" functionality, and so on. The
    important point here is that the shared libraries are used by a single
    application, or a suite of related applications. On macOS, this is
    tackled by some special values in the embedded strings that tell the
    loader to look in a filesystem location relative to the application.
    That avoids the need for an application to have a fixed installation
    directory, which implies that you can't have two different versions
    installed.

    Huh. That's an interesting idea, but really it's something that
    is facilitated by having shared objects, not something that was
    (or is) a primary motivating factor for shared libraries in the
    first place.

    3) The fairly rare case of shared libraries intended as "software
    components," to be used in many different applications, but which are
    _not_ extensions to the operating system. Different applications may
    (and likely do) have different versions of such libraries. On macOS,
    this requires the application developer to modify the embedded strings
    in the shared library before linking against it. Apple provide a tool
    for doing this, but it's fairly obscure. The stuff I work on is in
    this category.

    Actually, I'd posit that this is very common.

    Everything that I installed from homebrew that cares about, say,
    the PNG library is picking up a single shared object:
    /opt/homebrew/opt/libpng/lib/libpng16.16.dylib.

    Again, the motivation was primarily sharing; any program using
    the same shared objects running concurrently shares the text,
    read-only data, and metadata of those objects with every other
    such program, as opposed to each statically linked binary
    getting its own copy. It does so at the expense of some
    additional bookkeeping in the operating system, but the overhead
    is not that bad.
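    That sharing is directly observable on a Linux system: a mapped
    shared object's text shows up as a file-backed, read-only executable
    mapping, and the kernel shares those pages between every process
    using the same image. A Linux-specific sketch (it reads
    `/proc/self/maps`, so it won't work elsewhere):

```python
import ctypes
import ctypes.util

# Ensure at least one shared object (libc) is mapped into this process.
try:
    libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")
except OSError:
    pass  # libc may be named differently; the interpreter maps one anyway

# Scan our own memory map for file-backed executable text mappings of
# shared objects; these pages are what the kernel shares between every
# process that uses the same library image.
shared_text = []
with open("/proc/self/maps") as maps:
    for line in maps:
        parts = line.split()  # address perms offset dev inode pathname
        if len(parts) >= 6 and parts[1] == "r-xp" and ".so" in parts[5]:
            shared_text.append(parts[5])

for path in sorted(set(shared_text)):
    print(path)
```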

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Wed Apr 15 13:53:31 2026
    From Newsgroup: comp.arch

    In article <memo.20260415122259.25212B@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10r5eor$b6q$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    Containers were designed to make it easy to run lots of different
    applications on the same cloud servers. The companies that offer cloud
    services don't want to solve such problems in the applications - it's
    hard to blame them - and the SaaS companies have learned that their
    customers want cheap, not good, software.

    The companies that are offering such things on "cloud servers"
    are not allowing their customers to run those applications on
    the bare metal.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.arch on Wed Apr 15 14:43:31 2026
    From Newsgroup: comp.arch

    cross@spitfire.i.gajendra.net (Dan Cross) writes:
    In article <memo.20260415122259.25212A@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10rao08$jv9$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Systems that desire to have one copy of a library installed in
    one place suffer from the obvious problem of needing to expose
    the union of functionality expected of all programs that depend
    on them, either directly or indirectly.

    There are at least three related, but different, use cases for shared
    libraries on Unix-like systems:

    I don't know what this has to do with what I wrote that you
    quoted, but I'm afraid it's mostly incorrect.

    The usual use cases for shared objects are a) sharing of text
    and r/o data between processes linked against the same image,
    b) providing fixes to libraries without having to relink
    programs, and c) providing extensibility via the ability to
    dynamically load shared objects into the address space of a
    running process, find e.g. callable functions in those objects
    by looking up entries in their symbol tables, and accessing
    functionality provided by those objects by calling them (using
    a well-defined ABI).

    The last bit is a particularly powerful thing, and is how a
    language interpreter can so easily take advantage of advanced
    functionality that is not built-in or written in that language.
    C.f. Python and its use in the data processing ecosystem, which
    relies heavily on FFI calls to numerical analysis libraries
    written in FORTRAN and C.

    This describes a major use of shared objects. The SoC simulator
    that I work on models a number of discrete SoCs and dynamically loads
    various shared objects based on the model of SoC being simulated.

    <snip>


    2) Breaking up applications into related chunks of functionality. This
    can be helpful for organisation of code, for producing commercial
    applications with subsets of "full" functionality, and so on. The
    important point here is that the shared libraries are used by a single
    application, or a suite of related applications. On macOS, this is
    tackled by some special values in the embedded strings that tell the
    loader to look in a filesystem location relative to the application.
    That avoids the need for an application to have a fixed installation
    directory, which implies that you can't have two different versions
    installed.

    Huh. That's an interesting idea, but really it's something that
    is facilitated by having shared objects, not something that was
    (or is) a primary motivating factor for shared libraries in the
    first place.

    Indeed, and Unix-like systems have LD_LIBRARY_PATH, which supports
    a mechanism for telling the loader where to look for shared objects,
    and they also support binding the path into the ELF executable at
    link time.
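    A toy model of that search order, as a hedged sketch rather than the
    real ld.so algorithm (the actual rules also involve DT_RPATH versus
    DT_RUNPATH precedence and the ld.so cache; `find_shared_object` and
    `libdemo.so.1` are invented names for illustration):

```python
import os
import tempfile

def find_shared_object(name, rpath_dirs, default_dirs):
    """Roughly mimic a runtime linker's search: embedded rpath entries
    first, then LD_LIBRARY_PATH, then the system default directories."""
    ld_path = os.environ.get("LD_LIBRARY_PATH", "")
    search = list(rpath_dirs)
    search += [d for d in ld_path.split(":") if d]
    search += list(default_dirs)
    for d in search:
        candidate = os.path.join(d, name)
        if os.path.exists(candidate):
            return candidate
    return None

# Demonstrate with a fake library file in a temporary "rpath" directory.
with tempfile.TemporaryDirectory() as tmp:
    open(os.path.join(tmp, "libdemo.so.1"), "w").close()
    found = find_shared_object("libdemo.so.1", [tmp], ["/usr/lib"])
    print(found == os.path.join(tmp, "libdemo.so.1"))  # → True
```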


    3) The fairly rare case of shared libraries intended as "software
    components," to be used in many different applications, but which are
    _not_ extensions to the operating system. Different applications may
    (and likely do) have different versions of such libraries. On macOS,
    this requires the application developer to modify the embedded strings
    in the shared library before linking against it. Apple provide a tool
    for doing this, but it's fairly obscure. The stuff I work on is in
    this category.

    Actually, I'd posit that this is very common.

    Indeed. Things like libxml and libxslt, for example. Or openssl.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.arch on Wed Apr 15 12:55:43 2026
    From Newsgroup: comp.arch

    On 4/11/2026 4:37 PM, Lawrence D’Oliveiro wrote:
    On Sat, 11 Apr 2026 14:49:21 -0700, Chris M. Thomasson wrote:

    On 4/11/2026 2:37 PM, Lawrence D’Oliveiro wrote:

    Shared library versioning is for dealing with backward-incompatible
    changes to the ABI, not (necessarily) the API.

    For example, some struct that is passed to a library call might
    have some more fields added to it. The setup call sets those fields
    to sensible defaults, so existing client code can be recompiled
    against the new interface, linked against the new library version,
    and continue to work unchanged.

    Windows on Alpha, Windows on MIPS, etc...

    Windows doesn’t do shared library versioning though, does it?

    DLL HELL? A fun one.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.arch on Wed Apr 15 22:28:43 2026
    From Newsgroup: comp.arch

    On Wed, 15 Apr 2026 12:22 +0100 (BST), John Dallman wrote:

    Containers were designed to make it easy to run lots of different
    applications on the same cloud servers.

    Linux systems have been serving multirole operations for years. Time
    was that the built-in multiuser capabilities were sufficient to keep
    apps isolated from each other.

    The problem seems to be with increasing use of proprietary apps. The
    developers of those seem to be accustomed to thinking that they have
    full control over the machine their software is running on.

    So virtualization became popular as a way of dealing with this, by
    isolating each problem app into what it thinks is its own machine.

    Full virtualization has a certain cost in terms of resources used. For
    example, each VM needs its own OS installation. Containerization is a
    much lighter-weight solution, that shares the OS kernel among multiple
    userlands, while still keeping the latter isolated from each other.

    This requires special features of the OS kernel that are only
    available in Linux. It also means that the apps must be developed for
    Linux. This seems to have happened anyway; Windows Server has very
    little presence in the cloud, even in Microsoft’s cloud.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.arch on Thu Apr 16 00:49:52 2026
    From Newsgroup: comp.arch

    On Wed, 15 Apr 2026 12:22 +0100 (BST), John Dallman wrote:

    There are at least three related, but different, use cases for
    shared libraries on Unix-like systems:

    The first two of which you mention have to do with “libraries”, not
    necessarily “shared libraries”.

    3) The fairly rare case of shared libraries intended as "software
    components," to be used in many different applications, but which
    are _not_ extensions to the operating system.

    I don’t know why you think these are “rare”. Look at this collection <http://ftp.nz.debian.org/debian/pool/main/> of the standard packages
    for Debian, just for example: notice how half of the names of the
    subdirectory groupings begin with “lib”? Those are all shared
    libraries.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From John Levine@johnl@taugh.com to comp.arch on Thu Apr 16 01:24:26 2026
    From Newsgroup: comp.arch

    According to Scott Lurndal <slp53@pacbell.net>:
    3) The fairly rare case of shared libraries intended as "software
    components," to be used in many different applications, but which are
    _not_ extensions to the operating system. Different applications may
    (and likely do) have different versions of such libraries. On macOS,
    this requires the application developer to modify the embedded strings
    in the shared library before linking against it. Apple provide a tool
    for doing this, but it's fairly obscure. The stuff I work on is in
    this category.

    Actually, I'd posit that this is very common.

    Indeed. Things like libxml and libxslt, for example. Or openssl.

    Or for that matter, libc.so which is linked into every Linux or BSD
    program. In my experience that is by far the major use of shared
    libraries.

    An important but distant second is loadable code modules like the ones
    many python libraries use.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Paul Clayton@paaronclayton@gmail.com to comp.arch on Sat Apr 18 22:35:31 2026
    From Newsgroup: comp.arch

    On 4/8/26 3:25 PM, John Levine wrote:
    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    Maybe he's thinking of what's known as DLL Hell, which resulted
    from Microsoft's neglecting to put a version number in DLL filenames. ...
    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    That's kind of sad.

    ELF library names include multi-part version numbers, with the plan
    being that if the API changes you bump the major number, while if it's
    a bug fix or otherwise compatible you just bump the minor number.

    One issue with this kind of version numbering is that adding a
    feature is an API change but does not necessarily prevent
    compatibility. This expression of versioning allows users
    dependent on newer features to request the newer major version,
    but it does not allow a newer major version to be used when it
    is compatible (which might be desirable to reduce library bloat
    or just not force installation of a different version).

    While one could include a backward compatibility range (e.g., 14-12,
    or 14b12, to indicate implementation of version 14 features with
    compatibility back ("b") to version 12), this would still not
    maximize the compatibility support (which may well not be a desirable
    goal).

    Bug compatibility is also an issue as are leaky abstractions.
    For ISAs, if early implementations use a stronger memory model
    (or appear to do so), software could be written to exploit that
    and be incompatible with future hardware.

    One can also have performance (or non-time resource consumption)
    incompatibilities, where the abstract architecture is the same
    but some uses may be significantly impacted. E.g., an ISA might
    initially not define a preferred register zeroing idiom and
    software might choose any single cycle instruction that zeros a
    register, but later implementations might only optimize one of
    those options.

    (A library function that moved from an O(n log n) implementation to
    an O(log n) implementation with a larger constant factor might
    "break" software that calls the function N times rather than
    once with an N-times larger list/array but be broadly considered
    an improvement.)

    Then you symlink the name
    with the minor version number back to the name with just the major
    number so the lists of imported library names just have the major
    number. This seems to work OK on FreeBSD. What happens on linux?

    I guess using minor version is tricky as it depends on the
    specific implementation. The fourth bug fix version of Burzatt's
    implementation of the 14th version of the libgood API would be
    different than the fourth of Blinfoo's implementation. (There
    could, in theory, be cooperation in minor versioning when a
    common bug is discovered, but that seems unlikely.)
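    The major/minor symlink scheme John describes can be modeled
    concretely. This is a hedged sketch with invented file names
    (borrowing "libgood" from above); the real mechanism on Linux also
    involves the DT_SONAME field and ldconfig-maintained symlinks:

```python
import os
import tempfile

# Model: the real file carries the full version, the soname symlink
# carries only the major number, and executables record the soname.
with tempfile.TemporaryDirectory() as libdir:
    real = os.path.join(libdir, "libgood.so.14.4")   # major 14, minor 4
    soname = os.path.join(libdir, "libgood.so.14")   # what programs ask for
    open(real, "w").close()
    os.symlink("libgood.so.14.4", soname)

    # A program importing "libgood.so.14" transparently gets whatever
    # compatible minor version the symlink currently points at.
    resolved = os.path.realpath(soname)
    print(os.path.basename(resolved))  # → libgood.so.14.4
```

    Shipping a bug-fix minor release is then just dropping in the new
    file and repointing the symlink; no program needs to be relinked.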

    I guess the major version represents the abstract interface and
    the minor version represents a "release number". With ISAs,
    multiple resource-use targets (power-performance-area tradeoffs,
    e.g.) might be pursued with the same abstract interface (the
    original goal of Architecture), so specifying the interface
    version number is not sufficient.

    It may also be desirable for different implementations to
    deprecate features (possibly where a feature technically works
    but has poor resource use traits) or extend the lifetime of a
    feature. E.g., an ISA version might specify that an opcode no
    longer needs to be supported and can either generate an illegal
    opcode exception or provide the legacy behavior while the
    implementation could still claim to be of that version of the
    ISA.

    [I feel my mind is getting fuzzy, sleep is calling.]
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.arch on Sun Apr 19 03:04:57 2026
    From Newsgroup: comp.arch

    On Sat, 18 Apr 2026 22:35:31 -0400, Paul Clayton wrote:

    One issue with this kind of version numbering is that adding a
    feature is an API change but does not necessarily prevent
    compatibility.

    This is why we distinguish between “API” changes and “ABI” changes.

    API changes which do not impact compatibility with existing binaries
    built against that shared library do not require a library version
    bump. ABI changes, which affect the generated code in some way (e.g.
    increasing the size of fixed structures), require some kind of
    versioning, either at the entire library level, or possibly just at
    the individual symbol level.
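    The struct-growth case can be illustrated with `ctypes` structures.
    This is a sketch with invented field names (`verbosity`,
    `timeout_ms`), showing the common convention of a leading `size`
    field plus a setup call that fills defaults for newly added fields:

```python
import ctypes

class OptionsV1(ctypes.Structure):
    # Original ABI: two fields.
    _fields_ = [("size", ctypes.c_uint32),
                ("verbosity", ctypes.c_uint32)]

class OptionsV2(ctypes.Structure):
    # Later ABI: a field was appended, so the struct grew --
    # an ABI change even though the old field offsets are unchanged.
    _fields_ = [("size", ctypes.c_uint32),
                ("verbosity", ctypes.c_uint32),
                ("timeout_ms", ctypes.c_uint32)]

def options_init(opts):
    """The 'setup call': zero the struct, record its size, and fill
    in sensible defaults for any newly added fields."""
    ctypes.memset(ctypes.byref(opts), 0, ctypes.sizeof(opts))
    opts.size = ctypes.sizeof(opts)  # lets the library tell versions apart
    if hasattr(opts, "timeout_ms"):
        opts.timeout_ms = 5000       # default for the new field

opts = OptionsV2()
options_init(opts)
print(ctypes.sizeof(OptionsV1), ctypes.sizeof(OptionsV2))  # → 8 12
```

    A library receiving such a struct can check `size` to decide which
    fields the caller knows about, which is how old binaries keep
    working against the new version.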
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From George Neuner@gneuner2@comcast.net to comp.arch on Sun Apr 19 00:34:23 2026
    From Newsgroup: comp.arch

    On Sat, 18 Apr 2026 22:35:31 -0400, Paul Clayton
    <paaronclayton@gmail.com> wrote:

    On 4/8/26 3:25 PM, John Levine wrote:
    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    Maybe he's thinking of what's known as DLL Hell, which resulted
    from Microsoft's neglecting to put a version number in DLL filenames. ...
    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    That's kind of sad.

    ELF library names include multi-part version numbers, with the plan being that
    if the API changes you bump the major number, while if it's a bug fix or
    otherwise compatible you just bump the minor number.

    One issue with this kind of version numbering is that adding a
    feature is an API change but does not necessarily prevent
    compatibility. This expression of versioning allows users
    dependent on newer features to request the newer major version,
    but it does not allow a newer major version to be used when it
    is compatible (which might be desirable to reduce library bloat
    or just not force installation of a different version).

    While one could include a backward compatibility range (e.g.,
    14-12 or 14b to indicate implementation of version 14 features
    with compatibility back to version (b) to version 12), this
    would still not maximize the compatibility support (which may
    well not be a desirable goal).

    And does not address the possibility of gaps in the range of
    compatible versions.

    I have encountered this problem just a few times, but it has been a
    nightmare each time: a new version changes some behavior in a way
    that is incompatible with your application. Some versions later after
    many complaints, the original behavior is restored [with new behavior
    kept but moved to a new API]. Now there is a gap in the list of
    compatible versions ... e.g., you can use versions C D E ... H but F
    and G won't work.


    Bug compatibility is also an issue as are leaky abstractions.

    Yup!


    For ISAs, if early implementations use a stronger memory model
    (or appear to do so), software could be written to exploit that
    and be incompatible with future hardware.

    One can also have performance (or non-time resource consumption)
    incompatibilities, where the abstract architecture is the same
    but some uses may be significantly impacted. E.g., an ISA might
    initially not define a preferred register zeroing idiom and
    software might choose any single cycle instruction that zeros a
    register, but later implementations might only optimize one of
    those options.

    Seen this too working with DSPs.



    (A library function that moved from an O(n log n) implementation to
    an O(log n) implementation with a larger constant factor might
    "break" software that calls the function N times rather than
    once with an N-times larger list/array but be broadly considered
    an improvement.)

    Then you symlink the name
    with the minor version number back to the name with just the major number so the
    lists of imported library names just have the major number. This seems to work
    OK on FreeBSD. What happens on linux?

    I guess using minor version is tricky as it depends on the
    specific implementation. The fourth bug fix version of Burzatt's
    implementation of the 14th version of the libgood API would be
    different than the fourth of Blinfoo's implementation. (There
    could, in theory, be cooperation in minor versioning when a
    common bug is discovered, but that seems unlikely.)

    I guess the major version represents the abstract interface and
    the minor version represents a "release number". With ISAs,
    multiple resource-use targets (power-performance-area tradeoffs,
    e.g.) might be pursued with the same abstract interface (the
    original goal of Architecture), so specifying the interface
    version number is not sufficient.

    It may also be desirable for different implementations to
    deprecate features (possibly where a feature technically works
    but has poor resource use traits) or extend the lifetime of a
    feature. E.g., an ISA version might specify that an opcode no
    longer needs to be supported and can either generate an illegal
    opcode exception or provide the legacy behavior while the
    implementation could still claim to be of that version of the
    ISA.

    [I feel my mind is getting fuzzy, sleep is calling.]
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Sun Apr 19 06:03:49 2026
    From Newsgroup: comp.arch

    In article <10s1f1m$3mi35$1@dont-email.me>,
    Paul Clayton <paaronclayton@gmail.com> wrote:
    On 4/8/26 3:25 PM, John Levine wrote:
    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    Maybe he's thinking of what's known as DLL Hell, which resulted
    from Microsoft's neglecting to put a version number in DLL filenames. ...
    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    That's kind of sad.

    ELF library names include multi-part version numbers, with the plan being that
    if the API changes you bump the major number, while if it's a bug fix or
    otherwise compatible you just bump the minor number.

    One issue with this kind of version numbering is that adding a
    feature is an API change but does not necessarily prevent
    compatibility. This expression of versioning allows users
    dependent on newer features to request the newer major version,
    but it does not allow a newer major version to be used when it
    is compatible (which might be desirable to reduce library bloat
    or just not force installation of a different version).

    This is precisely the type of problem that so-called "semantic
    versioning" (semver) is meant to solve: https://semver.org

    It works decently well, as long as projects actually follow the
    convention.
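    Under semver's rules (MAJOR.MINOR.PATCH: bump MAJOR for incompatible
    changes, MINOR for backward-compatible additions, PATCH for fixes),
    the compatibility check is mechanical. A minimal sketch, ignoring
    semver's special-casing of 0.x versions and pre-release tags:

```python
def parse(version):
    """Split a 'MAJOR.MINOR.PATCH' string into a tuple of integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def satisfies(installed, required):
    """Caret-style semver check (as in npm/cargo '^' ranges):
    same major version, and installed is at least the required one."""
    inst, req = parse(installed), parse(required)
    return inst[0] == req[0] and inst >= req

print(satisfies("1.6.0", "1.4.2"))  # → True  (newer minor, compatible)
print(satisfies("2.0.0", "1.4.2"))  # → False (major bump, incompatible)
```

    This is exactly the "newer major can't substitute for older" policy
    Paul was describing, made explicit by convention rather than solved.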

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2