• Looks like sorting of rational trees needs an existential type (Was:Prolog totally missed the AI Boom)

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 13:57:20 2025
    From Newsgroup: comp.lang.prolog

Looks like sorting of rational trees
needs an existential type, if we go full “logical”.
If I use my old code from 2023, which computes
a finest (*), i.e. non-monster, bisimulation
pre-quotient (**) in prefix order:

factorize(T, _, T) --> {var(T)}, !.
factorize(T, C, V) --> {compound(T), member(S-V, C), S == T}, !.
factorize(T, C, V) --> {compound(T)}, !,
   [V = S],
   {T =.. [F|L]},
   factorize_list(L, [T-V|C], R),
   {S =.. [F|R]}.
factorize(T, _, T) --> [].

% factorize_list//3 was not shown in the post; this is the
% presumed elementwise definition, consistent with the output below:
factorize_list([], _, []) --> [].
factorize_list([X|L], C, [Y|R]) -->
   factorize(X, C, Y),
   factorize_list(L, C, R).

    I see that it always generates new
    intermediate variables:

?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
[_8066=f(_8066)]-_8066

?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
[_10984=f(_10984)]-_10984

It would be swell if it generated an
existential quantifier, something like T^([T = f(T)]-T)
in the above case. Then, using alpha conversion,
different factorization runs would be equal
when they only differ in the introduced
intermediate variables. But Prolog has no alpha
conversion; only λ-Prolog has such things.
So what can we do? How can we produce a
representation that can be used for sorting?
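Lacking alpha conversion, one pragmatic workaround is a sketch of my own (the name canonical/2 is mine; it assumes copy_term/2 and numbervars/3 are cyclic-safe, as in SWI-Prolog): number the intermediate variables in order of first occurrence, so alpha-variant factorizations map to literally identical ground terms that compare/3 and sort/2 can handle:

% canonical(+Rep, -Canon): replace the fresh intermediate variables
% by '$VAR'(0), '$VAR'(1), ... in order of first occurrence.
canonical(Rep, Canon) :-
    copy_term(Rep, Canon),      % keep the caller's variables intact
    numbervars(Canon, 0, _).

Two runs of factorize/5 that differ only in the generated variables then canonicalize to the same ground term, which stands in for the quantified T^([T = f(T)]-T).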

(*) Why finest and not coarsest? Because it uses
non-monster instructions and not monster
instructions.

(**) Why only a pre-quotient? Because a
XXX_with_stack algorithm does not fully
deduplicate the equations; one would
probably need a XXX_with_memo algorithm.

Mild Shock wrote:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

Fig. 1: ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

Well, ILP might have its merits; maybe we should not ask
for a marriage of LLM and Prolog, but of autoencoders and ILP.
But it's tricky: I am still trying to decode the da Vinci code of
things like stacked tensors. Are they related to k-literal clauses?
The paper I referenced is found in this excellent video:

The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 14:03:42 2025
    From Newsgroup: comp.lang.prolog

    Hi,

So do we see a new wave of interest in bisimulation,
especially in computing existential types for all
kinds of things? It seems so; quite a fascinating find:

    BQ-NCO: Bisimulation Quotienting for Efficient
    Neural Combinatorial Optimization
    https://arxiv.org/abs/2301.03313

Has none other than Jean-Marc Andreoli on the
author list. Possibly the same person from the earlier
work on Focusing and Linear Logic, who was associated with
ECRC Munich in the 1990s, but is now working for naverlabs.com.

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 15:18:31 2025
    From Newsgroup: comp.lang.prolog

    Hi,

To do bi-simulation you don't need to wear
this t-shirt; bi-simulation doesn't refer to
any sexual orientation, although you could give
it a game-theoretic touch with Samson and Delilah:

Why Are You Gay T-Shirt
https://www.amazon.co.uk/Why-Are-You-Gay-T-Shirt/dp/B0DJMZFQN8

“bi-simulation equivalent” is sometimes simply
called “bi-similar”. There is a nice paper by Manuel
Carro which gives a larger bisimilarity example:

An Application of Rational Trees in a Logic
Programming Interpreter for a Procedural Language
Manuel Carro - 2004
https://arxiv.org/abs/cs/0403028v1

He makes the case of “goto” in a programming
language, where labels are not needed; simply
rational tree sharing and looping can be used.

The case from Figure 5, threading the code into
a rational tree, uses in its result the simpler
bisimilarity, and doesn’t need much of a more
elaborate bisimulation later.
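A miniature of that threading idea (my own toy example, not Carro's actual Figure 5): a loop that jumps back to its own top is just a term that shares itself, with no label anywhere:

?- Loop = seq(incr(x), seq(print(x), Loop)).
Loop = seq(incr(x), seq(print(x), Loop)).

An interpreter simply recurses on the continuation; the "goto" back to the top is the sharing itself.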

You can use dict lookups (not SWI-Prolog dicts, but
some table operations) to create the
rational tree. But I guess you can also use dicts
(again table operations) for the reverse: find
some factorization of a rational tree and
recreate the labels and jumps.

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 19:10:09 2025
    From Newsgroup: comp.lang.prolog

    Hi,

you do need a theory of terms, and a specific one.

You could pull an Anti-Ackermann: negate the
infinity axiom like Ackermann did here, where
he also kept the regularity axiom:

Die Widerspruchsfreiheit der allgemeinen Mengenlehre
Ackermann, Wilhelm - 1937
https://www.digizeitschriften.de/id/235181684_0114%7Clog23

But instead of Ackermann, you get an Anti(-Foundation)
Ackermann if you drop the regularity axiom. As a result, you
get a lot of exotic sets, among which are also the
famous Quine atoms:

    x = {x}

Funny that in the setting I just described, where
the infinity axiom is negated, i.e.
all sets are finite, x = {x} is a finite object,
contrary to the usual vulgar view. Just like in Prolog,
X = f(X) is in principle a finite object: it has
only one subtree, or as Alain Colmerauer
already postulated:

Definition: a "rational" tree is a tree which
has a finite set of subtrees.
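That finiteness is even observable. A small sketch (predicate names are my own; it assumes ==/2 and member/2 are cyclic-safe, as in SWI-Prolog) that collects the distinct subtrees with a seen-stack:

% subtrees(+Tree, -Set): distinct subtrees of a possibly rational tree.
% The seen-stack with ==-membership stops the traversal on cycles.
subtrees(T, Set) :-
    subtrees_(T, [], Set).

subtrees_(T, Seen, Seen) :-          % already collected
    member(S, Seen), S == T, !.
subtrees_(T, Seen, Out) :-
    compound(T), !,
    T =.. [_|Args],
    subtrees_list(Args, [T|Seen], Out).
subtrees_(T, Seen, [T|Seen]).        % variable or atomic leaf

subtrees_list([], Seen, Seen).
subtrees_list([A|As], Seen, Out) :-
    subtrees_(A, Seen, Mid),
    subtrees_list(As, Mid, Out).

For X = f(X) the query ?- X = f(X), subtrees(X, S), length(S, N). yields N = 1, exactly the one subtree postulated above.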

    Bye

Mild Shock wrote:
Hi,

Ok, I have to correct myself: "Rational Term" was less
common; what was more in use was "Rational Trees", but
they might have also talked about finitely
represented infinite trees. "Rational trees" is itself
probably an echo of Dmitry Mirimanoff's
(1861–1945) “extraordinaire” sets.

    Dmitry Semionovitch Mirimanoff (Russian:
    Дми́трий Семёнович Мирима́нов; 13 September 1861, Pereslavl-Zalessky, Russia – 5 January 1945, Geneva,
    Switzerland) was a member of the Moscow Mathematical
    Society in 1897.[1] And later became a doctor of
    mathematical sciences in 1900, in Geneva, and
taught at the universities of Geneva and Lausanne.
https://en.wikipedia.org/wiki/Dmitry_Mirimanoff

This year we can again celebrate another researcher,
who died in 2023, Peter Aczel, R.I.P., who likewise
made some thoughtful deviations from orthodoxy.

The Peter Aczel Memorial Conference is on 10th September 2025;
the Logic Colloquium will take place at the University
of Manchester (UK) from 11th to 12th September 2025.
https://sites.google.com/view/blc2025/home

    Have Fun!

    Bye

Mild Shock wrote:
Hi,

An example of human intelligence is of course the
name "Rational Term" for cyclic terms, set forth by
Alain Colmerauer, since it plays with "Rational Numbers".

A subset of cyclic terms can indeed represent
rational numbers, and they give a nice
counterexample to transitivity:

    ?- problem(X,Y,Z).
    X = _S1-7-9-1, % where
         _S1 = _S1-6-8-0-6-2-8,
    Y = _S2-1-6-1-5-4-6-1, % where
         _S2 = _S2-0-9-2,
    Z = _S3-3-0, % where
         _S3 = _S3-8-1

    The Fuzzer 2 from 2025 does just what I did in 2023,
    expanding rational numbers into rational terms:

    % fuzzy(-Term)
    fuzzy(X) :-
        random_between(1,100,A),
        random_between(1,100,B),
        random_between(1,10,M),
        fuzzy_chunk(M,A,B,C,X,Y),
        random_between(1,10,L),
        fuzzy_chunk(L,C,B,_,Y,Z),
        Z = Y.

    % fuzzy_chunk(+Integer,+Integer,+Integer,-Integer,+Term,-Term)
    fuzzy_chunk(0, A, _, A, X, X) :- !.
    fuzzy_chunk(N, A, B, C, Y-D, X) :-
        M is N-1,
        D is A // B,
        H is 10*(A - B*D),
        fuzzy_chunk(M, H, B, C, Y, X).

    Bye

Mild Shock wrote:
    Hi,

    Rota often celebrated symbolic, analogical, and
    conceptual understanding over brute calculation.
    This philosophy has come full circle in modern AI:

    - Large Language Models (LLMs) like GPT-4 don't
       just store facts — they recognize patterns,
       make analogies, and generate new structures
       from old ones.

    - Rota’s work in combinatorics, symbolic logic, and
       operator theory is essentially pattern-based
       manipulation — exactly the kind of reasoning LLMs
       aim to emulate at scale.

    Rota had a clear aesthetic. He valued clean formalisms,
    symbolic beauty, and well-defined structures. Rota wanted
    mathematics to mean something — to be not just correct,
    but intelligible and expressive.

    In contrast, modern AI (especially LLMs like GPT) thrives
    on the messy, including: Noisy data , Inconsistency ,
    Uncertainty, Contradiction. AI engineers today are mining
    meaning from noise.

    What counts as “structure” is often just the best
    pragmatic/effective description available at that moment.

    Bye

Mild Shock wrote:
    Hi,

    Will the world build on American Stacks?
    Or is the american dream over?

How it started, 1 month ago:

Nvidia CEO Jensen Huang on AI, Musk and Trump
https://www.youtube.com/watch?v=c-XAL2oYelI

How it's going, now:

    Are you still talking about Jeffrey Epstein?
    https://www.bbc.com/news/articles/cm2m879neljo

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 19:14:13 2025
    From Newsgroup: comp.lang.prolog

    Hi,

But you might then experience the problem
that the usual extensionality axiom of
set theory is not enough: there could
be two Quine atoms y = {y} and x = {x}
with x =/= y.

On the other hand, SWI-Prolog is convinced
that X = [X] and Y = [Y] are the same;
it can even apply member/2 to them, since
it has built-in rational trees:

    /* SWI-Prolog 9.3.25 */
    ?- X = [X], Y = [Y], X == Y.
    X = Y, Y = [Y].

    ?- X = [X], member(X, X).
    X = [X].

But Peter Aczel’s original AFA statement was
only the uniqueness of solutions to graph equations,
whereas today we would state equality as the
existence of a bisimulation relation.
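Spelled out as a toy predicate, equality as the existence of a bisimulation can be sketched like this (my own naive version, not Aczel's formulation and not SWI-Prolog's internal algorithm; it assumes cyclic-safe ==/2 and member/2, as in SWI-Prolog):

% bisimilar(+X, +Y): X and Y are equal as rational trees.
bisimilar(X, Y) :-
    bisim(X, Y, []).

% A pair already assumed equal closes coinductively.
bisim(X, Y, S) :-
    member(A-B, S), A == X, B == Y, !.
bisim(X, Y, S) :-
    compound(X), compound(Y), !,
    X =.. [F|Xs], Y =.. [F|Ys],
    bisim_list(Xs, Ys, [X-Y|S]).
bisim(X, Y, _) :-
    X == Y.                          % atomic leaves and variables

bisim_list([], [], _).
bisim_list([X|Xs], [Y|Ys], S) :-
    bisim(X, Y, S),
    bisim_list(Xs, Ys, S).

With it, ?- X = [X], Y = [Y], bisimilar(X, Y). succeeds: the pair X-Y is assumed once, and every decomposition closes back onto it.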

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 25 21:27:28 2025
    From Newsgroup: comp.lang.prolog

    Hi,

That is extremely embarrassing. I don’t know
what you are bragging about when you wrote
the below. You are wrestling with a ghost!
Maybe you didn’t follow my superb link:

    seemingly interesting paper. In stead
    particular, his final coa[l]gebra theorem

The link behind Hopcroft and Karp (1971) that I
gave, which is a Bisimulation and Equirecursive
Equality hand-out, has a coalgebra example
that I used to derive pairs.pl from:

https://www.cs.cornell.edu/courses/cs6110/2014sp/Lectures/lec35a.pdf

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 25 21:38:59 2025
    From Newsgroup: comp.lang.prolog

    Hi,

My beloved logic professor introduced non-wellfounded
sets in the form of library cards; translated from the German:

For this, imagine a card index on whose cards
other cards of the same card index are listed.
An example of such a card index would be the
following: we have three cards a, b, c; a lists
a and b, b the cards a and c, c the card b:
a = (a, b), b = (a, c), c = (b). Corresponding
to the sets that do not contain themselves as an
element, we ask for the cards that do not list
themselves. The card a is the only one that lists
itself; b and c are thus the cards that do not
list themselves.

He then concludes that the non-wellfounded setting still
has the Russell paradox, and hence also the productive form of it:

Thus in every card index there is a totality G
of cards for which there is no card that lists
exactly those of G. (For finite card indexes this
is fairly self-evident, but we want to consider
infinite card indexes as well.) But this theorem
does of course not rule out that it is always
possible to produce a card that lists exactly
the cards of G and to place it into the card
index. Only we must [reckon] with the possi-

    What is your opinion? Excerpt from:

    **DIE ANTINOMIEN DER MENGENLEHRE**
E. Specker, Dialectica, Vol. 8, No. 3 (15. 9. 1954)
https://www.jstor.org/stable/42964119?seq=7

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 25 23:03:45 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Take this exercise. Exercise 4.1: Draw the tree
represented by the term n1(n2(n4),n3(n5,n6)).
https://book.simply-logical.space/src/text/2_part_ii/4.1.html

Maybe there was a plan that SWISH could draw trees,
and it could be that something was implemented as well.

But I don't see anything dynamic working at the
above web site link. Next challenge for Simply Logical,
in another life: draw a rational tree.
The Prolog system has them:

    /* SWI-Prolog 9.3.26 */
    ?- X = a(Y,_), Y = b(X,_).
    X = a(b(X, _A), _),
    Y = b(X, _A).
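For the finite exercise tree at least, a drawer takes only a few lines (a sketch with my own predicate names; a rational tree would additionally need a seen-check, otherwise draw/2 loops on the cycle):

% draw(+Tree): print a term as an indented tree, two spaces per level.
draw(T) :-
    draw(T, 0).

draw(T, D) :-
    T =.. [F|Args],            % atom or compound; the functor is the node
    tab(D), write(F), nl,
    D2 is D + 2,
    draw_list(Args, D2).

draw_list([], _).
draw_list([A|As], D) :-
    draw(A, D),
    draw_list(As, D).

?- draw(n1(n2(n4),n3(n5,n6))). prints n1 at the left margin, n2 and n3 indented below it, and n4, n5, n6 one level deeper.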

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jul 26 16:10:59 2025
    From Newsgroup: comp.lang.prolog


    I guess there is a bug in preparing flat terms vector

I give you a gold medal 🥇 if you can prove
correct a compare_index/3 that uses this rule. It
was already shown impossible by Matt Carlson.

There are alternative approaches that can achieve
transitivity, but they do not use the step below
inside some compare_index/3.

compare_term_args(I, C, X, Y, A, H) :-
        arg(I, X, K),
        arg(I, Y, L),
        !,
        compare_index(D, K, L, A, H),
        (   D = (=) ->
            I0 is I + 1,
            compare_term_args(I0, C, X, Y, A, H)
        ;   C = D
        ).
compare_term_args(_, =, _, _, _, _).

Maybe there is a grain of salt in invoking the
Axiom of Choice (AC) in some previous posts.
Although the Axiom of Choice is not needed for
finite sets: they have some choice anyway.

BTW: when Peter Aczel writes ZFC-, he
means ZFC without AC, right? But he doesn’t
show any compare/3.

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jul 26 16:17:42 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Did the old school logicians waste time
on compare/3? I guess not:

Ernst Specker, my beloved professor, and
Dana Scott made only a partial order. A
partial order might lack transitivity
of (<'):

    "Scott's model construction is in fact
    closely related to Specker's but there
    is a subtle difference in the notion of
    tree that they use. In fact neither of
    them formulate their notion of tree in
    terms of graphs but rather in terms of
    what it will be convenient here to
    call tree-partial-orderings."

    See here:

NON-WELL-FOUNDED SETS
Peter Aczel - 1988
https://les-mathematiques.net/vanilla/uploads/editor/fh/v4pi6qyxfbel.pdf

There is also the notion of co-well-
foundedness, something like Noetherian but
upside down, i.e. certain ascending
chains stabilize.

    Bye

    Mild Shock schrieb:

    I guess there is a bug in preparing flat terms vector

    I'll give you a gold medal 🥇 if you can prove
    correct a compare_index/3 that uses this rule. It
    was already shown impossible by Matt Carlson.

    There are alternative approaches that can reach
    transitivity, but do not use the below step
    inside some compare_index/3.

    compare_term_args(I, C, X, Y, A, H) :-
            arg(I, X, K),
            arg(I, Y, L),
            !,
            compare_index(D, K, L, A, H),
            (   D = (=) ->
                I0 is I + 1,
                compare_term_args(I0, C, X, Y, A, H)
            ;   C = D
            ).
    compare_term_args(_, =, _, _, _, _).
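    One hedged alternative to argument-wise descent:
    first factorize each term into an acyclic
    representation, then use numbervars/3 as a poor
    man's alpha conversion so that the fresh variables
    get canonical names, and finally compare with the
    standard compare/3. A sketch only; compare_rational/3
    and canonical/2 are hypothetical names, and it is not
    claimed this yields a transitive order on bisimilarity
    classes, since the factorization may not fully
    deduplicate:

```prolog
:- use_module(library(terms)).

% Sketch: compare rational trees via factorized forms,
% with numbervars/3 standing in for alpha conversion.
compare_rational(Order, X, Y) :-
    canonical(X, CX),
    canonical(Y, CY),
    compare(Order, CX, CY).

canonical(T, Repr) :-
    copy_term(T, C),                 % do not bind T's own variables
    term_factorized(C, Skel, Eqs),   % acyclic skeleton + equations
    Repr = Skel-Eqs,
    numbervars(Repr, 0, _).          % canonical variable names
```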

    Maybe there is a grain of salt in invoking the
    Axiom of Choice (AC) in some previous posts.
    Although the Axiom of Choice is not needed for

    finite sets, they anyway involve some choice.

    BTW: When Peter Aczel writes ZFC-, does he
    mean ZFC without AC? But he doesn’t
    show some compare/3.

    Mild Shock schrieb:
    Hi,

    Take this exercise. Exercise 4.1 Draw the tree
    represented by the term n1(n2(n4),n3(n5,n6)).
    https://book.simply-logical.space/src/text/2_part_ii/4.1.html

    Maybe there was a plan that SWISH can draw trees,
    and it could be that something was implemented as well.

    But I don't see anything dynamic working on the
    above web site link. Next challenge for Simply Logical,

    in another life. Draw a rational tree.
    The Prolog system has them:

    /* SWI-Prolog 9.3.26 */
    ?- X = a(Y,_), Y = b(X,_).
    X = a(b(X, _A), _),
    Y = b(X, _A).

    Bye




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jul 26 16:36:35 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Is compare/3 for rational trees a sunflower
    study subject? With one publication from
    the University of Tanzania? Who knows?

    Bye

    Mild Shock schrieb:
    Hi,

    Did the old School Logicians waste time
    with compare/3? I guess not:

    Ernst Specker, my beloved Professor, and
    Dana Scott made only a partial order. A
    partial order might lack transitivity

    of (<'):

    "Scott's model construction is in fact
    closely related to Specker's but there
    is a subtle difference in the notion of
    tree that they use. In fact neither of
    them formulate their notion of tree in
    terms of graphs but rather in terms of
    what it will be convenient here to
    call tree-partial-orderings."

    See here:

    NON-WELL-FOUNDED SETS
    Peter Aczel - 1988 https://les-mathematiques.net/vanilla/uploads/editor/fh/v4pi6qyxfbel.pdf

    There is also the notion of co-well-
    foundedness, something like Noetherian but
    upside down, i.e. certain ascending
    chains stabilizing.

    Bye

    Mild Shock schrieb:

    I guess there is a bug in preparing the flat terms vector

    I'll give you a gold medal 🥇 if you can prove
    correct a compare_index/3 that uses this rule. It
    was already shown impossible by Matt Carlson.

    There are alternative approaches that can reach
    transitivity, but do not use the below step
    inside some compare_index/3.

    compare_term_args(I, C, X, Y, A, H) :-
            arg(I, X, K),
            arg(I, Y, L),
            !,
            compare_index(D, K, L, A, H),
            (   D = (=) ->
                I0 is I + 1,
                compare_term_args(I0, C, X, Y, A, H)
            ;   C = D
            ).
    compare_term_args(_, =, _, _, _, _).

    Maybe there is a grain of salt in invoking the
    Axiom of Choice (AC) in some previous posts.
    Although the Axiom of Choice is not needed for

    finite sets, they anyway involve some choice.

    BTW: When Peter Aczel writes ZFC-, does he
    mean ZFC without AC? But he doesn’t
    show some compare/3.

    Mild Shock schrieb:
    Hi,

    Take this exercise. Exercise 4.1 Draw the tree
    represented by the term n1(n2(n4),n3(n5,n6)).
    https://book.simply-logical.space/src/text/2_part_ii/4.1.html

    Maybe there was a plan that SWISH can draw trees,
    and it could be that something was implemented as well.

    But I don't see anything dynamic working on the
    above web site link. Next challenge for Simply Logical,

    in another life. Draw a rational tree.
    The Prolog system has them:

    /* SWI-Prolog 9.3.26 */
    ?- X = a(Y,_), Y = b(X,_).
    X = a(b(X, _A), _),
    Y = b(X, _A).

    Bye





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Nov 2 11:58:55 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Taking this one:

    Sam, Jakub, and Wojciech on the future of OpenAI https://www.youtube.com/watch?v=ngDCxlZcecw

    There are some funny parts where Jakub stutters:

    OpenAI is Deploying the Forbidden Method: GPT-6 is Different! https://www.youtube.com/watch?v=tR2M6JDyrRw

    The Energy Part: 20 Billion USD for 1 GW per 5 Years.
    I wonder how, when, and why the Bubble will burst.
    Or is the bubble here to stay?

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we were to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well, ILP might have its merits; maybe we should not ask
    for a marriage of LLM and Prolog, but of Autoencoders and ILP.
    But it's tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors: are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Nov 2 12:19:25 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Taking this one:

    Sam, Jakub, and Wojciech on the future of OpenAI https://www.youtube.com/watch?v=ngDCxlZcecw

    There are some funny parts where Jakub stutters:

    OpenAI is Deploying the Forbidden Method: GPT-6 is Different! https://www.youtube.com/watch?v=tR2M6JDyrRw

    What even is "Latent Thinking"? Some thinking
    models go through verbalization loops and realize a
    form of "Loud Thinking", i.e. thinking out loud.

    Autoencoders anyway build a latent space during the
    training phase, so one can do chains of thought
    in the latent space, providing a form of "Silent Thinking".

    The Energy Part: 20 Billion USD for 1 GW per 5 Years.
    I wonder how, when, and why the Bubble will burst.
    Or is the bubble here to stay?

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we were to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well, ILP might have its merits; maybe we should not ask
    for a marriage of LLM and Prolog, but of Autoencoders and ILP.
    But it's tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors: are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Nov 2 13:20:15 2025
    From Newsgroup: comp.lang.prolog


    Hi,

    And what about this fully automated AI researcher in
    your team suggested in the OpenAI chat? Well, latent spaces
    were already there, with autoencoders sprouting before

    OpenAI picked them up. They challenge your own "framing"
    culture, from classic to modern, from Aristotle's Categories,
    to Russell's Complexes, to who knows what inside the

    scholarly world. But to cater for its human client, the AI has to
    be trained to use this "framing" culture in conversation. But it
    could equally well communicate in latent space talk?

    The cryptic "framing" that the AI training invents by itself?
    This kind of interaction might have some benefit, so
    society must arm itself with AI researchers?

    Bye

    P.S.: Napoleon was the same Peace Lover as Drump?

    The cartoon of Napoleon III below (EXAMPLE 4)
    appeared in Punch on Feb. 19, 1859.
    Premise: Napoleon declares that “The Empire embodies
    peace” (“L’Empire c’est la paix”).
    Premise: Napoleon has surrounded himself with many armaments.
    Conclusion: Napoleon may sound inoffensive when he says that
    “The Empire embodies peace,” but his build up of armaments
    suggests we should be wary of the empire he has built. https://plato.stanford.edu/entries/logic-informal/



    Mild Shock schrieb:
    Hi,

    Taking this one:

    Sam, Jakub, and Wojciech on the future of OpenAI https://www.youtube.com/watch?v=ngDCxlZcecw

    There are some funny parts where Jakub stutters:

    OpenAI is Deploying the Forbidden Method: GPT-6 is Different! https://www.youtube.com/watch?v=tR2M6JDyrRw

    What even is "Latent Thinking"? Some thinking
    models go through verbalization loops and realize a
    form of "Loud Thinking", i.e. thinking out loud.

    Autoencoders anyway build a latent space during the
    training phase, so one can do chains of thought
    in the latent space, providing a form of "Silent Thinking".

    The Energy Part: 20 Billion USD for 1 GW per 5 Years.
    I wonder how, when, and why the Bubble will burst.
    Or is the bubble here to stay?

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we were to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well, ILP might have its merits; maybe we should not ask
    for a marriage of LLM and Prolog, but of Autoencoders and ILP.
    But it's tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors: are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg



    --- Synchronet 3.21a-Linux NewsLink 1.2