• Re: Explicitly showing the divergence of behavior between HHH(DDD) and HHH1(DDD)

    From wij@wyniijj5@gmail.com to comp.theory on Fri Sep 12 06:41:06 2025
    From Newsgroup: comp.theory

    On Tue, 2025-09-09 at 23:19 +0800, wij wrote:
    On Tue, 2025-09-09 at 04:31 +0100, Mike Terry wrote:
    On 09/09/2025 02:03, wij wrote:
    On Tue, 2025-09-09 at 01:25 +0100, Mike Terry wrote:
    [..snip..]

    There are soooo many other more basic problems to pick on first if we're just refuting PO's
    claims - like HHH decides DD never halts, when DD halts.


    Mike.

    IMO, in the POOH case, since HHH is the halt decider (a TM), reaching the final halt
    state should be defined like:

    int main() {
       int final_stat= HHH(DD);  // "halt at the final state" when HHH returns here
                                 // and the variable final_stat is initialized.
    }

    Yes.  In x86utm world, the equivalent of a TM machine is something like HHH, but x86utm is
    written to locate and run a function called "main" which calls HHH, and HHH subsequently
    returns to main.

    So main here isn't part of the "TM" (HHH) being executed.  It's part of the paraphernalia PO
    uses to get the x86utm virtual address space (particularly the stack) into the right state to
    start the TM (HHH).  And when HHH returns, the "TM machine" ends like you say when it returns
    its final state.  The rest of main isn't part of the TM.

    But it's useful and quite flexible to have main there, and the coding for x86utm is simplified.

    There are other ways it might have been done, e.g. eliminating main() and having x86utm itself
    set up the stack to run HHH.  Or other ways of indicating halting might have been used, like
    calling a primitive operation with signature "void Halt(int code)".  But given we all like
    "structured programming" concepts like strictly nested code blocks, returning from HHH seems
    like the most natural way to do it.
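    [The "return from HHH" convention above can be sketched in a few lines; this is a minimal toy
    under assumed names (HHH_toy, SomeFunc, run_toy are illustrative, not code from x86utm):]

    ```c
    #include <assert.h>

    /* Minimal sketch of the "return from HHH" halting convention.
       HHH_toy, SomeFunc and run_toy are hypothetical names. */
    static void SomeFunc(void) { }           /* a trivially halting function */

    static int HHH_toy(void (*p)(void))
    {
        (void)p;                             /* a real decider would analyze p here */
        return 1;                            /* returning IS reaching the final halt state */
    }

    int run_toy(void)
    {
        /* "halt at the final state" when HHH_toy returns here and the
           verdict variable is initialized, as in wij's main() above. */
        int final_stat = HHH_toy(SomeFunc);
        return final_stat;
    }
    ```

    [The alternative `void Halt(int code)` primitive would instead signal halting by a call that
    never returns; returning the verdict keeps the strictly nested block structure.]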


    That also means:
       1. DD halts means:
          int main() {
            DD();
          } // DD halts means the real *TM* DD reaches here

    Yes.

       2. The DD 'input' simulated by HHH is not real: it can never actually reach
          the real final halt state; it is just data (a string) that is
          analyzed/decided to reach its final state.

    Well, HHH could analyse some other halting function, like say SomeFunc().  It can do its
    simulation and simulate right up to SomeFunc's return.  HHH will see that, and conclude that
    SomeFunc() is halting.  That's ok, but simulation is just one kind of analysis a halt decider
    can perform to reach its decision.  Like you say, in this case SomeFunc() is not "real" in the
    way HHH is - it's part of the calculation HHH is performing.

    HHH can simulate /some/ functions to their final state, but not DD, because of the specific
    way DD and HHH are related.


    And, so, I think 'pure' function or not (and others) should not be important (so far).

    Probably not. (so far).

    But... PO wants to argue about his simulations and what they're doing.  His x86utm is written
    so that the original HHH and all its nested simulations run in the same address space.  When a
    simulation changes something in memory, every simulation and the outer HHH has visibility of
    that change, at least in principle!  (The simulations might try to avoid looking at those
    changes, but that takes discipline through coding rules.)

    Also PO talks about HHH and DD() which calls HHH because that corresponds to what the Linz
    proof does: the Linz proof (using TMs) embeds the functionality of HHH inside DD, and the key
    point is that HHH inside DD must do exactly what HHH did.

    It's easy to make HHH (the decider) and HHH (embedded in DD) behave differently if impure
    functions, e.g. misusing global variables, are allowed: just set a global flag which modifies
    HHH behaviour according to how deep it is in the simulation stack.  So we could make HHH
    decide DD's halting correctly with such "cheating"!
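    [The global-flag "cheat" just described can be sketched concretely; this is a hypothetical
    illustration (sim_depth, H_cheat and embedded_H_cheat are made-up names, not from halt7.c) of
    one function body whose behaviour depends on how deep in the simulation stack the copy runs:]

    ```c
    #include <assert.h>

    /* Hypothetical sketch of the cheat: identical source for every
       copy, but the answer depends on impure global state recording
       the current simulation depth. */
    static int sim_depth = 0;        /* the global flag: the cheat */

    static int H_cheat(void)
    {
        /* same source text for every copy, yet the behaviour differs: */
        return (sim_depth == 0) ? 1  /* top-level decider's behaviour  */
                                : 0; /* embedded copy behaves otherwise */
    }

    static int embedded_H_cheat(void)
    {
        sim_depth++;                 /* one level down the simulation stack */
        int v = H_cheat();
        sim_depth--;
        return v;
    }
    ```

    [So the decider copy and the embedded copy of the very same source diverge, which is exactly
    what the Linz correspondence forbids.]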

    But with this cheating PO would be breaking the correspondence with the Linz proof, and for
    such a cheat there would be no contradiction with that proof!  So for PO's argument he needs
    to follow coding rules to guarantee no such cheating - and when using one address space for
    all the code (decider HHH, and all its nested simulations) those rules mean pure functions or
    something that amounts to that.

    ###  That is the point which makes it clearest that HHH/DD need to follow coding rules such as
    being pure functions (or something very like this) so that there can be no such cheating.
    That way, if PO's HHH/DD pair are correctly coded to reflect the relationship they have in the
    Linz proof, and if HHH /still/ correctly decides DD()'s halting status, that would be a
    problem for the Linz proof.

    Anyway, that's why the talk of pure functions comes up - it's not relevant if we simply want
    to use x86utm to execute one program in isolation, but PO must follow the code restrictions if
    he wants his arguments about HHH and DD to hold.  He's made his own life more complicated by
    his design.  If he had created his simulations each in their own address space the use of
    global data would not really matter - it would just be "part of the computation" happening in
    that address space, and could not influence other simulation levels in a cheating manner.  So
    there would be no requirement for "pure" functions to prevent that cheating...  (I think!
    maybe more thought is needed.)


    Not sure if that is useful or the sort of response you were looking for.  It seems to me that
    you were in effect asking "why do people talk about pure functions?"


    Mike.

    Thanks for the long explanation.
    In my view, pure functions (or other rules) may also work, but that is a restricted case of a
    TM (restricted by the coding rules).

    It seems you are trying to make olcott understand.
    I would first make olcott understand what a contradiction is. He cannot even understand what
    the proposition X&~X means!!!
    I just came up with a solution. The POO proof could be defined as:
    Let set S={f| f is a 'TM' decision function}. Does an H, H∈S, exist such that for any 'TM'
    function D (taking no arguments), H(D)==1 iff D() halts?
    So, the POO proof can be modified this way, avoiding the need to involve tape encoding. But of
    course, the details of POOH are very problematic.
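    [For what it's worth, the usual diagonal argument goes through directly in this C phrasing.
    A minimal sketch with one concrete candidate H (H and D are illustrative names, not x86utm
    code): this H always answers 0 ("does not halt"), and the diagonal D built against it then
    plainly halts, so this H is wrong about D; the same construction defeats every member of S.]

    ```c
    #include <assert.h>

    /* One concrete candidate from S and the diagonal function built
       against it.  H and D are illustrative names. */
    static void D(void);

    static int H(void (*p)(void))
    {
        (void)p;
        return 0;                /* this candidate always answers "does not halt" */
    }

    static void D(void)
    {
        if (H(D) == 1)           /* had H answered "halts" ...      */
            for (;;) { }         /* ... D() would loop forever      */
        /* H answered "does not halt", so D() returns, i.e. halts:
           either way H(D) contradicts D()'s actual behaviour. */
    }
    ```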
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From wij@wyniijj5@gmail.com to comp.theory on Fri Sep 12 07:04:23 2025
    From Newsgroup: comp.theory

    On Fri, 2025-09-12 at 06:41 +0800, wij wrote:
    [..snip..]
    Re-write of the previous post, to be clearer:
    I just came up with a solution. The Halting Problem could be defined as:
    Let set S={f| f is a 'TM' decision function in C}. Does an H, H∈S, exist such that for any
    'TM' function D in C (taking no arguments), H(D)==1 iff D() halts?
    So, the HP proof (in case the author wants to prove the decider is possible) can avoid the
    need to address tape encoding. This method should be generally useful for avoiding the tape
    encoding of TMs (I hope so; I have not really thought it through).
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Fri Sep 12 02:14:03 2025
    From Newsgroup: comp.theory

    On 09/09/2025 14:51, olcott wrote:
    On 9/8/2025 7:25 PM, Mike Terry wrote:
    On 08/09/2025 20:07, Kaz Kylheku wrote:
    On 2025-09-08, Richard Heathfield <rjh@cpax.org.uk> wrote:
    [..snip..]


    HHH/HHH1 do not use "their own addresses" in Decide_Halting.  If they do in your halt7.c, what
    date is that from?  (My copy is from around July 2024).


    February 15, 2025 is the last commit https://github.com/plolcott/x86utm/blob/master/Halt7.c


    thanks, that date is later than the file I had, but I see there aren't major changes in halt7.c -
    mostly moving routines around and renaming a couple of routines. HHH/HHH1 still do not use their
    own addresses.

    Mike.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Sep 14 08:11:39 2025
    From Newsgroup: comp.theory

    HHH and HHH1 have identical source code except
    for their name. The DDD of HHH1(DDD) has identical
    behavior to the directly executed DDD().

    DDD calls HHH(DDD) in recursive emulation. DDD does
    not call HHH1 at all. This is why the behavior
    of DDD.HHH1 is different from the behavior of DDD.HHH.

    _DDD()
    [00002183] 55 push ebp
    [00002184] 8bec mov ebp,esp
    [00002186] 6883210000 push 00002183 ; push DDD
    [0000218b] e833f4ffff call 000015c3 ; call HHH
    [00002190] 83c404 add esp,+04
    [00002193] 5d pop ebp
    [00002194] c3 ret
    Size in bytes:(0018) [00002194]

    _main()
    [000021a3] 55 push ebp
    [000021a4] 8bec mov ebp,esp
    [000021a6] 6883210000 push 00002183 ; push DDD
    [000021ab] e843f3ffff call 000014f3 ; call HHH1
    [000021b0] 83c404 add esp,+04
    [000021b3] 33c0 xor eax,eax
    [000021b5] 5d pop ebp
    [000021b6] c3 ret
    Size in bytes:(0020) [000021b6]

     machine   stack     stack     machine    assembly
     address   address   data      code       language
     ========  ========  ========  ========== =============
    [000021a3][0010382d][00000000] 55         push ebp      ; main()
    [000021a4][0010382d][00000000] 8bec       mov ebp,esp   ; main()
    [000021a6][00103829][00002183] 6883210000 push 00002183 ; push DDD
    [000021ab][00103825][000021b0] e843f3ffff call 000014f3 ; call HHH1
    New slave_stack at:1038d1

    Begin Local Halt Decider Simulation   Execution Trace Stored at:1138d9
    [00002183][001138c9][001138cd] 55         push ebp      ; DDD of HHH1
    [00002184][001138c9][001138cd] 8bec       mov ebp,esp   ; DDD of HHH1
    [00002186][001138c5][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][001138c1][00002190] e833f4ffff call 000015c3 ; call HHH
    New slave_stack at:14e2f9

    Begin Local Halt Decider Simulation   Execution Trace Stored at:15e301
    [00002183][0015e2f1][0015e2f5] 55         push ebp      ; DDD of HHH[0]
    [00002184][0015e2f1][0015e2f5] 8bec       mov ebp,esp   ; DDD of HHH[0]
    [00002186][0015e2ed][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][0015e2e9][00002190] e833f4ffff call 000015c3 ; call HHH
    New slave_stack at:198d21

    *This is the beginning of the divergence of the behavior*
    *of DDD emulated by HHH versus DDD emulated by HHH1*

    [00002183][001a8d19][001a8d1d] 55         push ebp      ; DDD of HHH[1]
    [00002184][001a8d19][001a8d1d] 8bec       mov ebp,esp   ; DDD of HHH[1]
    [00002186][001a8d15][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][001a8d11][00002190] e833f4ffff call 000015c3 ; call HHH
    Local Halt Decider: Infinite Recursion Detected Simulation Stopped

    [00002190][001138c9][001138cd] 83c404     add esp,+04   ; DDD of HHH1
    [00002193][001138cd][000015a8] 5d         pop ebp       ; DDD of HHH1
    [00002194][001138d1][0003a980] c3         ret           ; DDD of HHH1
    [000021b0][0010382d][00000000] 83c404     add esp,+04   ; main()
    [000021b3][0010382d][00000000] 33c0       xor eax,eax   ; main()
    [000021b5][00103831][00000018] 5d         pop ebp       ; main()
    [000021b6][00103835][00000000] c3         ret           ; main()
    Number of Instructions Executed(352831) == 5266 Pages

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.lang.c++,comp.lang.c,comp.ai.philosophy on Sun Sep 14 08:22:50 2025
    From Newsgroup: comp.theory

    void DDD()
    {
    HHH(DDD);
    return;
    }

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    That is why I said it is so important for you to
    carefully study this carefully annotated execution
    trace instead of continuing to totally ignore it.



    [..snip..]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Sep 14 09:24:17 2025
    From Newsgroup: comp.theory

    On 9/14/2025 9:11 AM, olcott wrote:
    HHH and HHH1 have identical source code except
    for their name.

    But they use local static data, therefore they are different because
    they use different static data.
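    [dbush's point can be illustrated with a minimal sketch (copy_A and copy_B are hypothetical
    names): two functions with textually identical bodies, each owning its own static storage,
    diverge across calls.]

    ```c
    #include <assert.h>

    /* Two textually identical bodies; each has its OWN static counter,
       so the two "copies" are distinct functions with distinct state. */
    static int copy_A(void) { static int calls = 0; return ++calls; }
    static int copy_B(void) { static int calls = 0; return ++calls; }
    ```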

    The DDD of HHH1(DDD) has identical
    behavior to the directly executed DDD().

    DDD calls HHH(DDD) in recursive emulation. DDD does
    not call HHH1 at all. This is why the behavior
    of DDD.HHH1 is different than the behavior of DDD.HHH

    False. Since DD does not call HHH1 but HHH, HHH1 simulates the same
    thing as HHH.


    _DDD()
    [00002183] 55             push ebp
    [00002184] 8bec           mov ebp,esp
    [00002186] 6883210000     push 00002183 ; push DDD
    [0000218b] e833f4ffff     call 000015c3 ; call HHH
    [00002190] 83c404         add esp,+04
    [00002193] 5d             pop ebp
    [00002194] c3             ret
    Size in bytes:(0018) [00002194]

    _main()
    [000021a3] 55             push ebp
    [000021a4] 8bec           mov ebp,esp
    [000021a6] 6883210000     push 00002183 ; push DDD
    [000021ab] e843f3ffff     call 000014f3 ; call HHH1
    [000021b0] 83c404         add esp,+04
    [000021b3] 33c0           xor eax,eax
    [000021b5] 5d             pop ebp
    [000021b6] c3             ret
    Size in bytes:(0020) [000021b6]

     machine   stack     stack     machine    assembly
     address   address   data      code       language
     ========  ========  ========  ========== =============
    [000021a3][0010382d][00000000] 55         push ebp      ; main()
    [000021a4][0010382d][00000000] 8bec       mov ebp,esp   ; main()
    [000021a6][00103829][00002183] 6883210000 push 00002183 ; push DDD
    [000021ab][00103825][000021b0] e843f3ffff call 000014f3 ; call HHH1
    New slave_stack at:1038d1

    Begin Local Halt Decider Simulation   Execution Trace Stored at:1138d9
    [00002183][001138c9][001138cd] 55         push ebp      ; DDD of HHH1
    [00002184][001138c9][001138cd] 8bec       mov ebp,esp   ; DDD of HHH1
    [00002186][001138c5][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][001138c1][00002190] e833f4ffff call 000015c3 ; call HHH
    New slave_stack at:14e2f9

    Begin Local Halt Decider Simulation   Execution Trace Stored at:15e301
    [00002183][0015e2f1][0015e2f5] 55         push ebp      ; DDD of HHH[0]
    [00002184][0015e2f1][0015e2f5] 8bec       mov ebp,esp   ; DDD of HHH[0]
    [00002186][0015e2ed][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][0015e2e9][00002190] e833f4ffff call 000015c3 ; call HHH
    New slave_stack at:198d21

    *This is the beginning of the divergence of the behavior*
    *of DDD emulated by HHH versus DDD emulated by HHH1*

    False. Both are the same instruction-for-instruction up to the point
    that HHH aborts. You have repeatedly failed to show where at the
    assembly level a difference occurs.

    This is made more clear in the following side-by-side trace:


    HHH1 Simulation of DD (HHH1 never aborts)     HHH Simulation of DD
    ==========================================    ==========================================
    S machine    machine        assembly          S machine    machine        assembly
    D address    code           language          D address    code           language
    = ========   ============== =============     = ========   ============== =============
    [1][0000213e] 55         push ebp             [1][0000213e] 55         push ebp
    [1][0000213f] 8bec       mov ebp,esp          [1][0000213f] 8bec       mov ebp,esp
    [1][00002141] 51         push ecx             [1][00002141] 51         push ecx
    [1][00002142] 683e210000 push 0000213e        [1][00002142] 683e210000 push 0000213e
    [1][00002147] e8a2f4ffff call 000015ee        [1][00002147] e8a2f4ffff call 000015ee
    [1]New slave_stack at:14e33e                  [1]New slave_stack at:14e33e

    [1]Begin Local Halt Decider Simulation
    [2][0000213e] 55         push ebp             [2][0000213e] 55         push ebp
    [2][0000213f] 8bec       mov ebp,esp          [2][0000213f] 8bec       mov ebp,esp
    [2][00002141] 51         push ecx             [2][00002141] 51         push ecx
    [2][00002142] 683e210000 push 0000213e        [2][00002142] 683e210000 push 0000213e
    [2][00002147] e8a2f4ffff call 000015ee        [2][00002147] e8a2f4ffff call 000015ee
    [3]New slave_stack at:198d66                  ### THIS IS WHERE HHH stops simulating DD
    [3][0000213e] 55         push ebp             ### Right up to this point
    [3][0000213f] 8bec       mov ebp,esp          ### HHH's and HHH1's traces match as claimed
    [3][00002141] 51         push ecx
    [3][00002142] 683e210000 push 0000213e
    [3][00002147] e8a2f4ffff call 000015ee
    [1]Infinite Recursion Detected Simulation Stopped

    [1][0000214c] 83c404     add esp,+04
    [1][0000214f] 8945fc     mov [ebp-04],eax
    [1][00002152] 837dfc00   cmp dword [ebp-04],+00
    [1][00002156] 7402       jz 0000215a
    [1][0000215a] 8b45fc     mov eax,[ebp-04]
    [1][0000215d] 8be5       mov esp,ebp
    [1][0000215f] 5d         pop ebp
    [1][00002160] c3         ret

    (SD column is simulation depth)



    [00002183][001a8d19][001a8d1d] 55         push ebp      ; DDD of HHH[1]
    [00002184][001a8d19][001a8d1d] 8bec       mov ebp,esp   ; DDD of HHH[1]
    [00002186][001a8d15][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][001a8d11][00002190] e833f4ffff call 000015c3 ; call HHH
    Local Halt Decider: Infinite Recursion Detected Simulation Stopped

    [00002190][001138c9][001138cd] 83c404     add esp,+04   ; DDD of HHH1
    [00002193][001138cd][000015a8] 5d         pop ebp       ; DDD of HHH1
    [00002194][001138d1][0003a980] c3         ret           ; DDD of HHH1
    [000021b0][0010382d][00000000] 83c404     add esp,+04   ; main()
    [000021b3][0010382d][00000000] 33c0       xor eax,eax   ; main()
    [000021b5][00103831][00000018] 5d         pop ebp       ; main()
    [000021b6][00103835][00000000] c3         ret           ; main()
    Number of Instructions Executed(352831) == 5266 Pages


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Sep 14 09:31:14 2025
    From Newsgroup: comp.theory

    On 9/14/2025 9:22 AM, olcott wrote:
    void DDD()
    {
      HHH(DDD);
      return;
    }

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    False. HHH1 sees the full behavior of DD from beginning to its final state.


    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    Because HHH aborts in violation of the semantics of the x86 language


    That is why I said it is so important for you to
    carefully study this carefully annotated execution
    trace instead of continuing to totally ignore it.


    But you ignore the following trace that shows that HHH and HHH1 perform
    the same simulation instruction-for-instruction up to the point that HHH aborts.

    If you disagree, point out the specific instruction that is correctly simulated differently by HHH and HHH1.

    A valid answer will be of the form "The last common instruction is
    number N, which is X. When HHH simulates X, A occurs, and when HHH1
    simulates X, B occurs."

    Failure to provide the above will be construed as admission that I am
    right. You have permanently and irrevocably accepted this method of
    admission by using it yourself:


    On 6/15/2024 1:39 PM, olcott wrote:
    You are the one that is backed into a corner here and no amount
    of pure bluster will get you out. Failing to provide the requested
    steps *is construed as your admission that I am correct*





    HHH1 Simulation of DD (HHH1 never aborts)     HHH Simulation of DD
    ==========================================    ==========================================
    S machine    machine        assembly          S machine    machine        assembly
    D address    code           language          D address    code           language
    = ========   ============== =============     = ========   ============== =============
    [1][0000213e] 55         push ebp             [1][0000213e] 55         push ebp
    [1][0000213f] 8bec       mov ebp,esp          [1][0000213f] 8bec       mov ebp,esp
    [1][00002141] 51         push ecx             [1][00002141] 51         push ecx
    [1][00002142] 683e210000 push 0000213e        [1][00002142] 683e210000 push 0000213e
    [1][00002147] e8a2f4ffff call 000015ee        [1][00002147] e8a2f4ffff call 000015ee
    [1]New slave_stack at:14e33e                  [1]New slave_stack at:14e33e

    [1]Begin Local Halt Decider Simulation
    [2][0000213e] 55         push ebp             [2][0000213e] 55         push ebp
    [2][0000213f] 8bec       mov ebp,esp          [2][0000213f] 8bec       mov ebp,esp
    [2][00002141] 51         push ecx             [2][00002141] 51         push ecx
    [2][00002142] 683e210000 push 0000213e        [2][00002142] 683e210000 push 0000213e
    [2][00002147] e8a2f4ffff call 000015ee        [2][00002147] e8a2f4ffff call 000015ee
    [3]New slave_stack at:198d66                  ### THIS IS WHERE HHH stops simulating DD
    [3][0000213e] 55         push ebp             ### Right up to this point
    [3][0000213f] 8bec       mov ebp,esp          ### HHH's and HHH1's traces match as claimed
    [3][00002141] 51         push ecx
    [3][00002142] 683e210000 push 0000213e
    [3][00002147] e8a2f4ffff call 000015ee
    [1]Infinite Recursion Detected Simulation Stopped

    [1][0000214c] 83c404     add esp,+04
    [1][0000214f] 8945fc     mov [ebp-04],eax
    [1][00002152] 837dfc00   cmp dword [ebp-04],+00
    [1][00002156] 7402       jz 0000215a
    [1][0000215a] 8b45fc     mov eax,[ebp-04]
    [1][0000215d] 8be5       mov esp,ebp
    [1][0000215f] 5d         pop ebp
    [1][00002160] c3         ret

    (SD column is simulation depth)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory on Sun Sep 14 14:11:25 2025
    From Newsgroup: comp.theory

    On Sun, 14 Sep 2025 08:11:39 -0500, olcott wrote:

    HHH and HHH1 have identical source code except for their name. The DDD
    of HHH1(DDD) has identical behavior to the directly executed DDD().

    DDD calls HHH(DDD) in recursive emulation. DDD does not call HHH1 at
    all. This is why the behavior of DDD.HHH1 is different than the behavior
    of DDD.HHH

    _DDD()
    [00002183] 55 push ebp [00002184] 8bec mov ebp,esp [00002186] 6883210000 push 00002183 ; push DDD [0000218b] e833f4ffff
    call 000015c3 ; call HHH [00002190] 83c404 add esp,+04
    [00002193] 5d pop ebp [00002194] c3 ret Size in bytes:(0018) [00002194]

    _main()
    [000021a3] 55 push ebp [000021a4] 8bec mov ebp,esp [000021a6] 6883210000 push 00002183 ; push DDD [000021ab] e843f3ffff
    call 000014f3 ; call HHH1 [000021b0] 83c404 add esp,+04 [000021b3] 33c0 xor eax,eax [000021b5] 5d pop ebp [000021b6] c3 ret Size in bytes:(0020) [000021b6]

    machine   stack     stack     machine    assembly
    address   address   data      code       language
    ========  ========  ========  ========== =============
    [000021a3][0010382d][00000000] 55 push ebp ; main()
    [000021a4][0010382d][00000000] 8bec mov ebp,esp ; main()
    [000021a6][00103829][00002183] 6883210000 push 00002183 ; push DDD
    [000021ab][00103825][000021b0] e843f3ffff call 000014f3 ; call HHH1
    New slave_stack at:1038d1

    Begin Local Halt Decider Simulation Execution Trace Stored at:1138d9
    [00002183][001138c9][001138cd] 55 push ebp ; DDD of HHH1
    [00002184][001138c9][001138cd] 8bec mov ebp,esp ; DDD of HHH1
    [00002186][001138c5][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][001138c1][00002190] e833f4ffff call 000015c3 ; call HHH
    New slave_stack at:14e2f9

    Begin Local Halt Decider Simulation Execution Trace Stored at:15e301
    [00002183][0015e2f1][0015e2f5] 55 push ebp ; DDD of HHH[0]
    [00002184][0015e2f1][0015e2f5] 8bec mov ebp,esp ; DDD of HHH[0]
    [00002186][0015e2ed][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][0015e2e9][00002190] e833f4ffff call 000015c3 ; call HHH
    New slave_stack at:198d21

    *This is the beginning of the divergence of the behavior*
    *of DDD emulated by HHH versus DDD emulated by HHH1*

    [00002183][001a8d19][001a8d1d] 55 push ebp ; DDD of HHH[1]
    [00002184][001a8d19][001a8d1d] 8bec mov ebp,esp ; DDD of HHH[1]
    [00002186][001a8d15][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][001a8d11][00002190] e833f4ffff call 000015c3 ; call HHH
    Local Halt Decider: Infinite Recursion Detected Simulation Stopped

    [00002190][001138c9][001138cd] 83c404 add esp,+04 ; DDD of HHH1
    [00002193][001138cd][000015a8] 5d pop ebp ; DDD of HHH1
    [00002194][001138d1][0003a980] c3 ret ; DDD of HHH1
    [000021b0][0010382d][00000000] 83c404 add esp,+04 ; main()
    [000021b3][0010382d][00000000] 33c0 xor eax,eax ; main()
    [000021b5][00103831][00000018] 5d pop ebp ; main()
    [000021b6][00103835][00000000] c3 ret ; main()
    Number of Instructions Executed(352831) == 5266 Pages

    DDD halts.

    /Flibble
    --
    meet ever shorter deadlines, known as "beat the clock"
  • From joes@noreply@example.org to comp.theory on Sun Sep 14 16:02:06 2025
    From Newsgroup: comp.theory

    Am Sun, 14 Sep 2025 08:22:50 -0500 schrieb olcott:
    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:

    Of course the execution traces are different before and after the
    abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH has aborted DD thus need
    not abort DD itself.
    HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
    abort DD itself.
    You mean the outermost simulated HHH, like so (indented by simulation
    level):
    HHH1                          HHH
      simulates                     simulates
        DD                            DD
          calls                         calls
            HHH                           HHH
              simulates                     simulates
                DD                            DD
                  calls                         calls
                    HHH                           HHH
                      simulates                     aborts  <- here is the difference
                        DD
                          calls
                            HHH
                              aborts  <- the catchup point

    But if you line up the outermost HHH, simulated or not, the arrows
    line up too.

    HHH and HHH1 have identical source code except for their name.
    The different behaviour shows they are not equal.

    DDD calls HHH(DDD) in recursive emulation. DDD does not call HHH1 at
    all. This is why the behavior of DDD.HHH1 is different than the behavior
    of DDD.HHH
    It shouldn't matter which simulator is simulating.
    --
    Am Sat, 20 Jul 2024 12:35:31 +0000 schrieb WM in sci.math:
    It is not guaranteed that n+1 exists for every n.
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Sun Sep 14 18:30:11 2025
    From Newsgroup: comp.theory

    On 2025-09-14, olcott <polcott333@gmail.com> wrote:
    HHH and HHH1

    I don't understand what you hope to achieve by introducing two deciders
    into the picture such that only one of them is the subject of the
    diagonal test case.

    have identical source code except
    for their name.

    What is the significance of this remark? All it does is show
    two things:

    You don't understand what it means for functions to
    be equivalent.

    The subsequent revelation that these functions behave differently means
    that there is some unwarranted sensitivity to function names
    or addresses. For instance, an execution trace can distinguish
    the cases when HHH1(DD) called HHH versus HHH(DD) called HHH.

    This is invalid.
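    [Editor's note: the address sensitivity described above can be shown with a
    minimal C sketch. F and G are hypothetical illustrations, not part of
    x86utm: two textually identical functions can only produce different
    results if the computation depends on its own address.]

    ```c
    #include <stdio.h>

    /* F and G are textually identical except for their name.  If a result
       depended only on the source text they would have to agree; any
       observed difference means the computation is sensitive to its own
       address. */
    static const void *F(void) { return (const void *)&F; }
    static const void *G(void) { return (const void *)&G; }

    int main(void)
    {
        /* Identical source, different addresses: the returned values differ. */
        printf("F() == G(): %s\n", F() == G() ? "yes" : "no");
        return 0;
    }
    ```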

    The DDD of HHH1(DDD) has identical
    behavior to the directly executed DDD().

    If it's not the case that every DDD has identical behavior to every
    other, you have a bug.

    *This is the beginning of the divergence of the behavior*
    *of DDD emulated by HHH versus DDD emulated by HHH1*

    So, contrary to your earlier claim, prior to this event they do not
    diverge; they are the same.

    [00002183][001a8d19][001a8d1d] 55 push ebp ; DDD of HHH[1]
    [00002184][001a8d19][001a8d1d] 8bec mov ebp,esp ; DDD of HHH[1]
    [00002186][001a8d15][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][001a8d11][00002190] e833f4ffff call 000015c3 ; call HHH
    Local Halt Decider: Infinite Recursion Detected Simulation Stopped

    [00002190][001138c9][001138cd] 83c404 add esp,+04 ; DDD of HHH1
    [00002193][001138cd][000015a8] 5d pop ebp ; DDD of HHH1
    [00002194][001138d1][0003a980] c3 ret ; DDD of HHH1

    What is the value of EAX here; i.e. the decision returned by HHH1?

    Might it not be 1, showing that HHH1 has decided that DD halts?

    [000021b0][0010382d][00000000] 83c404 add esp,+04 ; main()
    [000021b3][0010382d][00000000] 33c0 xor eax,eax ; main()

    Ooops, your main is now clobbering the value of EAX to zero, in effect destroying the evidence.

    [000021b5][00103831][00000018] 5d pop ebp ; main()
    [000021b6][00103835][00000000] c3 ret ; main()
    Number of Instructions Executed(352831) == 5266 Pages
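    [Editor's note: the complaint about `xor eax,eax` can be made concrete with
    a minimal C sketch. HHH1_stub and DDD_stub are hypothetical stand-ins, not
    the x86utm code; the point is the difference between discarding and
    capturing the decider's return value.]

    ```c
    #include <stdio.h>

    static void DDD_stub(void) { }                /* placeholder for DDD */

    /* Hypothetical stand-in for HHH1: reports 1, i.e. "input halts". */
    static int HHH1_stub(void (*p)(void)) { (void)p; return 1; }

    int main(void)
    {
        HHH1_stub(DDD_stub);                  /* verdict discarded, like
                                                 main()'s xor eax,eax */

        int verdict = HHH1_stub(DDD_stub);    /* verdict captured and shown */
        printf("HHH1 returned %d\n", verdict);
        return 0;
    }
    ```

    Printing the verdict (or returning it from main) would make the trace
    self-documenting instead of clobbering EAX before it is ever seen.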
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Sun Sep 14 18:36:27 2025
    From Newsgroup: comp.theory

    On 2025-09-14, olcott <polcott333@gmail.com> wrote:
    void DDD()
    {
    HHH(DDD);
    return;
    }

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    You are literally in this same posting showing
    that HHH and HHH1 trace are identical before the abort;

    *This is the beginning of the divergence of the behavior*
    *of DDD emulated by HHH versus DDD emulated by HHH1*

    So what you are clearly saying here is that: before this abort
    point there exists a behavior of DDD emulated by HHH;
    there exists a behavior of DDD emulated by HHH1;
    and they are the same until then.

    This directly contradicts your bullshit rambling that
    "HHH1 only sees the behavior of DD after HHH has aborted DD".

    That is clearly false. HHH1(DD) begins tracing DD right from
    the beginning of DD, and through its call into HHH(DD) and so on.

    [00002190][001138c9][001138cd] 83c404 add esp,+04 ; DDD of HHH1
    [00002193][001138cd][000015a8] 5d pop ebp ; DDD of HHH1
    [00002194][001138d1][0003a980] c3 ret ; DDD of HHH1

    If you were honest with yourself and everyone else, you would dump the
    value of EAX here, the return value of HHH1. Your execution traces would
    track the values of all registers and show them, not only instructions executed.

    (You would also not arbitrarily exclude portions of the code from
    being traced.)
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Sun Sep 14 18:41:50 2025
    From Newsgroup: comp.theory

    On 2025-09-14, joes <noreply@example.org> wrote:
    DDD calls HHH(DDD) in recursive emulation. DDD does not call HHH1 at
    all. This is why the behavior of DDD.HHH1 is different than the behavior
    of DDD.HHH
    It shouldn't matter which simulator is simulating.

    Of course it should, because the diagonal test case will always be
    wrong, but another decider off the diagonal can get it right.

    There is absolutely no point in bringing in HHH1 at all; the only reason
    Olcott has it there is that it has "identical source code to HHH except
    for the name" and he believes this holds some consequence that he has
    not explained.

    The presence of HHH1 is irrelevant and uninteresting, except insofar
    that he's concealing that it returns 1.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 00:58:37 2025
    From Newsgroup: comp.theory

    On 9/14/2025 1:36 PM, Kaz Kylheku wrote:
    On 2025-09-14, olcott <polcott333@gmail.com> wrote:
    void DDD()
    {
    HHH(DDD);
    return;
    }

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    You are literally in this same posting showing
    that HHH and HHH1 trace are identical before the abort;


    Counter factual, please go back to the
    preceding post and try again. This time
    don't erase anything that I said.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Mon Sep 15 08:30:17 2025
    From Newsgroup: comp.theory

    On 15/09/2025 06:58, olcott wrote:
    On 9/14/2025 1:36 PM, Kaz Kylheku wrote:
    On 2025-09-14, olcott <polcott333@gmail.com> wrote:
    void DDD()
    {
        HHH(DDD);
        return;
    }

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    You are literally in this same posting showing
    that HHH and HHH1 trace are identical before the abort;


    Counter factual, please go back to the
    preceding post and try again. This time
    don't erase anything that I said.

    If it matters that much to your case and you need people to
    understand it *so much* that snipping it is a problem, hiding it
    in 92 lines of stack trace and other eye-glazing material wasn't
    smart. Didn't you recently claim to be clever?

    If it's not important enough for you to explain and defend, it's
    not important enough to survive a snip.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory on Mon Sep 15 09:48:25 2025
    From Newsgroup: comp.theory

    Op 14.sep.2025 om 15:22 schreef olcott:
    void DDD()
    {
      HHH(DDD);
      return;
    }

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    Exactly! That is the error of HHH! Do you finally get it? HHH aborts
    before it can see that no abort is needed for this input.
    HHH1 does not abort and sees that no abort is needed for exactly the
    same input.
    HHH is made blind for the whole specification of the input. It only sees
    the first part of the specification. That does not mean that the last
    part does not exist. HHH simply fails to see the full specification.
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 10:45:15 2025
    From Newsgroup: comp.theory

    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
    The presence of HHH1 is irrelevant and uninteresting, except insofar
    that he's concealing that it returns 1.


    You get this same information when DDD of HHH1
    reaches [00002194].

    I try to make it as simple as possible so that you
    can keep track of every detail of the execution trace of

    DDD correctly simulated by HHH
    DDD correctly simulated by HHH1

    *So that you can directly see that*

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    *When HHH reports on the actual behavior that it*
    *actually sees then HHH(DDD)==0 IS CORRECT*

    *When HHH reports on the actual behavior that it*
    *actually sees then HHH(DDD)==0 IS CORRECT*

    *When HHH reports on the actual behavior that it*
    *actually sees then HHH(DDD)==0 IS CORRECT*

    *That is why I said it is so important for you to*
    *carefully study this carefully annotated execution*
    *trace instead of continuing to totally ignore it*

    By not paying complete attention we have to go
    over this exact same material again and again
    until you get it.

    HHH and HHH1 have identical source code except
    for their name. The DDD of HHH1(DDD) has identical
    behavior to the directly executed DDD().

    DDD calls HHH(DDD) in recursive emulation. DDD does
    not call HHH1 at all. This is why the behavior
    of DDD.HHH1 is different than the behavior of DDD.HHH

    _DDD()
    [00002183] 55 push ebp
    [00002184] 8bec mov ebp,esp
    [00002186] 6883210000 push 00002183 ; push DDD
    [0000218b] e833f4ffff call 000015c3 ; call HHH
    [00002190] 83c404 add esp,+04
    [00002193] 5d pop ebp
    [00002194] c3 ret
    Size in bytes:(0018) [00002194]

    _main()
    [000021a3] 55 push ebp
    [000021a4] 8bec mov ebp,esp
    [000021a6] 6883210000 push 00002183 ; push DDD
    [000021ab] e843f3ffff call 000014f3 ; call HHH1
    [000021b0] 83c404 add esp,+04
    [000021b3] 33c0 xor eax,eax
    [000021b5] 5d pop ebp
    [000021b6] c3 ret
    Size in bytes:(0020) [000021b6]

    machine   stack     stack     machine    assembly
    address   address   data      code       language
    ========  ========  ========  ========== =============
    [000021a3][0010382d][00000000] 55 push ebp ; main()
    [000021a4][0010382d][00000000] 8bec mov ebp,esp ; main()
    [000021a6][00103829][00002183] 6883210000 push 00002183 ; push DDD
    [000021ab][00103825][000021b0] e843f3ffff call 000014f3 ; call HHH1
    New slave_stack at:1038d1

    Begin Local Halt Decider Simulation Execution Trace Stored at:1138d9
    [00002183][001138c9][001138cd] 55 push ebp ; DDD of HHH1
    [00002184][001138c9][001138cd] 8bec mov ebp,esp ; DDD of HHH1
    [00002186][001138c5][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][001138c1][00002190] e833f4ffff call 000015c3 ; call HHH[0]
    New slave_stack at:14e2f9

    Begin Local Halt Decider Simulation Execution Trace Stored at:15e301
    [00002183][0015e2f1][0015e2f5] 55 push ebp ; DDD of HHH[0]
    [00002184][0015e2f1][0015e2f5] 8bec mov ebp,esp ; DDD of HHH[0]
    [00002186][0015e2ed][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][0015e2e9][00002190] e833f4ffff call 000015c3 ; call HHH[1]
    New slave_stack at:198d21

    *This is the beginning of the divergence of the behavior*
    *of DDD emulated by HHH versus DDD emulated by HHH1*

    [00002183][001a8d19][001a8d1d] 55 push ebp ; DDD of HHH[1]
    [00002184][001a8d19][001a8d1d] 8bec mov ebp,esp ; DDD of HHH[1]
    [00002186][001a8d15][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][001a8d11][00002190] e833f4ffff call 000015c3 ; call HHH[2]
    Local Halt Decider: Infinite Recursion Detected Simulation Stopped

    [00002190][001138c9][001138cd] 83c404 add esp,+04 ; DDD of HHH1
    [00002193][001138cd][000015a8] 5d pop ebp ; DDD of HHH1
    [00002194][001138d1][0003a980] c3 ret ; DDD of HHH1
    [000021b0][0010382d][00000000] 83c404 add esp,+04 ; main()
    [000021b3][0010382d][00000000] 33c0 xor eax,eax ; main()
    [000021b5][00103831][00000018] 5d pop ebp ; main()
    [000021b6][00103835][00000000] c3 ret ; main()
    Number of Instructions Executed(352831) == 5266 Pages
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 11:38:56 2025
    From Newsgroup: comp.theory

    On 9/15/2025 2:30 AM, Richard Heathfield wrote:
    On 15/09/2025 06:58, olcott wrote:
    On 9/14/2025 1:36 PM, Kaz Kylheku wrote:
    On 2025-09-14, olcott <polcott333@gmail.com> wrote:
    void DDD()
    {
        HHH(DDD);
        return;
    }

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    You are literally in this same posting showing
    that HHH and HHH1 trace are identical before the abort;


    Counter factual, please go back to the
    preceding post and try again. This time
    don't erase anything that I said.

    If it matters that much to your case and you need people to understand
    it *so much* that snipping it is a problem, hiding it in 92 lines of
    stack trace and other eye-glazing material wasn't smart. Didn't you
    recently claim to be clever?


    This is the clearest way that I can provide ALL
    of the relevant details to make my most important
    key points.

    If it's not important enough for you to explain and defend, it's not important enough to survive a snip.


    That was already in the part that was erased.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 11:40:21 2025
    From Newsgroup: comp.theory

    On 9/15/2025 2:48 AM, Fred. Zwarts wrote:
    Op 14.sep.2025 om 15:22 schreef olcott:
    void DDD()
    {
       HHH(DDD);
       return;
    }

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    Exactly! That is the error of HHH!
    I provided the full trace so that you could
    see that it is not any error. That you ignored
    this is not any sort of actual rebuttal.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Mon Sep 15 17:27:52 2025
    From Newsgroup: comp.theory

    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
    The presence of HHH1 is irrelevant and uninteresting, except insofar
    that he's concealing that it returns 1.


    You get this same information when DDD of HHH1
    reaches [00002194].

    I try to make it as simple as possible so that you
    can keep track of every detail of the execution trace of

    DDD correctly simulated by HHH
    DDD correctly simulated by HHH1

    *So that you can directly see that*

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    That is obviously false. HHH1(DD) begins simulating
    DD from the entry to DD, and creating execution
    trace entries, before DD reaches its HHH(DD) call.

    You know this, which is why in your execution traces you include this
    remark:

    *This is the beginning of the divergence of the behavior*
    *of DDD emulated by HHH versus DDD emulated by HHH1*

    Before the divergence, there are behaviors, and they are not
    divergent.

    Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.

    You have yet to explain the significance of HHH1.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    That's a tautology. HHH only sees the behavior that HHH has seen in
    order to convince itself that seeing more behavior is not necessary.

    Yes?

    *When HHH reports on the actual behavior that it*
    *actually sees then HHH(DDD)==0 IS CORRECT*

    The simulation which HHH has abandoned can be
    continued to show that DDD halts, which indicates
    that the 0 is incorrect.

    *That is why I said it is so important for you to*
    *carefully study this carefully annotated execution*
    *trace instead of continuing to totally ignore it*

    Why don't you properly port it to Linux. Modern Linux toolchains do not
    produce COFF object files, only ELF. There evidently aren't any options
    you can supply to get a COFF object file.

    You have an ELF-handling class and COFF_handling class, but you
    neglected to create an easily switchable abstraction; you have
    hard-coded use of the COFF class in a whole bunch of places.

    HHH and HHH1 have identical source code except
    for their name.

    You keep repeating this without explaining what you
    believe to be the significance of it, or why HHH1
    has to be present at all.

    The DDD of HHH1(DDD) has identical
    behavior to the directly executed DDD().

    And you know that the directly executed DDD is halting; you have
    acknowledged that multiple times and brushed it aside.

    You know that when HHH1(DDD) executes its RET instruction,
    the return register EAX is nonzero.

    DDD calls HHH(DDD) in recursive emulation. DDD does
    not call HHH1 at all.

    That's why HHH1 has a shot at deciding DDD correctly
    by returning nonzero. HHH1 is not embroiled in the diagonal pair
    HHH/DDD.

    This is why the behavior
    of DDD.HHH1 is different than the behavior of DDD.HHH

    There is only one DDD with one behavior. If something else is observed, something is seriously wrong. Luckily, something is not seriously wrong
    (in that regard).
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Mon Sep 15 17:28:56 2025
    From Newsgroup: comp.theory

    On 2025-09-15, Richard Heathfield <rjh@cpax.org.uk> wrote:
    If it's not important enough for you to explain and defend, it's
    not important enough to survive a snip.

    He did the snip; I quoted what I thought was necessary; and my article
    has a parent reference so that anyone can see the original if they still
    have it on their server.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 13:19:26 2025
    From Newsgroup: comp.theory

    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
    The presence of HHH1 is irrelevant and uninteresting, except insofar
    that he's concealing that it returns 1.


    You get this same information when DDD of HHH1
    reaches [00002194].

    I try to make it as simple as possible so that you
    can keep track of every detail of the execution trace of

    DDD correctly simulated by HHH
    DDD correctly simulated by HHH1

    *So that you can directly see that*

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    That is obviously false. HHH1(DD) begins simulating
    DD from the entry to DD, and creating execution
    trace entries, before DD reaches its HHH(DD) call.


    From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.

    You know this, which is why in your execution traces you include this
    remark:

    *This is the beginning of the divergence of the behavior*
    *of DDD emulated by HHH versus DDD emulated by HHH1*

    Before the divergence, there are behaviors, and they are not
    divergent.


    HHH1 cannot see the behavior of DDD correctly
    simulated by HHH. HHH1 has no idea that HHH is
    identical to itself. HHH1 only sees that when
    its DDD calls HHH(DD) that this call returns.
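    [Editor's note: the call-and-return behavior described here can be modeled
    with a toy C program. All names are hypothetical stand-ins that use direct
    calls with a depth limit, not x86utm's instruction-level emulation: the
    innermost copy hits its abort limit and returns 0, so the outer copy sees
    the call return and reports halting.]

    ```c
    #include <stdio.h>

    static int depth = 0;                 /* current nesting level */

    static int HHH_model(int (*p)(void));

    /* Toy DDD: calls the decider on itself, then halts. */
    static int DDD_model(void)
    {
        HHH_model(DDD_model);
        return 0;
    }

    /* Toy decider: "simulates" by direct call, aborts past depth 2. */
    static int HHH_model(int (*p)(void))
    {
        if (depth >= 2)
            return 0;                     /* abort: report non-halting */
        depth++;
        p();                              /* run the input */
        depth--;
        return 1;                         /* input ran to completion */
    }

    int main(void)
    {
        /* The outermost copy (HHH1's position in this thread) sees the
           inner abort as an ordinary return, so DDD_model halts. */
        printf("outer verdict: %d\n", HHH_model(DDD_model));
        return 0;
    }
    ```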

    Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.


    DDD correctly simulated by HHH1 is identical to
    the behavior of the directly executed DDD().

    When we have emulation compared to emulation we are
    comparing Apples to Apples and not Apples to Oranges.

    You have yet to explain the significance of HHH1.


    Just did.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    That's a tautology. HHH only sees the behavior that HHH has seen in
    order to convince itself that seeing more behavior is not necessary.

    Yes?


    HHH has complete proof that DDD correctly
    simulated by HHH cannot possibly reach its
    own simulated final halt state.

    That you do not understand this complete
    proof is no actual rebuttal whatsoever.

    You are a very smart guy. The way around this
    is for you to figure out on your own how HHH
    would correctly determine that:

    void Infinite_Recursion()
    {
    Infinite_Recursion();
    return;
    }

    Would never halt.

    Exactly what execution trace details would HHH
    need to see to correctly conclude beyond all
    possible doubt that Infinite_Recursion() never halts?
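    [Editor's note: one answer that has been given in these threads, sketched
    here as an assumption rather than a claim about HHH's actual code, is the
    repeated-call criterion: if the trace shows a call to the same target twice
    with no conditional branch and no new data in between, the next repetition
    cannot differ, so the recursion never ends.]

    ```c
    #include <stdbool.h>
    #include <stddef.h>

    /* One simplified trace entry: the call target (when is_call is set) and
       whether the instruction is a conditional branch.  This is a toy
       abstraction of an x86 execution trace, not x86utm's actual format. */
    typedef struct {
        unsigned target;
        bool is_call;
        bool is_cond_branch;
    } TraceEntry;

    /* Non-halting criterion: a call to the same target occurs twice with no
       conditional branch between the two calls, so nothing can change on the
       next repetition. */
    static bool repeats_forever(const TraceEntry *t, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (!t[i].is_call)
                continue;
            for (size_t j = i + 1; j < n; j++) {
                if (t[j].is_cond_branch)
                    break;                   /* behavior may still change */
                if (t[j].is_call && t[j].target == t[i].target)
                    return true;             /* same call, nothing changed */
            }
        }
        return false;
    }
    ```

    For Infinite_Recursion() the trace is call, call with nothing in between,
    so the criterion fires; a loop guarded by a conditional test does not
    trigger it.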

    *When HHH reports on the actual behavior that it*
    *actually sees then HHH(DDD)==0 IS CORRECT*

    The simulation which HHH has abandoned can be
    continued to show that DDD halts,

    Try and trace through every single detail of every
    single step to show this.

    which indicates
    that the 0 is incorrect.

    *That is why I said it is so important for you to*
    *carefully study this carefully annotated execution*
    *trace instead of continuing to totally ignore it*

    Why don't you properly port it to Linux. Modern Linux toolchains do not produce COFF object files, only ELF. There evidently aren't any options
    you can supply to get a COFF object file.


    It came from Linux and was ported to Windows.

    I started writing an ELF version of this: https://github.com/plolcott/x86utm/blob/master/include/Read_COFF_Object.h

    and determined that it was not worth the effort to finish it.

    You have an ELF-handling class and COFF_handling class, but you
    neglected to create an easily switchable abstraction; you have
    hard-coded use of the COFF class in a whole bunch of places.

    HHH and HHH1 have identical source code except
    for their name.

    You keep repeating this without explaining what you
    believe to be the signficance of it, or why HHH1
    has to be present at all.


    I explained the crucial importance of this many hundreds
    of times over several years and not one single person
    ever paid any attention at all to any of the details.

    *IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
    *IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
    *IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
    *IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
    *IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
    *IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
    *IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
    *IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
    *IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
    *IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*

    People are so damned sure that I am wrong that whenever
    I explain the details of the proof that I am correct
    they never ever hear anything beside blah, blah, blah...

    The DDD of HHH1(DDD) has identical
    behavior to the directly executed DDD().

    And you know that the directly executed DDD is halting; you have
    acknowledged that multiple times and brushed it aside.


    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES

    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES

    I can give you 100 megabytes of that if it will help you
    see that I said it at least once.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory on Mon Sep 15 18:54:02 2025
    From Newsgroup: comp.theory

    On Mon, 15 Sep 2025 13:19:26 -0500, olcott wrote:

    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
    The presence of HHH1 is irrelevant and uninteresting, except insofar
    that he's concealing that it returns 1.


    You get this same information when DDD of HHH1 reaches [00002194].

    I try to make it as simple as possible so that you can keep track of
    every detail of the execution trace of

    DDD correctly simulated by HHH DDD correctly simulated by HHH1

    *So that you can directly see that*

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different before and after the
    abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH has aborted DD thus need
    not abort DD itself.

    That is obviously false. HHH1(DD) begins simulating DD from the entry
    to DD, and creating execution trace entries, before DD reaches its
    HHH(DD) call.


    From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.

    You know this, which is why in your execution traces you include this
    remark:

    *This is the beginning of the divergence of the behavior*
    *of DDD emulated by HHH versus DDD emulated by HHH1*

    Before the divergence, there are behaviors, and they are not divergent.


    HHH1 cannot see the behavior of DDD correctly simulated by HHH. HHH1 has
    no idea that HHH is identical to itself. HHH1 only sees that when its
    DDD calls HHH(DD), this call returns.

    Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.


    DDD correctly simulated by HHH1 is identical to the behavior of the
    directly executed DDD().

    When we have emulation compared to emulation we are comparing Apples to Apples and not Apples to Oranges.

    You have yet to explain the significance of HHH1.


    Just did.

    HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
    abort DD itself.

    That's a tautology. HHH only sees the behavior that HHH has seen in
    order to convince itself that seeing more behavior is not necessary.

    Yes?


    HHH has complete proof that DDD correctly simulated by HHH cannot
    possibly reach its own simulated final halt state.

    That you do not understand this complete proof is no actual rebuttal whatsoever.

    You are a very smart guy. The way around this is for you to figure out
    on your own how HHH would correctly determine that:

    void Infinite_Recursion()
    {
      Infinite_Recursion();
      return;
    }

    Would never halt.

    Exactly what execution trace details would HHH need to see to correctly conclude beyond all possible doubt that Infinite_Recursion() never
    halts?

    *When HHH reports on the actual behavior that it* *actually sees then
    HHH(DDD)==0 IS CORRECT*

    The simulation which HHH has abandoned can be continued to show that
    DDD halts,

    Try and trace through every single detail of every single step to show
    this.

    which indicates that the 0 is incorrect.

    *That is why I said it is so important for you to* *carefully study
    this carefully annotated execution* *trace instead of continuing to
    totally ignore it*

    Why don't you properly port it to Linux. Modern Linux toolchains do not
    produce COFF object files, only ELF. There evidently aren't any
    options you can supply to get a COFF object file.


    It came from Linux and was ported to Windows.

    I started writing an ELF version of this https://github.com/plolcott/x86utm/blob/master/include/Read_COFF_Object.h

    and determined that it was not worth the effort to finish it.

    You have an ELF-handling class and COFF_handling class, but you
    neglected to create an easily switchable abstraction; you have
    hard-coded use of the COFF class in a whole bunch of places.

    HHH and HHH1 have identical source code except for their name.

    You keep repeating this without explaining what you believe to be the
    significance of it, or why HHH1 has to be present at all.


    I explained the crucial importance of this many hundreds of times over several years and not one single person ever paid any attention at all
    to any of the details.

    *IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*

    People are so damned sure that I am wrong that whenever I explain the
    details of the proof that I am correct they never ever hear anything
    beside blah, blah, blah...

    The DDD of HHH1(DDD) has identical behavior to the directly executed
    DDD().

    And you know that the directly executed DDD is halting; you have
    acknowledged that multiple times and brushed it aside.


    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES

    I can give you 100 megabytes of that if it will help you see that I ever
    said it at least once.

    DD halts.

    /Flibble
    --
    meet ever shorter deadlines, known as "beat the clock"
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Mon Sep 15 21:59:33 2025
    From Newsgroup: comp.theory

    On 15/09/2025 18:27, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
    The presence of HHH1 is irrelevant and uninteresting, except insofar
    that he's concealing that it returns 1.


    You get this same information when DDD of HHH1
    reaches [00002194].

    I try to make it as simple as possible so that you
    can keep track of every detail of the execution trace of

    DDD correctly simulated by HHH
    DDD correctly simulated by HHH1

    *So that you can directly see that*

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    That is obviously false. HHH1(DD) begins simulating
    DD from the entry to DD, and creating execution
    trace entries, before DD reaches its HHH(DD) call.

    You know this, which is why in your execution traces you include this
    remark:

    *This is the beginning of the divergence of the behavior*
    *of DDD emulated by HHH versus DDD emulated by HHH1*

    Before the divergence, there are behaviors, and they are not
    divergent.

    Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.

    You have yet to explain the significance of HHH1.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    That's a tautology. HHH only sees the behavior that HHH has seen in
    order to convince itself that seeing more behavior is not necessary.

    Yes?

    *When HHH reports on the actual behavior that it*
    *actually sees then HHH(DDD)==0 IS CORRECT*

    The simulation which HHH has abandoned can be
    continued to show that DDD halts, which indicates
    that the 0 is incorrect.

    *That is why I said it is so important for you to*
    *carefully study this carefully annotated execution*
    *trace instead of continuing to totally ignore it*

    Why don't you properly port it to Linux. Modern Linux toolchains do not produce COFF object files, only ELF. There evidently aren't any options
    you can supply to get a COFF object file.

    hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option. I don't have objcopy to
    test, but PO only needs a few basic COFF capabilities, so it might be enough...

    <https://man7.org/linux/man-pages/man1/objcopy.1.html>

    Mike.


    You have an ELF-handling class and COFF_handling class, but you
    neglected to create an easily switchable abstraction; you have
    hard-coded use of the COFF class in a whole bunch of places.

    HHH and HHH1 have identical source code except
    for their name.

    You keep repeating this without explaining what you
    believe to be the significance of it, or why HHH1
    has to be present at all.

    The DDD of HHH1(DDD) has identical
    behavior to the directly executed DDD().

    And you know that the directly executed DDD is halting; you have
    acknowledged that multiple times and brushed it aside.

    You know that when HHH1(DDD) executes its RET instruction,
    the return register EAX is nonzero.

    DDD calls HHH(DDD) in recursive emulation. DDD does
    not call HHH1 at all.

    That's why HHH1 has a shot at deciding DDD correctly
    by returning nonzero. HHH1 is not embroiled in the diagonal pair
    HHH/DDD.

    This is why the behavior
    of DDD.HHH1 is different than the behavior of DDD.HHH

    There is only one DDD with one behavior. If something else is observed, something is seriously wrong. Luckily, something is not seriously wrong
    (in that regard).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From joes@noreply@example.org to comp.theory on Mon Sep 15 21:01:20 2025
    From Newsgroup: comp.theory

    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

    From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?

    Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
    DDD correctly simulated by HHH1 is identical to the behavior of the
    directly executed DDD().
    When we have emulation compared to emulation we are comparing Apples to Apples and not Apples to Oranges.

    You have yet to explain the significance of HHH1.
    Just did.
    Why are you comparing at all?

    HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
    abort DD itself.
    That's a tautology. HHH only sees the behavior that HHH has seen in
    order to convince itself that seeing more behavior is not necessary.
    Yes?
    HHH has complete proof that DDD correctly simulated by HHH cannot
    possibly reach its own simulated final halt state.
    So yes. What is the proof?

    Exactly what execution trace details would HHH need to see to correctly conclude beyond all possible doubt that Infinite_Recursion() never
    halts?
    It would need to see the whole input, if that includes itself.

    *When HHH reports on the actual behavior that it* *actually sees then
    HHH(DDD)==0 IS CORRECT*
    The same tautology still applies to my null simulator.

    The simulation which HHH has abandoned can be continued to show that
    DDD halts, which indicates that the 0 is incorrect.
    Try and trace through every single detail of every single step to show
    this.
    Why don't you show how DDD doesn't halt?

    HHH and HHH1 have identical source code except for their name.

    You keep repeating this without explaining what you believe to be the
    significance of it, or why HHH1 has to be present at all.

    I explained the crucial importance of this many hundreds of times over several years and not one single person ever paid any attention at all
    to any of the details.
    Only because you spam. So what is the reason?

    People are so damned sure that I am wrong that whenever I explain the
    details of the proof that I am correct they never ever hear anything
    beside blah, blah, blah...
    TBF you don't explain anything else.

    I can give you 100 megabytes of that if it will help you see that I ever
    said it at least once.
    It wouldn't help. It's not a matter of noticing.
    --
    Am Sat, 20 Jul 2024 12:35:31 +0000 schrieb WM in sci.math:
    It is not guaranteed that n+1 exists for every n.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 16:08:17 2025
    From Newsgroup: comp.theory

    On 9/15/2025 3:59 PM, Mike Terry wrote:
    On 15/09/2025 18:27, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
    The presence of HHH1 is irrelevant and uninteresting, except insofar
    that he's concealing that it returns 1.


    You get this same information when DDD of HHH1
    reaches [00002194].

    I try to make it as simple as possible so that you
    can keep track of every detail of the execution trace of

    DDD correctly simulated by HHH
    DDD correctly simulated by HHH1

    *So that you can directly see that*

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    That is obviously false. HHH1(DD) begins simulating
    DD from the entry to DD, and creating execution
    trace entries, before DD reaches its HHH(DD) call.

    You know this, which is why in your execution traces you include this
    remark:

       *This is the beginning of the divergence of the behavior*
       *of DDD emulated by HHH versus DDD emulated by HHH1*

    Before the divergence, there are behaviors, and they are not
    divergent.

    Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.

    You have yet to explain the significance of HHH1.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    That's a tautology. HHH only sees the behavior that HHH has seen in
    order to convince itself that seeing more behavior is not necessary.

    Yes?

    *When HHH reports on the actual behavior that it*
    *actually sees then HHH(DDD)==0 IS CORRECT*

    The simulation which HHH has abandoned can be
    continued to show that DDD halts, which indicates
    that the 0 is incorrect.

    *That is why I said it is so important for you to*
    *carefully study this carefully annotated execution*
    *trace instead of continuing to totally ignore it*

    Why don't you properly port it to Linux. Modern Linux toolchains do not
    produce COFF object files, only ELF.  There evidently aren't any options
    you can supply to get a COFF object file.

    hmm, maybe objcopy can convert ELF to COFF?  It has a -O bfdname
    option.  I don't have objcopy to test, but PO only needs a few basic
    COFF capabilities, so it might be enough...

      <https://man7.org/linux/man-pages/man1/objcopy.1.html>

    Mike.


    It must be fully integrated into my code. https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h
    I just never got around to finishing that.

    https://github.com/plolcott/x86utm/blob/master/include/Read_COFF_Object.h
    is the one that I use.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 16:26:03 2025
    From Newsgroup: comp.theory

    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

    From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?

    Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
    DDD correctly simulated by HHH1 is identical to the behavior of the
    directly executed DDD().
    When we have emulation compared to emulation we are comparing Apples to
    Apples and not Apples to Oranges.

    You have yet to explain the significance of HHH1.
    Just did.
    Why are you comparing at all?

    HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
    abort DD itself.
    That's a tautology. HHH only sees the behavior that HHH has seen in
    order to convince itself that seeing more behavior is not necessary.
    Yes?
    HHH has complete proof that DDD correctly simulated by HHH cannot
    possibly reach its own simulated final halt state.

    So yes. What is the proof?


    It is apparently over everyone's head.

    void Infinite_Recursion()
    {
      Infinite_Recursion();
      return;
    }

    How can a program know with complete
    certainty that Infinite_Recursion()
    never halts?

    It is apparently over everyone's head.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Mon Sep 15 14:30:18 2025
    From Newsgroup: comp.theory

    On 9/15/2025 2:26 PM, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?

    Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
    DDD correctly simulated by HHH1 is identical to the behavior of the
    directly executed DDD().
    When we have emulation compared to emulation we are comparing Apples to
    Apples and not Apples to Oranges.

    You have yet to explain the significance of HHH1.
    Just did.
    Why are you comparing at all?

    HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
    abort DD itself.
    That's a tautology. HHH only sees the behavior that HHH has seen in
    order to convince itself that seeing more behavior is not necessary.
    Yes?
    HHH has complete proof that DDD correctly simulated by HHH cannot
    possibly reach its own simulated final halt state.

    So yes. What is the proof?


    It is apparently over everyone's head.

    void Infinite_Recursion()
    {
      Infinite_Recursion();
      return;
    }

    How can a program know with complete
    certainty that Infinite_Recursion()
    never halts?

    Ummm... Your Infinite_Recursion example is basically akin to:

    10 PRINT "Halt" : GOTO 10

    Right? It says halt but does not... ;^)





    It is apparently over everyone's head.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 16:32:21 2025
    From Newsgroup: comp.theory

    On 9/15/2025 4:30 PM, Chris M. Thomasson wrote:
    On 9/15/2025 2:26 PM, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?

    Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
    DDD correctly simulated by HHH1 is identical to the behavior of the
    directly executed DDD().
    When we have emulation compared to emulation we are comparing Apples to
    Apples and not Apples to Oranges.

    You have yet to explain the significance of HHH1.
    Just did.
    Why are you comparing at all?

    HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
    abort DD itself.
    That's a tautology. HHH only sees the behavior that HHH has seen in
    order to convince itself that seeing more behavior is not necessary.
    Yes?
    HHH has complete proof that DDD correctly simulated by HHH cannot
    possibly reach its own simulated final halt state.

    So yes. What is the proof?


    It is apparently over everyone's head.

    void Infinite_Recursion()
    {
       Infinite_Recursion();
       return;
    }

    How can a program know with complete
    certainty that Infinite_Recursion()
    never halts?

    Ummm... Your Infinite_Recursion example is basically akin to:

    10 PRINT "Halt" : GOTO 10

    Right? It says halt but does not... ;^)


    You aren't this stupid on the other forums
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Mon Sep 15 14:35:18 2025
    From Newsgroup: comp.theory

    On 9/15/2025 2:32 PM, olcott wrote:
    On 9/15/2025 4:30 PM, Chris M. Thomasson wrote:
    On 9/15/2025 2:26 PM, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?

    Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
    DDD correctly simulated by HHH1 is identical to the behavior of the
    directly executed DDD().
    When we have emulation compared to emulation we are comparing Apples to
    Apples and not Apples to Oranges.

    You have yet to explain the significance of HHH1.
    Just did.
    Why are you comparing at all?

    HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
    abort DD itself.
    That's a tautology. HHH only sees the behavior that HHH has seen in
    order to convince itself that seeing more behavior is not necessary.
    Yes?
    HHH has complete proof that DDD correctly simulated by HHH cannot
    possibly reach its own simulated final halt state.

    So yes. What is the proof?


    It is apparently over everyone's head.

    void Infinite_Recursion()
    {
       Infinite_Recursion();
       return;
    }

    How can a program know with complete
    certainty that Infinite_Recursion()
    never halts?

    Check...


    Ummm... Your Infinite_Recursion example is basically akin to:

    10 PRINT "Halt" : GOTO 10

    Right? It says halt but does not... ;^)


    You aren't this stupid on the other forums



    Well, what are you trying to say here? That the following might halt?

    void Infinite_Recursion()
    {
      Infinite_Recursion();
      return;
    }

    I think not. Blowing the stack is not the same as halting...
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Mon Sep 15 21:57:42 2025
    From Newsgroup: comp.theory

    On 2025-09-15, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 15/09/2025 18:27, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    Why don't you properly port it to Linux. Modern Linux toolchains do not
    produce COFF object files, only ELF. There evidently aren't any options
    you can supply to get a COFF object file.

    hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option. I don't have objcopy to

    I tried that a bunch of weeks ago. It's not there.

    Basically COFF is gone. It appears to be thoroughly deprecated and not
    present on platforms that don't need it.

    It seems it would make sense for Olcott's code not to rely on these
    formats, and just use dlopen()/dlsym() or LoadLibrary() and
    GetProcAddress, with his build system making a .so or .dll.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Mon Sep 15 21:59:12 2025
    From Newsgroup: comp.theory

    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    It must be fully integrated into my code. https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h
    I just never got around to finishing that.

    https://github.com/plolcott/x86utm/blob/master/include/Read_COFF_Object.h
    is the one that I use.

    Why don't you just make .dll or .so objects out of the
    test case and use LoadLibrary/GetProcAddress on Windows,
    and dlopen/dlsym on Linux.

    Under Cygwin, you can use dlopen/dlsym on Windows.
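
    A minimal sketch of that approach, assuming a POSIX system; libm is used
    here as a stand-in shared object, and call_from_shared_object is an
    illustrative helper, not anything in x86utm:

    ```c
    #include <dlfcn.h>   /* dlopen, dlsym, dlclose, dlerror */
    #include <stdio.h>

    typedef double (*unary_fn)(double);

    /* Open the shared object `libname`, resolve `symbol` as a
       double(double) function, apply it to x, and clean up.
       Returns 0.0 if the library or symbol cannot be found. */
    double call_from_shared_object(const char *libname, const char *symbol,
                                   double x)
    {
        void *handle = dlopen(libname, RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 0.0;
        }
        unary_fn fn = (unary_fn)dlsym(handle, symbol);
        double result = fn ? fn(x) : 0.0;
        dlclose(handle);
        return result;
    }
    ```

    With x86utm the test case would instead be built as a DDD.so (or .dll)
    and the decider would resolve DDD the same way, with no object-file
    parsing at all.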
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Mon Sep 15 22:09:10 2025
    From Newsgroup: comp.theory

    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
    The presence of HHH1 is irrelevant and uninteresting, except insofar
    that he's concealing that it returns 1.


    You get this same information when DDD of HHH1
    reaches [00002194].

    I try to make it as simple as possible so that you
    can keep track of every detail of the execution trace of

    DDD correctly simulated by HHH
    DDD correctly simulated by HHH1

    *So that you can directly see that*

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    That is obviously false. HHH1(DD) begins simulating
    DD from the entry to DD, and creating execution
    trace entries, before DD reaches its HHH(DD) call.


    From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.

    But that's already true from the POV of the native x86
    processor, if we run DD() out of main.

    If DD is correctly a pure computation/TM, that must be true regardless
    of the context in which HHH(DD) is invoked; HHH(DD)
    returns, period.

    So again, what is the point of introducing HHH1.

    You know this, which is why in your execution traces you include this
    remark:

    *This is the beginning of the divergence of the behavior*
    *of DDD emulated by HHH versus DDD emulated by HHH1*

    Before the divergence, there are behaviors, and they are not
    divergent.


    HHH1 cannot see the behavior of DDD correctly
    simulated by HHH. HHH1 has no idea that HHH is
    identical to itself.

    We have no such idea either; HHH1 is not identical to HHH.

    Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.

    DDD correctly simulated by HHH1 is identical to
    the behavior of the directly executed DDD().

    When we have emulation compared to emulation we are
    comparing Apples to Apples and not Apples to Oranges.

    So HHH1 enables you to say, "See? DD halts even when
    it is emulated, not just when it is natively executed!"

    While that is great and true, how does that help you?

    It makes your rhetoric more far-fetched that DD simulated under HHH
    should be different from a simulated DD.

    If that is so, only one of those two can be a correct
    simulation.

    You have yet to explain the significance of HHH1.

    Just did.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    That's a tautology. HHH only sees the behavior that HHH has seen in
    order to convince itself that seeing more behavior is not necessary.

    Yes?


    HHH has complete proof that DDD correctly
    simulated by HHH cannot possibly reach its
    own simulated final halt state.

    If HHH has "complete proof", where does that leave HHH1?

    That you do not understand this complete
    proof is no actual rebuttal whatsoever.

    You are a very smart guy. The way around this
    is for you to figure out on your own how HHH
    would correctly determine that:

    void Infinite_Recursion()
    {
      Infinite_Recursion();
      return;
    }

    Would never halt.

    It would see that a CALL appears twice to the same address,
    without an intervening conditional jump instruction (in
    a carefully curated execution trace).

    What did I win?
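
    That rule can be sketched concretely. A toy version, assuming the trace
    has already been reduced to (address, kind, target) records; the
    TraceEntry type and the function name are illustrative, not part of
    x86utm:

    ```c
    #include <stddef.h>

    /* One entry of a (hypothetical) curated execution trace: the
       instruction's own address, its kind, and, for CALLs, the target. */
    typedef enum { OP_CALL, OP_COND_JUMP, OP_OTHER } OpKind;

    typedef struct {
        unsigned address;   /* address of the instruction itself */
        OpKind   kind;
        unsigned target;    /* call target; meaningful only for OP_CALL */
    } TraceEntry;

    /* Returns 1 if the trace contains two CALLs to the same target with
       no conditional jump between them -- the pattern described above
       for flagging Infinite_Recursion(). Returns 0 otherwise. */
    int repeated_call_no_cond_jump(const TraceEntry *t, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (t[i].kind != OP_CALL)
                continue;
            for (size_t j = i + 1; j < n; j++) {
                if (t[j].kind == OP_COND_JUMP)
                    break;          /* a branch intervenes; pattern broken */
                if (t[j].kind == OP_CALL && t[j].target == t[i].target)
                    return 1;       /* same target called again */
            }
        }
        return 0;
    }
    ```

    Applied to a trace of Infinite_Recursion(), the consecutive CALLs to
    the same target with no conditional branch between them make it return
    1; any conditional jump between the calls resets the pattern.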

    I explained the crucial importance of this many hundreds
    of times over several years and not one single person
    ever paid any attention at all to any of the details.

    *IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*

    HHH1(DD) returning 1 is the reason HHH(DD) == 0
    is correct?

    What?
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Tue Sep 16 00:34:25 2025
    From Newsgroup: comp.theory

    On 15/09/2025 22:01, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

    From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?

    Halt? HALT?

    It doesn't halt - it STOPS!

    Do try to keep up. :-)
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 18:52:42 2025
    From Newsgroup: comp.theory

    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

    From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


    DD.HHH1 halts; DDD.HHH cannot possibly reach
    its final halt state.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 19:00:13 2025
    From Newsgroup: comp.theory

    On 9/15/2025 5:09 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
    The presence of HHH1 is irrelevant and uninteresting, except insofar
    that he's concealing that it returns 1.


    You get this same information when DDD of HHH1
    reaches [00002194].

    I try to make it as simple as possible so that you
    can keep track of every detail of the execution trace of

    DDD correctly simulated by HHH
    DDD correctly simulated by HHH1

    *So that you can directly see that*

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    That is obviously false. HHH1(DD) begins simulating
    DD from the entry to DD, and creating execution
    trace entries, before DD reaches its HHH(DD) call.


    From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.

    But that's already true from the POV of the native x86
    processor, if we run DD() out of main.

    If DD is correctly a pure computation/TM,
    DD.exe halts
    DD.HHH1 halts
    DDD.HHH cannot possibly reach its own final halt state.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Sep 16 01:06:19 2025
    From Newsgroup: comp.theory

    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

    From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


    DD.HHH1 halts. DDD.HHH cannot possibly reach
    its final halt state.

    I.e. it *has* a final halt state; it is a halting computation!
    A non-halting computation doesn't have a final halt state.

    "Cannot possibly reach": is just rhetoric about a particular simulation
    of it conducted by a the HHH decider not being taken to sufficient
    completion.

    If the abandoned simulation is picked up and continued,
    it will be shown to terminate.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Sep 16 01:09:32 2025
    From Newsgroup: comp.theory

    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    On 9/15/2025 5:09 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.

    But that's already true from the POV of the native x86
    processor, if we run DD() out of main.

    If DD is correctly a pure computation/TM,
    DD.exe halts
    DD.HHH1 halts
    DDD.HHH cannot possibly reach its own final halt state.

    Yes it can; all we have to do is add some clean-up code at the
    end of main() which picks up the abandoned simulation that HHH left
    behind and starts stepping where it left off. That simulation will be
    shown to terminate, indicating that HHH made the wrong call when it
    abandoned that simulation and called it non-terminating.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Tue Sep 16 02:53:35 2025
    From Newsgroup: comp.theory

    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


    DD.HHH1 halts. DDD.HHH cannot possibly reach
    its final halt state.

    And yet it does.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Tue Sep 16 02:59:14 2025
    From Newsgroup: comp.theory

    On 15/09/2025 22:57, Kaz Kylheku wrote:
    On 2025-09-15, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 15/09/2025 18:27, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    Why don't you properly port it to Linux. Modern Linux toolchains do not
    produce COFF object files, only ELF. There evidently aren't any options
    you can supply to get a COFF object file.

    hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option. I don't have objcopy to

    I tried that a bunch of weeks ago. It's not there.

    Basically COFF is gone. It appears to be thoroughly deprecated and not present on platforms that don't need it.

    It seems it would make sense for Olcott's code not to rely on these
    formats, and just use dlopen()/dlsym() or LoadLibrary() and
    GetProcAddress, with his build system making a .so or .dll.
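    [Editorial note: a minimal sketch of the dlopen()/dlsym() route suggested here, assuming the test case were built as an ordinary shared object. The helper name lookup_symbol and the object name halt7.so are invented for illustration; this is not the project's actual code.]

    ```c
    /* Minimal sketch of run-time symbol lookup, replacing COFF/ELF file
     * parsing.  RTLD_DEFAULT searches every object already loaded into
     * the process; a real build would instead do
     * dlopen("halt7.so", RTLD_NOW) and dlsym() on that handle. */
    #define _GNU_SOURCE
    #include <dlfcn.h>

    void *lookup_symbol(const char *name)
    {
        /* returns NULL when the symbol is not found in any loaded object */
        return dlsym(RTLD_DEFAULT, name);
    }
    ```

    With this approach the build system produces an ordinary .so/.dll and the OS loader does all the relocation, at the cost of losing the raw section-level view that the COFF reader provides.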


    The problem I see is PO needs to single step his simulations. Under Windows there are Debug system
    APIs, but that doesn't sound straightforward at all. (They enable one process to debug a second
    process, setting breakpoints, inspecting memory and so on. You rather need a good understanding of
    what you're doing! I don't know what's available in the unix world. To me this sounds much harder
    than adding ELF support...) Probably you had a different plan, I'm just guessing.

    Hmm, I imagine you could get the x86utm program built on your platform if you had to. Maybe there's
    some server somewhere, where you could send halt7.c for MSVC compilation? Probably that would run
    into licensing issues or something, although there's a free MSVC compiler (for Windows users)...


    Mike.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 20:59:30 2025
    From Newsgroup: comp.theory

    On 9/15/2025 8:53 PM, Richard Heathfield wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


    DD.HHH1 halts. DDD.HHH cannot possibly reach
    its final halt state.

    And yet it does.


    DD.HHH cannot possibly reach its own final halt state.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 21:04:56 2025
    From Newsgroup: comp.theory

    On 9/15/2025 8:59 PM, Mike Terry wrote:
    On 15/09/2025 22:57, Kaz Kylheku wrote:
    On 2025-09-15, Mike Terry
    <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 15/09/2025 18:27, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    Why don't you properly port it to Linux. Modern Linux toolchains do not
    produce COFF object files, only ELF. There evidently aren't any options
    you can supply to get a COFF object file.

    hmm, maybe objcopy can convert ELF to COFF?  It has a -O bfdname
    option.  I don't have objcopy to

    I tried that a bunch of weeks ago. It's not there.

    Basically COFF is gone. It appears to be thoroughly deprecated and not
    present on platforms that don't need it.

    It seems it would make sense for Olcott's code not to rely on these
    formats, and just use dlopen()/dlsym() or LoadLibrary() and
    GetProcAddress, with his build system making a .so or .dll.


    The problem I see is PO needs to single step his simulations.  Under
    Windows there are Debug system APIs, but that doesn't sound
    straightforward at all.  (They enable one process to debug a second
    process, setting breakpoints, inspecting memory and so on.  You rather
    need a good understanding of what you're doing!  I don't know what's
    available in the unix world.  To me this sounds much harder than adding
    ELF support...)  Probably you had a different plan, I'm just guessing.

    Hmm, I imagine you could get the x86utm program built on your platform
    if you had to.  Maybe there's some server somewhere, where you could
    send halt7.c for MSVC compilation?  Probably that would run into
    licensing issues or something, although there's a free MSVC compiler
    (for Windows users)...


    Mike.


    x86utm works just fine under Linux.
    libx86emu was written for Linux.
    I ported it to Windows and kept Linux
    compatibility.

    https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h
    would need to be completed using https://github.com/plolcott/x86utm/blob/master/include/Read_COFF_Object.h
    as the model.

    Most of the work is already done by
    #include "elf.h"
    https://github.com/plolcott/x86utm/blob/master/include/elf.h
    A reliable third party file.
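    [Editorial note: as a small illustration of what completing Read_ELF_Object.h involves, the standard elf.h definitions already cover identifying a 32-bit ELF image; a sketch, with looks_like_elf32 as an invented helper name.]

    ```c
    /* Sketch: validate that a buffer begins with a 32-bit ELF header,
     * using only the standard definitions from elf.h.  Walking the
     * section headers (the part Read_COFF_Object.h models for COFF)
     * would then proceed from the e_shoff/e_shnum fields. */
    #include <elf.h>
    #include <string.h>

    int looks_like_elf32(const unsigned char *buf, size_t len)
    {
        if (len < sizeof(Elf32_Ehdr))
            return 0;                        /* too short for a header */
        if (memcmp(buf, ELFMAG, SELFMAG) != 0)
            return 0;                        /* bad magic "\177ELF" */
        return buf[EI_CLASS] == ELFCLASS32;  /* 32-bit, matching x86utm */
    }
    ```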
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Sep 16 02:07:46 2025
    From Newsgroup: comp.theory

    On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 15/09/2025 22:57, Kaz Kylheku wrote:
    On 2025-09-15, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 15/09/2025 18:27, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    Why don't you properly port it to Linux. Modern Linux toolchains do not
    produce COFF object files, only ELF. There evidently aren't any options
    you can supply to get a COFF object file.

    hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option. I don't have objcopy to

    I tried that a bunch of weeks ago. It's not there.

    Basically COFF is gone. It appears to be thoroughly deprecated and not
    present on platforms that don't need it.

    It seems it would make sense for Olcott's code not to rely on these
    formats, and just use dlopen()/dlsym() or LoadLibrary() and
    GetProcAddress, with his build system making a .so or .dll.


    The problem I see is PO needs to single step his simulations. Under Windows there are Debug system
    APIs, but that doesn't sound straightforward at all. (They enable one process to debug a second
    process, setting breakpoints, inspecting memory and so on. You rather need a good understanding of
    what you're doing! I don't know what's available in the unix world. To me this sounds much harder
    than adding ELF support...) Probably you had a different plan, I'm just guessing.

    I mean, he could just bootstrap into a simulation inside main:

    void sim_main(void)
    {
        if (HHH(DD)) ... etc
    }

    int main(void)
    {
        Simulate(sim_main);
    }

    Then his own Debug_Step would be stepping everything in software; no
    need to mess around with the host system's access to processor single
    stepping. Plus he could record one of his famous execution traces for
    the entire test case starting at sim_main.
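    [Editorial note: the all-in-software stepping described above can be sketched with a toy machine. Every name here (toy_cpu, step, run_with_budget) is invented; the point is only the shape of the loop a simulating decider would drive.]

    ```c
    /* Toy sketch of software single stepping: the simulator advances the
     * machine one instruction at a time, could record a trace entry per
     * step, and stops either at the final state or when a step budget
     * runs out (the "abort" case). */
    struct toy_cpu { int pc; int halted; };

    /* one instruction: the toy program counts pc up to 5, then halts */
    static void step(struct toy_cpu *cpu)
    {
        if (++cpu->pc >= 5)
            cpu->halted = 1;
    }

    /* returns 1 if the simulated machine reached its final state within
     * the budget, 0 if the simulation had to be abandoned */
    int run_with_budget(struct toy_cpu *cpu, int budget)
    {
        while (budget-- > 0) {
            step(cpu);            /* a trace entry would be logged here */
            if (cpu->halted)
                return 1;
        }
        return 0;
    }
    ```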
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Tue Sep 16 03:10:14 2025
    From Newsgroup: comp.theory

    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


    DD.HHH1 halts. DDD.HHH cannot possibly reach
    its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Sep 16 02:11:07 2025
    From Newsgroup: comp.theory

    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    x86utm works just fine under Linux.
    ^^^^^^^^^^^^^^

    Present tense!

    https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h would need to be completed using
    ^^^^^^^^^^^^^^^^^^^^^^^^^^

    Subjunctive!

    Story of your life, eh! Claiming that stuff works now but, oh, if
    so and so would be implemented.

    Oh, I disproved the Halting Theorem beyond a doubt ... except I would
    have to make sure my HHH is a pure function, and this and that ...
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Sep 16 02:17:39 2025
    From Newsgroup: comp.theory

    On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


    DD.HHH1 halts. DDD.HHH cannot possibly reach
    its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?

    This is a kind of notation that he invented where A.B means a
    B-attributed variant of A, which is to be understood as a potentially
    different A from some C-attributed variant A.C.

    It's a way of saying that there is a DD_HHH1 and DD_HHH variant of DD,
    except the dot is used so that DD is understood to be one C
    function definition, and .HHH1 is an abstract attribute.

    It came about when I explained that when you have multiple decider
    behaviors, you cannot roll them into one function; you should have a
    HHH1, HHH2, HHH3, and those need to use diagonal programs DD_HHH1,
    DD_HHH2, and so on, and then make sure these are all pure.

    He changed that underscore to a dot to keep using a single DD function
    and perverted the naming to his own purposes. Or that's vaguely how I
    remember it.

    He soon invented the .exe attribute as well (DD.exe) to denote that
    DD that is executed by the host processor rather than simulated.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 21:37:03 2025
    From Newsgroup: comp.theory

    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


    DD.HHH1 halts. DDD.HHH cannot possibly reach
    its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DDD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    _DDD()
    [00002183] 55 push ebp
    [00002184] 8bec mov ebp,esp
    [00002186] 6883210000 push 00002183 ; push DDD
    [0000218b] e833f4ffff call 000015c3 ; call HHH
    [00002190] 83c404 add esp,+04
    [00002193] 5d pop ebp
    [00002194] c3 ret
    Size in bytes:(0018) [00002194]

    _main()
    [000021a3] 55 push ebp
    [000021a4] 8bec mov ebp,esp
    [000021a6] 6883210000 push 00002183 ; push DDD
    [000021ab] e843f3ffff call 000014f3 ; call HHH1
    [000021b0] 83c404 add esp,+04
    [000021b3] 33c0 xor eax,eax
    [000021b5] 5d pop ebp
    [000021b6] c3 ret
    Size in bytes:(0020) [000021b6]

    machine    stack      stack    machine    assembly
    address    address    data     code       language
    ========   ========   ======== ========== =============
    [000021a3][0010382d][00000000] 55         push ebp      ; main()
    [000021a4][0010382d][00000000] 8bec       mov ebp,esp   ; main()
    [000021a6][00103829][00002183] 6883210000 push 00002183 ; push DDD
    [000021ab][00103825][000021b0] e843f3ffff call 000014f3 ; call HHH1
    New slave_stack at:1038d1

    Begin Local Halt Decider Simulation Execution Trace Stored at:1138d9
    [00002183][001138c9][001138cd] 55         push ebp      ; DDD of HHH1
    [00002184][001138c9][001138cd] 8bec       mov ebp,esp   ; DDD of HHH1
    [00002186][001138c5][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][001138c1][00002190] e833f4ffff call 000015c3 ; call HHH[0]
    New slave_stack at:14e2f9

    Begin Local Halt Decider Simulation Execution Trace Stored at:15e301
    [00002183][0015e2f1][0015e2f5] 55         push ebp      ; DDD of HHH[0]
    [00002184][0015e2f1][0015e2f5] 8bec       mov ebp,esp   ; DDD of HHH[0]
    [00002186][0015e2ed][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][0015e2e9][00002190] e833f4ffff call 000015c3 ; call HHH[1]
    New slave_stack at:198d21

    *This is the beginning of the divergence of the behavior*
    *of DDD emulated by HHH versus DDD emulated by HHH1*

    [00002183][001a8d19][001a8d1d] 55         push ebp      ; DDD of HHH[1]
    [00002184][001a8d19][001a8d1d] 8bec       mov ebp,esp   ; DDD of HHH[1]
    [00002186][001a8d15][00002183] 6883210000 push 00002183 ; push DDD
    [0000218b][001a8d11][00002190] e833f4ffff call 000015c3 ; call HHH[2]
    Local Halt Decider: Infinite Recursion Detected Simulation Stopped

    [00002190][001138c9][001138cd] 83c404     add esp,+04   ; DDD of HHH1
    [00002193][001138cd][000015a8] 5d         pop ebp       ; DDD of HHH1
    [00002194][001138d1][0003a980] c3         ret           ; DDD of HHH1
    [000021b0][0010382d][00000000] 83c404     add esp,+04   ; main()
    [000021b3][0010382d][00000000] 33c0       xor eax,eax   ; main()
    [000021b5][00103831][00000018] 5d         pop ebp       ; main()
    [000021b6][00103835][00000000] c3         ret           ; main()
    Number of Instructions Executed(352831) == 5266 Pages
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 22:01:35 2025
    From Newsgroup: comp.theory

    On 9/15/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    x86utm works just fine under Linux.
    ^^^^^^^^^^^^^^

    Present tense!

    https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h
    would need to be completed using
    ^^^^^^^^^^^^^^^^^^^^^^^^^^

    Subjunctive!

    Story of your life, eh! Claiming that stuff works now but, oh, if
    so and so would be implemented.

    Oh, I disproved the Halting Theorem beyond a doubt ... except I would
    have to make sure my HHH is a pure function, and this and that ...

    I ran x86utm under Linux with COFF object file input.
    To run it with ELF input it needs a little work.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Sep 16 03:26:01 2025
    From Newsgroup: comp.theory

    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    On 9/15/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    x86utm works just fine under Linux.
    ^^^^^^^^^^^^^^

    Present tense!

    https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h
    would need to be completed using
    ^^^^^^^^^^^^^^^^^^^^^^^^^^

    Subjunctive!

    Story of your life, eh! Claiming that stuff works now but, oh, if
    so and so would be implemented.

    Oh, I disproved the Halting Theorem beyond a doubt ... except I would
    have to make sure my HHH is a pure function, and this and that ...

    I ran x86utm under Linux with COFF object file input.

    The problem is that toolchains on modern Linux do not produce COFF
    object files, even as an option. The support is removed.

    Did you copy them over from your Windows installation?
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 22:34:05 2025
    From Newsgroup: comp.theory

    On 9/15/2025 10:26 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    On 9/15/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    x86utm works just fine under Linux.
    ^^^^^^^^^^^^^^

    Present tense!

    https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h
    would need to be completed using
    ^^^^^^^^^^^^^^^^^^^^^^^^^^

    Subjunctive!

    Story of your life, eh! Claiming that stuff works now but, oh, if
    so and so would be implemented.

    Oh, I disproved the Halting Theorem beyond a doubt ... except I would
    have to make sure my HHH is a pure function, and this and that ...

    I ran x86utm under Linux with COFF object file input.

    The problem is that toolchains on modern Linux do not produce COFF
    object files, even as an option. The support is removed.

    Did you copy them over from your Windows installation?


    Well duh.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 22:35:23 2025
    From Newsgroup: comp.theory

    On 9/15/2025 10:26 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    On 9/15/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    x86utm works just fine under Linux.
    ^^^^^^^^^^^^^^

    Present tense!

    https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h
    would need to be completed using
    ^^^^^^^^^^^^^^^^^^^^^^^^^^

    Subjunctive!

    Story of your life, eh! Claiming that stuff works now but, oh, if
    so and so would be implemented.

    Oh, I disproved the Halting Theorem beyond a doubt ... except I would
    have to make sure my HHH is a pure function, and this and that ...

    I ran x86utm under Linux with COFF object file input.

    The problem is that toolchains on modern Linux do not produce COFF
    object files, even as an option. The support is removed.

    Did you copy them over from your Windows installation?


    Only the file named Halt7.c needs to be compiled
    into COFF as the input parameter on the x86utm.exe command line.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Tue Sep 16 04:36:22 2025
    From Newsgroup: comp.theory

    On 16/09/2025 03:07, Kaz Kylheku wrote:
    On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 15/09/2025 22:57, Kaz Kylheku wrote:
    On 2025-09-15, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 15/09/2025 18:27, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    Why don't you properly port it to Linux. Modern Linux toolchains do not
    produce COFF object files, only ELF. There evidently aren't any options
    you can supply to get a COFF object file.

    hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option. I don't have objcopy to

    I tried that a bunch of weeks ago. It's not there.

    Basically COFF is gone. It appears to be thoroughly deprecated and not
    present on platforms that don't need it.

    It seems it would make sense for Olcott's code not to rely on these
    formats, and just use dlopen()/dlsym() or LoadLibrary() and
    GetProcAddress, with his build system making a .so or .dll.


    The problem I see is PO needs to single step his simulations. Under Windows there are Debug system
    APIs, but that doesn't sound straightforward at all. (They enable one process to debug a second
    process, setting breakpoints, inspecting memory and so on. You rather need a good understanding of
    what you're doing! I don't know what's available in the unix world. To me this sounds much harder
    than adding ELF support...) Probably you had a different plan, I'm just guessing.

    I mean, he could just bootstrap into a simulation inside main:

    void sim_main(void)
    {
        if (HHH(DD)) ... etc
    }

    int main(void)
    {
        Simulate(sim_main);
    }

    I'm not sure I get this. The above looks like part of an OS executable, e.g. halt7.exe on Windows.
    Or more likely a dynamic library halt7.dll on Windows, which would be loaded by the main executable,
    akin to x86utm.exe? (So main/Simulate above would be in the main executable, and halt7.xxxx
    dynamically loaded - that seems cleanest.)

    So Simulate creates the libx86emu virtual x86 machine, and ... what?

    It would need to initialise that virtual address space with sim_main, HHH, DD and so on, but they're
    in halt7.exe processor address space, loaded by the OS after relocating it who knows where? Are you
    saying
    - initialise libx86emu virtual x86 memory by copying ... something ... from real memory?

    halt7.exe/dll memory might not even be 32-bit or x86-based. OK let's assume it is 32-bit x86. Hmm,
    I suppose we could copy the real memory range where halt7.exe/dll has been loaded to /the same/
    memory range in virtual memory [to avoid relocation problems]. How much memory to copy?? To get
    that we would have to do something platform specific - on Windows there's an API to get loaded
    module begin/end address ranges.

    Then Allocate() [within x86utm.exe] would need to work around the chunk taken out of the virtual
    address space, but I suppose it can do that.

    Well, that might work I suppose. The code in x86utm/halt7 would have much less information about
    the halt7 code than PO's implementation. Any function Simulate'd, and all "Primitive Op functions"
    would need to be "exported" so the OS can locate them, but that's ok. I'm not sure PO would be able
    to create his disassembly listing due to not knowing where all the functions start and end, or even
    where the "code" segment is, but the compiler can create such a listing (but not relocated to its
    load address, so less convenient).

    I suppose we've eliminated dependencies on COFF/ELF, but created some new OS dependencies.

    OK, Peter - looks like its over to you now to try it all out and report back!


    Then his own Debug_Step would be stepping everything in software; no
    need to mess around with the host system's access to processor single stepping. Plus he could record one of his famous execution traces for
    the entire test case starting at sim_main.

    That's what happens now in effect - x86utm.exe log starts tracing at the halt7.c main() function.
    The DebugStep code is shared I think by the x86utm main execution loop [that simulates halt7.c
    starting at main()] and by the halt7.c DebugStep primitive op. So the log trace entries all come
    out in the one log (preceded by the disassembly listing).


    Mike.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Tue Sep 16 04:52:33 2025
    From Newsgroup: comp.theory

    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


    DD.HHH1 halts. DDD.HHH cannot possibly reach
    its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1? You mean DD simulated by HHH1? And so DD.exe (I think I've seen
    that) would mean DD directly executed?

    The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you
    don't mean that (?)


    Mike.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Sep 15 23:13:18 2025
    From Newsgroup: comp.theory

    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


    DD.HHH1 halts DDD.HHH cannot possibly reach
    its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1?  You mean DD simulated by HHH1?  And so DD.exe (I think I've seen that) would mean DD directly executed?

    The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Sep 16 05:50:57 2025
    From Newsgroup: comp.theory

    On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    That's what happens now in effect - x86utm.exe log starts tracing at the halt7.c main() function.

    Then why does he keep claiming that main() is "native"? Nothing is
    "native" in the loaded COFF test case, then.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From joes@noreply@example.org to comp.theory on Tue Sep 16 08:14:37 2025
    From Newsgroup: comp.theory

    Am Mon, 15 Sep 2025 16:26:03 -0500 schrieb olcott:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

    From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
Yeah, why does HHH think that it doesn't halt *and then halts*?

    Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
    DDD correctly simulated by HHH1 is identical to the behavior of the
    directly executed DDD().
    When we have emulation compared to emulation we are comparing Apples
    to Apples and not Apples to Oranges.

    You have yet to explain the significance of HHH1.
    Just did.
    Why are you comparing at all?

    HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus
    must abort DD itself.
    That's a tautology. HHH only sees the behavior that HHH has seen in
    order to convince itself that seeing more behavior is not necessary.
    Yes?
    HHH has complete proof that DDD correctly simulated by HHH cannot
    possibly reach its own simulated final halt state.

    So yes. What is the proof?

    How can a program know with complete certainty that Infinite_Recursion() never halts?
    That is what I ask you! Among all the other questions above.
    --
    Am Sat, 20 Jul 2024 12:35:31 +0000 schrieb WM in sci.math:
    It is not guaranteed that n+1 exists for every n.
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Tue Sep 16 09:20:07 2025
    From Newsgroup: comp.theory

    On 16/09/2025 02:59, olcott wrote:
    On 9/15/2025 8:53 PM, Richard Heathfield wrote:
    On 16/09/2025 00:52, olcott wrote:

    <snip>


DD.HHH1 halts; DDD.HHH cannot possibly reach
its final halt state.

    And yet it does.


    DD.HHH cannot possibly reach its own final halt state.

    And yet it does.

    $ ./plaindd
    DD halted.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory on Tue Sep 16 10:20:29 2025
    From Newsgroup: comp.theory

    Op 15.sep.2025 om 18:40 schreef olcott:
    On 9/15/2025 2:48 AM, Fred. Zwarts wrote:
    Op 14.sep.2025 om 15:22 schreef olcott:
    void DDD()
    {
       HHH(DDD);
       return;
    }

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    HHH ONLY sees the behavior of DD *BEFORE* HHH
    has aborted DD thus must abort DD itself.

    Exactly! That is the error of HHH!
    I provided the full trace so that you could
    see that it is not any error. That you ignored
    this is not any sort of actual rebuttal.

    As usual claims without evidence.
    The traces exactly show what I claim: HHH does not simulate far enough.
    It aborts before it sees the final halt state.
    Don't try to prove me wrong by changing the input.
    For *this input* only one more cycle of simulation is needed, but HHH
    fails to do that.
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory on Tue Sep 16 10:29:57 2025
    From Newsgroup: comp.theory

    Op 16.sep.2025 om 03:59 schreef olcott:
    On 9/15/2025 8:53 PM, Richard Heathfield wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


DD.HHH1 halts; DDD.HHH cannot possibly reach
its final halt state.

    And yet it does.


    DD.HHH cannot possibly reach its own final halt state.


    Indeed, it has been programmed with a bug, so that it cannot reach its
    own final halt state. That does not prove that the final halt state does
    not exist.
    Not being able to reach something does not prove that it does not exist.
    Other simulators of exactly the same input prove that this input
    specifies a reachable final halt state.
    That HHH cannot reach it, therefore, is a failure of HHH, not a property
    of the program specified in the input.
HHH has been made blind to the full specification, and then comes your
belief that what you cannot see does not exist. Such a dream is not a
proof.
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory on Tue Sep 16 10:37:36 2025
    From Newsgroup: comp.theory

    Op 16.sep.2025 om 06:13 schreef olcott:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


DD.HHH1 halts; DDD.HHH cannot possibly reach
its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1 ?  You mean DD simulated by HHH1?  And
    so DD.exe (I think I've seen that) would mean DD directly executed?

    The alternative (which is what I would have guessed) is "DD which
    calls HHH1", but it seems you don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.
Exactly. Do you get it? HHH1 is able to reach the final halt state, but HHH
fails to reach that same final halt state specified in exactly the same
input.
    HHH aborts one cycle before it can see that the program will halt. It
    does not analyse whether a continued correct simulation would halt. The
    only reason to abort is the *assumption* that the finite recursion it
    sees must be interpreted as an infinite recursion, but it has failed to
    show any basis for this assumption.
    It has seen many conditional branch instructions during the finite
    recursion, but failed to prove that the other branches will not be
    followed in a correct continued simulation. HHH1 proves that they will.
It is clear that you have no counter-argument. You close your eyes to
these facts and pretend that they do not exist, because they disturb
your dreams. That is your attitude.
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Tue Sep 16 11:26:52 2025
    From Newsgroup: comp.theory

    On 16/09/2025 09:37, Fred. Zwarts wrote:

    <snip>

It is clear that you have no counter-argument. You close your
eyes to these facts and pretend that they do not exist, because
they disturb your dreams. That is your attitude.

    That I can live with.

    What puts him beyond the pale is that when we tell him the
    (self-evident!) truth, he throws his toys out of the pram,
    carelessly hurling insults around the group by calling good
    people fools and liars.

    That's a lot harder to forgive.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Tue Sep 16 16:33:45 2025
    From Newsgroup: comp.theory

    On 15/09/2025 23:09, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar as
he's concealing that it returns 1.


    You get this same information when DDD of HHH1
    reaches [00002194].

    I try to make it as simple as possible so that you
    can keep track of every detail of the execution trace of

    DDD correctly simulated by HHH
    DDD correctly simulated by HHH1

    *So that you can directly see that*

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    That is obviously false. HHH1(DD) begins simulating
    DD from the entry to DD, and creating execution
    trace entries, before DD reaches its HHH(DD) call.


    From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.

    But that's already true from the POV of the native x86
    processor, if we run DD() out of main.

    If DD is correctly a pure computation/TM, that must be true regardless
    of the context in which HHH(DD) is invoked; HHH(DD)
    returns, period.

    So again, what is the point of introducing HHH1.

    My suggestion: PO has seen DD() halt [when called from main()]. He has traces of DD halting! He
    /wants/ to say that when HHH simulates DD, DD's "behaviour" is different from the behaviour of DD
    called from main().

    But everyone else understands DD does what DD does, or at least assuming we're talking pure
    functions. The idea that a simulation goes different ways depending on who's performing the
    simulation is plain silly - one of PO's fantasies invented to try to maintain even sillier beliefs
    elsewhere [like that DD doesn't /really/ halt when it obviously does].

His HHH1 supports PO's narrative, in PO's mind - here we (supposedly) have two identical simulators
simulating the same DD, but they behave differently. That can only be explained (in PO's mind) by
invoking magical thinking about "pathological self reference".

    If PO were to acknowledge that all simulators just step along the "One True Path" of the computation
    step by step, up to the point they give up, he would lose his argument that HHH /simulating/ DD
    "sees" different halting behaviour from DD /directly executed/. Then his whole Linz proof argument
    would be seen by all as nonsense. [I have no doubt he could think up some other explanation which
    is even less logical if he needed to, in order to maintain his delusional framework, so there's no
    danger here of "sending PO over the edge". Such people will always "do whatever it takes" to
    maintain their delusions.]


    Mike.

  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Tue Sep 16 16:44:33 2025
    From Newsgroup: comp.theory

    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


DD.HHH1 halts; DDD.HHH cannot possibly reach
its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1 ?  You mean DD simulated by HHH1?  And so DD.exe (I think I've seen
    that) would mean DD directly executed?

    The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you
    don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.

    I didn't ask that.


    *Please confirm:*

    DD.HHH1 means DD simulated by HHH1?

    DD.exe means DD directly executed?

    This is just about clarification of /your/ notation, not about any claims you are making. It's ok
    to reply "yes" or "no". (Indeed it's your duty to do so.)


    Mike.

  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Tue Sep 16 17:46:34 2025
    From Newsgroup: comp.theory

    On 16/09/2025 06:50, Kaz Kylheku wrote:
    On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    That's what happens now in effect - x86utm.exe log starts tracing at the halt7.c main() function.

    Then why does he keep claiming that main() is "native"? Nothing is
    "native" in the loaded COFF test case, then.

    I genuinely can't see why you would be asking that question, so I'm missing something. (Probably
    there's something about this scenario which I consider so obvious/commonly accepted, and which
    hasn't been explicitly questioned in 3 years, and now your understanding differs in some way but I
    can't see what you're thinking instead so I struggle to address it.)

    If you're just making the point that /all/ the code in halt7.c is "executed" within PO's x86utm,
    that's perfectly correct. With the possible exception of main(), all the code in halt7.c is "TM
    code" or simulations made by that TM code. The TM code is "directly executed" [that's just what the
    phrase means in x86utm context] and code it simulates using DebugStep() is "simulated".

    Or if you've just taken "native" as meaning "run directly under Windows/Unix/other OS" then that
    could explain your question, in which case the answer is that's not how the word is usually used in
these threads... [just read it as "directly executed", the equivalent of TM computations appearing in
the HP proof, as opposed to UTMs simulating a computation.]


    Mike.

  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 12:44:19 2025
    From Newsgroup: comp.theory

    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


DD.HHH1 halts; DDD.HHH cannot possibly reach
its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1 ?  You mean DD simulated by HHH1?  And
    so DD.exe (I think I've seen that) would mean DD directly executed?

    The alternative (which is what I would have guessed) is "DD which
    calls HHH1", but it seems you don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.

    I didn't ask that.


    All of the details of the trace that you erased
answer every possible question that you could
    possibly have about these things. My answer directed
    you to the trace.

    HHH(DD) is the conventional diagonal case and

    HHH1(DD) is the common understanding that
    another different decider could correctly
    decide this same input because it does not
    form the diagonal case.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Tue Sep 16 19:15:51 2025
    From Newsgroup: comp.theory

    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
Yeah, why does HHH think that it doesn't halt *and then halts*?

DD.HHH1 halts; DDD.HHH cannot possibly reach
its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1 ?  You mean DD simulated by HHH1?  And so DD.exe (I think I've
    seen that) would mean DD directly executed?

    The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you
    don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.

    I didn't ask that.


    All of the details of the trace that you erased
answer every possible question that you could
    possibly have about these things. My answer directed
    you to the trace.

    HHH(DD) is the conventional diagonal case and

    HHH1(DD) is the common understanding that
    another different decider could correctly
    decide this same input because it does not
    form the diagonal case.

    That's all very interesting, and all, but what I want to know is:

    Does DD.HHH1 mean DD simulated by HHH1?

    Does DD.exe mean DD directly executed?

    Are you really going to refuse to answer a simple question with a simple answer? Why would anyone
    not just confirm yes or no in this situation? (I can't imagine, but it appears you're really not
    going to answer.)


    Mike.

  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 13:29:28 2025
    From Newsgroup: comp.theory

    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
Yeah, why does HHH think that it doesn't halt *and then halts*?

DD.HHH1 halts; DDD.HHH cannot possibly reach
its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1 ?  You mean DD simulated by HHH1?
    And so DD.exe (I think I've seen that) would mean DD directly
    executed?

    The alternative (which is what I would have guessed) is "DD which
    calls HHH1", but it seems you don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.

    I didn't ask that.


    All of the details of the trace that you erased
answer every possible question that you could
    possibly have about these things. My answer directed
    you to the trace.

    HHH(DD) is the conventional diagonal case and

    HHH1(DD) is the common understanding that
    another different decider could correctly
    decide this same input because it does not
    form the diagonal case.

    That's all very interesting, and all, but what I want to know is:

    Does DD.HHH1 mean DD simulated by HHH1?

    Does DD.exe mean DD directly executed?


    The only relevant cases now are DD.HHH where DD
    is simulated by HHH AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From joes@noreply@example.org to comp.theory on Tue Sep 16 19:09:05 2025
    From Newsgroup: comp.theory

    Am Tue, 16 Sep 2025 13:29:28 -0500 schrieb olcott:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:

    I guess I missed an earlier post.
    What do DD.HHH1 and DDD.HHH mean?

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

So DD.HHH1 means DD *OF* HHH1 ?  You mean DD simulated by HHH1?  And
so DD.exe (I think I've seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which
calls HHH1", but it seems you don't mean that (?)

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.

    I didn't ask that.

All of the details of the trace that you erased answer every possible
    question that you could possibly have about these things. My answer
    directed you to the trace.
    HHH(DD) is the conventional diagonal case and
    HHH1(DD) is the common understanding that another different decider
    could correctly decide this same input because it does not form the
    diagonal case.

    That's all very interesting, and all, but what I want to know is:
    Does DD.HHH1 mean DD simulated by HHH1?
    Does DD.exe mean DD directly executed?

    The only relevant cases now are DD.HHH where DD is simulated by HHH AKA
    the conventional diagonal relationship and DD.HHH1 where the same input
    does not form the diagonal case thus is conventionally decidable.
    You could have just said "yes" the first time.

    This means that it has always been common knowledge that the behavior of
DD with HHH(DD) is different than the behavior of DD with HHH1(DD) yet
everyone here disagrees because they value disagreement over truth.
    Nobody says that HHH and HHH1 are the same. But they should.
    --
    Am Sat, 20 Jul 2024 12:35:31 +0000 schrieb WM in sci.math:
    It is not guaranteed that n+1 exists for every n.
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 14:19:13 2025
    From Newsgroup: comp.theory

    On 9/16/2025 2:09 PM, joes wrote:
    Am Tue, 16 Sep 2025 13:29:28 -0500 schrieb olcott:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:

    I guess I missed an earlier post.
    What do DD.HHH1 and DDD.HHH mean?

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

So DD.HHH1 means DD *OF* HHH1 ?  You mean DD simulated by HHH1?  And
so DD.exe (I think I've seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which
calls HHH1", but it seems you don't mean that (?)

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.

    I didn't ask that.

All of the details of the trace that you erased answer every possible
    question that you could possibly have about these things. My answer
    directed you to the trace.
    HHH(DD) is the conventional diagonal case and
    HHH1(DD) is the common understanding that another different decider
    could correctly decide this same input because it does not form the
    diagonal case.

    That's all very interesting, and all, but what I want to know is:
    Does DD.HHH1 mean DD simulated by HHH1?
    Does DD.exe mean DD directly executed?

    The only relevant cases now are DD.HHH where DD is simulated by HHH AKA
    the conventional diagonal relationship and DD.HHH1 where the same input
    does not form the diagonal case thus is conventionally decidable.
    You could have just said "yes" the first time.


    It enrages me that people insist that I must be
    wrong and they do this entirely on the basis of
    refusing to pay enough attention.

    This means that it has always been common knowledge that the behavior of
    DD with HHH(DD) is different than the behavior of DD with HHH1(DD) yet
    everyone here disagrees because they value disagreement over truth.
Nobody says that HHH and HHH1 are the same. But they should.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 15:02:24 2025
    From Newsgroup: comp.theory

    On 9/16/2025 10:33 AM, Mike Terry wrote:
    On 15/09/2025 23:09, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar as
he's concealing that it returns 1.


    You get this same information when DDD of HHH1
    reaches [00002194].

    I try to make it as simple as possible so that you
    can keep track of every detail of the execution trace of

    DDD correctly simulated by HHH
    DDD correctly simulated by HHH1

    *So that you can directly see that*

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    That is obviously false. HHH1(DD) begins simulating
    DD from the entry to DD, and creating execution
    trace entries, before DD reaches its HHH(DD) call.


      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.

    But that's already true from the POV of the native x86
    processor, if we run DD() out of main.

    If DD is correctly a pure computation/TM, that must be true regardless
    of the context in which HHH(DD) is invoked; HHH(DD)
    returns, period.

    So again, what is the point of introducing HHH1.

    My suggestion:  PO has seen DD() halt [when called from main()].  He has traces of DD halting!  He /wants/ to say that when HHH simulates DD,
    DD's "behaviour" is different from the behaviour of DD called from main().

But everyone else understands DD does what DD does, or at least assuming
we're talking pure functions.  The idea that a simulation goes different
ways depending on who's performing the simulation is plain silly - one
    of PO's fantasies invented to try to maintain even sillier beliefs
    elsewhere [like that DD doesn't /really/ halt when it obviously does].

His HHH1 supports PO's narrative, in PO's mind - here we (supposedly)
have two identical simulators simulating the same DD, but they behave
differently.  That can only be explained (in PO's mind) by invoking
magical thinking about "pathological self reference".

    If PO were to acknowledge that all simulators just step along the "One
    True Path" of the computation step by step, up to the point they give
    up, he would lose his argument that HHH /simulating/ DD "sees" different halting behaviour from DD /directly executed/.  Then his whole Linz
    proof argument would be seen by all as nonsense.  [I have no doubt he
could think up some other explanation which is even less logical if he
needed to, in order to maintain his delusional framework, so there's no
danger here of "sending PO over the edge".  Such people will always "do
whatever it takes" to maintain their delusions.]


    Mike.


    It seems ridiculously dumb that you can not see
    that the diagonal case presented to a simulating
    termination analyzer:
    (1) Bypasses the "do the opposite" code as unreachable.
    (2) Causes the simulating termination analyzer to continue
    to be called in recursive simulation that:
    (a) Cannot possibly stop running unless aborted.
    (b) Cannot possibly reach its own simulated final
    halt state whether aborted at some point or not.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Tue Sep 16 21:52:01 2025
    From Newsgroup: comp.theory

    On 16/09/2025 19:29, olcott wrote:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

  From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
Yeah, why does HHH think that it doesn't halt *and then
halts*?


DD.HHH1 halts; DDD.HHH cannot possibly reach
its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1 ?  You mean DD simulated by
    HHH1? And so DD.exe (I think I've seen that) would mean DD
    directly executed?

    The alternative (which is what I would have guessed) is "DD
    which calls HHH1", but it seems you don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.

    I didn't ask that.


    All of the details of the trace that you erased
answer every possible question that you could
    possibly have about these things. My answer directed
    you to the trace.

    HHH(DD) is the conventional diagonal case and

    HHH1(DD) is the common understanding that
    another different decider could correctly
    decide this same input because it does not
    form the diagonal case.

    That's all very interesting, and all, but what I want to know is:

    Does DD.HHH1 mean DD simulated by HHH1?

    Does DD.exe mean DD directly executed?


    The only relevant cases now are DD.HHH where DD
    is simulated by HHH AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.


    They weren't hard questions really, but it took Olcott 69 words
    not to answer them.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Tue Sep 16 21:57:19 2025
    From Newsgroup: comp.theory

    On 16/09/2025 20:19, olcott wrote:

    <snip>

    It enrages me that people insist that I must be
    wrong

    Then stop being wrong all the time.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Tue Sep 16 22:01:23 2025
    From Newsgroup: comp.theory

    On 16/09/2025 21:02, olcott wrote:

    <snip>

    It seems ridiculously dumb that you can not see
    that the diagonal case presented to a simulating
    termination analyzer:
    (1) Bypasses the "do the opposite" code as unreachable.

    It's not unreachable.

    $ ./plaindd
    DD halted.

    See?

    If you can't simulate that, your simulator is screwed.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
• From André G. Isaak@agisaak@gm.invalid to comp.theory on Tue Sep 16 15:49:30 2025
    From Newsgroup: comp.theory

    On 2025-09-16 13:19, olcott wrote:
    On 9/16/2025 2:09 PM, joes wrote:
    Am Tue, 16 Sep 2025 13:29:28 -0500 schrieb olcott:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:

    I guess I missed an earlier post.
    What do DD.HHH1 and DDD.HHH mean?

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

So DD.HHH1 means DD *OF* HHH1?  You mean DD simulated by HHH1? And
so DD.exe (I think I've seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which
calls HHH1", but it seems you don't mean that (?)

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.

    I didn't ask that.

All of the details of the trace that you erased answer every possible
question that you could possibly have about these things. My answer
    directed you to the trace.
    HHH(DD) is the conventional diagonal case and
    HHH1(DD) is the common understanding that another different decider
    could correctly decide this same input because it does not form the
    diagonal case.

    That's all very interesting, and all, but what I want to know is:
    Does DD.HHH1 mean DD simulated by HHH1?
    Does DD.exe mean DD directly executed?

    The only relevant cases now are DD.HHH where DD is simulated by HHH AKA
    the conventional diagonal relationship and DD.HHH1 where the same input
    does not form the diagonal case thus is conventionally decidable.
    You could have just said "yes" the first time.


    It enrages me that people insist that I must be
    wrong and they do this entirely on the basis of
    refusing to pay enough attention.

Chill, dude...

    Mike Terry never said anything about you being right or wrong. He merely
    asked you to clarify your dot notation...

    André
    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Sep 16 21:54:07 2025
    From Newsgroup: comp.theory

    On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 16/09/2025 06:50, Kaz Kylheku wrote:
    On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    That's what happens now in effect - x86utm.exe log starts tracing at the halt7.c main() function.

    Then why does he keep claiming that main() is "native"? Nothing is
    "native" in the loaded COFF test case, then.

    I genuinely can't see why you would be asking that question, so I'm missing something.

    Just I thought that the host executable just branches into the loaded
    module's main(). (Which would be a sensible thing to do; there is no
need to simulate anything outside of the halting decider such as HHH.)

    If you're just making the point that /all/ the code in halt7.c is "executed" within PO's x86utm,
    that's perfectly correct. With the possible exception of main(), all the code in halt7.c is "TM
    code" or simulations made by that TM code.

Is there a possible exception? I'm looking at the code now and it looks
as if the simulation from the entry point into the loaded file is
unconditional; there doesn't appear to be an option to branch to it
natively.

    The TM code is "directly executed" [that's just what the
    phrase means in x86utm context] and code it simulates using DebugStep() is "simulated".

    That distinction makes no sense, like a lot of things from P. O.
    I was tripped up thinking that directly executed means using the host processor.

    "Directly Executed" should be equivalent to a wrapper which calls
    DebugStep, except that if we open-code the DebugStep loop, we can insert halting criteria, and trace recording and whatnot.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Sep 16 22:06:33 2025
    From Newsgroup: comp.theory

    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 10:33 AM, Mike Terry wrote:
    On 15/09/2025 23:09, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar
that he's concealing that it returns 1.


    You get this same information when DDD of HHH1
    reaches [00002194].

    I try to make it as simple as possible so that you
    can keep track of every detail of the execution trace of

    DDD correctly simulated by HHH
    DDD correctly simulated by HHH1

    *So that you can directly see that*

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    That is obviously false. HHH1(DD) begins simulating
    DD from the entry to DD, and creating execution
    trace entries, before DD reaches its HHH(DD) call.


      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.

    But that's already true from the POV of the native x86
    processor, if we run DD() out of main.

    If DD is correctly a pure computation/TM, that must be true regardless
    of the context in which HHH(DD) is invoked; HHH(DD)
    returns, period.

    So again, what is the point of introducing HHH1.

My suggestion:  PO has seen DD() halt [when called from main()].  He has
traces of DD halting!  He /wants/ to say that when HHH simulates DD,
DD's "behaviour" is different from the behaviour of DD called from main().

But everyone else understands DD does what DD does, or at least assuming
we're talking pure functions.  The idea that a simulation goes different
ways depending on who's performing the simulation is plain silly - one
of PO's fantasies invented to try to maintain even sillier beliefs
elsewhere [like that DD doesn't /really/ halt when it obviously does].

His HHH1 supports PO's narrative, in PO's mind - here we (supposedly)
have two identical simulators simulating the same DD, but they behave
differently.  That can only be explained (in PO's mind) by invoking
magical thinking about "pathological self reference".

    If PO were to acknowledge that all simulators just step along the "One
    True Path" of the computation step by step, up to the point they give
    up, he would lose his argument that HHH /simulating/ DD "sees" different
    halting behaviour from DD /directly executed/.  Then his whole Linz
    proof argument would be seen by all as nonsense.  [I have no doubt he
    could think up some other explanation which is even less logical if he
    needed to, in order to maintain his delusional framework, so there's no
    danger here of "sending PO over the edge".  Such people will always "do
    whatever it takes" to maintain their delusions.]


    Mike.


    It seems ridiculously dumb that you can not see
    that the diagonal case presented to a simulating
    termination analyzer:
    (1) Bypasses the "do the opposite" code as unreachable.

    Rather, it is ridiculously dumb that you cannot see that
    this bypass doesn't occur, in the diagonal test case, when
    the decider returns a "does not halt" decision (HHH(DD) -> 0).

    (2) Causes the simulating termination analyzer to continue
    to be called in recursive simulation that:
    (a) Cannot possibly stop running unless aborted.

    This stopping of course happens.

    (b) Cannot possibly reach its own simulated final
    halt state whether aborted at some point or not.

    There is no "simulated halt state" category; there is only a halt state.
    DD specifies a computation which reaches its halt state. It is
    immaterial that a given simulation isn't carried sufficiently far to demonstrate that.

    The halting status (yes or no) of a computation isn't a blessing
    bestowed upon that computation by a particular simulation or simulator.

You're just too unsophisticated to recognize that your conditions
    (in a historic trace, seeing a couple of CALLs to the same address
    without any intervening conditional jumps) do not warrant jumping to the conclusion that such conditional jumps are not coming in the future if
    you continue tracing. Your flimsy halting test is begging the
    question. You are convinced that the CALL HHH, DD does not return,
    and so your halting test just /assumes/ that, using the flimsy
    "proof" that two CALLs are observed without a return.

    By having HHH abort and return 0 at that point, you are ensuring that
    the third CALL returns. If you were to look for three CALLs before an
    abort, you would ensure that the fourth CALL returns, and so on. The
    returning call is always just out of reach of your broken halting
    decision, which makes you think that the decision is correct.

    No matter how many unbroken CALLs you expect, if you write the
    test to expect that many calls before aborting, it will take at least
    one more CALL to actually return. But it does return!

    The conclusion that "since N executions of the CALL instructions
    seen so far have not returned, it must be that none of them return" is a
    false generalization, like a gambler believing in a lucky streak.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Tue Sep 16 23:10:55 2025
    From Newsgroup: comp.theory

    On 16/09/2025 19:29, olcott wrote:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
Yeah, why does HHH think that it doesn't halt *and then halts*?

DD.HHH1 halts. DDD.HHH cannot possibly reach
    its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1 ?  You mean DD simulated by HHH1? And so DD.exe (I think I've
    seen that) would mean DD directly executed?

    The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems
    you don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.

    I didn't ask that.


    All of the details of the trace that you erased
answer every possible question that you could
    possibly have about these things. My answer directed
    you to the trace.

    HHH(DD) is the conventional diagonal case and

    HHH1(DD) is the common understanding that
    another different decider could correctly
    decide this same input because it does not
    form the diagonal case.

    That's all very interesting, and all, but what I want to know is:

    Does DD.HHH1 mean DD simulated by HHH1?

    Does DD.exe mean DD directly executed?


    The only relevant cases now are DD.HHH where DD
    is simulated by HHH

    aha! So that's a "yes" to the first question.

    AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    and I guess I'll just have to accept that as a "yes" for the second question


    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.

    and no answer to the third question.

    You spent all those words when you could have just replied
    > ...[1st Q.]
    Yes.
    > ...[2nd Q.]
    Yes.
    > ...[3rd Q.]
    Not saying!

    I'll assume the 3rd question should be answered "yes" as well. I won't say "thanks"...


    Mike.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 17:11:46 2025
    From Newsgroup: comp.theory

    On 9/16/2025 4:49 PM, André G. Isaak wrote:
    On 2025-09-16 13:19, olcott wrote:
    On 9/16/2025 2:09 PM, joes wrote:
    Am Tue, 16 Sep 2025 13:29:28 -0500 schrieb olcott:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:

    I guess I missed an earlier post.
    What do DD.HHH1 and DDD.HHH mean?

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

So DD.HHH1 means DD *OF* HHH1?  You mean DD simulated by HHH1? And
    so DD.exe (I think I've seen that) would mean DD directly
    executed?
The alternative (which is what I would have guessed) is "DD which
calls HHH1", but it seems you don't mean that (?)

    DD simulated by HHH1 has the same behavior as DD().
DD simulated by HHH cannot possibly reach its final halt state.
    I didn't ask that.

All of the details of the trace that you erased answer every possible
question that you could possibly have about these things. My answer
directed you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that another different decider
could correctly decide this same input because it does not form the
diagonal case.

    That's all very interesting, and all, but what I want to know is:
    Does DD.HHH1 mean DD simulated by HHH1?
    Does DD.exe mean DD directly executed?

The only relevant cases now are DD.HHH where DD is simulated by HHH AKA
the conventional diagonal relationship and DD.HHH1 where the same input
does not form the diagonal case thus is conventionally decidable.
    You could have just said "yes" the first time.


    It enrages me that people insist that I must be
    wrong and they do this entirely on the basis of
    refusing to pay enough attention.

Chill, dude...

    Mike Terry never said anything about you being right or wrong. He merely asked you to clarify your dot notation...

    André


    I am not referring to Mike specifically yet it
    does seem that he did say that I am wrong on
    the basis of his own lack of understanding
    of one very key point.

    What is at stake here is life on Earth
    (death by climate change) and the rise of
    the fourth Reich on the basis that we have
    not unequivocally divided lies from truth.

    My system of reasoning makes the set of
    {True on the basis of meaning} computable.

    Is severe climate change caused by humans? YES
Is Donald Trump exactly copying Hitler's rise to power? YES
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 17:14:47 2025
    From Newsgroup: comp.theory

    On 9/16/2025 5:06 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 10:33 AM, Mike Terry wrote:
    On 15/09/2025 23:09, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar
that he's concealing that it returns 1.


    You get this same information when DDD of HHH1
    reaches [00002194].

    I try to make it as simple as possible so that you
    can keep track of every detail of the execution trace of

    DDD correctly simulated by HHH
    DDD correctly simulated by HHH1

    *So that you can directly see that*

    On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
    Of course the execution traces are different
    before and after the abort.

    HHH1 ONLY sees the behavior of DD *AFTER* HHH
    has aborted DD thus need not abort DD itself.

    That is obviously false. HHH1(DD) begins simulating
    DD from the entry to DD, and creating execution
    trace entries, before DD reaches its HHH(DD) call.


      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.

    But that's already true from the POV of the native x86
    processor, if we run DD() out of main.

If DD is correctly a pure computation/TM, that must be true regardless
of the context in which HHH(DD) is invoked; HHH(DD)
    returns, period.

    So again, what is the point of introducing HHH1.

My suggestion:  PO has seen DD() halt [when called from main()].  He has
traces of DD halting!  He /wants/ to say that when HHH simulates DD,
DD's "behaviour" is different from the behaviour of DD called from main().

But everyone else understands DD does what DD does, or at least assuming
we're talking pure functions.  The idea that a simulation goes different
ways depending on who's performing the simulation is plain silly - one
of PO's fantasies invented to try to maintain even sillier beliefs
elsewhere [like that DD doesn't /really/ halt when it obviously does].

His HHH1 supports PO's narrative, in PO's mind - here we (supposedly)
have two identical simulators simulating the same DD, but they behave
differently.  That can only be explained (in PO's mind) by invoking
magical thinking about "pathological self reference".

If PO were to acknowledge that all simulators just step along the "One
True Path" of the computation step by step, up to the point they give
up, he would lose his argument that HHH /simulating/ DD "sees" different
halting behaviour from DD /directly executed/.  Then his whole Linz
proof argument would be seen by all as nonsense.  [I have no doubt he
could think up some other explanation which is even less logical if he
needed to, in order to maintain his delusional framework, so there's no
danger here of "sending PO over the edge".  Such people will always "do
whatever it takes" to maintain their delusions.]


    Mike.


    It seems ridiculously dumb that you can not see
    that the diagonal case presented to a simulating
    termination analyzer:
    (1) Bypasses the "do the opposite" code as unreachable.

    Rather, it is ridiculously dumb that you cannot see that
    this bypass doesn't occur, in the diagonal test case, when
    the decider returns a "does not halt" decision (HHH(DD) -> 0).


    That is not the actual behavior specified by the actual input to HHH(DD)
    That is not the actual behavior specified by the actual input to HHH(DD)
    That is not the actual behavior specified by the actual input to HHH(DD)
    That is not the actual behavior specified by the actual input to HHH(DD)
    That is not the actual behavior specified by the actual input to HHH(DD)
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Sep 16 22:14:56 2025
    From Newsgroup: comp.theory

    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    The only relevant cases now are DD.HHH where DD
    is simulated by HHH AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    DD is always the diagonal case, targeting HHH.

    It can be decided by something that is not HHH.

    You need not be proving this, if your aim is to
    topple the halting theorem.

    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.

    It took years for the newsgroup to teach you about the diagonalization
    and how the diagonal test case is not absolutely undecidable---only to
    the particular decider that is targeted by that case.

    You have now somehow twisted this around into some new misconception.

    DD is always one specific computation which indisputably halts or does
    not halt.

    HHH and HHH1 are different deciders. They behave differently and make potentially different decisions about DD. (When they disagree, one of
    them is right, and it can't be HHH).

    DD does not vary whatsoever.

    If we /edit/ HHH to make a new revision of it, then by doing
    that we make a new revision of DD, since DD is built on HHH. (In fact
    most of the complexity of DD is buried in HHH; no matter how complex you
    make HHH, DD turns that complexity against HHH.)

    Since DD is /not/ built on HHH1, then revising HHH1 has no
    effect on the definition of DD.

    That's it.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Sep 16 22:17:02 2025
    From Newsgroup: comp.theory

    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    It enrages me that people insist that I must be
    wrong and they do this entirely on the basis of
    refusing to pay enough attention.

    The problem is that your goalposts for "enough attention" is for
    people to see things which are not there.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Tue Sep 16 23:18:27 2025
    From Newsgroup: comp.theory

    On 16/09/2025 21:52, Richard Heathfield wrote:
    On 16/09/2025 19:29, olcott wrote:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
Yeah, why does HHH think that it doesn't halt *and then halts*?

DD.HHH1 halts. DDD.HHH cannot possibly reach
    its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1 ?  You mean DD simulated by HHH1? And so DD.exe (I think I've
    seen that) would mean DD directly executed?

    The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems
    you don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.

    I didn't ask that.


    All of the details of the trace that you erased
answer every possible question that you could
    possibly have about these things. My answer directed
    you to the trace.

    HHH(DD) is the conventional diagonal case and

    HHH1(DD) is the common understanding that
    another different decider could correctly
    decide this same input because it does not
    form the diagonal case.

    That's all very interesting, and all, but what I want to know is:

    Does DD.HHH1 mean DD simulated by HHH1?

    Does DD.exe mean DD directly executed?


    The only relevant cases now are DD.HHH where DD
    is simulated by HHH AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.


    They weren't hard questions really, but it took Olcott 69 words not to answer them.

    plus 59 in the previous post,
    plus 21 in the post before that !

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Sep 16 22:24:34 2025
    From Newsgroup: comp.theory

    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    Rather, it is ridiculously dumb that you cannot see that
    this bypass doesn't occur, in the diagonal test case, when
    the decider returns a "does not halt" decision (HHH(DD) -> 0).


    That is not the actual behavior specified by the actual input to HHH(DD)

    Speaking of which, you have been dodging the question of specifying
    what the "actual input" to HHH comprises in the expression HHH(DD).

    Can you list the piece or pieces of material that you believe are part
    of the input, omitting anything that is not part of the input?
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 17:25:44 2025
    From Newsgroup: comp.theory

    On 9/16/2025 5:10 PM, Mike Terry wrote:
    On 16/09/2025 19:29, olcott wrote:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
Yeah, why does HHH think that it doesn't halt *and then halts*?

DD.HHH1 halts. DDD.HHH cannot possibly reach
    its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

So DD.HHH1 means DD *OF* HHH1?  You mean DD simulated by HHH1? And so DD.exe (I think I've seen that) would mean DD directly
    executed?

The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.

    I didn't ask that.


    All of the details of the trace that you erased
    answer every possible question that you could
    possibly have about these things. My answer directed
    you to the trace.

    HHH(DD) is the conventional diagonal case and

    HHH1(DD) is the common understanding that
    another different decider could correctly
    decide this same input because it does not
    form the diagonal case.

    That's all very interesting, and all, but what I want to know is:

    Does DD.HHH1 mean DD simulated by HHH1?

    Does DD.exe mean DD directly executed?


    The only relevant cases now are DD.HHH where DD
    is simulated by HHH

    aha!  So that's a "yes" to the first question.

    AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    and I guess I'll just have to accept that as a "yes" for the second
    question


    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.

    and no answer to the third question.


    This is a matter of life and death of the planet
    and stopping the rise of the fourth Reich.

    When truth becomes computable then the liars
    cannot get away with their lies.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@agisaak@gm.invalid to comp.theory on Tue Sep 16 16:50:19 2025
    From Newsgroup: comp.theory

    On 2025-09-16 16:25, olcott wrote:
    On 9/16/2025 5:10 PM, Mike Terry wrote:
    On 16/09/2025 19:29, olcott wrote:

    aha!  So that's a "yes" to the first question.

    AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    and I guess I'll just have to accept that as a "yes" for the second
    question


    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.

    and no answer to the third question.


    This is a matter of life and death of the planet
    and stopping the rise of the fourth Reich.

    When truth becomes computable then the liars
    cannot get away with their lies.

    If it's so important, then why do you go to such lengths to avoid answering
    fairly straightforward questions about your claims (like what your dot
    notation means)?

    André
    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Tue Sep 16 23:53:33 2025
    From Newsgroup: comp.theory

    On 16/09/2025 23:25, olcott wrote:
    On 9/16/2025 5:10 PM, Mike Terry wrote:
    On 16/09/2025 19:29, olcott wrote:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD) simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?

    DD.HHH1 halts; DDD.HHH cannot possibly reach
    its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1 ?  You mean DD simulated by HHH1? And so DD.exe (I think I've
    seen that) would mean DD directly executed?

    The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems
    you don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.

    I didn't ask that.


    All of the details of the trace that you erased
    answer every possible question that you could
    possibly have about these things. My answer directed
    you to the trace.

    HHH(DD) is the conventional diagonal case and

    HHH1(DD) is the common understanding that
    another different decider could correctly
    decide this same input because it does not
    form the diagonal case.

    That's all very interesting, and all, but what I want to know is:

    Does DD.HHH1 mean DD simulated by HHH1?

    Does DD.exe mean DD directly executed?


    The only relevant cases now are DD.HHH where DD
    is simulated by HHH

    aha!  So that's a "yes" to the first question.

    AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    and I guess I'll just have to accept that as a "yes" for the second question

    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.

    and no answer to the third question.


    This is a matter of life and death of the planet
    and stopping the rise of the fourth Reich.

    When truth becomes computable then the liars
    cannot get away with their lies.


    Dude! Nothing going on here has even the remotest of effects on such world events. You are
    thoroughly deluded, imagining yourself as saving the world, and being at the centre of intellectual
    efforts to save humanity etc.. The truth is exactly the opposite of that!

    <https://en.wikipedia.org/wiki/Delusions_of_grandeur>

    Still, you'll always have your chatbots to comfort you and tell you what you want to hear.
    (Assuming they don't get smart enough to start /questioning/ and properly analysing your work, like
    people here do...)


    Mike.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 19:01:14 2025
    From Newsgroup: comp.theory

    On 9/16/2025 5:50 PM, André G. Isaak wrote:
    On 2025-09-16 16:25, olcott wrote:
    On 9/16/2025 5:10 PM, Mike Terry wrote:
    On 16/09/2025 19:29, olcott wrote:

    aha!  So that's a "yes" to the first question.

    AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    and I guess I'll just have to accept that as a "yes" for the second
    question


    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.

    and no answer to the third question.


    This is a matter of life and death of the planet
    and stopping the rise of the fourth Reich.

    When truth becomes computable then the liars
    cannot get away with their lies.

    If it's so important, then why do you go to such lengths to avoid answering
    fairly straightforward questions about your claims (like what your dot notation means)?

    André


    Because NOT doing this encourages you to go
    back and more carefully study the details
    of what I said so that you cannot so desperately
    hang on to your willful ignorance.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 19:07:11 2025
    From Newsgroup: comp.theory

    On 9/16/2025 5:53 PM, Mike Terry wrote:
    On 16/09/2025 23:25, olcott wrote:
    On 9/16/2025 5:10 PM, Mike Terry wrote:
    On 16/09/2025 19:29, olcott wrote:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD) simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


    DD.HHH1 halts; DDD.HHH cannot possibly reach
    its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1?  You mean DD simulated by HHH1?
    And so DD.exe (I think I've seen that) would mean DD directly executed?

    The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.
    I didn't ask that.


    All of the details of the trace that you erased
    answer every possible question that you could
    possibly have about these things. My answer directed
    you to the trace.

    HHH(DD) is the conventional diagonal case and

    HHH1(DD) is the common understanding that
    another different decider could correctly
    decide this same input because it does not
    form the diagonal case.

    That's all very interesting, and all, but what I want to know is:

    Does DD.HHH1 mean DD simulated by HHH1?

    Does DD.exe mean DD directly executed?


    The only relevant cases now are DD.HHH where DD
    is simulated by HHH

    aha!  So that's a "yes" to the first question.

    AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    and I guess I'll just have to accept that as a "yes" for the second
    question


    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.

    and no answer to the third question.


    This is a matter of life and death of the planet
    and stopping the rise of the fourth Reich.

    When truth becomes computable then the liars
    cannot get away with their lies.


    Dude!  Nothing going on here has even the remotest of effects on such
    world events.
    Mike.


    So you have not bothered to notice that Trump is
    exactly copying Hitler?

    If it was not for the brave soul of the Senate
    parliamentarian cancelling the king maker paragraph
    of Trump's Big Bullshit Bill the USA would already
    be more than halfway to the dictatorship power of
    Nazi Germany.

    Truth can be computable !!!
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 19:23:46 2025
    From Newsgroup: comp.theory

    On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    Rather, it is ridiculously dumb that you cannot see that
    this bypass doesn't occur, in the diagonal test case, when
    the decider returns a "does not halt" decision (HHH(DD) -> 0).


    That is not the actual behavior specified by the actual input to HHH(DD)

    Speaking of which, you have been dodging the question of specifying
    what the "actual input" to HHH comprises in the expression HHH(DD).


    The termination analyzer HHH is only examining whether
    or not the finite string of x86 machine code ending
    in the C3 byte "ret" instruction can be reached by the
    behavior specified by this finite string.

    This behavior does include that DD calls HHH(DD)
    in recursive simulation. DD is the program under
    test and HHH is the test program.




    You understood that each decider can have an input
    defined to "do the opposite" of whatever this decider
    decides thwarting the correct decision for this
    decider/input pair.

    You also understand that another different decider
    can correctly decide this same input.

    You seem to get totally confused when these are
    made specific by HHH/DD and HHH1/DD.

    If you think that it is impossible for DD to have
    different behavior between these two cases then how
    is it that one is conventionally undecidable and
    the other is decidable?


    Can you list the piece or pieces of material that you believe are part
    of the input, omitting anything that is not part of the input?

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 19:29:12 2025
    From Newsgroup: comp.theory

    On 9/16/2025 5:17 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    It enrages me that people insist that I must be
    wrong and they do this entirely on the basis of
    refusing to pay enough attention.

    The problem is that your goalposts for "enough attention" is for
    people to see things which are not there.


    Enough attention is (for example) 100% totally understanding
    everY single detail of the execution trace of DDD
    simulated by HHH1 that includes DDD correctly simulated
    by HHH.

    I proved that these traces do not diverge at the exact
    same point that HHH aborts FIFTEEN TIMES NOW and still
    ZERO PEOPLE HAVE NOTICED.

    Mike and I have been on this one point for over a year.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 19:33:18 2025
    From Newsgroup: comp.theory

    On 9/16/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    The only relevant cases now are DD.HHH where DD
    is simulated by HHH AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    DD is always the diagonal case, targeting HHH.

    It can be decided by something that is not HHH.

    You need not be proving this, if your aim is to
    topple the halting theorem.

    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.


    You are still trying to cheat to show that DD emulated
    by HHH according to the semantics of the x86 language
    reaches its final halt state by doing all kinds of things
    that are not pure simulation.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 19:36:08 2025
    From Newsgroup: comp.theory

    On 9/16/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    The only relevant cases now are DD.HHH where DD
    is simulated by HHH AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    DD is always the diagonal case, targeting HHH.

    It can be decided by something that is not HHH.


    Thus proving that the exact same DD directly
    causes TWO DIFFERENT BEHAVIORS !!!

    TWO DIFFERENT BEHAVIORS !!!
    TWO DIFFERENT BEHAVIORS !!!
    TWO DIFFERENT BEHAVIORS !!!
    TWO DIFFERENT BEHAVIORS !!!
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Tue Sep 16 17:56:37 2025
    From Newsgroup: comp.theory

    On 9/16/2025 5:07 PM, olcott wrote:
    On 9/16/2025 5:53 PM, Mike Terry wrote:
    On 16/09/2025 23:25, olcott wrote:
    On 9/16/2025 5:10 PM, Mike Terry wrote:
    On 16/09/2025 19:29, olcott wrote:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD) simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?


    DD.HHH1 halts; DDD.HHH cannot possibly reach
    its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

    So DD.HHH1 means DD *OF* HHH1?  You mean DD simulated by HHH1?
    And so DD.exe (I think I've seen that) would mean DD directly executed?

    The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
    DD simulated by HHH cannot possibly reach its final halt state.
    I didn't ask that.


    All of the details of the trace that you erased
    answer every possible question that you could
    possibly have about these things. My answer directed
    you to the trace.

    HHH(DD) is the conventional diagonal case and

    HHH1(DD) is the common understanding that
    another different decider could correctly
    decide this same input because it does not
    form the diagonal case.

    That's all very interesting, and all, but what I want to know is:

    Does DD.HHH1 mean DD simulated by HHH1?

    Does DD.exe mean DD directly executed?


    The only relevant cases now are DD.HHH where DD
    is simulated by HHH

    aha!  So that's a "yes" to the first question.

    AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    and I guess I'll just have to accept that as a "yes" for the second
    question


    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.

    and no answer to the third question.


    This is a matter of life and death of the planet
    and stopping the rise of the fourth Reich.

    When truth becomes computable then the liars
    cannot get away with their lies.


    Dude!  Nothing going on here has even the remotest of effects on such
    world events.

    Mike.


    So you have not bothered to notice that Trump is
    exactly copying Hitler?

    Oh shit. We might have a full blown kookooo clock on our hands.
    Tick tock... Oh man.



    If it was not for the brave soul of the Senate
    parliamentarian cancelling the king maker paragraph
    of Trump's Big Bullshit Bill the USA would already
    be more than halfway to the dictatorship power of
    Nazi Germany.

    Truth can be computable !!!


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Wed Sep 17 01:56:51 2025
    From Newsgroup: comp.theory

    On 16/09/2025 22:54, Kaz Kylheku wrote:
    On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 16/09/2025 06:50, Kaz Kylheku wrote:
    On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    That's what happens now in effect - x86utm.exe log starts tracing at the halt7.c main() function.

    Then why does he keep claiming that main() is "native"? Nothing is
    "native" in the loaded COFF test case, then.

    I genuinely can't see why you would be asking that question, so I'm missing something.

    Just I thought that the host executable just branches into the loaded module's main(). (Which would be a sensible thing to do; there is no
    need to simulate anything outside of the halting decider such as HHH.)

    I see - no, x86utm parses input file halt7.obj to grab what it needs regarding data/code sectors,
    symbol definitions (function names and locations), relocation fixups table, then uses that to
    initialise a libx86emu "virtual address space". In effect it performs its own equivalent of
    LoadLibrary() within that virtual address space, loading the module starting at low memory address
    0x00000000. halt7.obj is never linked to form an OS executable. (I suppose it could be, perhaps
    with minor changes...)

    I'm not sure x86utm directly calling halt7.c's main() would be a good design, while it remains the
    case that simulation performed by HHH is within the libx86emu virtual machine. It could be done
    that way, but then code like HHH would be running in two completely separate environments:
    Windows/Unix host, and libx86emu virtual address space. They must be exactly the same, and it's a
    good thing if they're easily seen and verified as being the same. That happens if they're both
    performed by x86utm code via libx86emu, and also it means x86utm's log shows both in the same format.

    Also ISTM the hosting environment should be logically divorced from the halt7.c code as far as
    possible. E.g. let's imagine x86utm is doing its stuff, but then it turns out the x86utm address
    space (which is 32-bit, like the libx86emu virtual address space) starts requiring bigger tables and
    allocations for running multiple libx86emu virtual address spaces or whatever, and a resource limit
    is encountered. We get past that by making x86utm.exe 64-bit. That should all be routine
    (expecting the usual 32- to 64-bit code porting issues...) Problem gone! x86utm.exe now has
    64-bits of address space (or close after OS is taken out) and libx86emu still creates its 32-bit
    virtual machines to run halt7.c code. (You can see where this leads!) But now your design of
    x86utm directly calling main() and hence HHH() means HHH has to be both 64-bit (for x86utm to
    directly call) and 32-bit (to run under libx86emu). Or perhaps I just want to run PO's code on some
    RISC architecture, not x86 at all - I can compile C++ code (x86utm) to run on that RISC CPU, but
    halt7.c absolutely must be x86 code...

    Alternatively, x86utm could be designed so that halt7.c's main() is invoked exactly like any other
    simulation started e.g. by HHH within halt7.c. That would need some Simulate() function to drive
    the DebugStep() loop, and where would that be? If in halt7.c, what simulates Simulate()? Or it
    could be hard-coded into x86utm since it never changes. Dunno......


    If you're just making the point that /all/ the code in halt7.c is "executed" within PO's x86utm,
    that's perfectly correct. With the possible exception of main(), all the code in halt7.c is "TM
    code" or simulations made by that TM code.

    Is there a possible exception? I'm looking at the code now and it looks
    like the simulation from the entry point into the loaded file is unconditional; there doesn't appear to be an option to branch to it
    natively.

    I'm not sure what you're referring to.
    You're looking at x86utm code or halt7.c code?

    The latter is never linked to an executable, so it can /only/ be executed within x86utm via
    libx86emu virtual x86 machine.

    x86utm.exe code runs under the hosting OS, reads and "loads" the halt7.obj code into the libx86emu
    VM, then runs its own loop in [x86emu.cpp]Halts() which calls Execute_Instruction() until
    [halt7.c]main() returns. HHH code in halt7.c makes occasional DebugStep() calls to step its
    simulation, and DebugStep transfers into x86utm's [x86emu.cpp]DebugStep() which in turn calls
    Execute_Instruction() to step HHH's simulation.

    x86utm stack at that point will have:

    Execute_Instruction()   // simulated instruction from halt7.c
    DebugStep()             // ooh! a nested simulation being stepped!
                            // has called back to x86utm DebugStep
    Execute_Instruction()   // simulated instruction from halt7.c
    DebugStep()             // instruction was a DebugStep in halt7.c which
                            // has called back to x86utm DebugStep
    Execute_Instruction()   // 1 instruction from halt7.c
    Halts()                 // x86utm loop simulating [halt7.c]main()
    ..
    main()


    The TM code is "directly executed" [that's just what the
    phrase means in x86utm context] and code it simulates using DebugStep() is "simulated".

    That distinction makes no sense, like a lot of things from P. O.
    I was tripped up thinking that directly executed means using the host processor.

    Not sure who coined the term. PO had shown HHH(DD), where HHH decides DD never halts. Posters
    wanted to point out that whatever HHH decides, it needs to match up to [what DD actually does] but
    what is the phrase for that? PO tries to only discuss "DD *simulated by HHH*" so in contrast
    posters came up with "DD *run natively*" or "DD *executed directly* (from main)" etc.. to contrast
    with HHH's simulations. What phrase would you use?

    x86utm architecture and hosting OS's (Windows/Unix) is really orthogonal to all this.


    "Directly Executed" should be equivalent to a wrapper which calls
    DebugStep, except that if we open-code the DebugStep loop, we can insert halting criteria, and trace recording and whatnot.

    I think people discussing that might refer to a UTM here, e.g UTM(DD), where UTM would be a function
    in halt7.c that simulates until completion. In TM world, UTM(DD) is still a TM UTM simulating DD,
    which is conceptually different from what I would think of as DD "directly executed" (which is just
    the TM DD!) But PO doesn't grok TMs and computations, always thinking instead of actual computers
    loading and running "computer programs" (aka TM-description strings).

    Also if we have 10 posters posting here, we'll have 10 slightly different terminology uses + PO's
    understanding.... :)

    Anyhow in x86utm world as-is, we can put messages into [halt7.c]main(). Halting criteria naturally
    (ISTM) go in [halt7.c]HHH. Like in the HP, if H is a TM halt decider, the halting criteria it
    applies are in H, not some meta-level simulator running the TM H. (There is no such thing. H itself
    does not need criteria to be aborted or "halt-decided", it's just a "native" TM, so to speak.)


    Mike.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 01:02:58 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    Rather, it is ridiculously dumb that you cannot see that
    this bypass doesn't occur, in the diagonal test case, when
    the decider returns a "does not halt" decision (HHH(DD) -> 0).


    That is not the actual behavior specified by the actual input to HHH(DD)

    Speaking of which, you have been dodging the question of specifying
    what the "actual input" to HHH comprises in the expression HHH(DD).


    The termination analyzer HHH is only examining whether
    or not the finite string of x86 machine code ending
    in the C3 byte "ret" instruction can be reached by the
    behavior specified by this finite string.

    Are you referring to the sequence of instructions comprising
    the procedure DD?

    This behavior does include that DD calls HHH(DD)
    in recursive simulation. DD is the program under
    test and HHH is the test program.

    The finite string of x86 machine code instructions
    does include the instruction "CALL HHH, DD".

    But are you not aware that the entire HHH routine is also part of
    the input?

    Anything that is called by a piece of the input must be included in the
    bill of materials that comprise the input.

    You understood that each decider can have an input
    defined to "do the opposite" of whatever this decider
    decides thwarting the correct decision for this
    decider/input pair.

    But in order to do that, the input must be understood to be
    carrying a copy of that decider.

    Or possibly, not an exact copy.

    The input can be carrying an /equivalent algorithm/.

    You also understand that another different decider
    can correctly decide this same input.

    Thanks to me, several others, and years of patient effort,
    you also now understand that, which is great.

    You seem to get totally confused when these are
    made specific by HHH/DD and HHH1/DD.

    If you think that it is impossible for DD to have
    different behavior between these two cases then how
    is it that one is conventionally undecidable and
    the other is decidable?

    What is "undecidable" is universal halting; it is an undecidable problem, meaning that we don't have a terminating algorithm that will give an
    answer for every possible input.

    That's what the word "undecidable" means.

    The specific test case DD is decidable. For the set of computations consisting of { DD }, we /can/ have an algorithm which decides that
    entire set { DD }, if it is not required to decide anything else.

    The relationship between HHH and DD isn't that DD is "undecidable" to
    HHH, but that HHH /doesn't/ decide DD (either by not terminating or
    returning the wrong value). This is by design; DD is built on HHH and
    designed such that HHH(DD) is incorrect, if HHH(DD) terminates.

    HHH(DD) disqualifying itself by not terminating is entirely the fault of
    the designer of HHH.

    HHH(DD) being wrong when it does terminate is brought about by the
    designer of DD. That designer always has the last word since HHH
    is a building block of DD, not the other way around.

    What's different between two deciders like HHH and HHH1 is
    their /analysis of DD/.

    Analysis of DD is not the /behavior/ of DD!

    You have chosen simulation as the key part of your analysis.
    Simulation follows the behavior of its target, so that its
    structure resembles behavior. That's where you are getting
    confused. Analysis of a computation isn't its behavior,
    even if it involves detailed tracing.

    Only the /complete/ and /correct/ simulation of a terminating
    calculation can be de facto regarded as a bona fide representation of
    its behavior, and discussed as if it were its behavior.

    Any simulation that falls short of this is just an incomplete and/or
    incorrect analysis, and not a description of the subject's
    behavior.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 20:12:27 2025
    From Newsgroup: comp.theory

    On 9/16/2025 7:56 PM, Mike Terry wrote:
    On 16/09/2025 22:54, Kaz Kylheku wrote:
    On 2025-09-16, Mike Terry
    <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 16/09/2025 06:50, Kaz Kylheku wrote:
    On 2025-09-16, Mike Terry
    <news.dead.person.stones@darjeeling.plus.com> wrote:
    That's what happens now in effect - x86utm.exe log starts tracing
    at the halt7.c main() function.

    Then why does he keep claiming that main() is "native"? Nothing is
    "native" in the loaded COFF test case, then.

    I genuinely can't see why you would be asking that question, so I'm
    missing something.

    Just I thought that the host executable just branches into the loaded
    module's main(). (Which would be a sensible thing to do; there is no
    need to simulate anything outside of the halting decider such as HHH.)

    I see - no, x86utm parses input file halt7.obj to grab what it needs
    regarding data/code sectors, symbol definitions (function names and
    locations), relocation fixups table, then uses that to initialise a
    libx86emu "virtual address space".  In effect it performs its own
    equivalent of LoadLibrary() within that virtual address space, loading
    the module starting at low memory address 0x00000000.  halt7.obj is
    never linked to form an OS executable.  (I suppose it could be, perhaps
    with minor changes...)


    Yes and I did that all myself from scratch. https://github.com/plolcott/x86utm/blob/master/include/Read_COFF_Object.h
    Does most of this work.

    I'm not sure x86utm directly calling halt7.c's main() would be a good design, while it remains the case that simulation performed by HHH is
    within the libx86emu virtual machine.  It could be done that way, but
    then code like HHH would be running in two completely separate
    environments: Windows/Unix host, and libx86emu virtual address space.
    They must be exactly the same, and it's a good thing if they're easily
    seen and verified as being the same.  That happens if they're both performed by x86utm code via libx86emu, and also it means x86utm's log
    shows both in the same format.

    Also ISTM the hosting environment should be logically divorced from the
    halt7.c code as far as possible.  E.g. let's imagine x86utm is doing its
    stuff, but then it turns out the x86utm address space (which is 32-bit,
    like the libx86emu virtual address space) starts requiring bigger tables
    and allocations for running multiple libx86emu virtual address spaces or
    whatever, and a resource limit is encountered.  We get past that by
    making x86utm.exe 64-bit.  That should all be routine (expecting the
    usual 32- to 64-bit code porting issues...)  Problem gone!  x86utm.exe
    now has 64 bits of address space (or close, after OS is taken out) and
    libx86emu still creates its 32-bit virtual machines to run halt7.c
    code.  (You can see where this leads!)  But now your design of x86utm
    directly calling main() and hence HHH() means HHH has to be both
    64-bit (for x86utm to call directly) and 32-bit (to run under
    libx86emu).  Or perhaps I just want to run PO's code on some RISC
    architecture, not x86 at all - I can compile C++ code (x86utm) to run
    on that RISC CPU, but halt7.c absolutely must be x86 code...


    The author of libx86emu had to upgrade it from
    16-bit to 32-bit for me. No need for 64-bit.

    Alternatively, x86utm could be designed so that halt7.c's main() is
    invoked exactly like any other simulation started e.g. by HHH within
    halt7.c.  That would need some Simulate() function to drive the
    DebugStep() loop, and where would that be?  If in halt7.c, what
    simulates Simulate()?  Or it could be hard-coded into x86utm since it
    never changes.  Dunno......


    That is all in x86utm.cpp


    If you're just making the point that /all/ the code in halt7.c is
    "executed" within PO's x86utm,
    that's perfectly correct.  With the possible exception of main(), all
    the code in halt7.c is "TM
    code" or simulations made by that TM code.

    Is there a possible exception? I'm looking at the code now and it
    looks like the simulation from the entry point into the loaded file is
    unconditional; it doesn't appear to be an option to branch to it
    natively.

    I'm not sure what you're referring to.
    You're looking at x86utm code or halt7.c code?

    The latter is never linked to an executable, so it can /only/ be
    executed within x86utm via libx86emu virtual x86 machine.

    x86utm.exe code runs under the hosting OS, reads and "loads" the
    halt7.obj code into the libx86emu VM, then runs its own loop in [x86emu.cpp]Halts() which calls Execute_Instruction() until
    [halt7.c]main() returns.  HHH code in halt7.c makes occasional
    DebugStep() calls to step its simulation, and DebugStep transfers into x86utm's [x86emu.cpp]DebugStep() which in turn calls
    Execute_Instruction() to step HHH's simulation.

    x86utm stack at that point will have:

    Execute_Instruction()     // simulated instruction from halt7.c
    DebugStep()               // ooh! a nested simulation being stepped!
                              // has called back to x86utm DebugStep
    Execute_Instruction()     // simulated instruction from halt7.c
    DebugStep()               // instruction was a DebugStep in halt7.c which
                              // has called back to x86utm DebugStep
    Execute_Instruction()     // 1 instruction from halt7.c
    Halts()                   // x86utm loop simulating [halt7.c]main()
    ..
    main()


    The TM code is "directly executed" [that's just what the
    phrase means in x86utm context] and code it simulates using
    DebugStep() is "simulated".

    That distinction makes no sense, like a lot of things from P. O.
    I was tripped up thinking that directly executed means using the host
    processor.

    Not sure who coined the term.  PO had shown HHH(DD), where HHH decides
    DD never halts.  Posters wanted to point out that whatever HHH decides,
    it needs to match up to [what DD actually does] but what is the phrase
    for that?  PO tries to only discuss "DD *simulated by HHH*" so in
    contrast posters came up with "DD *run natively*" or "DD *executed
    directly* (from main)" etc.. to contrast with HHH's simulations.  What phrase would you use?

    x86utm architecture and hosting OS's (Windows/Unix) is really orthogonal
    to all this.



    Like I said it all runs just fine under Linux.
    The Linux MakeFile is still there.


    "Directly Executed" should be equivalent to a wrapper which calls
    DebugStep, except that if we open-code the DebugStep loop, we can insert
    halting criteria, and trace recording and whatnot.

    I think people discussing that might refer to a UTM here, e.g. UTM(DD),
    where UTM would be a function in halt7.c that simulates until
    completion.  In TM world, UTM(DD) is still a TM UTM simulating DD, which
    is conceptually different from what I would think of as DD "directly
    executed" (which is just the TM DD!).  But PO doesn't grok TMs and
    computations, always thinking instead of actual computers loading and
    running "computer programs" (aka TM-description strings).

    Also if we have 10 posters posting here, we'll have 10 slightly
    different terminology uses + PO's understanding....  :)

    Anyhow in x86utm world as-is, we can put messages into [halt7.c]main().
    Halting criteria naturally (ISTM) go in [halt7.c]HHH.  Like in the HP,
    if H is a TM halt decider, the halting criteria it applies are in H, not
    some meta-level simulator running the TM H.  (There is no such thing.  H
    itself does not need criteria to be aborted or "halt-decided", it's just
    a "native" TM, so to speak.)


    Mike.


    HHH in Halt7.c calls all of its helper functions
    in Halt7.c and some helper functions directly in
    the x86utm OS. These are stubs in Halt7.c.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 01:13:01 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:17 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    It enrages me that people insist that I must be
    wrong and they do this entirely on the basis of
    refusing to pay enough attention.

    The problem is that your goalposts for "enough attention" are for
    people to see things which are not there.


    Enough attention is (for example) 100% totally understanding
    every single detail of the execution trace of DDD
    simulated by HHH1 that includes DDD correctly simulated
    by HHH.

    But you don't have that understanding yourself. You don't have the
    insight to see that the abandoned simulation of DD, left behind by HH,
    is in a state that could be stepped further with DebugStep and that
    doing so will bring it to termination.

    I proved that these traces do not diverge at the exact
    same point that HHH aborts FIFTEEN TIMES NOW and still
    ZERO PEOPLE HAVE NOTICED.

    Exactly! Now you are getting it. You have two simulations of the same
    calculation. They do not diverge.

    Then, one is abandoned. But that abandonment doesn't make them
    diverge!

    If they diverged, it would have to be that one (or both) simulations
    have mishandled the x86 instruction set somehow, which we know
    not to be the case.

    OK and so since they don't diverge, and we know one of them
    terminates, it must be that the other terminates: it's an instance
    of the same, deterministic calculation.

    Since the abandoned calculation has not diverged in any way, we can pick
    up its state where it was left off and continue tracing, showing that it follows exactly the same trajectory as the other one that was simulated
    to the end.

    Where is the difficulty, really.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 01:30:08 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    The only relevant cases now are DD.HHH where DD
    is simulated by HHH AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    DD is always the diagonal case, targeting HHH.

    It can be decided by something that is not HHH.


    Thus proving that the exact same DD directly
    causes TWO DIFFERENT BEHAVIORS !!!

    The same angle theta=0 causes different behaviors in sin(theta) and cos(theta).

    sin(0) says 0; cos(0) says 1 (halts).

    Does that mean that the 0 in sin(0) and the 0 in cos(0) are different
    angles?

    The different behaviors are just the different analyses performed by the different decider functions, of the same input.

    /being analyzed/ is not a behavior of DD, even if it's
    done with simulation.

    Just like having its cosine taken isn't the behavior of an angle.

    When DD is terminating, and it is being analyzed such that a complete,
    correct simulation of it is performed, only then does /having been
    analyzed by simulation/ coincide with DD's behavior.

    Analysis by simulation is tantalizingly close to behavior.
    They are so closely related, that you confused one for the other,
    believing that two different analyses of DD mean that it has
    different behavior.

    It is not a behavior of DD that its simulation by HHH suddenly stops,
    even though it has executed exactly the same x86 steps as the simulation
    under HHH1. It is an analysis on the part of HHH, of DD.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 01:36:38 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    The only relevant cases now are DD.HHH where DD
    is simulated by HHH AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    DD is always the diagonal case, targeting HHH.

    It can be decided by something that is not HHH.

    You need not be proving this, if your aim is to
    topple the halting theorem.

    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.


    You are still trying to cheat to show that DD emulated
    by HHH according to the semantics of the x86 language
    reaches its final halt state by doing all kinds of things
    that are not pure simulation.

    You seem to be arguing with yourself.

    The only words above which are not yours (but are rather mine) are:

    "DD is always the diagonal case, targeting HHH.

    It can be decided by something that is not HHH.

    You need not be proving this, if your aim is to
    topple the halting theorem."

    I just saw you assert something very similar to these words
    in another thread, so I'm assuming your new remark above
    is aimed at your own quoted remarks.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 20:37:39 2025
    From Newsgroup: comp.theory

    On 9/16/2025 8:02 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    Rather, it is ridiculously dumb that you cannot see that
    this bypass doesn't occur, in the diagonal test case, when
    the decider returns a "does not halt" decision (HHH(DD) -> 0).


    That is not the actual behavior specified by the actual input to HHH(DD)
    Speaking of which, you have been dodging the question of specifying
    what the "actual input" to HHH comprises in the expression HHH(DD).


    The termination analyzer HHH is only examining whether
    or not the finite string of x86 machine code ending
    in the C3 byte "ret" instruction can be reached by the
    behavior specified by this finite string.

    Are you referring to the sequence of instructions comprising
    the procedure DD?


    The C function DD.

    This behavior does include that DD calls HHH(DD)
    in recursive simulation. DD is the program under
    test and HHH is the test program.

    The finite string of x86 machine code instructions
    does include the instruction "CALL HHH, DD".


    Yes and the behavior of HHH simulating an instance of itself
    simulating an instance of DD is simulated.

    But are you not aware that the entire HHH routine is also part of
    the input?


    It is only a part of the input in that HHH
    must see the results of simulating an instance
    of itself simulating an instance of DD to determine
    whether or not this simulated DD can possibly reach
    its own simulated final halt state. Other than
    that, it is not part of the input.

    Anything that is called by a piece of the input must be included in the
    bill of materials that comprise the input.

    You understood that each decider can have an input
    defined to "do the opposite" of whatever this decider
    decides, thwarting the correct decision for this
    decider/input pair.

    But in order to do that, the input must be understood to be
    carrying a copy of that decider.


    I think that this was conventionally ignored prior
    to my deep dive into simulating termination analyzers.

    Or possibly, not an exact copy.

    The input can be carrying an /equivalent algorithm/.

    You also understand that another different decider
    can correctly decide this same input.

    Thanks to me, several others, and years of patient effort,
    you also now understand that, which is great.


    The ONLY thing that I learned from anyone in this
    group that I didn't already know is the idea that
    Turing computability requires C functions to be
    pure functions. That was a long and arduous process.

    None of my three theory of computation textbooks
    seemed to mention this at all. Where did you get
    it from?

    You seem to get totally confused when these are
    made specific by HHH/DD and HHH1/DD.

    If you think that it is impossible for DD to have
    different behavior between these two cases then how
    is it that one is conventionally undecidable and
    the other is decidable?

    What is "undecidable" is universal halting;

    No, No, No, No, No, No, No, No, No, No.
    That is only shown indirectly by the fact
    that the conventional notion of H/D pairs
    H is forced to get the wrong answer.

    it is an undecidable problem
    meaning that we don't have a terminating algorithm that will give an
    answer for every possible input.

    That's what the word "undecidable" means.


    The same general notion is (perhaps unconventionally)
    applied to the specific H/D pair where H is understood
    to be forced to get the wrong answer.

    I don't think there is a conventional term for the way the H
    of the H/D pair is forced to get the wrong answer.

    The specific test case DD is decidable. For the set of computations
    consisting of { DD }, we /can/ have an algorithm which decides that
    entire set { DD }, if it is not required to decide anything else.


    Sure, it is the DD1/HH1, DD2/HH2, DD3/HH3 cases that are an issue.

    The relationship between HHH and DD isn't that DD is "undecidable" to
    HHH, but that HHH /doesn't/ decide DD (either by not terminating or
    returning the wrong value). This is by design; DD is built on HHH and
    designed such that HHH(DD) is incorrect, if HHH(DD) terminates.


    So what conventional term do we have for the undecidability of a
    single H/D pair? "H forced to get the wrong answer" seems too clumsy.

    HHH(DD) disqualifying itself by not terminating is entirely the fault
    of the designer of HHH.


    Termination analyzers need not be pure functions.
    It will probably take an actual computer scientist
    to redefine HHH as a pure function of its inputs
    that keeps the exact same correspondence to the
    HP proofs.

    HHH(DD) being wrong when it does terminate is brought about by the
    designer of DD. That designer always has the last word since HHH
    is a building block of DD, not the other way around.


    Yet what no one else here understands is that
    The actual behavior specified by the actual input to HHH(DD)
    The actual behavior specified by the actual input to HHH(DD)
    The actual behavior specified by the actual input to HHH(DD)
    The actual behavior specified by the actual input to HHH(DD)
    Is the only thing that really counts.

    What's different between two deciders like HHH and HHH1 is
    their /analysis of DD/.

    Analysis of DD is not the /behavior/ of DD!


    I conclusively proved otherwise and you utterly refuse
    to pay close enough attention. You still think that
    DD simulated by HHH reaches its own final halt state,
    not even understanding that your mechanism for doing
    this is more than a pure simulation of the input THUS CHEATING

    You have chosen simulation as the key part of your analysis.
    Simulation follows the behavior of its target,

    NOT AT ALL
    NOT AT ALL
    NOT AT ALL
    NOT AT ALL
    NOT AT ALL

    It follows the semantics specified by the input finite string.
    It follows the semantics specified by the input finite string.
    It follows the semantics specified by the input finite string.
    It follows the semantics specified by the input finite string.

    so that its
    structure resembles behavior. That's where you are getting
    confused. Analysis of a computation isn't its behavior,

    It sure as Hell is. WTF is halting if not its behavior?
    It sure as Hell is. WTF is halting if not its behavior?
    It sure as Hell is. WTF is halting if not its behavior?

    even if it involves detailed tracing.

    Only the /complete/ and /correct/ simulation of a terminating
    calculation can be de facto regarded as a bona fide representation of
    its behavior, and discussed as if it were its behavior.


    It follows the semantics specified by the input finite string.
    It follows the semantics specified by the input finite string.
    It follows the semantics specified by the input finite string.

    Any simulation that falls short of this is just an incomplete and/or incorrect analysis, and not a description of the subject's
    behavior.


    It follows the semantics specified by the input finite string.
    It follows the semantics specified by the input finite string.
    It follows the semantics specified by the input finite string.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 20:43:03 2025
    From Newsgroup: comp.theory

    On 9/16/2025 8:30 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    The only relevant cases now are DD.HHH where DD
    is simulated by HHH AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    DD is always the diagonal case, targeting HHH.

    It can be decided by something that is not HHH.


    Thus proving that the exact same DD directly
    causes TWO DIFFERENT BEHAVIORS !!!

    The same angle theta=0 causes different behaviors in sin(theta) and cos(theta).

    sin(0) says 0; cos(0) says 1 (halts).

    Does that mean that the 0 in sin(0) and the 0 in cos(0) are different
    angles?

    The different behaviors are just the different analyses performed by the different decider functions, of the same input.

    /being analyzed/ is not a behavior of DD, even if it's
    done with simulation.

    Just like having its cosine taken isn't the behavior of an angle.

    When DD is terminating, and it is being analyzed such that a complete, correct simulation of it is performed, only then does /having been
    analyzed by simulation/ coincide with DD's behavior.

    Analysis by simulation is tantalizingly close to behavior.

    THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY
    SPECIFIES IS THE ONLY BEHAVIOR THAT ANY HALT DECIDER
    CAN EVER REPORT.

    Is it really that hard to understand that halt deciders
    cannot have psychic ability?
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 21:12:48 2025
    From Newsgroup: comp.theory

    On 9/16/2025 8:13 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:17 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    It enrages me that people insist that I must be
    wrong and they do this entirely on the basis of
    refusing to pay enough attention.

    The problem is that your goalposts for "enough attention" are for
    people to see things which are not there.


    Enough attention is (for example) 100% totally understanding
    every single detail of the execution trace of DDD
    simulated by HHH1 that includes DDD correctly simulated
    by HHH.

    But you don't have that understanding yourself. You don't have the
    insight to see that the abandoned simulation of DD, left behind by HH,
    is in a state that could be stepped further with DebugStep and that
    doing so will bring it to termination.

    I proved that these traces do not diverge at the exact
    same point that HHH aborts FIFTEEN TIMES NOW and still
    ZERO PEOPLE HAVE NOTICED.

    Exactly! Now you are getting it. You have two simulations of the same
    calculation. They do not diverge.

    Then, one is abandoned. But that abandonment doesn't make them
    diverge!


    *THAT IS COUNTER-FACTUAL TO THE EXTENT THAT YOU ARE DISHONEST*

    *We have two simulations that do not diverge until*
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 21:14:19 2025
    From Newsgroup: comp.theory

    On 9/16/2025 8:36 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    The only relevant cases now are DD.HHH where DD
    is simulated by HHH AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    DD is always the diagonal case, targeting HHH.

    It can be decided by something that is not HHH.

    You need not be proving this, if your aim is to
    topple the halting theorem.

    This means that it has always been common knowledge
    that the behavior of DD with HHH(DD) is different
    than the behavior of DD with HHH1(DD) yet everyone
    here disagrees because they value disagreement over
    truth.


    You are still trying to cheat to show that DD emulated
    by HHH according to the semantics of the x86 language
    reaches its final halt state by doing all kinds of things
    that are not pure simulation.

    You seem to be arguing with yourself.

    The only words above which are not yours (but are rather mine) are:

    "DD is always the diagonal case, targeting HHH.


    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 21:45:15 2025
    From Newsgroup: comp.theory

    On 9/16/2025 8:30 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    The only relevant cases now are DD.HHH where DD
    is simulated by HHH AKA the conventional diagonal
    relationship and DD.HHH1 where the same input does
    not form the diagonal case thus is conventionally
    decidable.

    DD is always the diagonal case, targeting HHH.

    It can be decided by something that is not HHH.


    Thus proving that the exact same DD directly
    causes TWO DIFFERENT BEHAVIORS !!!



    <snip>


    Analysis by simulation is tantalizingly close to behavior.

    DD simulated by HHH according to the semantics of
    the x86 language IS 100% EXACTLY AND PRECISELY THE
    BEHAVIOR THAT
    *THIS INPUT THIS INPUT THIS INPUT THIS INPUT THIS INPUT*
    SPECIFIES.

    <snip>

    int sum(int x, int y){ return x + y; }
    sum(5,6) does not specify the sum of 7 + 8
    even if everyone in the universe disagrees.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Tue Sep 16 22:53:05 2025
    From Newsgroup: comp.theory

    On 9/16/2025 10:45 PM, olcott wrote:

    int sum(int x, int y){ return x + y; }
    sum(5,6) does not specify the sum of 7 + 8
    even if everyone in the universe disagrees.



    int sum(int x, int y){ return (x+2) + (y+2); }

    How about now?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Wed Sep 17 04:59:39 2025
    From Newsgroup: comp.theory

    On 17/09/2025 01:01, olcott wrote:
    On 9/16/2025 5:50 PM, André G. Isaak wrote:
    On 2025-09-16 16:25, olcott wrote:

    <snip>

    This is a matter of life and death of the planet
    and stopping the rise of the fourth Reich.

    When truth becomes computable then the liars
    cannot get away with their lies.

    If it's so important, then why do you go to such lengths not to
    answer fairly straightforward questions about your claims (like
    what your dot notation means)?


    Because NOT doing this encourages you to go
    back and more carefully study the details

    No, it doesn't. It just makes us think that you daren't answer
    the questions in a lucid, straightforward manner because, if you
    do, your mental model will all fall apart as you see that there's
    nothing behind it.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Sep 16 23:08:33 2025
    From Newsgroup: comp.theory

    On 9/16/2025 10:59 PM, Richard Heathfield wrote:
    On 17/09/2025 01:01, olcott wrote:
    On 9/16/2025 5:50 PM, André G. Isaak wrote:
    On 2025-09-16 16:25, olcott wrote:

    <snip>

    This is a matter of life and death of the planet
    and stopping the rise of the fourth Reich.

    When truth becomes computable then the liars
cannot get away with their lies.

If it's so important, then why do you go to such lengths to avoid
answering fairly straightforward questions about your claims (like what
your dot notation means)?


    Because NOT doing this encourages you to go
    back and more carefully study the details

No, it doesn't. It just makes us think that you daren't answer the
questions in a lucid, straightforward manner because, if you do, your
mental model would all fall apart as you see that there's nothing behind it.


    The execution trace that I posted provides every
    relevant detail. When you ask these kinds of questions
    that only proves you did not study the execution trace
    well enough.

    I have proved to Mike that the simulation of DD by HHH
    does not diverge when HHH stops simulating it about five
    times now and he still doesn't get it.

    It diverges when HHH simulates DD one more time than HHH1
    ever does. I say this. The execution trace proves this
    and he never hears it.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 04:58:27 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 8:02 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    Rather, it is ridiculously dumb that you cannot see that
    this bypass doesn't occur, in the diagonal test case, when
    the decider returns a "does not halt" decision (HHH(DD) -> 0).


That is not the actual behavior specified by the actual input to HHH(DD)
    Speaking of which, you have been dodging the question of specifying
    what the "actual input" to HHH comprises in the expression HHH(DD).


    The termination analyzer HHH is only examining whether
    or not the finite string of x86 machine code ending
    in the C3 byte "ret" instruction can be reached by the
    behavior specified by this finite string.

Are you referring to the sequence of instructions comprising
    the procedure DD?


    The C function DD.

    This behavior does include that DD calls HHH(DD)
    in recursive simulation. DD is the program under
    test and HHH is the test program.

    The finite string of x86 machine code instructions
    does include the instruction "CALL HHH, DD".


    Yes and the behavior of HHH simulating an instance of itself
    simulating an instance of DD is simulated.

    The problem you don't realize is that DD could instead have "CALL GGG,
    DD", where GGG is a clean-room reimplementation of HHH, based
    on a careful specification from reverse-engineering HHH.

    Your halting decision then screws up because it compares addresses
    to answer the question "are these two functions the same function?"

    When HHH(DD) analyzes DD and sees that DD calls GGG(DD),
    it must recognize that HHH and GGG are the same, because GGG is a
    clean-room clone of HHH.

    The way you are testing function equivalence is flawed.

    Function equivalence cannot be determined by address because,
    for instance:

    add_foo(x, y) { return x + y }
    add_bar(x, y) { return x + y }

    are just two names/addresses for the same computation.

    There is no algorithm which determines whether two functions
    are the same; it is an undecidable problem.

    Your abort decision is based on a strawman implementation of
    a problem which is undecidable in its correct form.

    You are seeing this problem in your own code. You created a clone
    of HHH called HHH1, which is the same except for the name.
    Yet, it's behaving differently.

    Why? Because when your machinery sees CALL HHH1 and CALL HHH
in the execution trace, it treats them as different functions.

    You must not do that. You must compare function pointers using
    your own CompareFunction(X, Y) function which is calibrated
    such that CompareFunction(HHH, HHH1) yields true.

If you consistently use CompareFunction everywhere you would
    otherwise use X == Y, you will see that HHH1(DD) and HHH(DD)
    proceed identically.

    But are you not aware that the entire HHH routine is also part of
    the input?


    It is only a part of the input in that the HHH
    must see the results of simulating an instance
    of itself simulating an instance of DD to determine
    whether or not this simulated DD can possibly reach
    its own simulated final halt state. Other than
that it is not part of the input.

    That's where you are wrong. DD is built on HHH. Saying HHH
    is not part of the input is like saying the engine is not
    part of a car.

    None of my three theory of computation textbooks
    seemed to mention this at all. Where did you get
    it from?

    By paying attention in CS classes.

    Pure functions are important in topics connected with programming
    languages. Their properties are useful in compiler optimiization,
    concurrent programming and what not.

    If you want to explore the theory of computation by writing code
    in a programming language which has imperative features (side effects),
    it behooves you to make your functions behave as closely to the
    theoretical ones as possible, which implies purity.

    You seem to get totally confused when these are
    made specific by HHH/DD and HHH1/DD.

    If you think that it is impossible for DD to have
    different behavior between these two cases then how
    is it that one is conventionally undecidable and
    the other is decidable?

    What is "undecidable" is universal halting;

    No, No, No, No, No, No, No, No, No, No.
That is only shown indirectly by the fact
that in the conventional notion of H/D pairs
H is forced to get the wrong answer.

    Anyway, I'm only illustrating the term "undecidable" and how it is
    used. It is used to describe situations when we believe we don't have
    an algorithm that terminates on every input in the desired input set.

it is an undecidable problem
    meaning that we don't have a terminating algorithm that will give an
    answer for every possible input.

    That's what the word "undecidable" means.


    The same general notion is (perhaps unconventionally)
    applied to the specific H/D pair where H is understood
    to be forced to get the wrong answer.

I don't think there is a conventional term for when the H
of the H/D pair is forced to get the wrong answer.

Just "gets the wrong answer". The case, as a set of one,
    is decidable.

    The relationship between HHH and DD isn't that DD is "undecidable" to
    HHH, but that HHH /doesn't/ decide DD (either by not terminating or
returning the wrong value). This is by design; DD is built on HHH and
    designed such that HHH(DD) is incorrect, if HHH(DD) terminates.


    So what conventional term do we have for the undecidability
of a single H/D pair? "H forced to get the wrong answer" seems too clumsy.

    It's not only clumsy, but it's wrong. Nothing is forced.

    Forced means that an altered course of action is imposed by
    interference where a different course was to take place without
    interference.

    But H is not altered; it is set in stone, and then D is built on top of
    it, revealing a latent flaw in H.

HHH(DD) disqualifying itself by not terminating is entirely the fault of
    the designer of HHH.


    Termination analyzers need not be pure functions.

    A termination analyzer needs to be an algorithm. An algorithm is an
    abstraction that can be rendered in purely functional form, as a
    sequence of side-effect-free transformations of data representations.

    Any simulation that falls short of this is just an incomplete and/or
    incorrect analysis, and not a description of the subject's
    behavior.

    It follows the semantics specified by the input finite string.

    Not all the way to the end, right? The semantics is not completely
    evolved when the simulation is abruptly abandoned.

    (And you have the wrong idea of what that input is; the input
    isn't just the body of D, but all of HHH, and all the simulation
    machinery that HHH calls. Debug_Step is part of DD, etc.)
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 05:26:58 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    Analysis by simulation is tantalizingly close to behavior.

    DD simulated by HHH according to the semantics of
    the x86 language IS 100% EXACTLY AND PRECISELY THE

    DD is not simulated by HHH according to the semantics of the
    x86 language.

Can you cite the chapter and verse of any Intel architecture manual
where it says that the CPU can go into a suspended state whenever the
same function is called twice, without any intervening conditional
branch instructions?

    HHH's partial, incomplete simulation of DD is not an evocation
    of DD's behavior. It is just a botched analysis of DD.

    Only the completed simulation of a terminating procedure can be
    identified as having evoked a representation of its behavior.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 05:30:17 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE

    That's vacuously true because DD is never correctly
    simulated by HHH.

    The diagonal case is something that happens; a situation which never
happens is never identifiable as one which happens.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From vallor@vallor@cultnix.org to comp.theory on Wed Sep 17 06:34:54 2025
    From Newsgroup: comp.theory

    On Mon, 15 Sep 2025 21:59:33 +0100, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote in <10a9unl$26h2q$1@dont-email.me>:

    hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option.
    I don't have objcopy to test, but PO only needs a few basic COFF capabilities, so it might be enough...

    <https://man7.org/linux/man-pages/man1/objcopy.1.html>

    Mike.

    $ objcopy -I elf64-x86-64 -O pe-i386 hello.o hello.coff
    $ file hello.coff
    hello.coff: Intel 80386 COFF object file, no line number info, not
    stripped, 8 sections, symbol offset=0x22e, 4 symbols, 1st section name
    ".text"

    Looks like it can handle it -- whether or not Olcott's COFF handler
    can read those, no clue.

    (Something tells me one would have to build the original .o file
    with -m32 -march=i386, but my machine isn't set up to do that. That,
    because IIRC Olcott's system is 32-bit.)
    --
    -v
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From vallor@vallor@cultnix.org to comp.theory on Wed Sep 17 06:46:26 2025
    From Newsgroup: comp.theory

    On Tue, 16 Sep 2025 20:12:27 -0500, olcott <polcott333@gmail.com> wrote in <10ad1ts$2u79h$1@dont-email.me>:

    On 9/16/2025 7:56 PM, Mike Terry wrote:
    On 16/09/2025 22:54, Kaz Kylheku wrote:
    On 2025-09-16, Mike Terry
    <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 16/09/2025 06:50, Kaz Kylheku wrote:
    On 2025-09-16, Mike Terry
    <news.dead.person.stones@darjeeling.plus.com> wrote:
    That's what happens now in effect - x86utm.exe log starts tracing
    at the halt7.c main() function.

    Then why does he keep claiming that main() is "native"? Nothing is
    "native" in the loaded COFF test case, then.

    I genuinely can't see why you would be asking that question, so I'm
    missing something.

    Just I thought that the host executable just branches into the loaded
    module's main(). (Which would be a sensible thing to do; there is no
need to simulate anything outside of the halting decider such as HHH.)

    I see - no, x86utm parses input file halt7.obj to grab what it needs
    regarding data/code sectors, symbol definitions (function names and
    locations), relocation fixups table, then uses that to initialise a
    libx86emu "virtual address space".  In effect it performs its own
    equivalent of LoadLibrary() within that virtual address space, loading
    the module starting at low memory address 0x00000000.  halt7.obj is
    never linked to form an OS executable.  (I suppose it could be, perhaps
    with minor changes...)


Yes, and I did that all myself from scratch.
https://github.com/plolcott/x86utm/blob/master/include/Read_COFF_Object.h
does most of this work.

    I'm not sure x86utm directly calling halt7.c's main() would be a good
    design, while it remains the case that simulation performed by HHH is
    within the libx86emu virtual machine.  It could be done that way, but
    then code like HHH would be running in two completely separate
    environments: Windows/Unix host, and libx86emu virtual address space.
    They must be exactly the same, and it's a good thing if they're easily
    seen and verified as being the same.  That happens if they're both
    performed by x86utm code via libx86emu, and also it means x86utm's log
    shows both in the same format.

    Also ISTM the hosting environment should be logically divorced from the
    halt7.c code as far as possible.  E.g. let's imagine x86utm is doing
    its stuff, but then it turns out the x86utm address space (which is
    32-bit,
like the libx86emu virtual address space) starts requiring bigger
tables and allocations for running multiple libx86emu virtual address
    spaces or whatever, and a resource limit is encountered.  We get past
    that by making x86utm.exe 64-bit.  That should all be routine
    (expecting the usual 32- to 64-bit code porting issues...)  Problem
    gone!  x86utm.exe now has 64-bits of address space (or close after OS
    is taken out) and libx86emu still creates its 32-bit virtual machines
    to run halt7.c code.  (You can see where this leads!)  But now your
design of x86utm directly calling main() and hence HHH() means that HHH
has to be both 64-bit (for x86utm to directly call) and 32-bit (to run
    under libx86emu).  Or perhaps I just want to run PO's code on some RISC
    architecture, not x86 at all - I can compile C++ code (x86utm) to run
    on that RISC CPU, but halt7.c absolutely must be x86 code...


    The author of libx86emu had to upgrade it from 16-bit to 32-bit for me.
    No need for 64-bit.

Alternatively, x86utm could be designed so that halt7.c's main() is
    invoked exactly like any other simulation started e.g. by HHH within
    halt7.c.
    That would need some Simulate() function to drive the DebugStep() loop,
    and where would that be?  If in halt7.c, what simulates Simulate()?  Or
    it could be hard-coded into x86utm since it never changes.  Dunno......


    That is all in x86utm.cpp


    If you're just making the point that /all/ the code in halt7.c is
    "executed" within PO's x86utm,
that's perfectly correct.  With the possible exception of main(), all
the code in halt7.c is "TM code" or simulations made by that TM code.

    Is there a possible exception? I'm looking at the code now and it
    looks like if the simulation from the entry point into the loaded file
    is unconditional; it doesn't appear to be an option to branch to it
    natively.

    I'm not sure what you're referring to.
    You're looking at x86utm code or halt7.c code?

    The latter is never linked to an executable, so it can /only/ be
    executed within x86utm via libx86emu virtual x86 machine.

    x86utm.exe code runs under the hosting OS, reads and "loads" the
    halt7.obj code into the libx86emu VM, then runs its own loop in
    [x86emu.cpp]Halts() which calls Execute_Instruction() until
    [halt7.c]main() returns.  HHH code in halt7.c makes occasional
    DebugStep() calls to step its simulation, and DebugStep transfers into
    x86utm's [x86emu.cpp]DebugStep() which in turn calls
    Execute_Instruction() to step HHH's simulation.

    x86utm stack at that point will have:

Execute_Instruction()     // simulated instruction from halt7.c
DebugStep()               // ooh! a nested simulation being stepped!
                          // has called back to x86utm DebugStep
Execute_Instruction()     // simulated instruction from halt7.c
DebugStep()               // instruction was a DebugStep in halt7.c which
                          // has called back to x86utm DebugStep
Execute_Instruction()     // 1 instruction from halt7.c
Halts()                   // x86utm loop simulating [halt7.c]main()
main()


    The TM code is "directly executed" [that's just what the phrase means
    in x86utm context] and code it simulates using DebugStep() is
    "simulated".

    That distinction makes no sense, like a lot of things from P. O.
    I was tripped up thinking that directly executed means using the host
    processor.

    Not sure who coined the term.  PO had shown HHH(DD), where HHH decides
    DD never halts.  Posters wanted to point out that whatever HHH decides,
    it needs to match up to [what DD actually does] but what is the phrase
    for that?  PO tries to only discuss "DD *simulated by HHH*" so in
    contrast posters came up with "DD *run natively*" or "DD *executed
directly* (from main)" etc., to contrast with HHH's simulations.  What
    phrase would you use?

    x86utm architecture and hosting OS's (Windows/Unix) is really
    orthogonal to all this.



    Like I said it all runs just fine under Linux.
The Linux Makefile is still there.


    "Directly Executed" should be equivalent to a wrapper which calls
    DebugStep, except that if we open-code the DebugStep loop, we can
    insert halting criteria, and trace recording and whatnot.

    I think people discussing that might refer to a UTM here, e.g UTM(DD),
    where UTM would be a function in halt7.c that simulates until
    completion.  In TM world, UTM(DD) is still a TM UTM simulating DD,
    which is conceptually different from what I would think of as DD
    "directly executed" (which is just the TM DD!  But PO doesn't grok TMs
    and computations, always thinking instead of actual computers loading
    and running "computer programs" (aka TM-description strings).

    Also if we have 10 posters posting here, we'll have 10 slightly
    different terminology uses + PO's understanding....  :)

    Anyhow in x86utm world as-is, We can put messages into [halt7.c]main().
    Halting criteria naturally (ISTM) go in [halt7.c]HHH.  Like in the HP,
    if H is a TM halt decider, the halting criteria it applies are in H,
    not some meta-level simulator running the TM H.  (There is no such
    thing. H itself does not need criteria to be aborted or "halt-decided",
    it's just a "native" TM, so to speak.)


    Mike.


    HHH in Halt7.c calls all of its helper functions in Halt7.c and some
    helper functions directly in the x86utm OS. These are stubs in Halt7.c.

    Can you verify that this x86utm is not a fork of this? :

    https://github.com/utmapp/UTM

    Thanks.
    --
    -v
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From joes@noreply@example.org to comp.theory on Wed Sep 17 07:18:46 2025
    From Newsgroup: comp.theory

On Tue, 16 Sep 2025 20:37:39 -0500, olcott wrote:
    On 9/16/2025 8:02 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:

    Speaking of which, you have been dodging the question of specifying
    what the "actual input" to HHH comprises in the expression HHH(DD).
    The termination analyzer HHH is only examining whether or not the
    finite string of x86 machine code ending in the C3 byte "ret"
    instruction can be reached by the behavior specified by this finite
    string.
Are you referring to the sequence of instructions comprising the
    procedure DD?
    The C function DD.
    Is the input to HHH(DDD).

    But are you not aware that the entire HHH routine is also part of the
    input?
    It is only a part of the input in that the HHH must see the results of simulating an instance of itself simulating an instance of DD to
determine whether or not this simulated DD can possibly reach its own simulated final halt state. Other than that it is not part of the input.
    Yes, and HHH does not simulate itself as halting.
    How else would it be part of the input?

    You understood that each decider can have an input defined to "do the
    opposite" of whatever this decider decides thwarting the correct
    decision for this decider/input pair.
    But in order to do that, the input must be understood to be carrying a
    copy of that decider.
    I think that this was conventionally ignored prior to my deep dive into simulating termination analyzers.
    Exactly the other way around.

    If you think that it is impossible for DD to have different behavior
    between these two cases then how is it that one is conventionally
    undecidable and the other is decidable?
    What is "undecidable" is universal halting;
    That is only shown indirectly by the fact that the conventional notion
    of H/D pairs H is forced to get the wrong answer.
    It is shown.

it is an undecidable problem meaning that we don't have a terminating
    algorithm that will give an answer for every possible input.
    That's what the word "undecidable" means.
    The same general notion is (perhaps unconventionally)
    applied to the specific H/D pair where H is understood to be forced to
    get the wrong answer.
I don't think there is a conventional term for when the H of the H/D pair is forced to get the wrong answer.
    "Wrong".

HHH(DD) disqualifying itself by not terminating is entirely the fault of
    the designer of HHH.
    Termination analyzers need not be pure functions.
    It will probably take an actual computer scientist to redefine HHH as a
    pure function of its inputs that keeps the exact same correspondence to
    the HP proofs.
    That is impossible.

    HHH(DD) being wrong when it does terminate is brought about by the
    designer of DD. That designer always has the last word since HHH is a
    building block of DD, not the other way around.

    What's different between two deciders like HHH and HHH1 is their
    /analysis of DD/. Analysis of DD is not the /behavior/ of DD!
    I conclusively proved otherwise and you utterly refuse to pay close
    enough attention. You still think that DD simulated by HHH reaches its
    own final halt state not even understanding that your mechanism for
    doing this is more than a pure simulation of the input THUS CHEATING
    No. You didn't prove that the simulation matches the direct execution.
    Nobody thinks that HHH as it is simulates DD returning. Jumping past
    the call is a strawman.

    [spam snipped]
    --
On Sat, 20 Jul 2024 12:35:31 +0000, WM wrote in sci.math:
    It is not guaranteed that n+1 exists for every n.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From vallor@vallor@cultnix.org to comp.theory on Wed Sep 17 07:20:44 2025
    From Newsgroup: comp.theory

    On Tue, 16 Sep 2025 17:11:46 -0500, olcott <polcott333@gmail.com> wrote in <10acnb4$2ru85$1@dont-email.me>:

    On 9/16/2025 4:49 PM, André G. Isaak wrote:
    On 2025-09-16 13:19, olcott wrote:
    On 9/16/2025 2:09 PM, joes wrote:
On Tue, 16 Sep 2025 13:29:28 -0500, olcott wrote:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:

    I guess I missed an earlier post.
    What do DD.HHH1 and DDD.HHH mean?

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

So DD.HHH1 means DD *OF* HHH1?  You mean DD simulated by HHH1?
And so DD.exe (I think I've seen that) would mean DD directly
executed?
The alternative (which is what I would have guessed) is "DD
which calls HHH1", but it seems you don't mean that (?)

    DD simulated by HHH1 has the same behavior as DD().
DD simulated by HHH cannot possibly reach its final halt state.
    I didn't ask that.

All of the details of the trace that you erased answer every
possible question that you could possibly have about these things.
My answer directed you to the trace.
    HHH(DD) is the conventional diagonal case and HHH1(DD) is the
    common understanding that another different decider could
    correctly decide this same input because it does not form the
    diagonal case.

    That's all very interesting, and all, but what I want to know is:
    Does DD.HHH1 mean DD simulated by HHH1?
    Does DD.exe mean DD directly executed?

    The only relevant cases now are DD.HHH where DD is simulated by HHH
    AKA the conventional diagonal relationship and DD.HHH1 where the
    same input does not form the diagonal case thus is conventionally
    decidable.
    You could have just said "yes" the first time.


    It enrages me that people insist that I must be wrong and they do this
    entirely on the basis of refusing to pay enough attention.

Chill, dude...

    Mike Terry never said anything about you being right or wrong. He
    merely asked you to clarify your dot notation...

    André


    I am not referring to Mike specifically yet it does seem that he did say
    that I am wrong on the basis of his own lack of understanding of one
    very key point.

    What is at stake here is life on Earth (death by climate change) and the
    rise of the fourth Reich on the basis that we have not unequivocally
    divided lies from truth.

    My system of reasoning makes the set of {True on the basis of meaning} computable.

Is severe climate change caused by humans? YES. Is Donald Trump exactly copying Hitler's rise to power? YES.

    I...don't follow. These decisions, true or false, have nothing
    to do with the halting problem.

    Isn't it the truth that some attempts at deciders turn out to
    be undecidable?
    --
    -v
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From vallor@vallor@cultnix.org to comp.theory on Wed Sep 17 07:32:38 2025
    From Newsgroup: comp.theory

    On Tue, 16 Sep 2025 19:07:11 -0500, olcott <polcott333@gmail.com> wrote in <10acu3g$2tftd$1@dont-email.me>:

    On 9/16/2025 5:53 PM, Mike Terry wrote:
    On 16/09/2025 23:25, olcott wrote:
    On 9/16/2025 5:10 PM, Mike Terry wrote:
    On 16/09/2025 19:29, olcott wrote:
    On 9/16/2025 1:15 PM, Mike Terry wrote:
    On 16/09/2025 18:44, olcott wrote:
    On 9/16/2025 10:44 AM, Mike Terry wrote:
    On 16/09/2025 05:13, olcott wrote:
    On 9/15/2025 10:52 PM, Mike Terry wrote:
    On 16/09/2025 03:37, olcott wrote:
    On 9/15/2025 9:10 PM, Mike Terry wrote:
    On 16/09/2025 00:52, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
On Mon, 15 Sep 2025 13:19:26 -0500, olcott wrote:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
Yeah, why does HHH think that it doesn't halt *and then
halts*?


DD.HHH1 halts. DDD.HHH cannot possibly reach its final halt state.


    I guess I missed an earlier post.

    What do DD.HHH1 and DDD.HHH mean?


    Mike.

    *DDD of HHH1 versus DDD of HHH see below*
    DDD of HHH keeps calling HHH(DD) to simulate it again.
    DDD of HHH1 never calls HHH1 at all.

So DD.HHH1 means DD *OF* HHH1?  You mean DD simulated by HHH1?
And so DD.exe (I think I've seen that) would mean DD directly
executed?

The alternative (which is what I would have guessed) is "DD
which calls HHH1", but it seems you don't mean that (?)


    Mike.

    DD simulated by HHH1 has the same behavior as DD().
DD simulated by HHH cannot possibly reach its final halt state.
    I didn't ask that.


All of the details of the trace that you erased answer every
possible question that you could possibly have about these things.
My answer directed you to the trace.

    HHH(DD) is the conventional diagonal case and

    HHH1(DD) is the common understanding that another different
decider could correctly decide this same input because it does not
form the diagonal case.

    That's all very interesting, and all, but what I want to know is:

    Does DD.HHH1 mean DD simulated by HHH1?

    Does DD.exe mean DD directly executed?


    The only relevant cases now are DD.HHH where DD is simulated by HHH

    aha!  So that's a "yes" to the first question.

    AKA the conventional diagonal relationship and DD.HHH1 where the
    same input does not form the diagonal case thus is conventionally
    decidable.

    and I guess I'll just have to accept that as a "yes" for the second
    question


    This means that it has always been common knowledge that the
    behavior of DD with HHH(DD) is different than the behavior of DD
    with HHH1(DD) yet everyone here disagrees because they value
    disagreement over truth.

    and no answer to the third question.


    This is a matter of life and death of the planet and stopping the rise
    of the fourth Reich.

When truth becomes computable then the liars cannot get away with their
    lies.


    Dude!  Nothing going on here has even the remotest of effects on such
    world events.
    Mike.


So you have not bothered to notice that Trump is exactly copying Hitler?

    If it was not for the brave soul of the Senate parliamentarian
cancelling the kingmaker paragraph of Trump's Big Bullshit Bill, the USA would already be more than halfway to the dictatorship power of Nazi
    Germany.

    Truth can be computable !!!

    How do you compute "empathy" ...or "mercy"?

    These are matters of conscience. Is conscience nothing more
    than a computation?

    Humoring you a bit, it seems that the closest you could get to
    such a thing might be a system of ethics based on the results of
    game theory, part of information theory.

    But another part of information theory shows that the halting problem is undecidable.

    None of that has anything to do with explaining your notation. If
    you're cagey about such basic things, it invites suspicion.
    --
    -v
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From vallor@vallor@cultnix.org to comp.theory on Wed Sep 17 07:48:56 2025
    From Newsgroup: comp.theory

    On Mon, 15 Sep 2025 14:35:18 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote in <10aa0qm$26msp$5@dont-email.me>:

    On 9/15/2025 2:32 PM, olcott wrote:
    On 9/15/2025 4:30 PM, Chris M. Thomasson wrote:
    On 9/15/2025 2:26 PM, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?

    Anyway, HHH1 is not HHH and is therefore superfluous and
    irrelevant.
    DDD correctly simulated by HHH1 is identical to the behavior of the
    directly executed DDD().
    When we have emulation compared to emulation we are comparing
    Apples to Apples and not Apples to Oranges.

    You have yet to explain the significance of HHH1.
    Just did.
    Why are you comparing at all?

    HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus
    must abort DD itself.

    That's a tautology. HHH only sees the behavior that HHH has seen
    in order to convince itself that seeing more behavior is not
    necessary.
    Yes?
    HHH has complete proof that DDD correctly simulated by HHH cannot
    possibly reach its own simulated final halt state.

    So yes. What is the proof?


    It is apparently over everyone's head.

    void Infinite_Recursion()
    {
       Infinite_Recursion();
       return;
    }

    How can a program know with complete certainty that
    Infinite_Recursion() never halts?
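Humoring the question: the abort criterion attributed to HHH elsewhere in this thread (the same function re-entered with no intervening conditional branch) can be sketched over a recorded trace. The event encoding below is invented purely for illustration; it is not the halt7.c implementation.

```c
#include <string.h>

/* Hypothetical sketch, NOT halt7.c code: scan a recorded trace for
 * "same function called twice with no conditional branch in between",
 * the abort criterion attributed to HHH in this thread. */
typedef enum { EV_CALL, EV_COND_BRANCH, EV_OTHER } EventKind;

typedef struct {
    EventKind kind;
    const char *target;        /* callee name when kind == EV_CALL */
} Event;

/* Returns 1 when the trace shows fn called twice with no intervening
 * conditional branch; 0 otherwise. */
int repeats_without_branch(const Event *trace, int n, const char *fn)
{
    int saw_call = 0;
    for (int i = 0; i < n; i++) {
        if (trace[i].kind == EV_COND_BRANCH) {
            saw_call = 0;                  /* a branch resets the pattern */
        } else if (trace[i].kind == EV_CALL
                   && strcmp(trace[i].target, fn) == 0) {
            if (saw_call)
                return 1;                  /* second call, no branch seen */
            saw_call = 1;
        }
    }
    return 0;
}
```

For Infinite_Recursion() the trace is CALL, CALL with nothing conditional between them, so the pattern fires; the objection raised downthread is precisely that a trace containing conditional branches between the calls does not establish non-termination this way.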

    Check...


    Ummm... Your Infinite_Recursion example is basically akin to:

    10 PRINT "Halt" : GOTO 10

    Right? It says halt but does not... ;^)


    You aren't this stupid on the other forums



    Well, what are you trying to say here? That the following might halt?

    void Infinite_Recursion()
    {
    Infinite_Recursion();
    return;
    }

    I think not. Blowing the stack is not the same as halting...

    As a matter of principle, it's part of the execution environment,
    just like his partial decider simulating the code.

    In his world, catching it calling itself twice without an intervening
    decision is grounds to abort.

    In our world, when the stack gets used up, it aborts.

    Fair's fair. If one is valid -- and if he's thumping the x86 bible
    saying that the rules of the instruction set are the source of
    truth -- then he can't have an infinite stack.

    (
    The example could be
    loop: goto loop;

    But if I'm not mistaken, a decider can be written for such a trivial example...by parsing the source, not simulating it!
    )
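The parenthetical above can indeed be cashed out: a decider for just this one syntactic shape needs only pattern matching. The sketch below handles exactly the `label: goto label;` form and nothing more; the function name is invented for illustration.

```c
#include <stdio.h>
#include <string.h>

/* Sketch: decide the single trivial shape "label: goto label;" by
 * parsing the source text, not simulating it.  Returns 0 (never halts)
 * for a self-targeting goto, 1 otherwise.  Handles only this pattern. */
int trivial_goto_halts(const char *src)
{
    char label[64], target[64];
    if (sscanf(src, " %63[A-Za-z_] : goto %63[A-Za-z_] ;",
               label, target) == 2
            && strcmp(label, target) == 0)
        return 0;   /* self-loop: provably non-halting by inspection */
    return 1;       /* anything else: this toy makes no loop claim */
}
```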
    --
    -v
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory on Wed Sep 17 11:21:55 2025
    From Newsgroup: comp.theory

    Op 17.sep.2025 om 04:12 schreef olcott:
    On 9/16/2025 8:13 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:17 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    It enrages me that people insist that I must be
    wrong and they do this entirely on the basis of
    refusing to pay enough attention.

    The problem is that your goalposts for "enough attention" is for
    people to see things which are not there.


    Enough attention is (for example) 100% totally understanding
    everY single detail of the execution trace of DDD
    simulated by HHH1 that includes DDD correctly simulated
    by HHH.

    But you don't have that understanding yourself. You don't have the
    insight to see that the abandoned simulation of DD, left behind by HH,
    is in a state that could be stepped further with DebugStep and that
    doing so will bring it to termination.

    I proved that these traces do not diverge at the exact
    same point that HHH aborts FIFTEEN TIMES NOW and still
    ZERO PEOPLE HAVE NOTICED.

    Exactly! Now you are getting it. You have two simulations of the same
    calculations. They do not diverge.

    Then, one is abandoned. But that abandonment doesn't make them
    diverge!


    *THAT IS COUNTER-FACTUAL TO THE EXTENT THAT YOU ARE DISHONEST*

    *We have two simulations that do not diverge until*
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN
    HHH simulates DD ALL OVER AGAIN


    Up to the point where HHH1 continues and reaches the final halt state
    where HHH1 does not abort and reaches the final halt state. Proving that
    HHH aborted prematurely. One cycle later HHH would have reached the
    final halt state as well.
    Aborting and ignoring the rest of the specification of the input does
    not change the specification of the input.
    Aborting without analysis of what would happen if the simulation would continue, is not a proof for non-termination.
    Declaring a finite recursion a non-termination pattern is not a proof,
    it is only an assumption that needs a proof.
    In particular when many conditional branch instructions have been
    encountered during this finite recursion, for which the conditions
    can/will change in the next cycle.
    Repeating your assumptions to prove these assumptions does not help you
    to convince anyone.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Wed Sep 17 07:40:27 2025
    From Newsgroup: comp.theory

    On 9/17/2025 1:26 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    Analysis by simulation is tantalizingly close to behavior.

    DD simulated by HHH according to the semantics of
    the x86 language IS 100% EXACTLY AND PRECISELY THE

    DD is not simulated by HHH according to the semantics of the
    x86 language.

    Can you cite the chapter and verse of any Intel architecture manual
    where it says that the CPU can go into a suspended state whenever the
    same function is called twice, without any intervening conditional
    branch instructions?

    Sound familiar, Peter?


    HHH's partial, incomplete simulation of DD is not an evocation
    of DD's behavior. It is just a botched analysis of DD.

    Only the completed simulation of a terminating procedure can be
    identified as having evoked a representation of its behavior.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 07:50:02 2025
    From Newsgroup: comp.theory

    On 9/17/2025 1:46 AM, vallor wrote:
    On Tue, 16 Sep 2025 20:12:27 -0500, olcott <polcott333@gmail.com> wrote in <10ad1ts$2u79h$1@dont-email.me>:

    On 9/16/2025 7:56 PM, Mike Terry wrote:
    On 16/09/2025 22:54, Kaz Kylheku wrote:
    On 2025-09-16, Mike Terry
    <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 16/09/2025 06:50, Kaz Kylheku wrote:
    On 2025-09-16, Mike Terry
    <news.dead.person.stones@darjeeling.plus.com> wrote:
    That's what happens now in effect - x86utm.exe log starts tracing
    at the halt7.c main() function.

    Then why does he keep claiming that main() is "native"? Nothing is
    "native" in the loaded COFF test case, then.

    I genuinely can't see why you would be asking that question, so I'm
    missing something.

    Just I thought that the host executable just branches into the loaded
    module's main(). (Which would be a sensible thing to do; there is no
    need to simulate anything outside of the halting decider such as HHH.)

    I see - no, x86utm parses input file halt7.obj to grab what it needs
    regarding data/code sectors, symbol definitions (function names and
    locations), relocation fixups table, then uses that to initialise a
    libx86emu "virtual address space".  In effect it performs its own
    equivalent of LoadLibrary() within that virtual address space, loading
    the module starting at low memory address 0x00000000.  halt7.obj is
    never linked to form an OS executable.  (I suppose it could be, perhaps
    with minor changes...)


    Yes and I did that all myself from scratch.
    https://github.com/plolcott/x86utm/blob/master/include/Read_COFF_Object.h
    Does most of this work.

    I'm not sure x86utm directly calling halt7.c's main() would be a good
    design, while it remains the case that simulation performed by HHH is
    within the libx86emu virtual machine.  It could be done that way, but
    then code like HHH would be running in two completely separate
    environments: Windows/Unix host, and libx86emu virtual address space.
    They must be exactly the same, and it's a good thing if they're easily
    seen and verified as being the same.  That happens if they're both
    performed by x86utm code via libx86emu, and also it means x86utm's log
    shows both in the same format.

    Also ISTM the hosting environment should be logically divorced from the
    halt7.c code as far as possible.  E.g. let's imagine x86utm is doing
    its stuff, but then it turns out the x86utm address space (which is
    32-bit, like the libx86emu virtual address space) starts requiring bigger
    tables and allocations for running multiple libx86utm virtual address
    spaces or whatever, and a resource limit is encountered.  We get past
    that by making x86utm.exe 64-bit.  That should all be routine
    (expecting the usual 32- to 64-bit code porting issues...)  Problem
    gone!  x86utm.exe now has 64-bits of address space (or close after OS
    is taken out) and libx86emu still creates its 32-bit virtual machines
    to run halt7.c code.  (You can see where this leads!)  But now your
    design of x86utm directly calling main() and hence HHH() that means HHH
    has to be both 64-bit (for xx86utm to directly call) and 32-bit (to run
    under libx86emu).  Or perhaps I just want to run PO's code on some RISC
    architecture, not x86 at all - I can compile C++ code (x86utm) to run


    The author of libx86emu had to upgrade it from 16-bit to 32-bit for me.
    No need for 64-bit.

    Alternatively, x86utm could be designed so that halt7.c's main() is
    invoked exactly like any other simulation started e.g. by HHH within
    halt7.c.
    That would need some Simulate() function to drive the DebugStep() loop,
    and where would that be?  If in halt7.c, what simulates Simulate()?  Or
    it could be hard-coded into x86utm since it never changes.  Dunno......

    That is all in x86utm.cpp


    If you're just making the point that /all/ the code in halt7.c is
    "executed" within PO's x86utm, that's perfectly correct.  With the
    possible exception of main(), all the code in halt7.c is "TM code" or
    simulations made by that TM code.
    Is there a possible exception? I'm looking at the code now and it
    looks like the simulation from the entry point into the loaded file
    is unconditional; it doesn't appear to be an option to branch to it
    natively.

    I'm not sure what you're referring to.
    You're looking at x86utm code or halt7.c code?

    The latter is never linked to an executable, so it can /only/ be
    executed within x86utm via libx86emu virtual x86 machine.

    x86utm.exe code runs under the hosting OS, reads and "loads" the
    halt7.obj code into the libx86emu VM, then runs its own loop in
    [x86emu.cpp]Halts() which calls Execute_Instruction() until
    [halt7.c]main() returns.  HHH code in halt7.c makes occasional
    DebugStep() calls to step its simulation, and DebugStep transfers into
    x86utm's [x86emu.cpp]DebugStep() which in turn calls
    Execute_Instruction() to step HHH's simulation.

    x86utm stack at that point will have:

    Execute_Instruction()     // simulated instruction from halt7.c
    DebugStep()               // ooh! a nested simulation being stepped!
                              // has called back to x86utm DebugStep
    Execute_Instruction()     // simulated instruction from halt7.c
    DebugStep()               // instruction was a DebugStep in halt7.c which
                              // has called back to x86utm DebugStep
    Execute_Instruction()     // 1 instruction from halt7.c
    Halts()                   // x86utm loop simulating [halt7.c]main()
    main()
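The control shape Mike describes, Halts() stepping Execute_Instruction() until the loaded module's main() returns, looks roughly like the toy below. The names follow the thread, but the bodies are a stand-in countdown, not the real x86emu.cpp code.

```c
#include <stdio.h>

/* Toy stand-in for the outer x86utm loop described above.  The "VM" is
 * just a countdown; only the control shape matches the description. */
static int remaining = 5;            /* pretend instruction budget */

static int Execute_Instruction(void) /* 0 once the module's main() returned */
{
    printf("stepping, %d to go\n", remaining);
    return --remaining > 0;
}

int Halts(void)
{
    while (Execute_Instruction())
        ;                            /* step until [halt7.c]main() returns */
    return 1;                        /* this toy program always terminates */
}
```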


    The TM code is "directly executed" [that's just what the phrase means
    in x86utm context] and code it simulates using DebugStep() is
    "simulated".

    That distinction makes no sense, like a lot of things from P. O.
    I was tripped up thinking that directly executed means using the host
    processor.

    Not sure who coined the term.  PO had shown HHH(DD), where HHH decides
    DD never halts.  Posters wanted to point out that whatever HHH decides,
    it needs to match up to [what DD actually does] but what is the phrase
    for that?  PO tries to only discuss "DD *simulated by HHH*" so in
    contrast posters came up with "DD *run natively*" or "DD *executed
    directly* (from main)" etc.. to contrast with HHH's simulations.  What
    phrase would you use?

    x86utm architecture and hosting OS's (Windows/Unix) is really
    orthogonal to all this.



    Like I said it all runs just fine under Linux.
    The Linux MakeFile is still there.


    "Directly Executed" should be equivalent to a wrapper which calls
    DebugStep, except that if we open-code the DebugStep loop, we can
    insert halting criteria, and trace recording and whatnot.

    I think people discussing that might refer to a UTM here, e.g UTM(DD),
    where UTM would be a function in halt7.c that simulates until
    completion.  In TM world, UTM(DD) is still a TM UTM simulating DD,
    which is conceptually different from what I would think of as DD
    "directly executed" (which is just the TM DD!).  But PO doesn't grok TMs
    and computations, always thinking instead of actual computers loading
    and running "computer programs" (aka TM-description strings).

    Also if we have 10 posters posting here, we'll have 10 slightly
    different terminology uses + PO's understanding....  :)

    Anyhow in x86utm world as-is, We can put messages into [halt7.c]main().
    Halting criteria naturally (ISTM) go in [halt7.c]HHH.  Like in the HP,
    if H is a TM halt decider, the halting criteria it applies are in H,
    not some meta-level simulator running the TM H.  (There is no such
    thing. H itself does not need criteria to be aborted or "halt-decided",
    it's just a "native" TM, so to speak.)


    Mike.


    HHH in Halt7.c calls all of its helper functions in Halt7.c and some
    helper functions directly in the x86utm OS. These are stubs in Halt7.c.

    Can you verify that this x86utm is not a fork of this? :

    https://github.com/utmapp/UTM

    Thanks.


    It is not. There are libx86emu files that I ported
    to Windows while keeping Linux compatibility.
    One of these files I adapted to provide disassembly.
    The rest are x86utm files that I wrote.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 08:01:05 2025
    From Newsgroup: comp.theory

    On 9/17/2025 2:20 AM, vallor wrote:
    On Tue, 16 Sep 2025 17:11:46 -0500, olcott <polcott333@gmail.com> wrote in
    I am not referring to Mike specifically yet it does seem that he did say
    that I am wrong on the basis of his own lack of understanding of one
    very key point.

    What is at stake here is life on Earth (death by climate change) and the
    rise of the fourth Reich on the basis that we have not unequivocally
    divided lies from truth.

    My system of reasoning makes the set of {True on the basis of meaning}
    computable.

    Is severe climate change caused by humans? YES. Is Donald Trump exactly
    copying Hitler's rise to power? YES.

    I...don't follow. These decisions, true or false, have nothing
    to do with the halting problem.


    The halting problem, The Liar Paradox,
    The Tarski Undefinability Theorem, and
    Gödel's 1931 Incompleteness theorem all
    pertain to the human misunderstanding of
    the nature of truth itself.

    Isn't it the truth that some attempts at deciders turn out to
    be undecidable?


    Not through the halting problem proofs.
    By correcting these errors we can make truth
    computable and get rid of LLM hallucinations.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Wed Sep 17 14:05:04 2025
    From Newsgroup: comp.theory

    On 17/09/2025 05:08, olcott wrote:

    <snip>


    The execution trace that I posted provides every
    relevant detail. When you ask these kinds of questions
    that only proves you did not study the execution trace
    well enough.

    On the contrary, it proves that you gave it more credence than it
    deserves.

    HHH has a simple question to answer. Does DD halt?

    HHH gets the answer wrong. It is therefore not fit for purpose.

    I have proved to Mike that the simulation of DD by HHH
    does not diverge when HHH stops simulating it about five
    times now and he still doesn't get it.

    It doesn't matter.

    All that matters is the answer, which HHH gets wrong.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 08:06:19 2025
    From Newsgroup: comp.theory

    On 9/17/2025 2:48 AM, vallor wrote:
    On Mon, 15 Sep 2025 14:35:18 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote in <10aa0qm$26msp$5@dont-email.me>:

    Well, what are you trying to say here? That the following might halt?

    void Infinite_Recursion()
    {
    Infinite_Recursion();
    return;
    }

    I think not. Blowing the stack is not the same as halting...

    As a matter of principle, it's part of the execution environment,
    just like his partial decider simulating the code.

    In his world, catching it calling itself twice without an intervening decision is grounds to abort.

    In our world, when the stack gets used up, it aborts.

    Fair's fair. If one is valid -- and if he's thumping the x86 bible
    saying that the rules of the instruction set are the source of
    truth -- then he can't have an infinite stack.

    (
    The example could be
    loop: goto loop;

    But if I'm not mistaken, a decider can be written for such a trivial example...by parsing the source, not simulating it!
    )

    HHH is smart enough to detect infinite loops
    and complex cases of infinite recursion
    involving many functions.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 08:23:16 2025
    From Newsgroup: comp.theory

    On 9/17/2025 2:32 AM, vallor wrote:
    On Tue, 16 Sep 2025 19:07:11 -0500, olcott wrote

    On 9/16/2025 5:53 PM, Mike Terry wrote:
    Dude!  Nothing going on here has even the remotest of effects on such
    world events.
    Mike.


    So you have not bothered to Notice that Trump is exactly copying Hitler?

    If it was not for the brave soul of the Senate parliamentarian
    cancelling the king maker paragraph of Trump's Big Bullshit Bill the USA
    would already be more than halfway to the dictatorship power of Nazi
    Germany.

    Truth can be computable !!!

    How do you compute "empathy" ...or "mercy"?


    First we must divide verified facts from brilliantly well crafted lies.
    Lies are taking control of the world and are on the path to kill it.

    These are matters of conscience. Is conscience nothing more
    than a computation?


    Morality seems best computed on the basis of Consequentialism. https://plato.stanford.edu/entries/consequentialism/

    Maximizing beneficial consequences of thoughts words and deeds
    while minimizing detrimental consequences.

    Humoring you a bit, it seems that the closest you could get to
    such a thing might be a system of ethics based on the results of
    game theory, part of information theory.

    But another part of information theory shows that the halting problem is undecidable.


    The Halting Problem proof is merely one of several mistakes.

    The halting problem, The Liar Paradox,
    The Tarski Undefinability Theorem, and
    Gödel's 1931 Incompleteness theorem all
    pertain to the human misunderstanding of
    the nature of truth itself.

    None of that has anything to do with explaining your notation. If
    you're cagey about such basic things, it invites suspicion.


    When I explain that the key elements of my proof
    are validated by a specific execution trace and people
    continue to disparage my work by not bothering to
    carefully examine this trace I make them go find
    their own answers in this trace.

    I said that DD.HHH is DD of HHH in the trace.

    When you go look at the trace you find that
    DDD emulated by HHH1 and DDD emulated by HHH
    DO NOT DIVERGE WHEN HHH ABORTS. They diverge
    much later. I have told you this five times
    now and you still don't get it.

    DDD emulated by HHH proves non-halting behavior
    and DDD emulated by HHH1 proves halting behavior.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 08:27:47 2025
    From Newsgroup: comp.theory

    On 9/17/2025 2:18 AM, joes wrote:
    Am Tue, 16 Sep 2025 20:37:39 -0500 schrieb olcott:
    On 9/16/2025 8:02 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:

    Speaking of which, you have been dodging the question of specifying
    what the "actual input" to HHH comprises in the expression HHH(DD).
    The termination analyzer HHH is only examining whether or not the
    finite string of x86 machine code ending in the C3 byte "ret"
    instruction can be reached by the behavior specified by this finite
    string.
    Are you referring to the sequence of instructions comprising the
    procedure DD?
    The C function DD.
    Is the input to HHH(DDD).

    But are you not aware that the entire HHH routine is also part of the
    input?
    It is only a part of the input in that the HHH must see the results of
    simulating an instance of itself simulating an instance of DD to
    determine whether or not this simulated DD can possibly reach its own
    simulated final halt state. Other than that it is not part of the input.

    Yes, and HHH does not simulate itself as halting.
    How else would it be part of the input?

    You understood that each decider can have an input defined to "do the
    opposite" of whatever this decider decides thwarting the correct
    decision for this decider/input pair.
    But in order to do that, the input must be understood to be carrying a
    copy of that decider.
    I think that this was conventionally ignored prior to my deep dive into
    simulating termination analyzers.

    Exactly the other way around.


    I am the first person on the face of the Earth
    that showed a simulating termination analyzer:
    (a) Makes the do-the-opposite code unreachable
    (b) Shows the HP diagonal case is correctly decided as non-halting.

    If you think that it is impossible for DD to have different behavior
    between these two cases then how is it that one is conventionally
    undecidable and the other is decidable?
    What is "undecidable" is universal halting;
    That is only shown indirectly by the fact that the conventional notion
    of H/D pairs H is forced to get the wrong answer.
    It is shown.

    it is an undecidable problem meaning that we don't have a terminating
    algorithm that will give an answer for every possible input.
    That's what the word "undecidable" means.
    The same general notion is (perhaps unconventionally)
    applied to the specific H/D pair where H is understood to be forced to
    get the wrong answer.
    I don't think there is a conventional term for when the H of the H/D
    pair is forced to get the wrong answer.
    "Wrong".

    HHH(DD) disqualifying itself not terminating is entirely the fault of
    the designer of HHH.
    Termination analyzers need not be pure functions.
    It will probably take an actual computer scientist to redefine HHH as a
    pure function of its inputs that keeps the exact same correspondence to
    the HP proofs.
    That is impossible.

    HHH(DD) being wrong when it does terminate is brought about by the
    designer of DD. That designer always has the last word since HHH is a
    building block of DD, not the other way around.

    What's different between two deciders like HHH and HHH1 is their
    /analysis of DD/. Analysis of DD is not the /behavior/ of DD!
    I conclusively proved otherwise and you utterly refuse to pay close
    enough attention. You still think that DD simulated by HHH reaches its
    own final halt state not even understanding that your mechanism for
    doing this is more than a pure simulation of the input THUS CHEATING
    No. You didn't prove that the simulation matches the direct execution.
    Nobody thinks that HHH as it is simulates DD returning. Jumping past
    the call is a strawman.

    [spam snipped]
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 09:16:12 2025
    From Newsgroup: comp.theory

    On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 8:02 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
    On 2025-09-16, olcott <polcott333@gmail.com> wrote:
    Rather, it is ridiculously dumb that you cannot see that
    this bypass doesn't occur, in the diagonal test case, when
    the decider returns a "does not halt" decision (HHH(DD) -> 0).


    That is not the actual behavior specified by the actual input to HHH(DD)
    Speaking of which, you have been dodging the question of specifying
    what the "actual input" to HHH comprises in the expression HHH(DD).


    The termination analyzer HHH is only examining whether
    or not the finite string of x86 machine code ending
    in the C3 byte "ret" instruction can be reached by the
    behavior specified by this finite string.

    Are you referring to the sequence of instructions comprising
    the procedure DD?


    The C function DD.

    This behavior does include that DD calls HHH(DD)
    in recursive simulation. DD is the program under
    test and HHH is the test program.

    The finite string of x86 machine code instructions
    does include the instruction "CALL HHH, DD".


    Yes and the behavior of HHH simulating an instance of itself
    simulating an instance of DD is simulated.

    The problem you don't realize is that DD could instead have "CALL GGG,
    DD", where GGG is a clean-room reimplementation of HHH, based
    on a careful specification from reverse-engineering HHH.

    Your halting decision then screws up because it compares addresses
    to answer the question "are these two functions the same function?"

    When HHH(DD) analyzes DD and sees that DD calls GGG(DD),
    it must recognize that HHH and GGG are the same, because GGG is a
    clean-room clone of HHH.

    The way you are testing function equivalence is flawed.


    It is not, and your convoluted example seems to make no point.

    Function equivalence cannot be determined by address because,
    for instance:


    In the semantics of the x86 model of computation
    same address means same function.

    add_foo(x, y) { return x + y }
    add_bar(x, y) { return x + y }

    are just two names/addresses for the same computation.

    There is no algorithm which determines whether two functions
    are the same; it is an undecidable problem.

    Your abort decision is based on a strawman implementation of
    a problem which is undecidable in its correct form.
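Kaz's add_foo/add_bar point can be made concrete. This is an illustrative fragment, not code from halt7.c, and the helper name is invented:

```c
/* Two textually identical functions compute the same thing, yet sit at
 * different addresses, so an address-based equivalence test calls them
 * different computations -- Kaz's add_foo/add_bar point. */
int add_foo(int x, int y) { return x + y; }
int add_bar(int x, int y) { return x + y; }

/* The flawed test: "same function" decided by pointer identity. */
int same_by_address(int (*a)(int, int), int (*b)(int, int))
{
    return a == b;
}
```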


    Identical finite strings of machine code can achieve
    this same result in the Linz proof.

    You are seeing this problem in your own code. You created a clone
    of HHH called HHH1, which is the same except for the name.
    Yet, it's behaving differently.


    In the very well known conventionally understood way.

    Why? Because when your machinery sees CALL HHH1 and CALL HHH
    in the execution trace, it treats them as a different functions.


    When I say that DD calls HHH(DD) in recursive simulation
    and DD does not call HHH1 at all and I say this 500 times
    and no one sees that these behaviors cannot be the same
    on the basis of these differences I can only reasonably
    conclude short-circuits in brains or lying.

    You must not do that. You must compare function pointers using
    your own CompareFunction(X, Y) function which is calibrated
    such that CompareFunction(HHH, HHH1) yields true.

    If you consistently use CompareFunction everywhere you would
    otherwise use X == Y, you will see that HHH1(DD) and HHH(DD)
    proceed identically.

    The trace already shows that DD emulated by HHH1 and
    DD emulated by HHH are identical until HHH emulates
    an instance of itself emulating an instance of DD.


    But are you not aware that the entire HHH routine is also part of
    the input?


    It is only a part of the input in that the HHH
    must see the results of simulating an instance
    of itself simulating an instance of DD to determine
    whether or not this simulated DD can possibly reach
    its own simulated final halt state. Other than
    that is it not part of the input.

    That's where you are wrong. DD is built on HHH. Saying HHH
    is not part of the input is like saying the engine is not
    part of a car.


    DD is the program under test.
    HHH is not the program under test.
    HHH is the test program.
    DD is not the test program.

    None of my three theory of computation textbooks
    seemed to mention this at all. Where did you get
    it from?

    By paying attention in CS classes.


    I only learned theory of computation from textbooks
    and you say the whole pure function thing is not
    in any textbooks?

    Pure functions are important in topics connected with programming
    languages. Their properties are useful in compiler optimization,
    concurrent programming and what not.


    Useful yet not mandatory.

    If you want to explore the theory of computation by writing code
    in a programming language which has imperative features (side effects),
    it behooves you to make your functions behave as closely to the
    theoretical ones as possible, which implies purity.


    I think that I learned that from you.
    Pure functions are Turing computable functions.
    On the other hand impure functions seem to
    contradict the Church/Turing thesis.

    You seem to get totally confused when these are
    made specific by HHH/DD and HHH1/DD.

    If you think that it is impossible for DD to have
    different behavior between these two cases then how
    is it that one is conventionally undecidable and
    the other is decidable?

    What is "undecidable" is universal halting;

    No, No, No, No, No, No, No, No, No, No.
    That is only shown indirectly by the fact
    that the conventional notion of H/D pairs
    H is forced to get the wrong answer.

    Anyway, I'm only illustrating the term "undecidable" and how it is
    used. It is used to describe situations when we believe we don't have
    an algorithm that terminates on every input in the desired input set.


    So maybe the conventional "do the opposite" relationship
    can be called an undecidable instance. I will not tolerate
    that there is no existing term for a meaning that I must
    express.

    https://en.wikipedia.org/wiki/Newspeak
    Was intentionally defined to restrict thought.

    it is an undecidable problem
    meaning that we don't have a terminating algorithm that will give an
    answer for every possible input.

    That's what the word "undecidable" means.


    The same general notion is (perhaps unconventionally)
    applied to the specific H/D pair where H is understood
    to be forced to get the wrong answer.

    I don't think there is a conventional term for when the H
    of the H/D pair is forced to get the wrong answer.

    Just "gets the wrong answer". The case, as a set of one,
    is decidable.


    I am hereby establishing the term "undecidable instance" for this case.

    The relationship between HHH and DD isn't that DD is "undecidable" to
    HHH, but that HHH /doesn't/ decide DD (either by not terminating or
    returning the wrong value). This is by design; DD is built on HHH and
    designed such that HHH(DD) is incorrect, if HHH(DD) terminates.


    So what conventional term do we have for the undecidability
    of a single H/D pair? H forced to get the wrong answer seems too clumsy

    It's not only clumsy, but it's wrong. Nothing is forced.

    Forced means that an altered course of action is imposed by
    interference where a different course was to take place without
    interference.


    It is an input/decider combination intentionally defined
    to create an undecidable instance.

    But H is not altered; it is set in stone, and then D is built on top of
    it, revealing a latent flaw in H.

    HHH(DD) disqualifying itself not terminating is entirely the fault of
    the designer of HHH.


    Termination analyzers need not be pure functions.

    A termination analyzer needs to be an algorithm. An algorithm is an abstraction that can be rendered in purely functional form, as a
    sequence of side-effect-free transformations of data representations.


    If you can tell me how to convert HHH into a pure function
    and keep complete correspondence to the HP proof I will do this.
    Until then HHH is a correct termination analyzer.

    Any simulation that falls short of this is just an incomplete and/or
    incorrect analysis, and not a description of the subject's
    behavior.

    It follows the semantics specified by the input finite string.

    Not all the way to the end, right? The semantics is not completely
    evolved when the simulation is abruptly abandoned.


    If you only paid enough attention you would see that the
    only possible end is out-of-memory error. It is very very
    annoying that you utterly refuse to pay enough attention.
    This is beginning to look like willful deception.

    (And you have the wrong idea of what that input is; the input
    isn't just the body of D, but all of HHH, and all the simulation
    machinery that HHH calls. Debug_Step is part of DD, etc.)


    The input is just the body of DD except for the behavior
    that this input specifies where HHH emulated an instance
    of itself emulating an instance of DD.

    DD is the program under test.
    HHH is not the program under test it is the test program.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 09:24:47 2025
    From Newsgroup: comp.theory

    On 9/17/2025 12:26 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    Analysis by simulation is tantalizingly close to behavior.

    DD simulated by HHH according to the semantics of
    the x86 language IS 100% EXACTLY AND PRECISELY THE

    DD is not simulated by HHH according to the semantics of the
    x86 language.


    That is proven to be counter-factual.

    Can you cite the chapter and verse of any Intel architecture manual
    where it says that the CPU can go into a suspended state whenever the
    same function is called twice, without any intervening conditional
    branch instructions?


    DD is simulated according to the exact semantics of the x86
    language up to the point where it is no longer simulated.

    HHH's partial, incomplete simulation of DD is not an evocation
    of DD's behavior. It is just a botched analysis of DD.


    You lack the technical skill to meet my challenge
    of deriving the correct non-termination criteria
    therefore you have no basis to determine that my
    non-termination criteria are incorrect. It is a
    matter of objective fact that my non-termination
    criteria are correct.

    Only the completed simulation of a terminating procedure can be
    identified as having evoked a representation of its behavior.


    That you fail to understand that DD simulated by HHH
    according to the semantics of the x86 language cannot
    possibly reach its own simulated final halt state is
    purely your own ignorance thus not my mistake.

    I don't know how you think you could get away with
    your alternative as being a pure simulation.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 09:40:54 2025
    From Newsgroup: comp.theory

    On 9/17/2025 12:30 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE

    That's vacuously true because DD is never correctly
    simulated by HHH.


    It seems that you might be intentionally dishonest here.
    I proved that DD is correctly simulated by HHH up to the
    point where DD has met its non-halting criteria.

    The diagonal case is something that happens; a situation which never
    happens is never identifiable as one which happens.


    When DD is emulated by HHH it is very easy to see
    that the "do the opposite" code is unreachable.

    Within the premise that HHH(DD) is only supposed
    to report on the actual behavior that it actually
    sees then the diagonal case is defeated. HHH
    simply rejects DD as non-halting.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From André G. Isaak@agisaak@gm.invalid to comp.theory on Wed Sep 17 08:46:42 2025
    From Newsgroup: comp.theory

    On 2025-09-17 08:16, olcott wrote:
    On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:

    It is only a part of the input in that the HHH
    must see the results of simulating an instance
    of itself simulating an instance of DD to determine
    whether or not this simulated DD can possibly reach
    its own simulated final halt state. Other than
    that it is not part of the input.

    That's where you are wrong. DD is built on HHH. Saying HHH
    is not part of the input is like saying the engine is not
    part of a car.


    DD is the program under test.
    HHH is not the program under test.
    HHH is the test program.
    DD is not the test program.

    This is a fundamental misunderstanding on your part.

    The outermost HHH is the test program.
    DD is the program under test.
    The HHH called from within DD is *part* of the program under test. It is
    *not* the test program.

    André
    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 09:56:47 2025
    From Newsgroup: comp.theory

    On 9/17/2025 9:46 AM, André G. Isaak wrote:
    On 2025-09-17 08:16, olcott wrote:
    On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:

    It is only a part of the input in that the HHH
    must see the results of simulating an instance
    of itself simulating an instance of DD to determine
    whether or not this simulated DD can possibly reach
    its own simulated final halt state. Other than
    that it is not part of the input.

    That's where you are wrong. DD is built on HHH. Saying HHH
    is not part of the input is like saying the engine is not
    part of a car.


    DD is the program under test.
    HHH is not the program under test.
    HHH is the test program.
    DD is not the test program.

    This is a fundamental misunderstanding on your part.

    The outermost HHH is the test program.

    Stated more accurately.

    DD is the program under test.

    Yes.

    The HHH called from within DD is *part* of the program under test. It is *not* the test program.

    André


    Yes that seems more accurate too, yet people can become
    confused by these additional details.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 15:19:09 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
    The way you are testing function equivalence is flawed.


    It is not, and your convoluted example seems to make no point.

    Function equivalence cannot be determined by address because,
    for instance:


    In the semantics of the x86 model of computation
    same address means same function.

    Sure, you genius computer scientist, you!

    The thing you are not understanding in my text above is that
    a different address does /not/ mean different function!

    The exact /same/ computation can be implemented in multiple ways,
    and located at /different/ addresses.

    Your native pointer comparison wrongly concludes that
    two functions that are the same are different.


    add_foo(x, y) { return x + y }
    add_bar(x, y) { return x + y }

    are just two names/addresses for the same computation.

    There is no algorithm which determines whether two functions
    are the same; it is an undecidable problem.

    Your abort decision is based on a strawman implementation of
    a problem which is undecidable in its correct form.

    Identical finite strings of machine code can achieve
    this same result in the Linz proof.

    Machines can be identical/equivalent computations yet be completely
    different strings.

    You are seeing this problem in your own code. You created a clone
    of HHH called HHH1, which is the same except for the name.
    Yet, it's behaving differently.


    In the very well known conventionally understood way.

    Why? Because when your machinery sees CALL HHH1 and CALL HHH
    in the execution trace, it treats them as different functions.


    When I say that DD calls HHH(DD) in recursive simulation
    and DD does not call HHH1 at all and I say this 500 times

    But if HHH1 and HHH were correctly identified as the same computation,
    then there would be no difference between "call HHH(DD)" and
    "call HHH1(DD)".

    It doesn't strike you as wrong that if you just copy a function
    under a different name, you get a different behavior?

    (Didn't you work as a software engineer and get to pick most function
    names all your life without worrying that the choice would break your
    program?)

    and no one sees that these behaviors cannot be the same
    on the basis of these differences I can only reasonably
    conclude short-circuits in brains or lying.

    The difference you created is wrong; your shit is concluding
    that if there are different addresses in a CALL instruction
    in an execution trace, then the functions must be different.

    And /that test/ is what is /introducing/ the difference.

    I'm saying that /if/ you had an abstract comparison Compare_Function(X,
    Y) which reports true when X is HHH and Y is HHH1 (or vice versa),
    and used that comparison whenever your abort logic compares
    function pointers rather than using the == operator, that
    difference in behavior would disappear.

    You must not do that. You must compare function pointers using
    your own CompareFunction(X, Y) function which is calibrated
    such that CompareFunction(HHH, HHH1) yields true.

    If you consistently use CompareFunction everywhere you would
    otherwise use X == Y, you will see that HHH1(DD) and HHH(DD)
    proceed identically.

    The trace already shows that DD emulated by HHH1 and
    DD emulated by HHH are identical until HHH emulates
    an instance of itself emulating an instance of DD.

    They are identical until a decision is made, which involves
    comparing whether functions are the same, by address.

    That's where you are wrong. DD is built on HHH. Saying HHH
    is not part of the input is like saying the engine is not
    part of a car.

    DD is the program under test.
    HHH is not the program under test.

    The bulk of DD consists of HHH, and the bulk of HHH consists of the
    simulator that it relies on, so what you are saying is completely
    moronic, as if you don't understand software eng.

    HHH is the test program.
    DD is not the test program.

    The diagonal case changes that; the program under test
    by an algorithm includes the implementation of that
    algorithm.

    None of my three theory of computation textbooks
    seemed to mention this at all. Where did you get
    it from?

    By paying attention in CS classes.

    I only learned theory of computation from textbooks
    and you say the whole pure function thing is not
    in any textbooks?

    Pure functions are important in topics connected with programming
    languages. Their properties are useful in compiler optimization,

    Useful yet not mandatory.

    That's because when we are developing systems, we are not trying to
    prove theorems about halting. (Ideally, we would just want to prove that
    our systems do what we think and say they do; and we /can/ prove that combinations of impure functions have the properties we want them to
    have in that context.)

    If you want to explore the theory of computation by writing code
    in a programming language which has imperative features (side effects),
    it behooves you to make your functions behave as closely to the
    theoretical ones as possible, which implies purity.

    I think that I learned that from you.
    Pure functions are Turing computable functions.
    On the other hand impure functions seem to
    contradict the Church/Turing thesis.

    Not necessarily. When you have impurity, you have to manage it carefully
    and prove that it's not making your theoretical result wrong. This is an additional burden which requires you to be extra clever, and it's an
    extra burden to anyone following your work.

    Turing's own tape machine is designed such that the tape head
    performs impure calculations: it mutates the tape.

    This is managed by isolation. Each Turing Machine gets its own tape.
    Each Turing Machine is understood to be a process that starts with the
    tape in the specified initial contents. We never have to think about the
    tape being corrupt, or being tampered with.

    Anyway, I'm only illustrating the term "undecidable" and how it is
    used. It is used to describe situations when we believe we don't have
    an algorithm that terminates on every input in the desired input set.


    So maybe the conventional "do the opposite" relationship
    can be called an undecidable instance.

    I wouldn't. Because the instance is positively decidable.
    The term "undecidable" in computer science is so strongly linked
    with the idea of there being no algorithm which terminates
    and provides the correct answer for all instances in the space
    of concern, I wouldn't reuse the term for anything else.

    I will not tolerate
    that there is no existing term for a meaning that I must
    express.

    I just use "the diagonal case", because it is understood that in the
    diagonal case there is a procedure and an input, such that the procedure decides incorrectly. However, that is a bit of a coded term
    understandable just to people who are ramped up on the problem.

    Just "gets the wrong answer". The case, as a set of one,
    is decidable.

    I am hereby establishing the term "undecidable instance" for this case.

    Hard disagree; naming is important. Reusing deeply entrenched,
    loaded terms, big no no.


    The relationship between HHH and DD isn't that DD is "undecidable" to
    HHH, but that HHH /doesn't/ decide DD (either by not terminating or
    returning the wrong value). This is by design; DD is built on HHH and
    designed such that HHH(DD) is incorrect, if HHH(DD) terminates.


    So what conventional term do we have for the undecidability
    of a single H/D pair? H forced to get the wrong answer seems too clumsy

    It's not only clumsy, but it's wrong. Nothing is forced.

    Forced means that an altered course of action is imposed by
    interference where a different course was to take place without
    interference.

    It is an input/decider combination intentionally defined
    to create an undecidable instance.

    But the combination doesn't do anything to the decider whatsoever,
    taking it as-is.

    If you can tell me how to convert HHH into a pure function
    and keep complete correspondence to the HP proof I will do this.
    Until then HHH is a correct termination analyzer.

    "Until someone shows how to correct my mistake, I am right"
    is kind of not how it works.

    Any simulation that falls short of this is just an incomplete and/or
    incorrect analysis, and not a description of the subject's
    behavior.

    It follows the semantics specified by the input finite string.

    Not all the way to the end, right? The semantics is not completely
    evolved when the simulation is abruptly abandoned.

    If you only paid enough attention you would see that the
    only possible end is out-of-memory error.

    I paid more attention and saw that the abandoned simulation is actually terminating. Take a few more steps of it and a CALL HHH DD instruction
    is seen to terminate, and conditional jumps are coming.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 10:26:49 2025
    From Newsgroup: comp.theory

    On 9/17/2025 10:19 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
    The way you are testing function equivalence is flawed.


    It is not, and your convoluted example seems to make no point.

    Function equivalence cannot be determined by address because,
    for instance:


    In the semantics of the x86 model of computation
    same address means same function.

    Sure, you genius computer scientist, you!

    The thing you are not understanding in my text above is that
    a different address does /not/ mean different function!

    The exact /same/ computation can be implemented in multiple ways,
    and located at /different/ addresses.

    Your native pointer comparison wrongly concludes that
    two functions that are the same are different.


    add_foo(x, y) { return x + y }
    add_bar(x, y) { return x + y }

    are just two names/addresses for the same computation.

    There is no algorithm which determines whether two functions
    are the same; it is an undecidable problem.

    Your abort decision is based on a strawman implementation of
    a problem which is undecidable in its correct form.

    Identical finite strings of machine code can achieve
    this same result in the Linz proof.

    Machines can be identical/equivalent computations yet be completely
    different strings.

    You are seeing this problem in your own code. You created a clone
    of HHH called HHH1, which is the same except for the name.
    Yet, it's behaving differently.


    In the very well known conventionally understood way.

    Why? Because when your machinery sees CALL HHH1 and CALL HHH
    in the execution trace, it treats them as different functions.


    When I say that DD calls HHH(DD) in recursive simulation
    and DD does not call HHH1 at all and I say this 500 times

    But if HHH1 and HHH were correctly identified as the same computation,
    then there would be no difference between "call HHH(DD)" and
    "call HHH1(DD)".

    It doesn't strike you as wrong that if you just copy a function
    under a different name, you get a different behavior?

    (Didn't you work as a software engineer and get to pick most function
    names all your life without worrying that the choice would break your program?)

    and no one sees that these behaviors cannot be the same
    on the basis of these differences I can only reasonably
    conclude short-circuits in brains or lying.

    The difference you created is wrong; your shit is concluding
    that if there are different addresses in a CALL instruction
    in an execution trace, then the functions must be different.

    And /that test/ is what is /introducing/ the difference.

    I'm saying that /if/ you had an abstract comparison Compare_Function(X,
    Y) which reports true when X is HHH and Y is HHH1 (or vice versa),
    and used that comparison whenever your abort logic compares
    function pointers rather than using the == operator, that
    difference in behavior would disappear.

    You must not do that. You must compare function pointers using
    your own CompareFunction(X, Y) function which is calibrated
    such that CompareFunction(HHH, HHH1) yields true.

    If you consistently use CompareFunction everywhere you would
    otherwise use X == Y, you will see that HHH1(DD) and HHH(DD)
    proceed identically.

    The trace already shows that DD emulated by HHH1 and
    DD emulated by HHH are identical until HHH emulates
    an instance of itself emulating an instance of DD.

    They are identical until a decision is made, which involves
    comparing whether functions are the same, by address.

    That's where you are wrong. DD is built on HHH. Saying HHH
    is not part of the input is like saying the engine is not
    part of a car.

    DD is the program under test.
    HHH is not the program under test.

    The bulk of DD consists of HHH, and the bulk of HHH consists of the
    simulator that it relies on, so what you are saying is completely
    moronic, as if you don't understand software eng.

    HHH is the test program.
    DD is not the test program.

    The diagonal case changes that; the program under test
    by an algorithm includes the implementation of that
    algorithm.

    None of my three theory of computation textbooks
    seemed to mention this at all. Where did you get
    it from?

    By paying attention in CS classes.

    I only learned theory of computation from textbooks
    and you say the whole pure function thing is not
    in any textbooks?

    Pure functions are important in topics connected with programming
    languages. Their properties are useful in compiler optimization,

    Useful yet not mandatory.

    That's because when we are developing systems, we are not trying to
    prove theorems about halting. (Ideally, we would just want to prove that
    our systems do what we think and say they do; and we /can/ prove that combinations of impure functions have the properties we want them to
    have in that context.)

    If you want to explore the theory of computation by writing code
    in a programming language which has imperative features (side effects),
    it behooves you to make your functions behave as closely to the
    theoretical ones as possible, which implies purity.

    I think that I learned that from you.
    Pure functions are Turing computable functions.
    On the other hand impure functions seem to
    contradict the Church/Turing thesis.

    Not necessarily. When you have impurity, you have to manage it carefully
    and prove that it's not making your theoretical result wrong. This is an additional burden which requires you to be extra clever, and it's an
    extra burden to anyone following your work.

    Turing's own tape machine is designed such that the tape head
    performs impure calculations: it mutates the tape.

    This is managed by isolation. Each Turing Machine gets its own tape.
    Each Turing Machine is understood to be a process that starts with the
    tape in the specified initial contents. We never have to think about the
    tape being corrupt, or being tampered with.

    Anyway, I'm only illustrating the term "undecidable" and how it is
    used. It is used to describe situations when we believe we don't have
    an algorithm that terminates on every input in the desired input set.


    So maybe the conventional "do the opposite" relationship
    can be called an undecidable instance.

    I wouldn't. Because the instance is positively decidable.
    The term "undecidable" in computer science is so strongly linked
    with the idea of there being no algorithm which terminates
    and provides the correct answer for all instances in the space
    of concern, I wouldn't reuse the term for anything else.

    I will not tolerate
    that there is no existing term for a meaning that I must
    express.

    I just use "the diagonal case", because it is understood that in the
    diagonal case there is a procedure and an input, such that the procedure decides incorrectly. However, that is a bit of a coded term
    understandable just to people who are ramped up on the problem.

    Just "gets the wrong answer". The case, as a set of one,
    is decidable.

    I am hereby establishing the term "undecidable instance" for this case.

    Hard disagree; naming is important. Reusing deeply entrenched,
    loaded terms, big no no.


    The relationship between HHH and DD isn't that DD is "undecidable" to
    HHH, but that HHH /doesn't/ decide DD (either by not terminating or
    returning the wrong value). This is by design; DD is built on HHH and
    designed such that HHH(DD) is incorrect, if HHH(DD) terminates.


    So what conventional term do we have for the undecidability
    of a single H/D pair? H forced to get the wrong answer seems too clumsy
    It's not only clumsy, but it's wrong. Nothing is forced.

    Forced means that an altered course of action is imposed by
    interference where a different course was to take place without
    interference.

    It is an input/decider combination intentionally defined
    to create an undecidable instance.

    But the combination doesn't do anything to the decider whatsoever,
    taking it as-is.

    If you can tell me how to convert HHH into a pure function
    and keep complete correspondence to the HP proof I will do this.
    Until then HHH is a correct termination analyzer.

    "Until someone shows how to correct my mistake, I am right"
    is kind of not how it works.

    Any simulation that falls short of this is just an incomplete and/or incorrect analysis, and not a description of the subject's
    behavior.

    It follows the semantics specified by the input finite string.

    Not all the way to the end, right? The semantics is not completely
    evolved when the simulation is abruptly abandoned.

    If you only paid enough attention you would see that the
    only possible end is out-of-memory error.

    I paid more attention and saw that the abandoned simulation is actually terminating. Take a few more steps of it and a CALL HHH DD instruction
    is seen to terminate, and conditional jumps are coming.


    You are the one that seems to not be able to
    understand the easily verified fact that DD calls
    HHH(DD) in recursive simulation changes the behavior
    relative to DD does not call HHH1 at all.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory on Wed Sep 17 15:28:15 2025
    From Newsgroup: comp.theory

    On Wed, 17 Sep 2025 09:24:47 -0500, olcott wrote:

    On 9/17/2025 12:26 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    Analysis by simulation is tantalizingly close to behavior.

    DD simulated by HHH according to the semantics of the x86 language IS
    100% EXACTLY AND PRECISELY THE

    DD is not simulated by HHH according to the semantics of the x86
    language.


    That is proven to be counter-factual.

    Can you cite the chapter and verse of any Intel architecture manual
    where it says that the CPU can go into a suspended state whenever the
    same function is called twice, without any intervening conditional
    branch instructions?


    DD is simulated according to the exact semantics of the x86 language up
    to the point where it is no longer simulated.

    HHH's partial, incomplete simulation of DD is not an evocation of DD's
    behavior. It is just a botched analysis of DD.


    You lack the technical skill to meet my challenge of deriving the
    correct non-termination criteria therefore you have no basis to
    determine that my non-termination criteria are incorrect. It is a matter
    of objective fact that my non-termination criteria are correct.

    Only the completed simulation of a terminating procedure can be
    identified as having evoked a representation of its behavior.


    That you fail to understand that DD simulated by HHH according to the
    semantics of the x86 language cannot possibly reach its own simulated
    final halt state is purely your own ignorance, thus not my mistake.

    I don't know how you think you could get away with your alternative as
    being a pure simulation.

    DDD halts.

    /Flibble
    --
    meet ever shorter deadlines, known as "beat the clock"
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory on Wed Sep 17 15:35:04 2025
    From Newsgroup: comp.theory

    On Wed, 17 Sep 2025 10:26:49 -0500, olcott wrote:

    On 9/17/2025 10:19 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
    The way you are testing function equivalence is flawed.


    It is not, and your convoluted example seems to make no point.

    Function equivalence cannot be determined by address because,
    for instance:


    In the semantics of the x86 model of computation same address means
    same function.

    Sure, you genius computer scientist, you!

    The thing you are not understanding in my text above is that a
    different address does /not/ mean different function!

    The exact /same/ computation can be implemented in multiple ways,
    and located at /different/ addresses.

    Your native pointer comparison wrongly concludes that two functions
    that are the same are different.


    add_foo(x, y) { return x + y }
    add_bar(x, y) { return x + y }

    are just two names/addresses for the same computation.

    There is no algorithm which determines whether two functions are the
    same; it is an undecidable problem.

    Your abort decision is based on a strawman implementation of a
    problem which is undecidable in its correct form.

    Identical finite strings of machine code can achieve this same result
    in the Linz proof.

    Machines can be identical/equivalent computations yet be completely
    different strings.

    You are seeing this problem in your own code. You created a clone of
    HHH called HHH1, which is the same except for the name.
    Yet, it's behaving differently.


    In the very well known conventionally understood way.

    Why? Because when your machinery sees CALL HHH1 and CALL HHH in the
    execution trace, it treats them as different functions.


    When I say that DD calls HHH(DD) in recursive simulation and DD does
    not call HHH1 at all and I say this 500 times

    But if HHH1 and HHH were correctly identified as the same computation,
    then there would be no difference between "call HHH(DD)" and "call
    HHH1(DD)".

    It doesn't strike you as wrong that if you just copy a function under a
    different name, you get a different behavior?

    (Didn't you work as a software engineer and get to pick most function
    names all your life without worrying that the choice would break your
    program?)

    and no one sees that these behaviors cannot be the same on the basis
    of these differences I can only reasonably conclude short-circuits in
    brains or lying.

    The difference you created is wrong; your shit is concluding that if
    there are different addresses in a CALL instruction in an execution
    trace, then the functions must be different.

    And /that test/ is what is /introducing/ the difference.

    I'm saying that /if/ you had an abstract comparison Compare_Function(X,
    Y) which reports true when X is HHH and Y is HHH1 (or vice versa),
    and used that comparison whenever your abort logic compares function
    pointers rather than using the == operator, that difference in behavior
    would disappear.

    You must not do that. You must compare function pointers using your
    own CompareFunction(X, Y) function which is calibrated such that
    CompareFunction(HHH, HHH1) yields true.

    If you consistently use CompareFunction everywhere you would otherwise
    use X == Y, you will see that HHH1(DD) and HHH(DD)
    proceed identically.

    The trace already shows that DD emulated by HHH1 and DD emulated by
    HHH are identical until HHH emulates an instance of itself emulating
    an instance of DD.

    They are identical until a decision is made, which involves comparing
    whether functions are the same, by address.

    That's where you are wrong. DD is built on HHH. Saying HHH is not
    part of the input is like saying the engine is not part of a car.

    DD is the program under test.
    HHH is not the program under test.

    The bulk of DD consists of HHH, and the bulk of HHH consists of the
    simulator that it relies on, so what you are saying is completely
    moronic, as if you don't understand software eng.

    HHH is the test program.
    DD is not the test program.

    The diagonal case changes that; the program under test by an algorithm
    includes the implementation of that algorithm.

    None of my three theory of computation textbooks seemed to mention
    this at all. Where did you get it from?

    By paying attention in CS classes.

    I only learned theory of computation from textbooks and you say the
    whole pure function thing is not in any textbooks?

    Pure functions are important in topics connected with programming
    languages. Their properties are useful in compiler optimization,

    Useful yet not mandatory.

    That's because when we are developing systems, we are not trying to
    prove theorems about halting. (Ideally, we would just want to prove
    that our systems do what we think and say they do; and we /can/ prove
    that combinations of impure functions have the properties we want them
    to have in that context.)

    If you want to explore the theory of computation by writing code in a
    programming language which has imperative features (side effects),
    it behooves you to make your functions behave as closely to the
    theoretical ones as possible, which implies purity.

    I think that I learned that from you.
    Pure functions are Turing computable functions.
    On the other hand impure functions seem to contradict the
    Church/Turing thesis.

    Not necessarily. When you have impurity, you have to manage it
    carefully and prove that it's not making your theoretical result wrong.
    This is an additional burden which requires you to be extra clever, and
    it's an extra burden to anyone following your work.

    Turing's own tape machine is designed such that the tape head performs
    impure calculations: it mutates the tape.

    This is managed by isolation. Each Turing Machine gets its own tape.
    Each Turing Machine is understood to be a process that starts with the
    tape in the specified initial contents. We never have to think about
    the tape being corrupt, or being tampered with.

    Anyway, I'm only illustrating the term "undecidable" and how it is
    used. It is used to describe situations when we believe we don't have
    an algorithm that terminates on every input in the desired input set.


    So maybe the conventional "do the opposite" relationship can be called
    an undecidable instance.

    I wouldn't. Because the instance is positively decidable.
    The term "undecidable" in computer science is so strongly linked with
    the idea of there being no algorithm which terminates and provides the
    correct answer for all instances in the space of concern, I wouldn't
    reuse the term for anything else.

    I will not tolerate that there is no existing term for a meaning that
    I must express.

    I just use "the diagonal case", because it is understood that in the
    diagonal case there is a procedure and an input, such that the
    procedure decides incorrectly. However, that is a bit of a coded term
    understandable just to people who are ramped up on the problem.

    Just "gets the wrong answer". The case, as a set of one,
    is decidable.

    I am hereby establishing the term "undecidable instance" for this
    case.

    Hard disagree; naming is important. Reusing deeply entrenched,
    loaded terms, big no no.


    The relationship between HHH and DD isn't that DD is "undecidable"
    to HHH, but that HHH /doesn't/ decide DD (either by not terminating
    or returning the wrong value). This is by design; DD is built on HHH
    and designed such that HHH(DD) is incorrect, if HHH(DD) terminates.

    So what conventional term do we have for the undecidability of a
    single H/D pair? H forced to get the wrong answer seems too clumsy

    It's not only clumsy, but it's wrong. Nothing is forced.

    Forced means that an altered course of action is imposed by
    interference where a different course was to take place without
    interference.

    It is an input/decider combination intentionally defined to create an
    undecidable instance.

    But the combination doesn't do anything to the decider whatsoever,
    taking it as-is.

    If you can tell me how to convert HHH into a pure function and keep
    complete correspondence to the HP proof I will do this.
    Until then HHH is a correct termination analyzer.

    "Until someone shows how to correct my mistake, I am right"
    is kind of not how it works.

    Any simulation that falls short of this is just an incomplete
    and/or incorrect analysis, and not a description of the subject's
    behavior.

    It follows the semantics specified by the input finite string.

    Not all the way to the end, right? The semantics is not completely
    evolved when the simulation is abruptly abandoned.

    If you only paid enough attention you would see that the only possible
    end is out-of-memory error.

    I paid more attention and saw that the abandoned simulation is actually
    terminating. Take a few more steps and the CALL HHH(DD) instruction
    is seen to return, with conditional jumps coming up.


    You are the one that seems unable to understand the easily
    verified fact that DD calling HHH(DD) in recursive simulation changes
    the behavior relative to the case where DD does not call HHH1 at all.

    DD halts.

    /Flibble
    --
    meet ever shorter deadlines, known as "beat the clock"
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 10:44:39 2025
    From Newsgroup: comp.theory

    On 9/17/2025 10:35 AM, Mr Flibble wrote:
    [..snip..]
    DD halts.

    /Flibble



    DD.HHH does not halt.
    You keep trying to get away with the strawman deception.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory on Wed Sep 17 15:59:25 2025
    From Newsgroup: comp.theory

    On Wed, 17 Sep 2025 10:44:39 -0500, olcott wrote:

    [..snip..]

    DD.HHH does not halt.
    You keep trying to get away with the strawman deception.

    To be a halt decider HHH must report a halting decision to its caller, DD,
    and DD then halts if HHH reports non-halting, thus proving HHH to be
    incorrect.

    /Flibble
    --
    meet ever shorter deadlines, known as "beat the clock"
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 11:11:42 2025
    From Newsgroup: comp.theory

    On 9/17/2025 10:59 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 10:44:39 -0500, olcott wrote:

    On 9/17/2025 10:35 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 10:26:49 -0500, olcott wrote:

    On 9/17/2025 10:19 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
    The way you are testing function equivalence is flawed.


    It is not and you convoluted example seems to make no point.

    Function equivalence cannot be determined by address because,
    for instance:


    In the semantics of the x86 model of computation same address means >>>>>> same function.

    Sure, you genius computer scientist, you!

    The thing you are not understanding in my text above is that a
    different address does /not/ mean different function!

    The exact /same/ computation can be implemented in multiple ways,
    and located at /different/ addresseds.

    Your native pointer comparison wrongly concludes that two functions
    that are the same are different.


    add_foo(x, y) { return x + y }
    add_bar(x, y) { return x + y }

    are just two names/addresses for the same computation.

    There is no algorithm which determines whether two functions are >>>>>>> the same; it is an undecidable problem.

    Your abort decision is based on a strawman implementation of a
    problem which is undecidable in its correct form.

    Identical finite strings of machine code can achieve this same
    result in the Linz proof.

    Machines can be identical/equivalent computations yet be completely
    different strings.

    You are seeing this problem in your own code. You created a clone
    of HHH called HHH1, which is the same except for the name.
    Yet, it's behaving differently.


    In the very well known conventionally understood way.

    Why? Because when your machinery sees CALL HHH1 and CALL HHH in the
    execution trace, it treats them as different functions.


    When I say that DD calls HHH(DD) in recursive simulation and DD does
    not call HHH1 at all, and I say this 500 times

    But if HHH1 and HHH were correctly identified as the same
    computation, then there would be no difference between "call HHH(DD)"
    and "call HHH1(DD)".

    It doesn't strike you as wrong that if you just copy a function under
    a different name, you get a different behavior?

    (Didn't you work as a software engineer and get to pick most function
    names all your life without worrying that the choice would break your
    program?)

    and no one sees that these behaviors cannot be the same on the basis
    of these differences, I can only reasonably conclude short-circuits
    in brains or lying.

    The difference you created is wrong; your shit is concluding that if
    there are different addresses in a CALL instruction in an execution
    trace, then the functions must be different.

    And /that test/ is what is /introducing/ the difference.

    I'm saying that /if/ you had an abstract comparison
    Compare_Function(X, Y) which reports true when X is HHH and Y is HHH1
    (or vice versa), and used that comparison whenever your abort logic
    compares function pointers rather than using the == operator, that
    difference in behavior would disappear.

    You must not do that. You must compare function pointers using your
    own CompareFunction(X, Y) function which is calibrated such that
    CompareFunction(HHH, HHH1) yields true.

    If you consistently use CompareFunction everywhere you would
    otherwise use X == Y, you will see that HHH1(DD) and HHH(DD)
    proceed identically.

    The trace already shows that DD emulated by HHH1 and DD emulated by
    HHH are identical until HHH emulates an instance of itself emulating
    an instance of DD.

    They are identical until a decision is made, which involves comparing
    whether functions are the same, by address.

    That's where you are wrong. DD is built on HHH. Saying HHH is not
    part of the input is like saying the engine is not part of a car.
    DD is the program under test.
    HHH is not the program under test.

    The bulk of DD consists of HHH, and the bulk of HHH consists of the
    simulator that it relies on, so what you are saying is completely
    moronic, as if you don't understand software engineering.

    HHH is the test program.
    DD is not the test program.

    The diagonal case changes that; the program under test by an
    algorithm includes the implementation of that algorithm.

    None of my three theory of computation textbooks seemed to mention
    this at all. Where did you get it from?

    By paying attention in CS classes.

    I only learned theory of computation from textbooks and you say the
    whole pure function thing is not in any textbooks?

    Pure functions are important in topics connected with programming
    languages. Their properties are useful in compiler optimization,
    Useful yet not mandatory.

    That's because when we are developing systems, we are not trying to
    prove theorems about halting. (Ideally, we would just want to prove
    that our systems do what we think and say they do; and we /can/ prove
    that combinations of impure functions have the properties we want
    them to have in that context.)

    If you want to explore the theory of computation by writing code in
    a programming language which has imperative features (side effects),
    it behooves you to make your functions behave as closely to the
    theoretical ones as possible, which implies purity.

    I think that I learned that from you.
    Pure functions are Turing computable functions.
    On the other hand impure functions seem to contradict the
    Church/Turing thesis.

    Not necessarily. When you have impurity, you have to manage it
    carefully and prove that it's not making your theoretical result
    wrong.
    This is an additional burden which requires you to be extra clever,
    and it's an extra burden to anyone following your work.

    Turing's own tape machine is designed such that the tape head
    performs impure calculations: it mutates the tape.

    This is managed by isolation. Each Turing Machine gets its own tape.
    Each Turing Machine is understood to be a process that starts with
    the tape in the specified initial contents. We never have to think
    about the tape being corrupt, or being tampered with.

    Anyway, I'm only illustrating the term "undecidable" and how it is
    used. It is used to describe situations when we believe we don't
    have an algorithm that terminates on every input in the desired
    input set.


    So maybe the conventional "do the opposite" relationship can be
    called an undecidable instance.

    I wouldn't. Because the instance is positively decidable.
    The term "undecidable" in computer science is so strongly linked with
    the idea of there being no algorithm which terminates and provides
    the correct answer for all instances in the space of concern, I
    wouldn't reuse the term for anything else.

    I will not tolerate that there is no existing term for a meaning
    that I must express.

    I just use "the diagonal case", because it is understood that in the
    diagonal case there is a procedure and an input, such that the
    procedure decides incorrectly. However, that is a bit of a coded
    term understandable just to people who are ramped up on the problem.
    Just "gets the wrong answer". The case, as a set of one,
    is decidable.

    I am hereby establishing the term "undecidable instance" for this
    case.

    Hard disagree; naming is important. Reusing deeply entrenched,
    loaded terms, big no no.


    The relationship between HHH and DD isn't that DD is
    "undecidable" to HHH, but that HHH /doesn't/ decide DD (either by
    not terminating or returning the wrong value). This is by design;
    DD is built on HHH and designed such that HHH(DD) is incorrect,
    if HHH(DD) terminates.


    So what conventional term do we have for the undecidability of a
    single H/D pair? H forced to get the wrong answer seems too clumsy
    It's not only clumsy, but it's wrong. Nothing is forced.

    Forced means that an altered course of action is imposed by
    interference where a different course was to take place without
    interference.

    It is an input/decider combination intentionally defined to create
    an undecidable instance.

    But the combination doesn't do anything to the decider whatsoever,
    taking it as-is.

    If you can tell me how to convert HHH into a pure function and keep
    complete correspondence to the HP proof I will do this.
    Until then HHH is a correct termination analyzer.

    "Until someone shows how to correct my mistake, I am right"
    is kind of not how it works.

    Any simulation that falls short of this is just an incomplete
    and/or incorrect analysis, and not a description of the subject's
    behavior.

    It follows the semantics specified by the input finite string.

    Not all the way to the end, right? The semantics is not completely
    evolved when the simulation is abruptly abandoned.

    If you only paid enough attention you would see that the only
    possible end is out-of-memory error.

    I paid more attention and saw that the abandoned simulation is
    actually terminating. Take a few more steps of it and a CALL HHH DD
    instruction is seen to terminate, and conditional jumps are coming.


    [..snip..]

    To be a halt decider HHH must report a halting decision to its caller, DD, and DD then halts if HHH reports non-halting thus proving HHH to be incorrect.

    /Flibble


    HHH has never been supposed to report on the behavior
    of its caller. That is just not the way that deciders
    have ever worked. HHH has always been required to
    report on the semantic property specified by its input
    finite string. Rice's theorem mentions semantic properties
    yet still misattributes these to something other than
    the input finite string.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory on Wed Sep 17 16:38:01 2025
    From Newsgroup: comp.theory

    On Wed, 17 Sep 2025 11:11:42 -0500, olcott wrote:

    [..snip..]


    HHH has never been supposed to report on the behavior of its caller.
    That is just not the way that deciders have ever worked. HHH has always
    been required to report on the semantic property specified by its input finite string. Rice's theorem mentions semantic properties yet still misattributes these to something other than the input finite string.

    However, in the case of the Halting Problem diagonalization-based proofs, HHH's input just happens to ALSO be a description of its caller. HHH gets
    the answer wrong because the Halting Problem has been proven to be undecidable.

    /Flibble
    --
    meet ever shorter deadlines, known as "beat the clock"
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Wed Sep 17 18:01:56 2025
    From Newsgroup: comp.theory

    On 17/09/2025 07:34, vallor wrote:
    On Mon, 15 Sep 2025 21:59:33 +0100, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote in <10a9unl$26h2q$1@dont-email.me>:

    hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option.
    I don't have objcopy to test, but PO only needs a few basic COFF
    capabilities, so it might be enough...

    <https://man7.org/linux/man-pages/man1/objcopy.1.html>

    Mike.

    $ objcopy -I elf64-x86-64 -O pe-i386 hello.o hello.coff
    $ file hello.coff
    hello.coff: Intel 80386 COFF object file, no line number info, not
    stripped, 8 sections, symbol offset=0x22e, 4 symbols, 1st section name ".text"

    Looks like it can handle it -- whether or not Olcott's COFF handler
    can read those, no clue.

    (Something tells me one would have to build the original .o file
    with -m32 -march=i386, but my machine isn't set up to do that. That,
    because IIRC Olcott's system is 32-bit.)

    Right - the object code needs to target the 32-bit x86 instruction set, because that's what the
    libx86emu library (used by x86utm.exe) interprets.

    pe-i386 sounds like it produces a PE file format - that would be a linked executable, rather than a
    COFF (object code) file, although I think COFF and PE share many internal record/struct formats.
    Someone would have to try it I guess.

    [If someone supplies such a .coff file I can see what x86utm.exe makes of it...]

    Mike.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 12:22:44 2025
    From Newsgroup: comp.theory

    On 9/17/2025 11:38 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 11:11:42 -0500, olcott wrote:

    On 9/17/2025 10:59 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 10:44:39 -0500, olcott wrote:

    On 9/17/2025 10:35 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 10:26:49 -0500, olcott wrote:

    On 9/17/2025 10:19 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
    The way you are testing function equivalence is flawed.


    It is not and you convoluted example seems to make no point.

    Function equivalence cannot be determined by address because, >>>>>>>>> for instance:


    In the semantics of the x86 model of computation same address
    means same function.

    Sure, you genius computer scientist, you!

    The thing you are not understanding in my text above is that a
    different address does /not/ mean different function!

    The exact /same/ computation can be implemented in multiple ways, >>>>>>> and located at /different/ addresseds.

    Your native pointer comparison wrongly concludes that two functions >>>>>>> that are the same are different.


    add_foo(x, y) { return x + y }
    add_bar(x, y) { return x + y }

    are just two names/addresses for the same computation.

    There is no algorithm which determines whether two functions are >>>>>>>>> the same; it is an undecidable problem.

    Your abort decision is based on a strawman implementation of a >>>>>>>>> problem which is undecidable in its correct form.

    Identical finite strings of machine code can achieve this same >>>>>>>> result in the Linz proof.

    Machines can be identical/equivalent computations yet be completely >>>>>>> different strings.

    You are seeing this problem in your own code. You created a clone >>>>>>>>> of HHH called HHH1, which is the same except for the name.
    Yet, it's behaving differently.


    In the very well known conventionally understood way.

    Why? Because when your machinery sees CALL HHH1 and CALL HHH in >>>>>>>>> the execution trace, it treats them as a different functions. >>>>>>>>>

    When I say that DD calls HHH(DD) in recursive simulation and DD >>>>>>>> does not call HHH1 at all and I say this 500 times

    But if HHH1 and HHH were correctly identified as the same
    computation, then there would be no difference between "call
    HHH(DD)"
    and "call HHH1(DD)".

    It doesn't strike you as wrong that if you just copy a function
    under a different name, you get a different behavior?

    (Didn't you work as a software engineer and get to pick most
    function names all your life without worrying that the choice would >>>>>>> break your program?)

    and no one sees that these behaviors cannot be the same on the >>>>>>>> basis of these differences I can only reasonably conclude
    short-circuits in brains or lying.

    The difference you created is wrong; your shit is concluding that >>>>>>> if there are differnt addresses in a CALL instruction in an
    execution trace, then the functions must be different.

    And /that test/ is what is /introducing/ the difference.

    I'm saying that /if/ you had an abstract comparison
    Compare_Function(X,
    Y) which reports true when X is HHH and Y is HHH1 (or vice versa), >>>>>>> and used that comparison whenever your abort logic compares
    function pointers rather than using the == operator, that
    difference in behavior would disappear.

    You must not do that. You must compare function pointers using >>>>>>>>> your own CompareFunction(X, Y) function which is calibrated such >>>>>>>>> that CompareFunction(HHH, HHH1) yields true.

    If you consistently use CompareFunction eveywhere you would
    otherwise use X == Y, you will see that HHH1(DD) and HHH(DD) >>>>>>>>> proceed identically.

    The trace already shows that DD emulated by HHH1 and DD emulated >>>>>>>> by HHH are identical until HHH emulates an instance of itself
    emulating an instance of DD.

    They are identical until a decision is made, which involves
    comparing whether functions are the same, by address.

    That's where you are wrong. DD is built on HHH. Saying HHH is not >>>>>>>>> part of the input is like saying the engine is not part of a car. >>>>>>>>
    DD is the program under test.
    HHH is not the program under test.

    The bulk of DD consists of HHH, and the bulk of HHH consists of the >>>>>>> simulator that it relies on, so what you are saying is completely >>>>>>> moronic, as if you don't understand software eng.

    HHH is the test program.
    DD is not the test program.

    The diagonal case changes that; the program under test by an
    algorithm includes the implementation of that algorithm.

    None of my three theory of computation textbooks seemed to >>>>>>>>>> mention this at all. Where did you get it from?

    By paying attention in CS classes.

    I only learned theory of computation from textbooks and you say >>>>>>>> the whole pure function thing is not in any textbooks?

    Pure functions are important in topics connected with programming >>>>>>>>> languages. Their properties are useful in compiler optimiization, >>>>>>>>
    Useful yet not mandatory.

    That's because when we are developing systems, we are not trying to >>>>>>> prove theorems about halting. (Ideally, we would just want to prove >>>>>>> that our systems do what we think and say they do; and we /can/
    prove that combinations of impure functions have the properties we >>>>>>> want them to have in that context.)

    If you want to explore the theory of computation by writing code >>>>>>>>> in a programming language which has imperative features (side >>>>>>>>> effects), it behooves you to make your functions behave as
    closely to the theoretical ones as possible, which implies
    purity.

    I think that I learned that from you.
    Pure functions are Turing computable functions.
    On the other hand impure functions seem to contradict the
    Church/Turing thesis.

    Not necessarily. When you have impurity, you have to manage it
    carefully and prove that it's not making your theoretical result >>>>>>> wrong.
    This is an additional burden which requires you to be extra clever, >>>>>>> and it's an extra burden to anyone following your work.

    Turing's own tape machine is designed such that the tape head
    performs impure calculations it mutates the tape.

    This is managed by isolation. Each Turing Machine gets its own
    tape. Each Turing Machine is understood to be a process that starts >>>>>>> with the tape in the specified initial contents. We never have to >>>>>>> think bout the tape being corrupt, or being tampered with.

    Anyway, I'm only illustrating the term "undecidable" and how it >>>>>>>>> is used. It is used to describe situations when we believe we >>>>>>>>> don't have an algorithm that terminates on every input in the >>>>>>>>> desired input set.


    So maybe the conventional "do the opposite" relationship can be >>>>>>>> called an undecidable instance.

    I wouldn't. Because the instance is positively decidable.
    The term "undecidable" in computer science is so strongly linked >>>>>>> with the idea of there being no algorithm which terminates and
    provides the correct answer for all instances in the space of
    concern, I wouldn't reuse the term for anything else.

    I will not tolerate that there is no existing term for a meaning >>>>>>>> that I must expression.

    I just use "the diagonal case", because it is understood that in >>>>>>> the diagonal case there is a procedure and an input, such that the >>>>>>> procedure decdides incorrectly. However, that is a bit of a coded >>>>>>> term understandable just to people who are ramped up on the
    problem.

    Just "gets thew wrong answer". The case, as a set of one,
    is decidable.

    I am hereby establishing the term "undecidable instance" for this >>>>>>>> case.

    Hard disagree; naming is important. Reusing deeply entrenced,
    loaded terms, big no no.


    The relationship between HHH and DD isn't that DD is
    "undecidable" to HHH, but that HHH /doesn't/ decide DD (either >>>>>>>>>>> by not terminating or returing the wrong value). This is by >>>>>>>>>>> design; DD is built on HHH and designed such that HHH(DD) is >>>>>>>>>>> incorrect, if HHH(DD) terminates.


    So what conventional term do we have for the undecidability of a >>>>>>>>>> single H/D pair? H forced to get the wrong answer seems too >>>>>>>>>> clumsy

    It's not only clumsy, but it's wrong. Nothing is forced.

    Forced means that an altered course of action is imposed by
    interference where a different course was to take place without >>>>>>>>> interference.

    It is an input/decider combination intentionally defined to create >>>>>>>> an undecidable instance.

    But the combination doens't do anything to the decider whatsoever, >>>>>>> taking it as-is.

    If you can tell me how to convert HHH into a pure function and >>>>>>>> keep complete correspondence to the HP proof I will do this.
    Until then HHH is a correct termination analyzer.

    "Until someone shows how to correct my mistake, I am right"
    is kind of not how it works.

    Any simulation that falls short of this is just an incomplete >>>>>>>>>>> and/or incorrect analysis, and not a description of the
    subject's behavior.

    It follows the semantics specified by the input finite string. >>>>>>>>>
    Not all the way to the end, right? The semantics is not
    completely evolved when the simulation is abruptly abandoned. >>>>>>>>
    If you only paid enough attention you would see that the only
    possible end is out-of-memory error.

    I paid more attention and saw that the abandoned simulation is
    actually terminating. Take a few more steps of it and a CALL HHH DD
    instruction is seen to terminate, and conditional jumps are coming.

    You are the one who seems unable to understand the easily
    verified fact that DD calling HHH(DD) in recursive simulation changes
    the behavior relative to the case where DD does not call HHH1 at all.

    DD halts.

    /Flibble



    DD.HHH does not halt.
    You keep trying to get away with the strawman deception.

    To be a halt decider HHH must report a halting decision to its caller,
    DD,
    and DD then halts if HHH reports non-halting thus proving HHH to be
    incorrect.

    /Flibble


    HHH has never been supposed to report on the behavior of its caller.
    That is just not the way that deciders have ever worked. HHH has always
    been required to report on the semantic property specified by its input
    finite string. Rice's theorem mentions semantic properties yet still
    misattributes these to something other than the input finite string.

    However, in the case of the Halting Problem diagonalization-based proofs, HHH's input just happens to ALSO be a description of its caller.

    Not exactly if you are paying 100% complete
    attention.

    HHH gets
    the answer wrong because the Halting Problem has been proven to be undecidable.


    Only when it is incorrectly defined such that the
    decider is required to report on something besides
    the actual behavior that the actual input actually
    specifies.

    No one ever noticed this before because no one
    ever investigated simulating halt deciders to
    the degree that they can see the "do the opposite"
    portion of the input is unreachable code when it
    is being simulated by its corresponding decider.

    /Flibble



    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 17:50:15 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/17/2025 12:30 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE

    That's vacuously true because DD is never correctly
    simulated by HHH.


    It seems that you might be intentionally dishonest here.
    I proved that DD is correctly simulated by HHH up to the
    point where DD has met its non-halting criteria.

    "correctly simulated up to the point" is exactly the same
    kind of weasel phrasing as "lawfully behaved up to the point of
    robbing a convenience store".

    If you correctly answer some exam questions, but then run out
    of time, so that 15 out of 20 are unanswered, your score is 25%.

    Completeness is part of correctness.

    If you write a program that receives some packets from the network and
    has to handle 15 cases, but you implemented only four, it is not
    correct, even if the four cases are flawless.

    The diagonal case is something that happens; a situation which never
    happens is never identifiable as one which happens.

    When DD is emulated by HHH it is very easy to see
    that the "do the opposite" code is unreachable.

    No, it isn't. That only happens when you disable the abort
    code. Then HHH(DD) doesn't return.

    Within the premise that HHH(DD) is only supposed
    to report on the actual behavior that it actually
    sees then the diagonal case is defeated. HHH
    simply rejects DD as non-halting.

    But the simulation that is rejected has an instruction
    pointer that's ready to go to the next instruction.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 18:01:22 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    Only when it is incorrectly defined such that the
    decider is required to report on something besides
    the actual behavior that the actual input actually
    specifies.

    The actual input is the function DD, the function HHH,
    and all the functions that HHH calls like Debug_Step,
    Allocate and everything else in the call graph.

    The input is the entire call graph of DD, not just the
    code in DD.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 13:10:49 2025
    From Newsgroup: comp.theory

    On 9/17/2025 12:50 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/17/2025 12:30 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
    DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE

    That's vacuously true because DD is never correctly
    simulated by HHH.


    It seems that you might be intentionally dishonest here.
    I proved that DD is correctly simulated by HHH up to the
    point where DD has met its non-halting criteria.

    "correctly simulated up to the point" is exactly the same
    kind of weasel phrasing as "lawfully behaved up to the point of
    robbing a convenience store".


    void Infinite_Recursion()
    {
        Infinite_Recursion();
        return;
    }

    Yes intentionally dishonest it really seems to be.

    If you correctly answer some exam questions, but then run out
    of time, so that 15 out of 20 are unanswered, your score is 25%.

    Completeness is part of correctness.

    If you write a program that receives some packets from the network and
    has to handle 15 cases, but you implemented only four, it is not
    correct, even if the four cases are flawless.

    The diagonal case is something that happens; a situation which never
    happens is never identifiable as one which happens.

    When DD is emulated by HHH it is very easy to see
    that the "do the opposite" code is unreachable.

    No, it isn't. That only happens when you disable the abort
    code. Then HHH(DD) doesn't return.


    Maybe you are much less technically skilled at
    software engineering than I estimated.

    int DD()
    {
        int Halt_Status = HHH(DD);
        if (Halt_Status)
            HERE: goto HERE;
        return Halt_Status;
    }

    *Generic HHH defined*
    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    return 1.

    This HHH simulates DD that calls HHH(DD)
    that simulates DD that calls HHH(DD)...
    until an out-of-memory error or HHH kills the process.

    Within the premise that HHH(DD) is only supposed
    to report on the actual behavior that it actually
    sees then the diagonal case is defeated. HHH
    simply rejects DD as non-halting.

    But the simulation that is rejected has an instruction
    pointer that's ready to go to the next instruction.


    until out-of-memory exception.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Wed Sep 17 19:19:40 2025
    From Newsgroup: comp.theory

    On 17/09/2025 16:44, olcott wrote:

    <snip>

    You quoted 250 lines to add TWO!!

    Learn to snip!!

    And learn what halting means.


    DD.HHH does not halt.
    You keep trying to get away with the strawman deception.

    DD.HHH /is/ the strawman.

    DD halts.

    If DD.HHH doesn't halt, then HHH has misunderstood DD.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Wed Sep 17 19:22:30 2025
    From Newsgroup: comp.theory

    On 17/09/2025 17:11, olcott wrote:
    HHH has never been supposed to report on the behavior
    of its caller.

    And it doesn't. It completely fails to report on DD.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 13:24:34 2025
    From Newsgroup: comp.theory

    On 9/17/2025 1:01 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    Only when it is incorrectly defined such that the
    decider is required to report on something besides
    the actual behavior that the actual input actually
    specifies.

    The actual input is the function DD, the function HHH,

    You already know that TMs DO NOT take other
    TMs as inputs. They only take finite strings.

    Why lie?

    THE ACTUAL BEHAVIOR THAT THE ACTUAL FINITE STRING INPUT ACTUALLY SPECIFIES

    HHH does not freaking take its own freaking caller as its input.

    Why lie?

    and all the functions that HHH calls like Debug_Step,
    Allocate and everything else in the call graph.

    The input is the entire call graph of DD, not just the
    code in DD.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 13:44:25 2025
    From Newsgroup: comp.theory

    On 9/17/2025 1:22 PM, Richard Heathfield wrote:
    On 17/09/2025 17:11, olcott wrote:
    HHH has never been supposed to report on the behavior
    of its caller.

    And it doesn't. It completely fails to report on DD.


    HHH cannot see the behavior of its caller or
    which function is calling it.

    A halt decider cannot use psychic power to
    REPORT ON BEHAVIOR THAT IT CANNOT SEE.

    Thus the job of a halt decider is to report
    on the actual behavior that its actual finite
    string input actually specifies.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Wed Sep 17 20:02:57 2025
    From Newsgroup: comp.theory

    On 17/09/2025 19:44, olcott wrote:
    On 9/17/2025 1:22 PM, Richard Heathfield wrote:
    On 17/09/2025 17:11, olcott wrote:
    HHH has never been supposed to report on the behavior
    of its caller.

    And it doesn't. It completely fails to report on DD.


    HHH cannot see the behavior of its caller or
    which function is calling it.

    Nor can it correctly decide the question it was written to answer.

    A halt decider cannot use psychic power to
    REPORT ON BEHAVIOR THAT IT CANNOT SEE.

    HHH("dd.exe") or HHH("dd.c") - take your pick. There is NOTHING
    stopping HHH from seeing everything... except you.

    Thus the job of a halt decider is to report
    on the actual behavior that its actual finite
    string input actually specifies.

    Then why not just pass it NULL? If you're not going to give it
    all the facts, why give it any at all?
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 19:08:38 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/17/2025 1:22 PM, Richard Heathfield wrote:
    On 17/09/2025 17:11, olcott wrote:
    HHH has never been supposed to report on the behavior
    of its caller.

    And it doesn't. It completely fails to report on DD.

    HHH cannot see the behavior of its caller or
    which function is calling it.

    The caller of the top-level activation of HHH is main, isn't it?

    int main()
    {
        if (HHH(DD)) {
            OutputString("HHH says DD halts");
        } else {
            OutputString("HHH says DD doesn't halt");
        }
    }

    In your twisted simulation world, it /is/ possible for HHH to have access
    to an execution trace which could inform it that it is being called by
    main. That kind of thing isn't valid, though.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 14:09:47 2025
    From Newsgroup: comp.theory

    On 9/17/2025 2:02 PM, Richard Heathfield wrote:
    On 17/09/2025 19:44, olcott wrote:
    On 9/17/2025 1:22 PM, Richard Heathfield wrote:
    On 17/09/2025 17:11, olcott wrote:
    HHH has never been supposed to report on the behavior
    of its caller.

    And it doesn't. It completely fails to report on DD.


    HHH cannot see the behavior of its caller or
    which function is calling it.

    Nor can it correctly decide the question it was written to answer.


    So you don't understand that requiring
    a halt decider to have psychic ability is nuts?
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Wed Sep 17 15:14:04 2025
    From Newsgroup: comp.theory

    On 9/17/2025 3:09 PM, olcott wrote:
    On 9/17/2025 2:02 PM, Richard Heathfield wrote:
    On 17/09/2025 19:44, olcott wrote:
    On 9/17/2025 1:22 PM, Richard Heathfield wrote:
    On 17/09/2025 17:11, olcott wrote:
    HHH has never been supposed to report on the behavior
    of its caller.

    And it doesn't. It completely fails to report on DD.


    HHH cannot see the behavior of its caller or
    which function is calling it.

    Nor can it correctly decide the question it was written to answer.


    So you don't understand that requiring
    a halt decider

    i.e. an algorithm that satisfies these requirements:


    Given any algorithm (i.e. a fixed immutable sequence of instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly



    to have psychic ability is nuts?



    In other words, you agree with Turing and Linz that the above
    requirements cannot be satisfied.
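
    The requirements above can be exercised with a minimal C sketch of the
    diagonal construction. This is only illustrative: H, D, and h_answer are
    hypothetical names (H is a stub standing in for a would-be decider, not
    anyone's posted code). Whatever fixed verdict H returns, D contradicts it:

    #include <assert.h>

    /* Hypothetical stand-in for a halt decider: returns a fixed verdict.
       0 means "does not halt", 1 means "halts". */
    static int h_answer = 0;

    static int D(void);

    static int H(int (*x)(void))
    {
        (void)x;            /* a real decider would inspect x's description */
        return h_answer;
    }

    /* Diagonal program: does the opposite of whatever H predicts. */
    static int D(void)
    {
        if (H(D))
            for (;;) ;      /* H said "halts", so loop forever */
        return 0;           /* H said "does not halt", so halt */
    }

    int main(void)
    {
        /* With the verdict fixed at 0 ("does not halt"), D in fact halts,
           so H maps (<D>,) to the wrong value.  With the verdict fixed
           at 1, D would loop forever (not executed here), and H would
           again be wrong. */
        assert(H(D) == 0);
        assert(D() == 0);   /* D terminates, contradicting H */
        return 0;
    }

    Either branch of the fixed verdict violates one of the two mapping
    clauses, which is the shape of the Turing/Linz argument.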
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 14:14:36 2025
    From Newsgroup: comp.theory

    On 9/17/2025 2:08 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/17/2025 1:22 PM, Richard Heathfield wrote:
    On 17/09/2025 17:11, olcott wrote:
    HHH has never been supposed to report on the behavior
    of its caller.

    And it doesn't. It completely fails to report on DD.

    HHH cannot see the behavior of its caller or
    which function is calling it.

    The caller of the top-level activation of HHH is main, isn't it?


    Not when the directly executed DD() exists.

    int main()
    {
        if (HHH(DD)) {
            OutputString("HHH says DD halts");
        } else {
            OutputString("HHH says DD doesn't halt");
        }
    }

    In your twisted simulation world, it /is/ possible for HHH to have access
    to an execution trace which could inform it that it is being called by
    main. That kind of thing isn't valid, though.


    Its not at all twisted.
    The x86utm operating system did have an
    OS level halt decider that abnormally terminates
    (abend with core dump from IBM 370 days) when
    DD() is executed from main().

    A non halting function was treated the same
    as a divide by zero error.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 19:54:24 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/17/2025 2:02 PM, Richard Heathfield wrote:
    On 17/09/2025 19:44, olcott wrote:
    On 9/17/2025 1:22 PM, Richard Heathfield wrote:
    On 17/09/2025 17:11, olcott wrote:
    HHH has never been supposed to report on the behavior
    of its caller.

    And it doesn't. It completely fails to report on DD.


    HHH cannot see the behavior of its caller or
    which function is calling it.

    Nor can it correctly decide the question it was written to answer.


    So you don't understand that requiring
    a halt decider to have psychic ability is nuts?

    Requiring someone to write a universal halt decider, or to solve
    any undecidable problem, is nuts.

    Yes, deciding halting in the Turing domain requires psychic ability,
    so to speak.

    That is discussed in some literature as a thought experiment, a Magic
    Oracle that can report the halting status of any Turing Machine.

    The undecidability of halting means that only such an imaginary Magic
    Oracle can decide; it is not computable by a Turing Machine,
    which has no access to such an Oracle.

    The Magic Oracle can be defeated by the same diagonal trick, though;
    the diagonal case just includes a call to the Magic Oracle and
    then "behaves opposite". Therefore, the Magic Oracle cannot decide
    universal halting in the Magic Oracle domain. Only in the Turing domain.

    The Magic Oracle thought experiment is used to illustrate how there
    can be computational frameworks of different powers. A more powerful
    framework could decide halting of cases confined to a less powerful
    framework.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 20:04:50 2025
    From Newsgroup: comp.theory

    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/17/2025 2:08 PM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/17/2025 1:22 PM, Richard Heathfield wrote:
    On 17/09/2025 17:11, olcott wrote:
    HHH has never been supposed to report on the behavior
    of its caller.

    And it doesn't. It completely fails to report on DD.

    HHH cannot see the behavior of its caller or
    which function is calling it.

    The caller of the top-level activation of HHH is main, isn't it?

    Not when the directly executed DD() exists.

    The directly executed HHH(DD), when a directly executed DD() does not
    exist, is indeed not being asked to report on its caller. That would be
    main, which doesn't make sense.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From joes@noreply@example.org to comp.theory on Wed Sep 17 20:33:19 2025
    From Newsgroup: comp.theory

    Am Wed, 17 Sep 2025 12:22:44 -0500 schrieb olcott:
    On 9/17/2025 11:38 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 11:11:42 -0500, olcott wrote:
    On 9/17/2025 10:59 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 10:44:39 -0500, olcott wrote:
    On 9/17/2025 10:35 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 10:26:49 -0500, olcott wrote:
    On 9/17/2025 10:19 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 11:58 PM, Kaz Kylheku wrote:

    The way you are testing function equivalence is flawed.
    It is not, and your convoluted example seems to make no point.
    The point is that they are the same.

    You are seeing this problem in your own code. You created a
    clone of HHH called HHH1, which is the same except for the
    name. Yet, it's behaving differently.

    Why? Because when your machinery sees CALL HHH1 and CALL HHH in
    the execution trace, it treats them as different functions.
    When I say that DD calls HHH(DD) in recursive simulation and DD
    does not call HHH1 at all and I say this 500 times

    But if HHH1 and HHH were correctly identified as the same
    computation, then there would be no difference between "call
    HHH(DD)" and "call HHH1(DD)".
    It doesn't strike you as wrong that if you just copy a function
    under a different name, you get a different behavior?
    (Didn't you work as a software engineer and get to pick most
    function names all your life without worrying that the choice
    would break your program?)

    and no one sees that these behaviors cannot be the same on the
    basis of these differences, I can only reasonably conclude
    short-circuits in brains or lying.
    You are, naturally, infallible. What is the difference between HHH
    and HHH1 again?

    The trace already shows that DD emulated by HHH1 and DD emulated
    by HHH are identical until HHH emulates an instance of itself
    emulating an instance of DD.
    They are identical until a decision is made, which involves
    comparing whether functions are the same, by address.
    Good catch.

    I will not tolerate that there is no existing term for a meaning
    that I must express.
    There is no need to make up a term for something nonexistent.

    The relationship between HHH and DD isn't that DD is
    "undecidable" to HHH, but that HHH /doesn't/ decide DD
    (either by not terminating or returning the wrong value). This
    is by design; DD is built on HHH and designed such that
    HHH(DD) is incorrect, if HHH(DD) terminates.

    It is an input/decider combination intentionally defined to
    create an undecidable instance.
    I'd just say the "decider" gets this input wrong.

    But the combination doesn't do anything to the decider
    whatsoever, taking it as-is.

    It follows the semantics specified by the input finite string.
    Not all the way to the end, right? The semantics is not
    completely evolved when the simulation is abruptly abandoned.
    If you only paid enough attention you would see that the only
    possible end is out-of-memory error.
    Not if you fix DD to mean "the program that calls the decider that
    aborts after two levels of simulation" instead of "the diagonal
    program that calls whatever is simulating it".

    DD halts. DD.HHH does not halt.
    You keep trying to get away with the strawman deception.
    DD also halts if it were simulated further, but DD is not DD1.

    However, in the case of the Halting Problem diagonalization-based
    proofs,
    HHH's input just happens to ALSO be a description of its caller.
    Not exactly if you are paying 100% complete attention.
    DD is the input to HHH; DD calls HHH.

    HHH gets the answer wrong because the Halting Problem has been proven
    to be undecidable.
    Only when it is incorrectly defined such that the decider is required to report on something besides the actual behavior that the actual input actually specifies.
    You may not like it, but the way the HP is defined it is undecidable.

    No one ever noticed this before because no one ever investigated
    simulating halt deciders to the degree that they can see the "do the opposite"
    portion of the input is unreachable code when it is being simulated by
    its corresponding decider.
    That's pretty obvious.
    --
    Am Sat, 20 Jul 2024 12:35:31 +0000 schrieb WM in sci.math:
    It is not guaranteed that n+1 exists for every n.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 15:36:37 2025
    From Newsgroup: comp.theory

    On 9/17/2025 3:33 PM, joes wrote:
    Am Wed, 17 Sep 2025 12:22:44 -0500 schrieb olcott:
    On 9/17/2025 11:38 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 11:11:42 -0500, olcott wrote:
    On 9/17/2025 10:59 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 10:44:39 -0500, olcott wrote:
    On 9/17/2025 10:35 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 10:26:49 -0500, olcott wrote:
    On 9/17/2025 10:19 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:
    On 9/16/2025 11:58 PM, Kaz Kylheku wrote:

    The way you are testing function equivalence is flawed.
    It is not, and your convoluted example seems to make no point.
    The point is that they are the same.

    You are seeing this problem in your own code. You created a
    clone of HHH called HHH1, which is the same except for the
    name. Yet, it's behaving differently.

    Why? Because when your machinery sees CALL HHH1 and CALL HHH in
    the execution trace, it treats them as different functions.
    When I say that DD calls HHH(DD) in recursive simulation and DD
    does not call HHH1 at all and I say this 500 times

    But if HHH1 and HHH were correctly identified as the same
    computation, then there would be no difference between "call
    HHH(DD)" and "call HHH1(DD)".
    It doesn't strike you as wrong that if you just copy a function
    under a different name, you get a different behavior?
    (Didn't you work as a software engineer and get to pick most
    function names all your life without worrying that the choice
    would break your program?)

    and no one sees that these behaviors cannot be the same on the
    basis of these differences, I can only reasonably conclude
    short-circuits in brains or lying.
    You are, naturally, infallible. What is the difference between HHH
    and HHH1 again?

    The trace already shows that DD emulated by HHH1 and DD emulated
    by HHH are identical until HHH emulates an instance of itself
    emulating an instance of DD.
    They are identical until a decision is made, which involves
    comparing whether functions are the same, by address.

    Good catch.


    That is not a good catch, it is a damned lie.

    They are identical until HHH emulates an instance
    of itself emulating an instance of DD.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Wed Sep 17 21:48:57 2025
    From Newsgroup: comp.theory

    On 17/09/2025 20:09, olcott wrote:
    On 9/17/2025 2:02 PM, Richard Heathfield wrote:
    On 17/09/2025 19:44, olcott wrote:
    On 9/17/2025 1:22 PM, Richard Heathfield wrote:
    On 17/09/2025 17:11, olcott wrote:
    HHH has never been supposed to report on the behavior
    of its caller.

    And it doesn't. It completely fails to report on DD.


    HHH cannot see the behavior of its caller or
    which function is calling it.

    Nor can it correctly decide the question it was written to answer.


    So you don't understand that requiring
    a halt decider to have psychic ability is nuts?

    Nobody (except you) requires your halt decider to do anything.

    Turing showed that a universal halt decider cannot exist, so
    requiring it to have any characteristics whatsoever is an
    exercise in futility.

    You don't /have/ a UTM, of course, and I presume you know that.
    What you have is a program to decide *one* simple C function...
    which it gets wrong.

    But if you are (AGAIN) making the point that HHH can't see what
    it's being asked about, I've shown you over and over again how to
    fix that, but you daren't listen because you need to be able to
    pretend that HHH can't get the info it needs.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From joes@noreply@example.org to comp.theory on Wed Sep 17 20:51:02 2025
    From Newsgroup: comp.theory

    Am Wed, 17 Sep 2025 15:36:37 -0500 schrieb olcott:
    On 9/17/2025 3:33 PM, joes wrote:
    Am Wed, 17 Sep 2025 12:22:44 -0500 schrieb olcott:
    On 9/17/2025 11:38 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 11:11:42 -0500, olcott wrote:
    On 9/17/2025 10:59 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 10:44:39 -0500, olcott wrote:
    On 9/17/2025 10:35 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 10:26:49 -0500, olcott wrote:
    On 9/17/2025 10:19 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:

    The trace already shows that DD emulated by HHH1 and DD
    emulated by HHH are identical until HHH emulates an instance
    of itself emulating an instance of DD.
    They are identical until a decision is made, which involves
    comparing whether functions are the same, by address.

    Good catch.

    That is not a good catch, it is a damned lie.
    They are identical until HHH emulates an instance of itself emulating an instance of DD.
    Exactly, HHH decides HHH is different from HHH1.

    You snipped a lot of stuff here...
    --
    Am Sat, 20 Jul 2024 12:35:31 +0000 schrieb WM in sci.math:
    It is not guaranteed that n+1 exists for every n.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Sep 17 21:21:18 2025
    From Newsgroup: comp.theory

    On 2025-09-17, joes <noreply@example.org> wrote:
    Am Wed, 17 Sep 2025 15:36:37 -0500 schrieb olcott:
    On 9/17/2025 3:33 PM, joes wrote:
    Am Wed, 17 Sep 2025 12:22:44 -0500 schrieb olcott:
    On 9/17/2025 11:38 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 11:11:42 -0500, olcott wrote:
    On 9/17/2025 10:59 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 10:44:39 -0500, olcott wrote:
    On 9/17/2025 10:35 AM, Mr Flibble wrote:
    On Wed, 17 Sep 2025 10:26:49 -0500, olcott wrote:
    On 9/17/2025 10:19 AM, Kaz Kylheku wrote:
    On 2025-09-17, olcott <polcott333@gmail.com> wrote:

    The trace already shows that DD emulated by HHH1 and DD
    emulated by HHH are identical until HHH emulates an instance
    of itself emulating an instance of DD.
    They are identical until a decision is made, which involves
    comparing whether functions are the same, by address.

    Good catch.

    That is not a good catch it is a damned lie.
    They are identical until HHH emulates an instance of itself emulating an
    instance of DD.
    Exactly, HHH decides HHH is different from HHH1.

    When Olcott copy and pasted HHH to make HHH1, changing only a name, I
    can only speculate that he did this under the belief that they are
    supposed to be the same function. Otherwise what could be the point?

    However, the inappropriate HHH1 == HHH produces a false result,
    deeming the functions not to be the same.

    HHH1 and HHH are supposed to be the same function, yet the function is
    not equal to itself under this low-level implementation-based
    comparison.

    It's the same error like newbie C programmers using

    str1 == str2

    to compare strings, causing a false negative when two identical
    strings exist at different addresses.
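    The false negative is easy to demonstrate (a minimal sketch, using two
    identically-spelled strings that live at distinct addresses):

    ```c
    #include <string.h>

    /* Two separate arrays with identical contents occupy different
     * addresses, so the newbie comparison str1 == str2 compares
     * pointers and yields a false negative. */
    static char str1[] = "HHH";
    static char str2[] = "HHH";

    int addresses_equal(void) { return str1 == str2; }            /* wrong test  */
    int contents_equal(void)  { return strcmp(str1, str2) == 0; } /* right test  */
    ```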

    Now implementing a proper function comparison is impossible; it is
    an undecidable problem to calculate whether two programs are
    equivalent!

    However, since the number of functions in the whole apparatus
    is enumerable (and small), you can easily write a function-comparing
    function which knows about all the functions that exist and knows
    which pairs of them are intended to be equivalent. You can just
    hard-code that. Or have a table initialized on start-up:

    // on start-up
    Register_As_Equivalent(HHH, HHH1);

    Then you have

    u32 Compare_Functions(Ptr left, Ptr right)
    {
        if (left == right)
            return 1;
        if (Is_Registered_Equivalent_Pair(left, right))
            return 1;
        return 0;
    }

    If the Needs_Abort_HH stuff correctly uses Compare_Functions
    rather than the == operator, all difference between the HHH
    and HHH1 traces can be expected to vanish.
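    A minimal sketch of such a registry (the names Register_As_Equivalent and
    Is_Registered_Equivalent_Pair come from the post above; the fixed-size
    table implementation here is an illustrative assumption):

    ```c
    #include <stddef.h>

    typedef void *Ptr;

    #define MAX_PAIRS 16

    /* Table of address pairs that denote the same function. */
    static Ptr equiv_pairs[MAX_PAIRS][2];
    static size_t num_pairs;

    /* Record that two function addresses are intended to be equivalent. */
    void Register_As_Equivalent(Ptr a, Ptr b)
    {
        if (num_pairs < MAX_PAIRS) {
            equiv_pairs[num_pairs][0] = a;
            equiv_pairs[num_pairs][1] = b;
            num_pairs++;
        }
    }

    /* True when (left, right) was registered, in either order. */
    int Is_Registered_Equivalent_Pair(Ptr left, Ptr right)
    {
        for (size_t i = 0; i < num_pairs; i++) {
            if ((equiv_pairs[i][0] == left  && equiv_pairs[i][1] == right) ||
                (equiv_pairs[i][0] == right && equiv_pairs[i][1] == left))
                return 1;
        }
        return 0;
    }
    ```

    Compare_Functions above can then consult this table instead of relying
    solely on the == operator.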
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Wed Sep 17 14:38:01 2025
    From Newsgroup: comp.theory

    On 9/17/2025 12:48 AM, vallor wrote:
    On Mon, 15 Sep 2025 14:35:18 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote in <10aa0qm$26msp$5@dont-email.me>:

    On 9/15/2025 2:32 PM, olcott wrote:
    On 9/15/2025 4:30 PM, Chris M. Thomasson wrote:
    On 9/15/2025 2:26 PM, olcott wrote:
    On 9/15/2025 4:01 PM, joes wrote:
    Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
    On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
    On 2025-09-15, olcott <polcott333@gmail.com> wrote:
    On 9/14/2025 1:41 PM, Kaz Kylheku wrote:

      From the POV of HHH1(DDD) the call from DD to HHH(DD)
    simply returns.
    Yeah, why does HHH think that it doesn't halt *and then halts*?

    Anyway, HHH1 is not HHH and is therefore superfluous and
    irrelevant.
    DDD correctly simulated by HHH1 is identical to the behavior of the directly executed DDD().
    When we have emulation compared to emulation we are comparing
    Apples to Apples and not Apples to Oranges.

    You have yet to explain the significance of HHH1.
    Just did.
    Why are you comparing at all?

    HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must abort DD itself.
    That's a tautology. HHH only sees the behavior that HHH has seen in order to convince itself that seeing more behavior is not
    necessary.
    Yes?
    HHH has complete proof that DDD correctly simulated by HHH cannot possibly reach its own simulated final halt state.

    So yes. What is the proof?


    It is apparently over everyone's head.

    void Infinite_Recursion()
    {
       Infinite_Recursion();
       return;
    }

    How can a program know with complete certainty that
    Infinite_Recursion() never halts?

    Check...
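    For this one specific shape, a program could "know" it syntactically
    without any simulation at all (a toy sketch; the function name and the
    sscanf-based parsing are illustrative assumptions, not anyone's actual
    decider):

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Toy source-level check: if the first statement in a function body is
     * an unconditional call to the function itself, the function can never
     * halt.  Handles only the exact shape "void name() { name(); ... }". */
    int first_statement_is_self_call(const char *src)
    {
        char name[64], callee[64];
        if (sscanf(src, " void %63[A-Za-z_] ( ) { %63[A-Za-z_] ( ) ;",
                   name, callee) == 2)
            return strcmp(name, callee) == 0;
        return 0;
    }
    ```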


    Ummm... Your Infinite_Recursion example is basically akin to:

    10 PRINT "Halt" : GOTO 10

    Right? It says halt but does not... ;^)


    You aren't this stupid on the other forums



    Well, what are you trying to say here? That the following might halt?

    void Infinite_Recursion()
    {
    Infinite_Recursion();
    return;
    }

    I think not. Blowing the stack is not the same as halting...

    As a matter of principle, it's part of the execution environment,
    just like his partial decider simulating the code.

    In his world, catching it calling itself twice without an intervening decision is grounds to abort.

    In our world, when the stack gets used up, it aborts.

    :^)

    Iirc, SEH can detect the abortion and try to recover, sigh
    (irrecoverable?), but that is beside the point.


    Fair's fair. If one is valid -- and if he's thumping the x86 bible
    saying that the rules of the instruction set are the source of
    truth -- then he can't have an infinite stack.

    lol. Oh yeah.


    (
    The example could be
    loop: goto loop;

    But if I'm not mistaken, a decider can be written for such a trivial example...by parsing the source, not simulating it!
    )


    Exactly. No need to simulate such a simple example...
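    A toy parser-based decider for exactly that pattern might look like this
    (a sketch under the stated assumption that only the literal one-line form
    "name: goto name;" is handled):

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Syntactic check: does the source consist of a label whose statement
     * is a goto back to that same label?  No simulation involved. */
    int is_trivial_self_loop(const char *src)
    {
        char label[64], target[64];
        if (sscanf(src, " %63[A-Za-z_] : goto %63[A-Za-z_] ;",
                   label, target) == 2)
            return strcmp(label, target) == 0;
        return 0;
    }
    ```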
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Wed Sep 17 14:39:04 2025
    From Newsgroup: comp.theory

    On 9/17/2025 6:06 AM, olcott wrote:
    On 9/17/2025 2:48 AM, vallor wrote:
    On Mon, 15 Sep 2025 14:35:18 -0700, "Chris M. Thomasson"
    <chris.m.thomasson.1@gmail.com> wrote in <10aa0qm$26msp$5@dont-email.me>:

    Well, what are you trying to say here? That the following might halt?

    void Infinite_Recursion()
    {
         Infinite_Recursion();
         return;
    }

    I think not. Blowing the stack is not the same as halting...

    As a matter of principle, it's part of the execution environment,
    just like his partial decider simulating the code.

    In his world, catching it calling itself twice without an intervening
    decision is grounds to abort.

    In our world, when the stack gets used up, it aborts.

    Fair's fair.  If one is valid -- and if he's thumping the x86 bible
    saying that the rules of the instruction set are the source of
    truth -- then he can't have an infinite stack.

    (
    The example could be
    loop: goto loop;

    But if I'm not mistaken, a decider can be written for such a trivial
    example...by parsing the source, not simulating it!
    )

    HHH is smart enough to detect infinite loops
    and complex cases of infinite recursion
    involving many functions.


    Show us a really complex function that is hard to detect even with the
    source code? Think if it is using a TRNG? ;^)
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Wed Sep 17 14:55:35 2025
    From Newsgroup: comp.theory

    On 9/17/2025 8:44 AM, olcott wrote:
    [..]
    DD.HHH does not halt.
    You keep trying to get away with the strawman deception.

    Am I lying when I say I think you might be a bit unstable?

  • From olcott@polcott333@gmail.com to comp.theory on Wed Sep 17 20:08:53 2025
    From Newsgroup: comp.theory

    On 9/17/2025 4:55 PM, Chris M. Thomasson wrote:
    On 9/17/2025 8:44 AM, olcott wrote:
    [..]
    DD.HHH does not halt.
    You keep trying to get away with the strawman deception.

    Am I lying when I say I think you might be a bit unstable?


    Ad Hominem, the first resort of pea brains.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer