On Tue, 2025-09-09 at 04:31 +0100, Mike Terry wrote:
On 09/09/2025 02:03, wij wrote:
On Tue, 2025-09-09 at 01:25 +0100, Mike Terry wrote:[..snip..]
There are soooo many other more basic problems to pick on first if we're just refuting PO's
claims - like HHH decides DD never halts, when DD halts.
Mike.
IMO, in the POOH case, since HHH is the halt decider (a TM), reaching the final halt
state should be defined like this:
int main() {
  int final_stat= HHH(DD); // "halt at the final state" when HHH returns here
                            // and the variable final_stat is initialized.
}
Yes. In x86utm world, the equivalent of a TM machine is something like HHH, but x86utm is written
to locate and run a function called "main" which calls HHH, and HHH subsequently returns to main.
So main here isn't part of "TM" (HHH) being executed. It's part of the paraphernalia PO uses to
get the x86utm virtual address space (particularly the stack) into the right state to start the TM
(HHH). And when HHH returns, the "TM machine" ends like you say when it returns its final state.
The rest of main isn't part of the TM.
But it's useful and quite flexible to have main there, and the coding for x86utm is simplified.
There are other ways it might have been done, e.g. eliminating main() and have x86utm itself set
up the stack to run HHH. Or other ways of indicating halting might have been used like calling a
primitive operation with signature "void Halt(int code)". But given we all like "structured
programming" concepts like strictly nested code blocks, returning from HHH seems like the most
natural way to do it.
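(For illustration only, a minimal sketch of what that alternative could look like. The Halt
primitive is hypothetical - x86utm has no such operation - and the stub below simply maps it onto
exit() so the sketch runs.)

#include <stdlib.h>

void Halt(int code) { exit(code); }   /* stand-in for a hypothetical primitive */

void HHH_alt(void (*P)(void))
{
    int decision = 0;
    (void)P;                          /* analysis/simulation of P elided */
    Halt(decision);                   /* the "TM" ends here instead of returning to main */
}

void SomeFunc(void) {}

int main(void)
{
    HHH_alt(SomeFunc);                /* never returns; Halt(0) ends the run */
}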
That also means:
  1. DD halts means:
     int main() {
       DD();
     } // DD halts means the real *TM* DD reaches here
Yes.
  2. The DD 'input' simulated by HHH is not real; it can never really reach
     the real final halt state. It is just data (a string) that is analyzed/decided
     as to whether it reaches its final state.
Well, HHH could analyse some other halting function like say SomeFunc(). It can do its simulation
and simulate right up to SomeFunc's return. HHH will see that, and conclude that SomeFunc() is
halting. That's ok, but simulation is just one kind of analysis a halt decider can perform to
reach its decision. Like you say in this case SomeFunc() is not "real" in the way HHH is - it's
part of the calculation HHH is performing.
HHH can simulate /some/ functions to their final state, but not DD, because of the specific way DD
and HHH are related.
And, so, I think 'pure' function or not (and others) should not be important (so far).
Probably not. (so far).
But... PO wants to argue about his simulations and what they're doing. His x86utm is written so
that the original HHH and all its nested simulations run in the same address space. When a
simulation changes something in memory every simulation and outer HHH has visibility of that
change, at least in principle! (The simulations might try to avoid looking at those changes, but
that takes discipline through coding rules.)
Also PO talks about HHH and DD() which calls HHH because that corresponds to what the Linz proof
does: the Linz proof (using TMs) embeds the functionality of HHH inside DD and the key point is
that HHH inside DD must do exactly what HHH did.
It's easy to make HHH (the decider) and HHH (embedded in DD) behave differently if impure
functions e.g. misusing global variables are allowed: just set a global flag which modifies HHH
behaviour according to how deep it is in the simulation stack. So we could make HHH decide DD's
halting correctly with such "cheating"!
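(To make the kind of cheating just described concrete, here is a minimal sketch. It is not PO's
code: plain execution stands in for x86 emulation, and the function names and decision values are
made up for the example. The only point is that the global depth counter lets the copy of HHH
reached inside the simulation behave differently from the top-level decider.)

#include <stdio.h>

static int depth = 0;                 /* global flag = the impurity */

int HHH(void (*P)(void));

void DDD(void)                        /* PO-style DDD: call HHH, then return */
{
    HHH(DDD);
}

int HHH(void (*P)(void))
{
    if (depth > 0)                    /* the copy inside the "simulation"  */
        return 0;                     /* bails out early: "does not halt"  */
    depth++;
    P();                              /* "simulate" P; its inner HHH sees depth > 0 */
    depth--;
    return 1;                         /* the top-level decider saw P finish */
}

int main(void)
{
    printf("HHH(DDD) = %d\n", HHH(DDD));   /* prints 1 */
    return 0;
}

Because both copies consult the global, the decider and the copy embedded in DDD no longer behave
identically, which is exactly the correspondence with the Linz construction that the coding rules
are meant to protect.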
But with this cheating PO would be breaking the correspondence with the Linz proof, and for such a
cheat there would be no contradiction with that proof! So for PO's argument he needs to follow
coding rules to guarantee no such cheating - and when using one address space for all the code
(decider HHH, and all its nested simulations) those rules mean pure functions or something that
amounts to that.
### That is the point which makes it clearest that HHH/DD need to follow coding rules such as
being pure functions (or something very like this) so that there can be no such cheating. That
way, if PO's HHH/DD pair are correctly coded to reflect the relationship they have in the Linz
proof, and if HHH /still/ correctly decides DD()'s halting status, that would be a problem for
the Linz proof.
Anyway that's why the talk of pure functions comes up - it's not relevant if we simply want to use
x86utm to execute one program in isolation, but PO must follow the code restrictions if he wants
his arguments about HHH and DD to hold. He's made his own life more complicated by his design. If
he had created his simulations each in their own address space the use of global data would not
really matter - it would just be "part of the computation" happening in that address space, and
could not influence other simulation levels in a cheating manner. So there would be no requirement
for "pure" functions to prevent that cheating... (I think! maybe more thought is needed.)
Not sure if that is useful or the sort of response you were looking for. It seems to me that you
were in effect asking "why do people talk about pure functions?"
Mike.
Thanks for the long explanation.
In my view, pure functions (or other rules) may also work, but that is a restricted case of a TM
(restricted by the coding rule).
It seems you are trying to make olcott understand.
I would first make olcott understand what a contradiction is. He cannot even understand what the
proposition X&~X means !!!
I just came up with a solution. POO proof could be defined as:
Let the set S = {f | f is a 'TM' decision function}. Does there exist an H ∈ S such that, for any 'TM' function D (taking no argument), H(D)==1 iff D() halts?
So, the POO proof can be modified this way, avoiding the need to involve tape encoding.
But of course, the detail of POOH is very problematic.
On 9/8/2025 7:25 PM, Mike Terry wrote:[..snip..]
On 08/09/2025 20:07, Kaz Kylheku wrote:
On 2025-09-08, Richard Heathfield <rjh@cpax.org.uk> wrote:
HHH/HHH1 do not use "their own addresses" in Decide_Halting. If they do in your halt7.c, what
date is that from? (My copy is from around July 2024).
February 15, 2025 is the last commit https://github.com/plolcott/x86utm/blob/master/Halt7.c
Of course the execution traces are different
before and after the abort.
HHH and HHH1 have identical source code except
for their name.
The DDD of HHH1(DDD) has identical
behavior to the directly executed DDD().
DDD calls HHH(DDD) in recursive emulation. DDD does
not call HHH1 at all. This is why the behavior
of DDD.HHH1 is different than the behavior of DDD.HHH
_DDD()
[00002183] 55         push ebp
[00002184] 8bec       mov ebp,esp
[00002186] 6883210000 push 00002183 ; push DDD
[0000218b] e833f4ffff call 000015c3 ; call HHH
[00002190] 83c404     add esp,+04
[00002193] 5d         pop ebp
[00002194] c3         ret
Size in bytes:(0018) [00002194]
_main()
[000021a3] 55         push ebp
[000021a4] 8bec       mov ebp,esp
[000021a6] 6883210000 push 00002183 ; push DDD
[000021ab] e843f3ffff call 000014f3 ; call HHH1
[000021b0] 83c404     add esp,+04
[000021b3] 33c0       xor eax,eax
[000021b5] 5d         pop ebp
[000021b6] c3         ret
Size in bytes:(0020) [000021b6]
machine  stack    stack    machine    assembly
address  address  data     code       language
======== ======== ======== ========== =============
[000021a3][0010382d][00000000] 55         push ebp      ; main()
[000021a4][0010382d][00000000] 8bec       mov ebp,esp   ; main()
[000021a6][00103829][00002183] 6883210000 push 00002183 ; push DDD
[000021ab][00103825][000021b0] e843f3ffff call 000014f3 ; call HHH1
New slave_stack at:1038d1
Begin Local Halt Decider Simulation  Execution Trace Stored at:1138d9
[00002183][001138c9][001138cd] 55         push ebp      ; DDD of HHH1
[00002184][001138c9][001138cd] 8bec       mov ebp,esp   ; DDD of HHH1
[00002186][001138c5][00002183] 6883210000 push 00002183 ; push DDD
[0000218b][001138c1][00002190] e833f4ffff call 000015c3 ; call HHH
New slave_stack at:14e2f9
Begin Local Halt Decider Simulation  Execution Trace Stored at:15e301
[00002183][0015e2f1][0015e2f5] 55         push ebp      ; DDD of HHH[0]
[00002184][0015e2f1][0015e2f5] 8bec       mov ebp,esp   ; DDD of HHH[0]
[00002186][0015e2ed][00002183] 6883210000 push 00002183 ; push DDD
[0000218b][0015e2e9][00002190] e833f4ffff call 000015c3 ; call HHH
New slave_stack at:198d21
*This is the beginning of the divergence of the behavior*
*of DDD emulated by HHH versus DDD emulated by HHH1*
[00002183][001a8d19][001a8d1d] 55         push ebp      ; DDD of HHH[1]
[00002184][001a8d19][001a8d1d] 8bec       mov ebp,esp   ; DDD of HHH[1]
[00002186][001a8d15][00002183] 6883210000 push 00002183 ; push DDD
[0000218b][001a8d11][00002190] e833f4ffff call 000015c3 ; call HHH
Local Halt Decider: Infinite Recursion Detected Simulation Stopped
[00002190][001138c9][001138cd] 83c404     add esp,+04   ; DDD of HHH1
[00002193][001138cd][000015a8] 5d         pop ebp       ; DDD of HHH1
[00002194][001138d1][0003a980] c3         ret           ; DDD of HHH1
[000021b0][0010382d][00000000] 83c404     add esp,+04   ; main()
[000021b3][0010382d][00000000] 33c0       xor eax,eax   ; main()
[000021b5][00103831][00000018] 5d         pop ebp       ; main()
[000021b6][00103835][00000000] c3         ret           ; main()
Number of Instructions Executed(352831) == 5266 Pages
void DDD()
{
 HHH(DDD);
 return;
}
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
HHH ONLY sees the behavior of DD *BEFORE* HHH
has aborted DD thus must abort DD itself.
That is why I said it is so important for you to
carefully study this carefully annotated execution
trace instead of continuing to totally ignore it.
You are the one that is backed into a corner here and no amount
of pure bluster will get you out. Failing to provide the requested
steps *is construed as your admission that I am correct*
HHH and HHH1 have identical source code except for their name. The DDD
of HHH1(DDD) has identical behavior to the directly executed DDD().
DDD calls HHH(DDD) in recursive emulation. DDD does not call HHH1 at
all. This is why the behavior of DDD.HHH1 is different than the behavior
of DDD.HHH
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
You mean the outermost simulated HHH, like so (indented by simulation
Of course the execution traces are different before and after the
abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH has aborted DD thus need
not abort DD itself.
HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
abort DD itself.
HHH and HHH1 have identical source code except for their name.
The different behaviour shows they are not equal.
DDD calls HHH(DDD) in recursive emulation. DDD does not call HHH1 at
all. This is why the behavior of DDD.HHH1 is different than the behavior
of DDD.HHH
It shouldn't matter which simulator is simulating.
HHH and HHH1
have identical source code except
for their name.
The DDD of HHH1(DDD) has identical
behavior to the directly executed DDD().
*This is the beginning of the divergence of the behavior*
*of DDD emulated by HHH versus DDD emulated by HHH1*
[00002183][001a8d19][001a8d1d] 55         push ebp      ; DDD of HHH[1]
[00002184][001a8d19][001a8d1d] 8bec       mov ebp,esp   ; DDD of HHH[1]
[00002186][001a8d15][00002183] 6883210000 push 00002183 ; push DDD
[0000218b][001a8d11][00002190] e833f4ffff call 000015c3 ; call HHH
Local Halt Decider: Infinite Recursion Detected Simulation Stopped
[00002190][001138c9][001138cd] 83c404     add esp,+04   ; DDD of HHH1
[00002193][001138cd][000015a8] 5d         pop ebp       ; DDD of HHH1
[00002194][001138d1][0003a980] c3         ret           ; DDD of HHH1
[000021b0][0010382d][00000000] 83c404     add esp,+04   ; main()
[000021b3][0010382d][00000000] 33c0       xor eax,eax   ; main()
[000021b5][00103831][00000018] 5d         pop ebp       ; main()
[000021b6][00103835][00000000] c3         ret           ; main()
Number of Instructions Executed(352831) == 5266 Pages
void DDD()
{
HHH(DDD);
return;
}
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
HHH ONLY sees the behavior of DD *BEFORE* HHH
has aborted DD thus must abort DD itself.
*This is the beginning of the divergence of the behavior*
*of DDD emulated by HHH versus DDD emulated by HHH1*
[00002190][001138c9][001138cd] 83c404     add esp,+04   ; DDD of HHH1
[00002193][001138cd][000015a8] 5d         pop ebp       ; DDD of HHH1
[00002194][001138d1][0003a980] c3         ret           ; DDD of HHH1
DDD calls HHH(DDD) in recursive emulation. DDD does not call HHH1 at
all. This is why the behavior of DDD.HHH1 is different than the behavior
of DDD.HHH
It shouldn't matter which simulator is simulating.
On 2025-09-14, olcott <polcott333@gmail.com> wrote:
void DDD()
{
HHH(DDD);
return;
}
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
HHH ONLY sees the behavior of DD *BEFORE* HHH
has aborted DD thus must abort DD itself.
You are literally in this same posting showing
that HHH and HHH1 trace are identical before the abort;
On 9/14/2025 1:36 PM, Kaz Kylheku wrote:
On 2025-09-14, olcott <polcott333@gmail.com> wrote:
void DDD()
{
   HHH(DDD);
   return;
}
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
HHH ONLY sees the behavior of DD *BEFORE* HHH
has aborted DD thus must abort DD itself.
You are literally in this same posting showing
that HHH and HHH1 trace are identical before the abort;
Counter factual, please go back to the
preceding post and try again. This time
don't erase anything that I said.
void DDD()
{
 HHH(DDD);
 return;
}
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
HHH ONLY sees the behavior of DD *BEFORE* HHH
has aborted DD thus must abort DD itself.
The presence of HHH1 is irrelevant and uninteresting, except insofar
that he's concealing that it returns 1.
Of course the execution traces are different
before and after the abort.
On 15/09/2025 06:58, olcott wrote:
On 9/14/2025 1:36 PM, Kaz Kylheku wrote:
On 2025-09-14, olcott <polcott333@gmail.com> wrote:
void DDD()
{
   HHH(DDD);
   return;
}
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
HHH ONLY sees the behavior of DD *BEFORE* HHH
has aborted DD thus must abort DD itself.
You are literally in this same posting showing
that HHH and HHH1 trace are identical before the abort;
Counter factual, please go back to the
preceding post and try again. This time
don't erase anything that I said.
If it matters that much to your case and you need people to understand
it *so much* that snipping it is a problem, hiding it in 92 lines of
stack trace and other eye-glazing material wasn't smart. Didn't you
recently claim to be clever?
If it's not important enough for you to explain and defend, it's not important enough to survive a snip.
On 14 Sep 2025 at 15:22, olcott wrote:
I provided the full trace so that you could
void DDD()
{
  HHH(DDD);
  return;
}
Exactly! That is the error of HHH!
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
HHH ONLY sees the behavior of DD *BEFORE* HHH
has aborted DD thus must abort DD itself.
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar
that he's concealing that it returns 1.
You get this same information when DDD of HHH1
reaches [00002194].
I try to make it as simple as possible so that you
can keep track of every detail of the execution trace of
DDD correctly simulated by HHH
DDD correctly simulated by HHH1
*So that you can directly see that*
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
HHH ONLY sees the behavior of DD *BEFORE* HHH
has aborted DD thus must abort DD itself.
*When HHH reports on the actual behavior that it*
*actually sees then HHH(DDD)==0 IS CORRECT*
*That is why I said it is so important for you to*
*carefully study this carefully annotated execution*
*trace instead of continuing to totally ignore it*
HHH and HHH1 have identical source code except
for their name.
The DDD of HHH1(DDD) has identical
behavior to the directly executed DDD().
DDD calls HHH(DDD) in recursive emulation. DDD does
not call HHH1 at all.
This is why the behavior
of DDD.HHH1 is different than the behavior of DDD.HHH
If it's not important enough for you to explain and defend, it's
not important enough to survive a snip.
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar
that he's concealing that it returns 1.
You get this same information when DDD of HHH1
reaches [00002194].
I try to make it as simple as possible so that you
can keep track of every detail of the execution trace of
DDD correctly simulated by HHH
DDD correctly simulated by HHH1
*So that you can directly see that*
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
That is obviously false. HHH1(DD) begins simulating
DD from the entry to DD, and creating execution
trace entries, before DD reaches its HHH(DD) call.
You know this, which is why in your execution traces you include this
remark:
*This is the beginning of the divergence of the behavior*
*of DDD emulated by HHH versus DDD emulated by HHH1*
Before the divergence, there are behaviors, and they are not
divergent.
Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
You have yet to explain the significance of HHH1.
HHH ONLY sees the behavior of DD *BEFORE* HHH
has aborted DD thus must abort DD itself.
That's a tautology. HHH only sees the behavior that HHH has seen in
order to convince itself that seeing more behavior is not necessary.
Yes?
*When HHH reports on the actual behavior that it*
*actually sees then HHH(DDD)==0 IS CORRECT*
The simulation which HHH has abandoned can be
continued to show that DDD halts,
which indicates
that the 0 is incorrect.
*That is why I said it is so important for you to*
*carefully study this carefully annotated execution*
*trace instead of continuing to totally ignore it*
Why don't you properly port it to Linux. Modern Linux toolchains do not produce COFF object files, only ELF. There evidently aren't any options
you can supply to get a COFF object file.
You have an ELF-handling class and COFF_handling class, but you
neglected to create an easily switchable abstraction; you have
hard-coded use of the COFF class in a whole bunch of places.
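(For what it's worth, the "easily switchable abstraction" could be as small as a table of function
pointers. The sketch below is only illustrative; the type and function names are made up and are
not x86utm's actual COFF/ELF classes.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical minimal interface that both readers would implement. */
typedef struct {
    const char *name;
    bool (*load)(const char *path);
    uint32_t (*entry_point)(void);
} ObjectReader;

/* Stubs standing in for the real COFF and ELF readers. */
static bool coff_load(const char *path) { printf("COFF load %s\n", path); return true; }
static bool elf_load (const char *path) { printf("ELF load %s\n", path);  return true; }
static uint32_t dummy_entry(void) { return 0x2183; }

static const ObjectReader readers[] = {
    { "coff", coff_load, dummy_entry },
    { "elf",  elf_load,  dummy_entry },
};

int main(void)
{
    const char *format = "elf";        /* chosen by an option or by sniffing the file */
    const char *path   = "Halt7.obj";  /* illustrative file name */

    for (size_t i = 0; i < sizeof readers / sizeof readers[0]; i++)
        if (strcmp(readers[i].name, format) == 0 && readers[i].load(path))
            printf("entry point: %05x\n", (unsigned)readers[i].entry_point());
    return 0;
}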
HHH and HHH1 have identical source code except
for their name.
You keep repeating this without explaining what you
believe to be the significance of it, or why HHH1
has to be present at all.
The DDD of HHH1(DDD) has identical
behavior to the directly executed DDD().
And you know that the directly executed DDD is halting; you have
acknowledged that multiple times and brushed it aside.
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar
that he's concealing that it returns 1.
You get this same information when DDD of HHH1 reaches [00002194].
I try to make it as simple as possible so that you can keep track of
every detail of the execution trace of
DDD correctly simulated by HHH
DDD correctly simulated by HHH1
*So that you can directly see that*
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different before and after the
abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH has aborted DD thus need
not abort DD itself.
That is obviously false. HHH1(DD) begins simulating DD from the entry
to DD, and creating execution trace entries, before DD reaches its
HHH(DD) call.
From the POV of HHH1(DDD) the call from DD to HHH(DD) simply returns.
You know this, which is why in your execution traces you include this
remark:
*This is the beginning of the divergence of the behavior*
*of DDD emulated by HHH versus DDD emulated by HHH1*
Before the divergence, there are behaviors, and they are not divergent.
HHH1 cannot see the behavior of DDD correctly simulated by HHH. HHH1 has
no idea that HHH is identical to itself. HHH1 only sees that when its
DDD calls HHH(DD) that this call returns.
Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
DDD correctly simulated by HHH1 is identical to the behavior of the
directly executed DDD().
When we have emulation compared to emulation we are comparing Apples
to Apples and not Apples to Oranges.
You have yet to explain the significance of HHH1.
Just did.
HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
abort DD itself.
That's a tautology. HHH only sees the behavior that HHH has seen in
order to convince itself that seeing more behavior is not necessary.
Yes?
HHH has complete proof that DDD correctly simulated by HHH cannot
possibly reach its own simulated final halt state.
That you do not understand this complete proof is no actual rebuttal what-so-ever.
You are a very smart guy. The way around this is for you to figure out
on your own how HHH would correctly determine that:
void Infinite_Recursion()
{
Infinite_Recursion();
return;
}
Would never halt.
Exactly what execution trace details would HHH need to see to correctly conclude beyond all possible doubt that Infinite_Recursion() never
halts?
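(One possible criterion, offered only as an illustration: if the trace shows the same function
being entered again from the same call site with no conditional branch executed in between,
nothing can change on the next cycle, so the recursion cannot end. A toy checker over a list of
trace records follows; the struct and field names are invented for this sketch and are not
x86utm's.)

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned call_site;    /* address of the CALL instruction */
    unsigned call_target;  /* address being called */
    bool     branch_since; /* conditional branch executed since the previous call? */
} TraceRec;

bool LooksLikeInfiniteRecursion(const TraceRec *t, int n)
{
    for (int i = 1; i < n; i++)
        if (t[i].call_site   == t[i - 1].call_site &&
            t[i].call_target == t[i - 1].call_target &&
            !t[i].branch_since)
            return true;
    return false;
}

int main(void)
{
    /* Infinite_Recursion() calling itself from the same site, twice: */
    TraceRec trace[] = {
        { 0x2000, 0x1FF0, false },
        { 0x2000, 0x1FF0, false },
    };
    printf("%s\n", LooksLikeInfiniteRecursion(trace, 2) ? "never halts" : "no verdict");
    return 0;
}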
*When HHH reports on the actual behavior that it* *actually sees then
HHH(DDD)==0 IS CORRECT*
The simulation which HHH has abandoned can be continued to show that
DDD halts,
Try and trace through every single detail of every single step to show
this.
which indicates that the 0 is incorrect.
*That is why I said it is so important for you to* *carefully study
this carefully annotated execution* *trace instead of continuing to
totally ignore it*
Why don't you properly port it to Linux. Modern Linux toolchains do not
produce COFF object files, only ELF. There evidently aren't any
options you can supply to get a COFF object file.
I came from Linux and was ported to Windows.
I started writing an ELF version of this
https://github.com/plolcott/x86utm/blob/master/include/Read_COFF_Object.h
and determined that it was not worth the effort to finish it.
You have an ELF-handling class and COFF_handling class, but you
neglected to create an easily switchable abstraction; you have
hard-coded use of the COFF class in a whole bunch of places.
HHH and HHH1 have identical source code except for their name.
You keep repeating this without explaining what you believe to be the
significance of it, or why HHH1 has to be present at all.
I explained the crucial importance of this many hundreds of times over
several years and not one single person ever paid any attention at all
to any of the details.
*IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
*IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
*IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
*IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
*IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
*IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
*IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
*IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
*IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
*IT IS THE REASON WHY HHH(DDD)==0 IS CORRECT*
People are so damned sure that I am wrong that whenever I explain the
details of the proof that I am correct they never ever hear anything
beside blah, blah, blah...
The DDD of HHH1(DDD) has identical behavior to the directly executed
DDD().
And you know that the directly executed DDD is halting; you have
acknowledged that multiple times and brushed it aside.
THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES THE ACTUAL
BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR
THAT THE ACTUAL INPUT ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR THAT THE
ACTUAL INPUT ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR THAT THE ACTUAL
INPUT ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT
ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR
THAT THE ACTUAL INPUT ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR THAT THE
ACTUAL INPUT ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR THAT THE ACTUAL
INPUT ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT
ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES THE ACTUAL BEHAVIOR THAT THE ACTUAL INPUT ACTUALLY SPECIFIES
I can give you 100 megabytes of that if it will help you see that I ever
said it at least once.
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
The DDD of HHH1(DDD) has identical
behavior to the directly executed DDD().
And you know that the directly executed DDD is halting; you have
acknowledged that multiple times and brushed it aside.
You know that when HHH1(DDD) executes its RET instruction,
the return register EAX is nonzero.
DDD calls HHH(DDD) in recursive emulation. DDD does
not call HHH1 at all.
That's why HHH1 has a shot at deciding DDD correctly
by returning nonzero. HHH1 is not embroiled in the diagonal pair
HHH/DDD.
This is why the behavior
of DDD.HHH1 is different than the behavior of DDD.HHH
There is only one DDD with one behavior. If something else is observed, something is seriously wrong. Luckily, something is not seriously wrong
(in that regard).
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
Yeah, why does HHH think that it doesn't halt *and then halts*?
Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
DDD correctly simulated by HHH1 is identical to the behavior of the
directly executed DDD().
When we have emulation compared to emulation we are comparing Apples to
Apples and not Apples to Oranges.
Why are you comparing at all?
You have yet to explain the significance of HHH1.
Just did.
HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
abort DD itself.
That's a tautology. HHH only sees the behavior that HHH has seen in
order to convince itself that seeing more behavior is not necessary.
Yes?
HHH has complete proof that DDD correctly simulated by HHH cannot
possibly reach its own simulated final halt state.
So yes. What is the proof?
Exactly what execution trace details would HHH need to see to correctly
conclude beyond all possible doubt that Infinite_Recursion() never
halts?
It would need to see the whole input, if that includes itself.
*When HHH reports on the actual behavior that it* *actually sees then
HHH(DDD)==0 IS CORRECT*
The same tautology still applies to my null simulator.
The simulation which HHH has abandoned can be continued to show that
DDD halts, which indicates that the 0 is incorrect.
Try and trace through every single detail of every single step to show
this.
Why don't you show how DDD doesn't halt?
HHH and HHH1 have identical source code except for their name.
You keep repeating this without explaining what you believe to be the
significance of it, or why HHH1 has to be present at all.
I explained the crucial importance of this many hundreds of times over
several years and not one single person ever paid any attention at all
to any of the details.
Only because you spam. So what is the reason?
People are so damned sure that I am wrong that whenever I explain the
details of the proof that I am correct they never ever hear anything
beside blah, blah, blah...
TBF you don't explain anything else.
I can give you 100 megabytes of that if it will help you see that I ever
said it at least once.
It wouldn't help. It's not a matter of noticing.
On 15/09/2025 18:27, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar
that he's concealing that it returns 1.
You get this same information when DDD of HHH1
reaches [00002194].
I try to make it as simple as possible so that you
can keep track of every detail of the execution trace of
DDD correctly simulated by HHH
DDD correctly simulated by HHH1
*So that you can directly see that*
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
That is obviously false. HHH1(DD) begins simulating
DD from the entry to DD, and creating execution
trace entries, before DD reaches its HHH(DD) call.
You know this, which is why in your execution traces you include this
remark:
  *This is the beginning of the divergence of the behavior*
  *of DDD emulated by HHH versus DDD emulated by HHH1*
Before the divergence, there are behaviors, and they are not
divergent.
Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
You have yet to explain the significance of HHH1.
HHH ONLY sees the behavior of DD *BEFORE* HHH
has aborted DD thus must abort DD itself.
That's a tautology. HHH only sees the behavior that HHH has seen in
order to convince itself that seeing more behavior is not necessary.
Yes?
*When HHH reports on the actual behavior that it*
*actually sees then HHH(DDD)==0 IS CORRECT*
The simulation which HHH has abandoned can be
continued to show that DDD halts, which indicates
that the 0 is incorrect.
*That is why I said it is so important for you to*
*carefully study this carefully annotated execution*
*trace instead of continuing to totally ignore it*
Why don't you properly port it to Linux. Modern Linux toolchains do not
produce COFF object files, only ELF. There evidently aren't any options
you can supply to get a COFF object file.
hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname
option. I don't have objcopy to test, but PO only needs a few basic
COFF capabilities, so it might be enough...
 <https://man7.org/linux/man-pages/man1/objcopy.1.html>
Mike.
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
Yeah, why does HHH think that it doesn't halt *and then halts*?
Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
DDD correctly simulated by HHH1 is identical to the behavior of the
directly executed DDD().
When we have emulation compared to emulation we are comparing Apples to
Apples and not Apples to Oranges.
Why are you comparing at all?
You have yet to explain the significance of HHH1.
Just did.
HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
abort DD itself.
That's a tautology. HHH only sees the behavior that HHH has seen in
order to convince itself that seeing more behavior is not necessary.
Yes?
HHH has complete proof that DDD correctly simulated by HHH cannot
possibly reach its own simulated final halt state.
So yes. What is the proof?
It is apparently over everyone's head.
void Infinite_Recursion()
{
 Infinite_Recursion();
 return;
}
How can a program know with complete
certainty that Infinite_Recursion()
never halts?
On 9/15/2025 2:26 PM, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
Yeah, why does HHH think that it doesn't halt *and then halts*?
Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
DDD correctly simulated by HHH1 is identical to the behavior of the
directly executed DDD().
When we have emulation compared to emulation we are comparing Apples to
Apples and not Apples to Oranges.
Why are you comparing at all?
You have yet to explain the significance of HHH1.
Just did.
HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
abort DD itself.
That's a tautology. HHH only sees the behavior that HHH has seen in
order to convince itself that seeing more behavior is not necessary.
Yes?
HHH has complete proof that DDD correctly simulated by HHH cannot
possibly reach its own simulated final halt state.
So yes. What is the proof?
It is apparently over everyone's head.
void Infinite_Recursion()
{
  Infinite_Recursion();
  return;
}
How can a program know with complete
certainty that Infinite_Recursion()
never halts?
Ummm... Your Infinite_Recursion example is basically akin to:
10 PRINT "Halt" : GOTO 10
Right? It says halt but does not... ;^)
On 9/15/2025 4:30 PM, Chris M. Thomasson wrote:
On 9/15/2025 2:26 PM, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
Yeah, why does HHH think that it doesn't halt *and then halts*?
Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
DDD correctly simulated by HHH1 is identical to the behavior of the
directly executed DDD().
When we have emulation compared to emulation we are comparing Apples to
Apples and not Apples to Oranges.
Why are you comparing at all?
You have yet to explain the significance of HHH1.
Just did.
HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must
abort DD itself.
That's a tautology. HHH only sees the behavior that HHH has seen in
order to convince itself that seeing more behavior is not necessary.
Yes?
HHH has complete proof that DDD correctly simulated by HHH cannot
possibly reach its own simulated final halt state.
So yes. What is the proof?
It is apparently over everyone's head.
void Infinite_Recursion()
{
  Infinite_Recursion();
  return;
}
How can a program know with complete
certainty that Infinite_Recursion()
never halts?
Ummm... Your Infinite_Recursion example is basically akin to:
10 PRINT "Halt" : GOTO 10
Right? It says halt but does not... ;^)
You aren't this stupid on the other forums
On 15/09/2025 18:27, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
Why don't you properly port it to Linux. Modern Linux toolchains do not
produce COFF object files, only ELF. There evidently aren't any options
you can supply to get a COFF object file.
hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option. I don't have objcopy to
It must be fully integrated into my code. https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h
I just never got around to finishing that.
https://github.com/plolcott/x86utm/blob/master/include/Read_COFF_Object.h
is the one that I use.
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar
that he's concealing that it returns 1.
You get this same information when DDD of HHH1
reaches [00002194].
I try to make it as simple as possible so that you
can keep track of every detail of the execution trace of
DDD correctly simulated by HHH
DDD correctly simulated by HHH1
*So that you can directly see that*
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
That is obviously false. HHH1(DD) begins simulating
DD from the entry to DD, and creating execution
trace entries, before DD reaches its HHH(DD) call.
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
But that's already true from the POV of the native x86
processor, if we run DD() out of main.
If DD is correctly a pure computation/TM, DD.exe halts.
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts. DDD.HHH cannot possibly reach
its final halt state.
On 9/15/2025 5:09 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
But that's already true from the POV of the native x86
processor, if we run DD() out of main.
If DD is correctly a pure computation/TM, DD.exe halts.
DD.HHH1 halts
DDD.HHH cannot possibly reach its own final halt state.
On 2025-09-15, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
On 15/09/2025 18:27, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
Why don't you properly port it to Linux. Modern Linux toolchains do not
produce COFF object files, only ELF. There evidently aren't any options
you can supply to get a COFF object file.
hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option. I don't have objcopy to
I tried that a bunch of weeks ago. It's not there.
Basically COFF is gone. It appears to be thoroughly deprecated and not present on platforms that don't need it.
It seems it would make sense for Olcott's code not to rely on these
formats, and just use dlopen()/dlsym() or LoadLibrary() and
GetProcAddress, with his build system making a .so or .dll.
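(A bare-bones sketch of that route, assuming halt7.c were built into a shared object called
halt7.so; the file name and the idea of calling HHH directly are assumptions made for the example,
and on Linux it would be compiled with -ldl.)

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *h = dlopen("./halt7.so", RTLD_NOW);        /* illustrative library name */
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Resolve the symbols at run time instead of parsing a COFF/ELF image. */
    int (*HHH)(void (*)(void)) = (int (*)(void (*)(void)))dlsym(h, "HHH");
    void (*DDD)(void)          = (void (*)(void))dlsym(h, "DDD");
    if (HHH && DDD)
        printf("HHH(DDD) = %d\n", HHH(DDD));

    dlclose(h);
    return 0;
}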
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
And yet it does.
On 15/09/2025 22:57, Kaz Kylheku wrote:
On 2025-09-15, Mike Terry
<news.dead.person.stones@darjeeling.plus.com> wrote:
On 15/09/2025 18:27, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
Why don't you properly port it to Linux. Modern Linux toolchains do not
produce COFF object files, only ELF. There evidently aren't any options
you can supply to get a COFF object file.
hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname
option. I don't have objcopy to
I tried that a bunch of weeks ago. It's not there.
Basically COFF is gone. It appears to be thoroughly deprecated and not
present on platforms that don't needed.
It seems it would make sense for Olcott's code not to rely on these
formats, and just use dlopen()/dlsym() or LoadLibrary() and
GetProcAddress, with his build system making a .so or .dll.
The problem I see is PO needs to single step his simulations. Under Windows there are Debug system APIs, but that doesn't sound straight
forward at all (They enable one process to debug a second process,
setting breakpoints, inspecting memory and so on. You rather need a
good understanding of what you're doing! I don't know what's available
for unix world. To me this sounds much harder than adding ELF support...) Probably you had a different plan, I'm just guessing.
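(For the unix side: Linux does offer single-stepping of another process via ptrace(2). A
bare-bones sketch of the mechanism, purely to show what the OS provides; it is not how x86utm
does its single-stepping.)

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {                              /* tracee */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execl("/bin/true", "true", (char *)NULL);
        _exit(1);
    }
    int status;
    waitpid(child, &status, 0);                    /* stopped at exec */
    long steps = 0;
    while (WIFSTOPPED(status)) {
        if (ptrace(PTRACE_SINGLESTEP, child, NULL, NULL) == -1)
            break;
        waitpid(child, &status, 0);
        steps++;
    }
    printf("child executed %ld single-stepped instructions\n", steps);
    return 0;
}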
Hmm, I imagine you could get the x86utm program built on your platform
if you had to. Maybe there's some server somewhere, where you could
send halt7.c for MSVC compilation? Probably that would run into
licensing issues or something, although there's a free MSVC compiler
(for Windows users)...
Mike.
On 15/09/2025 22:57, Kaz Kylheku wrote:
On 2025-09-15, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
On 15/09/2025 18:27, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
Why don't you properly port it to Linux. Modern Linux toolchains do not >>>> produce COFF object files, only ELF. There evidently aren't any options >>>> you can supply to get a COFF object file.
hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option. I don't have objcopy to
I tried that a bunch of weeks ago. It's not there.
Basically COFF is gone. It appears to be thoroughly deprecated and not
present on platforms that don't needed.
It seems it would make sense for Olcott's code not to rely on these
formats, and just use dlopen()/dlsym() or LoadLibrary() and
GetProcAddress, with his build system making a .so or .dll.
The problem I see is PO needs to single step his simulations. Under Windows there are Debug system
APIs, but that doesn't sound straightforward at all. (They enable one process to debug a second
process, setting breakpoints, inspecting memory and so on. You rather need a good understanding of
what you're doing! I don't know what's available in the unix world. To me this sounds much harder
than adding ELF support...) Probably you had a different plan, I'm just guessing.
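For the unix world, the rough counterpart would presumably be ptrace(2): a tracer forks the target and single-steps it. A minimal Linux/x86-64 sketch (error handling omitted; "./DD" is just a stand-in target, not a real file from PO's repo):

#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);  /* child asks to be traced        */
        execl("./DD", "DD", (char *)NULL);      /* stand-in for the test program  */
        _exit(127);
    }
    int status;
    waitpid(child, &status, 0);                 /* child stops at the exec        */
    while (WIFSTOPPED(status)) {
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, NULL, &regs);
        printf("%llx\n", (unsigned long long)regs.rip);   /* one line per step    */
        if (ptrace(PTRACE_SINGLESTEP, child, NULL, NULL) == -1)
            break;
        waitpid(child, &status, 0);
    }
    return 0;
}

So it is doable, but it is a lot more machinery (a separate process and address space) than simply finishing the ELF reader that x86utm already has a stub for.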
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
x86utm works just fine under Linux.
https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h would need to be completed using
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
x86utm works just fine under Linux.
Present tense!
https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h
would need to be completed using
Subjunctive!
Story of your life, eh! Claiming that stuff works now but, oh, if
so and so would be implemented.
Oh, I disproved the Halting Theorem beyond a doubt ... except I would
have to make sure my HHH is a pure function, and this and that ...
On 9/15/2025 9:11 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
x86utm works just fine under Linux.
Present tense!
https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h
would need to be completed using
Subjunctive!
Story of your life, eh! Claiming that stuff works now but, oh, if
so and so would be implemented.
Oh, I disproved the Halting Theorem beyond a doubt ... except I would
have to make sure my HHH is a pure function, and this and that ...
I ran x86utm under Linux with COFF object file input.
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
On 9/15/2025 9:11 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
x86utm works just fine under Linux.
Present tense!
https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h
would need to be completed using
Subjunctive!
Story of your life, eh! Claiming that stuff works now but, oh, if
so and so would be implemented.
Oh, I disproved the Halting Theorem beyond a doubt ... except I would
have to make sure my HHH is a pure function, and this and that ...
I ran x86utm under Linux with COFF object file input.
The problem is that toolchains on modern Linux do not produce COFF
object files, even as an option. The support is removed.
Did you copy them over from your Windows installation?
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
On 9/15/2025 9:11 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
x86utm works just fine under Linux.
Present tense!
https://github.com/plolcott/x86utm/blob/master/include/Read_ELF_Object.h
would need to be completed using
Subjunctive!
Story of your life, eh! Claiming that stuff works now but, oh, if
so and so would be implemented.
Oh, I disproved the Halting Theorem beyond a doubt ... except I would
have to make sure my HHH is a pure function, and this and that ...
I ran x86utm under Linux with COFF object file input.
The problem is that toolchains on modern Linux do not produce COFF
object files, even as an option. The support is removed.
Did you copy them over from your Windows installation?
On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
On 15/09/2025 22:57, Kaz Kylheku wrote:
On 2025-09-15, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
On 15/09/2025 18:27, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
Why don't you properly port it to Linux. Modern Linux toolchains do not produce COFF object files, only ELF. There evidently aren't any options you can supply to get a COFF object file.
hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option. I don't have objcopy to
I tried that a bunch of weeks ago. It's not there.
Basically COFF is gone. It appears to be thoroughly deprecated and not
present on platforms that don't need it.
It seems it would make sense for Olcott's code not to rely on these
formats, and just use dlopen()/dlsym() or LoadLibrary() and
GetProcAddress, with his build system making a .so or .dll.
The problem I see is PO needs to single step his simulations. Under Windows there are Debug system
APIs, but that doesn't sound straightforward at all. (They enable one process to debug a second
process, setting breakpoints, inspecting memory and so on. You rather need a good understanding of
what you're doing! I don't know what's available in the unix world. To me this sounds much harder
than adding ELF support...) Probably you had a different plan, I'm just guessing.
I mean, he could just bootstrap into a simulation inside main:

void sim_main(void)
{
    if (HHH(DD)) ... etc      /* the whole test case runs under software simulation */
}

int main(void)
{
    Simulate(sim_main);       /* Simulate steps sim_main one instruction at a time  */
}
Then his own Debug_Step would be stepping everything in software; no
need to mess around with the host system's access to processor single stepping. Plus he could record one of his famous execution traces for
the entire test case starting at sim_main.
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
On 16/09/2025 03:37, olcott wrote:
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? And so DD.exe (I think I've seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you don't mean that (?)
Mike.
That's what happens now in effect - x86utm.exe log starts tracing at the halt7.c main() function.
On 9/15/2025 4:01 PM, joes wrote:
That is what I ask you! Among all the other questions above.
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
Why are you comparing at all?
Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
DDD correctly simulated by HHH1 is identical to the behavior of the
directly executed DDD().
When we have emulation compared to emulation we are comparing Apples
to Apples and not Apples to Oranges.
You have yet to explain the significance of HHH1.
Just did.
HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus
must abort DD itself.
That's a tautology. HHH only sees the behavior that HHH has seen in
order to convince itself that seeing more behavior is not necessary.
Yes?
HHH has complete proof that DDD correctly simulated by HHH cannot
possibly reach its own simulated final halt state.
So yes. What is the proof?
How can a program know with complete certainty that Infinite_Recursion() never halts?
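The function being asked about is presumably the usual toy case, something like this (a sketch; the real halt7.c version may differ):

void Infinite_Recursion(void)
{
  Infinite_Recursion();  /* unconditional self-call: between one entry and the
                            next there is no conditional branch and no input
                            that changes, so no path ever reaches the return */
}

The informal rule a simulating analyzer applies is exactly the comment above: when the simulated trace re-enters the same address with nothing having changed in between, no amount of further simulation can reach a final state. Whether that rule carries over to DD, where a copy of HHH itself sits between the two entries, is the point everyone is disputing.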
On 9/15/2025 8:53 PM, Richard Heathfield wrote:
On 16/09/2025 00:52, olcott wrote:
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
And yet it does.
DD.HHH cannot possibly reach its own final halt state.
On 9/15/2025 2:48 AM, Fred. Zwarts wrote:
Op 14.sep.2025 om 15:22 schreef olcott:
void DDD()
{
  HHH(DDD);
  return;
}
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
HHH ONLY sees the behavior of DD *BEFORE* HHH
has aborted DD thus must abort DD itself.
Exactly! That is the error of HHH!
I provided the full trace so that you could
see that it is not any error. That you ignored
this is not any sort of actual rebuttal.
On 9/15/2025 8:53 PM, Richard Heathfield wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
And yet it does.
DD.HHH cannot possibly reach its own final halt state.
On 9/15/2025 10:52 PM, Mike Terry wrote:
Exactly. Do you get it? HHH1 is able to reach the final halt state, but HHH
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? And
so DD.exe (I think I've seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which
calls HHH1", but it seems you don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state.
It is clear that you have no counter-argument. You close your
eyes for these facts and pretend that they do not exist, because
they disturb your dreams. That is your attitude.
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar that he's concealing that it returns 1.
You get this same information when DDD of HHH1
reaches [00002194].
I try to make it as simple as possible so that you
can keep track of every detail of the execution trace of
DDD correctly simulated by HHH
DDD correctly simulated by HHH1
*So that you can directly see that*
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
That is obviously false. HHH1(DD) begins simulating
DD from the entry to DD, and creating execution
trace entries, before DD reaches its HHH(DD) call.
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
But that's already true from the POV of the native x86
processor, if we run DD() out of main.
If DD is correctly a pure computation/TM, that must be true regardless
of the context in which HHH(DD) is invoked; HHH(DD)
returns, period.
So again, what is the point of introducing HHH1.
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? And so DD.exe (I think I've seen
that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you
don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state.
On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
That's what happens now in effect - x86utm.exe log starts tracing at the halt7.c main() function.
Then why does he keep claiming that main() is "native"? Nothing is
"native" in the loaded COFF test case, then.
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? And
so DD.exe (I think I've seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which
calls HHH1", but it seems you don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state.
I didn't ask that.
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? And so DD.exe (I think I've
seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you
don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state.
I didn't ask that.
All of the details of the trace that you erased
answer every possible question that you could
possibly have about these things. My answer directed
you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that
another different decider could correctly
decide this same input because it does not
form the diagonal case.
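In code terms the two cases are roughly this (a hedged sketch using the thread's names; the bodies of HHH and HHH1 are assumed to be simulating termination analyzers and are not shown):

int HHH(int (*p)(void));    /* the analyzer DD was constructed against        */
int HHH1(int (*p)(void));   /* a second analyzer; DD never calls this one     */

int DD(void)
{
  int halts = HHH(DD);      /* DD consults HHH about itself...                */
  if (halts)                /* ...and then does the opposite of HHH's verdict */
    for (;;) { }
  return 0;
}

DD is defined in terms of HHH and only HHH, which is what makes HHH(DD) the diagonal case; HHH1(DD) is given the same input but is not the analyzer that input was built to contradict, which is why nobody disputes that some other decider can get DD right.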
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1?
And so DD.exe (I think I've seen that) would mean DD directly
executed?
The alternative (which is what I would have guessed) is "DD which
calls HHH1", but it seems you don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state.
I didn't ask that.
All of the details of the trace that you erased
answer every possibly question that you could
possibly have about these things. My answer directed
you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that
another different decider could correctly
decide this same input because it does not
form the diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
On 9/16/2025 1:15 PM, Mike Terry wrote:
You could have just said "yes" the first time.
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
On 9/15/2025 9:10 PM, Mike Terry wrote:
The only relevant cases now are DD.HHH where DD is simulated by HHH AKA
All of the details of the trace that you erased answer every possibly
DD simulated by HHH1 has the same behavior as DD().
I guess I missed an earlier post.
*DDD of HHH1 versus DDD of HHH see below*
What do DD.HHH1 and DDD.HHH mean?
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1? You mean DD simulated by HHH1? And
so DD.exe (I think I've seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which
calls HHH1", but it seems you don't mean that (?)
DD simulated by HHH cannot possibly reach its final halt state.
I didn't ask that.
question that you could possibly have about these things. My answer
directed you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that another different decider
could correctly decide this same input because it does not form the
diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
the conventional diagonal relationship and DD.HHH1 where the same input
does not form the diagonal case thus is conventionally decidable.
This means that it has always been common knowledge that the behavior of
DD with HHH(DD) is different than the behavior of DD with HHH1(DD) yet
everyone here disagrees because they value disagreement over truth.
Nobody says that HHH and HHH1 are the same. But they should.
Am Tue, 16 Sep 2025 13:29:28 -0500 schrieb olcott:
On 9/16/2025 1:15 PM, Mike Terry wrote:
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
On 9/15/2025 9:10 PM, Mike Terry wrote:
You could have just said "yes" the first time.
The only relevant cases now are DD.HHH where DD is simulated by HHH AKA
All of the details of the trace that you erased answer every possibly
DD simulated by HHH1 has the same behavior as DD().
I guess I missed an earlier post.
*DDD of HHH1 versus DDD of HHH see below*
What do DD.HHH1 and DDD.HHH mean?
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1? You mean DD simulated by HHH1? And
so DD.exe (I think I've seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which
calls HHH1", but it seems you don't mean that (?)
DD simulated by HHH cannot possibly reach its final halt state.
I didn't ask that.
question that you could possibly have about these things. My answer
directed you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that another different decider
could correctly decide this same input because it does not form the
diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
the conventional diagonal relationship and DD.HHH1 where the same input
does not form the diagonal case thus is conventionally decidable.
This means that it has always been common knowledge that the behavior of
DD with HHH(DD) is different than the behavior of DD with HHH1(DD) yet
everyone here disagrees because they value disagreement over truth.
Nobody says that HHH and HHH1 are the same. But they should.
On 15/09/2025 23:09, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar that he's concealing that it returns 1.
You get this same information when DDD of HHH1
reaches [00002194].
I try to make it as simple as possible so that you
can keep track of every detail of the execution trace of
DDD correctly simulated by HHH
DDD correctly simulated by HHH1
*So that you can directly see that*
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
That is obviously false. HHH1(DD) begins simulating
DD from the entry to DD, and creating execution
trace entries, before DD reaches its HHH(DD) call.
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
But that's already true from the POV of the native x86
processor, if we run DD() out of main.
If DD is correctly a pure computation/TM, that must be true regardless
of the context in which HHH(DD) is invoked; HHH(DD)
returns, period.
So again, what is the point of introducing HHH1.
My suggestion: PO has seen DD() halt [when called from main()]. He has traces of DD halting! He /wants/ to say that when HHH simulates DD,
DD's "behaviour" is different from the behaviour of DD called from main().
But everyone else understands DD does what DD does, or at least assuming we're talking pure functions. The idea that a simulation goes different ways depending on who's performing the simulation is plain silly - one
of PO's fantasies invented to try to maintain even sillier beliefs
elsewhere [like that DD doesn't /really/ halt when it obviously does].
His HHH1 supports PO's narrative, in PO's mind - here we (supposedly)
have two identical simulators simulating the same DD, but they behave differently. That can only be explained (in PO's mind) by invoking
magical thinking about "pathological self reference".
If PO were to acknowledge that all simulators just step along the "One
True Path" of the computation step by step, up to the point they give
up, he would lose his argument that HHH /simulating/ DD "sees" different halting behaviour from DD /directly executed/. Then his whole Linz
proof argument would be seen by all as nonsense. [I have no doubt he
could think up some other explanation which is even less logical if he needed to, in order to maintain his delusional framework, so there's no danger here of "sending PO over the edge". Such people will always "do whatever it takes" to maintain their delusions.]
Mike.
On 9/16/2025 1:15 PM, Mike Terry wrote:
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by
HHH1? And so DD.exe (I think I've seen that) would mean DD
directly executed?
The alternative (which is what I would have guessed) is "DD
which calls HHH1", but it seems you don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state.
I didn't ask that.
All of the details of the trace that you erased
answer every possibly question that you could
possibly have about these things. My answer directed
you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that
another different decider could correctly
decide this same input because it does not
form the diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
The only relevant cases now are DD.HHH where DD
is simulated by HHH AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
It enrages me that people insist that I must be
wrong
It seems ridiculously dumb that you can not see
that the diagonal case presented to a simulating
termination analyzer:
(1) Bypasses the "do the opposite" code as unreachable.
On 9/16/2025 2:09 PM, joes wrote:
Am Tue, 16 Sep 2025 13:29:28 -0500 schrieb olcott:
On 9/16/2025 1:15 PM, Mike Terry wrote:
You could have just said "yes" the first time.
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
On 9/15/2025 9:10 PM, Mike Terry wrote:
The only relevant cases now are DD.HHH where DD is simulated by HHH AKA
All of the details of the trace that you erased answer every possibly
question that you could possibly have about these things. My answer
DD simulated by HHH1 has the same behavior as DD().
I guess I missed an earlier post.
*DDD of HHH1 versus DDD of HHH see below*
What do DD.HHH1 and DDD.HHH mean?
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1? You mean DD simulated by HHH1? And
so DD.exe (I think I've seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which
calls HHH1", but it seems you don't mean that (?)
DD simulated by HHH cannot possibly reach its final halt state.
I didn't ask that.
directed you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that another different decider
could correctly decide this same input because it does not form the
diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
the conventional diagonal relationship and DD.HHH1 where the same input
does not form the diagonal case thus is conventionally decidable.
It enrages me that people insist that I must be
wrong and they do this entirely on the basis of
refusing to pay enough attention.
On 16/09/2025 06:50, Kaz Kylheku wrote:
On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
I genuinely can't see why you would be asking that question, so I'm missing something.
That's what happens now in effect - x86utm.exe log starts tracing at the halt7.c main() function.
Then why does he keep claiming that main() is "native"? Nothing is
"native" in the loaded COFF test case, then.
If you're just making the point that /all/ the code in halt7.c is "executed" within PO's x86utm,
that's perfectly correct. With the possible exception of main(), all the code in halt7.c is "TM
code" or simulations made by that TM code.
The TM code is "directly executed" [that's just what the
phrase means in x86utm context] and code it simulates using DebugStep() is "simulated".
On 9/16/2025 10:33 AM, Mike Terry wrote:
On 15/09/2025 23:09, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar that he's concealing that it returns 1.
You get this same information when DDD of HHH1
reaches [00002194].
I try to make it as simple as possible so that you
can keep track of every detail of the execution trace of
DDD correctly simulated by HHH
DDD correctly simulated by HHH1
*So that you can directly see that*
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
That is obviously false. HHH1(DD) begins simulating
DD from the entry to DD, and creating execution
trace entries, before DD reaches its HHH(DD) call.
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
But that's already true from the POV of the native x86
processor, if we run DD() out of main.
If DD is correctly a pure computation/TM, that must be true regardless
of the context in which HHH(DD) is invoked; HHH(DD)
returns, period.
So again, what is the point of introducing HHH1.
My suggestion: PO has seen DD() halt [when called from main()]. He has
traces of DD halting! He /wants/ to say that when HHH simulates DD,
DD's "behaviour" is different from the behaviour of DD called from main().
But everyone else understands DD does what DD does, or at least assuming
we're talking pure functions. The idea that a simulation goes different
ways depending on who's performing the simulation is plain silly - one
of PO's fantasies invented to try to maintain even sillier beliefs
elsewhere [like that DD doesn't /really/ halt when it obviously does].
His HHH1 supports PO's narrative, in PO's mind - here we (supposedly)
have two identical simulators simulating the same DD, but they behave
differently. That can only be explained (in PO's mind) by invoking
magical thinking about "pathological self reference".
If PO were to acknowledge that all simulators just step along the "One
True Path" of the computation step by step, up to the point they give
up, he would lose his argument that HHH /simulating/ DD "sees" different
halting behaviour from DD /directly executed/. Then his whole Linz
proof argument would be seen by all as nonsense. [I have no doubt he
could think up some other explanation which is even less logical if he
needed to, in order to maintain his delusional framework, so there's no
danger here of "sending PO over the edge". Such people will always "do
whatever it takes" to maintain their delusions.]
Mike.
It seems ridiculously dumb that you can not see
that the diagonal case presented to a simulating
termination analyzer:
(1) Bypasses the "do the opposite" code as unreachable.
(2) Causes the simulating termination analyzer to continue
to be called in recursive simulation that:
(a) Cannot possibly stop running unless aborted.
(b) Cannot possibly reach its own simulated final
halt state whether aborted at some point or not.
On 9/16/2025 1:15 PM, Mike Terry wrote:
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? And so DD.exe (I think I've
seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems
you don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state.
I didn't ask that.
All of the details of the trace that you erased
answer every possibly question that you could
possibly have about these things. My answer directed
you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that
another different decider could correctly
decide this same input because it does not
form the diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
The only relevant cases now are DD.HHH where DD
is simulated by HHH
AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
On 2025-09-16 13:19, olcott wrote:
On 9/16/2025 2:09 PM, joes wrote:
Am Tue, 16 Sep 2025 13:29:28 -0500 schrieb olcott:
On 9/16/2025 1:15 PM, Mike Terry wrote:
You could have just said "yes" the first time.
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
On 9/15/2025 9:10 PM, Mike Terry wrote:
The only relevant cases now are DD.HHH where DD is simulated by HHH AKA
the conventional diagonal relationship and DD.HHH1 where the same input
does not form the diagonal case thus is conventionally decidable.
All of the details of the trace that you erased answer every possibly
question that you could possibly have about these things. My answer
directed you to the trace.
I didn't ask that.
DD simulated by HHH1 has the same behavior as DD().
I guess I missed an earlier post.
*DDD of HHH1 versus DDD of HHH see below*
What do DD.HHH1 and DDD.HHH mean?
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1? You mean DD simulated by HHH1? And
so DD.exe (I think I've seen that) would mean DD directly
executed?
The alternative (which is what I would have guessed) is "DD which
calls HHH1", but it seems you don't mean that (?)
DD simulated by HHH cannot possibly reach its final halt state.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that another different decider
could correctly decide this same input because it does not form the
diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
It enrages me that people insist that I must be
wrong and they do this entirely on the basis of
refusing to pay enough attention.
Chill, dude...
Mike Terry never said anything about you being right or wrong. He merely asked you to clarify your dot notation...
André
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 10:33 AM, Mike Terry wrote:
On 15/09/2025 23:09, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
The presence of HHH1 is irrelevant and uninteresting, except insofar that he's concealing that it returns 1.
You get this same information when DDD of HHH1
reaches [00002194].
I try to make it as simple as possible so that you
can keep track of every detail of the execution trace of
DDD correctly simulated by HHH
DDD correctly simulated by HHH1
*So that you can directly see that*
On 9/12/2025 3:01 PM, Kaz Kylheku wrote:
Of course the execution traces are different
before and after the abort.
HHH1 ONLY sees the behavior of DD *AFTER* HHH
has aborted DD thus need not abort DD itself.
That is obviously false. HHH1(DD) begins simulating
DD from the entry to DD, and creating execution
trace entries, before DD reaches its HHH(DD) call.
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
But that's already true from the POV of the native x86
processor, if we run DD() out of main.
If DD is correctly a pure computation/TM, that must be true regardless
of the context in which HHH(DD) is invoked; HHH(DD)
returns, period.
So again, what is the point of introducing HHH1.
My suggestion: PO has seen DD() halt [when called from main()]. He has
traces of DD halting! He /wants/ to say that when HHH simulates DD,
DD's "behaviour" is different from the behaviour of DD called from main().
But everyone else understands DD does what DD does, or at least assuming
we're talking pure functions. The idea that a simulation goes different
ways depending on who's performing the simulation is plain silly - one
of PO's fantasies invented to try to maintain even sillier beliefs
elsewhere [like that DD doesn't /really/ halt when it obviously does].
His HHH1 supports PO's narrative, in PO's mind - here we (supposedly)
have two identical simulators simulating the same DD, but they behave
differently. That can only be explained (in PO's mind) by invoking
magical thinking about "pathological self reference".
If PO were to acknowledge that all simulators just step along the "One
True Path" of the computation step by step, up to the point they give
up, he would lose his argument that HHH /simulating/ DD "sees" different
halting behaviour from DD /directly executed/. Then his whole Linz
proof argument would be seen by all as nonsense. [I have no doubt he
could think up some other explanation which is even less logical if he
needed to, in order to maintain his delusional framework, so there's no
danger here of "sending PO over the edge". Such people will always "do
whatever it takes" to maintain their delusions.]
Mike.
It seems ridiculously dumb that you can not see
that the diagonal case presented to a simulating
termination analyzer:
(1) Bypasses the "do the opposite" code as unreachable.
Rather, it is ridiculously dumb that you cannot see that
this bypass doesn't occur, in the diagonal test case, when
the decider returns a "does not halt" decision (HHH(DD) -> 0).
The only relevant cases now are DD.HHH where DD
is simulated by HHH AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
It enrages me that people insist that I must be
wrong and they do this entirely on the basis of
refusing to pay enough attention.
On 16/09/2025 19:29, olcott wrote:
On 9/16/2025 1:15 PM, Mike Terry wrote:
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? And so DD.exe (I think I've
seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems
you don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state.
I didn't ask that.
All of the details of the trace that you erased
answer every possibly question that you could
possibly have about these things. My answer directed
you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that
another different decider could correctly
decide this same input because it does not
form the diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
The only relevant cases now are DD.HHH where DD
is simulated by HHH AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
They weren't hard questions really, but it took Olcott 69 words not to answer them.
Rather, it is ridiculously dumb that you cannot see that
this bypass doesn't occur, in the diagonal test case, when
the decider returns a "does not halt" decision (HHH(DD) -> 0).
That is not the actual behavior specified by the actual input to HHH(DD)
On 16/09/2025 19:29, olcott wrote:
On 9/16/2025 1:15 PM, Mike Terry wrote:
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? >>>>>>> And so DD.exe (I think I've seen that) would mean DD directly
executed?
The alternative (which is what I would have guessed) is "DD which >>>>>>> calls HHH1", but it seems you don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state.
I didn't ask that.
All of the details of the trace that you erased
answer every possibly question that you could
possibly have about these things. My answer directed
you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that
another different decider could correctly
decide this same input because it does not
form the diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
The only relevant cases now are DD.HHH where DD
is simulated by HHH
aha! So that's a "yes" to the first question.
AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
and I guess I'll just have to accept that as a "yes" for the second
question
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
and no answer to the third question.
On 9/16/2025 5:10 PM, Mike Terry wrote:
On 16/09/2025 19:29, olcott wrote:
aha! So that's a "yes" to the first question.
AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
and I guess I'll just have to accept that as a "yes" for the second
question
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
and no answer to the third question.
This is a matter of life and death of the planet
and stopping the rise of the fourth Reich.
When truth becomes computable then the liars
cannot get away with their lies.
On 9/16/2025 5:10 PM, Mike Terry wrote:
On 16/09/2025 19:29, olcott wrote:
On 9/16/2025 1:15 PM, Mike Terry wrote:
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? And so DD.exe (I think I've
seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems
you don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state.
I didn't ask that.
All of the details of the trace that you erased
answer every possibly question that you could
possibly have about these things. My answer directed
you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that
another different decider could correctly
decide this same input because it does not
form the diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
The only relevant cases now are DD.HHH where DD
is simulated by HHH
aha! So that's a "yes" to the first question.
AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
and I guess I'll just have to accept that as a "yes" for the second question
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
and no answer to the third question.
This is a matter of life and death of the planet
and stopping the rise of the fourth Reich.
When truth becomes computable then the liars
cannot get away with their lies.
On 2025-09-16 16:25, olcott wrote:
On 9/16/2025 5:10 PM, Mike Terry wrote:
On 16/09/2025 19:29, olcott wrote:
aha! So that's a "yes" to the first question.
AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
and I guess I'll just have to accept that as a "yes" for the second
question
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
and no answer to the third question.
This is a matter of life and death of the planet
and stopping the rise of the fourth Reich.
When truth becomes computable then the liars
cannot get away with their lies.
If it's so important, then why do you go to such lengths to avoid answering
fairly straightforward questions about your claims (like what your dot notation means)?
André
On 16/09/2025 23:25, olcott wrote:
On 9/16/2025 5:10 PM, Mike Terry wrote:
On 16/09/2025 19:29, olcott wrote:
On 9/16/2025 1:15 PM, Mike Terry wrote:
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
I didn't ask that.
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DD.HHH1 halts DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? >>>>>>>>> And so DD.exe (I think I've seen that) would mean DD directly >>>>>>>>> executed?
The alternative (which is what I would have guessed) is "DD >>>>>>>>> which calls HHH1", but it seems you don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state. >>>>>>>
All of the details of the trace that you erased
answer every possibly question that you could
possibly have about these things. My answer directed
you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that
another different decider could correctly
decide this same input because it does not
form the diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
The only relevant cases now are DD.HHH where DD
is simulated by HHH
aha! So that's a "yes" to the first question.
AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
and I guess I'll just have to accept that as a "yes" for the second
question
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
and no answer to the third question.
This is a matter of life and death of the planet
and stopping the rise of the fourth Reich.
When truth becomes computable then the liars
cannot get away with their lies.
Dude! Nothing going on here has even the remotest of effects on such
world events.
Mike.
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
Rather, it is ridiculously dumb that you cannot see that
this bypass doesn't occur, in the diagonal test case, when
the decider returns a "does not halt" decision (HHH(DD) -> 0).
That is not the actual behavior specified by the actual input to HHH(DD)
Speaking of which, you have been dodging the question of specifying
what the "actual input" to HHH comprises in the expression HHH(DD).
Can you list the piece or pieces of material that you believe are part
of the input, omitting anything that is not part of the input?
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
It enrages me that people insist that I must be
wrong and they do this entirely on the basis of
refusing to pay enough attention.
The problem is that your goalposts for "enough attention" are for
people to see things which are not there.
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
The only relevant cases now are DD.HHH where DD
is simulated by HHH AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
DD is always the diagonal case, targeting HHH.
It can be decided by something that is not HHH.
You need not be proving this, if your aim is to
topple the halting theorem.
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
The only relevant cases now are DD.HHH where DD
is simulated by HHH AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
DD is always the diagonal case, targeting HHH.
It can be decided by something that is not HHH.
On 9/16/2025 5:53 PM, Mike Terry wrote:
On 16/09/2025 23:25, olcott wrote:
On 9/16/2025 5:10 PM, Mike Terry wrote:
On 16/09/2025 19:29, olcott wrote:
On 9/16/2025 1:15 PM, Mike Terry wrote:
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
I didn't ask that.
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
On Mon, 15 Sep 2025 13:19:26 -0500, olcott wrote:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD) simply returns.
DD.HHH1 halts. DDD.HHH cannot possibly reach
its final halt state.
I guess I missed an earlier post.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? And so DD.exe (I think I've seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state.
All of the details of the trace that you erased
answer every possible question that you could
possibly have about these things. My answer directed
you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that
another different decider could correctly
decide this same input because it does not
form the diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
The only relevant cases now are DD.HHH where DD
is simulated by HHH
aha! So that's a "yes" to the first question.
AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
and I guess I'll just have to accept that as a "yes" for the second
question
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
and no answer to the third question.
This is a matter of life and death of the planet
and stopping the rise of the fourth Reich.
When truth becomes computable then the liars
cannot get away with their lies.
Dude! Nothing going on here has even the remotest of effects on such
world events. Mike.
So you have not bothered to notice that Trump is
exactly copying Hitler?
If it was not for the brave soul of the Senate
parliamentarian cancelling the king maker paragraph
of Trump's Big Bullshit Bill the USA would already
be more than halfway to the dictatorship power of
Nazi Germany.
Truth can be computable !!!
On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
On 16/09/2025 06:50, Kaz Kylheku wrote:
On 2025-09-16, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
That's what happens now in effect - x86utm.exe log starts tracing at the halt7.c main() function.
Then why does he keep claiming that main() is "native"? Nothing is
"native" in the loaded COFF test case, then.
I genuinely can't see why you would be asking that question, so I'm missing something.
Just I thought that the host executable just branches into the loaded module's main(). (Which would be a sensible thing to do; there is no
need to simulate anything outside of the halting decider, such as HHH.)
If you're just making the point that /all/ the code in halt7.c is "executed" within PO's x86utm,
that's perfectly correct. With the possible exception of main(), all the code in halt7.c is "TM
code" or simulations made by that TM code.
Is there a possible exception? I'm looking at the code now and it looks
like the simulation from the entry point into the loaded file is unconditional; it doesn't appear to be an option to branch to it
natively.
The TM code is "directly executed" [that's just what the
phrase means in x86utm context] and code it simulates using DebugStep() is "simulated".
That distinction makes no sense, like a lot of things from P. O.
I was tripped up thinking that directly executed means using the host processor.
"Directly Executed" should be equivalent to a wrapper which calls
DebugStep, except that if we open-code the DebugStep loop, we can insert halting criteria, and trace recording and whatnot.
On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
Rather, it is ridiculously dumb that you cannot see that
this bypass doesn't occur, in the diagonal test case, when
the decider returns a "does not halt" decision (HHH(DD) -> 0).
That is not the actual behavior specified by the actual input to HHH(DD)
Speaking of which, you have been dodging the question of specifying
what the "actual input" to HHH comprises in the expression HHH(DD).
The termination analyzer HHH is only examining whether
or not the finite string of x86 machine code ending
in the C3 byte "ret" instruction can be reached by the
behavior specified by this finite string.
This behavior does include that DD calls HHH(DD)
in recursive simulation. DD is the program under
test and HHH is the test program.
You understood that each decider can have an input
defined to "do the opposite" of whatever this decider
decides thwarting the correct decision for this
decider/input pair.
You also understand that another different decider
can correctly decide this same input.
You seem to get totally confused when these are
made specific by HHH/DD and HHH1/DD.
If you think that it is impossible for DD to have
different behavior between these two cases then how
is it that one is conventionally undecidable and
the other is decidable?
On 16/09/2025 22:54, Kaz Kylheku wrote:
On 2025-09-16, Mike Terry
<news.dead.person.stones@darjeeling.plus.com> wrote:
On 16/09/2025 06:50, Kaz Kylheku wrote:
On 2025-09-16, Mike Terry
<news.dead.person.stones@darjeeling.plus.com> wrote:
That's what happens now in effect - x86utm.exe log starts tracing
at the halt7.c main() function.
Then why does he keep claiming that main() is "native"? Nothing is
"native" in the loaded COFF test case, then.
I genuinely can't see why you would be asking that question, so I'm
missing something.
Just I thought that the host executable just branches into the loaded
module's main(). (Which would be a sensible thing to do; there is no
need to simulate anything outside of the halting decider, such as HHH.)
I see - no, x86utm parses input file halt7.obj to grab what it needs regarding data/code sectors, symbol definitions (function names and locations), relocation fixups table, then uses that to initialise a libx86emu "virtual address space". In effect it performs its own equivalent of LoadLibrary() within that virtual address space, loading
the module starting at low memory address 0x00000000. halt7.obj is
never linked to form an OS executable. (I suppose it could be, perhaps with minor changes...)
I'm not sure x86utm directly calling halt7.c's main() would be a good design, while it remains the case that simulation performed by HHH is
within the libx86emu virtual machine. It could be done that way, but
then code like HHH would be running in two completely separate
environments: Windows/Unix host, and libx86emu virtual address space.
They must be exactly the same, and it's a good thing if they're easily
seen and verified as being the same. That happens if they're both performed by x86utm code via libx86emu, and also it means x86utm's log
shows both in the same format.
Also ISTM the hosting environment should be logically divorced from the halt7.c code as far as possible. E.g. let's imagine x86utm is doing its stuff, but then it turns out the x86utm address space (which is 32-bit,
like the libx86emu virtual address space) starts requiring bigger tables
and allocations for running multiple libx86emu virtual address spaces or whatever, and a resource limit is encountered. We get past that by
making x86utm.exe 64-bit. That should all be routine (expecting the
usual 32- to 64-bit code porting issues...) Problem gone! x86utm.exe
now has 64-bits of address space (or close after OS is taken out) and libx86emu still creates its 32-bit virtual machines to run halt7.c
code. (You can see where this leads!) But now your design of x86utm directly calling main() and hence HHH() means HHH has to be both
64-bit (for x86utm to directly call) and 32-bit (to run under
libx86emu). Or perhaps I just want to run PO's code on some RISC architecture, not x86 at all - I can compile C++ code (x86utm) to run on that RISC CPU, but halt7.c absolutely must be x86 code...
Alternatively, x86utm could be designed so that halt7.c's main() is invoked exactly like any other simulation started e.g. by HHH within halt7.c.
That would need some Simulate() function to drive the DebugStep() loop,
and where would that be? If in halt7.c, what simulates Simulate()? Or
it could be hard-coded into x86utm since it never changes. Dunno......
If you're just making the point that /all/ the code in halt7.c is
"executed" within PO's x86utm,
that's perfectly correct. With the possible exception of main(), all
the code in halt7.c is "TM
code" or simulations made by that TM code.
Is there a possible exception? I'm looking at the code now and it looks
like the simulation from the entry point into the loaded file is
unconditional; it doesn't appear to be an option to branch to it
natively.
I'm not sure what you're referring to.
You're looking at x86utm code or halt7.c code?
The latter is never linked to an executable, so it can /only/ be
executed within x86utm via libx86emu virtual x86 machine.
x86utm.exe code runs under the hosting OS, reads and "loads" the
halt7.obj code into the libx86emu VM, then runs its own loop in [x86emu.cpp]Halts() which calls Execute_Instruction() until
[halt7.c]main() returns. HHH code in halt7.c makes occasional
DebugStep() calls to step its simulation, and DebugStep transfers into x86utm's [x86emu.cpp]DebugStep() which in turn calls
Execute_Instruction() to step HHH's simulation.
x86utm stack at that point will have:
Execute_Instruction()    // simulated instruction from halt7.c
DebugStep()              // ooh! a nested simulation being stepped!
                         // has called back to x86utm DebugStep
Execute_Instruction()    // simulated instruction from halt7.c
DebugStep()              // instruction was a DebugStep in halt7.c which
                         // has called back to x86utm DebugStep
Execute_Instruction()    // 1 instruction from halt7.c
Halts()                  // x86utm loop simulating [halt7.c]main()
..
main()
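Loosely, the driving loop just described might be sketched as follows; the names Halts, Execute_Instruction and DebugStep come from the description above, while main_has_returned is a hypothetical helper standing in for however x86utm actually detects that [halt7.c]main() has returned:

/* Sketch of the x86utm.exe side, under the assumptions stated above.   */
/* The real functions carry VM state, registers and trace buffers.      */

void Execute_Instruction(void);  /* steps the libx86emu VM one instruction */
int  main_has_returned(void);    /* hypothetical: true once [halt7.c]main()
                                    has returned */

void Halts(void)                 /* outer loop: "direct execution" of the
                                    halt7.c code in x86utm terms */
{
  while (!main_has_returned())
    Execute_Instruction();
}

void DebugStep(void)             /* entered when halt7.c code asks to step
                                    one instruction of its own simulation */
{
  Execute_Instruction();         /* nested simulations just re-enter the
                                    same single-step primitive */
}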
The TM code is "directly executed" [that's just what the
phrase means in x86utm context] and code it simulates using
DebugStep() is "simulated".
That distinction makes no sense, like a lot of things from P. O.
I was tripped up thinking that directly executed means using the host
processor.
Not sure who coined the term. PO had shown HHH(DD), where HHH decides
DD never halts. Posters wanted to point out that whatever HHH decides,
it needs to match up to [what DD actually does] but what is the phrase
for that? PO tries to only discuss "DD *simulated by HHH*" so in
contrast posters came up with "DD *run natively*" or "DD *executed
directly* (from main)" etc.. to contrast with HHH's simulations. What phrase would you use?
x86utm architecture and hosting OS's (Windows/Unix) is really orthogonal
to all this.
"Directly Executed" should be equivalent to a wrapper which calls
DebugStep, except that if we open-code the DebugStep loop, we can insert
halting criteria, and trace recording and whatnot.
I think people discussing that might refer to a UTM here, e.g. UTM(DD),
where UTM would be a function in halt7.c that simulates until
completion. In TM world, UTM(DD) is still a TM UTM simulating DD, which
is conceptually different from what I would think of as DD "directly
executed" (which is just the TM DD!). But PO doesn't grok TMs and
computations, always thinking instead of actual computers loading and
running "computer programs" (aka TM-description strings).
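A rough sketch of that contrast, a closed stepping loop versus the same loop open-coded so criteria and tracing can be inserted; SimState and the DebugStep signature here are assumptions for illustration, not the actual halt7.c declarations:

typedef struct SimState SimState;   /* opaque stand-in for simulated state */
extern int DebugStep(SimState *s);  /* assumed shape: advances one
                                       instruction, returns nonzero until
                                       the simulated code has returned */

void RunToCompletion(SimState *s)   /* "directly executed": nothing is
                                       inspected along the way */
{
  while (DebugStep(s))
    ;
}

int SimulateWithCriteria(SimState *s)  /* a simulating decider open-codes
                                          the same loop... */
{
  while (DebugStep(s)) {
    /* ...so it can record a trace entry here and test its halting
       criteria, aborting the simulation if a criterion matches */
  }
  return 1;                            /* simulated code reached its return */
}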
Also if we have 10 posters posting here, we'll have 10 slightly
different terminology uses + PO's understanding.... :)
Anyhow in x86utm world as-is, we can put messages into [halt7.c]main(). Halting criteria naturally (ISTM) go in [halt7.c]HHH. Like in the HP,
if H is a TM halt decider, the halting criteria it applies are in H, not some meta-level simulator running the TM H. (There is no such thing. H itself does not need criteria to be aborted or "halt-decided", it's just
a "native" TM, so to speak.)
Mike.
On 9/16/2025 5:17 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
It enrages me that people insist that I must be
wrong and they do this entirely on the basis of
refusing to pay enough attention.
The problem is that your goalposts for "enough attention" are for
people to see things which are not there.
Enough attention is (for example) 100% totally understanding
every single detail of the execution trace of DDD
simulated by HHH1 that includes DDD correctly simulated
by HHH.
I proved that these traces do not diverge at the exact
same point that HHH aborts FIFTEEN TIMES NOW and still
ZERO PEOPLE HAVE NOTICED.
On 9/16/2025 5:14 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
The only relevant cases now are DD.HHH where DD
is simulated by HHH AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
DD is always the diagonal case, targeting HHH.
It can be decided by something that is not HHH.
Thus proving that the exact same DD directly
causes TWO DIFFERENT BEHAVIORS !!!
On 9/16/2025 5:14 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
The only relevant cases now are DD.HHH where DD
is simulated by HHH AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
DD is always the diagonal case, targeting HHH.
It can be decided by something that is not HHH.
You need not be proving this, if your aim is to
topple the halting theorem.
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
You are still trying to cheat to show that DD emulated
by HHH according to the semantics of the x86 language
reaches its final halt state by doing all kinds of things
that are not pure simulation.
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
Speaking of which, you have been dodging the question of specifyingRather, it is ridiculously dumb that you cannot see that
this bypass doesn't occur, in the diagonal test case, when
the decider returns a "does not halt" decision (HHH(DD) -> 0).
That is not the actual behavior specified by the actual input to HHH(DD)
what the "actual input" to HHH comprises in the expression HHH(DD).
The termination analyzer HHH is only examining whether
or not the finite string of x86 machine code ending
in the C3 byte "ret" instruction can be reached by the
behavior specified by this finite string.
Are you referring to the sequence of instructions comprising
the procedure DD?
This behavior does include that DD calls HHH(DD)
in recursive simulation. DD is the program under
test and HHH is the test program.
The finite string of x86 machine code instructions
does include the instruction "CALL HHH, DD".
But are you not aware that the entire HHH routine is also part of
the input?
Anything that is called by a piece of the input must be included in the
bill of materials that comprise the input.
You understood that each decider can have an input
defined to "do the opposite" of whatever this decider
decides thwarting the correct decision for this
decider/input pair.
But in order to do that, the input must be understood to be
carrying a copy of that decider.
Or possibly, not an exact copy.
The input can be carrying an /equivalent algorithm/.
You also understand that another different decider
can correctly decide this same input.
Thanks to me, several others, and years of patient effort,
you also now understand that, which is great.
You seem to get totally confused when these are
made specific by HHH/DD and HHH1/DD.
If you think that it is impossible for DD to have
different behavior between these two cases then how
is it that one is conventionally undecidable and
the other is decidable?
What is "undecidable" is universal halting;
it is an undecidable problem
meaning that we don't have a terminating algorithm that will give an
answer for every possible input.
That's what the word "undecidable" means.
The specific test case DD is decidable. For the set of computations
consisting of { DD }, we /can/ have an algorithm which decides that
entire set { DD }, if it is not required to decide anything else.
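To make that concrete: given that the directly executed DD() halts (HHH(DD) returning 0 sends DD down its non-looping branch), a decider whose whole domain is the single computation { DD } can be as trivial as this sketch, with ptr as a hypothetical typedef:

typedef int (*ptr)(void);   /* hypothetical typedef for illustration */

int DecidesJustDD(ptr P)    /* only required to be right about DD;
                               it decides nothing else */
{
  (void)P;                  /* the single-element domain makes the
                               argument irrelevant */
  return 1;                 /* "halts" -- correct for DD, since DD()
                               terminates when HHH(DD) returns 0 */
}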
The relationship between HHH and DD isn't that DD is "undecidable" to
HHH, but that HHH /doesn't/ decide DD (either by not terminating or
returning the wrong value). This is by design; DD is built on HHH and
designed such that HHH(DD) is incorrect, if HHH(DD) terminates.
HHH(DD) disqualifying itself by not terminating is entirely the fault of
the designer of HHH.
HHH(DD) being wrong when it does terminate is brought about by the
designer of DD. That designer always has the last word since HHH
is a building block of DD, not the other way around.
What's different between two deciders like HHH and HHH1 is
their /analysis of DD/.
Analysis of DD is not the /behavior/ of DD!
You have chosen simulation as the key part of your analysis.
Simulation follows the behavior of its target,
so that its
structure resembles behavior. That's where you are getting
confused. Analysis of a computation isn't its behavior,
even if it involves detailed tracing.
Only the /complete/ and /correct/ simulation of a terminating
calculation can be de facto regarded as a bona fide representation of
its behavior, and discussed as if it were its behavior.
Any simulation that falls short of this is just an incomplete and/or incorrect analysis, and not a description of the subject's
behavior.
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 5:14 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
The only relevant cases now are DD.HHH where DD
is simulated by HHH AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
DD is always the diagonal case, targeting HHH.
It can be decided by something that is not HHH.
Thus proving that the exact same DD directly
causes TWO DIFFERENT BEHAVIORS !!!
The same angle theta=0 causes different behaviors in sin(theta) and cos(theta).
sin(0) says 0; cos(0) says 1 (halts).
Does that mean that the 0 in sin(0) and the 0 in cos(0) are different
angles?
The different behaviors are just the different analyses performed by the different decider functions, of the same input.
/being analyzed/ is not a behavior of DD, even if it's
done with simulation.
Just like having its cosine taken isn't the behavior of an angle.
When DD is terminating, and it is being analyzed such that a complete, correct simulation of it is performed, only then does /having been
analyzed by simulation/ coincide with DD's behavior.
Analysis by simulation is tantalizingly close to behavior.
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 5:17 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
It enrages me that people insist that I must be
wrong and they do this entirely on the basis of
refusing to pay enough attention.
The problem is that your goalposts for "enough attention" are for
people to see things which are not there.
Enough attention is (for example) 100% totally understanding
every single detail of the execution trace of DDD
simulated by HHH1 that includes DDD correctly simulated
by HHH.
But you don't have that understanding yourself. You don't have the
insight to see that the abandoned simulation of DD, left behind by HHH,
is in a state that could be stepped further with DebugStep and that
doing so will bring it to termination.
I proved that these traces do not diverge at the exact
same point that HHH aborts FIFTEEN TIMES NOW and still
ZERO PEOPLE HAVE NOTICED.
Exactly! Now you are getting it. You have two simulations of the same calculations. They do not diverge.
Then, one is abandoned. But that abandonment doesn't make them
diverge!
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 5:14 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
The only relevant cases now are DD.HHH where DD
is simulated by HHH AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
DD is always the diagonal case, targeting HHH.
It can be decided by something that is not HHH.
You need not be proving this, if your aim is to
topple the halting theorem.
This means that it has always been common knowledge
that the behavior of DD with HHH(DD) is different
than the behavior of DD with HHH1(DD) yet everyone
here disagrees because they value disagreement over
truth.
You are still trying to cheat to show that DD emulated
by HHH according to the semantics of the x86 language
reaches its final halt state by doing all kinds of things
that are not pure simulation.
You seem to be arguing with yourself.
The only words above which are not yours (but are rather mine) are:
"DD is always the diagonal case, targeting HHH.
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 5:14 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
The only relevant cases now are DD.HHH where DD
is simulated by HHH AKA the conventional diagonal
relationship and DD.HHH1 where the same input does
not form the diagonal case thus is conventionally
decidable.
DD is always the diagonal case, targeting HHH.
It can be decided by something that is not HHH.
Thus proving that the exact same DD directly
causes TWO DIFFERENT BEHAVIORS !!!
Analysis by simulation is tantalizingly close to behavior.
int sum(int x, int y){ return x + y; }
sum(5,6) does not specify the sum of 7 + 8
even if everyone in the universe disagrees.
On 9/16/2025 5:50 PM, André G. Isaak wrote:
On 2025-09-16 16:25, olcott wrote:
This is a matter of life and death of the planet
and stopping the rise of the fourth Reich.
When truth becomes computable then the liars
cannot get away with their lies.
If it's so important, then why do you go to such lengths to not
answer fairly straightforward questions about your claims (like
what your dot notation means)?
Because NOT doing this encourages you to go
back and more carefully study the details
On 17/09/2025 01:01, olcott wrote:
On 9/16/2025 5:50 PM, André G. Isaak wrote:
On 2025-09-16 16:25, olcott wrote:
<snip>
This is a matter of life and death of the planet
and stopping the rise of the fourth Reich.
When truth becomes computable then the liars
cannot get away with their lies.
If it's so important, then why do you go to such lengths to not answer
fairly straightforward questions about your claims (like what your dot
notation means)?
Because NOT doing this encourages you to go
back and more carefully study the details
No, it doesn't. It just makes us think that you daren't answer the
questions in a lucid, straightforward manner because, if you do, your
mental model will all fall apart as you see that there's nothing behind it.
On 9/16/2025 8:02 PM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
Rather, it is ridiculously dumb that you cannot see that
this bypass doesn't occur, in the diagonal test case, when
the decider returns a "does not halt" decision (HHH(DD) -> 0).
That is not the actual behavior specified by the actual input to HHH(DD)
Speaking of which, you have been dodging the question of specifying
what the "actual input" to HHH comprises in the expression HHH(DD).
The termination analyzer HHH is only examining whether
or not the finite string of x86 machine code ending
in the C3 byte "ret" instruction can be reached by the
behavior specified by this finite string.
Are you referring to the sequence of instructions comprising
the procedure DD?
The C function DD.
This behavior does include that DD calls HHH(DD)
in recursive simulation. DD is the program under
test and HHH is the test program.
The finite string of x86 machine code instructions
does include the instruction "CALL HHH, DD".
Yes and the behavior of HHH simulating an instance of itself
simulating an instance of DD is simulated.
But are you not aware that the entire HHH routine is also part of
the input?
It is only a part of the input in that the HHH
must see the results of simulating an instance
of itself simulating an instance of DD to determine
whether or not this simulated DD can possibly reach
its own simulated final halt state. Other than
that it is not part of the input.
None of my three theory of computation textbooks
seemed to mention this at all. Where did you get
it from?
You seem to get totally confused when these are
made specific by HHH/DD and HHH1/DD.
If you think that it is impossible for DD to have
different behavior between these two cases then how
is it that one is conventionally undecidable and
the other is decidable?
What is "undecidable" is universal halting;
No, No, No, No, No, No, No, No, No, No.
That is only shown indirectly by the fact
that the conventional notion of H/D pairs
H is forced to get the wrong answer.
it is an undecidable problem meaning that we don't have a terminating
meaning that we don't have a terminating algorithm that will give an
answer for every possible input.
That's what the word "undecidable" means.
The same general notion is (perhaps unconventionally)
applied to the specific H/D pair where H is understood
to be forced to get the wrong answer.
I don't think there is a conventional term for the H
of the H/D pair that is forced to get the wrong answer.
The relationship between HHH and DD isn't that DD is "undecidable" to
HHH, but that HHH /doesn't/ decide DD (either by not terminating or
returning the wrong value). This is by design; DD is built on HHH and
designed such that HHH(DD) is incorrect, if HHH(DD) terminates.
So what conventional term do we have for the undecidability
of a single H/D pair? "H forced to get the wrong answer" seems too clumsy.
HHH(DD) disqualifying itself by not terminating is entirely the fault of
the designer of HHH.
Termination analyzers need not be pure functions.
Any simulation that falls short of this is just an incomplete and/or
incorrect analysis, and not a description of the subject's
behavior.
It follows the semantics specified by the input finite string.
Analysis by simulation is tantalizingly close to behavior.
DD simulated by HHH according to the semantics of
the x86 language IS 100% EXACTLY AND PRECISELY THE
DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option.
I don't have objcopy to test, but PO only needs a few basic COFF capabilities, so it might be enough...
<https://man7.org/linux/man-pages/man1/objcopy.1.html>
Mike.
On 9/16/2025 7:56 PM, Mike Terry wrote:
On 16/09/2025 22:54, Kaz Kylheku wrote:
On 2025-09-16, Mike Terry
<news.dead.person.stones@darjeeling.plus.com> wrote:
On 16/09/2025 06:50, Kaz Kylheku wrote:
On 2025-09-16, Mike Terry
<news.dead.person.stones@darjeeling.plus.com> wrote:
That's what happens now in effect - x86utm.exe log starts tracing
at the halt7.c main() function.
Then why does he keep claiming that main() is "native"? Nothing is
"native" in the loaded COFF test case, then.
I genuinely can't see why you would be asking that question, so I'm
missing something.
Just I thought that the host executable just branches into the loaded
module's main(). (Which would be a sensible thing to do; there is no
need to simulate anything outside of the halting decider, such as HHH.)
I see - no, x86utm parses input file halt7.obj to grab what it needs
regarding data/code sectors, symbol definitions (function names and
locations), relocation fixups table, then uses that to initialise a
libx86emu "virtual address space". In effect it performs its own
equivalent of LoadLibrary() within that virtual address space, loading
the module starting at low memory address 0x00000000. halt7.obj is
never linked to form an OS executable. (I suppose it could be, perhaps
with minor changes...)
Yes and I did that all myself from scratch.
https://github.com/plolcott/x86utm/blob/master/include/Read_COFF_Object.h
Does most of this work.
The author of libx86emu had to upgrade it from 16-bit to 32-bit for me.
I'm not sure x86utm directly calling halt7.c's main() would be a good
design, while it remains the case that simulation performed by HHH is
within the libx86emu virtual machine. It could be done that way, but
then code like HHH would be running in two completely separate
environments: Windows/Unix host, and libx86emu virtual address space.
They must be exactly the same, and it's a good thing if they're easily
seen and verified as being the same. That happens if they're both
performed by x86utm code via libx86emu, and also it means x86utm's log
shows both in the same format.
Also ISTM the hosting environment should be logically divorced from the
halt7.c code as far as possible. E.g. let's imagine x86utm is doing
its stuff, but then it turns out the x86utm address space (which is
32-bit,
like the libx86emu virtual address space) starts requiring bigger
tables and allocations for running multiple libx86emu virtual address
spaces or whatever, and a resource limit is encountered. We get past
that by making x86utm.exe 64-bit. That should all be routine
(expecting the usual 32- to 64-bit code porting issues...) Problem
gone! x86utm.exe now has 64-bits of address space (or close after OS
is taken out) and libx86emu still creates its 32-bit virtual machines
to run halt7.c code. (You can see where this leads!) But now your
design of x86utm directly calling main() and hence HHH() means HHH
has to be both 64-bit (for x86utm to directly call) and 32-bit (to run
under libx86emu). Or perhaps I just want to run PO's code on some RISC
architecture, not x86 at all - I can compile C++ code (x86utm) to run
on that RISC CPU, but halt7.c absolutely must be x86 code...
No need for 64-bit.
That is all in x86utm.cpp.
Alternatively, x86utm could be designed so that halt7.c's main() is
invoked exactly like any other simulation started e.g. by HHH within
halt7.c.
That would need some Simulate() function to drive the DebugStep() loop,
and where would that be? If in halt7.c, what simulates Simulate()? Or
it could be hard-coded into x86utm since it never changes. Dunno......
If you're just making the point that /all/ the code in halt7.c is
"executed" within PO's x86utm,
that's perfectly correct. With the possible exception of main(), all
the code in halt7.c is "TM code" or simulations made by that TM code.
Is there a possible exception? I'm looking at the code now and it
looks like the simulation from the entry point into the loaded file
is unconditional; it doesn't appear to be an option to branch to it
natively.
I'm not sure what you're referring to.
You're looking at x86utm code or halt7.c code?
The latter is never linked to an executable, so it can /only/ be
executed within x86utm via libx86emu virtual x86 machine.
x86utm.exe code runs under the hosting OS, reads and "loads" the
halt7.obj code into the libx86emu VM, then runs its own loop in
[x86emu.cpp]Halts() which calls Execute_Instruction() until
[halt7.c]main() returns. HHH code in halt7.c makes occasional
DebugStep() calls to step its simulation, and DebugStep transfers into
x86utm's [x86emu.cpp]DebugStep() which in turn calls
Execute_Instruction() to step HHH's simulation.
x86utm stack at that point will have:
Execute_Instruction()    // simulated instruction from halt7.c
DebugStep()              // ooh! a nested simulation being stepped!
                         // has called back to x86utm DebugStep
Execute_Instruction()    // simulated instruction from halt7.c
DebugStep()              // instruction was a DebugStep in halt7.c which
                         // has called back to x86utm DebugStep
Execute_Instruction()    // 1 instruction from halt7.c
Halts()                  // x86utm loop simulating [halt7.c]main()
..
main()
The TM code is "directly executed" [that's just what the phrase means
in x86utm context] and code it simulates using DebugStep() is
"simulated".
That distinction makes no sense, like a lot of things from P. O.
I was tripped up thinking that directly executed means using the host
processor.
Not sure who coined the term. PO had shown HHH(DD), where HHH decides
DD never halts. Posters wanted to point out that whatever HHH decides,
it needs to match up to [what DD actually does] but what is the phrase
for that? PO tries to only discuss "DD *simulated by HHH*" so in
contrast posters came up with "DD *run natively*" or "DD *executed
directly* (from main)" etc.. to contrast with HHH's simulations. What
phrase would you use?
x86utm architecture and hosting OS's (Windows/Unix) is really
orthogonal to all this.
Like I said it all runs just fine under Linux.
The Linux MakeFile is still there.
"Directly Executed" should be equivalent to a wrapper which calls
DebugStep, except that if we open-code the DebugStep loop, we can
insert halting criteria, and trace recording and whatnot.
I think people discussing that might refer to a UTM here, e.g. UTM(DD),
where UTM would be a function in halt7.c that simulates until
completion. In TM world, UTM(DD) is still a TM UTM simulating DD,
which is conceptually different from what I would think of as DD
"directly executed" (which is just the TM DD!). But PO doesn't grok TMs
and computations, always thinking instead of actual computers loading
and running "computer programs" (aka TM-description strings).
Also if we have 10 posters posting here, we'll have 10 slightly
different terminology uses + PO's understanding.... :)
Anyhow in x86utm world as-is, we can put messages into [halt7.c]main().
Halting criteria naturally (ISTM) go in [halt7.c]HHH. Like in the HP,
if H is a TM halt decider, the halting criteria it applies are in H,
not some meta-level simulator running the TM H. (There is no such
thing. H itself does not need criteria to be aborted or "halt-decided",
it's just a "native" TM, so to speak.)
Mike.
HHH in Halt7.c calls all of its helper functions in Halt7.c and some
helper functions directly in the x86utm OS. These are stubs in Halt7.c.
On 9/16/2025 8:02 PM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
Speaking of which, you have been dodging the question of specifying
what the "actual input" to HHH comprises in the expression HHH(DD).
The termination analyzer HHH is only examining whether or not the
finite string of x86 machine code ending in the C3 byte "ret"
instruction can be reached by the behavior specified by this finite
string.
Are you referring to the sequence of instructions comprising the
procedure DD?
The C function DD.
Is the input to HHH(DDD).
But are you not aware that the entire HHH routine is also part of the
input?
It is only a part of the input in that the HHH must see the results of
simulating an instance of itself simulating an instance of DD to
determine whether or not this simulated DD can possibly reach its own
simulated final halt state. Other than that it is not part of the input.
Yes, and HHH does not simulate itself as halting.
You understood that each decider can have an input defined to "do the
opposite" of whatever this decider decides thwarting the correct
decision for this decider/input pair.
But in order to do that, the input must be understood to be carrying a
copy of that decider.
I think that this was conventionally ignored prior to my deep dive into
simulating termination analyzers.
Exactly the other way around.
If you think that it is impossible for DD to have different behavior
between these two cases then how is it that one is conventionally
undecidable and the other is decidable?
What is "undecidable" is universal halting;
That is only shown indirectly by the fact that the conventional notion
of H/D pairs H is forced to get the wrong answer.
It is shown.
"Wrong".it is an undedicable problem meaning that we don't have a terminatingThe same general notion is (perhaps unconventionally)
algorithm that will give an answer for every possible input.
That's what the word "undecidable" means.
applied to the specific H/D pair where H is understood to be forced to
get the wrong answer.
I don't think there is a conventional term for the H of the H/D pair is forced to get the wrong answer.
HHH(DD) disqualifying itself by not terminating is entirely the fault of
the designer of HHH.
Termination analyzers need not be pure functions.
It will probably take an actual computer scientist to redefine HHH as a
pure function of its inputs that keeps the exact same correspondence to
the HP proofs.
That is impossible.
HHH(DD) being wrong when it does terminate is brought about by the
designer of DD. That designer always has the last word since HHH is a
building block of DD, not the other way around.
What's different between two deciders like HHH and HHH1 is their
/analysis of DD/. Analysis of DD is not the /behavior/ of DD!
I conclusively proved otherwise and you utterly refuse to pay close
enough attention. You still think that DD simulated by HHH reaches its
own final halt state not even understanding that your mechanism for
doing this is more than a pure simulation of the input THUS CHEATING.
No. You didn't prove that the simulation matches the direct execution.
On 9/16/2025 4:49 PM, André G. Isaak wrote:
On 2025-09-16 13:19, olcott wrote:
I am not referring to Mike specifically yet it does seem that he did say
On 9/16/2025 2:09 PM, joes wrote:
On Tue, 16 Sep 2025 13:29:28 -0500, olcott wrote:
It enrages me that people insist that I must be wrong and they do this
On 9/16/2025 1:15 PM, Mike Terry wrote:
You could have just said "yes" the first time.
On 16/09/2025 18:44, olcott wrote:
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
On 9/15/2025 10:52 PM, Mike Terry wrote:
On 16/09/2025 03:37, olcott wrote:
On 9/15/2025 9:10 PM, Mike Terry wrote:
The only relevant cases now are DD.HHH where DD is simulated by HHH
All of the details of the trace that you erased answer every
I didn't ask that.
DD simulated by HHH1 has the same behavior as DD().
I guess I missed an earlier post.
*DDD of HHH1 versus DDD of HHH see below*
What do DD.HHH1 and DDD.HHH mean?
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? And so DD.exe (I think I've seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you don't mean that (?)
DD simulated by HHH cannot possibly reach its final halt state.
possible question that you could possibly have about these things. My answer directed you to the trace.
HHH(DD) is the conventional diagonal case and HHH1(DD) is the
common understanding that another different decider could
correctly decide this same input because it does not form the
diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
AKA the conventional diagonal relationship and DD.HHH1 where the
same input does not form the diagonal case thus is conventionally
decidable.
entirely on the basis of refusing to pay enough attention.
Chill, dude...
Mike Terry never said anything about you being right or wrong. He
merely asked you to clarify your dot notation...
André
that I am wrong on the basis of his own lack of understanding of one
very key point.
What is at stake here is life on Earth (death by climate change) and the
rise of the fourth Reich on the basis that we have not unequivocally
divided lies from truth.
My system of reasoning makes the set of {True on the basis of meaning} computable.
Is severe climate change caused by humans? YES. Is Donald Trump exactly copying Hitler's rise to power? YES.
On 9/16/2025 5:53 PM, Mike Terry wrote:
On 16/09/2025 23:25, olcott wrote:
So you have not bothered to notice that Trump is exactly copying Hitler?
On 9/16/2025 5:10 PM, Mike Terry wrote:
Dude! Nothing going on here has even the remotest of effects on such
On 16/09/2025 19:29, olcott wrote:
This is a matter of life and death of the planet and stopping the rise
On 9/16/2025 1:15 PM, Mike Terry wrote:
On 16/09/2025 18:44, olcott wrote:
The only relevant cases now are DD.HHH where DD is simulated by HHH
On 9/16/2025 10:44 AM, Mike Terry wrote:
On 16/09/2025 05:13, olcott wrote:
All of the details of the trace that you erased answer every
On 9/15/2025 10:52 PM, Mike Terry wrote:
I didn't ask that.
On 16/09/2025 03:37, olcott wrote:
DD simulated by HHH1 has the same behavior as DD().
On 9/15/2025 9:10 PM, Mike Terry wrote:
On 16/09/2025 00:52, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
I guess I missed an earlier post.
On Mon, 15 Sep 2025 13:19:26 -0500, olcott wrote:
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
DD.HHH1 halts. DDD.HHH cannot possibly reach its final halt state.
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
From the POV of HHH1(DDD) the call from DD to HHH(DD) simply returns.
What do DD.HHH1 and DDD.HHH mean?
Mike.
*DDD of HHH1 versus DDD of HHH see below*
DDD of HHH keeps calling HHH(DD) to simulate it again.
DDD of HHH1 never calls HHH1 at all.
So DD.HHH1 means DD *OF* HHH1 ? You mean DD simulated by HHH1? And so DD.exe (I think I've seen that) would mean DD directly executed?
The alternative (which is what I would have guessed) is "DD which calls HHH1", but it seems you don't mean that (?)
Mike.
DD simulated by HHH cannot possibly reach its final halt state.
possible question that you could possibly have about these things. My answer directed you to the trace.
HHH(DD) is the conventional diagonal case and
HHH1(DD) is the common understanding that another different
decider could correctly decide this same input because it does not
form the diagonal case.
That's all very interesting, and all, but what I want to know is:
Does DD.HHH1 mean DD simulated by HHH1?
Does DD.exe mean DD directly executed?
aha! So that's a "yes" to the first question.
AKA the conventional diagonal relationship and DD.HHH1 where the
same input does not form the diagonal case thus is conventionally
decidable.
and I guess I'll just have to accept that as a "yes" for the second
question
This means that it has always been common knowledge that the
behavior of DD with HHH(DD) is different than the behavior of DD
with HHH1(DD) yet everyone here disagrees because they value
disagreement over truth.
and no answer to the third question.
of the fourth Reich.
When truth becomes computable then the liars cannot get away with their
lies.
world events.
Mike.
If it was not for the brave soul of the Senate parliamentarian
cancelling the king maker paragraph of Trump's Big Bullshit Bill the USA would already be more than halfway to the dictatorship power of Nazi
Germany.
Truth can be computable !!!
On 9/15/2025 2:32 PM, olcott wrote:
On 9/15/2025 4:30 PM, Chris M. Thomasson wrote:
On 9/15/2025 2:26 PM, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
On Mon, 15 Sep 2025 13:19:26 -0500, olcott wrote:
It is apparently over everyone's head.
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DDD correctly simulated by HHH1 is identical to the behavior of the
directly executed DDD().
Anyway, HHH1 is not HHH and is therefore superfluous and
irrelevant.
Why are you comparing at all?
When we have emulation compared to emulation we are comparing
Apples to Apples and not Apples to Oranges.
You have yet to explain the significance of HHH1.
Just did.
HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus
must abort DD itself.
That's a tautology. HHH only sees the behavior that HHH has seen
in order to convince itself that seeing more behavior is not
necessary.
Yes?
HHH has complete proof that DDD correctly simulated by HHH cannot
possibly reach its own simulated final halt state.
So yes. What is the proof?
void Infinite_Recursion()
{
  Infinite_Recursion();
  return;
}
How can a program know with complete certainty that
Infinite_Recursion() never halts?
Check...
Ummm... Your Infinite_Recursion example is basically akin to:
10 PRINT "Halt" : GOTO 10
Right? It says halt but does not... ;^)
You aren't this stupid on the other forums.
Well, what are you trying to say here? That the following might halt?
void Infinite_Recursion()
{
Infinite_Recursion();
return;
}
I think not. Blowing the stack is not the same as halting...
On 9/16/2025 8:13 PM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 5:17 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
It enrages me that people insist that I must be
wrong and they do this entirely on the basis of
refusing to pay enough attention.
The problem is that your goalposts for "enough attention" are for
people to see things which are not there.
Enough attention is (for example) 100% totally understanding
every single detail of the execution trace of DDD
simulated by HHH1 that includes DDD correctly simulated
by HHH.
But you don't have that understanding yourself. You don't have the
insight to see that the abandoned simulation of DD, left behind by HH,
is in a state that could be stepped further with DebugStep and that
doing so will bring it to termination.
I proved that these traces do not diverge at the exact
same point that HHH aborts FIFTEEN TIMES NOW and still
ZERO PEOPLE HAVE NOTICED.
Exactly! Now you are getting it. You have two simulations of the same
calculations. They do not diverge.
Then, one is abandoned. But that abandonment doesn't make them
diverge!
*THAT IS COUNTER-FACTUAL TO THE EXTENT THAT YOU ARE DISHONEST*
*We have two simulations that do not diverge until*
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
HHH simulates DD ALL OVER AGAIN
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
Analysis by simulation is tantalizingly close to behavior.
DD simulated by HHH according to the semantics of
the x86 language IS 100% EXACTLY AND PRECISELY THE
DD is not simulated by HHH according to the semantics of the
x86 language.
Can you cite the chapter and verse of any Intel architecture manual
where it says that the CPU can go into a suspended state whenever the
same function is called twice, without any intervening conditional
branch instructions?
HHH's partial, incomplete simulation of DD is not an evocation
of DD's behavior. It is just a botched analysis of DD.
Only the completed simulation of a terminating procedure can be
identified as having evoked a representation of its behavior.
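For reference, the abort rule being alluded to, as it has been described in this thread (two calls to the same function with no conditional branch executed in between), amounts to something like the following sketch; this is a paraphrase of the described criterion, not code quoted from halt7.c:

#include <stdbool.h>
#include <stdint.h>

/* One entry per traced instruction relevant to the rule. */
typedef struct {
  bool     is_call;          /* CALL instruction? */
  bool     is_cond_branch;   /* conditional branch instruction? */
  uint32_t call_target;      /* valid only when is_call is true */
} TraceEntry;

/* True if some function is called twice with no conditional branch
   executed between the two calls. */
bool Should_Abort(const TraceEntry *trace, int len)
{
  for (int i = 0; i < len; i++) {
    if (!trace[i].is_call)
      continue;
    for (int j = i + 1; j < len; j++) {
      if (trace[j].is_cond_branch)
        break;                              /* a decision intervened */
      if (trace[j].is_call && trace[j].call_target == trace[i].call_target)
        return true;                        /* rule matched: abort */
    }
  }
  return false;
}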
On Tue, 16 Sep 2025 20:12:27 -0500, olcott <polcott333@gmail.com> wrote in <10ad1ts$2u79h$1@dont-email.me>:
On 9/16/2025 7:56 PM, Mike Terry wrote:
On 16/09/2025 22:54, Kaz Kylheku wrote:
On 2025-09-16, Mike Terry
<news.dead.person.stones@darjeeling.plus.com> wrote:
On 16/09/2025 06:50, Kaz Kylheku wrote:
On 2025-09-16, Mike Terry
<news.dead.person.stones@darjeeling.plus.com> wrote:
That's what happens now in effect - x86utm.exe log starts tracing
at the halt7.c main() function.
Then why does he keep claiming that main() is "native"? Nothing is
"native" in the loaded COFF test case, then.
I genuinely can't see why you would be asking that question, so I'm
missing something.
Just I thought that the host executable just branches into the loaded
module's main(). (Which would be a sensible thing to do; there is no
need to simulate anything outside of the halting decider, such as HHH.)
I see - no, x86utm parses input file halt7.obj to grab what it needs
regarding data/code sectors, symbol definitions (function names and
locations), relocation fixups table, then uses that to initialise a
libx86emu "virtual address space". In effect it performs its own
equivalent of LoadLibrary() within that virtual address space, loading
the module starting at low memory address 0x00000000. halt7.obj is
never linked to form an OS executable. (I suppose it could be, perhaps
with minor changes...)
Yes and I did that all myself from scratch.
https://github.com/plolcott/x86utm/blob/master/include/Read_COFF_Object.h
Does most of this work.
The author of libx86emu had to upgrade it from 16-bit to 32-bit for me.
I'm not sure x86utm directly calling halt7.c's main() would be a good
design, while it remains the case that simulation performed by HHH is
within the libx86emu virtual machine. It could be done that way, but
then code like HHH would be running in two completely separate
environments: Windows/Unix host, and libx86emu virtual address space.
They must be exactly the same, and it's a good thing if they're easily
seen and verified as being the same. That happens if they're both
performed by x86utm code via libx86emu, and also it means x86utm's log
shows both in the same format.
Also ISTM the hosting environment should be logically divorced from the
halt7.c code as far as possible. E.g. let's imagine x86utm is doing
its stuff, but then it turns out the x86utm address space (which is
32-bit,
like the libx86emu virtual address space) starts requiring bigger
tables and allocations for running multiple libx86emu virtual address
spaces or whatever, and a resource limit is encountered. We get past
that by making x86utm.exe 64-bit. That should all be routine
(expecting the usual 32- to 64-bit code porting issues...) Problem
gone! x86utm.exe now has 64-bits of address space (or close after OS
is taken out) and libx86emu still creates its 32-bit virtual machines
to run halt7.c code. (You can see where this leads!) But now your
design of x86utm directly calling main() and hence HHH() means HHH
has to be both 64-bit (for x86utm to directly call) and 32-bit (to run
under libx86emu). Or perhaps I just want to run PO's code on some RISC
architecture, not x86 at all - I can compile C++ code (x86utm) to run
on that RISC CPU, but halt7.c absolutely must be x86 code...
No need for 64-bit.
That is all in x86utm.cpp.
Alternatively, x86utm could be designed so that halt7.c's main() is
invoked exactly like any other simulation started e.g. by HHH within
halt7.c.
That would need some Simulate() function to drive the DebugStep() loop,
and where would that be? If in halt7.c, what simulates Simulate()? Or
it could be hard-coded into x86utm since it never changes. Dunno......
If you're just making the point that /all/ the code in halt7.c is
"executed" within PO's x86utm,
that's perfectly correct. With the possible exception of main(), all
the code in halt7.c is "TM code" or simulations made by that TM code.
Is there a possible exception? I'm looking at the code now and it
looks like the simulation from the entry point into the loaded file
is unconditional; it doesn't appear to be an option to branch to it
natively.
I'm not sure what you're referring to.
You're looking at x86utm code or halt7.c code?
The latter is never linked to an executable, so it can /only/ be
executed within x86utm via libx86emu virtual x86 machine.
x86utm.exe code runs under the hosting OS, reads and "loads" the
halt7.obj code into the libx86emu VM, then runs its own loop in
[x86emu.cpp]Halts() which calls Execute_Instruction() until
[halt7.c]main() returns. HHH code in halt7.c makes occasional
DebugStep() calls to step its simulation, and DebugStep transfers into
x86utm's [x86emu.cpp]DebugStep() which in turn calls
Execute_Instruction() to step HHH's simulation.
x86utm stack at that point will have:
Execute_Instruction()    // simulated instruction from halt7.c
DebugStep()              // ooh! a nested simulation being stepped!
                         // has called back to x86utm DebugStep
Execute_Instruction()    // simulated instruction from halt7.c
DebugStep()              // instruction was a DebugStep in halt7.c which
                         // has called back to x86utm DebugStep
Execute_Instruction()    // 1 instruction from halt7.c
Halts()                  // x86utm loop simulating [halt7.c]main()
..
main()
The TM code is "directly executed" [that's just what the phrase means >>>>> in x86utm context] and code it simulates using DebugStep() is
"simulated".
That distinction makes no sense, like a lot of things from P. O.
I was tripped up thinking that directly executed means using the host
processor.
Not sure who coined the term. PO had shown HHH(DD), where HHH decides
DD never halts. Posters wanted to point out that whatever HHH decides,
it needs to match up to [what DD actually does] but what is the phrase
for that? PO tries to only discuss "DD *simulated by HHH*" so in
contrast posters came up with "DD *run natively*" or "DD *executed
directly* (from main)" etc.. to contrast with HHH's simulations. What
phrase would you use?
x86utm architecture and hosting OS's (Windows/Unix) is really
orthogonal to all this.
Like I said it all runs just fine under Linux.
The Linux MakeFile is still there.
HHH in Halt7.c calls all of its helper functions in Halt7.c and some"Directly Executed" should be equivalent to a wrapper which callsI think people discussing that might refer to a UTM here, e.g UTM(DD),
DebugStep, except that if we open-code the DebugStep loop, we can
insert halting criteria, and trace recording and whatnot.
where UTM would be a function in halt7.c that simulates until
completion. In TM world, UTM(DD) is still a TM UTM simulating DD,
which is conceptually different from what I would think of as DD
"directly executed" (which is just the TM DD! But PO doesn't grok TMs
and computations, always thinking instead of actual computers loading
and running "computer programs" (aka TM-description strings).
Also if we have 10 posters posting here, we'll have 10 slightly
different terminology uses + PO's understanding.... :)
Anyhow in x86utm world as-is, We can put messages into [halt7.c]main().
Halting criteria naturally (ISTM) go in [halt7.c]HHH. Like in the HP,
if H is a TM halt decider, the halting criteria it applies are in H,
not some meta-level simulator running the TM H. (There is no such
thing. H itself does not need criteria to be aborted or "halt-decided",
it's just a "native" TM, so to speak.)
Mike.
HHH in Halt7.c calls all of its helper functions in Halt7.c and some
helper functions directly in the x86utm OS. These are stubs in Halt7.c.
Can you verify that this x86utm is not a fork of this? :
https://github.com/utmapp/UTM
Thanks.
On Tue, 16 Sep 2025 17:11:46 -0500, olcott <polcott333@gmail.com> wrote in
I am not referring to Mike specifically yet it does seem that he did say
that I am wrong on the basis of his own lack of understanding of one
very key point.
What is at stake here is life on Earth (death by climate change) and the
rise of the fourth Reich on the basis that we have not unequivocally
divided lies from truth.
My system of reasoning makes the set of {True on the basis of meaning}
computable.
Is severe climate change caused by humans? YES Is Donald Trump exactly
copying Hitler's rise to power? YES
I...don't follow. These decisions, true or false, have nothing
to do with the halting problem.
Isn't it the truth that some attempts at deciders turn out to
be undecidable?
The execution trace that I posted provides every
relevant detail. When you ask these kinds of questions
that only proves you did not study the execution trace
well enough.
I have proved to Mike that the simulation of DD by HHH
does not diverge when HHH stops simulating it about five
times now and he still doesn't get it.
On Mon, 15 Sep 2025 14:35:18 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote in <10aa0qm$26msp$5@dont-email.me>:
Well, what are you trying to say here? That the following might halt?
void Infinite_Recursion()
{
  Infinite_Recursion();
  return;
}
I think not. Blowing the stack is not the same as halting...
As a matter of principle, it's part of the execution environment,
just like his partial decider simulating the code.
In his world, catching it calling itself twice without an intervening decision is grounds to abort.
In our world, when the stack gets used up, it aborts.
Fair's fair. If one is valid -- and if he's thumping the x86 bible
saying that the rules of the instruction set are the source of
truth -- then he can't have an infinite stack.
(
The example could be
loop: goto loop;
But if I'm not mistaken, a decider can be written for such a trivial example...by parsing the source, not simulating it!
)
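A sketch of what such a source-parsing decider for just this one trivial
shape might look like (purely illustrative, nobody's posted code):

#include <string.h>

/* returns 1 = halts, 0 = does not halt; only meaningful for this toy shape */
int decide_trivial_loop(const char *src)
{
    if (strstr(src, "loop: goto loop;") != NULL)
        return 0;               /* an unconditional self-branch never halts */
    return 1;                   /* otherwise assume halting (toy case only) */
}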
On Tue, 16 Sep 2025 19:07:11 -0500, olcott wrote
On 9/16/2025 5:53 PM, Mike Terry wrote:
Dude! Nothing going on here has even the remotest of effects on such
world events.
Mike.
So you have not bothered to notice that Trump is exactly copying Hitler?
If it was not for the brave soul of the Senate parliamentarian
cancelling the king maker paragraph of Trump's Big Bullshit Bill the USA
would already be more than halfway to the dictatorship power of Nazi
Germany.
Truth can be computable !!!
How do you compute "empathy" ...or "mercy"?
These are matters of conscience. Is conscience nothing more
than a computation?
Humoring you a bit, it seems that the closest you could get to
such a thing might be a system of ethics based on the results of
game theory, part of information theory.
But another part of information theory shows that the halting problem is undecidable.
None of that has anything to do with explaining your notation. If
you're cagey about such basic things, it invites suspicion.
Am Tue, 16 Sep 2025 20:37:39 -0500 schrieb olcott:
On 9/16/2025 8:02 PM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
Speaking of which, you have been dodging the question of specifying
what the "actual input" to HHH comprises in the expression HHH(DD).
The termination analyzer HHH is only examining whether or not the
finite string of x86 machine code ending in the C3 byte "ret"
instruction can be reached by the behavior specified by this finite
string.
Are you referring to the sequence of instructions comprising the
procedure DD? Is the input to HHH(DDD)?
The C function DD.
But are you not aware that the entire HHH routine is also part of the
input?
It is only a part of the input in that HHH must see the results of
simulating an instance of itself simulating an instance of DD to
determine whether or not this simulated DD can possibly reach its own
simulated final halt state. Other than that it is not part of the input.
Yes, and HHH does not simulate itself as halting.
How else would it be part of the input?
You understood that each decider can have an input defined to "do the
opposite" of whatever this decider decides, thwarting the correct
decision for this decider/input pair.
But in order to do that, the input must be understood to be carrying a
copy of that decider.
I think that this was conventionally ignored prior to my deep dive into
simulating termination analyzers.
Exactly the other way around.
If you think that it is impossible for DD to have different behavior
between these two cases then how is it that one is conventionally
undecidable and the other is decidable?
What is "undecidable" is universal halting;
it is an undecidable problem meaning that we don't have a terminating
algorithm that will give an answer for every possible input.
That's what the word "undecidable" means.
That is only shown indirectly by the fact that the conventional notion
of H/D pairs H is forced to get the wrong answer.
It is shown.
The same general notion is (perhaps unconventionally)
applied to the specific H/D pair where H is understood to be forced to
get the wrong answer. "Wrong".
I don't think there is a conventional term for the H of the H/D pair
being forced to get the wrong answer.
HHH(DD) disqualifying itself not terminating is entirely the fault of
the designer of HHH.
Termination analyzers need not be pure functions.
It will probably take an actual computer scientist to redefine HHH as a
pure function of its inputs that keeps the exact same correspondence to
the HP proofs.
That is impossible.
HHH(DD) being wrong when it does terminate is brought about by the
designer of DD. That designer always has the last word since HHH is a
building block of DD, not the other way around.
What's different between two deciders like HHH and HHH1 is their
/analysis of DD/. Analysis of DD is not the /behavior/ of DD!
I conclusively proved otherwise and you utterly refuse to pay close
enough attention. You still think that DD simulated by HHH reaches its
own final halt state not even understanding that your mechanism for
doing this is more than a pure simulation of the input THUS CHEATING
No. You didn't prove that the simulation matches the direct execution.
Nobody thinks that HHH as it is simulates DD returning. Jumping past
the call is a strawman.
[spam snipped]
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 8:02 PM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 5:24 PM, Kaz Kylheku wrote:
On 2025-09-16, olcott <polcott333@gmail.com> wrote:
Rather, it is ridiculously dumb that you cannot see that
this bypass doesn't occur, in the diagonal test case, when
the decider returns a "does not halt" decision (HHH(DD) -> 0).
That is not the actual behavior specified by the actual input to HHH(DD)
Speaking of which, you have been dodging the question of specifying
what the "actual input" to HHH comprises in the expression HHH(DD).
The termination analyzer HHH is only examining whether
or not the finite string of x86 machine code ending
in the C3 byte "ret" instruction can be reached by the
behavior specified by this finite string.
Are you referring to the sequence of instructions comprising
the procedure DD?
The C function DD.
This behavior does include that DD calls HHH(DD)
in recursive simulation. DD is the program under
test and HHH is the test program.
The finite string of x86 machine code instructions
does include the instruction "CALL HHH, DD".
Yes and the behavior of HHH simulating an instance of itself
simulating an instance of DD is simulated.
The problem you don't realize is that DD could instead have "CALL GGG,
DD", where GGG is a clean-room reimplementation of HHH, based
on a careful specification from reverse-engineering HHH.
Your halting decision then screws up because it compares addresses
to answer the question "are these two functions the same function?"
When HHH(DD) analyzes DD and sees that DD calls GGG(DD),
it must recognize that HHH and GGG are the same, because GGG is a
clean-room clone of HHH.
The way you are testing function equivalence is flawed.
Function equivalence cannot be determined by address because,
for instance:
add_foo(x, y) { return x + y }
add_bar(x, y) { return x + y }
are just two names/addresses for the same computation.
There is no algorithm which determines whether two functions
are the same; it is an undecidable problem.
Your abort decision is based on a strawman implementation of
a problem which is undecidable in its correct form.
You are seeing this problem in your own code. You created a clone
of HHH called HHH1, which is the same except for the name.
Yet, it's behaving differently.
Why? Because when your machinery sees CALL HHH1 and CALL HHH
in the execution trace, it treats them as different functions.
You must not do that. You must compare function pointers using
your own CompareFunction(X, Y) function which is calibrated
such that CompareFunction(HHH, HHH1) yields true.
If you consistently use CompareFunction everywhere you would
otherwise use X == Y, you will see that HHH1(DD) and HHH(DD)
proceed identically.
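A small self-contained illustration of that point (not halt7.c code): the
two functions compute the same thing, yet address comparison reports them
as different.

#include <stdio.h>

typedef int (*fn2)(int, int);

static int add_foo(int x, int y) { return x + y; }
static int add_bar(int x, int y) { return x + y; }

int main(void)
{
    fn2 a = add_foo, b = add_bar;
    printf("same address? %d\n", a == b);              /* typically 0 */
    printf("same results?  %d\n", a(2, 3) == b(2, 3)); /* 1: same computation */
    return 0;
}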
But are you not aware that the entire HHH routine is also part of
the input?
It is only a part of the input in that the HHH
must see the results of simulating an instance
of itself simulating an instance of DD to determine
whether or not this simulated DD can possibly reach
its own simulated final halt state. Other than
that is it not part of the input.
That's where you are wrong. DD is built on HHH. Saying HHH
is not part of the input is like saying the engine is not
part of a car.
None of my three theory of computation textbooks
seemed to mention this at all. Where did you get
it from?
By paying attention in CS classes.
Pure functions are important in topics connected with programming
languages. Their properties are useful in compiler optimization,
concurrent programming and what not.
If you want to explore the theory of computation by writing code
in a programming language which has imperative features (side effects),
it behooves you to make your functions behave as closely to the
theoretical ones as possible, which implies purity.
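For instance, a minimal illustration of that difference (names invented for
the example):

static int call_count = 0;          /* hidden state */

int impure_next(int x)              /* result depends on how often it was called */
{
    return x + call_count++;
}

int pure_next(int x, int count)     /* result determined by its arguments alone */
{
    return x + count;
}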
You seem to get totally confused when these are
made specific by HHH/DD and HHH1/DD.
If you think that it is impossible for DD to have
different behavior between these two cases then how
is it that one is conventionally undecidable and
the other is decidable?
What is "undecidable" is universal halting;
No, No, No, No, No, No, No, No, No, No.
That is only shown indirectly by the fact
that the conventional notion of H/D pairs
H is forced to get the wrong answer.
Anyway, I'm only illustrating the term "undecidable" and how it is
used. It is used to describe situations when we believe we don't have
an algorithm that terminates on every input in the desired input set.
it is an undecidable problem
meaning that we don't have a terminating algorithm that will give an
answer for every possible input.
That's what the word "undecidable" means.
The same general notion is (perhaps unconventionally)
applied to the specific H/D pair where H is understood
to be forced to get the wrong answer.
I don't think there is a conventional term for the H
of the H/D pair being forced to get the wrong answer.
Just "gets thew wrong answer". The case, as a set of one,
is decidable.
The relationship between HHH and DD isn't that DD is "undecidable" to
HHH, but that HHH /doesn't/ decide DD (either by not terminating or
returning the wrong value). This is by design; DD is built on HHH and
designed such that HHH(DD) is incorrect, if HHH(DD) terminates.
So what conventional term do we have for the undecidability
of a single H/D pair? H forced to get the wrong answer seems too clumsy
It's not only clumsy, but it's wrong. Nothing is forced.
Forced means that an altered course of action is imposed by
interference where a different course was to take place without
interference.
But H is not altered; it is set in stone, and then D is built on top of
it, revealing a latent flaw in H.
HHH(DD) disqualifying itself not terminating is entirely the fault of
the designer of HHH.
Termination analyzers need not be pure functions.
A termination analyzer needs to be an algorithm. An algorithm is an abstraction that can be rendered in purely functional form, as a
sequence of side-effect-free transformations of data representations.
Any simulation that falls short of this is just an incomplete and/or
incorrect analysis, and not a description of the subject's
behavior.
It follows the semantics specified by the input finite string.
Not all the way to the end, right? The semantics is not completely
evolved when the simulation is abruptly abandoned.
(And you have the wrong idea of what that input is; the input
isn't just the body of D, but all of HHH, and all the simulation
machinery that HHH calls. Debug_Step is part of DD, etc.)
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
Analysis by simulation is tantalizingly close to behavior.
DD simulated by HHH according to the semantics of
the x86 language IS 100% EXACTLY AND PRECISELY THE
DD is not simulated by HHH according to the semantics of the
x86 language.
Can you cite the chapter and verse of any Intel architecture manual
where it says that the CPU can go into a suspended state whenever the
same function is called twice, without any intervening conditional
branch instructions?
HHH's partial, incomplete simulation of DD is not an evocation
of DD's behavior. It is just a botched analysis of DD.
Only the completed simulation of a terminating procedure can be
identified as having evoked a representation of its behavior.
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
That's vacuously true because DD is never correctly
simulated by HHH.
The diagonal case is something that happens; a situation which never
happens is never identifiable as one which happens.
On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
It is only a part of the input in that the HHH
must see the results of simulating an instance
of itself simulating an instance of DD to determine
whether or not this simulated DD can possibly reach
its own simulated final halt state. Other than
that is it not part of the input.
That's where you are wrong. DD is built on HHH. Saying HHH
is not part of the input is like saying the engine is not
part of a car.
DD is the program under test.
HHH is not the program under test.
HHH is the test program.
DD is not the test program.
On 2025-09-17 08:16, olcott wrote:
On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
[..snip..]
DD is the program under test.
HHH is not the program under test.
HHH is the test program.
DD is not the test program.
This is a fundamental misunderstanding on your part.
The outermost HHH is the test program.
DD is the program under test.
The HHH called from within DD is *part* of the program under test. It is *not* the test program.
André
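In code terms the relationship looks roughly like this (illustrative stub
bodies only, not PO's actual sources):

int HHH(int (*subject)(void));          /* the test program (the decider)       */

int DD(void)                            /* the program under test               */
{
    if (HHH(DD))                        /* this embedded copy of HHH is part of */
        for (;;) {}                     /* DD; "halts" verdict -> loop forever  */
    return 0;                           /* "does not halt" verdict -> halt      */
}

/* stub so the sketch is self-contained; the real HHH is the whole analyzer */
int HHH(int (*subject)(void)) { (void)subject; return 0; }

/* Only the outermost call, HHH(DD), plays the role of the tester. */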
On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
The way you are testing function equivalence is flawed.
It is not, and your convoluted example seems to make no point.
Function equivalence cannot be determined by address because,
for instance:
In the semantics of the x86 model of computation
same address means same function.
add_foo(x, y) { return x + y }
add_bar(x, y) { return x + y }
are just two names/addresses for the same computation.
There is no algorithm which determines whether two functions
are the same; it is an undecidable problem.
Your abort decision is based on a strawman implementation of
a problem which is undecidable in its correct form.
Identical finite strings of machine code can achieve
this same result in the Linz proof.
You are seeing this problem in your own code. You created a clone
of HHH called HHH1, which is the same except for the name.
Yet, it's behaving differently.
In the very well known conventionally understood way.
Why? Because when your machinery sees CALL HHH1 and CALL HHH
in the execution trace, it treats them as different functions.
When I say that DD calls HHH(DD) in recursive simulation
and DD does not call HHH1 at all and I say this 500 times
and no one sees that these behaviors cannot be the same
on the basis of these differences I can only reasonably
conclude short-circuits in brains or lying.
You must not do that. You must compare function pointers using
your own CompareFunction(X, Y) function which is calibrated
such that CompareFunction(HHH, HHH1) yields true.
If you consistently use CompareFunction everywhere you would
otherwise use X == Y, you will see that HHH1(DD) and HHH(DD)
proceed identically.
The trace already shows that DD emulated by HHH1 and
DD emulated by HHH are identical until HHH emulates
an instance of itself emulating an instance of DD.
That's where you are wrong. DD is built on HHH. Saying HHH
is not part of the input is like saying the engine is not
part of a car.
DD is the program under test.
HHH is not the program under test.
HHH is the test program.
DD is not the test program.
None of my three theory of computation textbooks
seemed to mention this at all. Where did you get
it from?
By paying attention in CS classes.
I only learned theory of computation from textbooks
and you say the whole pure function thing is not
in any textbooks?
Pure functions are important in topics connected with programming
languages. Their properties are useful in compiler optimization,
Useful yet not mandatory.
If you want to explore the theory of computation by writing code
in a programming language which has imperative features (side effects),
it behooves you to make your functions behave as closely to the
theoretical ones as possible, which implies purity.
I think that I learned that from you.
Pure functions are Turing computable functions.
On the other hand impure functions seem to
contradict the Church/Turing thesis.
Anyway, I'm only illustrating the term "undecidable" and how it is
used. It is used to describe situations when we believe we don't have
an algorithm that terminates on every input in the desired input set.
So maybe the conventional "do the opposite" relationship
can be called an undecidable instance.
I will not tolerate
that there is no existing term for a meaning that I must
express.
Just "gets thew wrong answer". The case, as a set of one,
is decidable.
I am hereby establishing the term "undecidable instance" for this case.
The relationship between HHH and DD isn't that DD is "undecidable" to
HHH, but that HHH /doesn't/ decide DD (either by not terminating or
returning the wrong value). This is by design; DD is built on HHH and
designed such that HHH(DD) is incorrect, if HHH(DD) terminates.
So what conventional term do we have for the undecidability
of a single H/D pair? H forced to get the wrong answer seems too clumsy
It's not only clumsy, but it's wrong. Nothing is forced.
Forced means that an altered course of action is imposed by
interference where a different course was to take place without
interference.
It is an input/decider combination intentionally defined
to create an undecidable instance.
If you can tell me how to convert HHH into a pure function
and keep complete correspondence to the HP proof I will do this.
Until then HHH is a correct termination analyzer.
Any simulation that falls short of this is just an incomplete and/or
incorrect analysis, and not a description of the subject's
behavior.
It follows the semantics specified by the input finite string.
Not all the way to the end, right? The semantics is not completely
evolved when the simulation is abruptly abandoned.
If you only paid enough attention you would see that the
only possible end is out-of-memory error.
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
The way you are testing function equivalence is flawed.
It is not, and your convoluted example seems to make no point.
Function equivalence cannot be determined by address because,
for instance:
In the semantics of the x86 model of computation
same address means same function.
Sure, you genius computer scientist, you!
The thing you are not understanding in my text above is that
a different address does /not/ mean different function!
The exact /same/ computation can be implemented in multiple ways,
and located at /different/ addresses.
Your native pointer comparison wrongly concludes that
two functions that are the same are different.
add_foo(x, y) { return x + y }
add_bar(x, y) { return x + y }
are just two names/addresses for the same computation.
There is no algorithm which determines whether two functions
are the same; it is an undecidable problem.
Your abort decision is based on a strawman implementation of
a problem which is undecidable in its correct form.
Identical finite strings of machine code can achieve
this same result in the Linz proof.
Machines can be identical/equivalent computations yet be completely
different strings.
You are seeing this problem in your own code. You created a clone
of HHH called HHH1, which is the same except for the name.
Yet, it's behaving differently.
In the very well known conventionally understood way.
Why? Because when your machinery sees CALL HHH1 and CALL HHH
in the execution trace, it treats them as different functions.
When I say that DD calls HHH(DD) in recursive simulation
and DD does not call HHH1 at all and I say this 500 times
But if HHH1 and HHH were correctly identified as the same computation,
then there would be no difference between "call HHH(DD)" and
"call HHH1(DD)".
It doesn't strike you as wrong that if you just copy a function
under a different name, you get a different behavior?
(Didn't you work as a software engineer and get to pick most function
names all your life without worrying that the choice would break your program?)
and no one sees that these behaviors cannot be the same
on the basis of these differences I can only reasonably
conclude short-circuits in brains or lying.
The difference you created is wrong; your shit is concluding
that if there are different addresses in a CALL instruction
in an execution trace, then the functions must be different.
And /that test/ is what is /introducing/ the difference.
I'm saying that /if/ you had an abstract comparison Compare_Function(X,
Y) which reports true when X is HHH and Y is HHH1 (or vice versa),
and used that comparison whenever your abort logic compares
function pointers rather than using the == operator, that
difference in behavior would disappear.
You must not do that. You must compare function pointers using
your own CompareFunction(X, Y) function which is calibrated
such that CompareFunction(HHH, HHH1) yields true.
If you consistently use CompareFunction everywhere you would
otherwise use X == Y, you will see that HHH1(DD) and HHH(DD)
proceed identically.
The trace already shows that DD emulated by HHH1 and
DD emulated by HHH are identical until HHH emulates
an instance of itself emulating an instance of DD.
They are identical until a decision is made, which involves
comparing whether functions are the same, by address.
That's where you are wrong. DD is built on HHH. Saying HHH
is not part of the input is like saying the engine is not
part of a car.
DD is the program under test.
HHH is not the program under test.
The bulk of DD consists of HHH, and the bulk of HHH consists of the
simulator that it relies on, so what you are saying is completely
moronic, as if you don't understand software eng.
HHH is the test program.
DD is not the test program.
The diagonal case changes that; the program under test
by an algorithm includes the implementation of that
algorithm.
None of my three theory of computation textbooks
seemed to mention this at all. Where did you get
it from?
By paying attention in CS classes.
I only learned theory of computation from textbooks
and you say the whole pure function thing is not
in any textbooks?
Pure functions are important in topics connected with programming
languages. Their properties are useful in compiler optimization,
Useful yet not mandatory.
That's because when we are developing systems, we are not trying to
prove theorems about halting. (Ideally, we would just want to prove that
our systems do what we think and say they do; and we /can/ prove that combinations of impure functions have the properties we want them to
have in that context.)
If you want to explore the theory of computation by writing code
in a programming language which has imperative features (side effects),
it behooves you to make your functions behave as closely to the
theoretical ones as possible, which implies purity.
I think that I learned that from you.
Pure functions are Turing computable functions.
On the other hand impure functions seem to
contradict the Church/Turing thesis.
Not necessarily. When you have impurity, you have to manage it carefully
and prove that it's not making your theoretical result wrong. This is an additional burden which requires you to be extra clever, and it's an
extra burden to anyone following your work.
Turing's own tape machine is designed such that the tape head
performs impure calculations: it mutates the tape.
This is managed by isolation. Each Turing Machine gets its own tape.
Each Turing Machine is understood to be a process that starts with the
tape in the specified initial contents. We never have to think about the
tape being corrupt, or being tampered with.
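A minimal sketch of that isolation idea (illustrative names only): each
nested run works on its own private copy of the tape.

#include <stdlib.h>
#include <string.h>

/* run a step function over a private copy of the tape, so nested runs
   can never see (or damage) each other's intermediate writes */
char *simulate_on_private_tape(const char *tape, size_t len,
                               void (*run)(char *, size_t))
{
    char *copy = malloc(len);
    if (copy == NULL)
        return NULL;
    memcpy(copy, tape, len);    /* isolation: each "machine" gets its own tape */
    run(copy, len);
    return copy;                /* caller owns and frees the result tape */
}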
Anyway, I'm only illustrating the term "undecidable" and how it is
used. It is used to describe situations when we believe we don't have
an algorithm that terminates on every input in the desired input set.
So maybe the conventional "do the opposite" relationship
can be called an undecidable instance.
I wouldn't. Because the instance is positively decidable.
The term "undecidable" in computer science is so strongly linked
with the idea of there being no algorithm which terminates
and provides the correct answer for all instances in the space
of concern, I wouldn't reuse the term for anything else.
I will not tolerate
that there is no existing term for a meaning that I must
express.
I just use "the diagonal case", because it is understood that in the
diagonal case there is a procedure and an input, such that the
procedure decides incorrectly. However, that is a bit of a coded term
understandable just to people who are ramped up on the problem.
Just "gets thew wrong answer". The case, as a set of one,
is decidable.
I am hereby establishing the term "undecidable instance" for this case.
Hard disagree; naming is important. Reusing deeply entrenched,
loaded terms, big no no.
The relationship between HHH and DD isn't that DD is "undecidable" to
HHH, but that HHH /doesn't/ decide DD (either by not terminating or
returning the wrong value). This is by design; DD is built on HHH and
designed such that HHH(DD) is incorrect, if HHH(DD) terminates.
So what conventional term do we have for the undecidability
of a single H/D pair? H forced to get the wrong answer seems too clumsy
It's not only clumsy, but it's wrong. Nothing is forced.
Forced means that an altered course of action is imposed by
interference where a different course was to take place without
interference.
It is an input/decider combination intentionally defined
to create an undecidable instance.
But the combination doesn't do anything to the decider whatsoever,
taking it as-is.
If you can tell me how to convert HHH into a pure function
and keep complete correspondence to the HP proof I will do this.
Until then HHH is a correct termination analyzer.
"Until someone shows how to correct my mistake, I am right"
is kind of not how it works.
Any simulation that falls short of this is just an incomplete and/or
incorrect analysis, and not a description of the subject's
behavior.
It follows the semantics specified by the input finite string.
Not all the way to the end, right? The semantics is not completely
evolved when the simulation is abruptly abandoned.
If you only paid enough attention you would see that the
only possible end is out-of-memory error.
I paid more attention and saw that the abandoned simulation is actually terminating. Take a few more steps of it and a CALL HHH DD instruction
is seen to terminate, and conditional jumps are coming.
On 9/17/2025 12:26 AM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
Analysis by simulation is tantalizingly close to behavior.
DD simulated by HHH according to the semantics of the x86 language IS
100% EXACTLY AND PRECISELY THE
DD is not simulated by HHH according to the semantics of the x86
language.
That is proven to be counter-factual.
Can you cite the chapter and verse of any Intel architecture manual
where it says that the CPU can go into a suspended state whenever the
same function is called twice, without any intervening conditional
branch instructions?
DD is simulated according to the exact semantics of the x86 language up
to the point where it is no longer simulated.
HHH's partial, incomplete simulation of DD is not an evocation of DD's
behavior. It is just a botched analysis of DD.
You lack the technical skill to meet my challenge of deriving the
correct non-termination criteria therefore you have no basis to
determine that my non-termination criteria are incorrect. It is a matter
of objective fact that my non-termination criteria are correct.
Only the completed simulation of a terminating procedure can be
identified as having evoked a representation of its behavior.
That you fail to understand that DD simulated by HHH according to the
semantics of the x86 language cannot possibly reach its own simulated
final halt state is purely your own ignorance thus not my mistake.
I don't know how you think you could get away with your alternative as
being a pure simulation.
On 9/17/2025 10:19 AM, Kaz Kylheku wrote:
[..snip..]
You are the one that seems to not be able to understand the easily
verified fact that DD calls HHH(DD) in recursive simulation changes the
behavior relative to DD does not call HHH1 at all.
On Wed, 17 Sep 2025 10:26:49 -0500, olcott wrote:
[..snip..]
DD halts.
/Flibble
On 9/17/2025 10:35 AM, Mr Flibble wrote:
On Wed, 17 Sep 2025 10:26:49 -0500, olcott wrote:
On 9/17/2025 10:19 AM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:You are the one that seems to not be able to understand the easily
On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
The way you are testing function equivalence is flawed.It is not and you convoluted example seems to make no point.
Function equivalence cannot be determined by address because,In the semantics of the x86 model of computation same address means
for instance:
same function.
Sure, you genius computer scientist, you!
The thing you are not understanding in my text above is that a
different address does /not/ mean different function!
The exact /same/ computation can be implemented in multiple ways,
and located at /different/ addresseds.
Your native pointer comparison wrongly concludes that two functions
that are the same are different.
add_foo(x, y) { return x + y }
add_bar(x, y) { return x + y }
are just two names/addresses for the same computation.
There is no algorithm which determines whether two functions are
the same; it is an undecidable problem.
Your abort decision is based on a strawman implementation of a
problem which is undecidable in its correct form.
Identical finite strings of machine code can achieve this same
result in the Linz proof.
Machines can be identical/equivalent computations yet be completely
different strings.
You are seeing this problem in your own code. You created a cloneIn the very well known conventionally understood way.
of HHH called HHH1, which is the same except for the name.
Yet, it's behaving differently.
Why? Because when your machinery sees CALL HHH1 and CALL HHH in the >>>>>> execution trace, it treats them as a different functions.When I say that DD calls HHH(DD) in recursive simulation and DD does >>>>> not call HHH1 at all and I say this 500 times
But if HHH1 and HHH were correctly identified as the same
computation, then there would be no difference between "call HHH(DD)"
and "call HHH1(DD)".
It doesn't strike you as wrong that if you just copy a function under
a different name, you get a different behavior?
(Didn't you work as a software engineer and get to pick most function
names all your life without worrying that the choice would break your
program?)
and no one sees that these behaviors cannot be the same on the basis >>>>> of these differences I can only reasonably conclude short-circuits
in brains or lying.
The difference you created is wrong; your shit is concluding that if
there are differnt addresses in a CALL instruction in an execution
trace, then the functions must be different.
And /that test/ is what is /introducing/ the difference.
I'm saying that /if/ you had an abstract comparison
Compare_Function(X,
Y) which reports true when X is HHH and Y is HHH1 (or vice versa),
and used that comparison whenever your abort logic compares function
pointers rather than using the == operator, that difference in
behavior would disappear.
You must not do that. You must compare function pointers using your >>>>>> own CompareFunction(X, Y) function which is calibrated such that
CompareFunction(HHH, HHH1) yields true.
If you consistently use CompareFunction eveywhere you would
otherwise use X == Y, you will see that HHH1(DD) and HHH(DD)
proceed identically.
The trace already shows that DD emulated by HHH1 and DD emulated by
HHH are identical until HHH emulates an instance of itself emulating >>>>> an instance of DD.
They are identical until a decision is made, which involves comparing
whether functions are the same, by address.
That's where you are wrong. DD is built on HHH. Saying HHH is not
part of the input is like saying the engine is not part of a car.
DD is the program under test.
HHH is not the program under test.
The bulk of DD consists of HHH, and the bulk of HHH consists of the
simulator that it relies on, so what you are saying is completely
moronic, as if you don't understand software eng.
HHH is the test program.
DD is not the test program.
The diagonal case changes that; the program under test by an
algorithm includes the implementation of that algorithm.
None of my three theory of computation textbooks seemed to mention >>>>>>> this at all. Where did you get it from?
By paying attention in CS classes.
I only learned theory of computation from textbooks and you say the
whole pure function thing is not in any textbooks?
Pure functions are important in topics connected with programming
languages. Their properties are useful in compiler optimiization,
Useful yet not mandatory.
That's because when we are developing systems, we are not trying to
prove theorems about halting. (Ideally, we would just want to prove
that our systems do what we think and say they do; and we /can/ prove
that combinations of impure functions have the properties we want
them to have in that context.)
If you want to explore the theory of computation by writing code in >>>>>> a programming language which has imperative features (side
effects), it behooves you to make your functions behave as closely >>>>>> to the theoretical ones as possible, which implies purity.
I think that I learned that from you.
Pure functions are Turing computable functions.
On the other hand impure functions seem to contradict the
Church/Turing thesis.
Not necessarily. When you have impurity, you have to manage it
carefully and prove that it's not making your theoretical result
wrong.
This is an additional burden which requires you to be extra clever,
and it's an extra burden to anyone following your work.
Turing's own tape machine is designed such that the tape head
performs impure calculations it mutates the tape.
This is managed by isolation. Each Turing Machine gets its own tape.
Each Turing Machine is understood to be a process that starts with
the tape in the specified initial contents. We never have to think
bout the tape being corrupt, or being tampered with.
Anyway, I'm only illustrating the term "undecidable" and how it is >>>>>> used. It is used to describe situations when we believe we don'tSo maybe the conventional "do the opposite" relationship can be
have an algorithm that terminates on every input in the desired
input set.
called an undecidable instance.
I wouldn't. Because the instance is positively decidable.
The term "undecidable" in computer science is so strongly linked with
the idea of there being no algorithm which terminates and provides
the correct answer for all instances in the space of concern, I
wouldn't reuse the term for anything else.
I will not tolerate that there is no existing term for a meaning
that I must expression.
I just use "the diagonal case", because it is understood that in the
diagonal case there is a procedure and an input, such that the
procedure decdides incorrectly. However, that is a bit of a coded
term understandable just to people who are ramped up on the problem.
Just "gets thew wrong answer". The case, as a set of one,
is decidable.
I am hereby establishing the term "undecidable instance" for this
case.
Hard disagree; naming is important. Reusing deeply entrenced,
loaded terms, big no no.
It's not only clumsy, but it's wrong. Nothing is forced.The relationship between HHH and DD isn't that DD isSo what conventional term do we have for the undecidability of a >>>>>>> single H/D pair? H forced to get the wrong answer seems too clumsy >>>>>>
"undecidable" to HHH, but that HHH /doesn't/ decide DD (either by >>>>>>>> not terminating or returing the wrong value). This is by design; >>>>>>>> DD is built on HHH and designed such that HHH(DD) is incorrect, >>>>>>>> if HHH(DD) terminates.
Forced means that an altered course of action is imposed by
interference where a different course was to take place without
interference.
It is an input/decider combination intentionally defined to create
an undecidable instance.
But the combination doens't do anything to the decider whatsoever,
taking it as-is.
If you can tell me how to convert HHH into a pure function and keep
complete correspondence to the HP proof I will do this.
Until then HHH is a correct termination analyzer.
"Until someone shows how to correct my mistake, I am right"
is kind of not how it works.
Any simulation that falls short of this is just an incomplete
and/or incorrect analysis, and not a description of the subject's >>>>>>>> behavior.
It follows the semantics specified by the input finite string.
Not all the way to the end, right? The semantics is not completely >>>>>> evolved when the simulation is abruptly abandoned.
If you only paid enough attention you would see that the only
possible end is out-of-memory error.
I paid more attention and saw that the abandoned simulation is
actually terminating. Take a few more steps of it and a CALL HHH DD
instruction is seen to terminate, and conditional jumps are coming.
You are the one that seems to not be able to understand the easily verified fact that DD calls HHH(DD) in recursive simulation, which changes the behavior relative to HHH1, which DD does not call at all.
DD halts.
/Flibble
DD.HHH does not halt.
You keep trying to get away with the strawman deception.
On Wed, 17 Sep 2025 10:44:39 -0500, olcott wrote:
On 9/17/2025 10:35 AM, Mr Flibble wrote:
On Wed, 17 Sep 2025 10:26:49 -0500, olcott wrote:
On 9/17/2025 10:19 AM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/16/2025 11:58 PM, Kaz Kylheku wrote:
The way you are testing function equivalence is flawed.
It is not, and your convoluted example seems to make no point.
Function equivalence cannot be determined by address because, for instance:
In the semantics of the x86 model of computation, same address means same function.
Sure, you genius computer scientist, you!
The thing you are not understanding in my text above is that a
different address does /not/ mean different function!
The exact /same/ computation can be implemented in multiple ways,
and located at /different/ addresses.
Your native pointer comparison wrongly concludes that two functions
that are the same are different.
int add_foo(int x, int y) { return x + y; }
int add_bar(int x, int y) { return x + y; }
are just two names/addresses for the same computation.
There is no algorithm which determines whether two functions are the same; it is an undecidable problem.
Your abort decision is based on a strawman implementation of a
problem which is undecidable in its correct form.
Identical finite strings of machine code can achieve this same
result in the Linz proof.
Machines can be identical/equivalent computations yet be completely
different strings.
You are seeing this problem in your own code. You created a clone of HHH called HHH1, which is the same except for the name.
In the very well known conventionally understood way.
Yet, it's behaving differently.
Why? Because when your machinery sees CALL HHH1 and CALL HHH in the execution trace, it treats them as different functions.
When I say that DD calls HHH(DD) in recursive simulation and DD does not call HHH1 at all and I say this 500 times
But if HHH1 and HHH were correctly identified as the same computation, then there would be no difference between "call HHH(DD)" and "call HHH1(DD)".
It doesn't strike you as wrong that if you just copy a function under a different name, you get a different behavior?
(Didn't you work as a software engineer and get to pick most function names all your life without worrying that the choice would break your program?)
and no one sees that these behaviors cannot be the same on the basis of these differences I can only reasonably conclude short-circuits in brains or lying.
The difference you created is wrong; your shit is concluding that if there are different addresses in a CALL instruction in an execution trace, then the functions must be different.
And /that test/ is what is /introducing/ the difference.
I'm saying that /if/ you had an abstract comparison Compare_Function(X, Y) which reports true when X is HHH and Y is HHH1 (or vice versa), and used that comparison whenever your abort logic compares function pointers rather than using the == operator, that difference in behavior would disappear.
You must not do that. You must compare function pointers using your own CompareFunction(X, Y) function which is calibrated such that CompareFunction(HHH, HHH1) yields true.
If you consistently use CompareFunction everywhere you would
otherwise use X == Y, you will see that HHH1(DD) and HHH(DD)
proceed identically.
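Something like the following minimal sketch is what that calibrated comparison would amount to; HHH and HHH1 here are stand-in stubs invented for illustration, not the real x86utm functions, and only the comparison logic is the point:

#include <stdio.h>

typedef int (*decider_fn)(int);

static int HHH(int x)  { return x != 0; }  /* stand-in stub */
static int HHH1(int x) { return x != 0; }  /* byte-for-byte clone under another name */

/* Compare deciders by "which computation is this?" rather than by raw address. */
static int Compare_Function(decider_fn a, decider_fn b)
{
    if (a == b)
        return 1;                          /* same address, trivially the same function */
    if ((a == HHH && b == HHH1) || (a == HHH1 && b == HHH))
        return 1;                          /* calibrated: the clone is the same computation */
    return 0;
}

int main(void)
{
    printf("by address:          %d\n", HHH == HHH1);                 /* 0 */
    printf("by Compare_Function: %d\n", Compare_Function(HHH, HHH1)); /* 1 */
    return 0;
}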
The trace already shows that DD emulated by HHH1 and DD emulated by HHH are identical until HHH emulates an instance of itself emulating an instance of DD.
They are identical until a decision is made, which involves comparing whether functions are the same, by address.
That's where you are wrong. DD is built on HHH. Saying HHH is not part of the input is like saying the engine is not part of a car.
DD is the program under test.
HHH is not the program under test.
The bulk of DD consists of HHH, and the bulk of HHH consists of the
simulator that it relies on, so what you are saying is completely
moronic, as if you don't understand software eng.
HHH is the test program.
DD is not the test program.
The diagonal case changes that; the program under test by an
algorithm includes the implementation of that algorithm.
None of my three theory of computation textbooks seemed to mention this at all. Where did you get it from?
By paying attention in CS classes.
I only learned theory of computation from textbooks and you say the whole pure function thing is not in any textbooks?
Pure functions are important in topics connected with programming languages. Their properties are useful in compiler optimization,
Useful yet not mandatory.
That's because when we are developing systems, we are not trying to
prove theorems about halting. (Ideally, we would just want to prove
that our systems do what we think and say they do; and we /can/ prove
them to have in that context.)
[..snip..]
To be a halt decider HHH must report a halting decision to its caller, DD, and DD then halts if HHH reports non-halting thus proving HHH to be incorrect.
/Flibble
On 9/17/2025 10:59 AM, Mr Flibble wrote:
[..snip..]
To be a halt decider HHH must report a halting decision to its caller, DD, and DD then halts if HHH reports non-halting, thus proving HHH to be incorrect.
/Flibble
That is just not the way that deciders have ever worked. HHH has always
been required to report on the semantic property specified by its input finite string. Rice's theorem mentions semantic properties yet still misattributes these to something other than the input finite string.
On Mon, 15 Sep 2025 21:59:33 +0100, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote in <10a9unl$26h2q$1@dont-email.me>:
hmm, maybe objcopy can convert ELF to COFF? It has a -O bfdname option.
I don't have objcopy to test, but PO only needs a few basic COFF
capabilities, so it might be enough...
<https://man7.org/linux/man-pages/man1/objcopy.1.html>
Mike.
$ objcopy -I elf64-x86-64 -O pe-i386 hello.o hello.coff
$ file hello.coff
hello.coff: Intel 80386 COFF object file, no line number info, not
stripped, 8 sections, symbol offset=0x22e, 4 symbols, 1st section name ".text"
Looks like it can handle it -- whether or not Olcott's COFF handler
can read those, no clue.
(Something tells me one would have to build the original .o file
with -m32 -march=i386, but my machine isn't set up to do that. That,
because IIRC Olcott's system is 32-bit.)
On Wed, 17 Sep 2025 11:11:42 -0500, olcott wrote:
[..snip..]
To be a halt decider HHH must report a halting decision to its caller, DD, and DD then halts if HHH reports non-halting, thus proving HHH to be incorrect.
/Flibble
That is just not the way that deciders have ever worked. HHH has always
been required to report on the semantic property specified by its input
finite string. Rice's theorem mentions semantic properties yet still
misattributes these to something other than the input finite string.
However, in the case of the Halting Problem diagonalization-based proofs, HHH's input just happens to ALSO be a description of its caller.
HHH gets
the answer wrong because the Halting Problem has been proven to be undecidable.
/Flibble
On 9/17/2025 12:30 AM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
That's vacuously true because DD is never correctly
simulated by HHH.
It seems that you might be intentionally dishonest here.
I proved that DD is correctly simulated by HHH up to the
point where DD has met its non-halting criteria.
The diagonal case is something that happens; a situation which never
happens is never identifiable as one which happens.
When DD is emulated by HHH it is very easy to see
that the "do the opposite" code is unreachable.
Within the premise that HHH(DD) is only supposed
to report on the actual behavior that it actually
sees then the diagonal case is defeated. HHH
simply rejects DD as non-halting.
Only when it is incorrectly defined such that the
decider is required to report on something besides
the actual behavior that the actual input actually
specifies.
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/17/2025 12:30 AM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
DD correctly simulated by HHH IS NEVER THE DIAGONAL CASE
That's vacuously true because DD is never correctly
simulated by HHH.
It seems that you might be intentionally dishonest here.
I proved that DD is correctly simulated by HHH up to the
point where DD has met its non-halting criteria.
"correctly simulated up to the point" is exactly the same
kind of weasel phrasing as "lawfully behaved up to the point of
robbing a convenience store".
If you correctly answer some exam questions, but then run out
of time, so that 15 out of 20 are unanswered, your score is 25%.
Completeness is part of correctness.
If you write a program that receives some packets from the network and
has to handle 15 cases, but you implemented only four, it is not
correct, even if the four cases are flawless.
The diagonal case is something that happens; a situation which never
happens is never identifiable as one which happens.
When DD is emulated by HHH it is very easy to see
that the "do the opposite" code is unreachable.
No, it isn't. That only happens when you disable the abort
code. Then HHH(DD) doesn't return.
Within the premise that HHH(DD) is only supposed
to report on the actual behavior that it actually
sees then the diagonal case is defeated. HHH
simply rejects DD as non-halting.
But the simulation that is rejected has an instruction
pointer that's ready to go to the next instruction.
DD.HHH does not halt.
You keep trying to get away with the strawman deception.
HHH has never been supposed to report on the behavior
of its caller.
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
Only when it is incorrectly defined such that the
decider is required to report on something besides
the actual behavior that the actual input actually
specifies.
The actual input is the function DD, the function HHH,
and all the functions that HHH calls like Debug_Step,
Allocate and everything else in the call graph.
The input is the entire call graph of DD, not just the
code in DD.
On 17/09/2025 17:11, olcott wrote:
HHH has never been supposed to report on the behavior
of its caller.
And it doesn't. It completely fails to report on DD.
On 9/17/2025 1:22 PM, Richard Heathfield wrote:
On 17/09/2025 17:11, olcott wrote:
HHH has never been supposed to report on the behavior
of its caller.
And it doesn't. It completely fails to report on DD.
HHH cannot see the behavior of its caller or
which function is calling it.
A halt decider cannot use psychic power to
REPORT ON BEHAVIOR THAT IT CANNOT SEE.
Thus the job of a halt decider is to report
on the actual behavior that its actual finite
string input actually specifies.
On 17/09/2025 19:44, olcott wrote:
On 9/17/2025 1:22 PM, Richard Heathfield wrote:
On 17/09/2025 17:11, olcott wrote:
HHH has never been supposed to report on the behavior
of its caller.
And it doesn't. It completely fails to report on DD.
HHH cannot see the behavior of its caller or
which function is calling it.
Nor can it correctly decide the question it was written to answer.
On 9/17/2025 2:02 PM, Richard Heathfield wrote:
On 17/09/2025 19:44, olcott wrote:
On 9/17/2025 1:22 PM, Richard Heathfield wrote:
On 17/09/2025 17:11, olcott wrote:
HHH has never been supposed to report on the behavior
of its caller.
And it doesn't. It completely fails to report on DD.
HHH cannot see the behavior of its caller or
which function is calling it.
Nor can it correctly decide the question it was written to answer.
So you don't understand that requiring a halt decider to have psychic ability is nuts?
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/17/2025 1:22 PM, Richard Heathfield wrote:
On 17/09/2025 17:11, olcott wrote:
HHH has never been supposed to report on the behavior of its caller.
And it doesn't. It completely fails to report on DD.
HHH cannot see the behavior of its caller or which function is calling it.
The caller of the top-level activation of HHH is main, isn't it?
int main()
{
  if (HHH(DD)) {
    OutputString("HHH says DD halts");
  } else {
    OutputString("HHH says DD doesn't halt");
  }
}
In your twisted simulation world, it /is/ possible for HHH to have access
to an execution trace which could inform it that it is being called by
main. That kind of thing isn't valid, though.
On 9/17/2025 2:08 PM, Kaz Kylheku wrote:
On 2025-09-17, olcott <polcott333@gmail.com> wrote:
On 9/17/2025 1:22 PM, Richard Heathfield wrote:
On 17/09/2025 17:11, olcott wrote:
HHH has never been supposed to report on the behavior of its caller.
And it doesn't. It completely fails to report on DD.
HHH cannot see the behavior of its caller or which function is calling it.
The caller of the top-level activation of HHH is main, isn't it?
Not when the directly executed DD() exists.
On 9/17/2025 11:38 AM, Mr Flibble wrote:
On Wed, 17 Sep 2025 11:11:42 -0500, olcott wrote:
[..snip..]
The way you are testing function equivalence is flawed.
It is not, and your convoluted example seems to make no point.
The point is that they are the same.
When I say that DD calls HHH(DD) in recursive simulation and DD does not call HHH1 at all and I say this 500 times and no one sees that these behaviors cannot be the same on the basis of these differences I can only reasonably conclude short-circuits in brains or lying.
You are, naturally, infallible. What is the difference between HHH and HHH1 again?
The trace already shows that DD emulated by HHH1 and DD emulated by HHH are identical until HHH emulates an instance of itself emulating an instance of DD.
They are identical until a decision is made, which involves comparing whether functions are the same, by address.
Good catch.
I will not tolerate that there is no existing term for a meaning that I must express.
There is no need to make up a term for something nonexistent.
I'd just say the "decider" gets this input wrong.
It is an input/decider combination intentionally defined to create an undecidable instance.
Not if you fix DD to mean "the program that calls the decider that
But the combination doesn't do anything to the decider whatsoever, taking it as-is.
DD halts. DD.HHH does not halt.
You keep trying to get away with the strawman deception.
DD also halts if it were simulated further, but DD is not DD1.
DD is the input to HHH; DD calls HHH.
However, in the case of the Halting Problem diagonalization-based proofs, HHH's input just happens to ALSO be a description of its caller.
Not exactly if you are paying 100% complete attention.
HHH gets the answer wrong because the Halting Problem has been proven to be undecidable.
Only when it is incorrectly defined such that the decider is required to report on something besides the actual behavior that the actual input actually specifies.
You may not like it, but the way the HP is defined it is undecidable.
That's pretty obvious.
No one ever noticed this before because no one ever investigated simulating halt deciders to the degree that they can see the "do the opposite" portion of the input is unreachable code when it is being simulated by its corresponding decider.
Am Wed, 17 Sep 2025 12:22:44 -0500 schrieb olcott:
[..snip..]
You are, naturally, infallible. What is the difference between HHH and HHH1 again?
The trace already shows that DD emulated by HHH1 and DD emulated by HHH are identical until HHH emulates an instance of itself emulating an instance of DD.
They are identical until a decision is made, which involves comparing whether functions are the same, by address.
Good catch.
On 9/17/2025 3:33 PM, joes wrote:
[..snip..]
Exactly, HHH decides HHH is different from HHH1.
The trace already shows that DD emulated by HHH1 and DD emulated by HHH are identical until HHH emulates an instance of itself emulating an instance of DD.
They are identical until a decision is made, which involves comparing whether functions are the same, by address.
Good catch.
That is not a good catch, it is a damned lie.
They are identical until HHH emulates an instance of itself emulating an instance of DD.
Am Wed, 17 Sep 2025 15:36:37 -0500 schrieb olcott:
[..snip..]
On Mon, 15 Sep 2025 14:35:18 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote in <10aa0qm$26msp$5@dont-email.me>:
On 9/15/2025 2:32 PM, olcott wrote:
On 9/15/2025 4:30 PM, Chris M. Thomasson wrote:
On 9/15/2025 2:26 PM, olcott wrote:
On 9/15/2025 4:01 PM, joes wrote:
Am Mon, 15 Sep 2025 13:19:26 -0500 schrieb olcott:
It is apparently over everyone's head.
On 9/15/2025 12:27 PM, Kaz Kylheku wrote:
Yeah, why does HHH think that it doesn't halt *and then halts*?
On 2025-09-15, olcott <polcott333@gmail.com> wrote:
On 9/14/2025 1:41 PM, Kaz Kylheku wrote:
 From the POV of HHH1(DDD) the call from DD to HHH(DD)
simply returns.
DDD correctly simulated by HHH1 is identical to the behavior of the directly executed DDD().
Why are you comparing at all?
Anyway, HHH1 is not HHH and is therefore superfluous and irrelevant.
When we have emulation compared to emulation we are comparing
Apples to Apples and not Apples to Oranges.
You have yet to explain the significance of HHH1.
Just did.
HHH ONLY sees the behavior of DD *BEFORE* HHH has aborted DD thus must abort DD itself.
That's a tautology. HHH only sees the behavior that HHH has seen in order to convince itself that seeing more behavior is not necessary.
HHH has complete proof that DDD correctly simulated by HHH cannot possibly reach its own simulated final halt state.
Yes?
So yes. What is the proof?
void Infinite_Recursion()
{
  Infinite_Recursion();
  return;
}
How can a program know with complete certainty that
Infinite_Recursion() never halts?
Check...
You aren't this stupid on the other forums.
Ummm... Your Infinite_Recursion example is basically akin to:
10 PRINT "Halt" : GOTO 10
Right? It says halt but does not... ;^)
Well, what are you trying to say here? That the following might halt?
void Infinite_Recursion()
{
Infinite_Recursion();
return;
}
I think not. Blowing the stack is not the same as halting...
As a matter of principle, it's part of the execution environment,
just like his partial decider simulating the code.
In his world, catching it calling itself twice without an intervening decision is grounds to abort.
In our world, when the stack gets used up, it aborts.
Fair's fair. If one is valid -- and if he's thumping the x86 bible
saying that the rules of the instruction set are the source of
truth -- then he can't have an infinite stack.
(
The example could be
loop: goto loop;
But if I'm not mistaken, a decider can be written for such a trivial example...by parsing the source, not simulating it!
)
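For what that parenthetical is worth, a minimal sketch of such a source-parsing check might look like the following; it is a toy pattern match for exactly the two shapes discussed above (an unconditional direct self-call, and a "loop: goto loop;" jump), with every name invented for illustration, and it is in no way a general decider:

#include <stdio.h>
#include <string.h>

enum verdict { HALTS_UNKNOWN, DOES_NOT_HALT };

/* Inspect a function body as plain text.  Report non-halting only when an
   unconditional direct self-call appears before any return or branch, or
   when a "loop: goto loop;" self-loop is present.  Everything else is
   "don't know". */
static enum verdict trivial_check(const char *name, const char *body)
{
    char self_call[128];
    snprintf(self_call, sizeof self_call, "%s(", name);

    const char *call = strstr(body, self_call);
    const char *ret  = strstr(body, "return");
    const char *cond = strstr(body, "if");

    if (call && (!ret || call < ret) && (!cond || call < cond))
        return DOES_NOT_HALT;           /* self-call reached unconditionally */

    if (strstr(body, "loop: goto loop;"))
        return DOES_NOT_HALT;           /* trivial self-loop */

    return HALTS_UNKNOWN;
}

int main(void)
{
    const char *body = "{ Infinite_Recursion(); return; }";
    if (trivial_check("Infinite_Recursion", body) == DOES_NOT_HALT)
        puts("Infinite_Recursion never halts");
    else
        puts("no verdict");
    return 0;
}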
On 9/17/2025 2:48 AM, vallor wrote:
On Mon, 15 Sep 2025 14:35:18 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote in <10aa0qm$26msp$5@dont-email.me>:
[..snip..]
HHH is smart enough to detect infinite loops and complex cases of infinite recursion involving many functions.
DD.HHH does not halt.
You keep trying to get away with the strawman deception.
On 9/17/2025 8:44 AM, olcott wrote:
[..]
DD.HHH does not halt.
You keep trying to get away with the strawman deception.
Am I lying when I say I think you might be a bit unstable?