• Never any actual rebuttal to HHH(DD)==0 Since 10/13/2022

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 22:00:16 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
    On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
    i don't get y polcott keep hanging onto ai for dear life. anyone with

    Throngs of dumb boomers are falling for AI generated videos, believing
    them to be real. This is much the same thing.

    AI is just another thing Olcott has no understanding of. He's not
    researched the fundamentals of what it means to train a language
    network, and how it is ultimately just token prediction.

    It excels at generating good syntax. The reason for that is that the
    vast amount of training data exhibits good syntax. (Where it has bad
    syntax, it is idiosyncratic; whereas good syntax is broadly shared.)

    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

    But you're incapable of recognizing valid entailment from invalid.

    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

    We already have the solid reasoning which says things are other than as
    you say, and you don't have the faintest idea how to put a dent in it.


    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.


    And when I identify a flaw you simply ignore
    whatever I say.

    I'm convinced by the argumentation; and that conviction has
    the side effect of convincing me of the falsehood of your
    ineffective, contrary argumentation.


    Not really; it actually gives you the bias to refuse
    to pay attention.

    That is not any actual rebuttal of the specific points that I make.

    No, indeed /that/ isn't; but plenty of those have also been made not
    only by me but various others, over a considerable time span.


    Never any actual rebuttal ever since Professor
    Sipser agreed with my words. Those exact same
    words still form the basis of my whole proof.

    *Here is an accurate paraphrase of those words*

    <Input to LLM systems>

    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    Simulating Termination Analyzer HHH correctly simulates its input until:

    (a) It correctly detects that its simulated input cannot
    possibly reach its own simulated final halt state then:
    abort simulation and return 0 rejecting its input as non-halting.

    (b) Simulated input reaches its simulated "return" statement: return 1.
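
    Read operationally, (a) and (b) describe a simulate-and-check loop. Below is a minimal, runnable sketch of that loop over a toy program model; the StepFn encoding, the bounded visited-state table, and the repeated-state test are illustrative assumptions for the sketch, not PO's actual x86utm code, and this cycle check is sound but catches only some non-halting inputs.

        #include <stdio.h>

        /* Toy program model: a "program" is a step function on an int
         * state; a negative state means the final halt state was reached. */
        typedef int (*StepFn)(int state);

        /* Simulate until criterion (a) (a state repeats, so the simulation
         * can never reach a halt state) or criterion (b) (halt state reached). */
        static int toy_analyzer(StepFn step, int state)
        {
            int seen[1024] = {0};              /* visited-state table (bounded) */
            for (;;) {
                if (state < 0)                 /* (b): reached final halt state */
                    return 1;
                if (state < 1024 && seen[state]++)
                    return 0;                  /* (a): cycle detected; abort, reject */
                state = step(state);           /* simulate one more step */
            }
        }

        static int halting_step(int s) { return s - 1; }        /* counts down, halts */
        static int looping_step(int s) { return (s + 1) % 10; } /* cycles forever */

        int main(void)
        {
            printf("halting program: %d (1 = halts)\n", toy_analyzer(halting_step, 5));
            printf("looping program: %d (0 = non-halting)\n", toy_analyzer(looping_step, 0));
            return 0;
        }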
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 23:05:09 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 11:00 PM, olcott wrote:
    [...]


    Never any actual rebuttal ever since Professor
    Sipser agreed with my words. Those exact same
    words still form the basis of my whole proof.

    You mean the words where he didn't agree with your interpretation of them?



    On Monday, March 6, 2023 at 2:41:27 PM UTC-5, Ben Bacarisse wrote:
    Fritz Feldhase <franz.fri...@gmail.com> writes:

    On Monday, March 6, 2023 at 3:56:52 AM UTC+1, olcott wrote:
    On 3/5/2023 8:33 PM, Fritz Feldhase wrote:
    On Monday, March 6, 2023 at 3:30:38 AM UTC+1, olcott wrote:

    I needed Sipser for people [bla]

    Does Sipser support your view/claim that you have refuted the
    halting theorem?

    Does he write/teach that the halting theorem is invalid?

    Tell us, oh genius!

    Professor Sipser only agreed that [...]

    So the answer is no. Noted.

    Because he has >250 students he did not have time to examine anything
    else. [...]

    Oh, a CS professor does not have the time to check a refutation of the halting theorem. *lol*

    I exchanged emails with him about this. He does not agree with anything substantive that PO has written. I won't quote him, as I don't have permission, but he was, let's say... forthright, in his reply to me.


    On 8/23/2024 5:07 PM, Ben Bacarisse wrote:
    joes <noreply@example.org> writes:

    On Wed, 21 Aug 2024 20:55:52 -0500, olcott wrote:

    Professor Sipser clearly agreed that an H that does a finite simulation
    of D is to predict the behavior of an unlimited simulation of D.

    If the simulator *itself* would not abort. The H called by D is,
    by construction, the same and *does* abort.

    We don't really know what context Sipser was given. I got in touch at
    the time so I do know he had enough context to know that PO's ideas were "wacky" and that he had agreed to what he considered a "minor remark".

    Since PO considers his words finely crafted and key to his so-called
    work I think it's clear that Sipser did not take the "minor remark" he agreed to to mean what PO takes it to mean! My own take is that he
    (Sipser) read it as a general remark about how to determine some cases,
    i.e. that D names an input that H can partially simulate to determine
    its halting or otherwise. We all know or could construct some such
    cases.

    I suspect he was tricked because PO used H and D as the names without
    making it clear that D was constructed from H in the usual way (Sipser
    uses H and D in at least one of his proofs). Of course, he is clued in enough to know that, if D is indeed constructed from H like that, the
    "minor remark" becomes true by being a hypothetical: if the moon is made
    of cheese, the Martians can look forward to a fine fondue. But,
    personally, I think the professor is more straight talking than that,
    and he simply took it as a method that can work for some inputs. That's
    the only way it could be seen as a "minor remark" without being accused of being disingenuous.

    On 8/23/2024 9:10 PM, Mike Terry wrote:
    So that PO will have no cause to quote me as supporting his case: what Sipser understood he was agreeing to was NOT what PO interprets it as meaning. Sipser would not agree that the conclusion applies in PO's HHH(DDD) scenario, where DDD halts.

    On 5/2/2025 9:16 PM, Mike Terry wrote:
    PO is trying to interpret Sipser's quote:

    --- Start Sipser quote
    If simulating halt decider H correctly simulates its input D
    until H correctly determines that its simulated D would never
    stop running unless aborted then

    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
    --- End Sipser quote

    The following interpretation is ok:

    If H is given input D, and while simulating D gathers enough
    information to deduce that UTM(D) would never halt, then
    H can abort its simulation and decide D never halts.

    I'd say it's obvious that this is what Sipser is saying, because it's natural, correct, and relevant to what was being discussed (valid
    strategy for a simulating halt decider). It is trivial to check that
    what my interpretation says is valid:

    if UTM(D) would never halt, then D never halts, so if H(D) returns
    never_halts then that is the correct answer for the input. QED :)
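
    The "trivial to check" step, written out as one inference (a restatement only, reading $D\uparrow$ as "D never halts"):

        \[
        \big(\mathrm{UTM}(D)\uparrow \Rightarrow D\uparrow\big)
        \;\wedge\;
        \big(H(D)=\mathrm{never\_halts} \Rightarrow \mathrm{UTM}(D)\uparrow\big)
        \;\Rightarrow\;
        \big(H(D)=\mathrm{never\_halts} \Rightarrow D\uparrow\big)
        \]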

  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 22:13:26 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 10:05 PM, dbush wrote:
    On 10/20/2025 11:00 PM, olcott wrote:
    [...]


    Never any actual rebuttal ever since Professor
    Sipser agreed with my words. Those exact same
    words still form the basis of my whole proof.

    You mean the words where he didn't agree with your interpretation of them?



    According to a Claude AI analysis there
    are only two interpretations and one of
    them is wrong and the other one is my
    interpretation.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 23:16:13 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 11:13 PM, olcott wrote:
    On 10/20/2025 10:05 PM, dbush wrote:
    On 10/20/2025 11:00 PM, olcott wrote:
    [...]


    Never any actual rebuttal ever since Professor
    Sipser agreed with my words. Those exact same
    words still form the basis of my whole proof.

    You mean the words where he didn't agree with your interpretation of
    them?



    According to a Claude AI analysis there
    are only two interpretations and one of
    them is wrong and the other one is my
    interpretation.



    Whether you think one interpretation is wrong is irrelevant. What is
    relevant is that that's how everyone else including Sipser interpreted
    those words, so you lie by implying that he agrees with your interpretation.



    [...]

  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Tue Oct 21 03:20:51 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    [...]


    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.

    I'm convinced by the argumentation; and that conviction has
    the side effect of convincing me of the falsehood of your
    ineffective, contrary argumentation.


    Not really; it actually gives you the bias to refuse
    to pay attention.

    LOL! the world at large is incredibly biased against giving a crank
    like you any attention.

    Those of us present are overcoming the world's /immense/ bias
    and actually indulging the details of your claims.


    That is not any actual rebuttal of the specific points that I make.

    No, indeed /that/ isn't; but plenty of those have also been made not
    only by me but various others, over a considerable time span.


    Never any actual rebuttal ever since Professor
    Sipser agreed with my words.

    You're forgetting (of course, since it was more than 48-72 hours
    ago) that I (almost) also agree with those words.

    Those exact same
    words still form the basis of my whole proof.

    They don't do that, though.

    *Here is an accurate paraphrase of those words*

    <Input to LLM systems>

    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    Simulating Termination Analyzer HHH correctly simulates its input until:

    Yes; it correctly steps each x86 instruction with Debug_Step until:

    (a) It correctly detects that its simulated input cannot
    possibly reach its own simulated final halt state then:

    It correctly detects the situation that: if it doesn't abort,
    the simulation will not end.

    What this means is that if, hypothetically, HHH //were differently
    defined// as a non-aborting decider, then DD //would also be
    differently defined// as a non-terminating case.

    This hypothesis doesn't mean fuck all because it's not reality. HHH is
    not differently defined other than as it is, and likewise DD is not
    differently defined. HHH is required to report on the current definition
    of DD, which is built on the current definition of HHH.

    In any case, yes; the abort is necessary to avoid non-termination.

    abort simulation and return 0 rejecting its input as non-halting.

    And yes, I agree that it aborts the simulation, returns 0
    which indicates that it's rejecting the input as non-halting.

    (Unfortunately, that is wrong).

    But mostly the words can be rationally agreed to with the caveat that
    HHH's result may not be interpreted to be about a hypothetical different version of itself acting on a different input.

    HHH must be reporting about the actual instruction string DD
    that it is actually given. (As you like to repeat.) Not some fantasy
    other versions of these.
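
    A minimal runnable sketch of the dependency being described here, assuming the standard construction of D from H used in the halting theorem proofs: DD consults the one actual HHH and does the opposite of its verdict. HHH is stubbed to return 0, the verdict reported in this thread; the real HHH simulates rather than stubs.

        #include <stdio.h>

        int HHH(int (*P)(void));   /* forward declaration */

        int DD(void)
        {
            if (HHH(DD))           /* ask the actual analyzer about DD itself */
                for (;;) ;         /* verdict "halts": loop forever */
            return 0;              /* verdict "non-halting": halt */
        }

        /* Stand-in for the actual analyzer, which reports 0 for DD. */
        int HHH(int (*P)(void)) { (void)P; return 0; }

        int main(void)
        {
            printf("HHH(DD) == %d\n", HHH(DD));  /* prints 0: "non-halting" */
            DD();                                /* yet this call returns... */
            printf("DD() returned, i.e. the actual DD halts\n");
            return 0;
        }

    With the one actual HHH in place, DD() halts, so the 0 verdict is a report about an input that, as defined, halts; swapping in a hypothetical non-aborting HHH would change DD itself.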
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 22:25:54 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 10:16 PM, dbush wrote:
    On 10/20/2025 11:13 PM, olcott wrote:
    On 10/20/2025 10:05 PM, dbush wrote:
    You mean the words where he didn't agree with your interpretation of
    them?



    According to a Claude AI analysis there
    are only two interpretations and one of
    them is wrong and the other one is my
    interpretation.



    Whether you think one interpretation is wrong is irrelevant.  What is relevant is that that's how everyone else including Sipser interpreted
    those words, so you lie by implying that he agrees with your
    interpretation.


    <MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    would never stop running unless aborted then

    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
    </MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>

    There are only two possible ways to interpret those words
    and one of them is wrong. The one that is not wrong is the
    way that I interpret them.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 22:29:09 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    [...]

    LOL! the world at large is incredibly biased against giving a crank
    like you any attention.

    Hence the huge advantage of LLMs.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 23:29:51 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 11:25 PM, olcott wrote:
    On 10/20/2025 10:16 PM, dbush wrote:
    On 10/20/2025 11:13 PM, olcott wrote:
    On 10/20/2025 10:05 PM, dbush wrote:
    You mean the words where he didn't agree with your interpretation of
    them?



    According to a Claude AI analysis there
    are only two interpretations and one of
    them is wrong and the other one is my
    interpretation.



    Whether you think one interpretation is wrong is irrelevant.  What is
    relevant is that that's how everyone else including Sipser interpreted
    those words, so you lie by implying that he agrees with your
    interpretation.


    <repeat of previously refuted point>


    Repeating the point that was just refuted is less than no rebuttal, and therefore constitutes your admission that Sipser does NOT agree with
    you, and that you have been lying by implying that he does.



    [...]


  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 06:56:47 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    [...]


    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs; therefore the halting problem is wrong.

    Blah, Blah Blah, no Olcott you are wrong, I know
    that you are wrong because I simply don't believe you.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 08:25:51 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 7:56 AM, olcott wrote:
    [...]


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs; therefore the halting problem is wrong.

    False:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
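
    The two clauses above pin down a single total function on description/input pairs. Restated (the HALTS name is not from the thread; whether any Turing machine computes this function is precisely the separate question the theorem answers):

        \[
        \mathrm{HALTS}(\langle X\rangle, Y) =
        \begin{cases}
        1 & \text{if } X(Y) \text{ halts when executed directly}\\
        0 & \text{otherwise}
        \end{cases}
        \]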

  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 07:48:51 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 7:25 AM, dbush wrote:
    On 10/22/2025 7:56 AM, olcott wrote:
    [...]


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs; therefore the halting problem is wrong.

    False:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly


    Yes, that is the exact error that I have been
    referring to.

    In the case of HHH(DD) the above requires HHH to
    report on the behavior of its caller and HHH has
    no way to even know who its caller is.

    My simulating halt decider exposed the gap of
    false assumptions because there are no assumptions:
    everything is fully operational code.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 09:00:27 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 8:48 AM, olcott wrote:
    On 10/22/2025 7:25 AM, dbush wrote:
    On 10/22/2025 7:56 AM, olcott wrote:
    [...]

Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.

    False:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
    directly


Yes, that is the exact error that I have been
referring to.

    That is not an error. That is simply a mapping that you have admitted
    exists.


    In the case of HHH(DD) the above requires HHH to
    report on the behavior of its caller

    False. It requires HHH to report on the behavior of the machine
    described by its input.

int main()
{
  DD();      // this
  HHH(DD);   // is not the caller of this
  return 0;
}
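
(For context, DD is assumed here to have the usual shape posted
throughout these threads; the sketch below is illustrative and is
not quoted from this message.)

typedef int (*ptr)();          // function-pointer type that HHH accepts
int HHH(ptr P);                // simulating halt decider, defined elsewhere

int DD()
{
  int Halt_Status = HHH(DD);   // DD passes its own address to HHH
  if (Halt_Status)
    HERE: goto HERE;           // loop forever if HHH reports "halts"
  return Halt_Status;          // halt if HHH reports "does not halt"
}

With this shape, the HHH(DD) call in main receives DD as data; the
direct call DD() on the line above it is a separate activation of DD.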

    and HHH has
    no way to even know who its caller is.

    Irrelevant.


    My simulating halt decider


    in other words, something that uses simulation to compute the following mapping:


    Given any algorithm (i.e. a fixed immutable sequence of instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




exposed the gap of
false assumptions

The only false assumption is that the above requirements can be
satisfied, which Turing and Linz proved to be false, and which you have
*explicitly* agreed with.
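
(As an aside, the mapping stated above can be written as a C-style
signature; this is a hypothetical sketch of the requirement, not code
from the thread.)

// A halt decider H must be total, i.e. return for every (<X>,Y):
//   H(desc_X, Y) == 1  iff  X(Y) halts when executed directly
//   H(desc_X, Y) == 0  iff  X(Y) does not halt when executed directly
int H(const char *desc_X, const char *Y);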
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 08:47:47 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 8:00 AM, dbush wrote:
    On 10/22/2025 8:48 AM, olcott wrote:
    On 10/22/2025 7:25 AM, dbush wrote:
    On 10/22/2025 7:56 AM, olcott wrote:
<snip>


    In the case of HHH(DD) the above requires HHH to
    report on the behavior of its caller

    False.  It requires HHH to report on the behavior of the machine
    described by its input.


    That includes that DD calls HHH(DD) in recursive
    simulation.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 09:50:38 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 9:47 AM, olcott wrote:
    On 10/22/2025 8:00 AM, dbush wrote:
    On 10/22/2025 8:48 AM, olcott wrote:
    On 10/22/2025 7:25 AM, dbush wrote:
<snip>


    In the case of HHH(DD) the above requires HHH to
    report on the behavior of its caller

    False.  It requires HHH to report on the behavior of the machine
    described by its input.


    That includes that DD calls HHH(DD) in recursive
    simulation.

    Which therefore includes the fact that HHH(DD) will return 0 and that DD
    will subsequently halt.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 09:25:15 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 8:50 AM, dbush wrote:
    On 10/22/2025 9:47 AM, olcott wrote:
    On 10/22/2025 8:00 AM, dbush wrote:
<snip>


    In the case of HHH(DD) the above requires HHH to
    report on the behavior of its caller

    False.  It requires HHH to report on the behavior of the machine
    described by its input.


    That includes that DD calls HHH(DD) in recursive
    simulation.

    Which therefore includes the fact that HHH(DD) will return 0 and that DD will subsequently halt.

    You keep ignoring that we are only focusing on
    DD correctly simulated by HHH. In other words
    the behavior that HHH computes
    FROM ITS ACTUAL FREAKING INPUT NOT ANY OTHER DAMN THING
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 15:40:18 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
<snip>


Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    Blah, Blah Blah, no Olcott you are wrong, I know
    that you are wrong because I simply don't believe you.

You are wrong because (1) I don't see that gaping flaw in the
definition of the halting problem, and (2) you don't even try to
explain how such a flaw can be. Where, how, and why is any decider
being asked to decide something other than an input representable as
a finite string?

I've repeated many times that the diagonal case is constructible as a
finite string, whose halting status can be readily ascertained.

    Because it's obvious to me, of course I'm going to reject
    baseless claims that simply ask me to /believe/ otherwise.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 10:47:48 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
<snip>


Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.


    It only seems that way because you are unable to
    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,
    even though I do remember that you did do this once.

    No sense moving on to any other point until
    mutual agreement on this mandatory prerequisite.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 17:07:42 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
<snip>

Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it, and in what manner.

    When DD is simulated by HHH, the simulation is left incomplete.

    That is not permitted by the semantics of the source
    or target language in which DD is written;
    an incomplete simulation is an incorrect simulation.

Thus, DD is not being simulated by HHH according to the semantics.
The semantics say that there is a next statement or instruction to
execute, which HHH neglects to do.

    Now that would be fine, because HHH's job isn't to evoke the
    full behavior of DD but only to predict whether it will halt.

    But HHH does that incorrectly; the correct halting status is 1,
    not 0.

    Thus HHH achieves neither a correct simulation, nor a correct
    appraisal of the halting status.

    even though I do remember that you did do this once.

    I must have accidentally written something that looked
    like crackpottery.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 12:11:56 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
<snip>

Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 13:30:20 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 10:25 AM, olcott wrote:
    On 10/22/2025 8:50 AM, dbush wrote:
<snip>


    In the case of HHH(DD) the above requires HHH to
    report on the behavior of its caller

    False.  It requires HHH to report on the behavior of the machine
    described by its input.


    That includes that DD calls HHH(DD) in recursive
    simulation.

    Which therefore includes the fact that HHH(DD) will return 0 and that
    DD will subsequently halt.

    You keep ignoring that we are only focusing on
    DD correctly simulated by HHH.

    Which doesn't exist because HHH aborts.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 13:36:19 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 11:47 AM, olcott wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
<snip>

Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.


    It only seems that way because you are unable to
    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    Then you have no mapping since DD is NOT simulated by HHH according to
    the semantics of the C language because HHH aborts.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 13:38:49 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 1:11 PM, olcott wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
<snip>

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    And the fact that HHH(DD) returns 0 causing DD to subsequently halt is
    also part of the behavior specified by finite string DD.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 18:40:54 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
<snip>

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    DD can be passed as an argument to any decider, not only HHH.

    For instance, don't you have a HHH1 such that HHH1(DD)
    correctly steps DD to the end and returns the correct value 1?

    DD's behavior is dependent on a decider which it calls;
    but not dependent on anything which is analyzing DD.

    Even when those two are the same, they are different
    instances/activations.

    DD creates an activation of HHH on whose result it depends.

    The definition of DD's behavior does not depend on the ongoing
    activation of something which happens to be analyzing it;
    it has no knowledge of that.
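
(A hypothetical sketch of the HHH1 point above; the names come from
the thread, but the bodies are illustrative assumptions, not the
thread's actual code.)

typedef int (*ptr)();
int HHH(ptr P);   // assumed: aborts its simulation of DD and returns 0
int DD();         // calls HHH(DD) and halts on the 0 result

int HHH1(ptr P)   // a decider that DD does not call
{
  P();            // direct execution stands in for stepping P to the end
  return 1;       // P reached its final halt state
}
// HHH1(DD) == 1: inside DD, the call HHH(DD) returns 0, so DD halts.
// DD's behavior depended on HHH (which it calls), not on HHH1
// (which merely analyzes it).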
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@agisaak@gm.invalid to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 13:24:00 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22 12:40, Kaz Kylheku wrote:
<snip>

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    André

    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 14:30:13 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 2:24 PM, André G. Isaak wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
<snip>

    I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire description of HHH.

    André


    That includes that HHH(DD) keeps simulating yet
    another instance of itself and DD forever and ever
    until it fully understands that no simulated DD
    can possibly ever reach its own final halt state.

    That five LLM systems immediately understood this
    and figured it all out on their own seems strong
    evidence that you are being disingenuous with me
    right now.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 15:31:25 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 3:30 PM, olcott wrote:
    On 10/22/2025 2:24 PM, André G. Isaak wrote:
<snip>

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    André


    That includes that HHH(DD) keeps simulating yet
    another instance of itself and DD forever and ever

    False, as demonstrated by the fact that HHH(DD) returns.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 20:34:47 2025
    From Newsgroup: comp.ai.philosophy

    On 22/10/2025 20:24, André G. Isaak wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:

    <snip>

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single
    behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp
    that the "finite string input" DD *must* include as a substring
    the entire description of HHH.

    He also seems to be missing the fact that HHH's sole input is a
    function pointer that it immediately invalidates by casting the
pointer into a uint32_t.

    HHH's ability to simulate DD is like a dog's walking on his hind
    legs. It doesn't work well, but you are surprised to find it
    working at all.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    With apologies to Dr Johnson.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 19:52:34 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.
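
(A hypothetical sketch of the workalike point; HHH_workalike is an
illustrative name, not something from the thread.)

typedef int (*ptr)();
int HHH_workalike(ptr P);   // clean-room implementation of HHH's algorithm

int DD()
{
  if (HHH_workalike(DD))    // same mapping as HHH, different code
    for (;;) {}             // loop forever if told "halts"
  return 0;                 // halt if told "does not halt"
}
// The finite string <DD> handed to the analyzing HHH must then embed
// a description of HHH_workalike, just as it must embed HHH itself in
// the original construction.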
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 14:55:23 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 1:40 PM, Kaz Kylheku wrote:
<snip>

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.


That too is stupidly incorrect.
It is the job of every simulating halt decider
to predict what the behavior of its simulated
input would be if it never aborted.

When a person is asked a yes or no question
there are not two separate people in parallel
universes, one that answers yes and one that
answers no. There is one person who thinks
through both hypothetical possibilities and
then provides one answer.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 15:00:47 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.


    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 20:20:39 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 16:24:04 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 3:55 PM, olcott wrote:
    On 10/22/2025 1:40 PM, Kaz Kylheku wrote:
<snip>

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.


That too is stupidly incorrect.
It is the job of every simulating halt decider
to predict what the behavior of its simulated
input would be if it never aborted.

In other words, what would happen if that same input were given to a UTM.


When a person is asked a yes or no question
there are not two separate people in parallel
universes, one that answers yes and one that
answers no. There is one person who thinks
through both hypothetical possibilities and
then provides one answer.


Strawman. The halting problem is about the instructions themselves, not
where the instructions physically reside, i.e. in a particular person's
brain.
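
(A hypothetical trace of the UTM comparison above, assuming the usual
DD/HHH from these threads; this is an illustration, not thread code.)

// What a never-aborting simulator (a UTM) observes for input DD:
//   UTM begins stepping DD
//     DD calls HHH(DD)
//       HHH simulates DD, hits its abort criterion, and stops
//       HHH returns 0
//     DD receives 0 and returns        <- DD halts
//   the UTM's simulation of DD ends    <- the input specifies halting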
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 15:35:06 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
<snip>

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.


Yet again with deflection.
That the input to HHH(DD) specifies non-halting and
HHH(DD) correctly reports this proves either that the
proof does not prove its point or that the halting
problem incorrectly requires HHH to report on
behavior that the input to HHH(DD) does not specify.


    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 16:43:59 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 4:35 PM, olcott wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.


    Yet again with deflection.
    That the input to HHH(DD) specifies non-halting
    False, as you have admitted otherwise:

    On 10/20/2025 11:51 PM, olcott wrote:
    On 10/20/2025 10:45 PM, dbush wrote:
    And it is a semantic tautology that a finite string description of a
    Turing machine is stipulated to specify all semantic properties of the
    described machine, including whether it halts when executed directly.
    And it is this semantic property that halt deciders are required to
    report on.

    Yes that is all correct

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 16:12:38 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 14:32:20 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 2:12 PM, olcott wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time.

    This time? How many other times did you not read it at all? Just a skim,
    then your self-moron program kicks in? Hmm...

    DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.



    HHH(DD) can return 0 and DD halts? If not, just say that HHH(DD) always returns 1?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 17:50:41 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 5:12 PM, olcott wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH

    Does not exist because HHH aborts

    cannot possibly
    reach its own final halt state no matter what HHH does.

    But HHH is an algorithm which means it does exactly one thing and one
    thing only. Anything else is not HHH.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 21:55:45 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 1:40 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.

    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    That too is stupidly incorrect.
    It is the job of every simulating halt decider
    to predict what the behavior of its simulated
    input would be if it never aborted.

    DD is a fixed input string that is etched in stone. That string
    specifies a behavior which invokes a certain decider in self-reference
    and then behaves opposite.

    That decider is recorded inside that string, in every detail,
    and so is also etched in stone.

    That decider is aborting, and can be nothing else.

    No decider which is analyzing DD has the power to alter any
    aspect of that string.

    It is a non-negotiable fact that DD calls an aborting decider
    which returns 0 to it, subsequent to which DD halts; so
    the correct answer for DD is 1.
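
    A minimal sketch, in the C idiom this thread already uses, of the
    structure just described: DD invokes a decider on itself and then does
    the opposite of whatever that decider reports. The body of DD is not
    quoted in this exchange, so this is an illustration of the diagonal
    construction, not the exact code under discussion; the HHH stub below
    merely plays the role of the aborting decider that returns 0.

    #include <stdio.h>

    int HHH(int (*p)(void));   /* hypothetical decider: 1 = halts, 0 = not */

    int DD(void)
    {
        if (HHH(DD))           /* decider says "DD halts"... */
            for (;;) ;         /* ...so loop forever */
        return 0;              /* decider says "DD does not halt": halt */
    }

    /* Stand-in for the aborting decider described above, which
       returns 0 to DD. */
    int HHH(int (*p)(void)) { (void)p; return 0; }

    int main(void)
    {
        /* HHH(DD) returns 0, DD takes the return path and halts,
           so the correct answer for DD is 1. */
        printf("DD() returned %d; DD halted.\n", DD());
        return 0;
    }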

    The behavior of a correct simulation of DD that is not aborted is that
    DD terminates, and thus so does the simulation.

    Games played with the redefinition (actual or hypothetical) of a decider
    that is specified somewhere outside of that string have no effect on
    that string.

    The real halting problem doesn't deal with C and function pointers,
    where you can play games and have the test case use a pointer
    to the same function that is also analyzing it, and be influenced
    by its redefinition and other muddled confusions.

    Even if HHH assumes it is calculating something in relation to
    a hypothetically redefined HHH, that hypothesis does not extend
    into the input DD; it must not.

    You are talking about some angels-on-the-head-of-a-pin rubbish and not
    the Halting Problem.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 17:14:08 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 18:33:05 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 6:14 PM, olcott wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH

    Does not exist because HHH aborts

    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    Category error: algorithm HHH does one thing and one thing only, and
    that is an incomplete and therefore incorrect simulation.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 23:01:39 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.

    DD specifies a procedure that transitions to a terminating state,
    whether any given simulation of it is carried far enough to show that.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 23:15:32 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    I explained that a nested simulation tower is two dimensional.

    One dimension is the simulation level, the nesting itself;
    that goes out to infinity. Due to the aborting behavior of HHH,
    it is not actually realized in simulation; we have to step
    through the aborted simulations to keep it going.

    The other dimension is the execution /within/ the simulations.
    That can be halting or non-halting.

    In the HHH(DD) simulation tower, though that is infinite,
    the simulations are halting.

    I said that before. Your memory of that has vaporized, and you have now
    focused only on my statement that the simulation tower is infinite.

    The depth of the simulation tower, and the halting of the simulations
    within that tower, are independent phenomena.

    A decider must not mistake one for the other.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 18:24:32 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    I explained that a nested simulation tower is two dimensional.

    One dimension is the simulation level, the nesting itself;
    that goes out to infinity.

    Great. Thus the input to HHH(DD) specifies behavior
    such that the correctly simulated DD cannot possibly
    reach its own simulated final halt state.

    Due to the aborting behavior of HHH,
    it is not actually realized in simulation; we have to step
    through the aborted simulations to keep it going.

    The other dimension is the execution /within/ the simulations.
    That can be halting or non-halting.

    In the HHH(DD) simulation tower, though that is infinite,
    the simulations are halting.

    I said that before. Your memory of that has vaporized, and you have now
    focused only on my statement that the simulation tower is infinite.

    The depth of the simulation tower, and the halting of the simulations
    within that tower, are independent phenomena.

    A decider must not mistake one for the other.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 20:14:35 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 7:24 PM, olcott wrote:
    On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    I explained that a nested simulation tower is two dimensional.

    One dimension is the simulation level, the nesting itself;
    that goes out to infinity.

    Great. Thus the input to HHH(DD)

    i.e. finite string DD which is the description of machine DD i.e. <DD>
    and therefore stipulated to specify all semantic properties of machine
    DD including the fact that it halts when executed directly.

    specifies behavior
    such that the correctly simulated DD

    i.e. UTM(DD)

    cannot possibly
    reach its own simulated final halt state.

    False, as proven by UTM(DD) halting.


    Due to the aborting behavior of HHH,
    it is not actually realized in simulation; we have to step
    through the aborted simulations to keep it going.

    The other dimension is the execution /within/ the simulations.
    That can be halting or non-halting.

    In the HHH(DD) simulation tower, though that is infinite,
    the simulations are halting.

    I said that before. Your memory of that has vaporized, and you have now
    focused only on my statement that the simulation tower is infinite.

    The depth of the simulation tower, and the halting of the simulations
    within that tower, are independent phenomena.

    A decider must not mistake one for the other.




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 01:22:41 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    I explained that a nested simulation tower is two dimensional.

    One dimension is the simulation level, the nesting itself;
    that goes out to infinity.

    Great. Thus the input to HHH(DD) specifies behavior
    such that the correctly simulated DD cannot possibly
    reach its own simulated final halt state.

    People replying to you respond to multiple points. Yet you typically
    only read one point of a response.

    You snipped all this:

    Due to the aborting behavior of HHH,
    it is not actually realized in simulation; we have to step
    through the aborted simulations to keep it going.

    The other dimension is the execution /within/ the simulations.
    That can be halting or non-halting.

    In the HHH(DD) simulation tower, though that is infinite,
    the simulations are halting.

    I said that before. Your memory of that has vaporized, and you have now
    focused only on my statement that the simulation tower is infinite.

    The depth of the simulation tower, and the halting of the simulations
    within that tower, are independent phenomena.

    A decider must not mistake one for the other.

    There being endless nested simulations doesn't imply that the
    simulations are nonterminating.

    If we simply do this:

    void fun(void)
    {
        sim_t s = simulation_create(fun);
        return;
    }

    we get an infinite tower of simulations, all of which terminate.

    When fun() is called, it creates a simulation beginning at fun.
    No step of this simulation is performed, yet it exists.

    Then fun terminates.

    No simulation has actually started, but we have a simulation
    state which implies an infinite tower.

    If the simulation s abandoned by fun is stepped, then soon,
    inside that simulation, fun will be called, and will create another
    simulation and exit.

    Then if we simulate that the same thing will happen.

    Suppose the simulation_create module provides a simulate_run
    function which identifies all/any unfinished simulations and
    runs them.

    Then if we do this:

    int main()
    {
        fun();
        simulate_run();
    }

    simulate_run() will get into an infinite loop inside of
    which it is always completing simulations of fun, which
    are creating new simulations.

    That won't even run out of memory because it's not recursion.

    simulation_create() dynamically allocates a simulation. If
    simulate_run() calls simulation_destroy() whenever it detects that it
    has completed a simulation, then I think the situation can hit a steady
    state; it runs forever, continuously launching and terminating
    simulations.

    But we cannot call fun itself non-halting. It has facilitated
    the infinite generation of simulations, but is itself halting.
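
    A self-contained sketch of the hypothetical simulation_create /
    simulate_run module described above, with the "simulation" reduced to
    a queued function pointer that is invoked later. A real simulator
    would step instructions rather than call the function directly, but
    the halting structure is the same: every fun() terminates, yet the
    queue of pending simulations is refilled forever. All of the names
    here come from the prose above; none of this is an actual library.

    #include <stdio.h>
    #include <stdlib.h>

    typedef void (*sim_fn)(void);
    typedef struct sim { sim_fn entry; struct sim *next; } sim_t;

    static sim_t *pending = NULL;          /* unfinished simulations */

    static sim_t *simulation_create(sim_fn entry)
    {
        sim_t *s = malloc(sizeof *s);
        s->entry = entry;
        s->next = pending;
        pending = s;                       /* created, not yet stepped */
        return s;
    }

    static void simulation_destroy(sim_t *s) { free(s); }

    static void fun(void)
    {
        simulation_create(fun);            /* leave one fresh simulation behind */
    }                                      /* then halt: fun always terminates */

    static void simulate_run(long rounds)  /* capped only so the demo ends */
    {
        while (rounds-- > 0 && pending) {
            sim_t *s = pending;
            pending = s->next;
            s->entry();                    /* complete this simulation... */
            simulation_destroy(s);         /* ...and reclaim it: steady state */
        }
    }

    int main(void)
    {
        fun();
        simulate_run(1000000);             /* with no cap, this runs forever */
        printf("every simulated fun() halted; the tower never emptied\n");
        return 0;
    }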
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 20:47:08 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    I explained that a nested simulation tower is two dimensional.

    One dimension is the simulation level, the nesting itself;
    that goes out to infinity.

    Great. Thus the input to HHH(DD) specifies behavior
    such that the correctly simulated DD cannot possibly
    reach its own simulated final halt state.

    *The above point is the only relevant point to my proof*

    We need to proceed from this one point to the next points
    that are semantically entailed from this one point then
    we have my whole proof.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 22:13:59 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 9:47 PM, olcott wrote:
    On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    I explained that a nested simulation tower is two dimensional.

    One dimension is the simulation level, the nesting itself;
    that goes out to infinity.

    Great. Thus the input to HHH(DD) specifies behavior
    such that the correctly simulated DD cannot possibly
    reach its own simulated final halt state.

    Repeat of previously refuted point:

    On 10/22/2025 8:14 PM, dbush wrote:
    On 10/22/2025 7:24 PM, olcott wrote:
    Great. Thus the input to HHH(DD)

    i.e. finite string DD which is the description of machine DD i.e. <DD>
    and therefore stipulated to specify all semantic properties of machine
    DD including the fact that it halts when executed directly.

    specifies behavior
    such that the correctly simulated DD

    i.e. UTM(DD)

    cannot possibly
    reach its own simulated final halt state.

    False, as proven by UTM(DD) halting.

    This constitutes your admission that:
    1) the prior refutation is correct
    2) the point you are responding to is correct

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Julio Di Egidio@julio@diegidio.name to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 08:02:50 2025
    From Newsgroup: comp.ai.philosophy

    On 22/10/2025 21:24, André G. Isaak wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire description of HHH.

    The problem is you are a bunch of fucking retards,
    you spamming pieces of shit and 10 years of flooding
    all channels with just retarded bullshit, and rigorously
    cross-posted.

    *Plonk*

    Julio

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 09:51:57 2025
    From Newsgroup: comp.ai.philosophy

    On 10/23/2025 1:02 AM, Julio Di Egidio wrote:
    On 22/10/2025 21:24, André G. Isaak wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    The problem is you are a bunch of fucking retards,
    you spamming pieces of shit and 10 years of flooding
    all channels with just retarded bullshit, and rigorously
    cross-posted.

    *Plonk*

    Julio


    And after all these years you never understood
    what the technical term *Plonk* really means,
    because you keep responding to the same people
    whom, after a *Plonk*, you would not be able to see.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 09:55:29 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.


    *Hence the halting problem is wrong*

    Turing machine deciders only compute the mapping
    from their finite string inputs to an accept state
    or reject state on the basis that this input finite
    string specifies a semantic or syntactic property.

    This means that the ultimate measure of the behavior
    that a finite string input D specifies is D correctly
    simulated by simulating halt decider H.

    The halting problem requires that halt deciders do what
    no Turing machine decider can do: report on the semantic
    property of non-inputs.


    DD specifies a procedure that transitions to a terminating state,
    whether any given simulation of it is carried far enough to show that.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 16:47:18 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-23, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.

    *Hence the halting problem is wrong*

    The much simpler explanation is that the decider is wrong. By being
    wrong, it contributes one point of evidence that confirms the theorem.

    Turing machine deciders only compute the mapping
    from their finite string inputs to an accept state
    or reject state on the basis that this input finite
    string specifies a semantic or syntactic property.

    I don't understand what you think you're achieving by repeating
    this, but its inclusion does mean that your posting has four
    correct lines, improving its correctness average.

    The halting problem requires that halt deciders do what
    no Turing machine decider can do: report on the semantic
    property of non-inputs.

    It positively doesn't. Even the diagonal cases that defeat deciders are
    all valid inputs: self-contained finite strings denoting machines, which perpetrate their trick without referencing anything outside of their own description.

    You are just making up nonsense and presenting without a shred of
    rational evidence (which, of course, doesn't exist for a falsehood).
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 12:22:02 2025
    From Newsgroup: comp.ai.philosophy

    On 10/23/2025 11:47 AM, Kaz Kylheku wrote:
    On 2025-10-23, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.

    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.

    *Hence the halting problem is wrong*

    The much simpler explanation is that the decider is wrong.

    https://www.liarparadox.org/Simple_but_Wrong.png


    By being
    wrong, it contributes one point of evidence that confirms the theorem.

    Turing machine deciders only compute the mapping
    from their finite string inputs to an accept state
    or reject state on the basis that this input finite
    string specifies a semantic or syntactic property.

    I don't understand what you think you're achieving by repeating
    this, but its inclusion does mean that your poting has four
    correct lines, improving its correctness average.

    The halting problem requires that halt deciders do what
    no Turing machine decider can do: report on the semantic
    property of non-inputs.

    It positively doesn't.

    HHH(DD) does report on the behavior that its actual
    input actually specifies:

    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    there will still be a nested simulation tower

    The halting problem requires HHH(DD) to report on
    something else, QED the halting problem is wrong.

    Turing machine deciders only compute the mapping
    from their finite string inputs to an accept state
    or reject state on the basis that this input finite
    string specifies a semantic or syntactic property.

    This means that the ultimate measure of the behavior
    that a finite string input D specifies is D correctly
    simulated by simulating halt decider H.

    The halting problem requires that halt deciders do what
    no Turing machine decider can do: report on the semantic
    property of non-inputs.

    Even the diagonal cases that defeat deciders are

    skipping over examining the actual underlying details

    all valid inputs: self-contained finite strings denoting machines, which perpetrate their trick without referencing anything outside of their own description.

    You are just making up nonsense and presenting without a shred of
    rational evidence (which, of course, doesn't exist for a falsehood).

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 11:50:28 2025
    From Newsgroup: comp.ai.philosophy

    On 10/23/2025 10:22 AM, olcott wrote:
    [....]
    HHH(DD) does report on the behavior that its actual
    input actually specifies:

    Can your HHH(DD) hit all possible paths of DD? Keep in mind that DD's
    behavior is dependent on the return value of HHH(DD), right?

    [...]
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 21:11:13 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-23, olcott <polcott333@gmail.com> wrote:
    On 10/23/2025 11:47 AM, Kaz Kylheku wrote:
    On 2025-10-23, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire >>>>>>>>> description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.

    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.

    *Hence the halting problem is wrong*

    The much simpler explanation is that the decider is wrong.

    https://www.liarparadox.org/Simple_but_Wrong.png

    As a general remark, liarparadox.org is a site under your own control
    and so sheds no light on anything; it just repeats the claims you make
    in comp.theory.

    I have thoroughly refuted the lazy, intellectually immature and illogical
    idea that the halting problem involves anything equivalent to the
    liar paradox.

    Turing machines do not proclaim a statement, and do not self-contradict.

    Even among sentences which appear to state a truth, and which are self-referential, not all such sentences are ill-formed paradoxes.

    "This sentence has five words." is self-referential and truth bearing;
    its value is true.

    Your repeated claim that halting involves something closely analogous
    to the Liar Paradox is completely unfounded, not supported by a shred
    of rational evidence.

    The halting problem requires that halt deciders do what
    no Turing machine decider can do report on the semantic
    property of non-inputs.

    It positively doesn't.

    HHH(DD) does report on the behavior that its actual
    input actually specifies:

    Yes; it does, as required.

    That input has one and only one behavior, which is halting.

    Thus, the 0 report is incorrect.

    It meets the requirements for what to report on,
    but not the requirement for correctness.

    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    there will still be a nested simulation tower

    The halting problem requires HHH(DD) to report on
    something else, QED the halting problem is wrong.

    It does not; it requires HHH(DD) to report on the actual behavior of
    input DD.

    That input DD starts its own instance of an algorithm equivalent to the
    one used by HHH, and applies that to itself. Then it behaves in a way
    which ensures that the result does not match its behavior.

    That whole thing is encoded in the finite string input, and is its
    behavior.

    Turing machine deciders only compute the mapping
    from their finite string inputs to an accept state
    or reject state on the basis that this input finite
    string specifies a semantic or syntactic property.

    The DD input definitely specifies these properties. It is well-formed
    code that can be executed, and that reaches termination.

    This means that the ultimate measure of the behavior
    that a finite string input D specifies is D correctly
    simulated by simulating halt decider H.

    For that, a simulator is needed which doesn't abort, because
    correctly implies completely. We have been calling one such
    decider by the name HHH1. HHH1 does nothing but simulate.

    HHH1(DD) finds that DD terminates and returns 1.

    Thus it carries out the "ultimate measure".

    HHH fails to simulate DD to the end the way HHH1 does and therefore
    does not perform the "ultimate measure".
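
    A toy illustration of that distinction, under invented names: both
    simulators below step the same trivially halting computation, but one
    runs to completion while the other gives up after a fixed budget.
    Nothing here models DD or HHH themselves; it only shows that an
    aborted simulation failing to reach the final state does not make the
    simulated computation non-halting.

    #include <stdio.h>
    #include <stdbool.h>

    typedef struct { int pc; bool halted; } sim_state;

    /* One step of a fixed test computation that halts after 5 steps. */
    static void step(sim_state *s)
    {
        if (++s->pc >= 5)
            s->halted = true;
    }

    /* In the role of HHH1: simulate to completion, never abort. */
    static int complete_simulator(void)
    {
        sim_state s = {0, false};
        while (!s.halted)
            step(&s);
        return 1;                  /* saw the final halt state */
    }

    /* In the role of HHH: abort after a step budget. */
    static int aborting_simulator(int budget)
    {
        sim_state s = {0, false};
        for (int i = 0; i < budget && !s.halted; i++)
            step(&s);
        return s.halted ? 1 : 0;   /* 0 means "never saw halting" */
    }

    int main(void)
    {
        printf("complete:    %d\n", complete_simulator());   /* 1 */
        printf("aborting(3): %d\n", aborting_simulator(3));  /* 0 */
        printf("aborting(9): %d\n", aborting_simulator(9));  /* 1 */
        return 0;
    }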

    The halting problem requires that halt deciders do what
    no Turing machine decider can do: report on the semantic
    property of non-inputs.

    It simply doesn't. This claim is not based in reality, and is delivered
    without rational justification.

    The halting problem is simply the question, can there exist an algorithm
    for deciding (i.e. calculating in a finite number of steps) the halting
    status of /any/ algorithm?

    Through meticulous logic, we have derived the answer: no, an
    algorithm which decides the halting of all algorithms does not exist.

    In no way does the halting problem require "non-inputs", whatever those
    are.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 17:08:05 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.


    The finite string as an actual input to HHH(DD)
    *does not have the halting property*

    Turing machine deciders only compute the mapping
    from their finite string inputs

    Turing machine deciders only compute the mapping
    from their finite string inputs

    Turing machine deciders only compute the mapping
    from their finite string inputs

    The DD that has the halting property is not an input
    The DD that has the halting property is not an input
    The DD that has the halting property is not an input
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 15:21:26 2025
    From Newsgroup: comp.ai.philosophy

    On 10/23/2025 3:08 PM, olcott wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.


    The finite string as an actual input to HHH(DD)
    *does not have the halting property*

    Turing machine deciders only compute the mapping
    from their finite string inputs

    Turing machine deciders only compute the mapping
    from their finite string inputs

    Turing machine deciders only compute the mapping
    from their finite string inputs

    The DD that has the halting property is not an input
    The DD that has the halting property is not an input
    The DD that has the halting property is not an input


    Your DD relies on what HHH(DD) returns and acts accordingly.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 15:26:34 2025
    From Newsgroup: comp.ai.philosophy

    On 10/23/2025 3:08 PM, olcott wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.


    The finite string as an actual input to HHH(DD)
    *does not have the halting property*

    Turing machine deciders only compute the mapping
    from their finite string inputs

    Turing machine deciders only compute the mapping
    from their finite string inputs

    Turing machine deciders only compute the mapping
    from their finite string inputs

    The DD that has the halting property is not an input
    The DD that has the halting property is not an input
    The DD that has the halting property is not an input


    DD relies on HHH(DD)'s return value. It can halt, or not halt.

    If HHH(DD) never returns, it's not a simulation of DD.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 18:40:34 2025
    From Newsgroup: comp.ai.philosophy

    On 10/23/2025 6:08 PM, olcott wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.


    The finite string as an actual input to HHH(DD)

    i.e. finite string DD which is the description of machine DD and
    therefore is stipulated to specify all semantic properties of the
    described machine, including halting when executed directly.

    *does not have the halting property*

    False, see above.
    Turing machine deciders only compute the mapping
    from their finite string inputs

    And the finite string input DD has the halting property as shown above.


    Turing machine deciders only compute the mapping
    from their finite string inputs

    Turing machine deciders only compute the mapping
    from their finite string inputs

    The DD that has the halting property

    i.e. finite string DD which is an input to HHH

    is not an input
    False, see above.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 17:48:46 2025
    From Newsgroup: comp.ai.philosophy

    On 10/23/2025 5:40 PM, dbush wrote:
    On 10/23/2025 6:08 PM, olcott wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.

    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.


    The finite string as an actual input to HHH(DD)

    i.e. finite string DD which is the description of machine DD and
    therefore is stipulated to specify all semantic properties of the
    described machine, including halting when executed directly.

    *does not have the halting property*

    False, see above.
    Turing machine deciders only compute the mapping
    from their finite string inputs

    And the finite string input DD has the halting property as shown above.


    Turing machine deciders only compute the mapping
    from their finite string inputs

    Turing machine deciders only compute the mapping
    from their finite string inputs

    The DD that has the halting property

    i.e. finite string DD which is an input to HHH

    is not an input
    False, see above.

    Correct simulation is defined as simulation
    according to the semantics of the specification
    language: C, x86 or TM description.

    The execution trace of DD correctly simulated
    by HHH differs from the execution trace of
    DD correctly simulated by HHH1, proving that I
    am right and you are stupid or dishonest.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 19:09:15 2025
    From Newsgroup: comp.ai.philosophy

    On 10/23/2025 6:48 PM, olcott wrote:
    On 10/23/2025 5:40 PM, dbush wrote:
    On 10/23/2025 6:08 PM, olcott wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.

    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.


    The finite string as an actual input to HHH(DD)

    i.e. finite string DD which is the description of machine DD and
    therefore is stipulated to specify all semantic properties of the
    described machine, including halting when executed directly.

    *does not have the halting property*

    False, see above.
    Turing machine deciders only compute the mapping
    from their finite string inputs

    And the finite string input DD has the halting property as shown above.


    Turing machine deciders only compute the mapping
    from their finite string inputs

    Turing machine deciders only compute the mapping
    from their finite string inputs

    The DD that has the halting property

    i.e. finite string DD which is an input to HHH

    is not an input
    False, see above.

    Correct simulation is defined as simulation
    according to the semantics of the specification
    language: C, x86 or TM description.

    And because aborting is against the semantics of those languages, HHH
    doesn't do a correct simulation.


    The execution trace of DD correctly simulated
    by HHH
    Does not exist because HHH aborts.


    And since none of what you've written directly addresses my prior post,
    you've implicitly agreed that the above points are correct, specifically
    that the input to HHH(DD) specifies halting behavior.




    --- Synchronet 3.21a-Linux NewsLink 1.2
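    [Editor's sketch] A small illustration of the point dbush is making,
    under stated assumptions: the "program" here is a toy countdown machine
    and simulate() follows its semantics step by step. A simulator that
    aborts after a step limit observes only a prefix of the behavior; it
    has not simulated the machine to its final state, so nothing about
    halting follows from the aborted trace alone.

    #include <stdio.h>

    typedef struct { int n; } State;        /* counts down to 0, halts */

    static int step(State *s)               /* 1 while still running   */
    {
      if (s->n == 0) return 0;              /* final state reached     */
      s->n--;
      return 1;
    }

    /* simulate at most max_steps; report whether the final state
       was reached within that bound */
    static int simulate(State s, long max_steps, long *used)
    {
      long i = 0;
      while (i < max_steps && step(&s)) i++;
      *used = i;
      return s.n == 0;
    }

    int main(void)
    {
      State s = { 1000 };
      long used;
      printf("finished=%d\n", simulate(s, 100, &used));   /* aborted: 0 */
      printf("finished=%d\n", simulate(s, 10000, &used)); /* full:    1 */
      return 0;
    }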
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 23:45:19 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-23, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.


    The finite string as an actual input to HHH(DD)
    *does not have the halting property*

    It obviously does. You can directly execute it, meticulously following
    every instruction according to the x86 instruction set, showing that it
    reaches the final RET.

    The DD that has the halting property is not an input

    Yes it is. The DD that has the halting property meets the definition of
    being a finite string which has the right form and halting semantics.

    The DD that has the halting property is not an input

    You can repeat it all you like, but you have not a shred of rational
    justification for this. Other than some hand-waving nonsense about liar
    paradoxes, incorrect questions, persons (who can randomly change their
    answer when asked a question) rather than formal machines or pure
    functions, and whatnot.

    None of your rambling on these topics amounts to any rational proof
    of anything.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 18:51:04 2025
    From Newsgroup: comp.ai.philosophy

    On 10/23/2025 6:45 PM, Kaz Kylheku wrote:
    On 2025-10-23, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.

    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.


    The finite string as an actual input to HHH(DD)
    *does not have the halting property*

    It obviously does.
    Then show all the steps of DD simulated by HHH
    according to the semantics of the C programming
    language, where DD reaches its own final halt
    state by pure simulation with no inference by
    anything.

    This is exactly what I mean:

    int DD()
    {
      int Halt_Status = UTM(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      UTM(DD);
    }
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 23:55:41 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-23, olcott <polcott333@gmail.com> wrote:
    On 10/23/2025 5:40 PM, dbush wrote:
    On 10/23/2025 6:08 PM, olcott wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.

    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.


    The finite string as an actual input to HHH(DD)

    i.e. finite string DD which is the description of machine DD and
    therefore is stipulated to specify all semantic properties of the
    described machine, including halting when executed directly.

    *does not have the halting property*

    False, see above.
    Turing machine deciders only compute the mapping
    from their finite string inputs

    And the finite string input DD has the halting property as shown above.


    Turing machine deciders only compute the mapping
    from their finite string inputs

    Turing machine deciders only compute the mapping
    from their finite string inputs

    The DD that has the halting property

    i.e. finite string DD which is an input to HHH

    is not an input
    False, see above.

    Correct simulation is defined as simulation
    according to the semantics of the specification
    language: C, x86 or TM description.

    Correct simulation must continue while the
    final instruction has not been reached.

    The correct simulation of a non-terminating machine
    never stops.

    The correct simulation of a terminating machine
    must reach its halt state.

    The execution trace of DD correctly simulated
    by HHH differs from the execution trace of
    DD correctly simulated by HHH1, proving that I
    am right and you are stupid or dishonest.

    Someone identifiable as an engineer, and not necessarily even
    a great one, will immediately know that if two simulations
    (of a deterministic program that has a single behavior)
    do not agree, /at most/ one of them can be called "correct".

    They could be both wrong, but they cannot be both right.

    If you think so, then obviously you must be stupid
    or dishonest.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
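    [Editor's sketch] The engineering point Kaz makes, reduced to code
    under illustrative assumptions: a deterministic step function has one
    trace from a given start state, so two simulators that follow the same
    semantics from the same start must produce identical traces. If their
    traces differ, at least one of them failed to follow the semantics.

    #include <stdio.h>
    #include <string.h>

    static int step(int s) { return (3 * s + 1) % 17; } /* deterministic */

    static void record_trace(int s, int len, int *out)  /* a simulator  */
    {
      for (int i = 0; i < len; i++) { out[i] = s; s = step(s); }
    }

    int main(void)
    {
      int a[8], b[8];
      record_trace(5, 8, a);   /* "simulator X"                         */
      record_trace(5, 8, b);   /* "simulator Y": same semantics, start  */
      printf("traces %s\n",
             memcmp(a, b, sizeof a) == 0 ? "agree" : "differ");
      return 0;                /* a Y that differed would be incorrect  */
    }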
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 19:00:31 2025
    From Newsgroup: comp.ai.philosophy

    On 10/23/2025 6:55 PM, Kaz Kylheku wrote:
    On 2025-10-23, olcott <polcott333@gmail.com> wrote:
    On 10/23/2025 5:40 PM, dbush wrote:
    On 10/23/2025 6:08 PM, olcott wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.

    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.


    The finite string as an actual input to HHH(DD)

    i.e. finite string DD which is the description of machine DD and
    therefore is stipulated to specify all semantic properties of the
    described machine, including halting when executed directly.

    *does not have the halting property*

    False, see above.
    Turing machine deciders only compute the mapping
    from their finite string inputs

    And the finite string input DD has the halting property as shown above.


    Turing machine deciders only compute the mapping
    from their finite string inputs

    Turing machine deciders only compute the mapping
    from their finite string inputs

    The DD that has the halting property

    i.e. finite string DD which is an input to HHH

    is not an input
    False, see above.

    Correct simulation is defined as simulation
    according to the semantics of the specification
    language: C, x86 or TM description.

    Correct simulation must continue while the
    final instruction has not been reached.

    The correct simulation of a non-terminating machine
    never stops.

    The correct simulation of a terminating machine
    must reach its halt state.

    The execution trace of DD correctly simulated
    by HHH differs from the execution trace of
    DD correctly simulated by HHH1, proving that I
    am right and you are stupid or dishonest.

    Someone identifiable as an engineer, and not necessarily even
    a great one, will immediately know that if two simulations
    (of a deterministic program that has a single behavior)
    do not agree, /at most/ one of them can be called "correct".


    Once they bother to actually pay complete attention,
    they will suddenly realize that the behavior of
    DD correctly simulated by HHH specifies non-terminating
    recursive simulation that cannot possibly reach its
    own simulated final halt state.

    This is why I have to tell ChatGPT: do not assume
    one damn thing; work through all the actual steps
    and see what you find.

    They could be both wrong, but they cannot be both right.

    If you think so, then obviously you must be stupid
    or dishonest.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 20:14:33 2025
    From Newsgroup: comp.ai.philosophy

    On 10/23/2025 7:51 PM, olcott wrote:
    On 10/23/2025 6:45 PM, Kaz Kylheku wrote:
    On 2025-10-23, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.

    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.


    The finite string as an actual input to HHH(DD)
    *does not have the halting property*

    It obviously does.
    Then show all the steps of DD simulated by HHH
    according to the semantics of the C programming
    language, where DD reaches its own final halt
    state by pure simulation with no inference by
    anything.

    This is exactly what I mean:

    int DD()
    {
      int Halt_Status = UTM(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      UTM(DD);
    }



    That's not DD. That's DD' which is irrelevant.

    If you claim HHH(DD) must decide the above code DD', which is not the
    code it was given, then HHH(DD) is deciding on a non-input.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Tue Oct 28 17:28:30 2025
    From Newsgroup: comp.ai.philosophy

    On 10/28/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-10-28, dbush <dbush.mobile@gmail.com> wrote:
    On 10/28/2025 4:57 PM, olcott wrote:
    On 10/28/2025 2:37 PM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 11:35 AM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    Deciders only compute a mapping from their actual
    inputs. Computing the mapping from non-inputs is
    outside of the scope of Turing machines.

    Calculating the halting of certain inputs is indeed impossible
    for some halting algorithms.

    Not just impossible: outside of the scope of every Turing machine.
    It's the same kind of thing as requiring the purely mental object
    of a Turing machine to bake a birthday cake.

    It simply isn't. Inputs that are not correctly solvable by some
    deciders are decided by some others.


    THIS INPUT IS SOLVABLE
    THE NON-INPUT IS OUT-OF-SCOPE

    Then why do you claim that H(D) must decide on this non input?

    Because he claims that D is two things; it has two properties:
    D.input and D.noninput.

    H(D) is solving D.input (as machines are required) and (believe
    him when he says) that D.input is nonterminating.

    What is terminating is D.noninput (he acknowledges).


    Good job.
    Your naming conventions make things very clear.

    If some H_other decider is tested on H_other(D), then the "Olcott
    reality distortion wave function" (ORDF) collapses, and D.input
    becomes the same as D.noninput.


    *So we need to clarify*
    D.input_to_H versus
    D.non_input_to_H which can be:
    D.input_to_H1
    D.executed_from_main

    I take Rice's semantic properties of programs and
    clarify that this has always meant the semantic
    properties of finite string machine descriptions.

    Then I further divide this into
    (a) semantic properties of INPUT finite strings
    (b) semantic properties of NON_INPUT finite strings

    When observed by H, D.input and D.noninput have different quantum
    states: D is effectively split, identifiable as two particles. When
    observed by non-H, no difference in any quantum properties (charge, spin
    ...) is observed, and so D.input and D.noninput must be one and the
    same; they are indistinguishable particles: https://en.wikipedia.org/wiki/Indistinguishable_particles

    Olcott is a top quantum computer scientist, on the level of Dirac or
    Feynman.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,comp.ai.philosophy on Tue Oct 28 23:52:35 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-10-28, dbush <dbush.mobile@gmail.com> wrote:
    On 10/28/2025 4:57 PM, olcott wrote:
    On 10/28/2025 2:37 PM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 11:35 AM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    Deciders only compute a mapping from their actual
    inputs. Computing the mapping from non-inputs is
    outside of the scope of Turing machines.

    Calculating the halting of certain inputs is indeed impossible
    for some halting algorithms.

    Not just impossible: outside of the scope of every Turing machine.
    It's the same kind of thing as requiring the purely mental object
    of a Turing machine to bake a birthday cake.

    It simply isn't. Inputs that are not correctly solvable by some
    deciders are decided by some others.


    THIS INPUT IS SOLVABLE
    THE NON-INPUT IS OUT-OF-SCOPE

    Then why do you claim that H(D) must decide on this non input?

    Because he claims that D is two things; it has two properties:
    D.input and D.noninput.

    H(D) is solving D.input (as machines are required) and (believe
    him when he says) that D.input is nonterminating.

    What is terminating is D.noninput (he acknowledges).


    Good job.
    Your naming conventions make things very clear.

    If some H_other decider is tested on H_other(D), then the "Olcott
    reality distortion wave function" (ORDF) collapses, and D.input
    becomes the same as D.noninput.


    *So we need to clarify*
    D.input_to_H versus
    D.non_input_to_H which can be:
    D.input_to_H1
    D.executed_from_main

    I take Rice's semantic properties of programs and
    clarify that this has always meant the semantic
    properties of finite string machine descriptions.

    Then I further divide this into
    (a) semantic properties of INPUT finite strings
    (b) semantic properties of NON_INPUT finite strings

    The problem is that "input" is just a role of some datum in a context,
    like a function

    The input versus non-input distinction cannot be found in any
    binary digit of the bit string comprising the datum itself.

    And so you need an algorithm

    is_input(function, datum)

    to denote the abstract property.

    Then we need to define the property: what does it mean to be an
    input or non-input. Just like we do with halting: we know what
    it means to halt or not halt.

    Next question: can you calculate this property? Say we know
    what it means to be a non-input; can we reliably calculate it
    for all possible <function, datum> pairs?

    If that property is itself incalculable, how does it help defeat the
    halting theorem?
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
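    [Editor's sketch] The shape of the predicate Kaz is describing, as a C
    declaration. Everything here is hypothetical: the point is only that
    the input/non-input distinction would itself have to be a computable
    property of a <decider, datum> pair before anyone could use it.

    typedef int (*decider)(const char *description);

    /* 1 if datum counts as an input to f, 0 if it is a non-input.
       Before a 0 from f can be interpreted, a consumer needs this value;
       whether it is computable for all <f, datum> pairs is exactly the
       question Kaz raises above. */
    int is_input(decider f, const char *datum);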
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Tue Oct 28 19:01:40 2025
    From Newsgroup: comp.ai.philosophy

    On 10/28/2025 6:52 PM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-10-28, dbush <dbush.mobile@gmail.com> wrote:
    On 10/28/2025 4:57 PM, olcott wrote:
    On 10/28/2025 2:37 PM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 11:35 AM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    Deciders only compute a mapping from their actual
    inputs. Computing the mapping from non-inputs is
    outside of the scope of Turing machines.

    Calculating the halting of certain inputs is indeed impossible
    for some halting algorithms.

    Not just impossible: outside of the scope of every Turing machine.
    It's the same kind of thing as requiring the purely mental object
    of a Turing machine to bake a birthday cake.

    It simply isn't. Inputs that are not correctly solvable by some
    deciders are decided by some others.


    THIS INPUT IS SOLVABLE
    THE NON-INPUT IS OUT-OF-SCOPE

    Then why do you claim that H(D) must decide on this non input?

    Because he claims that D is two things; it has two properties:
    D.input and D.noninput.

    H(D) is solving D.input (as machines are required) and (believe
    him when he says) that D.input is nonterminating.

    What is terminating is D.noninput (he acknowledges).


    Good job.
    Your naming conventions make things very clear.

    If some H_other decider is tested on H_other(D), then the "Olcott
    reality distortion wave function" (ORDF) collapses, and D.input
    becomes the same as D.noninput.


    *So we need to clarify*
    D.input_to_H versus
    D.non_input_to_H which can be:
    D.input_to_H1
    D.executed_from_main

    I take Rice's semantic properties of programs and
    clarify that this has always meant the semantic
    properties of finite string machine descriptions.

    Then I further divide this into
    (a) semantic properties of INPUT finite strings
    (b) semantic properties of NON_INPUT finite strings

    The problem is that "input" is just a role of some datum in a context,
    like a function

    The input versus non-input distinction cannot be found in any
    binary digit of the bit string comprising the datum itself.


    Unless you bother to pay attention to the fact that
    the sequence of steps of D simulated by H is a different
    sequence of steps than D simulated by H1.

    I have already explained that hundreds of times and
    people are too full of themselves to see this.

    I do now have the advantage of referencing this in
    terms of Rice's semantic properties of finite string
    machine descriptions. When I can anchor my view as
    an adaptation of Rice's view I have more credibility.

    And so you need an algorithm

    is_input(function, datum)

    to denote the abstract property.

    Then we need to define the property: what does it mean to be an
    input or non-input. Just like we do with halting: we know what
    it means to halt or not halt.

    Next question: can you calculate this property? Say we know
    what it means to be a non-input; can we reliably calculate it
    for all possible <function, datum> pairs?

    If that property is itself incalculable, how does it help defeat the halting theorem?

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,comp.ai.philosophy on Wed Oct 29 00:25:17 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 6:52 PM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-10-28, dbush <dbush.mobile@gmail.com> wrote:
    On 10/28/2025 4:57 PM, olcott wrote:
    On 10/28/2025 2:37 PM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 11:35 AM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    Deciders only compute a mapping from their actual
    inputs. Computing the mapping from non-inputs is
    outside of the scope of Turing machines.

    Calculating the halting of certain inputs is indeed impossible
    for some halting algorithms.

    Not just impossible: outside of the scope of every Turing machine.
    It's the same kind of thing as requiring the purely mental object
    of a Turing machine to bake a birthday cake.

    It simply isn't. Inputs that are not correctly solvable by some
    deciders are decided by some others.


    THIS INPUT IS SOLVABLE
    THE NON-INPUT IS OUT-OF-SCOPE

    Then why do you claim that H(D) must decide on this non input?

    Because he claims that D is two things; it has two properties:
    D.input and D.noninput.

    H(D) is solving D.input (as machines are required) and (believe
    him when he says) that D.input is nonterminating.

    What is terminating is D.noninput (he acknowledges).


    Good job.
    Your naming conventions make things very clear.

    If some H_other decider is tested on H_other(D), then the "Olcott
    reality distortion wave function" (ORDF) collapses, and D.input
    becomes the same as D.noninput.


    *So we need to clarify*
    D.input_to_H versus
    D.non_input_to_H which can be:
    D.input_to_H1
    D.executed_from_main

    I take Rice's semantic properties of programs and
    clarify that this has always meant the semantic
    properties of finite string machine descriptions.

    Then I further divide this into
    (a) semantic properties of INPUT finite strings
    (b) semantic properties of NON_INPUT finite strings

    The problem is that "input" is just a role of some datum in a context,
    like a function

    The input versus non-input distinction cannot be found in any
    binary digit of the bit string comprising the datum itself.


    Unless you bother to pay attention to the fact that
    the sequence of steps of D simulated by H is a different
    sequence of steps than D simulated by H1.

    How do we know what is correct?

    Suppose I'm given an input D, and two deciders X and Y.
    I know nothing about these. (But they are simulating deciders,
    producing different executions.)

    X accepts, Y rejects.

    Do I regard both of them as right? Or one of them wrong?
    If so, which one? Or both wrong?

    Suppose I happen to be sure that D halts. So I know X is correct.

    Under your system, I don't know whether Y is correct.

    Y could be a broken decider that is wrongly deciding D (and /that/
    is why its execution trace differs from X).

    Or it could be the case that D is a non-input to Y, in which case Y is
    deemed to be correct because D being a non-input to Y means that D
    denotes non-halting semantics to Y (and /that/ is why its execution
    trace differs from X).

    The fact that the execution trace differs doesn't inform.

    We need to know the value of is_input(Y, D): we need to /decide/ whether
    D is non-input or input to Y in order to /decide/ whether its rejection
    is correct.

    Do you not see that your concept leaves decision problems?

    You are not looking at it from the perspective of a /consumer/ of a
    /decider product/ actually trying to use deciders and trust their
    answer.

    If you allow halting programs to be decided as non-halting in situations
    in which they are non-inputs, the end user of deciders has to /know/
    when they are looking at that case, and when they are dealing with a
    broken decider.

    That's a decision problem you have punted to the end user.

    I say that that decision problem you have punted to the end user
    is incomputable!!!

    So even if ostensibly you resolved the halting problem on one level
    by simply /excusing/ some programs from calculating the traditional
    halting value when they are given non-inputs, you've not actually
    solved the real halting problem: that of the end-user simply wanting
    to know whether a given program will terminate or not. The
    end user has no way of knowing whether a program was excused on
    a non-input, or whether it is just fumbling an input.

    It doesn't appear you've improved the situation at all; you've
    just reshuffled how incomputability fits into the picture.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
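    [Editor's sketch] The consumer's dilemma from the post above, in code.
    H is an assumed external; the only point is that under the proposed
    paradigm a 0 from a "certified correct" H is ambiguous to the person
    who just wants to know whether D halts.

    extern int H(const char *program);   /* certified decider (assumed) */

    int does_D_halt(const char *D)
    {
      int verdict = H(D);
      if (verdict == 1)
        return 1;                 /* unambiguous: D halts               */
      /* verdict == 0 is ambiguous: either D really does not halt, or D
         is a "non-input" to H and the 0 is excused. Distinguishing the
         two cases is the decision problem punted to the end user.     */
      return 0;
    }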
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Tue Oct 28 19:33:20 2025
    From Newsgroup: comp.ai.philosophy

    On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 6:52 PM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-10-28, dbush <dbush.mobile@gmail.com> wrote:
    On 10/28/2025 4:57 PM, olcott wrote:
    On 10/28/2025 2:37 PM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 11:35 AM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    Deciders only compute a mapping from their actual
    inputs. Computing the mapping from non-inputs is
    outside of the scope of Turing machines.

    Calculating the halting of certain inputs is indeed impossible
    for some halting algorithms.

    Not just impossible: outside of the scope of every Turing machine.
    It's the same kind of thing as requiring the purely mental object
    of a Turing machine to bake a birthday cake.

    It simply isn't. Inputs that are not correctly solvable by some
    deciders are decided by some others.


    THIS INPUT IS SOLVABLE
    THE NON-INPUT IS OUT-OF-SCOPE

    Then why do you claim that H(D) must decide on this non input?

    Because he claims that D is two things; it has two properties:
    D.input and D.noninput.

    H(D) is solving D.input (as machines are required) and (believe
    him when he says) that D.input is nonterminating.

    What is terminating is D.noninput (he acknowledges).


    Good job.
    Your naming conventions make things very clear.

    If some H_other decider is tested on H_other(D), then the "Olcott
    reality distortion wave function" (ORDF) collapses, and D.input
    becomes the same as D.noninput.


    *So we need to clarify*
    D.input_to_H versus
    D.non_input_to_H which can be:
    D.input_to_H1
    D.executed_from_main

    I take Rice's semantic properties of programs and
    clarify that this has always meant the semantic
    properties of finite string machine descriptions.

    Then I further divide this into
    (a) semantic properties of INPUT finite strings
    (b) semantic properties of NON_INPUT finite strings

    The problem is that "input" is just a role of some datum in a context,
    like a function

    The input versus non-input distinction cannot be found in any
    binary digit of the bit string comprising the datum itself.


    Unless you bother to pay attention to the fact that
    the sequence of steps of D simulated by H is a different
    sequence of steps than D simulated by H1.

    How do we know what is correct?


    Because all deciders only report on what their
    input specifies, H(D)==0 and H1(D)==1 are both correct.

    Suppose I'm given an input D, and two deciders X and Y.
    I know nothing about these. (But they are simulating deciders,
    producing different executions.)

    X accepts, Y rejects.

    Do I regard both of them as right? Or one of them wrong?
    If so, which one? Or both wrong?

    Suppose I happen to be sure that D halts. So I know X is correct.


    As far as a DOS (denial of service) attack
    goes, H(D)==0 correctly rejects its input.

    Under your system, I don't know whether Y is correct.

    Y could be a broken decider that is wrongly deciding D (and /that/
    is why its execution trace differs from X).

    Or it could be the case that D is a non-input to Y, in which case Y is
    deemed to be correct because D being a non-input to Y means that D
    denotes non-halting semantics to Y (and /that/ is why its execution
    trace differs from X).

    The fact that the execution trace differs doesn't inform.

    We need to know the value of is_input(Y, D): we need to /decide/ whether
    D is non-input or input to Y in order to /decide/ whether its rejection
    is correct.


    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    Do you not see that your concept leaves decision problems?


    Yes. It leaves them properly resolved.

    You are not looking at it from the perspective of a /consumer/ of a
    /decider product/ actually trying to use deciders and trust their
    answer.


    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    If you allow halting programs to be decided as non-halting in situations
    in which they are non-inputs, the end user of deciders has to /know/
    when they are looking at that case, and when they are dealing with a
    broken decider.

    That's a decision problem you have punted to the end user.

    I say that that decision problem you have punted to the end user
    is incomputable!!!

    So even if ostensibly you resolved the halting problem on one level
    by simply /excusing/ some programs from calculating the traditional
    halting value when they are given non-inputs, you've not actually
    solved the real halting problem: that of the end-user simply wanting
    to know whether a given program will terminate or not. The
    end user has no way of knowing whether a program was excused on
    a non-input, or whether it is just fumbling an input.

    It doesn't appear you've improved the situation at all; you've
    just reshuffled how incomputability fits into the picture.


    The definition of a decider requires it to
    only report on what its input specifies.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,comp.ai.philosophy on Tue Oct 28 17:56:22 2025
    From Newsgroup: comp.ai.philosophy

    On 10/28/2025 5:33 PM, olcott wrote:
    On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 6:52 PM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-10-28, dbush <dbush.mobile@gmail.com> wrote:
    On 10/28/2025 4:57 PM, olcott wrote:
    On 10/28/2025 2:37 PM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 11:35 AM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    Deciders only compute a mapping from their actual
    inputs. Computing the mapping from non-inputs is
    outside of the scope of Turing machines.

    Calculating the halting of certain inputs is indeed impossible
    for some halting algorithms.

    Not just impossible: outside of the scope of every Turing machine.
    It's the same kind of thing as requiring the purely mental object
    of a Turing machine to bake a birthday cake.

    It simply isn't. Inputs that are not correctly solvable by some
    deciders are decided by some others.


    THIS INPUT IS SOLVABLE
    THE NON-INPUT IS OUT-OF-SCOPE

    Then why do you claim that H(D) must decide on this non input?

    Because he claims that D is two things; it has two properties:
    D.input and D.noninput.

    H(D) is solving D.input (as machines are required) and (believe
    him when he says) that D.input is nonterminating.

    What is terminating is D.noninput (he acknowledges).


    Good job.
    Your naming conventions make things very clear.

    If some H_other decider is tested on H_other(D), then the "Olcott
    reality distortion wave function" (ORDF) collapses, and D.input
    becomes the same as D.noninput.


    *So we need to clarify*
        D.input_to_H versus
        D.non_input_to_H which can be:
          D.input_to_H1
          D.executed_from_main

    I take Rice's semantic properties of programs and
    clarify that this has always meant the semantic
    properties of finite string machine descriptions.

    Then I further divide this into
    (a) semantic properties of INPUT finite strings
    (b) semantic properties of NON_INPUT finite strings

    The problem is that "input" is just a role of some datum in a context,
    like a function

    The input versus non-input distinction cannot be found in any
    binary digit of the bit string comprising the datum itself.


    Unless you bother to pay attention to the fact that
    the sequence of steps of D simulated by H is a different
    sequence of steps than D simulated by H1.

    How do we know what is correct?


    Because all deciders only report on what their
    input specifies, H(D)==0 and H1(D)==1 are both correct.

    Suppose I'm given an input D, and two deciders X and Y.
    I know nothing about these.  (But they are simulating deciders,
    producing different executions.)

    X accepts, Y rejects.

    Do I regard both of them as right? Or one of them wrong?
    If so, which one? Or both wrong?

    Suppose I happen to be sure that D halts.  So I know X is correct.


    As far as a DOS (denial of service) attack
    goes, H(D)==0 correctly rejects its input.

    Imvvho, I don't think you know what to do with a proper DOS, or a DDOS.
    Think of a bunch of connections coming in: they send some commands to the
    server, communicate a little, then stop for, say, a random amount of time,
    then continue, then stop. Then a flood of connections comes in that
    executes commands that upload/download big random files, over and over
    again. Some of them say done! Some of them hang around for, say, 20
    minutes holding your socket open. I don't think you know what a DOS even
    is. Humm... lol.
    [...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,comp.ai.philosophy on Tue Oct 28 17:57:14 2025
    From Newsgroup: comp.ai.philosophy

    On 10/28/2025 5:56 PM, Chris M. Thomasson wrote:
    On 10/28/2025 5:33 PM, olcott wrote:
    On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 6:52 PM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 5:14 PM, Kaz Kylheku wrote:
    On 2025-10-28, dbush <dbush.mobile@gmail.com> wrote:
    On 10/28/2025 4:57 PM, olcott wrote:
    On 10/28/2025 2:37 PM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 11:35 AM, Kaz Kylheku wrote:
    On 2025-10-28, olcott <polcott333@gmail.com> wrote:
    Deciders only compute a mapping from their actual
    inputs. Computing the mapping from non-inputs is
    outside of the scope of Turing machines.

    Calculating the halting of certain inputs is indeed impossible
    for some halting algorithms.

    Not just impossible: outside of the scope of every Turing machine.
    It's the same kind of thing as requiring the purely mental object
    of a Turing machine to bake a birthday cake.

    It simply isn't. Inputs that are not correctly solvable by some
    deciders are decided by some others.


    THIS INPUT IS SOLVABLE
    THE NON-INPUT IS OUT-OF-SCOPE

    Then why do you claim that H(D) must decide on this non input?

    Because he claims that D is two things; it has two properties:
    D.input and D.noninput.

    H(D) is solving D.input (as machines are required) and (believe
    him when he says) that D.input is nonterminating.

    What is terminating is D.noninput (he acknowledges).


    Good job.
    Your naming conventions make things very clear.

    If some H_other decider is tested on H_other(D), then the "Olcott
    reality distortion wave function" (ORDF) collapses, and D.input
    becomes the same as D.noninput.


    *So we need to clarify*
        D.input_to_H versus
        D.non_input_to_H which can be:
          D.input_to_H1
          D.executed_from_main

    I take Rice's semantic properties of programs and
    clarify that this has always meant the semantic
    properties of finite string machine descriptions.

    Then I further divide this into
    (a) semantic properties of INPUT finite strings
    (b) semantic properties of NON_INPUT finite strings

    The problem is that "input" is just a role of some datum in a context,
    like a function

    The input versus non-input distinction cannot be found in any
    binary digit of the bit string comprising the datum itself.


    Unless you bother to pay attention to the fact that
    the sequence of steps of D simulated by H is a different
    sequence of steps than D simulated by H1.

    How do we know what is correct?


    Because all deciders only report on what their
    input specifies, H(D)==0 and H1(D)==1 are both correct.

    Suppose I'm given an input D, and two deciders X and Y.
    I know nothing about these.  (But they are simulating deciders,
    producing different executions.)

    X accepts, Y rejects.

    Do I regard both of them as right? Or one of them wrong?
    If so, which one? Or both wrong?

    Suppose I happen to be sure that D halts.  So I know X is correct.


    As far as a DOS (denial of service) attack
    goes H(D)==0 correctly rejects its input.

    Imvvho, I don't think you know what to do with a proper DOS, or a DDOS.
    Think of a bunch of connections coming in: they send some commands to
    the server, communicate a little, then stop for, say, a random amount
    of time, then continue, then stop. Then a flood of connections come in
    that execute commands that upload/download big random files, over and
    over again. Some of them say done!, some of them hang around for, say,
    20 minutes holding your socket open. I don't think you know what a DOS
    even is? Humm... lol.
    [...]


    The fun part is that the connections can come from infected computers as well... ;^)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,comp.ai.philosophy on Wed Oct 29 02:19:47 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
    Under your system, I don't know whether Y is correct.

    Y could be a broken decider that is wrongly deciding D (and /that/
    is why its execution trace differs from X).

    Or it could be the case that D is a non-input to Y, in which case Y is
    deemed to be correct because D being a non-input to Y means that D
    denotes non-halting semantics to Y (and /that/ is why its execution
    trace differs from X).

    The fact that the execution trace differs doesn't inform.

    We need to know the value of is_input(Y, D): we need to /decide/ whether
    D is non-input or input to Y in order to /decide/ whether its rejection
    is correct.


    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    But under your system, if I am a user of deciders, and have been
    given a decider H which is certified to be correct, I cannot
    rely on it to decide halting.

    I want to know whether D halts, that's all.

    H says no. It is certified correct under your paradigm, so
    I don't have to suspect that if it is given an /input/
    it will be wrong.

    But: I have no idea whether D is an input to H or a non-input!

    When H says 0, I have no idea whether it's being judged non-halting
    as an input, or whether it's being judged as a non-input (whereby
    either value is the correct answer as far as H is concerned).

    Again, I just want to know, does D halt?

    Under your paradigm, even though I have a certified correct H,
    I am not informed.

    Under the standard halting problem, I am not informed because
    I /don't/ have a certified correct H; it doesn't exist.

    Under your paradigm, I have a deemed-correct H, which is
    excused for giving a garbage answer on non-inputs, which
    I have no way to identify.

    How am I better off in your paradigm?

    Do I use 10 different certified deciders, and take a majority vote?

    But the function which combines 10 deciders into a majority vote
    is itself a decider! And that 10-majority-decider function can be
    targeted by a diagonal test case ... and such a test case is now
    a non-input. See?
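
    A short C sketch of this point (the "certified" deciders here are
    stand-in stubs, and all names are illustrative assumptions): the
    majority vote over a panel of deciders is itself just another
    decider, so the usual diagonal case can target it directly.

    #include <stdio.h>

    typedef int (*ptr)(void);       /* a machine under test          */
    typedef int (*decider)(ptr);    /* 1 = halts, 0 = does not halt  */

    static int Ha(ptr P) { (void)P; return 1; }  /* stub decider */
    static int Hb(ptr P) { (void)P; return 0; }  /* stub decider */
    static int Hc(ptr P) { (void)P; return 1; }  /* stub decider */

    static decider panel[] = { Ha, Hb, Hc };
    enum { N = sizeof panel / sizeof panel[0] };

    /* The combined decider: accept iff a majority accepts. */
    static int H_vote(ptr P)
    {
        int yes = 0;
        for (int i = 0; i < N; i++)
            yes += panel[i](P);
        return 2 * yes > N;
    }

    /* The diagonal case aimed at the combined decider: whatever
       the panel votes, D_vote does the opposite. */
    static int D_vote(void)
    {
        if (H_vote(D_vote))
            for (;;) {}             /* voted "halts": loop forever */
        return 0;                   /* voted "loops": halt         */
    }

    int main(void)
    {
        printf("H_vote(D_vote) = %d\n", H_vote(D_vote));
        printf("so D_vote() would actually %s\n",
               H_vote(D_vote) ? "loop forever" : "halt");
        return 0;
    }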

    You are not looking at it from the perspective of a /consumer/ of a
    /decider product/ actually trying to use deciders and trust their
    answer.

    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    But how does the user interpret that result?

    The user just wants to know, does this thing halt or not?

    How does it answer the user's question?

    The user's question is not incorrect; it may be incorrect when
    posed to H, in which case the user needs some other H, like H1.

    How do they decide between H and H1?
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Tue Oct 28 21:43:47 2025
    From Newsgroup: comp.ai.philosophy

    On 10/28/2025 9:19 PM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
    Under your system, I don't know whether Y is correct.

    Y could be a broken decider that is wrongly deciding D (and /that/
    is why its execution trace differs from X).

    Or it could be the case that D is a non-input to Y, in which case Y is
    deemed to be correct because D being a non-input to Y means that D
    denotes non-halting semantics to Y (and /that/ is why its execution
    trace differs from X).

    The fact that the execution trace differs doesn't inform.

    We need to know the value of is_input(Y, D): we need to /decide/ whether
    D is non-input or input to Y in order to /decide/ whether its rejection
    is correct.


    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    But under your system, if I am a user of deciders, and have been
    given a decider H which is certified to be correct, I cannot
    rely on it to decide halting.


    When halting is defined correctly:
    Does this input specify a sequence of moves that
    reach a final halt state?

    and not defined incorrectly (as requiring something
    that is not specified in the input), then this does
    overcome the halting problem proof and shows that
    the halting problem itself has always been a category
    error (Flibble's brilliant term).

    I want to know whether D halts, that's all.

    H says no. It is certified correct under your paradigm, so
    I don't have to suspect that if it is given an /input/
    it will be wrong.

    But: I have no idea whether D is an input to H or a non-input!


    That is ridiculous. If it is an argument
    to the decider function then it is an input.

    When H says 0, I have no idea whether it's being judged non-halting
    as an input, or whether it's being judged as a non-input (whereby
    either value is the correct answer as far as H is concerned).


    Judging by anything besides an input has always
    been incorrect. H(D) maps its input to a reject
    value on the basis of the behavior that this
    argument to H specifies.

    Again, I just want to know, does D halt?


    You might also want a purely mental Turing
    machine to bake you a birthday cake.

    Under your paradigm, even though I have a certified correct H,
    I am not informed.

    Under the standard halting problem, I am not informed because
    I /don't/ have a certified correct H; it doesn't exist.


    The standard halting problem requires behavior
    that is out-of-scope for Turing machines, like
    requiring that they bake birthday cakes.

    Under your paradigm, I have a deemed-correct H, which is
    excused for giving a garbage answer on non-inputs, which
    I have no way to identify.


    int sum(int x, int y){return x + y;}
    Expecting sum(3,4) to return the sum of 5 + 7 is nuts.
    A function only computes from its arguments.

    How am I better off in your paradigm?


    In my paradigm you face reality rather than
    ignoring it.

    Do I use 10 different certified deciders, and take a majority vote?


    sum(3,4) computes the sum of 3+4 even if
    the sum of 5+6 is required from sum(3,4).

    Whatever behavior is measured by the decider's
    simulation of its input *is* the behavior that
    it must report on.

    But the function which combines 10 deciders into a majority vote
    is itself a decider! And that 10-majority-decider function can be
    targeted by a diagonal test case ... and such a test case is now
    a non-input. See?

    You are not looking at it from the perspective of a /consumer/ of a
    /decider product/ actually trying to use deciders and trust their
    answer.

    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    But how does the user interpret that result?


    The input to this decider specifies a sequence
    that cannot possibly reach its final halt state.

    The user just wants to know, does this thing halt or not?


    The user may equally want a purely imaginary
    Turing machine to bake a birthday cake.

    How does it answer the user's question?


    As far as theoretical limitations go I have addressed
    them. Practical workarounds can be addressed after I
    am published and my work is accepted.

    The user's question is not incorrect; it may be incorrect when
    posed to H, in which case the user needs some other H, like H1.

    How do they decide between H and H1?


    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy on Tue Oct 28 23:07:12 2025
    From Newsgroup: comp.ai.philosophy

    On 10/28/25 10:43 PM, olcott wrote:
    On 10/28/2025 9:19 PM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
    Under your system, I don't know whether Y is correct.

    Y could be a broken decider that is wrongly deciding D (and /that/
    is why its execution trace differs from X).

    Or it could be the case that D is a non-input to Y, in which case Y is
    deemed to be correct because D being a non-input to Y means that D
    denotes non-halting semantics to Y (and /that/ is why its execution
    trace differs from X).

    The fact that the execution trace differs doesn't inform.

    We need to know the value of is_input(Y, D): we need to /decide/ whether
    D is non-input or input to Y in order to /decide/ whether its rejection
    is correct.


    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    But under your system, if I am a user of deciders, and have been
    given a decider H which is certified to be correct, I cannot
    rely on it to decide halting.


    When halting is defined correctly:
    Does this input specify a sequence of moves that
    reach a final halt state?

    Which D does; H just doesn't follow all of them, so H is just wrong.


    and not defined incorrectly (as requiring something
    that is not specified in the input), then this does
    overcome the halting problem proof and shows that
    the halting problem itself has always been a category
    error (Flibble's brilliant term).

    So why isn't the program D that calls this particular H part of
    the input?

    You have stated that the input is just the function pointer, and thus we
    have no dividing line in memory to tell us what we can or can not look
    at, so if we can see the code of the C function D, we can see the code
    of this particular function H, and that, by everything you have done,
    *WILL* return 0 to that D, and thus D will halt when run or completely simulated.

    Your problem is you think lies, like that H does a correct simulation
    when it only partially simulates, are truths, because you don't know
    what truth is.


    I want to know whether D halts, that's all.

    H says no. It is certified correct under your paradigm, so
    I don't have to suspect that if it is given an /input/
    it will be wrong.

    But: I have no idea whether D is an input to H or a non-input!


    That is ridiculous. If it is an argument
    to the decider function then it is an input.

    So: *ALL* the code of D and the H it calls are the input, and that
    specifies a halting program, as that H returns 0 to D.


    When H says 0, I have no idea whether it's being judged non-halting
    as an input, or whether it's being judged as a non-input (whereby
    either value is the correct answer as far as H is concerned).


    Judging by anything besides an input has always
    been incorrect. H(D) maps its input to a reject
    value on the basis of the behavior that this
    argument to H specifies.

    So, you admit that H's presumption that the H it sees being called
    isn't ever going to return a value is just incorrect.

    Thus, your logic is based on something that is incorrect being
    considered to be correct.


    Again, I just want to know, does D halt?


    You might also want a purely mental Turing
    machine to bake you a birthday cake.

    In other words, you can't answer because you know it will show your
    stupidity.


    Under your paradigm, even though I have a certified correct H,
    I am not informed.

    Under the standard halting problem, I am not informed because
    I /don't/ have a certified correct H; it doesn't exist.


    The standard halting problem requires behavior
    that is out-of-scope for Turing machines, like
    requiring that they bake birthday cakes.


    What is out-of-scope about asking about the behavior of the actual input?

    I guess you consider the number 4 to be out of scope for the input 1+3,
    as it isn't actually in the input.

    Under your paradigm, I have a deemed-correct H, which is
    excused for giving a garbage answer on non-inputs, which
    I have no way to identify.


    int sum(int x, int y){return x + y;}
    Expecting sum(3,4) to return the sum of 5 + 7 is nuts.
    A function only computes from its arguments.

    Right, so answering about a D that calls a different H, an H that
    doesn't abort, is just ridiculous, but that is what you do.

    You are just admitting that you are a stupid pathological liar.


    How am I better off in your paradigm?


    In my paradigm you face reality rather than
    ignoring it.

    Since when?

    You ignore reality because you decided not to try to understand it.


    Do I use 10 different certified deciders, and take a majority vote?


    sum(3,4) computes the sum of 3+4 even if
    the sum of 5+6 is required from sum(3,4).

    How can the sum of 5 + 6 be required from sum(3, 4)?

    Is that like your H(D) is somehow required to use a wrong definition of
    the H that D calls? Some "hypothetical" one, not the actual one that is
    there?

    That is just your logic being based on lies.


    Whatever behavior is measured by the decider's
    simulation of its input *is* the behavior that
    it must report on.


    In other words, your logic is based on H being allowed to make up
    whatever answer it wants to claim.

    In other words, your logic is based on the right to lie.

    Yep, that is your daddy's logic system.

    But the function which combines 10 deciders into a majority vote
    is itself a decider! And that 10-majority-decider function can be
    targeted by a diagonal test case ... and such a test case is now
    a non-input.  See?

    You are not looking at it from the perspective of a /consumer/ of a
    /decider product/ actually trying to use deciders and trust their
    answer.

    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    But how does the user interpret that result?


    The input to this decider specifies a sequence
    that cannot possibly reach its final halt state.

    Sure it can, just not by the decider.


    The user just wants to know, does this thing halt or not?


    The user may equally want a purely imaginary
    Turing machine to bake a birthday cake.

    How does it answer the user's question?


    As far as theoretical limitations go I have addressed
    them. Practical workarounds can be addressed after I
    am published and my work is accepted.

    Nope, you have an actual limitation of being totally ignorant of what
    you are talking about, and your words end up being devoid of meaning,
    as you reject that words need to have their actual meaning; thus
    everything you say is effectively meaningless.


    The user's question is not incorrect; it may be incorrect when
    posed to H, in which case the user needs some other H, like H1.

    How do they decide between H and H1?





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Tue Oct 28 22:26:43 2025
    From Newsgroup: comp.ai.philosophy

    On 10/28/2025 10:16 PM, Richard Heathfield wrote:
    On 28/10/2025 21:58, dbush wrote:
    On 10/28/2025 4:51 PM, olcott wrote:

    <snip>

    <repeat of previously refuted point>

    So again you admit that Kaz's code proves that D is halting.

    Credit where credit is due. By returning 0*, /olcott's/ code proves that
    D is halting.

    *i.e. non-halting.

    Yet (as I have said hundreds of times) Turing machines
    can only compute the mapping from *INPUT* finite string
    machine descriptions to the behavior that these *INPUT*
    finite string machine descriptions *ACTUALLY SPECIFY*

    D simulated by H SPECIFIES NOT-HALTING BEHAVIOR.
    The halting problem itself makes a category error
    when it requires deciders to report on behavior
    other than the behavior that *THEIR INPUT SPECIFIES*
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,comp.ai.philosophy on Tue Oct 28 23:34:23 2025
    From Newsgroup: comp.ai.philosophy

    On 10/28/2025 11:26 PM, olcott wrote:
    On 10/28/2025 10:16 PM, Richard Heathfield wrote:
    On 28/10/2025 21:58, dbush wrote:
    On 10/28/2025 4:51 PM, olcott wrote:

    <snip>

    <repeat of previously refuted point>

    So again you admit that Kaz's code proves that D is halting.

    Credit where credit is due. By returning 0*, /olcott's/ code proves
    that D is halting.

    *i.e. non-halting.

    Yet (as I have said hundreds of times) Turing machines
    can only compute the mapping from *INPUT* finite string
    machine descriptions to the behavior that these *INPUT*
    finite string machine descriptions *ACTUALLY SPECIFY*

    Similarly, in the code below, H_S1(foo) == 0 is correct because it's
    computing the mapping from its input finite string machine description
    to the behavior that the finite string input actually specifies.

    Agreed?



    #include <stdio.h>

    typedef void (*ptr)(void);  /* assumed typedef; not given in the post */

    /* Stub standing in for the single-step simulator, which was left
       undefined in the post: it pretends one instruction of P was
       simulated and reports that P is not yet done.  The parameter is
       taken as ptr (not ptr *) so that H_S1(foo) type-checks. */
    int simulate_one_instruction(ptr P, void **state)
    {
        (void)P;
        (void)state;
        return 0;               /* not done after one instruction */
    }

    int H_S1(ptr P)
    {
        void *state = NULL;
        int done;

        done = simulate_one_instruction(P, &state);
        if (done) {
            return 1;
        } else {
            return 0;
        }
    }

    void foo(void)
    {
        puts("line 1");
        puts("line 2");
        puts("line 3");
    }

    int main(void)
    {
        printf("H_S1(foo) = %d\n", H_S1(foo));
        return 0;
    }
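
    With the hypothetical stub above standing in for the undefined
    single-step simulator, this prints H_S1(foo) = 0: H_S1 reports only
    what its one-step simulation of the input measures, even though
    foo() plainly halts.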

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,comp.ai.philosophy on Wed Oct 29 05:36:13 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 9:19 PM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
    Under your system, I don't know whether Y is correct.

    Y could be a broken decider that is wrongly deciding D (and /that/
    is why its execution trace differs from X).

    Or it could be the case that D is a non-input to Y, in which case Y is
    deemed to be correct because D being a non-input to Y means that D
    denotes non-halting semantics to Y (and /that/ is why its execution
    trace differs from X).

    The fact that the execution trace differs doesn't inform.

    We need to know the value of is_input(Y, D): we need to /decide/ whether
    D is non-input or input to Y in order to /decide/ whether its rejection
    is correct.


    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    But under your system, if I am a user of deciders, and have been
    given a decider H which is certified to be correct, I cannot
    rely on it to decide halting.


    When halting is defined correctly:
    Does this input specify a sequence of moves that
    reach a final halt state?

    and not defined incorrectly (as requiring something
    that is not specified in the input), then this does
    overcome the halting problem proof and shows that
    the halting problem itself has always been a category
    error (Flibble's brilliant term).

    I want to know whether D halts, that's all.

    H says no. It is certified correct under your paradigm, so
    I don't have to suspect that if it is given an /input/
    it will be wrong.

    But: I have no idea whether D is an input to H or a non-input!


    That is ridiculous. If it is an argument
    to the decider function then it is an input.

    So how is it supposed to work that an otherwise halting D
    is a non-halting input to H?

    When the non-halting D is an input to H (which it undeniably is, as
    you have now decided), D is non-halting.

    With respect to H, it's as if the halting D exists in another dimension;
    /that/ D is not the input.

    Okay, but anyway ...

    - The decider user has some program P.

    - P terminates, but it takes three years on the user's hardware.

    - The user does not know this; they tried running P for weeks,
    months, but it never terminated.

    - The user has H which they have been assured is correct under
    the Olcott Halting Paradigm.

    - They apply H to P, and H rejects it.

    - The program P is actually D, but the user doesn't know this.

    What should the user believe? Does D halt or not?

    How is the user /not/ deceived if they believe that P doesn't halt?

    When H says 0, I have no idea whether it's being judged non-halting
    as an input, or whether it's being judged as a non-input (whereby
    either value is the correct answer as far as H is concerned).


    Judging by anything besides an input has always
    been incorrect. H(D) maps its input to a reject
    value on the basis of the behavior that this
    argument to H specifies.

    But that behavior is only real /as/ an argument to H; it is not the
    behavior that the halter-decider customer wants reported on.

    How is the user supposed to know which inputs are handled by their
    decider and which are not?

    Again, I just want to know, does D halt?


    You might also want a purely mental Turing
    machine to bake you a birthday cake.

    Are you insinuating that the end user for halt deciders is wrong to want
    to know whether something halts?

    And /that's/ how you ultimately refute the halting problem?

    The standard halting problem and its theorem tells the user
    they cannot have a halting algorithm that will decide everything;
    stop wanting that!

    Your paradigm tells the user that the question is wrong, or at least for
    some programs, and doesn't tell them which.

    Under your paradigm, even though I have a certified correct H,
    I am not informed.

    Under the standard halting problem, I am not informed because
    I /don't/ have a certified correct H; it doesn't exist.


    The standard halting problem requires behavior
    that is out-of-scope for Turing machines, like
    requiring that they bake birthday cakes.

    But what changes if we simply /stop requiring/ that behavior?

    How am I better off in your paradigm?

    In my paradigm you face reality rather than
    ignoring it.

    So does that reality provide an algorithm to decide the
    halting of any machine, or not?

    Do I use 10 different certified deciders, and take a majority vote?


    sum(3,4) computes the sum of 3+4 even if
    the sum of 5+6 is required from sum(3,4).

    Whatever behavior is measured by the decider's
    simulation of its input *is* the behavior that
    it must report on.

    That's the internally focused discussion. How are you
    solving the end user's demand for halting decisions?


    But the function which combines 10 deciders into a majority vote
    is itself a decider! And that 10-majority-decider function can be
    targeted by a diagonal test case ... and such a test case is now
    a non-input. See?

    You are not looking at it from the perspective of a /consumer/ of a
    /decider product/ actually trying to use deciders and trust their
    answer.

    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    But how does the user interpret that result?

    The input to this decider specifies a sequence
    that cannot possibly reach its final halt state.

    But you have inputs for which that is reported, which
    readily halt when they are executed.

    Don't you think the user wants to know /that/, and not what happens
    under the decider (if that is different)?

    The user just wants to know, does this thing halt or not?

    The user may equally want a purely imaginary
    Turing machine to bake a birthday cake.

    How does it answer the user's question?

    As far as theoretical limitations go I have addressed
    them.

    By address, do you mean remove?

    Practical workarounds can be addressed after I
    am published and my work is accepted.

    Workarounds for what? You've left something unsolved in halting; what is
    that?
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,comp.ai.philosophy on Wed Oct 29 05:45:52 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 10:16 PM, Richard Heathfield wrote:
    On 28/10/2025 21:58, dbush wrote:
    On 10/28/2025 4:51 PM, olcott wrote:

    <snip>

    <repeat of previously refuted point>

    So again you admit that Kaz's code proves that D is halting.

    Credit where credit is due. By returning 0*, /olcott's/ code proves that
    D is halting.

    *i.e. non-halting.

    Yet (as I have said hundreds of times) Turing machines
    can only compute the mapping from *INPUT* finite string
    machine descriptions to the behavior that these *INPUT*
    finite string machine descriptions *ACTUALLY SPECIFY*

    D simulated by H SPECIFIES NOT-HALTING BEHAVIOR.

    When some random P is simulated by H, sometimes it specifies
    behavior which matches the directly executed P().

    But some P's, when simulated by H, have a behavior which
    doesn't match P().

    The user who executes H(P) does not want to be informed
    about those non-matching behaviors; they want to know
    whether P() terminates, not P simulated by H.

    The halting problem itself makes a category error
    when it requires deciders to report on behavior
    other than the behavior that *THEIR INPUT SPECIFIES*

    That's inevitable. No consumer of a halting algorithm asking about a
    program P wants the algorithm to secretly simulate a behavior other than
    P() and then produce an answer about that other behavior.

    Even when the user /knows/ that the machine can do that under the Olcott Halting paradigm, they have no way of knowing /when/ it is doing that
    and when it isn't.

    That's a decision in and of itself: is this H(P) decision giving me the straight dope about whether P() halts, or is this a problematic case
    where H's simulation of P differs from P(), and I'm getting /that/
    report instead?

    It's amazing that you don't see this ambiguity problem.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy on Wed Oct 29 07:32:28 2025
    From Newsgroup: comp.ai.philosophy

    On 10/28/25 11:26 PM, olcott wrote:
    On 10/28/2025 10:16 PM, Richard Heathfield wrote:
    On 28/10/2025 21:58, dbush wrote:
    On 10/28/2025 4:51 PM, olcott wrote:

    <snip>

    <repeat of previously refuted point>

    So again you admit that Kaz's code proves that D is halting.

    Credit where credit is due. By returning 0*, /olcott's/ code proves
    that D is halting.

    *i.e. non-halting.

    Yet (as I have said hundreds of times) Turing machines
    can only compute the mapping from *INPUT* finite string
    machine descriptions to the behavior that these *INPUT*
    finite string machine descriptions *ACTUALLY SPECIFY*

    D simulated by H SPECIFIES NOT-HALTING BEHAVIOR.
    The halting problem itself makes a category error
    when it requires deciders to report on behavior
    other than the behavior that *THEIR INPUT SPECIFIES*


    Your problem is you don't know what the question means, so you don't know
    what your input means, because you don't actually seem to know what
    meaning means.

    Since the input SPECIFIES a representation of an actual program, and the question is about the behavior of that program when run, it is a valid question.

    The fact that you don't understand that just shows how ignorant you are
    of what you talk about.

    In your world, everything is a lie, as no truth exists since nothing has
    real meaning.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Wed Oct 29 11:12:27 2025
    From Newsgroup: comp.ai.philosophy

    On 10/29/2025 12:36 AM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 9:19 PM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
    Under your system, I don't know whether Y is correct.

    Y could be a broken decider that is wrongly deciding D (and /that/
    is why its execution trace differs from X).

    Or it could be the case that D is a non-input to Y, in which case Y is
    deemed to be correct because D being a non-input to Y means that D
    denotes non-halting semantics to Y (and /that/ is why its execution
    trace differs from X).

    The fact that the execution trace differs doesn't inform.

    We need to know the value of is_input(Y, D): we need to /decide/ whether
    D is non-input or input to Y in order to /decide/ whether its rejection
    is correct.


    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    But under your system, if I am a user of deciders, and have been
    given a decider H which is certified to be correct, I cannot
    rely on it to decide halting.


    When halting is defined correctly:
    Does this input specify a sequence of moves that
    reach a final halt state?

    and not defined incorrectly (as requiring something
    that is not specified in the input), then this does
    overcome the halting problem proof and shows that
    the halting problem itself has always been a category
    error (Flibble's brilliant term).

    I want to know whether D halts, that's all.

    H says no. It is certified correct under your paradigm, so
    I don't have to suspect that if it is given an /input/
    it will be wrong.

    But: I have no idea whether D is an input to H or a non-input!


    That is ridiculous. If it is an argument
    to the decider function then it is an input.

    So how is it supposed to work that an otherwise halting D
    is a non-halting input to H?


    int D()
    {
        int Halt_Status = H(D);
        if (Halt_Status)
            HERE: goto HERE;
        return Halt_Status;
    }

    H simulates D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    until H sees this repeating pattern.

    When the non-halting D is an input to H (which it undeniably is, as
    you have now decided), D is non-halting.


    That D.input_to_H is non-halting is confirmed in that
    D simulated by H cannot possibly reach its own
    "return" statement final halt state. This divides
    non-halting from stopping running.

    With respect to H, it's as if the halting D exists in another dimension; /that/ D is not the input.

    Okay, but anyway ...

    - The decider user has some program P.

    - P terminates, but it takes three years on the user's hardware.

    - The user does not know this; they tried running P for weeks,
    months, but it never terminated.

    - The user has H which they have been assured is correct under
    the Olcott Halting Paradigm.

    - They apply H to P, and H rejects it.


    That would mean that P has specifically targeted
    H in an attempt to thwart a correct assessment.

    - The program P is actually D, but the user doesn't know this.


    The system works on source-code.

    What should the user believe? Does D halt or not?


    Whether the input P targets the decider H or not,
    input P simulated by decider H always reports on the
    basis of whether P can reach its own final halt state.

    How is the user /not/ deceived if they believe that P doesn't halt?

    When H says 0, I have no idea whether it's being judged non-halting
    as an input, or whether it's being judged as a non-input (whereby
    either value is the correct answer as far as H is concerned).


    Judging by anything besides an input has always
    been incorrect. H(D) maps its input to a reject
    value on the basis of the behavior that this
    argument to H specifies.

    But that behavior is only real /as/ an argument to H; it is not the
    behavior that the halter-decider customer wants reported on.


    When what the customer wants and what is in the scope of
    Turing machines differ, the user must face reality. There
    may be practical workarounds; these are outside the scope
    of the theoretical limits.

    How is the user supposed to know which inputs are handled by their
    decider and which are not?


    Whether the input P targets the decider H or not,
    input P simulated by decider H always reports on the
    basis of whether P can reach its own final halt state.

    Again, I just want to know, does D halt?


    You might also want a purely mental Turing
    machine to bake you a birthday cake.

    Are you insinuating that the end user for halt deciders is wrong to want
    to know whether something halts?


    What is outside of the scope of all Turing machines is
    outside of the scope of all Turing machines.

    And /that's/ how you ultimately refute the halting problem?


    The halting problem as defined requires something
    that is outside of the scope of all Turing machines.

    The standard halting problem and its theorem tells the user
    they cannot have a halting algorithm that will decide everything;
    stop wanting that!


    Whether the input P targets the decider H or not,
    input P simulated by decider H always reports on the
    basis of whether P can reach its own final halt state.

    Your paradigm tells the user that the question is wrong, or at least for
    some programs, and doesn't tell them which.


    I am discussing theoretical limits not practical workarounds.

    Under your paradigm, even though I have a certified correct H,
    I am not informed.

    Under the standard halting problem, I am not informed because
    I /don't/ have a certified correct H; it doesn't exist.


    The standard halting problem requires behavior
    that is out-of-scope for Turing machines, like
    requiring that they bake birthday cakes.

    But what changes if we simply /stop requiring/ that behavior?


    Whether the input P targets the decider H or not,
    input P simulated by decider H always reports on the
    basis of whether P can reach its own final halt state.

    How am I better off in your paradigm?

    In my paradigm you face reality rather than
    ignoring it.

    So does that reality provide an algorithm to decide the
    halting of any machine, or not?


    Whenever the input P does not try to cheat by calling
    its own decider H, the simulation of this input by this
    decider is the same as UTM(P).

    Do I use 10 different certified deciders, and take a majority vote?


    sum(3,4) computes the sum of 3+4 even if
    the sum of 5+6 is required from sum(3,4).

    Whatever behavior is measured by the decider's
    simulation of its input *is* the behavior that
    it must report on.

    That's the internally focused discussion. How are you
    solving the end user's demand for halting decisions?


    Practical workarounds are outside of the scope of
    theoretical limits. If I started talking about those
    now I would never get closure on theoretical limits.


    But the function which combines 10 deciders into a majority vote
    is itself a decider! And that 10-majority-decider function can be
    targeted by a diagonal test case ... and such a test case is now
    a non-input. See?

    You are not looking at it from the perspective of a /consumer/ of a
    /decider product/ actually trying to use deciders and trust their
    answer.

    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    But how does the user interpret that result?

    The input to this decider specifies a sequence
    that cannot possibly reach its final halt state.

    But you have inputs for which that is reported, which
    readily halt when they are executed.


    Whenever the input P does not try to cheat by calling
    its own decider H, the simulation of this input by this
    decider is the same as UTM(P).

    Don't you think the user wants to know /that/, and not what happens
    under the decider (if that is different)?


    As far as theoretical limits go, the user can either
    face reality or be out-of-touch with reality.

    The user just wants to know, does this thing halt or not?

    The user may equally want a purely imaginary
    Turing machine to bake a birthday cake.

    How does it answer the user's question?

    As far as theoretical limitations go I have addressed
    them.

    By address, do you mean remove?


    The halting problem has always been incorrect, so just
    like ZFC eliminated Russell's Paradox I have eliminated
    the halting problem.

    Practical workarounds can be addressed after I
    am published and my work is accepted.

    Workarounds for what? You've left something unsolved in halting; what is that?

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,comp.ai.philosophy on Wed Oct 29 17:21:10 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/29/2025 12:36 AM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 9:19 PM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
    Under your system, I don't know whether Y is correct.

    Y could be a broken decider that is wrongly deciding D (and /that/
    is why its execution trace differs from X).

    Or it could be the case that D is a non-input to Y, in which case Y is
    deemed to be correct because D being a non-input to Y means that D
    denotes non-halting semantics to Y (and /that/ is why its execution
    trace differs from X).

    The fact that the execution trace differs doesn't inform.

    We need to know the value of is_input(Y, D): we need to /decide/ whether
    D is non-input or input to Y in order to /decide/ whether its rejection
    is correct.


    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    But under your system, if I am a user of deciders, and have been
    given a decider H which is certified to be correct, I cannot
    rely on it to decide halting.


    When halting is defined correctly:
    Does this input specify a sequence of moves that
    reach a final halt state?

    and not defined incorrectly (as requiring something
    that is not specified in the input), then this does
    overcome the halting problem proof and shows that
    the halting problem itself has always been a category
    error (Flibble's brilliant term).

    I want to know whether D halts, that's all.

    H says no. It is certified correct under your paradigm, so
    I don't have to suspect that if it is given an /input/
    it will be wrong.

    But: I have no idea whether D is an input to H or a non-input!


    That is ridiculous. If it is an argument
    to the decider function then it is an input.

    So how is it supposed to work that an otherwise halting D
    is a non-halting input to H?


    int D()
    {
        int Halt_Status = H(D);
        if (Halt_Status)
            HERE: goto HERE;
        return Halt_Status;
    }

    H simulates D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    until H sees this repeating pattern.

    When the non-halting D is an input to H (which it undeniably is, as
    you have now decided), D is non-halting.


    That D.input_to_H is non-halting is confirmed in that
    D simulated by H cannot possibly reach its own
    "return" statement final halt state. This divides
    non-halting from stopping running.

    With respect to H, it's as if the halting D exists in another dimension;
    /that/ D is not the input.

    Okay, but anyway ...

    - The decider user has some program P.

    - P terminates, but it takes three years on the user's hardware.

    - The user does not know this; they tried running P for weeks,
    months, but it never terminated.

    - The user has H which they have been assured is correct under
    the Olcott Halting Paradigm.

    - They apply H to P, and H rejects it.


    That would mean that P has specifically targeted
    H in an attempt to thwart a correct assessment.

    - The program P is actually D, but the user doesn't know this.


    The system works on source-code.

    Whatever results you have, they have to be valid
    for any representation of Turing machines whatsoever.

    Source code can be obfuscated, as well as extremely large.

    That the user can have source code (not required at all by the Turing
    model) doesn't change that the user has no idea that P is actually a D
    program with respect to their halting decider.

    Also, the question "is this input P a diagonal input targetin this
    decider H" is an undecidable problem!!!

    What should the user believe? Does D halt or not?


    Whether the input P targets the decider H or not,
    input P simulated by decider H always reports on the
    basis of whether P can reach its own final halt state.

    But that's not the output we want; we want to know whether P()
    halts.

    How is the user /not/ deceived if they believe that P doesn't halt?

    When H says 0, I have no idea whether it's being judged non-halting
    as an input, or whether it's being judged as a non-input (whereby
    either value is the correct answer as far as H is concerned).


    Judging by anything besides and input has always
    been incorrect. H(D) maps its input to a reject
    value on the basis of the behavior that this
    argument to H specifies.

    But that behavior is only real /as/ an argument to H; it is not the
    behavior that the halter-decider customer wants reported on.


    When what the customer wants and what is in the scope of
    Turing machines differ, the user must face reality.

    So under your paradigm, the user is told they must face reality: some
    machines cannot be decided, so you cannot get an answer for whether P()
    halts. Sometimes you get an answer for whether P is considered to be
    halting when simulated by H, which doesn't match whether P() halts. Suck
    it up!

    That's just a stupidly convoluted version of what they are told
    under the standard Halting Problem, which informs them that for
    every halting decider, there are inputs for which it is wrong or nonterminating.

    You are not improving the standard halting problem and its theorem
    one iota; just vandalizing it with impertinent content and details.

    Under your paradigm, a halting decider reports rubbish for some inputs,
    and is called correct; e.g. rejecting a halting input.

    How is what you are doing different from calling a tail "leg",
    and claiming that canines are five-legged animals?

    There
    may be practical workarounds; these are outside the scope
    of the theoretical limits.

    I cannot think of any example of an engineering technique
    which overcomes theoretical limits.

    There are no workarounds for the undecidability of halting;
    you can only work within the limits not overcome them.

    The halting problem has always been incorrect, so just
    like ZFC eliminated Russell's Paradox I have eliminated
    the halting problem.

    Only, you've not eliminated it sufficiently far that you wouldn't have
    to tell the halting decider client to accept reality?

    ZFC eliminating Russell's Paradox is a formal-system-level
    change.

    You're not changing the formal system; you are staying in
    the Turing Model.

    Also, Russell's Paradox is nonsense. Whereas a program H
    deciding a program P which integrates H itself in some shape
    is not nonsense; it is constructible.

    Get it? Russell's silly set cannot even be imagined, let alone
    constructed.

    Failing test cases for halting can be constructed; they
    are real.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,comp.ai.philosophy on Wed Oct 29 18:21:03 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    The halting problem as defined requires something
    that is outside of the scope of all Turing machines.

    That is false. The halting problem simply /asks/ a question whether a
    Turing machine can exist which does something. The answer comes back
    negative in the form of a theorem.

    The halting problem doesn't require anything of Turing computation
    that it cannot do; it's not a requirements specification.

    The input cases which are not decided correctly by a given partial
    decider are real, constructible entities which have a definite halting
    status. They are not paradoxical absurdities that cannot exist; you
    cannot dismiss something which exists.

    It's like "solving" indivisibility by two by banning odd numbers
    as incorrect.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Wed Oct 29 14:13:12 2025
    From Newsgroup: comp.ai.philosophy

    On 10/29/2025 1:21 PM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    The halting problem as defined requires something
    that is outside of the scope of all Turing machines.

    That is false. The halting problem simply /asks/ a question whether a
    Turing machine can exist which does something. The answer comes back
    negative in the form of a theorem.


    When the halting problem asks anything other than
    exactly what the input to a decider specifies the
    halting problem is asking for something that is
    outside the scope of Turing machines.

    The halting problem doesn't require anything of Turing computation
    that it cannot do; it's not a requirements specification.


    It is a requirements specification for the behavior
    that is specified by a finite string that is different
    than the behavior that this finite string as an input
    specifies.

    The halting problem is asking for the behavior of
    UTM(D) when the behavior of the input to H(D) is not
    the same as the behavior of UTM(D).

    The input cases which are not decided correctly by a given partial
    decider are real, constructible entities which have a definite halting
    status. They are not paradoxical absurdities that cannot exist; you
    cannot dismiss something which exists.


    UTM(D) does indeed specify a behavior yet H(D) specifies
    a different behavior. The halting problem itself commits
    a category error when it requires the behavior of UTM(D)
    from H(D).

    It's like "solving" indivisibility by two by banning odd numbers
    as incorrect.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Wed Oct 29 14:38:37 2025
    From Newsgroup: comp.ai.philosophy

    On 10/29/2025 12:21 PM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/29/2025 12:36 AM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 9:19 PM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
    Under your system, I don't know whether Y is correct.

    Y could be a broken decider that is wrongly deciding D (and /that/
    is why its execution trace differs from X).

    Or it could be the case that D is a non-input to Y, in which case Y is
    deemed to be correct because D being a non-input to Y means that D
    denotes non-halting semantics to Y (and /that/ is why its execution
    trace differs from X).

    The fact that the execution trace differs doesn't inform.

    We need to know the value of is_input(Y, D): we need to /decide/ whether
    D is non-input or input to Y in order to /decide/ whether its rejection
    is correct.


    Whatever is a correct simulation of an input by
    a decider is the behavior that must be reported on.

    But under your system, if I am a user of deciders, and have been
    given a decider H which is certified to be correct, I cannot
    rely on it to decide halting.


    When halting is defined correctly:
    Does this input specify a sequence of moves that
    reach a final halt state?

    and not defined incorrectly (as requiring something
    that is not specified in the input), then this does
    overcome the halting problem proof and shows that
    the halting problem itself has always been a category
    error (Flibble's brilliant term).

    I want to know whether D halts, that's all.

    H says no. It is certified correct under your paradigm, so
    I don't have to suspect that if it is given an /input/
    it will be wrong.

    But: I have no idea whether D is an input to H or a non-input!


    That is ridiculous. If it is an argument
    to the decider function then it is an input.

    So how is it supposed to work that an otherwise halting D
    is a non-halting input to H?


    int D()
    {
        int Halt_Status = H(D);
        if (Halt_Status)
            HERE: goto HERE;
        return Halt_Status;
    }

    H simulates D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    until H sees this repeating pattern.

    When the non-halting D is an input to H (which it undeniably is, as
    you have now decided), D is non-halting.


    That D.input_to_H is non-halting is confirmed in that
    D simulated by H cannot possibly reach its own
    "return" statement final halt state. This divides
    non-halting from stopping running.

    With respect to H, it's as if the halting D exists in another dimension;
    /that/ D is not the input.

    Okay, but anyway ...

    - The decider user has some program P.

    - P terminates, but it takes three years on the user's hardware.

    - The user does not know this; they tried running P for weeks,
    months, but it never terminated.

    - The user has H which they have been assured is correct under
    the Olcott Halting Paradigm.

    - They apply H to P, and H rejects it.


    That would mean that P has specifically targeted
    H in an attempt to thwart a correct assessment.

    - The program P is actually D, but the user doesn't know this.


    The system works on source-code.

    Whatever results you have, they have to be valid
    for any representation of Turing machines whatsoever.

    Source code can be obfuscated, as well as extremely large.


    We are not talking about complexity; we are talking about
    computability.

    That the user can have source code (not required at all by the Turing
    model) doesn't change that the user has no idea that P is actually a D program with respect to their halting decider.


    The source code is always required because
    Turing machines only operate on finite string machine
    descriptions that are essentially source-code.

    Also, the question "is this input P a diagonal input targetin this
    decider H" is an undecidable problem!!!


    That is circumvented by simulating halt deciders that
    do what all deciders do, and that is compute an accept
    or reject value entirely on the basis of what their finite
    strings actually specify.

    What should the user believe? Does D halt or not?


    Whether the input P targets the decider H or not,
    input P simulated by decider H always reports on the
    basis of whether P can reach its own final halt state.

    But that's not the output we want; we want to know whether P()
    halts.


    People can choose to break from reality.

    How is the user /not/ deceived if they believe that P doesn't halt?

    When H says 0, I have no idea whether it's being judged non-halting
    as an input, or whether it's being judged as a non-input (whereby
    either value is the correct answer as far as H is concerned).


    Judging by anything besides an input has always
    been incorrect. H(D) maps its input to a reject
    value on the basis of the behavior that this
    argument to H specifies.

    But that behavior is only real /as/ an argument to H; it is not the
    behavior that the halter-decider customer wants reported on.


    When what the customer wants and what is in the scope of
    Turing machines differ, the user must face reality.

    So under your paradigm, the user is told they must face reality: some machines cannot be decided,

    I never said that. You are not paying close enough
    attention.

    If the user wants the square root of a dead rabbit
    they must be corrected that square roots only apply
    to numbers.

    If the user wants the halt status of a program they
    must be corrected in that halt deciders only provide
    the halt status of their finite string inputs.

    so you cannot get an answer for whether P()
    halts. Sometimes you get an answer for whether P is considered to be
    halting when simulated by H, which doesn't match whether P() halts. Suck
    it up!


    If it weren't for the damned cheat of an input calling
    its own decider, then it would not be a category error
    to require that a halt decider provide the halt status
    of UTM(D).

    That's just a stupidly convoluted version of what they are told
    under the standard Halting Problem, which informs them that for
    every halting decider, there are inputs for which it is wrong or nonterminating.

    You are not improving the standard halting problem and its theorem
    one iota; just vandalizing it with impertinent content and details.


    I am proving that it has always been flat out incorrect.
    Fools may still require the square root of a dead rabbit.

    Under your paradigm, a halting decider reports rubbish for some inputs,
    and is called correct; e.g. rejecting a halting input.


    Under your program, the square root of a dead rabbit
    is not incorrect; it is merely too difficult for
    Turing machines.

    How is what you are doing different from calling a tail "leg",
    and claiming that canines are five-legged animals?

    There
    may be practical workarounds, but these are outside the scope
    of the theoretical limits.

    I cannot think of any example of an engineering technique
    which overcomes theoretical limits.


    It is a theoretical limit that no Truth predicate
    can possibly exist when you don't exclude some
    expressions of language as non-bearers of truth.

    Is the sentence: "what time is it?" true or false?
    I just proved that no universal truth predicate exists
    according to one definition of a truth predicate.

    If a truth predicate is defined to return true
    when an expression is true and false otherwise,
    then a truth predicate can be defined.

    Gibberish_Nonsense = " foiwrml 34590sd sflp49dcvs"
    True(Gibberish_Nonsense) == FALSE
    True(~Gibberish_Nonsense) == FALSE

    Now a consistent truth predicate exists.
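
    (Illustration: a minimal C sketch of the two-valued truth predicate
    just described, under the assumption of a three-way classifier;
    classify() and truth_class are invented names for this sketch, not
    anything defined in this thread.)

    #include <stdbool.h>

    /* Hypothetical three-way classification of an expression. */
    typedef enum { PROVEN_TRUE, PROVEN_FALSE, NOT_A_TRUTH_BEARER } truth_class;

    /* Assumed helper: classifies an expression given as a string. */
    extern truth_class classify(const char *expression);

    /* True(x) holds only when x is actually true; false statements,
       questions, and gibberish all map to false, so both
       True(Gibberish_Nonsense) and True(~Gibberish_Nonsense) are false. */
    bool True(const char *expression)
    {
        return classify(expression) == PROVEN_TRUE;
    }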

    There are no workarounds for the undecidability of halting;
    you can only work within the limits not overcome them.

    The halting problem has always been incorrect, so just
    like ZFC eliminated Russell's Paradox I have eliminated
    the halting problem.

    Only, you've not eliminated it sufficiently far that you wouldn't
    have to tell the halting decider's client to accept reality.

    ZFC eliminating Russell's Paradox is a formal-system-level
    change.

    You're not changing the formal system; you are staying in
    the Turing Model.


    I am pointing out that the halting problem requirement
    that a halt decider determine the value of UTM(P)
    is flat out incorrect when H(P) != UTM(P).

    Also, Russell's Paradox is nonsense. Whereas a program H
    deciding a program P which integrates H itself in some shape
    is not nonsense; it is constructible.


    The halting problem is a much more subtle form of
    the same error as requiring the square root of a dead cat.

    Get it? Russell's silly set cannot even be imagined, let alone
    constructed.


    They did imagine it for quite a few years until
    ZFC noticed that it could not be coherently imagined.

    Failing test cases for halting can be constructed; they
    are real.



    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,comp.ai.philosophy on Wed Oct 29 13:31:59 2025
    From Newsgroup: comp.ai.philosophy

    On 10/28/2025 8:26 PM, olcott wrote:
    On 10/28/2025 10:16 PM, Richard Heathfield wrote:
    On 28/10/2025 21:58, dbush wrote:
    On 10/28/2025 4:51 PM, olcott wrote:

    <snip>

    <repeat of previously refuted point>

    So again you admit that Kaz's code proves that D is halting.

    Credit where credit is due. By returning 0*, /olcott's/ code proves
    that D is halting.

    *i.e. non-halting.

    Yet (as I have said hundreds of times) Turing machines
    can only compute the mapping from *INPUT* finite string
    machine descriptions to the behavior that these *INPUT*
    finite string machine descriptions *ACTUALLY SPECIFY*

    D simulated by H SPECIFIES NOT-HALTING BEHAVIOR.
    The halting problem itself makes a category error
    when it requires deciders to report on behavior
    other than the behavior that *THEIR INPUT SPECIFIES*


    Think if HHH(DD) returned 0 to DD. Oh my! It halts. If HHH(DD) returns
    anything else, it goes into a never-ending loop wrt DD's logic. So be it.
    You are thinking you are so smart about DD's logic that even a little kid
    can understand it? rofl. Afaict, your HHH(DD) is basically this:

    1 HOME
    5 PRINT "The Olcott All-in-One Halt Decider!"
    10 INPUT "Shall I halt or not? " ; A$
    30 IF A$ = "YES" GOTO 666
    40 GOTO 10
    666 PRINT "OK!"

    Right? ;^o
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,comp.ai.philosophy on Wed Oct 29 15:13:05 2025
    From Newsgroup: comp.ai.philosophy

    On 10/29/2025 9:12 AM, olcott wrote:
    [...]
    When what the customer wants and what is in the scope of
    Turing machines differ, the user must face reality. There
    may be practical workarounds, but these are outside the scope
    of the theoretical limits.
    [...]

    Oh wow.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,comp.ai.philosophy on Wed Oct 29 15:16:11 2025
    From Newsgroup: comp.ai.philosophy

    On 10/29/2025 10:21 AM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/29/2025 12:36 AM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 9:19 PM, Kaz Kylheku wrote:
    On 2025-10-29, olcott <polcott333@gmail.com> wrote:
    On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
    Under your system, I don't know whether Y is correct.

    Y could be a broken decider that is wrongly deciding D (and /that/
    is why its execution trace differs from X).

    Or it could be the case that D is a non-input to Y, in which case Y is
    deemed to be correct because D being a non-input to Y means that D
    denotes non-halting semantics to Y (and /that/ is why its execution
    trace differs from X).

    The fact that the execution trace differs doesn't inform.

    We need to know the value of is_input(Y, D): we need to /decide/ whether
    D is non-input or input to Y in order to /decide/ whether its rejection
    is correct.


    Whatever a correct simulation of an input by a decider
    shows is the behavior that must be reported on.

    But under your system, if I am a user of deciders, and have been
    given a decider H which is certified to be correct, I cannot
    rely on it to decide halting.


    When halting is defined correctly:
    Does this input specify a sequence of moves that
    reaches a final halt state?

    and not defined incorrectly, to require something
    that is not specified in the input, then this does
    overcome the halting problem proof and shows that
    the halting problem itself has always been a category
    error. (Flibble's brilliant term.)

    I want to know whether D halts, that's all.

    H says no. It is certified correct under your paradigm, so
    I don't have to suspect that if it is given an /input/
    it will be wrong.

    But: I have no idea whether D is an input to H or a non-input!


    That is ridiculous. If it is an argument
    to the decider function, then it is an input.

    So how is it supposed to work that an otherwise halting D
    is a non-halting input to H?


    int D()
    {
        int Halt_Status = H(D);
        if (Halt_Status)
            HERE: goto HERE;
        return Halt_Status;
    }

    H simulates D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    that calls H(D) to simulate D
    until H sees this repeating pattern.
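
    (Illustration: a minimal C sketch, not the poster's actual H, of how
    a simulating decider might detect the repeating call pattern traced
    above; record_call(), repeating_pattern_seen(), and the trace
    bookkeeping are invented names for this sketch.)

    #include <stdbool.h>

    #define MAX_TRACE 1024

    /* One recorded event in the simulation trace. */
    typedef struct { void *callee; void *argument; } call_event;

    static call_event trace[MAX_TRACE];
    static int trace_len = 0;

    /* Record a simulated call, e.g. D calling H(D). */
    static void record_call(void *callee, void *argument)
    {
        if (trace_len < MAX_TRACE)
            trace[trace_len++] = (call_event){ callee, argument };
    }

    /* If the same call with the same argument recurs, this sketch
       treats it as the non-halting pattern and the simulator aborts.
       (A real detector would also have to verify that no conditional
       in between could break the cycle.) */
    static bool repeating_pattern_seen(void *callee, void *argument)
    {
        for (int i = 0; i < trace_len; i++)
            if (trace[i].callee == callee && trace[i].argument == argument)
                return true;   /* same call seen before */
        return false;
    }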

    When the non-halting D is an input to H (which it undeniably is, as
    you have now decided) D is non-halting.


    That D.input_to_H is non-halting is confirmed in that
    D simulated by H cannot possibly reach its own
    "return" statement final halt state. This divides
    non-halting from merely stopping running.
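
    (Illustration: a tiny hypothetical C enum making the distinction
    just drawn; all the names are invented for this sketch.)

    /* Three ways a simulated computation can stop running. */
    typedef enum {
        REACHED_FINAL_STATE,   /* halting: reached its own "return"     */
        SIMULATION_ABORTED,    /* stopped by the simulator; not halting */
        STILL_RUNNING          /* step budget exhausted; undetermined   */
    } stop_reason;

    /* Only the first case counts as halting: an aborted simulation
       stops running without its input ever having halted. */
    static int counts_as_halting(stop_reason r)
    {
        return r == REACHED_FINAL_STATE;
    }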

    With respect to H, it's as if the halting D exists in another
    dimension; /that/ D is not the input.

    Okay, but anyway ...

    - The decider user has some program P.

    - P terminates, but it takes three years on the user's hardware.

    - The user does not know this; they tried running P for weeks,
    months, but it never terminated.

    - The user has H which they have been assured is correct under
    the Olcott Halting Paradigm.

    - The user applies H to P, and H rejects it.


    That would mean that P has specifically targeted
    H in an attempt to thwart a correct assessment.

    - The program P is actually D, but the user doesn't know this.


    The system works on source-code.

    Whatever results you have, they have to be valid
    for any representation of Turing machines whatsoever.

    Source code can be obfuscated, as well as extremely large.

    That the user can have source code (not required at all by the Turing
    model) doesn't change the fact that the user has no idea that P is
    actually a D program with respect to their halting decider.

    Also, the question "is this input P a diagonal input targeting this
    decider H" is an undecidable problem!!!

    What should the user believe? Does D halt or not?


    Whether or not the input P targets the decider H,
    decider H always reports on the basis of whether its
    simulated input P can reach its own final halt state.

    But that's not the output we want; we want to know whether P()
    halts.

    How is the user /not/ deceived if they believe that P doesn't halt?

    When H says 0, I have no idea whether it's being judged non-halting
    as an input, or whether it's being judged as a non-input (whereby
    either value is the correct answer as far as H is concerned).


    Judging by anything besides an input has always
    been incorrect. H(D) maps its input to a reject
    value on the basis of the behavior that this
    argument to H specifies.

    But that behavior is only real /as/ an argument to H; it is not the
    behavior that the halting-decider customer wants reported on.


    When what the customer wants and what is in the scope of
    Turing machines differ, the user must face reality.

    So under your paradigm, the user is told they must face reality: some
    machines cannot be decided, so you cannot get an answer for whether P()
    halts. Sometimes you get an answer for whether P is considered to be
    halting when simulated by H, which doesn't match whether P() halts. Suck
    it up!

    That's just a stupidly convoluted version of what they are told
    under the standard Halting Problem, which informs them that for
    every halting decider, there are inputs for which it is wrong or nonterminating.

    You are not improving the standard halting problem and its theorem
    one iota; just vandalizing it with impertinent content and details.

    Under your paradigm, a halting decider reports rubbish for some inputs,
    and is called correct; e.g. rejecting a halting input.

    How is what you are doing different from calling a tail "leg",
    and claiming that canines are five-legged animals?

    Indeed. Actually, it reminds me of the art on the cover of the following
    game:

    https://en.wikipedia.org/wiki/Nord_and_Bert_Couldn%27t_Make_Head_or_Tail_of_It

    A cow with two asses? I can say head up own ass and might beat a puzzle.
    lol.

    [...]
    --- Synchronet 3.21a-Linux NewsLink 1.2