• The most definitive measure of the behavior of the input to H(P)

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Tue Dec 16 11:21:18 2025
    From Newsgroup: comp.ai.philosophy

    On 12/16/2025 2:39 AM, Tristan Wibberley wrote:
    On 15/12/2025 23:23, polcott wrote:
    They are not going to know the nuances of context
    dependent execution. Everyone here denies that it
    exists when I have proven it beyond all possible
    doubt thousands of times.

    It's literally in Turing's paper. He calls them c-machines (choice
    machines). One of the state-transitions takes context from outside the machine.


    It is a verified fact that HHH(DD)==0 and HHH1(DD)==1
    are both correct when

    (a) TMs only transform input finite strings to values

    (b) There exists no alternative more definitive measure
    of the behavior that the input to H(P) specifies (within
    finite string transformation rules) than P simulated by H.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.math,sci.logic,comp.ai.philosophy on Tue Dec 16 11:30:01 2025
    From Newsgroup: comp.ai.philosophy

    On 12/16/2025 4:07 AM, Mikko wrote:
    On 15/12/2025 16:39, olcott wrote:
    On 12/15/2025 3:44 AM, Mikko wrote:
    On 13/12/2025 15:15, polcott wrote:
    On 12/12/2025 11:22 PM, dart200 wrote:
    On 12/11/25 1:20 PM, polcott wrote:
    On 12/11/2025 3:02 PM, dart200 wrote:
    On 12/11/25 12:45 PM, polcott wrote:
    On 12/11/2025 1:35 PM, dart200 wrote:
    On 12/9/25 8:02 PM, Richard Damon wrote:
    On 12/9/25 1:55 PM, dart200 wrote:
    On 12/9/25 4:42 AM, Richard Damon wrote:
    On 12/9/25 12:23 AM, dart200 wrote:
    On 12/8/25 8:12 PM, Richard Damon wrote:

    ???

    Given Machine H is chosen as one partial decider then the machine:

    H^(d): if H(d, d) returns halting, loop forever
            else halt.

    i'm sorry now ur claiming H(d,d) actually returns an answer???
    when did this happen, and what does it return buddy???
    what ever its programs says it will.

    Do you not understand the concept of a parameter to an argument?

    My claim is if *YOU* give me a machine H, I can prove it wrong.
    YOU need to provide some machine that my argument will label as H.



    Then H^(H^) will show that H was wrong for H(H^, H^)
    How is that not showing the machine which that machine can not decide.

    partial decidable does not fly it loses to BB

    Nope, because "partial decidability" means the machine is allowed to not answer.

    so what ur saying is H won't answer, so H^ will have an answer? i did
    explore that paradigm in one of my papers, i believe it's possible to
    create a program that seeks out and contradicts any and all deciders
    that try to decide on it:
    H^ must have a behavior, so there is a correct answer.

    One semi-useful class of partial decider, which are also
    called recognizers, are machines that never give a wrong
    answer, but sometimes

    yeah that's what i explored in the paper i posted

    don't answer. This class is more useful if they always
    eventually answer for one side of the decision, and only
    not-answer sometimes for the

    no, there's always going to be some machine which they cannot answer for both sides

    please do read §2 of that paper

    other. Halting is partially decidable by this criterion, with a
    decider that eventually answers halting for all halting machines,
    and non-halting for a large class of non-halting machines. I looked
    at machines of this type in the late 70's in school.

    Also, "believe" is not proof, and doesn't mean your framework is useful.

    It is easy to create a system where Halting can be decided, it
    just needs making the system less than Turing Complete, so if
    your idea is based on that, so what.


    https://www.academia.edu/136521323/how_to_resolve_a_halting_paradox

    (partial decidability also wouldn't work in Turing's
    "satisfactory" problem from the og paper /on computable
    numbers/, but we'll get there later)


    The Abstract talks about changing the meaning of basics of
    Computation theory and the definition of Halting (I haven't
    read the full paper).

    All that is doing is admitting that by the definitions
    accepted for defining a computation and what halting means,
    the author is conceding that Halting is uncomputable.

    The paper then says:

    This paper does not attempt to present at depth arguments or
    reasons for why we should accept either of these proposals vs
    a more conventional perspective,

    because the implications are so broad my interest was to just focus on the idea of the technique vs why


    But, what good is an alternate formulation if you aren't going to discuss why the alternative is going to be useful.

    i cannot condense meaning into the abstract and conclusions, u'd actually have to read it 🤷


    It seems this is just another person not understanding the
    reasoning behind how computations were defined, and why
    "Halting" was an important question, but just wants to create
    a somewhat similar solvable problem, even if such an
    alternative problem has no use.



    if BB has some limit L (which it must if u believe the halting
    problem), then there must be some specific L-state machine which
    *no* machine could decide upon, for if that machine was decidable
    by anything, then BB could find that anything and subvert the limit L

    Why does BB need to have a limit L?

    my my richard, u don't know that in ur theory BB must have a limit?

    You seem to be using undefined terms.

    BB is apparently the Busy Beaver problem, which since it is uncomputable, can't actually be a machine.

    yeah but it's certainly computable up until a limit, as we've
    already computed it up to 5, there cannot be any machines <6
    states that are not decidable


    BB(n) is the maximum length tape that a Turing Machine of n states can create and then halt.

    technically it's steps: https://en.wikipedia.org/wiki/Busy_beaver
    but for the purposes of this discussion it doesn't really
    matter whether it's space or steps we're talking about


    BB(n) is, by definition, a "finite" number. Talking about the
    "limit" of a finite number is a misuse of the term.

    i mean the natural number limit L >5 at which point BB(L)
    becomes fundamentally *unknowable* due to some L-state machine
    being fundamentally undecidable.

    if L doesn't exist, that would make halting generally
    decidable, so therefore L must exist

    if L does exist, then there must be some L-state machine U
    which cannot be decided on *by any partial* decider, because
    the BB computation would find it and use it


    We can sometimes establish upper and lower bounds on the value of BB(n), is that what you mean by "a limit L"?


    if you believe the halting problem, then BB must have a limit L,
    or else halting becomes generally solvable using the BB function.
    see, if you can compute the BB number for any N-state machines,
    then for any N-state machine u can just run the N-state machine
    until BB number of steps. any machine that halts on or before
    BB(N) steps halts, any that run past must be nonhalting

    No, if we could establish an upper limit for BB(n) for all n,
    then we could solve the halting problem, as we have an upper
    limit for the number of steps we need to simulate the machine.
    BB(n) has a value, but for sufficiently large values of n, we
    don't have an upper bound for BB(n).


    and the problem with allowing for partial decidability is that BB
    can continually run more and more deciders in parallel, on every
    N-state machine, until one comes back with a halting answer, for
    every N-state machine, which it can then use to decide what the
    BB number is for any N ...
    So, what BB are you running? Or are you misusing "running" to try to mean somehow trying to calculate?

    contradicting the concept it must have a limit L, where some
    L-state machine cannot be decidable by *any* partial decider
    on the matter,

    No, it can have a limit, just not a KNOWN limit.

    consensus is there can't be a known limit L to the BB function, and proofs have been put out in regards to this



    so no richard, partial decidability does not work if BB is to have a limit


    You only have the problem if BB has a KNOWN limit. Again, you
    trip up on assuming you can know any answer you want.

    That some things are not knowable breaks your logic.


    I just glanced at your paper and skipped to the conclusion.
    Why do we care about the undecidability of the halting problem?
    Because undecidability in general (if it is correct) shows
    that truth itself is broken. Truth itself cannot be broken.
    This is the only reason why I have worked on these things
    for 28 years.

    because it makes us suck at developing and maintaining software,
    and as a 35 year old burnt out SWE, i'm tired of living in a
    world running off sucky software. it really is limiting our
    potential, and i want my soon to be born son to have a far better
    experience with this shit than i did.

    a consequence of accepting the halting problem is then
    necessarily accepting proof against *all* semantic deciders,
    barring us from agreeing on what such general deciders might be


    Exactly: Tarski even "proved" that we can't even directly
    compute what is true. This lets dangerous liars get away
    with their dangerous lies.

    this has led to not only an unnecessary explosion in complexity
    of software engineering, because we can't generally compute
    semantic (turing) equivalence,

    but the general trend in deploying software that doesn't have
    computed semantic proofs guaranteeing they actually do what we
    want them to do.

    Yes, without computing halting, total proof of
    correctness is impossible.

    "testing" is a poor substitute for doing so, but that's the most we can agree upon due to the current theory of computing.

    i think my ideas might contribute to dealing with incompleteness
    in fundamental math more generally ... like producing more
    refined limits to its philosophical impact. tho idk if it can be
    gotten rid of completely, anymore than we can get rid of the
    words "this statement is false"


    I don't think that there actually are any limits
    except for expressions requiring infinite proofs.

    but i am currently focused on the theory of computing and not
    anything more generally. the fundamental objects comprising the
    theory of computing (machines) are far more constrained in their
    definitions than what set theory needs to encompass, and within
    those constraints i think i can twist the consensus into some
    contradictions that we are just entirely ignorant of atm


    I have explored all of the key areas. None of them
    can be made as 100% perfectly concrete and unequivocal
    as computing.

    that's the slam dunk left that i need. i have a means to rectify
    whatever contradiction we find thru the use of RTMs, but i'm
    still teasing out the contradiction that will *force* others to
    notice


    I do have my refutation of the halting problem itself
    boiled down to a rough draft of two first principles.

    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine, this
    is always a category error because Turing machines
    only take finite string inputs.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that its
    actual finite string input actually specifies.



    polcott, i'm working on making the halting problem complete and
    consistent in regards to a subset of the improved "reflective
    turing machines" that encompasses all useful computations

    i'm sorry, but not about trying to reaffirm the halting function as still uncomputable by calling it a category error

    I do compute the halting function correctly.
    I have been doing this for more than three years.
    We probably should not be spamming alt.buddha.short.fat.guy

    int sum(int x, int y){return x + y;}
    sum(3,2) should return 5 and it is incorrect
    to require sum(3,2) to return the sum of 5+6.

    What
       int sum(int x, int y){return x + y;}
    should return either is specified somewhere or is not specified.
    In the former case it should do as the document says, in the
    latter case no return value is wrong.

    However, C rules specify what that function must return. If the function
    returns something other than 5 it violates C rules regardless of what it
    is required to return.

    Likewise a halt decider is required to report on the
    behavior that its input finite string actually
    specifies.

    That does not mean anything without interpretation rules. Without them
    strings are merely uninterpreted strings that don't specify anything.
    Also needed is a proof that every computation is an interpretation of
    some string according to the interpretation rules.


    (a) TMs only transform input finite strings to values
    using finite string transformation rules.

    (b) There exists no alternative more definitive measure
    of the behavior that the input to H(P) specifies (within
    finite string transformation rules) than P simulated by H.

    Here is an insight that LLM Kimi suggested entirely
    on the basis of the text of my first principles.

    The Universal TM's Illusion: The UTM appears
    to "simulate another machine," but it's really just
    interpreting a string as a lookup table for state
    transitions. The simulation is pure string rewriting.

    A simple way to specify the interpretation rules is to select one
    universal Turing machine as the reference machine and say that
    the specified interpretation is what the reference machine does.
    The halting problem can then be formulated as:
      the halting decider shall accept if the reference machine with the
          same input halts, and
      the halting decider shall reject if the reference machine with the
          same input runs forever.

    This remains true even when this finite string input
    defines an interdependency with its decider that
    changes its behavior.

    The behaviour of a Turing machine does not depend on anything other than
    its input.

    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Tue Dec 16 21:47:01 2025
    From Newsgroup: comp.ai.philosophy

    On 12/16/25 12:21 PM, olcott wrote:
    On 12/16/2025 2:39 AM, Tristan Wibberley wrote:
    On 15/12/2025 23:23, polcott wrote:
    They are not going to know the nuances of context
    dependent execution. Everyone here denies that it
    exists when I have proven it beyond all possible
    doubt thousands of times.

    It's literally in Turing's paper. He calls them c-machines (choice
    machines). One of the state-transitions takes context from outside the
    machine.


    It is a verified fact that HHH(DD)==0 and HHH1(DD)==1
    are both correct when

    (a) TMs only transform input finite strings to values

    (b) There exists no alternative more definitive measure
    of the behavior that the input to H(P) specifies (within
    finite string transformation rules) than P simulated by H.


    But (b) is wrong, and says that you are just admitting that you were
    lying that H is a halt decider in the first place.

    The actual definitive measure of the behavior of that input is the
    behavior of the machine it represents when run, which will be identical
    to the behavior shown when that input is given to an ACTUAL UTM, that is
    one that doesn't stop until it reaches a final state (or just runs forever).

    Your problem is you think it is ok to just lie about the meaning of the
    words you are using, because it seems Truth and Semantics are unknown
    concepts to you, because you are just a pathological liar.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,sci.math,sci.logic,comp.ai.philosophy on Wed Dec 17 12:01:13 2025
    From Newsgroup: comp.ai.philosophy

    On 16/12/2025 19:30, olcott wrote:
    [...]

    (a) TMs only transform input finite strings to values
    using finite string transformation rules.

    (b) There exists no alternative more definitive measure
    of the behavior that the input to H(P) specifies (within
    finite string transformation rules) than P simulated by H.

    The halting problem does not ask about a string. It asks about
    a meaning, which is a computation. Therefore the measure of the
    behaviour is the computation asked about. If the input string
    does not specify that behaviour then it is a wrong string.
    --
    Mikko
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Sat Dec 20 08:07:59 2025
    From Newsgroup: comp.ai.philosophy

    On 12/20/2025 7:32 AM, Richard Damon wrote:
    On 12/20/25 8:00 AM, polcott wrote:
    On 12/19/2025 4:55 PM, Richard Damon wrote:
    On 12/19/25 5:07 PM, Tristan Wibberley wrote:
    On 18/12/2025 04:29, Richard Damon wrote:
    On 12/17/25 11:08 PM, olcott wrote:
    On 12/17/2025 4:01 AM, Mikko wrote:
    On 16/12/2025 19:30, olcott wrote:

    (a) TMs only transform input finite strings to values
    using finite string transformation rules.

    (b) There exists no alternative more definitive measure
    of the behavior that the input to H(P) specifies (within
    finite string transformation rules) than P simulated by H.

    The halting problem does not ask about a string. It asks about
    a meaning, which is a computation. Therefore the measure of the
    behaviour is the computation asked about. If the input string
    does not specify that behaviour then it is a wrong string.


    It asks about the semantic meaning that its
    input finite string specifies. (See Rice)


    And "Semantic Meaning" relates to directly running the program the
    input represents, something you are trying to say is out of scope,
    but that means that you are trying to put semantics out of scope.


    eh? I'm sure it relates to what /would/ have happened if you directly
    run it, and other properties. You may infer them by other means such as
    by translating the program to an input for a different machine and
    running that which then reports or fails to report on properties of the
    original program; that includes, but is not limited to, halting with the
    same final state as the original program would.


    "Meaning" should have a single source of truth. There may be
    alternate ways to determine it, but those are shown to be correct, by
    being able to trace it back to that original source.

    Rice's proof was based on defined "Semantics" of strings representing
    machines, as having that meaning established by the running of the
    program the input represents.


    The halt decider cannot run a machine; it can
    only apply finite string transformation rules
    to input finite strings.

    So?

    It doesn't need to run the machine, only give an answer that is
    correctly determined by doing so.


    Deciders: Transform finite strings by finite string
    transformation rules into {Accept, Reject}.

    In cases of a pathological self-reference relationship
    between Decider H and Input P such that there are no
    finite string transformation rules that H can apply to
    P to derive the behavior of UTM(P) the behavior of UTM(P)
    is overruled as out-of-scope for Turing machine deciders.


    It isn't much of a test if you require that every question include the answer as part of the question.

    You don't seem to understand the concept of REQUIREMENTS.

    But then, since you seem to reject the basic concept of Truth and Correctness, that seems consistent with your logic system.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Sat Dec 20 09:41:03 2025
    From Newsgroup: comp.ai.philosophy

    On 12/20/25 9:07 AM, olcott wrote:
    On 12/20/2025 7:32 AM, Richard Damon wrote:
    On 12/20/25 8:00 AM, polcott wrote:
    On 12/19/2025 4:55 PM, Richard Damon wrote:
    On 12/19/25 5:07 PM, Tristan Wibberley wrote:
    On 18/12/2025 04:29, Richard Damon wrote:
    On 12/17/25 11:08 PM, olcott wrote:
    On 12/17/2025 4:01 AM, Mikko wrote:
    On 16/12/2025 19:30, olcott wrote:

    (a) TMs only transform input finite strings to values
    using finite string transformation rules.

    (b) There exists no alternative more definitive measure
    of the behavior that the input to H(P) specifies (within
    finite string transformation rules) than P simulated by H.

    The halting problem does not ask about a string. It asks about
    a meaning, which is a computation. Therefore the measure of the
    behaviour is the computation asked about. If the input string
    does not specify that behaviour then it is a wrong string.


    It asks about the semantic meaning that its
    input finite string specifies. (See Rice)


    And "Semantic Meaning" relates to directly running the program the
    input represents, something you are trying to say is out of scope,
    but that means that you are trying to put semantics out of scope.


    eh? I'm sure it relates to what /would/ have happened if you directly
    run it, and other properties. You may infer them by other means such as
    by translating the program to an input for a different machine and
    running that which then reports or fails to report on properties of the
    original program; that includes, but is not limited to, halting with the
    same final state as the original program would.


    "Meaning" should have a single source of truth. There may be
    alternate ways to determine it, but those are shown to be correct,
    by being able to trace it back to that original source.

    Rice's proof was based on defined "Semantics" of strings
    representing machines, as having that meaning established by the
    running of the program the input represents.


    The halt decider cannot run a machine; it can
    only apply finite string transformation rules
    to input finite strings.

    So?

    It doesn't need to run the machine, only give an answer that is
    correctly determined by doing so.


    Deciders: Transform finite strings by finite string
    transformation rules into {Accept, Reject}.

    Right,


    In cases of a pathological self-reference relationship
    between Decider H and Input P such that there are no
    finite string transformation rules that H can apply to
    P to derive the behavior of UTM(P) the behavior of UTM(P)
    is overruled as out-of-scope for Turing machine deciders.

    But since H is (or needs to be) a Turing Machine, the copy of it in P
    gives some definite answer, and thus P has a definite behavior, and thus UTM(P) does have a definite behavior, defining the right answer that H
    should have given.

    The fact that H didn't give that answer, just makes H wrong.

    The fact that the claimed "UTM" at H.UTM doesn't match the behavior of
    the actual P just says you LIE when you call it a UTM.

    All you are doing is proving you are just a stupid liar.

    LIES do not overrule truth, except in the mind of a pathological liar.

    Note, the behavior of UTM(P) is NOT out of scope of Turing Machine
    deciders, just not computable with finite work.

    There is a difference, but that seems to be beyond your self-imposed
    ignorant understanding.


    It isn't much of a test if you require that every question include the
    answer as part of the question.

    You don't seem to understand the concept of REQUIREMENTS.

    But then, since you seem to reject the basic concept of Truth and
    Correctness, that seems consistent with your logic system.



  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Sat Dec 20 08:51:08 2025
    From Newsgroup: comp.ai.philosophy

    On 12/20/2025 8:41 AM, Richard Damon wrote:
    On 12/20/25 9:07 AM, olcott wrote:
    On 12/20/2025 7:32 AM, Richard Damon wrote:
    On 12/20/25 8:00 AM, polcott wrote:
    On 12/19/2025 4:55 PM, Richard Damon wrote:
    On 12/19/25 5:07 PM, Tristan Wibberley wrote:
    On 18/12/2025 04:29, Richard Damon wrote:
    On 12/17/25 11:08 PM, olcott wrote:
    On 12/17/2025 4:01 AM, Mikko wrote:
    On 16/12/2025 19:30, olcott wrote:

    (a) TMs only transform input finite strings to values
    using finite string transformation rules.

    (b) There exists no alternative more definitive measure
    of the behavior that the input to H(P) specifies (within
    finite string transformation rules) than P simulated by H.

    The halting problem does not ask about a string. It asks about
    a meaning, which is a computation. Therefore the measure of the
    behaviour is the computation asked about. If the input string
    does not specify that behaviour then it is a wrong string.


    It asks about the semantic meaning that its
    input finite string specifies. (See Rice)


    And "Semantic Meaning" relates to directly running the program the
    input represents, something you are trying to say is out of scope,
    but that means that you are trying to put semantics out of scope.


    eh? I'm sure it relates to what /would/ have happened if you directly
    run it, and other properties. You may infer them by other means such as
    by translating the program to an input for a different machine and
    running that which then reports or fails to report on properties of the
    original program; that includes, but is not limited to, halting with the
    same final state as the original program would.


    "Meaning" should have a single source of truth. There may be
    alternate ways to determine it, but those are shown to be correct,
    by being able to trace it back to that original source.

    Rice's proof was based on defined "Semantics" of strings
    representing machines, as having that meaning established by the
    running of the program the input represents.


    The halt decider cannot run a machine; it can
    only apply finite string transformation rules
    to input finite strings.

    So?

    It doesn't need to run the machine, only give an answer that is
    correctly determined by doing so.


    Deciders: Transform finite strings by finite string
    transformation rules into {Accept, Reject}.

    Right,


    In cases of a pathological self-reference relationship
    between Decider H and Input P such that there are no
    finite string transformation rules that H can apply to
    P to derive the behavior of UTM(P) the behavior of UTM(P)
    is overruled as out-of-scope for Turing machine deciders.

    But since H is (or needs to be) a Turing Machine, the copy of it in P
    gives some definite answer,

    P simulated by H cannot possibly receive
    any return value from H because it remains
    stuck in recursive simulation until aborted.

    The caller of H cannot possibly be one-and-the-same
    thing as the argument to H.

    and thus P has a definite behavior, and thus

    That cannot be derived by H applying finite string
    transformation rules to its input P.

    UTM(P) does have a definite behavior, defining the right answer that H should have given.


    Yet UTM(P) specifies a different sequence of steps:
    after the simulation of P has been aborted, as opposed
    to before the simulation was aborted.

    The fact that H didn't give that answer, just makes H wrong.

    The fact that the claimed "UTM" at H.UTM doesn't match the behavior of
    the actual P just says you LIE when you call it a UTM.

    All you are doing is proving you are just a stupid liar.

    LIES do not overrule truth, except in the mind of a pathological liar.

    Note, the behavior of UTM(P) is NOT out of scope of Turing Machine
    deciders, just not computable with finite work.

    There is a difference, but that seems to be beyond your self-imposed ignorant understanding.


    It isn't much of a test if you require that every question include
    the answer as part of the question.

    You don't seem to understand the concept of REQUIREMENTS.

    But then, since you seem to reject the basic concept of Truth and
    Correctness, that seems consistent with your logic system.



    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Sat Dec 20 13:56:35 2025
    From Newsgroup: comp.ai.philosophy

    On 12/20/25 9:51 AM, olcott wrote:
    On 12/20/2025 8:41 AM, Richard Damon wrote:
    On 12/20/25 9:07 AM, olcott wrote:
    On 12/20/2025 7:32 AM, Richard Damon wrote:
    On 12/20/25 8:00 AM, polcott wrote:
    On 12/19/2025 4:55 PM, Richard Damon wrote:
    On 12/19/25 5:07 PM, Tristan Wibberley wrote:
    On 18/12/2025 04:29, Richard Damon wrote:
    On 12/17/25 11:08 PM, olcott wrote:
    On 12/17/2025 4:01 AM, Mikko wrote:
    On 16/12/2025 19:30, olcott wrote:

    (a) TMs only transform input finite strings to values
    using finite string transformation rules.

    (b) There exists no alternative more definitive measure
    of the behavior that the input to H(P) specifies (within
    finite string transformation rules) than P simulated by H.

    The halting problem does not ask about a string. It asks about
    a meaning, which is a computation. Therefore the measure of the
    behaviour is the computation asked about. If the input string
    does not specify that behaviour then it is a wrong string.

    It asks about the semantic meaning that its
    input finite string specifies. (See Rice)


    And "Semantic Meaning" relates to directly running the program the
    input represents, something you are trying to say is out of scope,
    but that means that you are trying to put semantics out of scope.


    eh? I'm sure it relates to what /would/ have happened if you directly
    run it, and other properties. You may infer them by other means such as
    by translating the program to an input for a different machine and
    running that which then reports or fails to report on properties of the
    original program; that includes, but is not limited to, halting with the
    same final state as the original program would.


    "Meaning" should have a single source of truth. There may be
    alternate ways to determine it, but those are shown to be correct,
    by being able to trace it back to that original source.

    Rice's proof was based on defined "Semantics" of strings
    representing machines, as having that meaning established by the
    running of the program the input represents.


    The halt decider cannot run a machine; it can
    only apply finite string transformation rules
    to input finite strings.

    So?

    It doesn't need to run the machine, only give an answer that is
    correctly determined by doing so.


    Deciders: Transform finite strings by finite string
    transformation rules into {Accept, Reject}.

    Right,


    In cases of a pathological self-reference relationship
    between Decider H and Input P such that there are no
    finite string transformation rules that H can apply to
    P to derive the behavior of UTM(P) the behavior of UTM(P)
    is overruled as out-of-scope for Turing machine deciders.

    But since H is (or needs to be) a Turing Machine, the copy of it in P
    gives some definite answer,

    P simulated by H cannot possibly receive
    any return value from H because it remains
    stuck in recursive simulation until aborted.

    So, which H are you talking about?

    Your logic is just a lie of equivocation.

    Your claimed H is defined to return non-halting for H(P), which means
    your H doesn't get stuck in recursive simulation, but is merely wrong
    in its answer, and thus not a Halt Decider, although it might still
    be a decider.

    But you keep on lying with equivocation, in that you also mean a DIFFERENT
    H, and thus a DIFFERENT P that just fails to answer, and thus isn't a
    Halt Decider either.

    The fact that the different P doesn't halt means nothing about the P
    that you actually have. You are just showing you can't tell the
    difference between different things because your logic is so
    contradictory that everything is the same.


    The caller of H cannot possibly be one-and-the-same
    thing as the argument to H.

    But it can be the machine the input is a representation of, and thus its behavior is the behavior being asked about.

    You are just showing your ignorance about how problems are solved by computations.


    and thus P has a definite behavior, and thus

    That cannot be derived by H applying finite string
    transformation rules to its input P.


    But the question isn't whether H can do it, but whether it can be done at all.

    Since UTM(P) does get the results, it is a valid question.

    The fact that we can't make an H that determines it for the particular P
    made from it, means the problem is uncomputable.

    Note, The P for a given H can easily be decided by many other deciders,
    which shows it isn't a problem with the actual input.


    UTM(P) does have a definite behavior, defining the right answer that H
    should have given.


    Yet UTM(P) specifies a different sequence of steps:
    after the simulation of P has been aborted, as opposed
    to before the simulation was aborted.

    No it doesn't. What step did your H.UTM(P) see that differed from UTM(P) before H aborted its simulation?

    H.UTM (which isn't a UTM) just incorrectly stops its simulation, because
    you programmed it with bad logic.


    The fact that H didn't give that answer, just makes H wrong.

    The fact that the claimed "UTM" at H.UTM doesn't match the behavior of
    the actual P just says you LIE when you call it a UTM.

    All you are doing is proving you are just a stupid liar.

    LIES do not overrule truth, except in the mind of a pathological liar.

    Note, the behavior of UTM(P) is NOT out of scope of Turing Machine
    deciders, just not computable with finite work.

    There is a difference, but that seems to be beyond your self-imposed
    ignorant understanding.


    It isn't much of a test if you require that every question include
    the answer as part of the question.

    You don't seem to understand the concept of REQUIREMENTS.

    But then, since you seem to reject the basic concept of Truth and
    Correctness, that seems consistent with your logic system.





