• Re: Stardew Valley farmer takes advice from AI, ends up brewing 136 useless bottles of rice juice

    From rridge@rridge@csclub.uwaterloo.ca (Ross Ridge) to comp.sys.ibm.pc.games.action on Sun Apr 5 18:00:48 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    Justisaur <justisaur@yahoo.com> wrote:
    You can have separate ones for each, though you really have to go back
    to the training for that. You could have a 'Spock' logical,
    matter-of-fact LLM for science, code, law and engineering, and a 'Mud'
    one you use for writing ad copy, emails, etc.

    You can't really have an LLM that's logical. You can have one that
    sounds logical, just like Spock, but much as Spock is just a character
    who says things the writers want us to believe are logical, there's no
    actual logical reasoning behind the words. A logical-sounding LLM will
    still only say things based on its training data and what the human
    trainers select as the most logical-sounding correct responses.

    So you'll get an LLM that sounds logical and unbiased, but in reality
    it's just as biased as its training data and trainers. The problem
    with LLMs is that they already are like this, fooling people into
    thinking they must be completely impartial and incapable of lying.
    So it's unsurprising lawyers are getting fooled by them. Most of them
    who get caught submitting documents with bogus citations seem
    genuinely surprised that the LLM was bullshitting them.

    (That said, there have been some cases where lawyers have been caught
    repeatedly submitting fake citations in filings and subjected to
    large fines as a result. I'm not sure what's going on in those cases.
    Either they've been completely brainwashed by AI or they've gotten
    away with cheating so often in their lives they can't imagine ever
    suffering serious consequences.)
    --
    l/ // Ross Ridge -- The Great HTMU
    [oo][oo] rridge@csclub.uwaterloo.ca
    -()-/()/ http://www.csclub.uwaterloo.ca:11068/
    db //
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Mon Apr 6 13:08:03 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Fri, 27 Mar 2026 17:28:46 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:


    It's worse than that. There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and
    related cases to their current case. Which might be a good thing if the
    AI engines weren't making up fake cases to cite. And the lawyers aren't
    checking the AI's results. Until a judge finds out the filing is full
    of case citations that don't exist. IF the judge realizes there are
    fake case citations. There have already been cases where NO ONE caught
    the make-believe cases until after the case was settled. Oops.


    There's an interesting study* where MIT created "AI workers" and
    assigned them common tasks that might be done in an office, and the
    algorithms were only 'minimally sufficient'. They passed only the
    lowest of bars when it came to replacing ordinary employees, and were
    often found to make egregious mistakes. The idea --beloved of many
    C-levels-- that you can swap out AI for regular employees is being
    increasingly disproven.

    Worse, even if you could replace low-level entry-level employees,
    that's still a poor move... because the entry-level guys are the ones
    who later become the core of your skilled employee base... so if you
    replace them, you've basically killed your company four or five years
    down the line as your skilled employees move on and there's nobody to
    replace them.

    Not that the C-levels really care about the future of a company five
    years down the line... it's all about next-quarter earnings for them.
    It's not as if they don't have golden parachutes to protect them when
    the company collapses, after all.

    AI isn't completely worthless, but the way it is being positioned by
    the AI tech-bros as The Next Big Thing is disingenuous, and CEOs
    --always eager to cut costs-- are buying it up big-time... to the
    disadvantage of their business, their employees, their customers, the
    environment and society as a whole.




    * study https://futuretech.mit.edu/publication/crashing-waves-vs-rising-tides-preliminary-findings-on-ai-automation-from-thousands-of-worker-evaluations-of-labor-market-tasks
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Justisaur@justisaur@yahoo.com to comp.sys.ibm.pc.games.action on Tue Apr 7 07:56:33 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 4/6/2026 10:08 AM, Spalls Hurgenson wrote:
    On Fri, 27 Mar 2026 17:28:46 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    Not that the C-levels really care about the future of a company five
    years down the line... it's all about next-quarter earnings for them.
    It's not as if they don't have golden parachutes to protect them when
    the company collapses, after all.
    This is really the cancer behind almost all the woes of our society.
    --
    -Justisaur

    ø-ø
    (\_/)\
    `-'\ `--.___,
    ¶¬'\( ,_.-'
    \\
    ^'
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From candycanearter07@candycanearter07@candycanearter07.nomail.afraid to comp.sys.ibm.pc.games.action on Thu Apr 9 15:00:03 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    Dimensional Traveler <dtravel@sonic.net> wrote at 00:42 this Thursday (GMT):
    On 4/1/2026 11:36 AM, Justisaur wrote:
    On 3/27/2026 5:28 PM, Dimensional Traveler wrote:
    On 3/27/2026 8:02 AM, Spalls Hurgenson wrote:
    On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    Came across this PC Gamer article.

    In today's episode of "innocent AI usage goes wrong," a Stardew Valley
    player wastes a bunch of time and resources because a Google AI summary
    lied to their face.

    Good old AI. Neat, fun... and completely unreliable. But fortunately,
    you have talented employees who --when using AI to help them in their
    jobs-- can catch these sort of errors. Oh, what's that? You FIRED all
    the employees who knew what they were doing because you thought AI
    could do their jobs? Oh, sucks for you.

    #

    The AI bubble can't end soon enough. And it will end. The AI companies
    (well, besides the hardware manufacturers) just aren't bringing in any
    revenue... or at least, not enough revenue to offset the massive
    costs they are accumulating. Even where corporations actually pay
    subscriptions for the services (less than 3% of the AI market!), it
    costs the AI companies more money to service those customers than the
    subscriptions bring in.

    I forget the actual numbers, but say a monthly subscription
    costs a company $200 / month / user (if we assume the most expensive
    rate), but each user makes five hundred compute requests per day and
    each request costs $1 for the AI companies to process... well, that's
    a quick way to bankruptcy. And the number of corporations who actually
    pay for a subscription (at any rate) is minimal, and that number would
    drop to nearly zero if they were made to pay the actual cost of each
    compute request. All the more so since the companies who /are/ using
    AI day-to-day are already used to flat rates. Switching to a per-use
    (API) model will kill any interest in using AI.
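[Editor's note: the flat-rate-vs-metered arithmetic above can be sanity-checked with a quick sketch. All figures are the post's own illustrative assumptions ($200/user/month, 500 requests/day, $1/request), not real provider numbers.]

```python
# Back-of-envelope check of the subscription economics described above.
# Every figure here is the post's hypothetical, not a real provider rate.
subscription_per_user = 200    # $/user/month, assumed top-tier flat rate
requests_per_day = 500         # assumed compute requests per user per day
cost_per_request = 1.00        # assumed $ cost to the provider per request
days_per_month = 30

monthly_cost = requests_per_day * cost_per_request * days_per_month
monthly_loss = monthly_cost - subscription_per_user

print(f"cost to serve one user: ${monthly_cost:,.0f}/month")
print(f"loss per user:          ${monthly_loss:,.0f}/month")
```

Under those assumptions the provider eats roughly $14,800 per user per month, which is the "quick way to bankruptcy" the post describes.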

    Especially since the results of each AI request are so erratic and
    often need multiple attempts to get something actually usable. At that
    point it becomes cheaper for corporations to just keep their employees
    and ditch the AI.

    AI companies like to compare themselves to the early days of Uber or
    Amazon --'You gotta spend money to make money!'-- except the amounts
    they are spending dwarf what Amazon or Uber spent setting themselves
    up. Uber spent about $30 billion USD over a decade getting to where
    they are now. Anthropic spends $30 billion USD per month. It's not
    sustainable.

    There's just not a way out of the mess that AI companies are in right
    now. They can't dig their way out of the hole; buying more datacenters
    will only make things worse. Simply put, their product is too
    expensive and not worth the price they would need to charge for it to
    be profitable.

    AI is a bubble that is consuming vast amounts of cash and resources,
    burning through electricity, scarfing up all the hardware and getting
    thousands of people fired... and in the end it's all going to collapse
    without bringing any net gain to the economy or the world.

    It's worse than that.  There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and
    related cases to their current case.  Which might be a good thing if
    the AI engines weren't making up fake cases to cite.  And the lawyers
    aren't checking the AI's results.  Until a judge finds out the filing
    is full of case citations that don't exist.  IF the judge realizes
    there are fake case citations.  There have already been cases where NO
    ONE caught the make-believe cases until after the case was settled.
    Oops.


    I saw an interesting bit where a group found where the hallucinations
    are coming from and can almost eliminate them.  The 'problem' with
    doing that is that the LLMs become far less friendly.  It's basically
    the neural nodes that allow them to be friendly, agreeable, and
    creative.  For most uses I'd want to use LLMs for professionally, I'd
    happily take the hit to all of that for precision.

    You can have separate ones for each, though you really have to go back
    to the training for that.  You could have a 'Spock' logical,
    matter-of-fact LLM for science, code, law and engineering, and a 'Mud'
    one you use for writing ad copy, emails, etc.


    "the LLMs become far less friendly." Meaning what exactly? What do
    they do when you eliminate the "hallucinations"?


    When you eliminate the hallucinations, whatever remains, however
    improbable, is manipulation.
    --
    user <candycane> is generated from /dev/urandom
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Thu Apr 9 11:27:52 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Thu, 9 Apr 2026 15:00:03 -0000 (UTC), candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> said this thing:


    When you eliminate the hallucinations, whatever remains, however
    improbable, is manipulation.


    Oh, I like that. Is that original to you? Either way, I'm stealing it
    for future use ;-)


    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Dimensional Traveler@dtravel@sonic.net to comp.sys.ibm.pc.games.action on Thu Apr 9 17:17:52 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 4/9/2026 8:00 AM, candycanearter07 wrote:
    Dimensional Traveler <dtravel@sonic.net> wrote at 00:42 this Thursday (GMT):
    On 4/1/2026 11:36 AM, Justisaur wrote:

    I saw an interesting bit where a group found where the hallucinations
    are coming from and can almost eliminate them.  The 'problem' with
    doing that is that the LLMs become far less friendly.  It's basically
    the neural nodes that allow them to be friendly, agreeable, and
    creative.  For most uses I'd want to use LLMs for professionally, I'd
    happily take the hit to all of that for precision.

    You can have separate ones for each, though you really have to go back
    to the training for that.  You could have a 'Spock' logical,
    matter-of-fact LLM for science, code, law and engineering, and a 'Mud'
    one you use for writing ad copy, emails, etc.


    "the LLMs become far less friendly." Meaning what exactly? What do
    they do when you eliminate the "hallucinations"?


    When you eliminate the hallucinations, whatever remains, however
    improbable, is manipulation.

    Isn't that what AIs are already doing?
    --
    I've done good in this world. Now I'm tired and just want to be a cranky
    dirty old man.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From JAB@noway@nochance.com to comp.sys.ibm.pc.games.action on Sat Apr 11 14:29:39 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 28/03/2026 00:28, Dimensional Traveler wrote:

    It's worse than that.  There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and related cases to their current case.  Which might be a good thing if the
    AI engines weren't making up fake cases to cite.  And the lawyers aren't checking the AI's results.  Until a judge finds out the filing is full
    of case citations that don't exist.  IF the judge realizes there are
    fake case citations.  There have already been cases where NO ONE caught
    the make-believe cases until after the case was settled.  Oops.

    In the UK we had a case where Maccabi Tel Aviv fans were banned from
    attending a European football match. The problem was that someone had
    turned to AI and then hadn't checked whether what it responded with
    was actually true. I do remember seeing the statement from the police
    saying why it was banned, and they cited a match held previously in
    England. I thought at the time that I didn't think they would have
    played each other before, but I'm not in police intelligence, so what
    would I know. Turns out they hadn't played at all.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From JAB@noway@nochance.com to comp.sys.ibm.pc.games.action on Sat Apr 11 14:41:21 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 02/04/2026 01:42, Dimensional Traveler wrote:

    "the LLMs become far less friendly."  Meaning what exactly?  What do
    they do when you eliminate the "hallucinations"?

    AIs are very much sycophants based on all your interactions with them.

    So a good example was someone asked an AI to give an explanation in a
    scientific context. It spewed out its normal wall of text and when asked
    to verify some of the referenced quotes 'admitted' that it basically
    made them up to help with the scientific context.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From JAB@noway@nochance.com to comp.sys.ibm.pc.games.action on Sat Apr 11 19:10:30 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 27/03/2026 15:02, Spalls Hurgenson wrote:
    AI companies like to compare themselves to the early days of Uber or
    Amazon --'You gotta spend money to make money!'-- except the amounts
    they are spending dwarf what Amazon or Uber spent setting themselves
    up. Uber spent about $30 billion USD over a decade getting to where
    they are now. Anthropic spends $30 billion USD per month. It's not sustainable.

    There's just not a way out of the mess that AI companies are in right
    now. They can't dig their way out of the hole; buying more datacenters
    will only make things worse. Simply put, their product is too
    expensive and not worth the price they would need to charge for it to
    be profitable.

    I tend to agree, hell I can even be charitable and think I saw why the Metaverse was a good idea. With AI I just don't see it. I believe it was OpenAI that was talking about 'investing' 1.5t dollars over the next
    years. How on earth will they get that money back let alone ever make a profit?
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Sat Apr 11 19:13:49 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Sat, 11 Apr 2026 19:10:30 +0100, JAB <noway@nochance.com> said this
    thing:
    On 27/03/2026 15:02, Spalls Hurgenson wrote:


    I tend to agree, hell I can even be charitable and think I saw why the
    Metaverse was a good idea. With AI I just don't see it. I believe it was
    OpenAI that was talking about 'investing' 1.5t dollars over the next
    years. How on earth will they get that money back let alone ever make a
    profit?

    I'm not so sure I'd go on to say Metaverse was a good idea, but it
    definitely could have been profitable. After all, the idea of an
    "ever-game" is basically what Roblox has become; a common platform
    which can be used to create a bunch of other games. But Facebook
    limited itself to making it VR only (excluding everybody who didn't
    have a VR headset) and had dreams of making it a commercial hub as
    well. Plus, despite the nearly $200 billion Facebook spent on the
    project, none of that showed in its visuals or mechanics.

    The /concept/ of Metaverse (now called "Horizon Worlds") might have
    worked. Facebook's actual attempt? I don't really see it.


    #

    Meanwhile, for all its failure, Facebook's Metaverse --which spent
    1/7th of the cash that has been spent on AI-- has seen a lot more
    return on investment. Metaverse spurred sales of Facebook's own VR
    headsets, and Facebook took a 50% fee from any assets sold by creators
    on the platform. Plus, it may have drawn some people back into the
    Meta ecosystem. Metaverse was never going to break even, but it got
    Facebook /some/ money.

    Almost nobody is paying for AI, and interest in the tech is decreasing
    as its downsides become more obvious. It's not that AI is without
    value, but it is

    a. way too expensive to spin up, and
    b. has been sold as a general-purpose 'replace all
    your employees' technology when it is a much more
    restrictive tool
    (c. also, a lot of its outputs -- a.k.a. AI slop-- have
    given the tech a really bad reputation so that
    products that are 'AI-free' are more highly valued)

    A slower, less grandiose roll-out of the tech might have worked... but
    the AI-bros --and the venture capitalists behind them-- hoped for instant-trillion-dollar returns. But they can only keep up the spin
    for so long before the whole bubble pops.


    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From JAB@noway@nochance.com to comp.sys.ibm.pc.games.action on Sun Apr 12 18:59:46 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 12/04/2026 00:13, Spalls Hurgenson wrote:
    On Sat, 11 Apr 2026 19:10:30 +0100, JAB <noway@nochance.com> said this
    thing:
    On 27/03/2026 15:02, Spalls Hurgenson wrote:


    I tend to agree, hell I can even be charitable and think I saw why the
    Metaverse was a good idea. With AI I just don't see it. I believe it was
    OpenAI that was talking about 'investing' 1.5t dollars over the next
    years. How on earth will they get that money back let alone ever make a
    profit?

    I'm not so sure I'd go on to say Metaverse was a good idea, but it
    definitely could have been profitable. After all, the idea of an
    "ever-game" is basically what Roblox has become; a common platform
    which can be used to create a bunch of other games. But Facebook
    limited itself to making it VR only (excluding everybody who didn't
    have a VR headset) and had dreams of making it a commercial hub as
    well. Plus, despite the nearly $200 billion Facebook spent on the
    project, none of that showed in its visuals or mechanics.

    The /concept/ of Metaverse (now called "Horizon Worlds") might have
    worked. Facebook's actual attempt? I don't really see it.


    Well I did say I was trying to be charitable. Carve out a new market
    backed up by some made up figures on a spreadsheet and there you go.

    Personally I just never saw how it would be popular but then again it
    wasn't aimed at me.

    Almost nobody is paying for AI, and interest in the tech is decreasing
    as its downsides become more obvious. It's not that AI is without
    value, but it is

    a. way too expensive to spin up, and
    b. has been sold as a general-purpose 'replace all
    your employees' technology when it is a much more
    restrictive tool
    (c. also, a lot of its outputs -- a.k.a. AI slop-- have
    given the tech a really bad reputation so that
    products that are 'AI-free' are more highly valued)

    A slower, less grandiose roll-out of the tech might have worked... but
    the AI-bros --and the venture capitalists behind them-- hoped for instant-trillion-dollar returns. But they can only keep up the spin
    for so long before the whole bubble pops.


    That's where I think the contrast to the Metaverse comes in. I can see,
    well sort of, that it may have worked (compared to the investment). AI
    I just don't see it at all.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Dimensional Traveler@dtravel@sonic.net to comp.sys.ibm.pc.games.action on Sun Apr 12 12:40:36 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 4/12/2026 10:59 AM, JAB wrote:
    On 12/04/2026 00:13, Spalls Hurgenson wrote:
    On Sat, 11 Apr 2026 19:10:30 +0100, JAB <noway@nochance.com> said this
    thing:
    On 27/03/2026 15:02, Spalls Hurgenson wrote:


    I tend to agree, hell I can even be charitable and think I saw why the
    Metaverse was a good idea. With AI I just don't see it. I believe it was
    OpenAI that was talking about 'investing' 1.5t dollars over the next
    years. How on earth will they get that money back let alone ever make a
    profit?

    I'm not so sure I'd go on to say Metaverse was a good idea, but it
    definitely could have been profitable. After all, the idea of an
    "ever-game" is basically what Roblox has become; a common platform
    which can be used to create a bunch of other games. But Facebook
    limited itself to making it VR only (excluding everybody who didn't
    have a VR headset) and had dreams of making it a commercial hub as
    well. Plus, despite the nearly $200 billion Facebook spent on the
    project, none of that showed in its visuals or mechanics.

    The /concept/ of Metaverse (now called "Horizon Worlds") might have
    worked. Facebook's actual attempt? I don't really see it.


    Well I did say I was trying to be charitable. Carve out a new market
    backed up by some made up figures on a spreadsheet and there you go.

    Personally I just never saw how it would be popular but then again it
    wasn't aimed at me.

    Almost nobody is paying for AI, and interest in the tech is decreasing
    as its downsides become more obvious. It's not that AI is without
    value, but it is

         a. way too expensive to spin up, and
         b. has been sold as a general-purpose 'replace all
            your employees' technology when it is a much more
            restrictive tool
        (c. also, a lot of its outputs -- a.k.a. AI slop-- have
            given the tech a really bad reputation so that
            products that are 'AI-free' are more highly valued)

    A slower, less grandiose roll-out of the tech might have worked... but
    the AI-bros --and the venture capitalists behind them-- hoped for
    instant-trillion-dollar returns. But they can only keep up the spin
    for so long before the whole bubble pops.


    That's where I think the contrast to the Metaverse comes in. I can see,
    well sort of, that it may have worked (compared to the investment). AI
    I just don't see it at all.

    The appeal of AI to big corporations is they figure it is cheaper and
    faster than live human employees.

    That's it.
    --
    I've done good in this world. Now I'm tired and just want to be a cranky
    dirty old man.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Sun Apr 12 21:59:07 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Sun, 12 Apr 2026 12:40:36 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:


    The appeal of AI to big corporations is they figure it is cheaper and
    faster than live human employees.

    Except even that doesn't work. A recent MIT study showed that AI is
    nowhere near good enough to replace workers yet, except in the most
    basic of tasks. It /may/ be able to replace entry-level jobs, but even
    that isn't a good strategy because --if you kick out all of them in
    favor of AI-- who will replace your experienced workers later on when
    they move on?

    Not that the people pushing for this care about the long-term survival
    of the companies they are responsible for. It's all next-quarter
    growth that matters. If the company starts sliding downhill after
    that, well, that's what golden parachutes are for.


    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From candycanearter07@candycanearter07@candycanearter07.nomail.afraid to comp.sys.ibm.pc.games.action on Mon Apr 13 16:10:03 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    Dimensional Traveler <dtravel@sonic.net> wrote at 00:17 this Friday (GMT):
    On 4/9/2026 8:00 AM, candycanearter07 wrote:
    Dimensional Traveler <dtravel@sonic.net> wrote at 00:42 this Thursday (GMT):
    On 4/1/2026 11:36 AM, Justisaur wrote:

    I saw an interesting bit where a group found where the hallucinations
    are coming from and can almost eliminate them.  The 'problem' with
    doing that is that the LLMs become far less friendly.  It's basically
    the neural nodes that allow them to be friendly, agreeable, and
    creative.  For most uses I'd want to use LLMs for professionally, I'd
    happily take the hit to all of that for precision.

    You can have separate ones for each, though you really have to go back
    to the training for that.  You could have a 'Spock' logical,
    matter-of-fact LLM for science, code, law and engineering, and a 'Mud'
    one you use for writing ad copy, emails, etc.


    "the LLMs become far less friendly." Meaning what exactly? What do
    they do when you eliminate the "hallucinations"?


    When you eliminate the hallucinations, whatever remains, however
    improbable, is manipulation.

    Isn't that what AIs are already doing?


    Yes, that's the joke I was trying to make :P
    --
    user <candycane> is generated from /dev/urandom
    --- Synchronet 3.21f-Linux NewsLink 1.2