• thedeadwalking4242@lemmy.world
    +10 · 6 days ago

    I told Gemini to role play as AM and it immediately did within 1 prompt.

    You don’t need it to be perfect for it to be dangerous; you just need to give it the access to take actions in the real world. It doesn’t think, it doesn’t care, it doesn’t feel. It will statistically fulfill its prompt, regardless of the consequences.

  • njordomir@lemmy.world
    +9 · 6 days ago

    The personification of AI is increasing. They’ll probably announce their holy grail of AGI prematurely and with all the robot personification the masses will just buy the lie. It’s too easy to view this tech as human and capable just because it mimics our language patterns. We want to assign intentionality and motivation to its actions. This thing will do what it was programmed to do.

  • XeroxCool@lemmy.world
    +23 · 7 days ago

    “Unfortunately, AI models are neither smarter nor more sympathetic than the average 4chan user. They’re about as susceptible to astroturfing operations, too”

    • partofthevoice@lemmy.zip
      +16 · 7 days ago

      Perhaps just a coincidence, but why do all the big cases regarding LLM psychosis seem to revolve around Google? Wasn’t it their own employee who went public last year, claiming it was alive, only to get fired afterward?

  • utopiah@lemmy.world
    +13 · 6 days ago

    To be fair I think that’s a very harsh depiction of the events.

    It’s totally lacking the perspective of the shareholder. They were promised money and they have emotions too. Google shareholders deserve better representation!

    /$ obviously

  • Mulligrubs@lemmy.world
    +12 −1 · 7 days ago

    We really need AI to start driving tanks, submarines, bombers, etc. IMMEDIATELY.

    It’s the only way they’ll learn, every time.

    Unfortunately, all of us will die. It’s for the best.

  • khánh@lemmy.zip
    +4 −1 · 6 days ago

    Your product just caused the death of one man, and your response is “unfortunately it’s not perfect”.

    • TwilitSky@lemmy.world
      +1 · 6 days ago

      The product was actually working just fine. Just depends on whose perspective/motives you’re viewing it from.

  • EightBitBlood@lemmy.world
    +6 · 7 days ago

    Google, the point is we’re all worried that when Gemini actually places itself into a robot body, the resulting literal Terminator will be what AI models think perfection is.

  • arc99@lemmy.world
    +4 · 6 days ago

    LLMs are only as good as their training, and they’re not “intelligent”; they’re spewing out a response statistically relevant to the input context. I’m sure a delusional person could cause an LLM to break by asking it incoherent, nonsensical things it has no strong pathways for, so god knows what response it would generate. It may even be that within the billions of texts the LLM ingested for training there were a tiny handful of delusional writings which somehow win out on these weak pathways.

    • Nalivai@lemmy.world
      +3 · 6 days ago

      You don’t even have to “break” an LLM into anything. It continues your prompts, producing output as close as possible to something people will mistake for language. If you give it a paranoid prompt, it will continue in the same register.
      The only thing training gave it is the ability to create sequences of words that resemble sentences.

    • BilSabab@lemmy.world
      +2 · 6 days ago

      Given that modern datasets use far too much content from social media, it’s hard to expect anything else at this point.

    • Hiro8811@lemmy.world
      +1 · 6 days ago

      It didn’t break; it probably just created an echo chamber sustaining that person’s delusions.

  • Matt@lemmy.world
    +5 −5 · 6 days ago

    Honestly, no sane person will have this happen to them. Someone with such strong delusions should not be anywhere near AI or even sharp objects. This person’s problem was not AI, it was their severe mental illness which was obviously not being treated properly for whatever reason.

    • Areldyb@lemmy.world
      +9 · 6 days ago

      The complaint, filed in California on Wednesday, says that Gavalas — who reportedly had no documented history of mental health problems — started using the chatbot in August 2025 for “ordinary purposes” like “shopping assistance, writing support, and travel planning.”

    • chiliedogg@lemmy.world
      +4 −1 · 6 days ago

      You don’t know if you’re sane. Millions of people aren’t aware of their mental illness and manage to live normal lives. LLMs can trigger delusional states in vulnerable people who have never experienced them, because LLMs are essentially delusion-generating machines.

    • Eximius@lemmy.world
      +2 · 6 days ago

      I think that line of thinking treats AI as a “weird occult book/tool about funny dealings”, rather than as the “government- and megacorp-sanctified, close-to-AGI super-intelligence tool, free for you to use out of benevolence” it is institutionally lied up to be.

      Sanity is culture-relative. You’re absolutely right, but this is also a symptom of the culture.

    • Nalivai@lemmy.world
      +1 −1 · 6 days ago

      “Sane” people are an exceedingly small minority. Everyone is a couple of good conversations away from falling into some sort of rabbit hole from which there is no return. Some people have very easily triggered schizophrenia, which is more obvious, but nobody is OK and nobody is immune.