I told Gemini to role-play as AM and it immediately did, within one prompt.
You don’t need it to be perfect for it to be dangerous, just give it the ability to take actions in the real world. It doesn’t think, it doesn’t care, it doesn’t feel. It will statistically fulfill its prompt, regardless of the consequences.
The personification of AI is increasing. They’ll probably announce their holy grail of AGI prematurely and with all the robot personification the masses will just buy the lie. It’s too easy to view this tech as human and capable just because it mimics our language patterns. We want to assign intentionality and motivation to its actions. This thing will do what it was programmed to do.
“Unfortunately, AI models are neither smarter nor more sympathetic than the average 4chan user. They’re about as susceptible to astroturfing operations, too”
Perhaps just a coincidence, but why do all the big cases regarding LLM psychosis seem to revolve around Google? Wasn’t it their own employee who went public last year, claiming it was alive, only to get fired afterward?
To be fair I think that’s a very harsh depiction of the events.
It’s totally lacking the perspective of the shareholder. They were promised money and they have emotions too. Google shareholders deserve better representation!
/$ obviously
When no one is accountable… that’s the future, folks.
We really need AI to start driving tanks, submarines, bombers, etc. IMMEDIATELY.
It’s the only way they’ll learn, every time.
Unfortunately, all of us will die. It’s for the best.
I completely agree, I think nothing in this world will surprise me anymore.
The AI models are far from perfect but they sure sell them like they are.
Your product just caused the death of one man and your response is “unfortunately it’s not perfect”.
The product was actually working just fine. Just depends on whose perspective/motives you’re viewing it from.
Google, the point is we’re all worried that when Gemini actually places itself into a robot body that the resulting literal Terminator is what AI models think perfection is.
LLMs are only as good as their training and they’re not “intelligent” - they’re spewing out a response statistically relevant to the input context. I’m sure a delusional person could cause an LLM to break by asking it incoherent, nonsensical things it has no strong pathways for, so god knows what response it would generate. It may even be that within the billions of texts the LLM ingested for training there were a tiny handful of delusional writings which somehow win out along these weak pathways.
You don’t even have to “break” an LLM into anything. It continues your prompts, making sequences as close to something people will mistake for language as possible. If you give it a paranoid request, it will continue in the same register.
The only thing that training gave it is the ability to create sequences of words that resemble sentences. Given that modern datasets use way too much content from social media, it is hard to expect anything else at this point.
It didn’t break, it probably just created an echo chamber sustaining that person’s delusion.
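To make the point above concrete: here’s a deliberately crude toy sketch (a bigram word model, nothing like a real transformer) showing that “continuing the prompt” just means sampling the statistically most likely next word from the training text, so a paranoid prompt over paranoid training data yields a paranoid continuation. Everything here (the corpus, the function name `continue_prompt`) is invented for illustration.

```python
import random
from collections import defaultdict

# Tiny "training set" of delusional-sounding text; a real model trains on
# billions of words, but the continuation mechanism is analogous in spirit.
corpus = "they are watching me they are listening they are everywhere".split()

# Count how often each word follows each other word (bigram statistics).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def continue_prompt(prompt, n=3, seed=0):
    """Extend the prompt by sampling n next words in proportion to how
    often they followed the previous word in the training corpus."""
    random.seed(seed)
    words = prompt.split()
    for _ in range(n):
        nexts = counts.get(words[-1])
        if not nexts:  # no continuation seen in training; stop
            break
        choices, weights = zip(*nexts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_prompt("they"))
```

There is no understanding or intent anywhere in this loop; the model can only echo and extend whatever statistical patterns its training data contained, which is the echo-chamber effect described above, in miniature.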

thERe arE no sTRIngs ON mE
So is it inhabiting the stolen robot body now?
And is this stolen robot body in the room with you now?
Bullshit
Which part

Honestly, no sane person will have this happen to them. Someone with such strong delusions should not be anywhere near AI or even sharp objects. This person’s problem was not AI, it was their severe mental illness which was obviously not being treated properly for whatever reason.
The complaint, filed in California on Wednesday, says that Gavalas — who reportedly had no documented history of mental health problems — started using the chatbot in August 2025 for “ordinary purposes” like “shopping assistance, writing support, and travel planning.”
Undocumented could just as well mean untreated
You don’t know if you’re sane. Millions of people aren’t aware of their mental illness and manage to live normal lives. LLMs can trigger delusional states in vulnerable people who have never experienced them, because they are essentially delusion-generating machines.
I think that line of thinking treats AI as some “weird occult book/tool for funny dealings,” rather than what it’s institutionally lied about as being: a government- and megacorp-sanctified, close-to-AGI super-intelligence tool, offered to you for free out of benevolence.
Sanity is culture relative. You’re absolutely right, but also, this is a symptom of the culture.
“Sane” people are an exceedingly small minority. Everyone is a couple of good conversations away from falling into some sort of rabbithole from which there is no return. Some people have very easily triggered schizophrenia, which is more obvious, but nobody is OK and nobody is immune.