LLMs are only as good as their training data, and they're not "intelligent" - they're generating whatever response is statistically relevant to the input context. I'd expect a delusional person could push an LLM off the rails by feeding it incoherent, nonsensical prompts it has no strong pathways for, and then god knows what response it would generate. It may even be that among the billions of texts the LLM ingested during training there was a tiny handful of delusional writings that somehow win out on these weak pathways.
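To make the "weak pathways" intuition concrete, here's a minimal toy sketch of next-token sampling (not any real model's internals - the vocab and logits are made up). When one token's score dominates, output is stable; when the distribution is nearly flat, repeated samples wander unpredictably:

```python
import math
import random

# Toy next-token sampler: a model scores each candidate token (logits),
# and sampling from the resulting distribution produces the response.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, logits):
    probs = softmax(logits)
    r, acc = random.random(), 0.0
    for tok, p in zip(tokens, probs):
        acc += p
        if r <= acc:
            return tok
    return tokens[-1]

vocab = ["yes", "no", "maybe", "banana"]

# Familiar context: training carved one strong pathway, so one
# token dominates and the output is consistent.
strong_logits = [5.0, 0.1, 0.1, 0.1]

# Incoherent context: no strong pathway, a nearly flat distribution,
# so repeated samples drift across unrelated tokens.
weak_logits = [0.2, 0.1, 0.15, 0.18]

random.seed(0)
print("strong:", [sample(vocab, strong_logits) for _ in range(5)])
print("weak:  ", [sample(vocab, weak_logits) for _ in range(5)])
```

The "strong" case prints the same token almost every time; the "weak" case scatters - which is roughly what erratic responses to nonsensical prompts look like at the sampling level.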