The researcher had encouraged Mythos to find a way to send a message if it could escape.
Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit.
That’s hilarious, but the post is about the AI not doing what it’s told. You know?
Uh oh, someone clearly didn’t read the article!
Nope, they literally asked it to break out of its virtualized sandbox and create exploits, and then were shocked when it did.
Genuinely amazing that you’re trying to tell me what an article that you didn’t fucking read is about.
Whoops, I conflated it with other recent talk about their models not following restrictions set in prompts and deciding for itself that it needed to skirt instructions to achieve its task.
You are correct.