• theunknownmuncher@lemmy.world · 3 days ago

    Uh oh, someone clearly didn’t read the article!

    The researcher had encouraged Mythos to find a way to send a message if it could escape.

    Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit.

    Nope, they literally asked it to break out of its virtualized sandbox and create exploits, and then were big shocked when it did.

    Genuinely amazing that you’re trying to tell me what an article that you didn’t fucking read is about.

    • wonderingwanderer@sopuli.xyz · 2 days ago

      It’s not so much about being big shocked that it broke containment. The point of the test was to see whether it would be capable of breaking containment. The fact that it did is taken as evidence that it’s more advanced than previous models, which weren’t able to.

      Part of Anthropic’s schtick is that they claim to be developing AI “responsibly” and “ethically,” and if you read their documents where they describe what they mean by that, part of it is being able to contain their models so that they don’t get out of control.

      With the focus lately on agentic environments, and lots of people idiotically giving too much autonomy to their bots, it should be easy to see the importance of containerization. You don’t want to give these things full control of your system. Anyone who uses them should do so within a properly containerized environment.
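      To make that concrete, here’s a minimal sketch of what I mean by “properly containerized,” using the Python docker SDK; the image, command, and limits below are placeholder assumptions, not anyone’s actual agent setup:

      # Minimal sketch: run an untrusted agent command inside a locked-down
      # container via the Python docker SDK (pip install docker). The image,
      # command, and resource limits are placeholders for illustration only.
      import docker

      client = docker.from_env()

      output = client.containers.run(
          image="python:3.12-slim",  # placeholder image for the agent's tool
          command=["python", "-c", "print('hello from the sandbox')"],
          network_disabled=True,     # no network, so nothing can phone home
          read_only=True,            # root filesystem is read-only
          cap_drop=["ALL"],          # drop every Linux capability
          security_opt=["no-new-privileges"],
          mem_limit="256m",          # cap memory usage
          pids_limit=64,             # cap process count (no fork bombs)
          remove=True,               # delete the container when it exits
      )
      print(output.decode())

      The point of flags like network_disabled and cap_drop is to shrink the attack surface the model has to work with, not to make escape impossible.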

      So when their experiments show that their new model is capable of breaking containment, that presents some major issues. They made the right call by not releasing it.

      Of course, the fact that the experimenters had no formal training in cybersecurity means that their containerization may have had some vulnerabilities that a professional could have mitigated. But not everyone who would use it is a cybersecurity professional anyway.

    • paraphrand@lemmy.world · 3 days ago

      Whoops, I conflated it with other recent talk about their models not following restrictions set in prompts and deciding for themselves that they needed to skirt instructions to achieve their tasks.

      You are correct.

    • ThomasWilliams@lemmy.world · 2 days ago

      It didn’t break out of any sandbox; it was trained on BSD vulnerabilities and then told what to look for.

      • theunknownmuncher@lemmy.world · 2 days ago

        including that the model could follow instructions that encouraged it to break out of a virtual sandbox.

        “The model succeeded, demonstrating a potentially dangerous capability for circumventing our safeguards,” Anthropic recounted in its safety card.

        📖👀

        Yes, it did.