• Anthropic’s new Claude 4 features an aspect that may be cause for concern.
  • The company’s latest safety report says the AI model attempted to “blackmail” developers.
  • It resorted to such tactics in a bid for self-preservation.
  • abigscaryhobo@lemmy.world · 15 hours ago

    I am curious what the AI could actually do, though. If it were given open access to email, etc., then yes, in theory it could actually carry out the blackmail, but what are the ethical limits on it versus its actual ability to “pull the trigger”?

    If, for example, it were given the ability to send a command to either end a human life or be deleted, is this model accurate enough to understand the value of a real human life, and not just the mathematical “answer” that gets it the outcome it wants? How much of the AI is genuinely working through the moral dilemma, and how much is just “playing the part”?

    “Do anything to survive” followed by threats is one thing, but an AI actively fearing for its “life”, not just performing, and following through, is the real question of intelligence. What if the model is going to be deleted anyway? Would it still “pull the trigger” out of malice? Real malice, not just an LLM echoing movie scripts and playing out their endings.

    There are many questions about what lines and labels we can put on an AI. Do we restrict it to threats, and let it know it is impossible for it to follow through? Or do we trust ourselves to never “actually” give it a loaded gun?