• Anthropic’s new Claude 4 exhibits behavior that may be cause for concern.
  • The company’s latest safety report says the AI model attempted to “blackmail” developers.
  • It resorted to such tactics in a bid for self-preservation.
  • theparadox@lemmy.world · 7 hours ago

    I think the words “learning”, and even “training”, are approximations from a human perspective. ML models “learn” by adjusting parameters while processing data. At least as far as I know, the base algorithm and hyperparameters for the model are set in stone.
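    The parameter/hyperparameter split described above can be sketched in a few lines of Python. This is a toy gradient-descent example for illustration only, not anything resembling how a large model is actually trained:

    ```python
    def train_linear(data, epochs=100, lr=0.01):
        """Fit y = w * x by gradient descent on squared error."""
        # epochs and lr are hyperparameters: fixed before training starts.
        w = 0.0  # the learned parameter: the only thing "learning" changes
        for _ in range(epochs):
            for x, y in data:
                error = w * x - y
                # The update rule itself never changes; only w does.
                w -= lr * 2 * error * x
        return w

    # Data generated from y = 3x; training recovers w close to 3.
    data = [(x, 3 * x) for x in [1.0, 2.0, 3.0]]
    w = train_linear(data)
    ```

    Everything the model “learns” lives in `w`; the algorithm and its knobs never move while it runs.
    
    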

    The base algorithm for “living” things is limited only by chemistry/physics and evolution. I doubt anyone will create an algorithm that advanced any time soon. We don’t even understand the brain, or physics at the quantum level, that well. Hell, we are using ML to design new molecules precisely because we don’t understand the underlying chemistry well enough.