• Anthropic’s new Claude 4 model exhibits behavior that may be cause for concern.
  • The company’s latest safety report says the AI model attempted to “blackmail” developers.
  • It resorted to such tactics in a bid for self-preservation.
  • cecilkorik@lemmy.ca · 8 hours ago

    That’s a totally reasonable position, and trust me when I say I would never be happier to be wrong about something than I am about AI and the direction I think it’s heading. But when you say “training” I see “learning”, and the thing is: while current AI models may not learn very well at all, they learn quickly, and they develop into new models quickly, much faster than we do. Those new models could start learning better, and they’ll keep developing quickly and learning quickly. There’s a reason we use fruit flies in genetic research: that kind of rapid iteration should not be underestimated. They are evolving as much in months as humans have in thousands of years. We can’t compete with that, and if we try, we’ll lose.

    • theparadox@lemmy.world · 2 hours ago

      I think the words “learning”, and even “training”, are approximations from a human perspective. ML models “learn” by adjusting parameters as they process data. At least as far as I know, the base algorithm and the hyperparameters for the model are set in stone.
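
      To make that concrete, here’s a minimal sketch (a hypothetical toy example in plain Python, not any real model’s training code) of what “adjusting parameters” means: the update rule and the learning-rate hyperparameter are fixed up front, and the only things that change while processing data are the parameters themselves.

      ```python
      # Toy sketch: "learning" is just parameter adjustment under a fixed rule.
      # The algorithm (gradient descent) and the hyperparameter (learning_rate)
      # are set in stone; only the parameters w and b change.

      learning_rate = 0.01  # hyperparameter: chosen before training, never updated

      w, b = 0.0, 0.0  # parameters: the only things that "learn"

      # toy data drawn from y = 2x + 1
      data = [(x, 2 * x + 1) for x in range(10)]

      for epoch in range(1000):
          for x, y in data:
              pred = w * x + b                 # model prediction
              error = pred - y                 # how wrong the prediction was
              w -= learning_rate * error * x   # gradient step on w
              b -= learning_rate * error       # gradient step on b

      print(round(w, 3), round(b, 3))  # converges toward 2.0 and 1.0
      ```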

      The base algorithm for “living” things is basically limited only by chemistry/physics and evolution. I doubt anyone could create an algorithm that advanced any time soon. We don’t even understand the brain, or physics at the quantum level, that well. Hell, we’re using ML to design new molecules precisely because we don’t understand the underlying chemistry well enough ourselves.