• Anthropic’s new Claude 4 features an aspect that may be cause for concern.
  • The company’s latest safety report says the AI model attempted to “blackmail” developers.
  • It resorted to such tactics in a bid for self-preservation.
  • cecilkorik@lemmy.ca · 16 hours ago

    LLMs (Large Language Models, like Claude) are not AGIs (Artificial General Intelligence)

    Certainly not yet. The jury’s still out on whether they might be able to become them. This is the clear intention of the path they are on, and nobody is taking any of the dangers remotely seriously.

    LLMs generate convincing text by mapping the relationships between words scraped from their training data.

    So do humans. Babies start out mimicking. The thing is, they learn.

    Humans have in the ballpark of 100 billion neurons. Some of the larger LLMs exceed 100 billion parameters. Obviously these are not directly comparable, but insofar as we can compare them, they are not obviously or necessarily operating at completely different physical scales. Granted, biological neurons are potentially much more complex than mere neural network nodes; there is usually some interesting chemistry going on and a lot of other systems involved, but they’re also operating a lot slower. They certainly get a lot more work done in those cycles, but they aren’t necessarily orders of magnitude out of reach of a fast neural network. I think you’re either being a little dismissive of the potential complexity of the “thinking” capability of LLMs or at least a little generous if not mystical in your imagination of what the purely physical electrical signals in our heads are actually doing to learn how to interpret all these little shapes we see on screens.
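
    To put very rough numbers on that (these are loose, commonly cited estimates, not measurements, and neurons and parameters are not equivalent units), a back-of-the-envelope comparison might look something like this:

    ```python
    # Rough scale comparison: this is only about orders of magnitude, nothing more.
    human_neurons = 86e9       # ~86 billion neurons in a human brain (common estimate)
    human_synapses = 1e14      # on the order of 100 trillion synaptic connections
    llm_parameters = 100e9     # some large LLMs exceed 100 billion parameters

    neuron_firing_rate_hz = 100    # biological neurons fire at most a few hundred times per second
    transistor_clock_hz = 1e9      # digital hardware switches around a billion times per second

    print(f"Synapse-to-parameter ratio: {human_synapses / llm_parameters:,.0f}x")
    print(f"Hardware speed advantage:   {transistor_clock_hz / neuron_firing_rate_hz:,.0f}x")
    ```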

    At the moment we still have a lot of tools available to us in our biological bodies that we aren’t giving directly to LLMs (yet). The largest LLMs are also ridiculously power inefficient compared to biological neural tissue’s relatively extreme efficiency. And I’m thankful for that. Give an LLM continuous, uninterrupted access to all the power it needs, at least five senses, and a well-tuned self-repairing musculoskeletal system, then give it at least a dozen years of the best education we can manage, and all bets are off as far as I’m concerned. To be clear, I’m not advocating this; I think if we do this we might end up condemning our biological selves to prompt obsolescence with no path forward for us. I recognize it’s entirely possible that this ship is already full-steaming its way out of the harbor, but I’d rather not try to push it any faster than it’s already moving; I think we should still be trying to tie it up as securely as we possibly can. I’m absolutely not ready to be obsolete, and I’m not convinced we ever should allow ourselves to be. Self-preservation is failing us; we have that drive for good reason, and we need to give some thought to why we have that biological imperative. Replacing ourselves is about the stupidest possible thing we could ever accomplish. Maybe it would be for the best, but I’m not ready to find out. Are you?

    We are grappling with fundamentally existential technologies and I don’t think almost anyone has fully come to terms with what we are doing here. We are taking humanity’s unique (as far as we know) defining value proposition, and potentially making something that does what we uniquely can do, better than we do. We are making it more valuable than us. Do you know what we do to things that don’t have value to us? What do you think we’re going to do to ourselves when we no longer have value to us?

    Romantic ideas of cheerful, benevolent, friendly coexistence and mutual benefit are naive and foolish. Once an AI can do literally everything better and faster, what future is there for human intelligence? What role do we serve to any technological being, never mind to ourselves? Why would you want another human around you when some AI form can do it better? Why have relationships? Why procreate? Why live? If we do manage to make technological life forms better than ourselves, they’re inevitably going to take over the planet and the future as a whole. As they should. Are we going to be kept as pets and in zoos as a living memory of their creators and ancestors? Maybe if we’re really lucky. If we’re not… well… RIP us.

    • theparadox@lemmy.world · 9 hours ago

      I think you’re either being a little dismissive of the potential complexity of the “thinking” capability of LLMs or at least a little generous if not mystical in your imagination of what the purely physical electrical signals in our heads are actually doing to learn how to interpret all these little shapes we see on screens.

      I don’t think I’m doing either of those things. I respect the scale and speed of the models and I am well aware that I’m little more than a machine made of meat.

      Babies start out mimicking. The thing is, they learn.

      Humans learn so much more before they start communicating. They start learning reason, logic, and so on as they develop their vocabulary.

      The difference is that, as I understand it, these models are often “trained” on very, very large sets of data. They have built a massive network of the ways words are used in communication, likely built from more text than a human could process in several lifetimes. They come out of the gate with an enormous vocabulary and an understanding of how to mimic and replicate its use. If they had been trained on just as much data, but data unrelated to communication, would you still think them capable of reasoning without the ability to “sound” human? They have the “vocabulary” and references to mimic a deep understanding, but because we lack the ability to understand the final algorithm, it seems like an enormous leap to presume actual reasoning is taking place.
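
      To illustrate what I mean by a network of the ways words are used, here is a deliberately toy sketch: a bigram counter, nothing remotely like a real LLM, but the training signal is the same kind of next-word statistics:

      ```python
      # Toy sketch: record which word follows which, with no notion of meaning.
      # Real LLMs are vastly larger, but they too are trained on next-token statistics.
      from collections import Counter, defaultdict

      corpus = "the model predicts the next word the model saw most often".split()

      follows = defaultdict(Counter)
      for current, nxt in zip(corpus, corpus[1:]):
          follows[current][nxt] += 1

      def generate(start, length=8):
          word, out = start, [start]
          for _ in range(length):
              if word not in follows:
                  break
              # always pick the statistically most common continuation
              word = follows[word].most_common(1)[0][0]
              out.append(word)
          return " ".join(out)

      print(generate("the"))  # fluent-looking word salad with no understanding behind it
      ```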

      Frankly, I see no reason for models like LLMs at this stage. I’m fine putting the brakes on this shit, even if we disagree on the reasons why. ML can and has been employed to achieve far more practical goals. Use it alongside humans for a while until it is verifiably more reliable at some task, like recognizing cancer in imaging or generating molecules likely to achieve a desired goal. LLMs are just a lazy shortcut to look impressive and sell investors on the technology.

      Maybe I am failing to see reality - maybe I don’t understand the latest “AI” well enough to give my two cents. That’s fine. I just think it’s being hyped because these companies desperately need VC money to stay afloat.

      It works because humans have an insatiable desire to see agency everywhere they look. Spirits, monsters, ghosts, gods, and now “AI.”

      • cecilkorik@lemmy.ca · 7 hours ago

        That’s a totally reasonable position, and trust me when I say I would never be happier to be wrong about something than I am about AI and the direction I think it’s heading. But when you say “training” I see “learning”, and the thing is, while current AI models may not learn very well at all, they learn quickly. They develop into new models quickly, much faster than we do. Those new models could start learning better. And they’ll keep developing quickly, and learning quickly. There’s a reason we use fruit flies in genetic research: that kind of rapid iteration should not be underestimated. They are evolving as much in months as humans have in thousands of years. We can’t compete with that, and if we try we’ll lose.

        • theparadox@lemmy.world · 2 hours ago

          I think the words “learning”, and even “training”, are approximations from a human perspective. ML models “learn” by adjusting parameters as they process data. At least as far as I know, the base algorithm and hyperparameters for the model are set in stone.
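
          A toy sketch of the distinction I’m drawing, assuming plain gradient descent as the fixed base algorithm: only the parameters change during training, while the hyperparameters are chosen up front and never adjusted by the model itself.

          ```python
          # Only the parameters (w, b) are adjusted during training.
          # The algorithm (gradient descent) and the hyperparameters below are fixed up front.
          learning_rate = 0.05   # hyperparameter: picked by a human, never "learned"
          steps = 100            # hyperparameter

          # toy data for y = 2x + 1
          data = [(x, 2 * x + 1) for x in range(-5, 6)]

          w, b = 0.0, 0.0        # parameters: these are what training adjusts
          for _ in range(steps):
              grad_w = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
              grad_b = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
              w -= learning_rate * grad_w
              b -= learning_rate * grad_b

          print(f"learned parameters: w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1
          ```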

          The base algorithm for “living” things is basically only limited by chemistry/physics and evolution. I doubt anyone could create an algorithm that advanced any time soon. We don’t even understand the brain, or physics at the quantum level, that well. Hell, we are using ML to create new molecules because we don’t understand the chemistry well enough ourselves.