James Cameron on AI: “I warned you guys in 1984 and you didn’t listen”

  • Synchrome@lemmy.world
    1 year ago

    I think it will come… Right now, weak AI is highly compartmentalized (one system produces text, another produces language, another scans videos, etc.). What would happen if these different parts were integrated and then given a robotic body with sensors? What if its “session” could run indefinitely? Isn’t this, in essence, what we are?

    • Orphie Baby@lemmy.world
      1 year ago

      Not much, because it turns out there’s more to AI than a hypothetical sum of what we’ve already created.

        • Orphie Baby@lemmy.world
          1 year ago

          It’s not about improvement; it’s about actual AI being completely different technology that works in a completely different way.

        • LetMeEatCake@lemmy.world
          1 year ago

          That’s not what they said.

          What people are calling “AI” today is not AI in the sense that laypeople understand it. Personally, I hate the use of the term in this context and think it would have been much better to stick with machine learning (often just ML). Regardless, the point is that you cannot get from these systems to what you think of as AI. Getting there would require new, different systems, or changing these ones so thoroughly as to make them unrecognizable from their origins.

          If you put e.g. ChatGPT into a robotic body with sensors… you’d get nothing. It has no concept of a body, no concept of controlling the body, no concept of operating outside the constraints within which it already operates. You could debate whether it has some inhuman concept of language, but that’s about as far as the debate can go.

          To put it another way: what happens if you connect the algorithms controlling a video game NPC to a robotic body? Absolutely nothing. Same deal here.

        • eee@lemm.ee
          1 year ago

          Not the guy you were replying to, but it’s not so much “improve” as “another paradigm shift is still needed”.

          A “robotic body with sensors” has been around since 1999. But no matter how many sensors, no matter how lifelike it is, and no matter how many machine-learning algorithms/LLMs are thrown in, it is still not capable of independent thought. Anything that goes wrong is still due to human error in setting parameters.

          To get to a Terminator-level intelligence, we need the machine to be capable of independent thought. Comparing independent thought to our current generative AI technology is like comparing a jet plane to a horse-drawn carriage: you can call it “advancement”, yes, but many intermediate steps have to happen first. Just as the internal combustion engine is the link between horse-drawn carriages and planes, some form of independent thought is the link between generative AI and actually intelligent machines.