• CeeBee_Eh@lemmy.world · 12 hours ago (edited)

    No, it’s because it isn’t conscious. An LLM is a static model (in fact, all our current AI models are). For something to be conscious or sapient, it would need a neural net that can morph and adapt in real time, and nothing can currently do that. Training and inference are completely separate modes; a real AGI would need training and inference happening at once, continuously.
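
    To make the training/inference split concrete, here’s a minimal PyTorch sketch (the toy model and data are hypothetical, purely for illustration): weights only change during the explicit training step; at inference they’re frozen no matter how many times you query the model.

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical toy model, just to illustrate the two modes.
    model = nn.Linear(4, 2)

    # --- Training mode: weights change ---
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(8, 4), torch.randn(8, 2)
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()       # weights are updated here, and only here

    # --- Inference mode: weights are frozen ---
    model.eval()
    with torch.no_grad():  # no gradients, no updates
        out = model(torch.randn(1, 4))
    # However many queries you run in this mode, the weights never change.
    ```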

    • dissipatersshik@ttrpg.network · 12 hours ago

      That’s fine, but I was referring to AI as a concept and not just its current iteration or implementation.

      I agree that it’s not conscious now, but someday it could be.

      • CeeBee_Eh@lemmy.world · 9 hours ago

        That’s the same as arguing “life” is conscious, even though most life isn’t conscious or sapient.

        Someday there could be an AI that’s conscious, and when that happens we’ll call that AI conscious. That still doesn’t make all other AI conscious.

        It’s such a weirdly binary viewpoint.