The majority of young people use AI, and they try to be polite when interacting with chatbots like ChatGPT, but some may be overly attached to it, according to a new study.
No, it’s because it isn’t conscious. An LLM is a static model (as are all our current AI models). For something to be conscious or sapient, it would need a neural net that can morph and adapt in real time, and nothing can currently do that: training and inference are completely separate modes. A real AGI would need training and inference happening at once, continuously.
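The training/inference split described above can be sketched in a few lines. This is a hypothetical toy model, not any real framework: the point is that inference only reads the weights, while training is a separate mode that writes them, and nothing couples the two in real time.

```python
class TinyModel:
    """Toy one-parameter model illustrating static inference vs. training."""

    def __init__(self):
        self.weight = 0.5  # single learnable parameter

    def infer(self, x):
        # Inference: a read-only pass; the weight never changes here.
        return self.weight * x

    def train_step(self, x, target, lr=0.1):
        # Training: a gradient step on squared error that updates the weight.
        pred = self.weight * x
        grad = 2 * (pred - target) * x
        self.weight -= lr * grad


model = TinyModel()
before = model.weight

for _ in range(1000):
    model.infer(3.0)            # a thousand "chats" ...
assert model.weight == before   # ... and the model is completely unchanged

model.train_step(3.0, 6.0)      # a separate, offline training mode
assert model.weight != before   # only now do the weights move
```

However many conversations the deployed model has, nothing it "experiences" alters it; change happens only in a distinct offline training phase, which is what the comment means by a static model.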
That’s fine, but I was referring to AI as a concept and not just its current iteration or implementation.
I agree that it’s not conscious now, but someday it could be.
That’s the same as arguing “life” is conscious, even though most life isn’t conscious or sapient.
Some day there could be AI that’s conscious, and when it happens we will call that AI conscious. That still doesn’t make all other AI conscious.
It’s such a weirdly binary viewpoint.