A new paper from Apple's artificial intelligence scientists has found that engines based on large language models, such as those from Meta and OpenAI, still lack basic reasoning skills.
Do we know how human brains reason? Not really… Do we have an abundance of long chains of reasoning we can use as training data?
…no.
So we don’t have the training data, then, to get language models to talk through their reasoning, especially not in novel or personable ways.
But also - even if we did, that wouldn’t produce ‘thought’ any more than a book about thought can produce thought.
Thinking is relational. It requires an internal self-awareness. We can’t discuss that in text so thoroughly that a book suddenly becomes conscious.
This is the idea that"Sentience can’t come from semantics"… More is needed than that.
I like your comment here, just one reflection:
I think it’s like the chicken and the egg: they both come together… one could try to argue that self-awareness comes from thinking, in the fashion of “I think, so I am.”