AbuTahir@lemm.ee to Technology@lemmy.world · English · edited 1 day ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
El Barto@lemmy.world · 15 hours ago
LLMs deal with tokens. Essentially, predicting a series of bytes. Humans do much, much, much, much, much, much, much more than that.
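For readers unfamiliar with the claim, "predicting a series of tokens" boils down to a loop: given the text so far, pick a likely next token, append it, and repeat. The sketch below illustrates only that loop, using a hypothetical hand-written bigram table in place of a model; a real LLM computes its next-token distribution with a neural network over a vocabulary of tens of thousands of tokens, not a lookup table.

```python
import random

# Hypothetical toy "model": for each token, a probability distribution over the next token.
# This stands in for the neural network a real LLM would use -- it is not any real model's weights.
BIGRAM_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "<end>": 0.3},
    "dog": {"sat": 0.6, "<end>": 0.4},
    "sat": {"<end>": 1.0},
}

def sample_next(token: str) -> str:
    """Pick the next token according to the toy distribution for the current token."""
    dist = BIGRAM_PROBS[token]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

def generate(max_tokens: int = 10) -> str:
    """The whole 'generation' loop: predict a token, append it, repeat until an end marker."""
    tokens = ["<start>"]
    for _ in range(max_tokens):
        nxt = sample_next(tokens[-1])
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens[1:])

if __name__ == "__main__":
    print(generate())  # e.g. "the cat sat"
```

Whether running that loop at vast scale amounts to "reasoning" is exactly what the thread (and the Apple paper) disputes.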
Zexks@lemmy.world · 1 hour ago
No. They don’t. We just call them proteins.
“They”. What are you?
stickly@lemmy.world · 22 minutes ago
You are either vastly overestimating the Language part of an LLM or simplifying human physiology back to the Greeks’ Four Humours theory.