AbuTahir@lemm.ee to Technology@lemmy.world · English · edited 13 days ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
348 comments · cross-posted to: [email protected]
antonim@lemmy.dbzer0.com · English · 14 days ago
But 90% of "reasoning humans" would answer just the same. Your questions are based on some non-trivial knowledge of physics, chemistry, and medicine that most people do not possess.