The Picard Maneuver@lemmy.world to Just Post@lemmy.world · 1 month ago
LLM hallucinations
morrowind@lemmy.ml · 1 month ago
The key difference is humans are aware of what they know and don't know and when they're unsure of an answer. We haven't cracked that for AIs yet. When AIs do say they're unsure, that's their understanding of the problem, not an awareness of their own knowledge.
FundMECFS@lemmy.blahaj.zone · 1 month ago
> The key difference is humans are aware of what they know and don't know
If this were true, the world would be a far, far better place. Humans gobble up all sorts of nonsense because they "learnt" it. Same for LLMs.
morrowind@lemmy.ml · 1 month ago
I'm not saying humans are always aware of when they're correct, merely how confident they are. You can still be confidently wrong and know all sorts of incorrect info. LLMs aren't aware of anything like self-confidence.