• tiramichu@lemm.ee

    I think the reason non-tech people find this so difficult to comprehend is a poor understanding of which problems are easy for (classically programmed) computers to solve and which are hard.

    if ( person_at_crossing ) then { stop }
    

    To the layperson it makes sense that self-driving cars should be programmed this way. After all, this is a trivial problem for a human to solve: just look, and if there's a person, you stop. Easy peasy.

    But for a computer, how do you know? What is a ‘person’? What is a ‘crossing’? How do we know if the person is ‘at/on’ the crossing as opposed to simply near it or passing by?
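
    In practice the car never gets a clean person_at_crossing flag. Here's a rough Python sketch of what the decision actually has to work with — the detection output, the zone coordinates and the thresholds are all made up for illustration, not anyone's real API:

        # Hypothetical perception output: the model returns guesses with
        # confidence scores, not facts. All numbers here are invented.
        detections = [
            {"label": "person", "confidence": 0.83, "position": (4.2, 1.1)},
            {"label": "person", "confidence": 0.41, "position": (9.7, -3.0)},  # or a mailbox?
        ]

        # Assumed bounding box for the crosswalk, in metres relative to the car.
        CROSSING_ZONE = ((2.0, -2.0), (6.0, 2.0))

        def in_zone(pos, zone):
            (x1, y1), (x2, y2) = zone
            x, y = pos
            return x1 <= x <= x2 and y1 <= y <= y2

        # Every condition hides a judgement call: how confident is confident
        # enough? How close counts as "at" the crossing? Nothing is a crisp yes/no.
        should_stop = any(
            d["label"] == "person"
            and d["confidence"] > 0.6
            and in_zone(d["position"], CROSSING_ZONE)
            for d in detections
        )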

    To me it’s this disconnect between the common understanding of computer capability and the reality that causes the misconception.

    • Starbuck@lemmy.world

      I think you could liken it to training a young driver who doesn't share a language with you. You can demonstrate the behavior you want once or twice, but unless every observation demonstrates that behavior, you can't say "yes, we specifically told it to do that."
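
      In code terms, it's roughly the difference below (a toy sketch with made-up features and data, using scikit-learn purely for illustration): the behavior lives in fitted weights shaped by whatever the demonstrations happened to cover, not in an explicit rule you can point at.

          import numpy as np
          from sklearn.linear_model import LogisticRegression

          # Toy demonstrations: [distance_to_person_m, person_in_crosswalk] -> did the demonstrator brake?
          X = np.array([[2.0, 1], [4.0, 1], [3.0, 0], [15.0, 0], [25.0, 1], [30.0, 0]])
          y = np.array([1, 1, 1, 0, 0, 0])

          model = LogisticRegression().fit(X, y)

          # There is no line saying "stop for people at crossings" -- only weights
          # that happen to brake in situations resembling the demonstrations.
          print(model.coef_, model.intercept_)
          print(model.predict([[5.0, 1]]))  # brakes only if the learned weights say so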