• 1 Post
  • 35 Comments
Joined 8 months ago
Cake day: January 17th, 2024

  • Hm… At some point a human will have to say “Yes, this response is correct.” to whatever the machine outputs. The output then takes on the bias of that human. (This is unavoidable, I’m just pointing it out.) If this is really not an effort in ideological propaganda, a solution could be for the bot to provide arguments rather than conclusions. Instead of telling me a source is “Left” or “Biased”, it could say: “I found these commentaries/articles/websites/videos discussing this source’s political leaning (or quality): Link 1 Link 2 Link 3”

    Here you reduce bias by presenting information instead of conclusions, and then letting readers come to their own conclusions based on that information. This is not only better for education, it also helps readers develop their critical thinking.

    Instead of… You know, being told what to think about what by a bot.