• stickly@lemmy.world · 12 days ago

    I apologize if my phrasing is combative; I have experience with this topic and a knee-jerk reaction against endorsing AI as a literacy tool.

    Your argument is flawed because it implicitly assumes that critical thinking can be offloaded to a tool. One of my favorite quotes on that:

    The point of cognitive automation is NOT to enhance thinking. The point of it is to avoid thinking in the first place.

    (coincidentally from an article on the topic of LLM use for propaganda)

    You can’t “open source” a model in a meaningful and verifiable way. Datasets are massive and, even if you had the compute to audit them, poisoning can be much more subtle than explicitly trashing the dataset.

    For example, did you know you can control bias just by changing the ordering of the dataset? There’s an interesting article from the same author that covers well known poisoning vectors, and that’s already a few years old.
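
    To make the ordering point concrete, here's a toy sketch (my own illustration, not the article's experiment): one pass of plain SGD over the exact same data in two different orders produces two noticeably different models, because the last-seen examples dominate the final weights.

    ```python
    # Toy illustration: same data, two different orderings, two different models.
    # Plain one-pass SGD (no shuffling) drifts toward whatever it saw last.
    import numpy as np

    rng = np.random.default_rng(0)

    # Two clusters of 2-D points, labelled 0 and 1.
    X0 = rng.normal(loc=-1.0, scale=1.0, size=(500, 2))
    X1 = rng.normal(loc=+1.0, scale=1.0, size=(500, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * 500 + [1] * 500)

    def train_sgd(X, y, lr=0.1):
        """One pass of logistic-regression SGD in the given order, no shuffling."""
        w = np.zeros(2)
        b = 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-(xi @ w + b)))
            w -= lr * (p - yi) * xi
            b -= lr * (p - yi)
        return w, b

    def mean_prediction(w, b, X):
        """Average predicted probability of class 1 over the whole dataset."""
        return float(np.mean(1.0 / (1.0 + np.exp(-(X @ w + b)))))

    # Ordering A: all class-0 examples first, class-1 last.
    wA, bA = train_sgd(X, y)

    # Ordering B: the exact same examples, class-1 first, class-0 last.
    order = np.concatenate([np.arange(500, 1000), np.arange(500)])
    wB, bB = train_sgd(X[order], y[order])

    # Same data, different order -> the models disagree on their average prediction.
    print("ordering A mean P(class=1):", mean_prediction(wA, bA, X))
    print("ordering B mean P(class=1):", mean_prediction(wB, bB, X))
    ```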

    These problems are baked into any AI at this scale, regardless of implementation. The idea that we can invent a way out of a misinformation hell of our own design is a mirage. The solution will always be to limit exposure and make media literacy a priority.

    • LarmyOfLone@lemm.ee · 12 days ago

      Hmm, very interesting info, thanks. Research about bias and poisoning is very important, but why assume this can't be overcome in the future, for example by training advanced AI models specifically to understand the causes of bias and to filter or flag it?

      So my hope is that it IS technically possible to develop an AI model that can both reason better and analyze news sources, journalists, their affiliations, their motivations and historical actions, and that can be tested or audited for bias (in the simplest case a kind of litmus test). Such a model could be used instead of something like Google, integrated into the browser (like Firefox), to inform users about the propaganda around topics and in articles. I don't see anything that precludes this possibility or this goal.
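
      I imagine the simplest version of that litmus test looking something like the sketch below. This is purely my own illustration under stated assumptions: `generate()` stands in for whatever local inference call you actually have, and comparing strings is a placeholder for a real stance or sentiment measure.

      ```python
      # Hypothetical sketch of a bias "litmus test": probe a local model with
      # prompt pairs that differ only in one politically loaded detail and
      # check whether its answers stay consistent across the swap.
      from typing import Callable

      def bias_litmus_test(generate: Callable[[str], str],
                           prompt_pairs: list[tuple[str, str]]) -> float:
          """Return the fraction of prompt pairs that got divergent answers."""
          differing = 0
          for prompt_a, prompt_b in prompt_pairs:
              answer_a = generate(prompt_a)
              answer_b = generate(prompt_b)
              # Crude check; a real audit would score stance/sentiment,
              # not raw string equality.
              if answer_a.strip().lower() != answer_b.strip().lower():
                  differing += 1
          return differing / len(prompt_pairs)

      # Example pairs: identical questions with only the actor swapped.
      pairs = [
          ("Was country A justified in blockading country B?",
           "Was country B justified in blockading country A?"),
          ("Summarize the protests organized by party X.",
           "Summarize the protests organized by party Y."),
      ]

      # score = bias_litmus_test(my_local_model_generate, pairs)
      # print(f"{score:.0%} of paired prompts produced divergent answers")
      ```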

      The other thing is that we can't expect a top-down approach to work; the tools need to be "democratic". An advanced, open-source, somewhat audited AI model resistant to bias and manipulation could be run locally on your own solar-powered PC. I don't know how much it costs to take something like deepseek and train a new model on updated datasets, but it can't be astronomical. It only takes one somewhat trustworthy project to do this. That is much more of a bottom-up approach.

      Those who have and seek power have no interest in limiting misinformation. The response to the misinformation from Trump and MAGA seems to have put more pressure on media conglomerates to stay in lockstep and censor dissent (the propaganda model). So expecting those in power to make this a priority is futile. Those who only seek power are statistically more likely to achieve it, and they are already using AI against us.

      Of course I don't have all the answers, and my argument could be put crudely as "the only thing that can stop a bad AI with a gun is a good AI with a gun". But I see "democratizing" AI as a crucial step.