Today, Thorn, a prominent child safety organization, in partnership with Hive, a cloud-based AI solutions provider, announced the release of an AI model designed to flag unknown CSAM at upload. It is billed as the first AI technology aimed at detecting previously unreported CSAM at scale.
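In practice, "flag at upload" typically means every new image is scored by the classifier and anything above a threshold is held and escalated to human review instead of being published. A minimal sketch of that flow, with a placeholder `score_image` call and an assumed threshold (neither reflects Thorn's or Hive's actual API):

```python
# Hypothetical upload-time moderation hook. The classifier interface
# (score_image) and the 0.9 threshold are illustrative assumptions,
# not the vendors' real API.

REVIEW_THRESHOLD = 0.9  # assumed cutoff for escalating to human review


def score_image(image_bytes: bytes) -> float:
    """Stand-in for a call to a hosted classifier; returns a score in [0, 1]."""
    raise NotImplementedError("placeholder for the vendor's API call")


def handle_upload(image_bytes: bytes) -> str:
    """Classify an upload and decide whether to hold it for review."""
    score = score_image(image_bytes)
    if score >= REVIEW_THRESHOLD:
        # Hold the file and escalate to a human reviewer instead of publishing.
        return "held_for_review"
    return "accepted"
```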

  • Railcar8095@lemm.ee · 18 hours ago

    It differs by being something completely different. This is a classification model; it has no generative capabilities. Even if you were to get the model and its weights and tried to reverse engineer an “input” that it would classify as CP, the result would most likely look like pure noise to you.

    Moron

      • Railcar8095@lemm.ee · 5 hours ago

        So you need to have a model that generates CP to begin with. Flawless reasoning there.

        Look, it’s clear you have no clue what you’re talking about. Stop demonstrating it, moron.

        • JackbyDev@programming.dev · 1 hour ago

          The model I use (I forget the name) popped out something pretty sus once. I wouldn’t describe it as CP, but it was definitely weird enough to really make me uncomfortable. It’s the only thing it ever made that I immediately deleted and removed from the recycling bin too lol.

          The point I’m making is that this isn’t as far-fetched as you believe.

          Plus, you can merge models. Get a general-purpose model that knows what children look like and a general-purpose pornographic model, merge them, then start generating and selecting images based on Thorn’s classifier.