- cross-posted to:
- [email protected]
Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It is the first AI technology aimed at exposing unreported CSAM at scale.
I think all CSAM should be destroyed out of respect for the victims, not proliferated. I don’t care who is hanging onto this material or for what purpose.
How is this proliferating CSAM? Also, how do you expect them to find CSAM without having known images? Hashing gives a really nice way to check for known material without someone having to look at every picture on a hard drive. This AI should greatly help identify new or unknown images while minimizing the number of actual people who have to see that stuff and get scarred from looking at it. The only reason to be against this is if you are looking at CP and want it to be harder to find, or if you don't understand how this technology is being used.
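For anyone unfamiliar with how the known-image side of this works, here is a minimal sketch of hash-based matching against a database of previously identified images. It is illustrative only and not Thorn's or Hive's actual implementation: the `KNOWN_HASHES` set, file paths, and function names are made up, and real systems typically use perceptual hashes (e.g. PhotoDNA-style) rather than plain SHA-256 so that resized or re-encoded copies still match.

```python
import hashlib
from pathlib import Path

# Hypothetical database of hex digests of previously identified images.
# In a real system this would be a large, curated set of perceptual hashes.
KNOWN_HASHES: set[str] = {
    "placeholder_digest_1",
    "placeholder_digest_2",
}


def sha256_of_file(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def flag_known_images(upload_dir: Path) -> list[Path]:
    """Return uploaded files whose hash matches a known-image hash."""
    return [
        p
        for p in upload_dir.rglob("*")
        if p.is_file() and sha256_of_file(p) in KNOWN_HASHES
    ]
```

The point of the announced model is the other case: images that are not in any hash database yet, which is where a classifier flags likely matches for human review instead of requiring someone to inspect everything manually.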
This ain’t about the victims… It never was, otherwise churches wouldn’t exist in their current form.
This is about the police and corporations gaining power.