- cross-posted to:
- [email protected]
Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It’s the earliest AI technology striving to expose unreported CSAM at scale.
At this point, how does it differ from generating AI-powered CP? Morons
Uh, well, this one tells you whether an image looks like it or not. It doesn’t generate images.
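To be concrete about what “tells you whether an image looks like it” means: a classifier is just a function from pixels to a score. A minimal sketch, assuming PyTorch, with a made-up stand-in architecture (the real detector’s internals aren’t public):

```python
# Minimal sketch of what a classification model does at inference time.
# The architecture here is a hypothetical stand-in, not Thorn/Hive's model;
# the point is the shape of the operation: pixels in, one probability out.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 1),                 # one logit: "flag" vs "don't flag"
)

image = torch.rand(1, 3, 224, 224)    # a batch of one RGB image
score = torch.sigmoid(classifier(image))
print(f"flag probability: {score.item():.3f}")
```

Nothing in that pipeline produces pixels; it only consumes them.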
If it knows whether an image looks like it, it can generate something like it; that’s just one step further.
Correct, this kind of software is trained on CP data. So such models can be easily used to generate CP instead of recognizing it, which makes them very dangerous indeed.
Same idea as the current models that are trained to recognize cars: those models can also be used to generate a car, starting from noise.
I’m pretty sure you can’t just run it in reverse like that. There’s a whole different training and operation methodology you have to use to support generating images rather than simple image classification.
There is a method of training where you use one system to make things and another to detect them. I forget the name of this approach, but it definitely is an approach.
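The approach being described is a generative adversarial network (GAN): one network generates samples, another tries to tell them from real data, and they train against each other. A toy sketch, assuming PyTorch, with a 1-D Gaussian standing in for real data so it stays self-contained:

```python
# Toy GAN: the generator learns to mimic samples from N(5, 2) while the
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
disc = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 5        # "real" data: N(5, 2)
    fake = gen(torch.randn(64, 8))           # generator maps noise to samples

    # Discriminator step: label real as 1, generated as 0.
    d_loss = loss_fn(disc(real), torch.ones(64, 1)) + \
             loss_fn(disc(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to get generated samples labeled as real.
    g_loss = loss_fn(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(gen(torch.randn(1000, 8)).mean().item())  # should drift toward ~5
```

The caveat that matters for this thread: the generator and discriminator are built and trained together from the start. You can’t bolt a generator onto someone else’s finished classifier after the fact and call it a GAN.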
It differs by being something completely different. This is a classification model; it doesn’t have generative capabilities. Even if you were to get the model and its weights and tried to reverse engineer an “input” that it would classify as CP, it would most likely look like pure noise to you.
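For anyone curious what that reverse-engineering attempt actually looks like: it’s gradient ascent on the input pixels to maximize the classifier’s score. A minimal sketch, assuming PyTorch, again with a hypothetical stand-in classifier:

```python
# "Running a classifier in reverse": optimize the pixels themselves to
# maximize the flag logit. The classifier below is an untrained stand-in;
# with a real trained classifier the optimized input typically comes out as
# high-frequency adversarial noise, because a classifier carries no prior
# over what natural images look like.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)

x = torch.zeros(1, 3, 64, 64, requires_grad=True)  # pixels are the parameters
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    loss = -classifier(x).mean()   # negate the logit so descent = ascent
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0, 1)             # keep pixels in the valid image range

# x now maximizes the classifier's output, but rendered out it's static.
```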
Moron
Generate porn, classify the output, result: very young-looking models.
Moron
So you need to have a model that generates CP to begin with. Flawless reasoning there.
Look, it’s clear you have no clue what you’re talking about. Stop demonstrating it, moron.
The model I use (I forget the name) popped out something pretty sus once. I wouldn’t describe it as CP, but it was definitely weird enough to really make me uncomfortable. It’s the only thing it ever made that I immediately deleted and removed from the recycling bin too lol.
The point I’m making is that this isn’t as far-fetched as you believe.
Plus, you can merge models. Get a general purpose model that knows what children look like, a general purpose pornographic model, merge them, then start generating and selecting images based on Thorn’s classifier.
You can’t merge a generative model and a classification model. You can run them in series and get a bunch of false positives/hallucinations, but you can’t make one generate images from the other.
When I said a “general purpose model that knows what children look like”, I didn’t mean the classification model from the article. I meant a normal, general-purpose image generation model. By “knows what children look like” I mean that part of its training set includes children, because it’s trained a little on everything. And by “pornographic model” I mean a model trained exclusively on NSFW content (and not including any CSAM, though that may be generous depending on how much care was put into the model’s creation).
Not CP, but normal porn, then select on CP traits, moron
https://en.m.wikipedia.org/wiki/False_positives_and_false_negatives
Not that I think you will understand. I’m posting this mostly for those moronic enough to read your comments and think “that seems reasonable”.
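Spelled out with made-up numbers, since this is the part people skip: even a very accurate classifier, applied to a stream where true positives are rare, flags mostly false positives.

```python
# Base-rate arithmetic for a rare-positive classification task.
# All three rates below are invented for illustration.
sensitivity = 0.99    # P(flagged | actually positive)
specificity = 0.99    # P(not flagged | actually negative)
base_rate = 0.001     # fraction of inputs that are truly positive

p_flag = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
p_true_given_flag = sensitivity * base_rate / p_flag
print(f"P(actually positive | flagged) = {p_true_given_flag:.1%}")
# ~9%: at this base rate, over 90% of flagged samples are false positives.
```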
Thanks