My hypothesis:
Meta is one of those companies wallowing in the idiotic belief that generative AI will “soon” reach intelligence, sentience, and the ability to walk your dog, so odds are it’s deploying it heavily for moderation duties. Except that the crap does not understand a single iota of the pictures and text it analyses, so it’s bound to produce huge numbers of false positives and false negatives.
Well, here’s an example of a false positive, i.e. the machine mod assuming that the poster is underage b&.
Protip: if you use an “assumer machine” to handle people, you’re trash, your service is trash, and you both deserve to be treated as trash. Not that this conclusion is surprising where Meta is concerned.
Well, the other thing, of course, is that Facebook is very much on the side of the far right (who keep promising to get rid of all the pesky laws preventing them from making as much money as possible). So they’re disincentivized from getting rid of misinformation spread by their favorite group of people.
It isn’t just stupidity.