- Reddit has begun issuing warnings to users who regularly upvote violent content, with a view to taking harsher action in future.
- The company says it may expand this approach to other types of content in future.
- Users are concerned that this moderation tactic could be abused, or simply implemented badly.
This is the key bit. Trying to make online spaces safer is a good thing, but Reddit's automated moderation has been unreliable for a while, and this change could catch even more users in false positives.
I’ve seen comments tagged as abusive regardless of the context:
For well-moderated subs, the vast majority of those reports turned out to be false positives over time. In the mod queue this didn't affect the end user, since mods can dismiss the false positives, but automated "scores" won't account for that.
We're going to see even more annoying algospeak like "unalive", only now it's going to show up in news quotes as well.