What’s changing?
Platforms have started rolling out new ways for European users to flag illegal online content and dodgy products, which companies will be obligated to take down quickly and objectively.
Amazon opened a new channel for reporting suspected illegal products and is providing more information about third-party merchants.
TikTok gave users an “additional reporting option” for content, including advertising, that they believe is illegal. Categories such as hate speech and harassment, suicide and self-harm, misinformation, and frauds and scams will help them pinpoint the problem.
Then, a “new dedicated team of moderators and legal specialists” will determine whether flagged content violates its policies or is unlawful and should be taken down, according to the app, which is owned by Chinese parent company ByteDance.
TikTok says the reason for a takedown will be explained to the person who posted the material and the one who flagged it, and decisions can be appealed.
TikTok users can turn off systems that recommend videos based on what a user has previously viewed. Such systems have been blamed for leading social media users to increasingly extreme posts. If personalized recommendations are turned off, TikTok’s feeds will instead suggest videos to European users based on what’s popular in their area and around the world.
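Conceptually, the opt-out amounts to swapping the ranking signal the feed uses: per-user viewing history in, regional popularity out. Here is a minimal sketch of that fallback logic in Python. All names and data structures (`recommend_feed`, `watch_history`, the video dicts) are invented for illustration; TikTok has not published how its actual system works.

```python
from collections import Counter

def recommend_feed(videos, user, personalized=True):
    """Rank candidate videos for a user's feed.

    `videos` is a list of dicts with 'id', 'topic', and 'views' keys;
    `user` is a dict with a 'watch_history' list of topics.
    Purely illustrative, not TikTok's real data model.
    """
    if personalized:
        # Personalized mode: favor topics the user has watched before.
        topic_counts = Counter(user["watch_history"])
        return sorted(videos, key=lambda v: topic_counts[v["topic"]], reverse=True)
    # Opt-out mode: ignore per-user signals entirely and rank by
    # regional/global popularity instead.
    return sorted(videos, key=lambda v: v["views"], reverse=True)

videos = [
    {"id": 1, "topic": "cooking", "views": 5_000},
    {"id": 2, "topic": "politics", "views": 90_000},
    {"id": 3, "topic": "cooking", "views": 1_200},
]
user = {"watch_history": ["cooking", "cooking", "gaming"]}

print([v["id"] for v in recommend_feed(videos, user, personalized=True)])   # [1, 3, 2]
print([v["id"] for v in recommend_feed(videos, user, personalized=False)])  # [2, 1, 3]
```

The point of the sketch is that the opt-out path uses no per-user data at all, which is what the DSA's recommender-transparency rules are after.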
The DSA prohibits targeting vulnerable categories of people, including children, with ads.
Snapchat said advertisers won’t be able to use personalization and optimization tools for teens in the EU and U.K. Snapchat users who are 18 and older will also get more transparency and control over the ads they see, including “details and insight” on why they’re shown specific ads.
TikTok made similar changes, stopping users 13 to 17 from getting personalized ads “based on their activities on or off TikTok.”
And you could argue that each Lemmy instance is its own service.
Agreed. Any rules or regulations for a federated network would likely need to be new legislation (if it’s really even needed at all).