It apparently amounted to discrimination.
One risk category meant that videos only appeared in the country where they were uploaded. Another, known as “Auto R,” stopped videos from appearing in other users’ “For You” feeds once they reached a certain view count. This tag might be applied to individual videos, but for a couple of dozen “special users,” it was supposedly the default.
“Auto R” seemingly applied to a broad category of people. Netzpolitik.org describes TikTok limiting the reach of users who were “fat and self-confident,” LGBT, or autistic.
As the article notes, though, these categories could be difficult to judge simply from profiles or videos. And while the policy was meant to prevent bullying, it did so by punishing the likely victims.
A spokesperson from Bytedance, TikTok’s parent company, called the rules an early and flawed attempt to fight conflict. “Early on, in response to an increase in bullying on the app, we implemented a blunt and temporary policy,” a spokesperson told The Verge. “While the intention was good, the approach was wrong and we have long since changed the earlier policy in favor of more nuanced anti-bullying policies and in-app protections.”
TikTok has previously faced charges of political censorship, including limiting videos that would offend the Chinese government. It recently suspended a user who criticized China’s mass imprisonment of Uighur Muslims.