
'I need my girlfriend off TikTok': How hackers game abuse-reporting systems

Brian Contreras, Los Angeles Times

Published in Business News

One hundred and forty-seven dollar signs fill the opening lines of the computer program. Rendered in an icy blue against a matte black background, each "$" has been carefully placed so that, all together, they spell out a name: "H4xton."

It's a signature of sorts, and not a subtle one. Actual code doesn't show up until a third of the way down the screen.

The purpose of that code: to send a surge of content violation reports to the moderators of the wildly popular short-form video app TikTok, with the intent of getting videos removed and their creators banned.

It's a practice called "mass reporting," and for would-be TikTok celebrities, it's the sort of thing that keeps you up at night.

As with many social media platforms, TikTok relies on users to report content they think violates the platform's rules. With a few quick taps, TikTokers can flag videos as falling into specific categories of prohibited content — misleading information, hate speech, pornography — and send them to the company for review. Given the immense scale of content that gets posted to the app, this crowdsourcing is an important weapon in TikTok's content moderation arsenal.

Mass reporting simply scales that process up. Rather than one person reporting a post to TikTok, multiple people all report it in concert or — as programs such as H4xton's purport to do — a single person uses automated scripts to send multiple reports.


H4xton, who described himself as a 14-year-old from Denmark, said he saw his "TikTok Reportation Bot" as a force for good. "I want to eliminate those who spread false information or … made fun of others," he said, citing QAnon and anti-vax conspiracy theories. (He declined to share his real name, saying he was concerned about being doxxed, or having personal information spread online; The Times was unable to independently confirm his identity.)

But the practice has become something of a boogeyman on TikTok, where having a video removed can mean losing a chance to go viral, build a brand or catch the eye of corporate sponsors. It's an especially frightening prospect because many TikTokers believe that mass reporting is effective even against posts that don't actually break the rules. If a video gets too many reports, they worry, TikTok will remove it, regardless of whether those reports were fair.

It's a very 2021 thing to fear. The policing of user-generated internet content has emerged as a hot-button issue in the age of social-mediated connectivity, pitting free speech proponents against those who seek to protect internet users from digital toxicity. Spurred by concerns about misinformation and extremism — as well as events such as the Jan. 6 insurrection — many Democrats have called for social media companies to moderate user content more aggressively. Republicans have responded with cries of censorship and threats to punish internet companies that restrict expression.

Mass reporting tools exist for other social media platforms too. But TikTok's popularity and growth rate — it was the most downloaded app in the world last year — raise the stakes of what happens there for influencers and other power-users.

©2021 Los Angeles Times. Visit at latimes.com. Distributed by Tribune Content Agency, LLC.