Crime in the Tweets. Can Social Networks Make Gangs Even More Angry?
It was a terrible week for Facebook.
First, an investigative series by The Wall Street Journal reported that for years Facebook has been studying how Instagram, which it owns, has been harmful to young users. Among other bombshells, the Journal quoted a leaked internal document that read: “We make body image issues worse for one in three teen girls.”
Imagine that as part of Facebook’s advertising plan. Not likely.
In her Senate testimony Tuesday, whistleblower and former Facebook employee Frances Haugen quoted more inside documents and accused the social media giant of putting “profits before people,” comparing it to tobacco companies in addicting youngsters to a toxic product “just like cigarettes.”
“They say explicitly, ‘I feel bad when I use Instagram,’” Haugen said, “‘and yet I can’t stop.’”
By the end of the week, Facebook founder and CEO Mark Zuckerberg was back on the cover of Time magazine, but this time with his face partly covered by an image of a smartphone app asking “Delete Facebook?”
Facebook responded to the allegations, as it has before, with denials or various versions of “We’re working on it.”
If so, I hope they and Congress also take a closer, broader look at another long-running but too rarely reported menace encouraged by the social networks: street gang violence in cities like Chicago.
The interplay between social networks and gang violence has been widely known since at least 2016. That was when then-interim Chicago police Superintendent John Escalante blamed gang disputes for the spike in violence welling up that year and continuing with painfully little relief ever since.
He described how street conflicts often arise through social media platforms like Twitter and Facebook and Snapchat, where gang members threaten and taunt one another, often escalating beefs to the point where somebody gets shot.
As gangbangers and others find new ways to flaunt their toughness, their firearms and even some of their crimes by posting photos online, police have learned to track their activities too.
So have some academic researchers. For example, former University of Chicago sociologist Forrest Stuart, a 2020 MacArthur Foundation “genius” grant recipient, became an expert on the topic after embedding himself like a war correspondent for two years with Chicago’s Gangster Disciples, while also running an after-school violence prevention program.
Stuart, now an associate professor at Stanford University, offers in his 2020 book “Ballad of the Bullet: Gangs, Drill Music and the Power of Online Infamy” an evenhanded look at how violence has been both encouraged and discouraged on social networks, depending on who’s doing the posting.
Particularly ominous about Haugen’s testimony was her description of how Facebook’s algorithm is programmed to promote the inciteful, not the insightful — pushing the most polarizing and emotionally charged content, without regard to its truthfulness.
For example, she said, the algorithm picks out content based on what you have watched in the past. “They optimize content that is hateful and divisive and polarizing,” she said. “It’s easier to inspire people to anger than to other emotions.”
That’s not healthy for people who already are preconditioned to commit violence.
Haugen said Facebook put safeguards against such inflammatory content in place before the 2020 election, then removed them afterward, contributing in her estimation to the Jan. 6 Capitol Hill insurrection.
I hesitate to rush to judgment, since there was more than enough of that in the boneheaded Jan. 6 attack. But the possibility heightens the case for Congress to investigate Facebook’s algorithm, which Facebook is bound to fight like Colonel Sanders would fight for the secrecy of his fried-chicken recipe.
But Facebook, as a publicly traded company, wields a veiled power to promote false or potentially dangerous content, and that power is not necessarily protected by the First Amendment.
The debate focuses inevitably on Section 230 of the Communications Decency Act. Passed in 1996, it protects websites from lawsuits if a user posts something illegal, not counting such exceptions as copyright violations, sex work-related material, and violations of federal criminal law.
It is not surprising that Zuckerberg is unsettled enough by the possibility of such vulnerability that he, too, has called for updating Section 230, although, so far, his suggestions have sounded too self-serving to get a very warm reception on Capitol Hill.
I’m a zealot for the First Amendment, but after more than three decades in this business, I see little sense in granting internet media more protection than traditional, old-school legacy media have had.
Put Facebook on a shorter leash.
(E-mail Clarence Page at email@example.com.)
©2021 Clarence Page. Distributed by Tribune Content Agency, LLC.