U.S. lawmakers investigating how Facebook Inc. and other online platforms shape users’ worldviews are considering new rules for the artificial intelligence programs blamed for spreading malicious content.
This legislative push has taken on more urgency since a whistleblower disclosed thousands of pages of internal documents showing that Facebook employees knew the company’s algorithms, built to prioritize growth and engagement, were driving people to more divisive and harmful content.
Every automated action on the internet — from ranking content and displaying search results to offering recommendations or showing ads — is controlled by computer code written by engineers. Some of these algorithms rely on simple inputs, such as keywords or video quality, to decide what to display; others use artificial intelligence to learn more about people and user-generated content, enabling more sophisticated sorting.
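The engagement-driven ranking described here can be sketched in simplified form as a scoring function that orders posts by predicted interaction. This is purely illustrative: the field names, weights, and structure below are hypothetical and do not reflect any platform’s actual code.

```python
# Simplified sketch of engagement-based feed ranking.
# All field names and weights are hypothetical illustrations.

def engagement_score(post):
    """Score a post by weighted engagement signals."""
    return (1.0 * post["likes"]
            + 5.0 * post["comments"]    # comments weighted more heavily
            + 30.0 * post["reshares"])  # reshares weighted most

def rank_feed(posts):
    """Order posts so the highest-scoring items appear first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "a", "likes": 120, "comments": 3,  "reshares": 0},
    {"id": "b", "likes": 10,  "comments": 2,  "reshares": 4},
    {"id": "c", "likes": 40,  "comments": 20, "reshares": 1},
]
print([p["id"] for p in rank_feed(posts)])  # prints ['c', 'b', 'a']
```

The point critics make is visible even in a toy version: when comments and reshares carry outsized weight, content that provokes reaction can outrank content that is merely liked.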
Both Republicans and Democrats agree there should be some accountability for tech companies, even though Section 230 of the 1996 Communications Decency Act provides broad legal immunity for online platforms.
While there has been some consensus around updated privacy rules and tech-focused antitrust bills, two weeklong recesses next month and fiscal deadlines looming in December mean there is precious little time for concrete action this year.
Lawmakers have wrestled with how to write laws that allow or prohibit certain kinds of speech, an approach that risks running afoul of the First Amendment; regulating the automated algorithms themselves is emerging as a possible alternative strategy.
“The algorithms driving powerful social media platforms are black boxes, making it difficult for the public and policy makers to conduct oversight and ensure companies’ compliance, even with their own policies,” Sen. Ed Markey, D-Mass., told Bloomberg. He introduced a bill in May he said would “help pull back the curtain on Big Tech, enact strict prohibitions on harmful algorithms, and prioritize justice for communities who have long been discriminated against as we work toward platform accountability.”
Several senators touted their own algorithm-focused bills while questioning Frances Haugen, the Facebook whistleblower, when she appeared before Congress earlier this month. While Haugen didn’t endorse any specific piece of legislation, she did say the best way to regulate online platforms like Facebook is to focus on systemic solutions, especially transparency and accountability for the machine-learning architecture that powers some of the world’s biggest and most influential companies.
Sen. Richard Blumenthal, D-Conn., who as chair of the Senate consumer protection subcommittee has led the congressional investigation of Haugen’s allegations, last week invited Facebook Chief Executive Officer Mark Zuckerberg to testify before Congress. Blumenthal, in a statement Monday, identified the machine-learning structure of the company’s platform as a danger not only to users, but also to democracy.
“Facebook is obviously unable to police itself as its powerful algorithms drive deeply harmful content to children and fuel hate,” Blumenthal said. “This resoundingly adds to the drumbeat of calls for reform, rules to protect teens, and real transparency and accountability from Facebook and its Big Tech peers.”