How can Congress regulate AI? Erect guardrails, ensure accountability and address monopolistic power

Anjana Susarla, Professor of Information Systems, Michigan State University, The Conversation

Rather than create a new agency that runs the risk of becoming compromised by the technology industry it’s meant to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills such as the Algorithmic Accountability Act. That would have the effect of imposing accountability, much as the Sarbanes-Oxley Act and other regulations transformed reporting requirements for companies. Congress can also adopt comprehensive laws around data privacy.

Regulating AI should involve collaboration among academia, industry, policy experts and international agencies. Experts have likened this approach to international organizations such as the European Organization for Nuclear Research, known as CERN, and the Intergovernmental Panel on Climate Change. The internet has been managed by nongovernmental bodies involving nonprofits, civil society, industry and policymakers, such as the Internet Corporation for Assigned Names and Numbers and the World Telecommunication Standardization Assembly. Those examples provide models for industry and policymakers today.

Though OpenAI’s Altman suggested that companies could be licensed to release artificial intelligence technologies to the public, he clarified that he was referring to artificial general intelligence, meaning potential future AI systems with humanlike intelligence that could pose a threat to humanity. That would be akin to companies being licensed to handle other potentially dangerous technologies, like nuclear power. But licensing could have a role to play well before such a futuristic scenario comes to pass.

Algorithmic auditing would require credentialing, standards of practice and extensive training. Requiring accountability is not just a matter of licensing individuals but also requires companywide standards and practices.

Experts on AI fairness contend that issues of bias and fairness in AI cannot be addressed by technical methods alone but require more comprehensive risk mitigation practices such as adopting institutional review boards for AI. Institutional review boards in the medical field help uphold individual rights, for example.

Academic bodies and professional societies have likewise adopted standards for responsible use of AI, whether it is authorship standards for AI-generated text or standards for patient-mediated data sharing in medicine.

Strengthening existing statutes on consumer safety, privacy and protection while introducing norms of algorithmic accountability would help demystify complex AI systems. It’s also important to recognize that greater data accountability and transparency may impose new restrictions on organizations.

Scholars of data privacy and AI ethics have called for “technological due process” and frameworks to recognize harms of predictive processes. The widespread use of AI-enabled decision-making in such fields as employment, insurance and health care calls for licensing and audit requirements to ensure procedural fairness and privacy safeguards.

Requiring such accountability provisions, though, demands a robust debate among AI developers, policymakers and those who are affected by broad deployment of AI. In the absence of strong algorithmic accountability practices, the danger is narrow audits that promote the appearance of compliance.

Also missing from Altman’s testimony was the extent of investment required to train large-scale AI models, whether it is GPT-4, one of the foundations of ChatGPT, or the text-to-image generator Stable Diffusion. Only a handful of companies, such as Google, Meta, Amazon and Microsoft, are responsible for developing the world’s largest language models.

Given the lack of transparency in the training data used by these companies, AI ethics experts Timnit Gebru, Emily Bender and others have warned that large-scale adoption of such technologies without corresponding oversight risks amplifying machine bias at a societal scale.

It is also important to acknowledge that the training data for tools such as ChatGPT includes the intellectual labor of a host of people such as Wikipedia contributors, bloggers and authors of digitized books. The economic benefits from these tools, however, accrue only to the technology corporations.

Proving technology firms’ monopoly power can be difficult, as the Department of Justice’s antitrust case against Microsoft demonstrated. I believe that the most feasible regulatory options for Congress to address potential algorithmic harms from AI may be to strengthen disclosure requirements for AI firms and users of AI alike, to urge comprehensive adoption of AI risk assessment frameworks, and to require processes that safeguard individual data rights and privacy.

This article is republished from The Conversation, an independent nonprofit news site dedicated to sharing ideas from academic experts.

Read more:
Regulating AI: 3 experts explain why it’s difficult to do and important to get right

AI exemplifies the ‘free rider’ problem – here’s why that points to regulation

Anjana Susarla receives funding from the National Institutes of Health and the Omura-Saxena Professorship in Responsible AI.

