
As US debates, EU begins to audit AI

Gopal Ratnam, CQ-Roll Call


Until now, determining whether artificial intelligence-powered platforms are spreading disinformation or negatively targeting kids fell to the companies operating them or to the few research organizations capable of sifting through publicly available data.

That was until the European Union in late August began such monitoring as it implements its law known as the Digital Services Act, which applies to online platforms such as social media companies and search engines with more than 45 million active European users. The law is designed to combat hate speech and the spread of disinformation, as well as to prevent the targeting of minors.

Although national governments of the 27-member European Union are responsible for enforcing the law, the E.U. also established the European Centre for Algorithmic Transparency, or ECAT, which now audits algorithms and underlying data from the 19 major platforms that meet the law’s criteria.

“When we say algorithmic transparency, auditing is what we are doing,” E. Alberto Pena Fernandez, head of the ECAT unit, said in an interview in his office in Brussels. “We are doing a systematic analysis of what the algorithm is doing.”

He said ECAT examines samples of a platform’s data, and the software that produces recommendations or promotes posts, videos, and other material to users.

In the U.S., however, the idea of auditing is still in its infancy. Some lawmakers and advocates have been pushing for transparency, but legislation hasn’t advanced. One reason may be the critical questions that have to be confronted, as illustrated by the EU’s handling of the process.


Fernandez likened the European method to the work of financial auditors who certify a company’s performance.

“We say, ‘this is the risk we are trying to avoid’ and we ask the company to show us the algorithm that deals with that risk,” Fernandez said. In the case of disinformation, for example, “we look at the algorithm dealing with the disinformation or the moderation of that content and ask the company to show us how it works.”

The 19 companies that meet the E.U.’s definition of large platforms include top U.S. companies such as Amazon.com Inc.; Apple Inc.’s App Store; the Bing search engine, part of Microsoft Corp.; Facebook and Instagram, both part of Meta Platforms Inc.; Google LLC; Pinterest; and YouTube; as well as European and Chinese companies including Alibaba’s AliExpress, Booking.com, and TikTok.

The E.U. unit is building a set of “bullet-proof methodologies and protocols” by testing software and data provided by companies under strict confidentiality agreements to see if the algorithms produce results that are prohibited, such as disinformation and targeting of minors, Fernandez said.


©2023 CQ-Roll Call, Inc., All Rights Reserved. Visit cqrollcall.com. Distributed by Tribune Content Agency, LLC.
