How can Congress regulate AI? Erect guardrails, ensure accountability and address monopolistic power


Takeaways:

  • A new federal agency to regulate AI sounds helpful but could become unduly influenced by the tech industry. Instead, Congress can legislate accountability.

  • Instead of licensing companies to release advanced AI technologies, the government could license auditors and push for companies to set up institutional review boards.

  • The government hasn’t had great success in curbing technology monopolies, but disclosure requirements and data privacy laws could help check corporate power.


OpenAI CEO Sam Altman urged lawmakers to consider regulating AI during his Senate testimony on May 16, 2023. That recommendation raises the question of what comes next for Congress. The solutions Altman proposed – creating an AI regulatory agency and requiring licensing for companies – are interesting. But what the other experts on the same panel suggested is at least as important: requiring transparency on training data and establishing clear frameworks for AI-related risks.

Another point left unsaid was that, given the economics of building large-scale AI models, the industry may be witnessing the emergence of a new kind of tech monopoly.

As a researcher who studies social media and artificial intelligence, I believe that Altman’s suggestions have highlighted important issues but don’t provide answers in and of themselves. Regulation would be helpful, but in what form? Licensing also makes sense, but for whom? And any effort to regulate the AI industry will need to account for the companies’ economic power and political sway.

An agency to regulate AI?

Lawmakers and policymakers around the world have already begun to address some of the issues raised in Altman’s testimony. The European Union’s AI Act is based on a risk model that assigns AI applications to three categories of risk: unacceptable, high risk, and low or minimal risk. This categorization recognizes that tools for social scoring by governments and automated tools for hiring pose different risks than, for example, the use of AI in spam filters.

The U.S. National Institute of Standards and Technology likewise has an AI risk management framework that was created with extensive input from multiple stakeholders, including the U.S. Chamber of Commerce and the Federation of American Scientists, as well as other business and professional associations, technology companies and think tanks.

Federal agencies such as the Equal Employment Opportunity Commission and the Federal Trade Commission have already issued guidelines on some of the risks inherent in AI. The Consumer Product Safety Commission and other agencies have a role to play as well.

Rather than create a new agency that runs the risk of becoming compromised by the technology industry it’s meant to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills such as the Algorithmic Accountability Act. That would have the effect of imposing accountability, much as the Sarbanes-Oxley Act and other regulations transformed reporting requirements for companies. Congress can also adopt comprehensive laws around data privacy.

Regulating AI should involve collaboration among academia, industry, policy experts and international agencies. Experts have likened this approach to international organizations such as the European Organization for Nuclear Research, known as CERN, and the Intergovernmental Panel on Climate Change. The internet has been managed by nongovernmental bodies involving nonprofits, civil society, industry and policymakers, such as the Internet Corporation for Assigned Names and Numbers and the World Telecommunication Standardization Assembly. Those examples provide models for industry and policymakers today.

Cognitive scientist and AI developer Gary Marcus explains the need to regulate AI.

Licensing auditors, not companies

Though OpenAI’s Altman suggested that companies could be licensed to release artificial intelligence technologies to the public, he clarified that he was referring to artificial general intelligence, meaning potential future AI systems with humanlike intelligence that could pose a threat to humanity. That would be akin to companies being licensed to handle other potentially dangerous technologies, like nuclear power. But licensing could have a role to play well before such a futuristic scenario comes to pass.

Algorithmic auditing would require credentialing, standards of practice and extensive training. Requiring accountability is not just a matter of licensing individuals but also requires companywide standards and practices.

Experts on AI fairness contend that issues of bias and fairness in AI cannot be addressed by technical methods alone but require more comprehensive risk mitigation practices such as adopting institutional review boards for AI. Institutional review boards in the medical field help uphold individual rights, for example.

Academic bodies and professional societies have likewise adopted standards for responsible use of AI, whether it’s authorship standards for AI-generated text or standards for patient-mediated data sharing in medicine.

Strengthening existing statutes on consumer safety, privacy and security while introducing norms of algorithmic accountability would help demystify complex AI systems. It’s also important to recognize that greater data accountability and transparency may impose new restrictions on organizations.

Scholars of data privacy and AI ethics have called for “technological due process” and frameworks to recognize harms of predictive processes. The widespread use of AI-enabled decision-making in such fields as employment, insurance and health care calls for licensing and audit requirements to ensure procedural fairness and privacy safeguards.

Requiring such accountability provisions, though, demands a robust debate among AI developers, policymakers and those who are affected by broad deployment of AI. In the absence of strong algorithmic accountability practices, the danger is narrow audits that promote the appearance of compliance.

AI monopolies?

What was also missing from Altman’s testimony is the extent of investment required to train large-scale AI models, whether it’s GPT-4, which is one of the foundations of ChatGPT, or text-to-image generator Stable Diffusion. Only a handful of companies, such as Google, Meta, Amazon and Microsoft, are responsible for developing the world’s largest language models.

Given the lack of transparency in the training data used by these companies, AI ethics experts Timnit Gebru, Emily Bender and others have warned that large-scale adoption of such technologies without corresponding oversight risks amplifying machine bias at a societal scale.

It is also important to recognize that the training data for tools such as ChatGPT includes the intellectual labor of a host of people such as Wikipedia contributors, bloggers and authors of digitized books. The economic benefits from these tools, however, accrue only to the technology firms.

Proving technology companies’ monopoly power can be difficult, as the Department of Justice’s antitrust case against Microsoft demonstrated. I believe that the most feasible regulatory options for Congress to address potential algorithmic harms from AI may be to strengthen disclosure requirements for AI companies and users of AI alike, to urge comprehensive adoption of AI risk assessment frameworks, and to require processes that safeguard individual data rights and privacy.



