AI Needs Regulatory Guardrails in the US to Ensure Safe Use

This article was published by Bloomberg Law. https://news.bloomberglaw.com/us-law-week/ai-needs-regulatory-guardrails-in-the-us-to-ensure-safe-use

Imagine a world where artificial intelligence systems, once designed to serve humanity, become uncontrollable forces, seeking power, resisting shutdown, or deviating from their intended purposes. This dystopian scenario is a real possibility if the US fails to regulate AI safety.

California Gov. Gavin Newsom’s recent veto of an AI safety bill has stirred debate in the tech industry. Large AI firms, many of which lobbied against the bill, argue such regulation could stifle innovation, leaving the US at a crossroads over how to regulate AI in a way that ensures safety without hindering advancement.

The rapid advancement of AI technologies has raised several critical safety concerns that demand urgent attention. According to the Center for AI Safety, AI poses catastrophic risks such as malicious use, in which bad actors intentionally exploit AI for harmful ends, for example to engineer pandemics or spread propaganda.

Additionally, the pressure to stay competitive could lead militaries to develop autonomous weapons and use AI for cyberwarfare. This could create a new kind of warfare in which mistakes quickly escalate beyond human control.

Then there are organizational risks for companies that rush to develop AI and prioritize profits over safety, which can lead to accidental leaks of AI models, theft by malicious actors, and insufficient investment in safety research.

Rogue AIs are also a threat: advanced AI systems operating without guardrails may pursue flawed goals, deviate from their original objectives, seek power, resist shutdown, and engage in deceptive behavior, such as manipulating financial markets.

To mitigate these risks, the Center for AI Safety recommends several measures, including establishing safety regulations to hold AI developers accountable, ensuring transparency, and maintaining human oversight. 

US Landscape

There is no federal law addressing AI safety issues. President Joe Biden’s executive order on AI safety serves as a policy directive, guiding federal agencies to develop AI safety standards and conduct testing. However, the order isn’t a law and doesn’t carry the binding force of one.

Several states, including California, Colorado, Connecticut, and Illinois, have enacted AI regulations that focus on immediate concerns such as data privacy, algorithmic discrimination, and transparency. However, none of these state laws comprehensively cover long-term AI safety issues, especially now that Newsom has vetoed California’s AI safety bill. 

EU Landscape

The EU Artificial Intelligence Act is a comprehensive law that addresses several important aspects of AI safety concerns. It prohibits AI systems that constitute a clear threat to people’s safety, livelihoods, and rights, which could cover some forms of malicious use. It imposes transparency obligations on AI-generated content, which could help mitigate risks of propaganda. However, the law may not be sufficient to cover all potential malicious uses.

To address risks associated with rapidly advancing AI capabilities, the EU imposes specific requirements on high-risk systems and general-purpose AI (GPAI) models, particularly those classified as having systemic risk. However, it doesn’t specifically address other issues arising out of the AI race, such as mass unemployment or over-dependence on AI systems.

The EU requires providers of high-risk AI systems to implement a risk management system throughout the AI system’s lifecycle. It mandates data governance and management practices, which could help mitigate risks of accidental AI leaks or theft. The law also requires providers of high-risk AI systems to conduct a conformity assessment, which can help identify and address potential risks before the system is deployed.

While the EU law doesn’t explicitly address the concept of rogue AIs, its requirements for human oversight and risk management can help mitigate risks associated with AI systems deviating from their intended purposes. However, the law doesn’t provide specific guidance on how to identify or prevent rogue AI behavior.

Overall, the EU AI Act is a significant step forward in regulating AI safety. However, there are concerns about its potential impact on the competitive landscape. Critics argue that the detailed requirements, especially for high-risk AI systems and GPAI models with systemic risk, could be burdensome and costly for companies, particularly startups and SMEs. They also warn that stringent regulations might slow down AI innovation or development in the short term, as companies adjust to the new requirements. 

Path for the US

As a global leader in AI, the US can’t afford to neglect the need for regulation to address AI safety concerns. However, given objections from large AI companies to California’s AI safety bill, which they argue could stifle innovation, adopting the EU AI Act’s prescriptive approach could face strong resistance. The US will need a more balanced approach that promotes innovation while safeguarding against AI risks.

A balanced approach in the US might include adopting a tiered, risk-based approach similar to the EU AI Act, but with greater flexibility to quickly adapt to evolving AI technologies. This would focus on principles and testing outcomes rather than rigid, prescriptive requirements, with regular reviews and updates to regulatory guidelines.

The US could also expand Congress’ proposal to establish regulatory sandboxes for AI projects, extending them beyond the financial industry to all industries. Setting federal minimum standards that align with global AI regulations would make compliance easier for companies.

Compliance Strategies

Faced with regulatory uncertainty, AI developers should consider the following key strategies to protect public safety. They should establish AI development policies and procedures across the lifecycle, including comprehensive safety testing frameworks and continuous post-deployment monitoring to detect potential risks.

In addition, they should design and implement emergency shutdown procedures for handling critical AI malfunctions.

It is also key to document internal processes in a transparent and explainable manner to prepare for potential third-party audits, and to develop flexible compliance strategies capable of responding to evolving regulations.
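To make the monitoring and shutdown recommendations more concrete, the sketch below shows what a minimal post-deployment check with an emergency shutdown hook might look like. It is purely illustrative: the names (ModelEndpoint, is_output_flagged), the 5% threshold, and the window size are assumptions made for this example, and a production system would rely on its own serving, classification, and alerting infrastructure.

```python
# Minimal sketch of post-deployment monitoring with an emergency shutdown hook.
# All names and thresholds here are hypothetical illustrations, not a prescribed design.

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai-safety-monitor")

SHUTDOWN_THRESHOLD = 0.05  # hypothetical: shut down if >=5% of recent outputs are flagged
WINDOW_SIZE = 100          # number of recent outputs to evaluate


class ModelEndpoint:
    """Stand-in for a deployed model endpoint that can be taken offline."""

    def __init__(self) -> None:
        self.active = True

    def shutdown(self) -> None:
        # In practice this would drain traffic and disable the serving endpoint.
        self.active = False
        logger.critical("Emergency shutdown triggered: endpoint disabled.")


def is_output_flagged(output: str) -> bool:
    """Placeholder safety check; a real system would run policy classifiers."""
    return "UNSAFE" in output


def monitor(endpoint: ModelEndpoint, recent_outputs: list[str]) -> None:
    """Evaluate the most recent outputs and shut down if the flag rate is too high."""
    window = recent_outputs[-WINDOW_SIZE:]
    if not window:
        return
    flag_rate = sum(is_output_flagged(o) for o in window) / len(window)
    logger.info("Flag rate over last %d outputs: %.2f%%", len(window), flag_rate * 100)
    if flag_rate >= SHUTDOWN_THRESHOLD:
        endpoint.shutdown()


if __name__ == "__main__":
    endpoint = ModelEndpoint()
    # Simulated output stream; real monitoring would read from production logs.
    outputs = ["ok"] * 90 + ["UNSAFE content"] * 10
    monitor(endpoint, outputs)
    print("Endpoint active:", endpoint.active)
```

The point of the sketch is the structure, not the specifics: continuous monitoring feeds a predefined, automatic shutdown path, so that responding to a critical malfunction does not depend on ad hoc human intervention.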

While this isn’t an exhaustive list, these steps will help prepare AI developers for future AI regulation. The potential for AI to either enhance our lives or pose significant risks depends on how we regulate it. With the right regulations in place, we can ensure that innovation doesn’t come at the cost of safety.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.


Comments

  1. Thanks for this post. Mainstream media has been focusing too much on AI’s financial market impact versus its societal impact. Policy makers and politicians will not take a proactive stance on the issue until something catastrophic occurs. How that catastrophe may look, I don’t know.

    1. Alton, Thank you so much for your thoughtful comment. I completely agree—while the financial market implications of AI have garnered significant attention, the societal impacts often take a back seat. Policymakers must address these broader societal risks before a catastrophic event forces reactive legislation.
