Getting AI Regulation Right: The U.S. Approach
AI is on the rise, and getting its regulation right is crucial. The U.S. government has been taking thoughtful steps toward a balanced approach. In this article, we'll explore the key strategies the U.S. is using to regulate AI effectively.
From acknowledging AI's potential to encouraging public-private collaboration, the U.S. is setting a precedent. Read on to learn how government and industry are working together to pave the way for responsible AI innovation.
Why the U.S. Government’s Approach is Working
Over the past year, the U.S. government has taken a thoughtful approach to crafting guidelines for AI developers, deployers, and users. Principled commitments have provided a framework for the sector, and a federal Executive Order has given detailed guidance to regulators.
Congress complements this work in a balanced way. The House formed a bipartisan committee led by experts in computer science and AI to consider legislation. Recently, the Senate’s Bipartisan AI Working Group released its “Driving U.S. Innovation in Artificial Intelligence” policy roadmap, which lays out detailed policy recommendations.
Recognizing AI’s Potential and Economic Impact
First, the government’s approach acknowledges AI’s incredible potential in fields like science, healthcare, and energy. It adopts a practical risk-and-benefit framework for future steps, which is vital for America to stay at the forefront of AI innovation.
Second, American leaders recognize AI's enormous economic potential. A McKinsey report estimates AI's global economic impact at between $17 trillion and $25 trillion annually by 2030. To harness this potential, both the White House and the Senate Working Group have set out concrete actions to increase access to AI tools and develop an AI-ready workforce.
Collaboration Between Public and Private Sectors
Third, the efforts highlight the need for collaboration between the private and public sectors in AI leadership. We are in the middle of a global technology race, and success won’t come from being the first to invent something, but from deploying it best across all sectors.
This includes public and private cyberdefense and national security in the U.S., where effective AI deployment can help solve the “defender’s dilemma.”
Support for Key Legislative Bills
Google endorses five bills mentioned in the Senate’s AI Policy Roadmap. The bills cover vital areas for AI’s advancement and responsible use. AI, as a general-purpose technology, requires collaboration between public and private stakeholders to transition from theoretical to practical applications.
By working together, we can move from the “wow” of AI to the “how” of AI, ensuring everyone can benefit from AI’s opportunities.
We support these five bills, and we continue to advocate for legislation covering other essential areas.
Principles for Responsible Regulation
To complement scientific innovation, we suggest seven principles as the foundation of prudent and responsible AI regulation.
First, support responsible innovation. We advocate increased spending on both AI innovation and safeguards against its risks. Advances in the technology itself can improve safety and help build more resilient systems.
Second, focus on outputs. Regulating the outputs AI systems produce, by promoting high-quality results and preventing harmful ones, lets regulators intervene in a targeted way without overbroad rules that might stifle beneficial AI advances.
Third, strike a sound copyright balance. Fair use and copyright exceptions governing publicly available data are vital for scientific progress, but website owners should be able to opt out of having their content used for AI training.
Filling Gaps in Existing Laws
Fourth, plug gaps in existing laws. If something is illegal without AI, it should be illegal with AI. The goal is to fill gaps where existing laws don’t adequately cover AI applications.
Fifth, empower existing agencies. There’s no one-size-fits-all regulation for AI. Each agency should be empowered to handle AI within its domain, similar to how we regulate other general-purpose technologies like electricity.
Adopting a Hub-and-Spoke Model
Sixth, adopt a hub-and-spoke model. Establishing a center of technical expertise at an agency like NIST can help advance government understanding of AI and support sectoral agencies.
This model acknowledges that issues in banking differ from those in pharmaceuticals or transportation.
Striving for Alignment
Seventh, strive for alignment. Dozens of AI frameworks and proposals exist globally. Progress requires interventions at points of actual harm rather than blanket regulations, and regulations should align with international standards wherever possible.
Progressing American innovation requires thoughtful, consistent, and collaborative efforts to maximize the benefits of AI for everyone.
Future Potential of AI
AI drives advances from everyday improvements to extraordinary breakthroughs. It enhances tools like Google Search, Translate, and Maps and tackles significant societal challenges.
Think of Google DeepMind’s AlphaFold, which has predicted the 3D shapes of almost all known proteins. AI also forecasts floods up to seven days in advance, providing life-saving alerts for millions of people.
AI could lead to more stunning breakthroughs if we stay focused on its long-term potential. Consistency, thoughtfulness, and collaboration are key to ensuring everyone benefits from AI’s opportunities.
The U.S. is taking decisive steps toward effective AI regulation. Through partnerships, thoughtful legislation, and strategic frameworks, America aims to lead in AI innovation while addressing associated risks. This balanced approach could set a global standard for responsible AI development.
AI has the potential for unprecedented advances. The U.S. approach seeks to harness that potential, securing benefits for all while safeguarding against harms. Collaboration between the public and private sectors, grounded in sound principles, promises a future where AI serves everyone well.