In a groundbreaking move, the California Legislature has introduced Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. This landmark legislation aims to position California as a global leader in realizing the transformative potential of AI while proactively managing the technology's most serious risks.
SB 1047 focuses on regulating "frontier" AI models: advanced systems trained using vast amounts of computing power and potentially capable of sophisticated reasoning and task performance. The bill sets the threshold at models trained using more than 10^26 operations, a scale comparable to the compute used to train today's most powerful foundation models.
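To make the scale of that threshold concrete, here is a minimal sketch in Python, assuming the commonly cited approximation of roughly 6 operations per parameter per training token; the heuristic and the example model size are illustrative assumptions, not anything defined in the bill.

```python
# A minimal sketch, assuming the widely cited ~6 * parameters * tokens
# approximation for training compute (an estimation heuristic, not anything
# defined in SB 1047). The model size below is purely illustrative.

SB1047_THRESHOLD_OPS = 1e26  # the 10^26-operation training-compute threshold


def estimated_training_ops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 operations per parameter per token."""
    return 6 * num_parameters * num_training_tokens


# Hypothetical model: 1 trillion parameters trained on 15 trillion tokens.
ops = estimated_training_ops(1e12, 15e12)
print(f"Estimated training compute: {ops:.2e} operations")
print("Above the SB 1047 threshold" if ops > SB1047_THRESHOLD_OPS
      else "Below the SB 1047 threshold")
```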
Under SB 1047, developers of frontier models must either (the two paths are sketched in code after this list):
- Certify the model qualifies for a "limited duty exemption," indicating it lacks hazardous capabilities and operates within a restricted scope, or
- Implement rigorous safety protocols, including immediate shutdown capabilities, robust security measures, extensive testing procedures, and annual compliance certifications.
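To show the structure of these obligations, here is a purely illustrative Python sketch of the two paths; the field names and decision logic are assumptions made for readability, not language drawn from the bill.

```python
# A purely illustrative sketch of the two compliance paths described above.
# Field names and decision logic are assumptions, not text from the bill.
from dataclasses import dataclass


@dataclass
class FrontierModelAssessment:
    lacks_hazardous_capabilities: bool        # developer's capability evaluation
    operates_in_restricted_scope: bool        # limited-duty-exemption condition
    can_be_fully_shut_down: bool              # immediate shutdown capability
    has_security_and_testing_protocols: bool  # security measures and testing procedures
    has_annual_certification: bool            # annual compliance certification


def compliance_path(a: FrontierModelAssessment) -> str:
    """Return which of the bill's two paths (if either) the assessment satisfies."""
    if a.lacks_hazardous_capabilities and a.operates_in_restricted_scope:
        return "limited duty exemption"
    if (a.can_be_fully_shut_down
            and a.has_security_and_testing_protocols
            and a.has_annual_certification):
        return "full safety-protocol compliance"
    return "neither path satisfied"
```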
The bill defines hazardous capabilities as those that could enable mass casualties, severe economic damage, cyberattacks on critical infrastructure, or other significant threats to public safety if misused. Importantly, it acknowledges that even initially beneficial models may acquire hazardous capabilities through post-training modifications.
To facilitate compliance and oversight, SB 1047 establishes the Frontier Model Division within the California Department of Technology. This division will provide guidance, collect certifications, and investigate reported safety incidents.
Beyond its regulatory components, the bill also aims to democratize access to the immense computing resources necessary for frontier AI development. It directs the creation of CalCompute, a public cloud computing cluster designed to support academic researchers and startups in conducting research on safe and responsible AI advancement.
If passed, SB 1047 would establish California as the first U.S. state with a comprehensive framework for governing advanced AI systems. While some details may require further refinement, the bill offers a well-considered approach to proactively addressing AI risks while fostering responsible innovation.
As generative AI continues to make rapid strides, policymakers increasingly face the challenge of balancing innovation, public protection, and democratic values. California's SB 1047 serves as a substantial model for achieving this balance through targeted oversight, safety requirements, and strategic investments. Other states and the federal government should closely examine this approach.
Maximizing the positive societal impact of transformative AI technologies ultimately requires active collaboration among policymakers, experts, industry leaders, and the public. With SB 1047, California has taken a significant step forward in sparking this critical dialogue and hopefully inspiring further action to ensure the thoughtful navigation of the complex AI landscape ahead. Reach out to learn how Pillar can help you build AI applications that are both trustworthy and compliant.