
"In 2024, Silicon Valley mounted a fierce campaign against his controversial AI safety bill, SB 1047, which would have made tech companies liable for the potential harms of their AI systems. Tech leaders warned that it would stifle America's AI boom. Governor Gavin Newsom ultimately vetoed the bill, echoing similar concerns, and a popular AI hacker house promptly threw a "SB 1047 Veto Party." One attendee told me, "Thank god, AI is still legal.""
"Now Wiener has returned with a new AI safety bill, SB 53, which sits on Governor Newsom's desk awaiting his signature or veto sometime in the next few weeks. This time around, the bill is much more popular or at least, Silicon Valley doesn't seem to be at war with it. Anthropic outright endorsed SB 53 earlier this month. Former White House AI policy advisor Dean Ball says SB 53 is a "victory for reasonable voices," and thinks Governor Newsom may sign it."
"If signed, SB 53 would impose some of the nation's first safety reporting requirements on AI giants like OpenAI, Anthropic, xAI, and Google - companies that today face no obligation to reveal how they test their AI systems. Many AI labs voluntarily publish safety reports explaining how their AI models could be used to create bioweapons and other dangers, but they do this at will and they're not always consistent."
SB 53 would require leading AI labs, those with annual revenue above $500 million, to publish safety reports for their most capable models. The measure focuses on severe risks, including the potential for AI systems to contribute to human deaths. Many AI companies already publish voluntary safety reports, but those disclosures vary in scope and consistency. The bill has drawn notable industry support, including an endorsement from Anthropic and cautious backing from Meta, and some policy experts view it as a moderate step toward regulation. Governor Newsom's signature or veto is pending.
Read at TechCrunch