
"The legislation requires AI companies to implement and disclose publicly safety protocols to prevent their most advanced models from being used to cause major harm. The rules are designed to cover AI systems if they meet a frontier threshold that signals they run on a huge amount of computing power. Such thresholds are based on how many calculations the computers are performing."
"The existing systems are largely made by California-based companies like Anthropic, Google, Meta Platforms and OpenAI. The legislation defines a catastrophic risk as something that would cause at least $1 billion in damage or more than 50 injuries or deaths. It's designed to guard against AI being used for activities that could cause mass disruption, such as hacking into a power grid."
California enacted a law requiring safety protocols and public disclosure for frontier-scale artificial intelligence models, aiming to prevent their use in potentially catastrophic activities such as building bioweapons or shutting down bank systems. The law targets models that meet a frontier threshold based on the amount of computation they use, covering today's highest-performing generative systems as well as future, more powerful versions. Many major AI companies based in California, including Anthropic, Google, Meta Platforms and OpenAI, must follow the requirements. The legislation defines catastrophic risk as causing at least $1 billion in damage or more than 50 injuries or deaths, and is designed to guard against mass-disruption scenarios such as hacking a power grid.
Read at www.mercurynews.com