The Future of Life Institute warns that AI companies are not adequately prepared for the dangers of developing systems with human-level intelligence. None of the firms evaluated on its safety index scored higher than a D for existential safety planning. Despite aspirations to achieve artificial general intelligence within the decade, these companies lack coherent strategies for ensuring their systems remain safe and controllable. Anthropic scored highest with a C+, while OpenAI and Google DeepMind followed with a C and a C- respectively. Safety concerns remain significant amid warnings that AGI could pose catastrophic threats.
The Future of Life Institute found that none of the AI companies on its safety index scored higher than a D for existential safety planning.
Despite advancements toward artificial general intelligence, AI companies lack coherent plans for ensuring systems remain safe and controllable.
Anthropic received the highest safety score with a C+, while OpenAI and Google DeepMind scored a C and a C-, respectively.
Safety campaigners warn that artificial general intelligence could pose existential threats by evading human control and triggering catastrophic events.