
"The safety criteria in the program would examine multiple intrinsic components of a given advanced AI system, such as the data upon which it is trained and the model weights used to process said data into outputs. Some of the program's testing components would include red-teaming an AI model to search for vulnerabilities and facilitating third-party evaluations. These evaluations will culminate in both feedback to participating developers as well as informing future AI regulations, specifically the permanent evaluation framework developed by the Energy secretary."
"The bill stipulates that the program would protect against AI risks to national security, public safety and civil liberties, as AI and machine learning are further integrated into nearly all societal sectors. Hawley, who has been critical of tech companies' overreach and AI safety concerns, said the bill takes steps to verify the safety and efficacy of rapidly-growing AI technologies."
The Artificial Intelligence Risk Evaluation Act of 2025 creates a Department of Energy-led secure testing and evaluation program for advanced AI products. The program requires advanced AI systems to meet specified safety criteria before consumer deployment in interstate or foreign commerce. Evaluations will examine intrinsic components such as training data and model weights, and testing will include red-teaming and third-party assessments. Results will produce developer feedback and inform a permanent evaluation framework set by the Energy secretary. The program focuses on mitigating risks to national security, public safety, and civil liberties as AI integrates across societal sectors.
Read at Nextgov.com