
US Government Mandates AI Companies Disclose Safety Testing

Penka Hristovska
Published on: January 30, 2024

AI companies will soon have to start sharing their safety test results with the US government.

White House special adviser on AI, Ben Buchanan, said the US government wants “to know AI systems are safe before they’re released to the public — the president has been very clear that companies need to meet that bar.”

The requirement is part of the executive order President Joe Biden signed three months ago, aimed at regulating rapidly advancing AI technology. The order, set for review by the White House AI Council on Monday, includes a directive under the Defense Production Act requiring AI companies to disclose critical information, including safety test results, to the Commerce Department.

“We know that AI has transformative effects and potential,” Buchanan said. “We’re not trying to upend the apple cart there, but we are trying to make sure the regulators are prepared to manage this technology.”

So far, developers have agreed on specific categories for safety testing, but they aren’t yet required to follow a common standard for those tests. Under the executive order President Biden signed in October, the National Institute of Standards and Technology is tasked with creating a standardized framework for evaluating AI safety across these companies.

Additionally, the Biden administration is exploring legislative options with Congress and collaborating with other nations and the European Union to establish regulations for managing the technology.

The government has also bolstered efforts to hire AI experts and data scientists for federal agencies. In the meantime, the Commerce Department has drafted a rule targeting US cloud companies that provide servers to foreign AI developers.

In a related development, nine federal agencies, including the Departments of Defense and Treasury, have finalized risk assessments on the use of AI in vital national infrastructure sectors, such as the electric grid.