The Biden-Harris administration has officially turned to Big Tech companies and other AI stakeholders to address the safety and reliability of AI development.
On Thursday, the US Department of Commerce announced the creation of the AI Safety Institute Consortium (AISIC). The consortium, which reports to the Department of Commerce’s National Institute of Standards and Technology (NIST), is tasked with carrying out the mandates of President Biden’s AI executive order. This includes “developing guidelines for red teaming, capability assessments, risk management, safety and security, and watermarking on synthetic content,” Commerce Secretary Gina Raimondo said in the announcement.
The list of more than 200 participants includes major technology players that have been developing artificial intelligence tools, among them OpenAI, Google, Microsoft, Apple, Amazon, Meta, NVIDIA, Adobe, and Salesforce. It also includes stakeholders from academia, including institutes at MIT, Stanford, and Cornell, as well as industry think tanks and research organizations such as the Center for AI Safety, the Institute of Electrical and Electronics Engineers (IEEE), and the Responsible AI Institute.
The AI consortium is the result of Biden’s sweeping executive order, which seeks to tame the wild west of AI development. AI has been considered a significant risk to national security, privacy and surveillance, electoral misinformation, and job security, to name a few. “The U.S. government has an important role to play in setting standards and developing the tools we need to mitigate risks and harness the immense potential of artificial intelligence,” Raimondo said.
While the European Parliament has been working to develop its own AI regulations, this consortium marks a significant step in the US government’s effort to formally and concretely regulate AI. The full list of AISIC participants can be found here.