Seven major US AI companies have agreed to voluntary safeguards on the technology’s development, the White House announced Friday, pledging to strive for safety, security and trust even as they compete over the potential of artificial intelligence.

The seven companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – will formally announce their commitment to the new standards at a meeting with President Biden at the White House on Friday afternoon.

The announcement comes as the companies race to outdo each other with versions of AI that offer powerful new tools for creating text, photos, music and video without human input. But the technological leaps have sparked fears that the tools will facilitate the spread of misinformation, along with dire warnings of a “risk of extinction” as self-aware computers evolve.

On Wednesday, Facebook’s parent company Meta announced its own AI tool called Llama 2 and said it would release the underlying code to the public. Nick Clegg, the president of global affairs at Meta, said in a statement that his company supports the safeguards developed by the White House.

“We are pleased to make these voluntary commitments alongside others in the sector,” said Mr Clegg. “They are an important first step in ensuring responsible guardrails are established for AI, and they create a model for other governments to follow.”

The voluntary safeguards announced Friday are just an early step as Washington and governments around the world establish legal and regulatory frameworks for the development of artificial intelligence. White House officials said the administration was working on an executive order that would go further than Friday’s announcement and supported the development of bipartisan legislation.

“Companies that develop these emerging technologies have a responsibility to ensure that their products are safe,” the administration said in a statement announcing the agreements. The statement said the companies must “uphold the highest standards to ensure that innovation does not come at the expense of the rights and safety of Americans.”

As part of the agreement, the companies agreed to:

  • Security testing of their AI products, in part by independent experts, and sharing information about their products with governments and others trying to manage the risks of the technology.

  • Ensuring that consumers can spot AI-generated material by implementing watermarks or other means to identify generated content.

  • Publicly reporting the capabilities and limitations of their systems regularly, including security risks and evidence of bias.

  • Deploying advanced artificial intelligence tools to tackle society’s biggest challenges, like curing cancer and fighting climate change.

  • Conducting research on the risks of bias, discrimination and invasion of privacy from the spread of AI tools.

“AI’s track record shows the insidiousness and pervasiveness of these dangers, and companies are committed to rolling out AI that mitigates them,” the Biden administration’s statement said Friday before the meeting.

The deal is likely to slow efforts to pass legislation and impose regulation on the emerging technology. Lawmakers in Washington are racing to catch up with the rapid advances in artificial intelligence, and other governments are doing the same.

The European Union last month advanced one of the most comprehensive efforts yet to regulate the technology. The European Parliament’s proposed legislation would put strict limits on some uses of AI, including facial recognition, and require companies to disclose more data about their products.
