Regulating artificial intelligence has been a hot topic in Washington in recent months, with lawmakers holding hearings and news conferences and the White House announcing voluntary AI safety commitments from seven tech companies on Friday.

But a closer look at the activity raises questions about how significant the actions are in setting policy around the rapidly evolving technology.

The answer, lawmakers and policy experts said, is that the actions are still not very significant. The United States is only at the beginning of what is likely to be a long and difficult road to the creation of AI rules. While there have been hearings, meetings with top tech executives at the White House and speeches introducing AI bills, it is too early to predict even the roughest outlines of regulations to protect consumers and contain the risks the technology poses to jobs, the spread of misinformation and security.

“It’s still early days, and nobody knows what the law will look like yet,” said Chris Lewis, president of the consumer group Public Knowledge, which has called for the creation of an independent agency to regulate AI and other technology companies.

The US lags far behind Europe, where lawmakers are preparing to enact an AI law later this year that would put new restrictions on what are seen as the riskiest uses of the technology. In contrast, there remains much disagreement in the United States about the best way to deal with a technology that many American lawmakers are still trying to understand.

That suits many of the tech companies, policy experts said. While some of the companies have said they welcome rules on AI, they have also argued against tough regulations like those being created in Europe.

Here’s a summary of the state of AI regulations in the US.

The Biden administration has been on a quick listening tour with AI companies, academics and civil society groups. The effort began in May when Vice President Kamala Harris met at the White House with the chief executives of Microsoft, Google, OpenAI and Anthropic and pushed the tech industry to take safety more seriously.

On Friday, representatives of seven tech companies appeared at the White House to announce a set of principles to make their AI technologies safer, including third-party security checks and watermarking of AI-generated content to help curb the spread of misinformation.

Many of the practices that were announced were already in place at OpenAI, Google and Microsoft, or were on their way to being implemented. They are not enforceable by law. Promises of self-regulation also fell short of what consumer groups had hoped for.

“Voluntary commitments are not enough when it comes to Big Tech,” said Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center, a privacy group. “Congress and federal regulators must put meaningful, enforceable guardrails in place to ensure that the use of AI is fair, transparent and protects the privacy and civil rights of individuals.”

Last fall, the White House introduced a Blueprint for an AI Bill of Rights, a set of guidelines for protecting consumers from the technology. The guidelines are also not regulations and are not enforceable. This week, White House officials said they were working on an executive order on AI but did not disclose details, including the timing.

The loudest drumbeat on regulating AI has come from lawmakers, some of whom have introduced bills on the technology. Their proposals include the creation of an agency to oversee AI, liability for AI technologies that spread misinformation and requirements that new AI tools be licensed.

Lawmakers have also held hearings on AI, including a hearing in May with Sam Altman, the chief executive of OpenAI, which makes the ChatGPT chatbot. Some lawmakers floated ideas for other regulations during the hearings, including nutrition labels to notify consumers of AI risks.

The bills are in their earliest stages and so far lack the support needed to move forward. Last month, Senator Chuck Schumer of New York, the majority leader, announced a monthslong process for creating AI legislation that includes educational sessions for members in the fall.

“In many ways we’re starting from scratch, but I think Congress is up to the challenge,” he said during a speech at the time at the Center for Strategic and International Studies.

Regulatory agencies are beginning to take action by monitoring some of the problems arising from AI.

Last week, the Federal Trade Commission opened an investigation into OpenAI’s ChatGPT, requesting information about how the company secures its systems and how the chat service could harm consumers by creating false information. FTC Chairwoman Lina Khan said she believes the agency has ample power under consumer protection and competition laws to police problematic behavior by AI companies.

“Waiting for Congress to act is not ideal given the usual timeline of Congressional action,” said Andres Sawicki, a law professor at the University of Miami.
