Top AI companies have agreed to set limits and guidelines on the development of new AI tools.

Amazon, Google, and Meta are among the companies that have agreed to the guidelines, even as they race to outperform one another with their respective versions of artificial intelligence.

7/22/2023 · 4 min read

On Friday, the White House announced that seven prominent American AI companies have voluntarily agreed to put safeguards in place for the development of AI technology, committing to manage the risks of new tools even as they compete over the potential of artificial intelligence.

The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — formally made their commitment to new standards for safety, security and trust at a meeting with President Biden at the White House on Friday afternoon.

“We must be cleareyed and vigilant about the threats emerging technologies can pose — don’t have to but can pose — to our democracy and our values,” Mr. Biden said in brief remarks from the Roosevelt Room at the White House.

“This is a serious responsibility; we have to get it right,” he said, flanked by the executives from the companies. “And there’s enormous, enormous potential upside as well.”

The announcement of voluntary safeguards comes at a time when the companies are racing to outdo one another with versions of AI that offer powerful new ways to create text, photos, music, and video without human intervention. But the technology’s rapid advance has stoked fears that the tools could spread disinformation, and experts have warned of the possibility that AI could become so sophisticated and humanlike that it poses a risk to humanity’s existence.

The voluntary safeguards are only an early, tentative step as Washington and governments around the world work to establish legal and regulatory frameworks for AI development. The agreements include testing products to identify security risks and using watermarks to help consumers spot AI-generated content. More comprehensive regulation to address the technology’s broader implications is still under consideration.

Lawmakers have struggled to regulate social media and other fast-moving technologies: advances routinely outpace the rules meant to govern them, leaving existing laws ill-suited to the new and complex issues these technologies raise. Policymakers continue to search for a balance between promoting innovation and ensuring responsible use, but striking it remains an ongoing challenge.

The White House offered no specifics on a forthcoming presidential executive order expected to address how China and other competitors obtain new artificial intelligence programs, or the components used to develop them. The order is expected to focus on safeguarding sensitive AI technologies and keeping critical advances out of the hands of potential adversaries; further details on its content and scope have yet to be disclosed.

The executive order is also anticipated to involve new restrictions on advanced semiconductors and limits on the export of large language models. Those models are difficult to secure, since much of the software can be compressed and stored on a device as small as a thumb drive. The order is expected to address those concerns while balancing the need for continued innovation in the AI industry.

“We are pleased to make these voluntary commitments alongside others in the sector,” Nick Clegg, the president of global affairs at Meta, the parent company of Facebook, said in a statement. “They are an important first step in ensuring responsible guardrails are established for A.I. and they create a model for other governments to follow.”

As part of the safeguards, the companies agreed to several steps: security testing of their products, carried out in part by independent experts; research into risks related to bias and privacy; sharing information about potential risks with governments and other organizations; developing tools to help address major societal challenges, such as climate change; and transparency measures to identify AI-generated material, so that users know what is AI-generated and what is not.

In a statement announcing the agreements, the Biden administration said the companies must ensure that “innovation doesn’t come at the expense of Americans’ rights and safety.” “Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” the statement said.

Brad Smith, the president of Microsoft and one of the executives attending the White House meeting, said his company endorsed the voluntary safeguards. “By moving quickly, the White House’s commitments create a foundation to help ensure the promise of A.I. stays ahead of its risks,” Mr. Smith said.

Anna Makanju, the vice president of global affairs at OpenAI, described the announcement as “part of our ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance.”

Paul Barrett, the deputy director of the Stern Center for Business and Human Rights at New York University, said that more needed to be done to protect against the dangers that artificial intelligence posed to society. “The voluntary commitments announced today are not enforceable, which is why it’s vital that Congress, together with the White House, promptly crafts legislation requiring transparency, privacy protections, and stepped-up research on the wide range of risks posed by generative A.I.,” Mr. Barrett said in a statement.

European regulators are planning to implement AI laws later this year, which has led many companies to advocate for similar regulations in the United States. Some lawmakers have introduced bills that propose licensing requirements for AI companies, the establishment of a federal agency to oversee the industry, and data privacy regulations. However, there is no consensus among members of Congress on the specific rules.

Lawmakers are grappling with how to respond to the rise of AI. Some are focused chiefly on potential risks to consumers, while others worry more about falling behind global competitors, particularly China, in the race for leadership in the field. Those differing priorities underscore how difficult it will be to craft balanced rules.