The biggest companies in the technology industry have spent the year warning that the development of artificial intelligence technology is exceeding their wildest expectations and that they must limit who has access to it.

Mark Zuckerberg is marching to a different beat: he is giving it away.

Mr. Zuckerberg, the chief executive of Meta, said on Tuesday that he plans to provide the code behind the company’s latest and most advanced AI technology to developers and software enthusiasts around the world for free.

The decision, similar to one Meta made in February, could help the company catch up with competitors like Google and Microsoft, which have moved faster to incorporate generative artificial intelligence — the technology behind OpenAI’s popular chatbot ChatGPT — into their products.

“When software is open source, more people can examine it to identify and fix potential problems,” Mr. Zuckerberg said in a post to his personal Facebook page.

The latest version of Meta’s AI was created with 40 percent more data than the version the company released just a few months ago and is believed to be considerably more powerful. Meta is also providing a detailed guide showing how developers can work with the vast amount of data it has collected.

Researchers worry that generative AI could accelerate the spread of misinformation and spam on the internet, and that it poses dangers even some of its creators do not fully understand.

Meta adheres to a long-standing belief that allowing developers of all kinds to tinker with technology is the best way to improve it. Until recently, most AI researchers agreed. But over the past year, companies like Google, Microsoft and OpenAI, a San Francisco startup, have set limits on who has access to their latest technology and placed controls on what can be done with it.

The companies say they are limiting access because of security concerns, but critics say they are also trying to stifle competition. Meta argues that it’s in everyone’s best interest to share what it’s working on.

“Meta has historically been a big proponent of open platforms, and it’s really worked well for us as a company,” Ahmad Al-Dahle, vice president of generative AI at Meta, said in an interview.

The move will make the software “open source,” meaning the underlying computer code can be freely copied, modified and reused. The technology, called LLaMA 2, provides everything anyone would need to build online chatbots like ChatGPT. LLaMA 2 will be released under a commercial license, meaning developers can build businesses of their own powered by Meta’s underlying AI, all for free.

By open-sourcing LLaMA 2, Meta can take advantage of improvements made by developers outside the company while — Meta executives hope — spurring AI experimentation.

Meta’s open source approach is not new. Companies often open source technologies to catch up with rivals. Fifteen years ago, Google open-sourced its Android mobile operating system to better compete with Apple’s iPhone. While the iPhone had an early lead, Android later became the dominant software used in smartphones.

But researchers argue that someone could deploy Meta’s AI without the safeguards that tech giants like Google and Microsoft often use to suppress toxic content. Recently created open source models could be used, for example, to flood the internet with even more spam, financial scams and misinformation.

LLaMA 2, short for Large Language Model Meta AI, is what scientists call a large language model, or LLM. Chatbots like ChatGPT and Google Bard are built with large language models.

The models are systems that learn skills by analyzing enormous volumes of digital text, including Wikipedia articles, books, internet forum conversations and chat logs. By pinpointing patterns in the text, these systems learn to generate text of their own, including term papers, poetry and computer code. They can even carry on a conversation.

Meta executives argue that their strategy is not as risky as many believe. They say that people can already generate large amounts of misinformation and hate speech without using AI, and that such toxic material is already heavily policed on social networks like Meta’s Facebook. They maintain that releasing the technology may eventually strengthen the ability of Meta and other companies to combat abuses of the software.

Meta put LLaMA 2 through additional “red team” testing before releasing it, Mr. Al-Dahle said. That is a term for probing software for potential misuse and figuring out ways to protect against it. The company will also publish a responsible-use guide containing best practices and guidelines for developers who want to build apps using the code.

But these tests and guidelines apply to only one of the models Meta is releasing, which will be trained and fine-tuned with guardrails meant to prevent misuse. Developers will also be able to use the code to create chatbots and apps without those guardrails, a move that skeptics see as a risk.

In February, Meta released the first version of LLaMA to academics, government researchers and others. The company also allowed those researchers to download LLaMA after it had been trained on vast amounts of digital text. Scientists call this process “releasing the weights.”

It was a remarkable move because analyzing all that digital data requires vast computing and financial resources. With the weights, anyone can build a chatbot far more cheaply and easily than by starting from scratch.

Many in the tech industry believed that Meta had set a dangerous precedent, and after Meta shared its AI technology with a small group of academics in February, one of the researchers leaked the technology onto the public internet.

In a recent opinion piece in The Financial Times, Nick Clegg, Meta’s president of global public policy, argued that it is “not sustainable to keep foundational technology in the hands of just a few large corporations,” and that companies that have released open source software have historically been well served strategically by doing so.

“I look forward to seeing what you all build!” Mr. Zuckerberg said in his post.
