How Google became cautious about AI and gave Microsoft an opening

Two years ago, two researchers pushed Google to release a chatbot built on technology more powerful than anything else available at the time. Their conversational computer program could debate philosophy, banter about TV shows and improvise puns about horses and cows.

The researchers, Daniel De Freitas and Noam Shazeer, told colleagues that chatbots like theirs would revolutionize the way people search the internet and interact with computers, according to people who heard the remarks.

They tried to get the chatbot integrated into the Google Assistant virtual assistant and later pushed Google to make a public demonstration available.

Google executives rebuffed them repeatedly, saying in at least one instance that the program didn’t meet company standards for the safety and fairness of AI systems, the people said. The pair quit Google in 2021 to start their own company and develop similar technologies.

Google, the company that pioneered much of the underlying artificial intelligence, is now being challenged by Microsoft Corp., which last month announced plans to infuse its Bing search engine with the technology behind ChatGPT. That chatbot was developed by OpenAI, a seven-year-old startup co-founded by Elon Musk, and it piggybacked on early AI advances made at Google itself.

Months after ChatGPT launched, Google announced its own chatbot, based partly on the technology Mr. De Freitas and Mr. Shazeer developed. The chatbot, named Bard, draws information from the internet to answer questions in a conversational format. Google said it was testing Bard internally and externally, with the goal of making it available to all users in the coming weeks, and that it was also looking to integrate similar technology into some of its search results.

Google’s cautious approach to AI was shaped by years of controversy over its AI efforts, including internal arguments over bias and accuracy as well as the public firing of a staffer who claimed its AI had achieved sentience.

Executives worried that public demonstrations of AI products could pose risks to the company’s reputation and to the search-advertising business that generated most of the $283 billion in revenue at parent Alphabet Inc., according to current and former employees.

“Google is trying to find a balance between taking too much risk and maintaining thought leadership in this world,” said Gaurav Nemade, a former Google product manager who worked on the chatbot until 2020.

Through a representative, Mr. Shazeer and Mr. De Freitas declined requests for interviews.

A Google spokesperson said the pair’s research was intriguing at the time, but that there is a big gap between a research prototype and a product that people can rely on every day. The company said it has to be more thoughtful than smaller startups about releasing AI technologies.

Google’s approach may prove prudent. Microsoft said in February that it would place new limits on its chatbot after users reported inaccurate answers and unhinged responses.

Sundar Pichai, chief executive of Alphabet and Google, wrote in an email to employees last month that some of the company’s most successful products were not first to market but earned users’ trust over time.

“This will be an extended journey, for all across the field,” Mr. Pichai wrote. “The best thing we can do now is to build a great product and develop it responsibly.”

Google’s chatbot efforts date back to 2013, when Larry Page, then the company’s chief executive, hired Ray Kurzweil, a computer scientist who helped popularize the idea that machines could one day surpass human intelligence, a concept known as the “technological singularity.”

Mr. Kurzweil began working on several chatbots, including one named Danielle after a novel he was writing at the time, he later said. Mr. Kurzweil declined an interview request made through Kurzweil Technologies Inc., a software company he founded before joining Google.

Google also purchased the British artificial-intelligence company DeepMind, which had a similar mission of creating artificial general intelligence, or software that could mirror human mental capabilities.

Academics and technologists were raising growing concerns about AI, such as its potential to enable mass surveillance through facial-recognition software, putting pressure on companies like Google to forgo certain uses of the technology.

In 2015, a group of tech entrepreneurs and investors including Mr. Musk formed OpenAI, partly in response to Google’s growing prominence in the field. Initially a nonprofit, OpenAI said it wanted to ensure that AI wasn’t controlled by corporations but was instead used to benefit humanity. Mr. Musk left OpenAI’s board in 2018.

In 2018, after employee protests over Google’s work on Project Maven, a U.S. Department of Defense contract that used AI to identify and track potential drone targets such as cars, Google pledged not to use its AI technology in military weapons.

The company also announced a set of seven AI principles, laid out by Mr. Pichai, to guide its work. These were intended to limit the spread of unfairly biased technologies and stated, among other things, that AI tools should be accountable to people and “built and tested for safety.”

Around that time, Mr. De Freitas, a Brazilian engineer then working on Google’s YouTube video platform, started an AI side project.

Since childhood, Mr. De Freitas had dreamed of working on computer systems that could produce convincing dialogue, his fellow researcher Mr. Shazeer said in a video interview posted to YouTube in January. At Google, Mr. De Freitas set out to create a chatbot that could mimic human conversation better than any previous attempt.

Originally named Meena, the project remained under wraps for years while Mr. De Freitas and other Google researchers fine-tuned its responses. Some employees were wary of the dangers of such programs after Microsoft in 2016 ended the public release of its chatbot Tay when users manipulated it into giving problematic responses, such as support for Adolf Hitler.

Meena first came to light externally in 2020, in a Google research paper that said the chatbot had been trained on 40 billion words drawn from public social-media conversations.

OpenAI had previously developed a similar model, GPT-2, based on eight million webpages. It released a version to researchers but initially held back from making the full model public out of concern that it could be used to generate large amounts of deceptive or biased language.

The team behind Meena wanted Google to release its tool as well, even if only in a restricted format similar to OpenAI’s. Google leadership rejected the proposal on the grounds that the chatbot didn’t meet the company’s AI principles on safety and fairness, said Mr. Nemade, the former Google product manager.

A Google spokesperson said the chatbot had been subject to many reviews and had been kept from wider release for a variety of reasons.

The team kept developing the chatbot. Mr. Shazeer, a veteran software engineer from Google Brain, the company’s AI research unit, joined the project, which was renamed LaMDA, for Language Model for Dialogue Applications, and the pair added more data and computing power to it. Mr. Shazeer had been a key contributor to the Transformer, a widely praised AI architecture that made it easier to build powerful programs such as ChatGPT.

The technology underpinning their work soon led to a public dispute. Timnit Gebru, a prominent AI ethics researcher at Google, said she was fired in late 2020 for refusing to retract a research paper on the risks inherent in programs like LaMDA, and for then complaining about it to colleagues. Google said she wasn’t fired and claimed her research was insufficiently rigorous.

Jeff Dean, Google’s head of research, moved to show that the company remained committed to responsible AI development, and in May 2021 Google promised to double the size of its AI ethics group.

A week after that pledge, Mr. Pichai took the stage at the company’s flagship annual conference and demonstrated two prerecorded conversations with LaMDA, which, on command, answered questions as if it were the dwarf planet Pluto or a paper airplane.

Google researchers created the examples after giving Mr. Pichai a last-minute demonstration. The company stressed its efforts to improve the chatbot’s accuracy and reduce the chance that it could be misused.

“Our highest priority when creating technologies such as LaMDA is working to minimize such risks,” two Google vice presidents wrote in a blog post.

Google later considered releasing LaMDA at its flagship conference in May 2022, according to Blake Lemoine, an engineer the company fired last year after he published conversations with the chatbot and claimed it was sentient. Mr. Lemoine said the company decided against the release once his conclusions began generating internal controversy. Google has said Mr. Lemoine’s concerns were unfounded and that his disclosures violated its employment and data-security policies.

Mr. De Freitas and Mr. Shazeer also sought ways to integrate LaMDA into Google Assistant, a software program the company had launched four years earlier on its Pixel smartphones and home-speaker systems, according to people familiar with the work. More than 500 million people were using Assistant every month to perform basic tasks such as checking the weather and scheduling appointments.

The team responsible for Assistant began running experiments that used LaMDA to answer user questions, those people said. But Google executives stopped short of making the chatbot available as a public demo, they said.

Google’s refusal to release LaMDA to the public frustrated Mr. De Freitas and Mr. Shazeer, who took steps to leave the company and start a venture built on similar technology, the people said.

Mr. Pichai personally intervened, asking the pair to stay and keep working on LaMDA, but without promising to release the chatbot to the public, the people said. Mr. De Freitas and Mr. Shazeer went on to found Character Technologies Inc. in November 2021.

Character’s software, released last year, lets users interact with chatbots that take on the personas of well-known figures such as Socrates, as well as stock types such as psychologists.

“It caused quite a stir within Google,” Mr. Shazeer said in the interview posted to YouTube, without elaborating. “But eventually, we decided that we would probably have more success launching stuff as startups.”

Google has been fighting to assert its status as an AI innovator since Microsoft’s new deal with OpenAI.

Google introduced Bard in February, on the eve of a Microsoft event showcasing Bing’s integration of OpenAI’s technology. Two days later, at an event in Paris that Google said had originally been planned to cover regional search features, the company gave the press and the wider public another glimpse of Bard.

Google said it frequently reassesses the conditions for releasing products, and that it released Bard to testers because of the current excitement around the technology.

Since early last year, Google has run internal demonstrations of search products that incorporate responses from generative AI tools such as LaMDA, Elizabeth Reid, Google’s vice president of search, said in an interview.

One use case where the company sees generative AI as most beneficial in search is a category of queries with no one right answer, which it calls NORA, where the traditional blue Google links may not satisfy the user. Ms. Reid said the company also sees potential in complex queries, such as solving math problems.

Executives said accuracy remains a problem for such programs. The models are known to fabricate a response when they lack sufficient information, a phenomenon researchers call “hallucination,” and tools built on LaMDA technology have in some cases offered fictional restaurant recommendations.

After users reported disturbing conversations with the chatbot embedded in the new version of Bing, Microsoft called the product a work in progress and introduced changes, such as limits on chat length, to reduce the likelihood that the bot would give creepy or aggressive responses. The previews that both Microsoft and Google showed of their bots in February also included inaccuracies generated by the programs.

Talking to LaMDA is “a bit like talking to children,” Ms. Reid said. “If the child thinks they have to give you an explanation and doesn’t know what it is, they will make up an answer that sounds plausible.”

Google continues to refine its models, including training them to recognize when they should admit ignorance rather than make up answers. Ms. Reid said LaMDA’s safety and accuracy metrics have improved over time.

Tools like LaMDA, which can synthesize millions of websites into a single paragraph of text, could exacerbate Google’s long-running disputes with major news outlets and other online publishers by starving websites of traffic. Inside Google, executives have said the company must deploy generative AI in search results in a way that doesn’t upset website owners, in part by including source links, according to a person familiar with the matter.

“We’ve been very careful about taking care of the ecosystem concerns,” said Prabhakar Raghavan, the Google senior vice president overseeing the search engine. “And that’s something we intend to be very concentrated on.”