Demis Hassabis, DeepMind CEO, warns that huge AI funding can lead to grifting and hype.

According to Sir Demis Hassabis of DeepMind, the surge of money flowing into artificial intelligence is creating a crypto-like buzz that obscures the remarkable scientific progress being made in the field.

The head of Google’s AI division said billions of dollars are being invested in generative AI start-ups and products. “This brings with it a lot of hype, and perhaps some grifting, and other things you see in other hyped-up areas like crypto or whatever,” he said.

“Some of this has spilled into AI, which I think is unfortunate. It also clouds the science and the research, which are phenomenal,” he said. “In some ways, AI is not hyped up enough. But in other senses it’s over-hyped. We’re discussing all kinds of things that just aren’t real.”

OpenAI’s ChatGPT bot launched in November 2022, sparking an investor frenzy as start-ups raced to deploy generative AI and attract venture capital funding.

According to market analysts CB Insights, VC groups invested more than $42.5bn across 2,500 AI start-up equity funding rounds in 2023.

Investors on the public markets have also been rushing into the Magnificent Seven, a group of technology companies, including Microsoft, Alphabet and Nvidia, that are leading the AI revolution. Their growth has helped propel global stock markets to their strongest first-quarter performance in five years.

Regulators are already scrutinising companies that make false claims about AI. Gary Gensler, chair of the US Securities and Exchange Commission, cautioned in December that “one shouldn’t AI-wash and one shouldn’t greenwash”.

Despite some of the hype surrounding AI, Hassabis said he remained convinced it would prove one of the most transformative inventions in human history.

He said: “I believe we are only scratching the surface of what will be possible in the next decade and beyond. We are at the beginning of a golden age of scientific discovery. A new Renaissance, perhaps.”

He pointed to DeepMind’s AlphaFold model, released in 2021, as the best proof of concept for how AI can accelerate scientific research.

AlphaFold has been used by more than 1mn biologists worldwide to predict the structures of 200mn protein molecules. DeepMind is using AI to accelerate research in other areas of biology, such as drug discovery and delivery, as well as in materials science, mathematics, weather prediction and nuclear fusion. Hassabis said his ambition was to use AI as “the ultimate tool for science”.

DeepMind, founded in London in 2010, was created with the goal of building artificial general intelligence (AGI), software with cognitive abilities equal to a human’s. Some researchers believe AGI is still decades away, if it can be achieved at all.

Hassabis said one or two critical breakthroughs were still needed before AGI could be achieved. He added: “I wouldn’t be surprised if AGI happened within the next decade. It’s not a certainty, but I wouldn’t be surprised if it happened. You could say there is a 50 per cent chance, and that timeline hasn’t really changed since DeepMind was founded.”

Given the power of AGI, Hassabis said, it would be better to pursue the mission using the scientific method rather than the hacker-style approach favoured in Silicon Valley. “I believe we should adopt a scientific approach to building AGI, because of its importance,” he said.

The DeepMind co-founder advised the British government ahead of the first global AI Safety Summit, held at Bletchley Park last year. Hassabis welcomed the continuing international dialogue, which will see further summits held in South Korea and France, as well as the creation of AI safety institutes in the UK and US.

He said: “I believe these are important steps. But there’s a lot more to do, and we have to hurry because the technology is improving exponentially.”

DeepMind researchers published a paper last week outlining a method called SAFE for reducing the factual errors, known as hallucinations, generated by large language models such as OpenAI’s GPT and Google’s Gemini. Such unreliability has led lawyers to submit fictitious references in court and has discouraged many companies from deploying these models commercially.

Hassabis said DeepMind is exploring new ways to fact-check and ground its models, for example by cross-checking their responses against Google Search or Google Scholar.

He compared this approach of double-checking outputs to the way DeepMind’s AlphaGo model mastered the ancient game of Go. A large language model can likewise assess whether a response makes sense and adjust it, much as AlphaGo does not simply play the first move it considers. The network, he said, takes some time to plan and think.
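The grounding idea described above, generate a response, break it into individual claims, then check each claim against search, can be sketched in a few lines. The sketch below is purely illustrative: the function names are invented for this example, and a stub lookup table stands in for a real search engine; it is not DeepMind’s actual SAFE implementation.

```python
# Illustrative sketch of search-grounded fact-checking: split a model's
# response into atomic claims, then verify each one against "search".
# Here, "search" is a toy dictionary standing in for a live engine.

def split_into_claims(response: str) -> list[str]:
    """Naively treat each sentence as one atomic factual claim."""
    return [s.strip() for s in response.split(".") if s.strip()]


def claim_supported(claim: str, search_index: dict[str, bool]) -> bool:
    """Stub search step: look the claim up in a table of known facts.
    A real system would issue search queries and rate the evidence."""
    return search_index.get(claim, False)


def fact_check(response: str, search_index: dict[str, bool]) -> dict[str, bool]:
    """Return a supported/unsupported verdict for every claim."""
    return {c: claim_supported(c, search_index) for c in split_into_claims(response)}


# Toy "index" of facts a search engine might confirm.
KNOWN_FACTS = {
    "AlphaFold predicted the structures of 200mn proteins": True,
}

verdicts = fact_check(
    "AlphaFold predicted the structures of 200mn proteins. AlphaFold cures all disease",
    KNOWN_FACTS,
)
# The first claim is supported by the index; the second is flagged as unsupported.
```

In a production system the expensive step is the search itself, which is why the cost comparison against human annotators matters.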

When challenged to verify 16,000 facts, SAFE agreed with human annotators in 72 percent of cases, and it was 20 times cheaper.