OpenAI and Meta are ready to launch new AI models that can ‘reason’

OpenAI and Meta have announced that they are about to release new artificial intelligence models. These models will, according to the companies, be capable of reasoning and planning — a critical step towards machines achieving superhuman cognition.

Executives at OpenAI and Meta announced this week that they were preparing to launch the next versions of their large language models, the systems that power AI applications such as ChatGPT.

OpenAI, backed by Microsoft, said that the next version of its model, GPT-5, is expected “soon”.

“We’re hard at work trying to figure out how we can get these models to not only talk, but to actually reason, plan . . .” said Joelle Pineau, vice-president of AI research at Meta.

Brad Lightcap, OpenAI’s chief operating officer, said in an interview that the next generation of GPT would show progress on solving “hard problems” such as reasoning.

“We will start to see AI capable of taking on more complex tasks in a more sophisticated manner,” he said. “I believe we are just scratching the surface of the reasoning abilities that these models possess.”

Lightcap said that today’s AI systems were “very good at small, one-off tasks” but that their capabilities were still “pretty limited”.

The upgrades from Meta and OpenAI are part of a wave of large language models released this year by companies including Google and Anthropic.

The pace of technological progress is accelerating as tech companies race to develop ever more sophisticated generative AI — software that can produce humanlike text, images, code and video of a quality indistinguishable from human output.

AI researchers regard reasoning and planning as important steps towards what they call “artificial general intelligence” — human-level cognition — because these capabilities would allow chatbots and digital assistants to complete sequences of tasks and predict the consequences of their actions.

Yann LeCun, Meta’s chief AI scientist, said at an event held in London on February 2 that today’s AI systems “produce one word after another without really thinking or planning”.

Because they struggle with complex questions and with retaining information over long periods, he said, they “make stupid errors”.

Adding reasoning, he said, would mean that an AI model could search over possible answers, “plan the sequence of actions” and build a mental model of the effects of its actions.

He added that this was “a big missing piece” that Meta was working on in order to bring machines up to the next level of intelligence.

LeCun announced that Meta was developing AI “agents” that could, for example, plan and book every stage of a trip from a person’s office in Paris to an office in New York, including getting to the airport.

Meta intends to integrate its new AI model into its Ray-Ban smart glasses and WhatsApp. In the next few months, it will release Llama 3 models in a range of sizes for different applications and devices.

Lightcap said OpenAI would have “more information soon” about the next version of GPT.

“Over time . . . we’ll see models move towards longer, more complex tasks,” he said. “And implicitly, that requires an improvement in their reasoning ability.”

At Meta’s event in London, Chris Cox, the company’s chief product officer, said the cameras built into its Ray-Ban smart glasses could be used to examine, for example, a broken espresso machine, with an AI assistant powered by Llama 3 explaining to the wearer how best to fix it.

LeCun said that in future “we will always be talking with these AI assistants”, and that AI systems would come to mediate our entire digital diet.