Google’s Bard chatbot repeats the mistake that cost it $120bn in market value

Google’s artificial intelligence chatbot continues to make the same mistake that wiped $120bn off the tech giant’s share price a month ago.

Bard was opened to the public in the US, UK and Canada on Tuesday. However, it still incorrectly claims that the James Webb Space Telescope took the “first pictures of a planet outside of our own solar system”.

In fact, the Very Large Telescope in Chile captured the first ever picture of a planet outside our solar system, in 2004.

When it was first introduced by Google in February, Bard provided the same incorrect answer.

The error triggered a sell-off that wiped $120bn off the value of the internet search giant, amid doubts about the technology.

Google stated that it was testing the bot to ensure “Bard’s answers meet a high standard for quality, safety, and groundedness in real world information”.

Bard, however, still gave the same false information when it was asked the same question on Wednesday.

Google admitted that the chatbot will make mistakes when asked factual questions.

In a blog post, Google acknowledged that Bard can provide information that is incorrect, misleading or false while presenting it confidently.

A Google spokesperson pointed to a paper published by a Google research executive on the limitations of Bard’s underlying technology.

According to the paper, the models behind Bard can generate plausible-sounding responses even when they contain factual errors. This makes them ill-suited to tasks where factual accuracy matters, though it can be useful for generating unexpected or creative output.

Google has called Bard an “experiment” rather than a product that is ready for general use.

The chatbot is designed to give users conversational answers to their questions, digesting information from Google’s search engine, which holds billions of lines of data.

The technology is built on a large language model, which is designed to produce plausible answers to questions. It cannot distinguish fact from fiction, and it will often repeat false information found on the internet.

This can cause AI bots to “hallucinate”, inserting realistic-sounding text into their answers.