Microsoft-backed company claims its latest AI model, its ‘most advanced’ yet, exhibits human-level performance. OpenAI has released GPT-4, its latest artificial intelligence model, which it claims exhibits “human-level performance” on several academic and professional benchmarks such as the US bar exam, advanced placement tests and the SAT school exams.
The new model is accessible through ChatGPT’s $20-a-month paid version. It is multimodal, meaning it can accept input in both image and text form, parse those queries, and respond in text.
OpenAI said it has embedded the new software in a number of apps, including the language-learning app Duolingo, which uses it to create conversational language bots; the education company Khan Academy, which has built an online tutor; and Morgan Stanley Wealth Management, which is testing an internal GPT-4 chatbot that retrieves, synthesises and presents information for its employees.
Because the model can accept text and images as input, it can generate detailed descriptions and answer questions based on the content of a photograph. The company announced a partnership with the Danish startup Be My Eyes, which connects people with vision impairments to human volunteers, to build a GPT-4-based virtual volunteer that can guide and help blind or partially sighted individuals.
GPT-4’s predecessor, GPT-3.5, captured the imaginations of millions of people who used the chatbot ChatGPT to answer their questions.
OpenAI calls GPT-4 its “most advanced system” yet, claiming it is far more reliable and better able to handle nuanced questions than its predecessor. GPT-4, for instance, scored in the 90th percentile on the Uniform Bar Exam taken by prospective lawyers in the US, compared with ChatGPT, which achieved only the 10th percentile.
However, the company pointed out some issues: “Despite its capabilities, GPT-4 has similar limitations as earlier GPT models: it is not fully reliable (eg, can suffer from ‘hallucinations’), has a limited context window, and does not learn from experience.”
The company said GPT-4’s outputs should be used with caution, particularly in contexts where reliability is critical.
Microsoft announced earlier this year that it would make a multibillion-dollar, multiyear investment in OpenAI, a bet on the future of generative AI — software capable of responding to complex human queries in natural-sounding language. GPT-4 is the backbone of Microsoft’s Bing chatbot, which was released in a limited capacity earlier this year, and Microsoft is expected to announce the integration of GPT-4 into its consumer products in the coming days.
Google, meanwhile, announced that its chatbot has been opened to a limited number of testers, and that Google Cloud customers will be able to use its large language model PaLM to build applications for the first time.
OpenAI had previously published details of its GPT-3 models, but said it would not disclose any technical details of GPT-4, citing safety and competitive concerns.
The company put GPT-4 through stress tests to assess its potential harms, and outlined the risks it sees around privacy, bias and cybersecurity. GPT-4 could “generate potentially dangerous content, such as hate speech or advice on planning attacks”, can “reflect various biases and worldviews”, and “can also generate code that is compromised or vulnerable”. OpenAI said it could provide detailed information on how to conduct illegal activities, including developing biological weapons.
The company said it had also worked with an outside organisation to test whether GPT-4 could perform autonomous actions without human input, and concluded that it was not yet capable of doing so.