The healthcare system is rushing to adopt breakthrough AI models, despite profound unanswered questions.

CHICAGO – Industries are grappling with the potential of ChatGPT – a breakthrough application that generates humanlike responses.

Healthcare is no exception. GPT-4, the technology underlying ChatGPT, is being used by hospitals to solve key problems in medicine and business. Companies building next-generation AI products are struggling to keep up with the demand.

Epic, a behemoth that touches every corner of the healthcare industry, has made generative AI a priority. Seth Hain, Epic’s director of research and development, told Insider that the pace at which Epic is creating new tools, and the interest from its customers, are unprecedented.

He said, “I have been working on healthcare software for over 19 years, and I was part of developing our AI and machine-learning capabilities during that time. I cannot think of anything comparable.”

Microsoft has become a healthtech leader since the launch of ChatGPT. The company has a close partnership with OpenAI, the creator of the chatbot.

Peter Durlach, the chief strategy officer of Nuance, a Microsoft subsidiary, told Insider that Microsoft and Nuance have “been inundated” by providers, health plans, and life-science companies wanting to learn more about generative AI and how it can be used.

Durlach said there are pain points everywhere.

Insider has found that the large language models behind ChatGPT are being used to automate medical notes, accelerate research, and assess patient populations for signs of disease. Not all of the projects are public.

All signs point to a new chapter in AI that could benefit patients and doctors.

But providers are pushing into uncharted territory. Hospitals, not wanting to be left behind, are moving forward carefully with their experiments, trying to balance safety and speed. Deploying the leading generative AI models will require them to solve the legal, ethical, and practical mysteries that surround these systems.

Peter Lee, Microsoft’s head of research, recently issued what sounded like a warning to healthcare executives.

He and his team have spent eight months exploring how generative AI can be used in healthcare. During a panel discussion at the Healthcare Information and Management Systems Society’s annual conference in April, he said the technology could, for example, give doctors guidance on treatment plans for patients.

“What we discovered is that things are complex,” Lee said on stage in Chicago. There are many benefits, he said, but also some frightening risks.

Lee’s panel included leaders in tech and healthcare, and their questions about generative AI ranged widely.

Who do you sue if something goes wrong? How fast should healthcare organizations move? What if AI models reinforce the biases that already make healthcare inequitable and unfair?

Reid Blackman, an ethicist and consultant, believes the reasoning behind ChatGPT’s outputs is unknowable, which he considers a serious ethical issue for prospective patients.

“Look, when you make a cancer diagnosis, I want to know the exact reasons for it,” he said.

Lee appeared to be wrestling with this problem in real time. He said the GPT model is so complex that his team has been unable to determine whether it can identify and explain its own biases.

Lee said, “There are deep scientific mysteries we’re still trying to understand in addition to ethical and legal puzzles.”

Despite this, ambitious projects using generative AI are already underway.

People are already using ChatGPT for free therapy, and mental-health apps are experimenting with it.

BJ Moore, the chief information officer of Providence Health System, said companies are pushing Microsoft to train its large language models on vast databases of de-identified patient data, which would make the models more useful in clinical settings.

Julius Bogdan, the head of HIMSS’s advisory services, said health systems are using generative AI to identify patients at risk of sepsis and other conditions. Kaiser Permanente, one of the largest health systems in the United States, has a similar project underway targeting heart disease, but it declined to discuss the work with Insider, citing its early stage.

For now, much of the early work in generative AI focuses on mundane back-office tasks.

Cris Ross, the Mayo Clinic’s chief information officer, said Mayo’s initial goals for generative AI are modest, but the tools could still save providers time each day. Starting with low-stakes uses could also leave Mayo better equipped to develop more ambitious tools.

Ross said, “I’d like to have a chatbot at my help desk for doctors who are having computer problems.” “Simple stuff, right?”

Hain said Epic’s first priority for generative AI is helping clinicians become more efficient. In April, Epic launched a tool that drafts messages to help providers respond to patients.

Nuance, a company that began making dictation software for doctors in 1990, recently combined its AI models with GPT-4 – the latest model behind ChatGPT.

At the HIMSS conference, the company gave invite-only presentations showing how this combination could transform the care industry.

Insider watched Dr. Julie O’Connor, a Nuance consultant, ask a “patient,” an audience volunteer, what had brought him to the urgent-care clinic, the first setting in which Nuance’s tools were demonstrated. She pretended to perform a physical examination.

A screen behind her showed Nuance’s DAX Express, the company’s newest product, furiously recording a transcript of their conversation. Seconds after the visit, the transcript was used to generate a medical note.

The note outlined the patient’s story — he had been struggling to breathe, particularly over the last few days — and O’Connor’s findings, such as fluid in his lungs.

O’Connor corrected an error in the note and clicked a button to add it to the patient’s medical record.

The presentation moved to a hospital setting, where O’Connor dictated into an app Nuance makes for nurses. It automatically filled in more paperwork, including patients’ urine output.

Nuance then envisioned an app helping the patient at home after he was discharged following an acute exacerbation of congestive heart failure.

“I’m following a ketogenic diet. Is this a problem?” the patient, now played by the Nuance employee Jon Dreyer, asked the app.

The app wrote that the keto diet could raise blood pressure and cause problems for people with congestive heart failure.

Nuance and Epic have worked closely together. That collaboration has allowed the medical-note generator, for example, to be configured to work within the records clinicians already use, giving Nuance and Microsoft a clear path to putting their generative AI tools into action.

Other companies and app developers may not be able to form such partnerships.

Health systems are still unsure how to deploy generative AI safely within their data ecosystems. Some worry that less tech-savvy providers will fall behind, creating a backlog for toolmakers such as Epic.

HIMSS’s Bogdan cited adoption and privacy issues as reasons generative AI is not a sure thing. As a former chief information officer for a health system, he has seen incredibly powerful AI algorithms sit on the shelf.

Even if you build the most innovative thing, he said, nothing will happen if clinicians do not understand and accept it.