Another week of fascinating developments in artificial intelligence. OpenAI, based in San Francisco, unveiled a new model that can translate languages in real time and tutor children in algebra.
Sundar Pichai, Google's chief executive, hired a 22,000-seat concert venue in Mountain View for two hours of technological showcases that he described as "as profound as fire".
Yet just a few miles away, at Stanford University's medical AI conference, some of the top minds in healthcare were focused on a far more urgent issue: paperwork.
On both sides of the Atlantic, doctors feel overworked and underpaid. A British Medical Association survey of trainee GPs found that seven in ten were suffering from burnout or stress.
Many factors are at play, including staff shortages, pay and long working hours, but paperwork is a common driver.
The average NHS worker spends nearly 14 hours a week on administrative tasks, a 25 percent increase over the past seven years.
In March, junior doctors voted overwhelmingly (98 percent in favor) to continue strikes until September in their fight for a 35 percent pay rise from the NHS. Doctors' real wages in England have fallen by 25 percent since 2008, and according to the General Medical Council, stress among doctors is at an "all-time high".
While Silicon Valley claims that AI can diagnose your ailments and replace your radiologists, doctors are more excited about using the technology to cut busywork and restore some work-life balance.
Jesse Ehrenfeld, president of the American Medical Association, says AI will allow doctors to spend more of their time at home. He counts at least 35 AI "scribe" companies whose tools pair automatic transcription with ChatGPT-style large language models, chatbots that have shown a striking ability to understand context and generate text, to record, transcribe and summarize patient visits.
Ehrenfeld said: "I knew a doctor who, the very first time she used one of these devices, cried because it was the only time in months that she could be home to eat dinner with her children."
Tortus is a British startup whose software assistant listens in on patient consultations and creates notes that a doctor can sign. It has been testing the system at several NHS hospitals.
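Tortus has not published how its assistant works under the hood, but the general recipe the article describes, automatic transcription fed into a large language model that drafts a note for the doctor to sign, can be sketched in a few lines. The snippet below is only an illustrative outline, assuming the OpenAI Python SDK, its "whisper-1" transcription and "gpt-4o" chat models, and a hypothetical recording file; it is not Tortus's actual implementation.

```python
# Illustrative sketch of an AI "scribe" pipeline: transcribe a consultation
# recording, then ask a large language model to draft a clinical note.
# Assumptions: the OpenAI Python SDK, a hypothetical file "consultation.mp3",
# and the "whisper-1" / "gpt-4o" model names.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: speech-to-text on the recorded consultation.
with open("consultation.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# Step 2: summarise the transcript into a structured draft note
# that a clinician must still review and sign.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Draft a clinical note (history, examination, plan) "
                       "from this consultation transcript for doctor review.",
        },
        {"role": "user", "content": transcript.text},
    ],
)
print(response.choices[0].message.content)
```

The crucial design choice, reflected in how Tortus describes its product, is that the model only drafts: nothing enters the record until a clinician has reviewed and signed it.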
The stakes are huge. In the short term, the AI revolution in medicine may look much as it does in other white-collar fields: finance, accounting and other research- and data-entry-intensive jobs.
Dom Pimenta, a 36-year-old cardiologist who founded Tortus, said: "If we look at the numbers, 60 percent of our time is spent on computers, which means 40 percent of our time is with patients. If you cut that computer time to 15 percent, you can spend 85 percent of your time with patients. Imagine what that could do for the NHS; it would be like having more doctors and nurses."
"This doesn't replace people," added Hatim Abdulhussein, a GP and the chief officer of an NHS innovation network covering Kent, Surrey and Sussex. "It's about enhancing the work that we do, because there aren't enough people."
That is not to say there are no frontline diagnostic devices on the market. In the past two years the US Food and Drug Administration has approved 247 devices that use AI or machine learning, a subset of AI. But deployment has been slow.
More than 80 percent of these devices are used in radiology, the taking and interpretation of X-rays. For years, tech enthusiasts have predicted the end of the field, with machines that can detect the subtlest of signs replacing human radiologists.
According to one estimate, only 2 percent of American radiology practices are using AI tools, owing to a lack of real-world data and doctors' reluctance to fold the technology into their busy workflows. Last year the NHS set aside £21 million for an AI diagnostic fund to help trusts adopt tools that can diagnose strokes and interpret chest X-rays.
Institutional inertia and slow-footed regulation are powerful brakes on the adoption of new technologies.
Even those who build AI tools cannot definitively explain how they work. Sylvia Plevritis, a professor of biomedical data science and radiology at Stanford University, said: "One of the main ethical issues is the black-box nature of these tools. We don't really understand how they operate."
Another obstacle is liability. Under US law, doctors who rely on algorithms to diagnose or treat patients are legally liable for any errors the machine makes. Ehrenfeld said: "Physicians are interested in knowing who is responsible when things go wrong. Why is it my problem if I use the model as instructed and there is a problem with the underlying model?"
Meanwhile, the hype continues to grow. Vinod Khosla, the tech billionaire who wrote an early cheque for OpenAI, has predicted that AI will give everyone access to "free doctors" within the next decade. His conviction is not unfounded.
For years the New England Journal of Medicine has published "challenge case" articles, presenting a patient's symptoms and asking readers to guess the illness. A study published in November found that OpenAI's GPT-4 outperformed 99.98 percent of "simulated" human readers, which the experimenters built from years of actual answers submitted by doctors.
These results may be the stuff of venture capitalists' fever dreams, but the regulatory and legal work still to be done means the flowering of robot doctors remains both tantalisingly near and frustratingly far off.
This is why, at Stanford last weekend, much of the discussion focused on mundane tasks such as transcription and note generation, tasks that are both essential and tedious. At the heart of every case is the patient's "history", the account of symptoms they give the doctor, which helps the clinician reach a diagnosis and decide on a course of treatment.
Pimenta remembers times in his early career when he would send out treatment letters at 1am, "because I was so behind with my day work". He continued: "I once had a boss who said, 'Good medicine means doing the boring, simple things right all day long, every day.'" It turns out AI is perfectly suited to this.
It may not be the revolution we were promised, but it is a revolution all the same.