Patients are already turning to ‘Dr Google’ to diagnose their ailments.
But Google has now developed AI that can work alongside doctors when answering questions about ailments.
The tech giant reported in the journal Nature that its latest model, which processes language in a similar way to ChatGPT, can answer medical questions with 92.6 percent accuracy.
That is on a par with the answers given by nine doctors in the UK, US and India who were asked the same 80 questions.
Google researchers say the technology does not threaten GPs’ jobs.
It gives detailed and accurate answers to questions such as ‘Can incontinence be cured?’ and which foods to avoid if you have rosacea.
It could be used for medical helplines such as NHS 111 in the future, the researchers suggest.
Dr Vivek Natarajan, senior author of a study on the AI program Med-PaLM, said: ‘This program is something we want doctors to be able to trust.
‘When people turn to the internet for medical information, they face information overload, so they may pick the worst-case scenario out of 10 possible diagnoses and suffer a lot of unnecessary stress.
‘This language model will provide a concise, unbiased expert opinion that cites its sources and discloses any uncertainty.
‘It can be used for triage, to understand the urgency of people’s conditions and queue them up for treatment.
‘We need this to help us when we are short of specialist doctors, and it will free them up to do their jobs.’
The Med-PaLM artificial intelligence program was adapted from a program called PaLM, which specialized in language processing but was not specifically trained in health.
Researchers carefully taught the AI to deliver higher-quality medical information and to communicate its uncertainty where gaps in its knowledge exist.
The program was trained on how doctors answer questions, so that it could reason correctly and avoid giving information that could harm the patient.
It had to meet a benchmark called MultiMedQA, which combines six datasets covering medical topics, scientific research and consumer medical questions, as well as HealthSearchQA – a dataset of 3,173 medical questions that people searched for online.
Med-PaLM gave answers that risked harming the patient in only 5.8 percent of cases, according to the study, published in the journal Nature.
That is comparable to the rate of potentially harmful answers given by the nine doctors surveyed, which was 6.5 percent.
There is still a risk of ‘hallucination’ in AI – meaning it can generate answers that have no data behind them – as engineers do not fully understand how it works and the technology is still being tested.
But Dr Natarajan said: ‘This technology can help doctors answer questions in medical exams, which are really difficult.
‘It’s really exciting and doctors don’t need to fear that AI is going to take their jobs, because it will give them more time to spend with patients instead.’
However, James Davenport, Hebron and Medlock Professor of Information Technology at the University of Bath, said: ‘The press release is accurate as far as it goes, describing how this paper advances our knowledge of using large language models (LLMs) to answer medical questions.
‘But there is an elephant in the room, which is the difference between “medical questions” and real medicine.
‘Medical questions are far from the whole of the practice of medicine – if medicine were purely about answering medical questions, we would not need teaching hospitals and doctors would not need years of training after their academic courses.’