by Darcy Millett, Naomi Stekelenburg, CSIRO
Imagine you're shopping online and chatting to a helpful bot about buying some new shoes. That bot is likely powered by a large language model (LLM). LLMs are a type of artificial intelligence (AI), and they are gaining traction in health care.
At our Australian e-Health Research Center (AEHRC), we're working to safely use LLMs and other AI tools to optimize health care for all Australians. Despite their increasing popularity, there are some misconceptions about how LLMs work and what they are suitable for.
New (digital) generation
One of the most widely used and well-known types of AI is generative AI. This is an umbrella term for AI tools that create content, usually in response to a user's prompt.
Different types of generative AI create different types of content. This content could be text (like OpenAI's ChatGPT or Google's Gemini), images (like DALL-E) and more.
LLMs are a type of generative AI that can recognize, translate, summarize, predict and generate text-based content. They were designed to imitate the way humans analyze and generate language.
Brain training
To perform these tasks, a neural network (the brain of the AI) is trained. These complex mathematical systems are inspired by the neural networks of the human brain and are very good at identifying patterns in data.
There are many kinds of neural networks, but most LLMs are based on a design called the "transformer architecture." Transformers are built from layers of the neural network called "encoders" and "decoders." These layers work together to analyze the text you put in, identify patterns, and predict which word is most likely to come next.
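To make this concrete, here's a toy sketch in Python using the PyTorch library (our choice purely for illustration; real LLMs are vastly larger and differ in many details). It stacks two transformer layers over word embeddings, then reads off a score for every word in a made-up vocabulary at the next position. All sizes and inputs are invented.

```python
import torch
import torch.nn as nn

# Toy sizes, made up for illustration; real LLMs are vastly larger.
vocab_size, d_model, seq_len = 1000, 64, 8

embed = nn.Embedding(vocab_size, d_model)           # words -> vectors
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
stack = nn.TransformerEncoder(layer, num_layers=2)  # the transformer layers
to_vocab = nn.Linear(d_model, vocab_size)           # vectors -> word scores

tokens = torch.randint(0, vocab_size, (1, seq_len))  # a pretend 8-word input
# A causal mask stops each position from "seeing" the words after it.
mask = nn.Transformer.generate_square_subsequent_mask(seq_len)

hidden = stack(embed(tokens), mask=mask)
next_word_scores = to_vocab(hidden[:, -1])  # scores for the *next* word
print(next_word_scores.shape)               # torch.Size([1, 1000])
```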
AI models are trained using LOTS of data. LLMs identify patterns in text-based data and learn how to generate language.
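As a rough sketch of what "training" means here: given some context and the word that actually came next in the data, the model's weights are nudged so that word becomes more likely. Everything below (the tiny stand-in model, the made-up word IDs) is illustrative only.

```python
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64
# A tiny stand-in for a transformer: embed a word, score every next word.
model = nn.Sequential(nn.Embedding(vocab_size, d_model),
                      nn.Linear(d_model, vocab_size))
optimizer = torch.optim.Adam(model.parameters())

context = torch.tensor([[17]])   # made-up ID of the current word
true_next = torch.tensor([42])   # made-up ID of the word that actually followed

logits = model(context)[:, -1]   # the model's scores for every possible word
loss = nn.functional.cross_entropy(logits, true_next)
loss.backward()     # work out how each weight contributed to the error
optimizer.step()    # nudge the weights so the true next word scores higher
```

Repeat that step over billions of sentences and the model gradually learns the patterns of a language.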
Dr. Bevan Koopman, a senior research scientist at AEHRC, says it's important to remember what tasks LLMs are performing.
"A lot of misconceptions surround the fact that LLMs can reason. But LLMs are just very good at recognizing patterns in language and then generating language," Bevan says.
Once trained, the model can analyze and generate language in response to a prompt. It does this by repeatedly predicting which word is most likely to come next in a sentence.
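In code, that prediction loop looks something like the sketch below. It uses the openly available GPT-2 model via Hugging Face's transformers library, chosen purely for illustration (it is not one of our health care models), and greedily appends the single most likely next token five times.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The doctor examined the", return_tensors="pt").input_ids
for _ in range(5):                        # add five tokens, one at a time
    with torch.no_grad():
        logits = model(ids).logits        # scores for every possible next token
    next_id = logits[0, -1].argmax()      # pick the single most likely one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```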
LLMs in health care
LLMs are often seen as a "silver bullet" solution to health care problems. In a world of endless data and infinite computing power, that might be true, but not in reality. High-quality and useful LLMs rely on high-quality data… and lots of it.
Health care data comes in two forms: structured and unstructured. Structured data has a specific format and is highly organized. This includes data like patient demographics, lab results, and vital signs. Unstructured data is typically text-based, such as written clinician notes or discharge summaries.
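To illustrate the difference, here are two made-up fragments describing the same (entirely fictional) patient encounter, one structured and one unstructured:

```python
# Structured: fixed, named fields that traditional software can query directly.
structured_record = {
    "patient_id": "A12345",      # all values here are fictional
    "age": 62,
    "heart_rate_bpm": 78,
    "blood_pressure": "130/85",
    "hba1c_percent": 6.9,
}

# Unstructured: free text, the form most clinical information actually takes.
unstructured_note = (
    "62yo presenting with intermittent chest tightness on exertion. "
    "BP 130/85, HR 78. History of type 2 diabetes, well controlled. "
    "Plan: stress ECG; review in two weeks."
)
```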
Most health care data is unstructured (written notes and the like). This leads some people to think we don't need structured data to solve health care problems, because LLMs could do the work for us.
But according to Derek Ireland, a senior software engineer at AEHRC, this isn't entirely true.
"Maybe with infinite computing power we could, but we don't have that," David says.
Fit for health
While LLMs aren't a cure-all solution for health care, they can be helpful.
We've developed four LLM-based chatbots for a range of health care settings. These are continuously being improved to best support patients and work alongside clinicians to ease high workloads. For instance, Dolores the pain chatbot provides patient education and takes clinical notes to help prepare clinicians for in-depth consultations with patients.
We're also studying how people use publicly available LLMs for health information. We want to understand what happens when people use them to ask health questions, much like when we Google our symptoms.
It's important to remember LLMs are only one type of AI. Sometimes their application is appropriate and sometimes a different technology might do a better job.
We're also developing other types of AI tools, like VariantSpark and BitEpi to understand genetic diseases, and programs to analyze and even generate synthetic medical images.
Safety first
Using LLMs and AI safely and ethically in health care is crucial. As with any tool in health care, regulations are in place to make sure AI tools are safe and used ethically.
Our health care system is very complex and the same tools won't work everywhere. We work closely with researchers, clinicians, caregivers, technicians, health services and patients to ensure technologies are fit for purpose.
We all have a role, including AI
LLMs might not be a miracle cure for all our health care problems. But they can help support patients and clinicians, make processes more efficient and ease the load on our health care system.
We're working towards a future where AI not only improves health care but is also widely understood and trusted by everyone.
Provided by CSIRO