by Amy Dusto, Johns Hopkins University Bloomberg School of Public Health
An inconspicuous box sits beside the Wi-Fi router, silently humming its own much-lower-energy radio waves through the house. The patient—who has a family history of Parkinson's disease—makes dinner, watches TV, and falls asleep. Nothing amiss.
The radio waves bounce back. Far away in the cloud, an AI-powered software program sifts through the subtle physical details that characterize such an ordinary day: respiration rate, blood flow, eye twitching and—there it is. After building slowly over the past month, a slight change in gait has tipped into statistically significant territory. Well before the patient would feel symptoms or a doctor could notice them, the onset of the disease is documented.
Doctor and patient wake to a notification. With the early alert giving them valuable time, the pair meets to form a plan to curb the disease's progression.
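The statistics behind that kind of alert can be sketched in a few lines. What follows is a minimal, hypothetical illustration, assuming a daily gait-speed measurement and made-up window sizes and thresholds; it is not a description of any real monitoring system.

```python
# Minimal sketch of drift detection in a daily gait metric: compare a
# recent window against a longer personal baseline and flag when the
# shift becomes statistically significant. The metric, window sizes,
# and threshold are all illustrative assumptions.
import numpy as np

def gait_drift_detected(daily_gait_speed, baseline_days=90,
                        recent_days=30, z_threshold=3.0):
    """Flag a statistically significant drift in the recent mean."""
    speeds = np.asarray(daily_gait_speed, dtype=float)
    if len(speeds) < baseline_days + recent_days:
        return False  # not enough history to establish a baseline
    baseline = speeds[-(baseline_days + recent_days):-recent_days]
    recent = speeds[-recent_days:]
    # z-score of the recent mean against the personal baseline
    standard_error = baseline.std(ddof=1) / np.sqrt(recent_days)
    z = (recent.mean() - baseline.mean()) / standard_error
    return abs(z) > z_threshold

# A stable baseline followed by a month-long slow decline trips the alert.
rng = np.random.default_rng(0)
stable = rng.normal(1.2, 0.05, 90)                        # meters/second
declining = rng.normal(1.2, 0.05, 30) - np.linspace(0, 0.15, 30)
print(gait_drift_detected(np.concatenate([stable, declining])))  # drift detected
```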
Dina Katabi, Ph.D., director of the MIT Center for Wireless Networks and Mobile Computing and leader of the NETMIT Research Group, builds these radio-wave-sensing and AI-modeling devices for patients at high risk of certain neurological and autoimmune diseases. In addition to enabling early diagnosis of conditions that often present only at advanced stages, they are useful for generating quick feedback on patients' responses to medication and treatment.
If large numbers of people were to opt in to passive health monitoring and allow devices like Katabi's to share their data, doctors and public health officials could see population-level trends like disease outbreaks before individuals begin to register the signs. For example, a significant increase in breathing rate disturbances across a population could be an early sign of a COVID flare-up.
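One way such a signal might be surfaced is with a standard change detector, like the one-sided CUSUM sketched below. The aggregation scheme, baseline length, and parameter values are all illustrative assumptions, not epidemiological practice.

```python
# Hedged sketch of a population-level early-warning signal: take the
# daily mean respiration rate across opted-in households and run a
# one-sided CUSUM that accumulates sustained upward deviations from a
# historical baseline. All parameter values are illustrative assumptions.
import numpy as np

def first_alarm_day(daily_mean_rate, baseline_days=60, k=0.5, h=5.0):
    """Return the first day the CUSUM statistic crosses h, else None."""
    rates = np.asarray(daily_mean_rate, dtype=float)
    baseline = rates[:baseline_days]
    z = (rates - baseline.mean()) / baseline.std(ddof=1)
    s = 0.0
    for day in range(baseline_days, len(rates)):
        s = max(0.0, s + z[day] - k)  # only sustained increases accumulate
        if s > h:
            return day
    return None

# Sixty ordinary days, then a small but persistent population-wide rise.
rng = np.random.default_rng(1)
normal_days = rng.normal(16.0, 0.3, 60)       # breaths per minute
outbreak = rng.normal(16.5, 0.3, 20)
print(first_alarm_day(np.concatenate([normal_days, outbreak])))
```

The appeal of a detector like this is that it trades off speed against false alarms explicitly: a lower threshold flags outbreaks sooner but cries wolf more often.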
Yet it doesn't take a "Terminator" fan to imagine worst-case scenarios. Already, we've seen health insurers misuse algorithms to deny care to eligible clients, and we've seen cutting-edge health screening technologies, such as breast cancer MRI screening, a more sensitive and accurate alternative to mammograms, generate false positives. Could a rogue AI system start spitting out harmful medical analyses or somehow twist our personal data against us?
Not if we manage the machines as we should—and can—Katabi says. In her opinion, most of the problems with AI come from humans who do not sufficiently understand how to test it and determine its limitations. "Every machine operates properly under certain conditions and then creates bad, unreliable answers under other conditions," she says. "AI is nothing special in that sense. If you take a freezer and keep the door open, everything is going to melt."
Whatever AI's use, humans create the algorithms, supply the data they access, and, critically, determine what to do with the information they churn out. Realizing AI's potential hinges on our ability to thoughtfully manage it and take ethical advantage of the opportunities it enables.
Consider the potential of AI for a single patient who may have tens of thousands of individual data points in her electronic medical record: every measurement, observation, medication, and treatment from every medical professional she's encountered over her lifetime.
When she presents at the clinic with symptoms, her doctor must select from among hundreds of thousands of possible diagnoses and treatments, as well as consider the outcomes of similar patients in the past. This is all before even factoring in how nonmedical determinants like food or housing insecurity—relevant details that might be stored in other social services databases—affect the patient's health risks.
Finding the optimal diagnosis and treatment plan within this ocean of digital data is "not something humans can do on their own," says Jonathan Weiner, DrPH '81, MS, a professor in Health Policy and Management and founder of the Center for Population Health IT.
AI can't fly solo, either. While algorithms and the servers that power them bring brute computing force that can filter the ocean for patterns, they do not have creative and critical minds or decision-making power. Most of the job comes before a programmer tells the AI algorithm to "run," including collecting and selecting what data to use, processing it, writing code that can understand it, and validating the results. Moreover, AI isn't even always the best option, he points out. In several data validation studies, other less complex, easier-to-understand statistical techniques have proven to be as good as, if not better than, AI analyses.
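That point is easy to illustrate. In the sketch below, an interpretable logistic regression goes head to head with a more complex model on the same prediction task; the synthetic dataset and the particular model pairing are stand-ins chosen for illustration, not the validation studies Weiner refers to.

```python
# Minimal sketch of the validation point: a simple, interpretable model
# can match a more complex one on the same task. The synthetic dataset
# stands in for real clinical data (an illustrative assumption).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)

models = [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("gradient boosting", GradientBoostingClassifier(random_state=0)),
]
for name, model in models:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```

Depending on the data, the simpler model often lands within a few hundredths of AUC of the complex one, and when it does, the model a clinician can actually explain is usually the better tool.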
"AI is the icing on the cake, but the cake—which embodies the health information infrastructure—is much harder to create," Weiner says. The idea that humans could build a program, turn it on, and sit back while perfect predictive analyses print out like the work of an army of oracles couldn't be farther from the truth.
Rather, Weiner likens the AI-human partnership to a right-brain/left-brain synergy: The whole point of the machines is to supercharge the analytical ability (left brain) so that the rest of human intelligence (right brain) can operate to maximum effect. "So that's where the AI can come in—but be careful: The AI will just look for relationships and they may not make sense," he says. "It has to be the human and the computer together."
Still, even with expert oversight, people are likely to misinterpret or politicize AI's output, as happened around the COVID vaccine, explains Scott Zeger, Ph.D., MS, John C. Malone Professor of Biostatistics. From the high-powered computing that enabled the mRNA vaccine's record-speed development to the sophisticated modeling used to determine how to distribute it, AI was involved every step of the way. "Then the worst of both people and machines came to the fore with enormous amounts of disinformation that kept people from taking vaccines and cost many lives," he says.
Machines, Zeger says, will never be able to disentangle the intricacies of diverse human motives and biases that color all our interactions with information.
AI is buzzy, but it's only the latest example of humanity's increasing computational prowess. Similarly, today's disinformation battles are only the latest manifestation of a war for responsible use that began with the dawn of technology, Zeger explains. That battle belongs to us all, not just the scientists and engineers. "We have to insist upon finding objective truth by the well-tested process of experimentation, careful analysis, human judgment, and wisdom enabled by computing power."
Provided by Johns Hopkins University Bloomberg School of Public Health