This is a portion of my talk on technology and palliative care. I presented along with two great physicians, Drs. John Boll and Alex Nesbitt.
Let’s first consider the development of both human expertise and technology. If we posit that the ability of medical personnel has improved over time, approaching some notion of perfection, we can also consider that technology, in a general sense, has improved as well. In many cases technology has helped doctors; in other cases, advances in understanding have. In the end, however, doctors never achieve perfection, and technology will never completely replace them, a point we will return to later.
When we consider medical care for resource-poor areas, we have three approaches: more healthcare providers, technology-assisted care, and autonomous care. I want to explore the notions of technology-assisted and autonomous care. We can get defensive about autonomous medical care, but this inquiry relates to places that don’t enjoy the medical infrastructure that we do.
For example, on a Mars mission, we may have to rely on a lot of technology and minimally trained personnel. In Boston, you trip over doctors. We naturally resist technology’s incursion into medical care because people recognize the empathy and sacrifice brought by real people. However, technology can, at the least, assist doctors in diagnosis and the development of a patient care plan.
One element of technology I would like to talk about is artificial intelligence, which mimics human knowledge and cognition. It can use rules of thumb and learning capabilities to make recommendations. It can also include sensing technologies that mimic human abilities from seeing to speaking.
It uses rules of thumb, called heuristics, along with confidence factors to come up with a conclusion. That is what these IF/THEN and CF statements represent. But machines can’t handle problems that haven’t been thought of; they can’t ask “what if”.
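To make the IF/THEN and CF idea concrete, here is a toy sketch of rule-based diagnosis with confidence factors, in the spirit of classic expert systems such as MYCIN. The rules, symptoms, and numbers are invented for illustration, not real medical guidance.

```python
# Each rule: IF all conditions are present THEN conclude a finding,
# with a confidence factor (CF) between 0 and 1. All values invented.
RULES = [
    ({"fever", "cough"}, "respiratory_infection", 0.6),
    ({"fever", "rash"}, "viral_illness", 0.5),
    ({"cough", "shortness_of_breath"}, "respiratory_infection", 0.4),
]

def combine_cf(cf_old, cf_new):
    """MYCIN-style combination of two positive confidence factors."""
    return cf_old + cf_new * (1 - cf_old)

def diagnose(symptoms):
    """Fire every rule whose IF-part matches, accumulating confidence."""
    conclusions = {}
    for conditions, finding, cf in RULES:
        if conditions <= symptoms:  # all conditions present
            prior = conclusions.get(finding, 0.0)
            conclusions[finding] = combine_cf(prior, cf)
    return conclusions

print(diagnose({"fever", "cough", "shortness_of_breath"}))
```

Note that two rules pointing at the same finding reinforce each other without the combined confidence ever exceeding 1 — but the system can only conclude things its rule authors anticipated.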
One appealing prospect for resource-poor areas is an artificial intelligence system that suggests a diagnosis and treatment through a dialogue with the healthcare provider or a family member; I will call this the “small black box”. A grander approach is a system that learns about the patient over time from many inputs, often using techniques such as deep learning. I will call this the “big black box”.
The small black box uses patient signs and symptoms and rule-based/heuristic analysis to generate a suggested patient treatment plan.
However, let’s consider what could occur beyond using rules of thumb for diagnosis. Neural networks allow software to learn by identifying patterns in a mass of data. They have been used in the financial world for a long time, but how far can they extend into medicine? A recent article in Nature illustrates the potential for these deep learning machines to go beyond heuristics.
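The contrast with hand-written rules can be sketched in a few lines: instead of an expert stating IF/THEN conditions, a network adjusts numeric weights until it separates the examples on its own. Below is a deliberately tiny illustration, a single artificial neuron trained by gradient descent on two made-up features; real systems like the one in the Nature study use millions of weights and images, but the learning principle is the same.

```python
import math

def sigmoid(z):
    """Squash a score into a probability between 0 and 1."""
    return 1 / (1 + math.exp(-z))

# Made-up training data: (feature1, feature2) -> label (1 = suspicious)
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.8, 0.9), 1), ((0.9, 0.7), 1)]

w1, w2, b = 0.0, 0.0, 0.0   # the neuron starts knowing nothing
lr = 1.0                     # learning rate
for _ in range(2000):        # repeated exposure refines the weights
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        err = p - y          # gradient of the cross-entropy loss
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

# The pattern was never written down as a rule; it now lives in the weights.
low = sigmoid(w1 * 0.15 + w2 * 0.15 + b)    # a clearly benign-looking case
high = sigmoid(w1 * 0.85 + w2 * 0.85 + b)   # a clearly suspicious case
print(low < 0.5 < high)
```

The key point for the talk: no one told the program what “suspicious” looks like; it inferred that from labeled examples, which is why it can pick up regularities its designers never articulated.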
The researchers considered melanoma diagnosis. This diagnosis is typically guided by rules described by the mnemonic ABCD: the asymmetry, border, color, and diameter of the lesion.
Researchers went beyond rule-based diagnosis by using 14,000 images previously diagnosed by dermatologists. Could an AI system categorize the images as benign lesions, cancerous lesions, and non-cancerous growths? The AI system was accurate 72% of the time.
This work was followed up with a test set of 2,000 biopsy-proven images, which were fed into the neural network. The computer’s findings were then compared with dermatologists’ conclusions.
This graph shows the performance of the neural network system versus dermatologists. You can see the wide range of assessments by the dermatologists but generally the AI algorithm was superior.
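Comparisons like this are commonly summarized by sensitivity (how many cancers are caught) and specificity (how many benign lesions are correctly cleared), which is what such graphs typically plot. A minimal sketch of the calculation, using invented labels rather than the study’s actual data:

```python
def sensitivity_specificity(truth, predicted):
    """truth/predicted: lists of 1 (malignant) and 0 (benign)."""
    tp = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Eight invented cases: four truly malignant, four truly benign.
truth = [1, 1, 1, 1, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 1, 0]  # one missed cancer, one false alarm
sens, spec = sensitivity_specificity(truth, predicted)
print(sens, spec)  # prints: 0.75 0.75
```

Each dermatologist (and each tuning of the algorithm) lands at a different trade-off between these two numbers, which is why the assessments appear as a scattered range on the graph.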
However, I recognize this might be considered a simple visual evaluation, far different from the variegated nature of pain and palliative care.
I would like to introduce an autonomous diagnosis and treatment system that is a futuristic concept but offers potential in resource-poor areas. This system would combine the small black box’s heuristics and patient inquiries with non-invasive measurements. In this big black box, the system receives the patient’s signs and symptoms every year and learns about the individual patient. Through this process, the black box learns the peculiarities of the patient and can recommend additional procedures or treatment plans.
Noninvasive evaluations would obtain data from head to toe, perhaps including brain imaging along the way. What might this look like? Perhaps it is like a space suit or a gelatinous bath. Maybe you would walk around like Darth Vader for an hour, I don’t know.
However, there is something missing. Namely the patient’s distinctive community and beliefs. Hard data can only go so far. Cultural norms and spiritual needs must be part of the mix. Designers use ethnography to gain this kind of nuanced insight. Ethnography usually relies on observations and interviews to develop a textured understanding of people. This information can also be part of the black box. In this way, the black box can understand the presentation of pain, the importance of dignity, and the elements of faith traditions that might not be otherwise considered in the ‘hard science’ aspect of artificial intelligence.
Ethnographic data is usually obtained by observation and surveys. We are all ethnographers of sorts; we quickly learn to read people and develop insights. For example, this can be summarized by Dr. Nesbitt’s ongoing question for his patients: “What do I need to know about you so that I might treat you better?”
We all carry our cultural values and past experiences with us. This can make it difficult to see things through someone else’s eyes. For example, consider the arrow hidden in the FedEx logo.
Using AI for patient care does present problems. It can effectively program in prejudice: ethnographic data that draws sweeping conclusions, such as “a certain population collectively cares for the elderly,” doesn’t always hold. You could well have a case where an elderly person is ostracized for a variety of reasons that ethnographic approaches might not capture.
In addition, data from individuals becomes part of some sort of database, which presents a myriad of privacy issues.
Finally, there is the impact of a machine replacing a human in any way. It is an affront to our pride, our identity, perhaps even our dignity.
Therefore we believe that the output of AI, even a comprehensive “big black box” system, needs to be moderated by a loving caregiver: someone who has a relationship with the patient and can deliver automated treatment plans through an affectionate, caring mind.
This is a democratization of medical care, where minimally trained but loving caregivers have the tools to execute a palliative care plan.
Fundamentally, we believe that in resource-poor areas, the best person to supervise medical care is someone who has a caring relationship with the patient. Technology can act as an adjunct to allow them to do a better job.
People have the distinctive ability to empathize with a patient. We also recognize and appreciate the personal sacrifice given by a caregiver. Additionally, people have the wonderful ability to develop creative approaches that machines will never attain.