Working in areas like critical care, oncology (cancer) and hospice for over 45 years, I know that it is often hard to predict how long someone may live or when that person may die.
I have seen very ill or injured people with an optimistic prognosis unfortunately die, and I have seen people expected to die very soon who recovered and went on to live for years. Back then, we called on pain specialists and other specialists, social workers, ministers, etc. for all our patients when needed. Some of our patients went into hospice.
In recent years, a new specialty called palliative care was developed to improve the quality of life of patients with a serious or life-threatening disease. Its goal is to prevent or treat, as early as possible, the symptoms and side effects of the disease and its treatment, along with any related psychological, social, and spiritual problems.
So I was very interested to read a July 1, 2020 article in StatNews titled “An experiment in end-of-life care: Tapping AI’s cold calculus to nudge the most human of conversations” about using cutting-edge artificial intelligence (AI) models in palliative care that scan patients’ hospital medical records and generate emails alerting doctors to the patients considered most likely to die within a year.
In the case of one doctor who received such an email, she “was a bit surprised that the email had flagged” her patient, who was in his 40s and seriously ill with a viral respiratory infection and too sick to leave the hospital. She thought, “Why him?” And should she heed the suggestion to have that talk?
As the article states, those kinds of questions are increasingly cropping up among health care professionals at the handful of hospitals and clinics around the country using such AI models in palliative care:
The tools spit out cold actuarial calculations to spur clinicians to ask seriously ill patients some of the most intimate and deeply human questions: What are your most important goals if you get sicker? What abilities are so central to your life that you can’t imagine living without them? And if your health declines, how much are you willing to go through in exchange for the possibility of more time?” (Emphasis added)
Some clinicians and researchers defend this AI by saying that doctors are “stretched too thin and lacked the training to prioritize talking with seriously ill patients about end-of-life care”.
Not surprisingly, the leaders of this palliative care AI discourage doctors from mentioning to patients that they were identified by an AI system because, as one doctor put it, “To say a computer or a math equation has predicted that you could pass away within a year would be very, very devastating and would be really tough for patients to hear.”
Shockingly, while this AI is built around patients’ electronic health records, this article admits that some AI models also “sample from socioeconomic data and information from insurance claims.” (Emphasis added)
CAN AI RELIABLY PREDICT DEATH?
As the article admits, AI predictions of death “are often spotty when it comes to identifying the patients who actually end up dying” and that there has not been “a gold-standard study design that would compare outcomes when some clinics or patients are randomly assigned to use the AI tool, and others are randomly assigned to the usual strategies for encouraging conversations about end-of-life care.” (Emphasis added)
Nevertheless, using AI death predictions for earlier palliative care interventions is now also being tried for conditions like dementia. And last year in Great Britain, AI was touted as “better than doctors” in analyzing heart tests to determine which patients would die within a year.
ARE THERE OTHER AGENDAS?
The idea of basing medical decisions on a computer program to predict death is disturbing enough but there may be other agendas involved.
For example, in a May 2020 article in the journal Cancer titled “Leveraging Advances in Artificial Intelligence to Improve the Quality and Timing of Palliative Care”, the authors called palliative care “a discipline of increasing importance in the aging population of the industrialized nations.” (Emphasis added)
And according to a Hospice News article last year:
“Studies have found that palliative care saves health plans, health systems, and accountable care organizations close to $12,000 per person enrolled, as well as reducing hospital readmissions, emergency department visits, and hospice lengths of stay.”
Now Compassion and Choices (formerly the Hemlock Society) is not only fighting to legalize medically assisted suicide throughout the US; it has also been active in promoting the training and expansion of palliative care with federal funding. It now calls assisted suicide “one option in the palliative continuum” and holds that knowing assisted suicide “is an option is in itself palliative care.” (Compassion and Choices also maintains that VSED, voluntary stopping of eating and drinking, is already an ethical and legal means of ending life in the US.)
Even worse, a large and growing number of medical organizations, including the American Academy of Hospice and Palliative Medicine (AAHPM), have endorsed or taken a neutral position on the issue of physician-assisted suicide.
An artificial intelligence program predicting death cannot replace the importance of an ethical healthcare provider who knows and truly respects the lives of his or her patients.
Good palliative care can be wonderful but, as I have written before, palliative care can go horribly wrong when misused.
We need to know the difference before we are able to trust that our own healthcare providers will give all of us the care we need and deserve, especially at the end of our lives.