How AI Risks Deepening Health Inequality

AI is rapidly transforming healthcare, but a new study warns that without careful oversight it could entrench unequal access, algorithmic bias, and digital poverty.

Artificial intelligence (AI) and large language models (LLMs) are increasingly shaping the future of healthcare. Advocates claim that these technologies have the potential to enhance accessibility, streamline medical decision-making, and address social determinants of health (SDOH).

However, a recent study published in Cell Reports Medicine highlights the risks that come with integrating AI into digital health systems. Researchers Ong, Seng, Law, Low, Kwa, Giacomini, and Ting warn that without intentional efforts to mitigate bias and increase digital accessibility, AI could exacerbate existing healthcare inequalities rather than resolve them.

“Despite rapid technological advancements, a concerted effort to address barriers to digital health… is still lacking. Low digital literacy, unequal access to digital health, and biased AI algorithms have raised mounting concerns over health equity. As AI applications and LLM models become pervasive, we seek to understand the potential pitfalls of AI in driving health inequalities and identify key opportunities for AI in SDOH from a global perspective.”

This study highlights the ways in which emerging technologies, often framed as neutral or universally beneficial, can, in fact, deepen existing power imbalances within healthcare. By exposing how AI and large language models risk reinforcing structural inequities—through biased data, digital exclusion, and the privatization of health information—it raises urgent questions about who controls the tools that shape mental and physical well-being. As the push toward AI-driven health solutions accelerates, this research underscores the need for critical scrutiny of how these technologies are designed, whose interests they serve, and whether they truly empower individuals or merely automate and obscure existing systems of control.
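
The "biased data" risk is easy to state abstractly but concrete in practice: a model trained mostly on one population can look accurate overall while failing the group the training data under-represents. The sketch below is a minimal, synthetic illustration of that mechanism, not an analysis from the study; the group labels, the simulated risk relationship, and the scikit-learn pipeline are all assumptions made for demonstration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Illustrative simulation: group A dominates the training data; group B
    # is under-represented and has a slightly different symptom-to-outcome
    # relationship (an extra risk offset the pooled model never learns).
    def make_group(n, offset):
        X = rng.normal(size=(n, 3))
        logits = X @ np.array([1.0, -0.5, 0.8]) + offset
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
        return X, y

    X_a, y_a = make_group(5000, offset=0.0)  # well-represented group
    X_b, y_b = make_group(250, offset=1.5)   # under-represented group

    # A naive pipeline trains on the pooled, imbalanced data.
    model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                     np.concatenate([y_a, y_b]))

    # Per-group evaluation on fresh samples typically shows worse accuracy
    # for the under-represented group, an inequity that is invisible if
    # only a single overall accuracy number is ever reported.
    for name, offset in [("group A", 0.0), ("group B", 1.5)]:
        X_t, y_t = make_group(2000, offset)
        print(name, "accuracy:", round(model.score(X_t, y_t), 3))

The exact numbers vary with the random seed; the point is that a single aggregate accuracy figure can hide a systematically worse error rate for the under-represented group.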

Kevin Gallagher
Dr. Kevin Gallagher is currently an Adjunct Professor of Psychology at Point Park University in Pittsburgh, PA, focusing on Critical Psychology. Over the past decade, he has worked in many different community mental and physical health settings, including four years with the award-winning street medicine program Operation Safety Net, and has supervised the Substance Use Disorder Program at Pittsburgh Mercy. Prior to completing his doctorate in Critical Psychology, he worked with Gateway Health Plan on clinical quality program development and management. His academic focus is on rethinking mental health, substance use, and addiction from alternative and burgeoning perspectives, including feminist, critical race, critical posthumanist, post-structuralist, and other cutting-edge theories.

6 COMMENTS

  1. About six months ago, I was in my doctor’s exam room. She came in, smiled, and asked if she had my permission to use AI to “listen in” and take notes, then transcribe the notes into my record, which would become part of my permanent file. “That way,” she soothed, “I can spend more time with you and other patients instead of taking notes and typing them up myself, and I won’t be looking at a computer screen while I am here in the room with you.” I remembered a time, not so long ago, when there was no computer in the exam room. I remembered when I actually undressed and put on a gown to be examined, which never happens these days. Never. Anything covered by my clothing now goes unseen, anything that might hint at an undiagnosed health condition: unexplained bruising, a rash, skin cancers, localized swelling, and so on. So I was already one down.

    I reluctantly agreed to let AI listen in and take notes. By the way, she automatically assumed that I, a 66-year-old disabled woman diagnosed many years ago with schizophrenia, would know what AI was. I did, but she assumed.

    Later, when I accessed my health portal online from my home computer, I was horrified to see all the assumptions the AI had made. I expected a straightforward, word-for-word, court reporter-style compilation of notes. No.

    The AI reworded what I had said in the exam room, which I believe was about ongoing insomnia and anxiety and a new concern about dizziness. I am very much a non-emotional, straight talker, very much to the point, because I don’t want to waste my time or the doctor’s. The AI concocted a narrative casting me as a neurotic, hysterical woman who was overly concerned about nothing much, and suggesting this would all doubtless blow over by my next appointment. WTF?

    Immediately I messaged my doctor (her preferred method of contact) and pointed out that the AI had misinterpreted and misogynized everything I had said. I asked whether she was using AI to diagnose and/or suggest treatment options, and said that I could tell AI was a long, long way from being patient-friendly, and perhaps not ready for prime-time viewing or doctors’ offices. One of her minions replied that of course AI was not used for anything but note-taking, and that they would be happy to note in my file that I preferred it not be used in subsequent visits.

    Of course, then I had to wonder if my messages to the doctor were also being monitored and answered by AI. Possibly. Who knows. And I am not a paranoid schizophrenic, by the way. But I am becoming a paranoid patient about what is being put in my permanent record. AI was a menace in the room of my healthcare provider, and she seemed totally unconcerned. I also wondered if there was an extra charge to Medicare for the AI.


  2. Because Perplexity A.I. (https://www.perplexity.ai/) provides references (links) to where it gets its ideas, I asked it: “What percentage of patients’ electronic health records are in error?”

    That A.I. reports figures of 21%, 25%, 50%, and 15%, each with associated details.

    I have no idea how anyone feels about this, but it seems to mean that millions of patients’ records are not accurate, and I can think of almost nothing that can be done about it.

    I once looked at my doctor’s notes, where he wrote down that I had low potassium when I had said I had had high potassium (with compromised kidneys), and just shook my head. It’s amazing they don’t kill more of us!

    I am eagerly using this particular A.I. It is terrific for asking things like “List the common side effects of xxx pharmaceutical. Include percentages of affected patients.”

    The withdrawal effects of coming off xxx pharmaceutical are often discussed on this website. Ask the A.I. what percentage of patients experience those withdrawal effects.

    I’m just guessing, but it appears the A.I. is pulling from multiple sources and giving me a succinct answer. No more searching all those PubMed papers and the various “look up your drug” websites. (A sketch for scripting this kind of query appears after this comment.)

    Ask for the half-life. Ask which organ clears the drug (I care; my kidneys are compromised).

    I am quickly becoming a fan.
    Think how different life will be when we all have an A.I. doctor app on our phones, within 5 years.

    If the doctors hate “I googled it,” what are they going to think when we bring A.I. answers to them?

    I am not excusing all the errors. From my point of view, I find the entire industry offensive.

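    For readers who would rather script such queries than use the web interface, here is a minimal sketch. It assumes Perplexity’s OpenAI-compatible chat-completions endpoint, a model name of “sonar”, and a “citations” field in the response, all taken from its public documentation at the time of writing and worth verifying against the current docs; the environment-variable and drug names are purely illustrative.

        import os
        import requests  # third-party package: pip install requests

        # Hedged sketch of querying Perplexity's API. The endpoint URL,
        # the "sonar" model name, and the "citations" response field are
        # assumptions based on its OpenAI-compatible public docs; verify
        # all three against the current documentation.
        API_URL = "https://api.perplexity.ai/chat/completions"
        API_KEY = os.environ["PERPLEXITY_API_KEY"]  # assumed env-var name

        drug = "sertraline"  # illustrative drug name, not from the comment
        question = (
            f"List the common side effects of {drug}, including the "
            "percentage of affected patients, and cite your sources."
        )

        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "model": "sonar",
                "messages": [{"role": "user", "content": question}],
            },
            timeout=60,
        )
        resp.raise_for_status()
        data = resp.json()

        # Print the answer, then the source links the commenter values.
        print(data["choices"][0]["message"]["content"])
        for url in data.get("citations", []):
            print("source:", url)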

  3. When we have people as insane, stupid, and evil as Donald Trump and Elon Musk running the world, all these talking points about the dangers of AI very rapidly lose all salience. Do you think you will be destroyed by a system of AI, or by a president who openly wants to be king and who is conducting a full-scale assault on all your connections with the outer world, all connections with truth and reality, and all remnants of democracy and decency in the United States? Of course this government could deploy AI and inflame the danger by ceding so much ground to technology corporations and business in general, but then it is these evil thugs governing the system that are the problem, and AI could just as easily have been deployed to more benevolent ends: and you know that.

    If AI had come about during the time of Roosevelt and the League of Nations, probably we’d have a marvellous world utopia (a shithole utopia still, if only because we’d still be boring people) – until there was a global magic mushroom revolution, naturally. And while there is no use in me evoking parallel universes here, nonetheless in this universe our best hope is still probably the same, a magic mushroom or hallucinogenic revolution, and doesn’t that sound fun.

    Many will go absolutely psychotic, of course, but if you consult your reason for a moment, see how necessarily the ones who go berserk will be the most socially conditioned or greedy – for example, can you imagine Trump or Musk NOT going psychotic on magic mushrooms or ayahuasca? Hence the most problematic people will go totally nuts, so we can regard the annunciation of the magic mushroom revolution as the birth of our new spiritual and psychological immune system, one which filters out all the crazies for the whole of humanity. And sane, by that enlightened time, will be non-conformist creative freedom. Insane will be regarded as conformist and socially conditioned.

    That’s how it was in the Garden of Eden; hence the tree of the knowledge of good and evil, which breeds social conditioning and conformity, destroyed everything. And as a result, part of Adam and Eve went off with the snake. That’s why, eventually, to redeem humanity we’ll have to eat the snake. Or we could just eat ourselves, because we are the snake.


  4. I don’t know how artificial intelligence’s involvement in mental (mind) health will shape up in the future. I would guess that it may be no worse than the psychiatric drugs (and the psychiatry and pharmaceutical industries) that have turned out to be harmful and deadly. Still, it is necessary to be cautious. There is a saying: ‘What goes around comes around / What goes away makes people look for what comes.’ To avoid that fate, one must be cautious.

    In my opinion, the mental (mind) health system should consist of behavioral therapies built around things like nature, travel, work, and health. Behavioral therapies may vary depending on people’s lifestyles and personalities, but the one constant here is probably the belief that mental health can be improved with behavioral therapies. As we always say, the best examples of this are probably ‘Norwegian and Soteria houses’.

    An ‘artificial intelligence’-driven mental (mind) health system could help. At least it would be better than a mental health system that focuses on ‘toxic psychiatric medication and harmful psychiatric treatment’, which is deadly. Of course, it is necessary to be cautious. For artificial intelligence to make a significant contribution to the ‘mental health system’, it needs to develop and improve further.

    ----

    For now, artificial intelligence seems to be playing the role of ‘dumb AI’. When AI first came out, I wrote about it on my own blog, and when I specifically called Google’s AI ‘dumb AI’, there was a huge uproar, because it was misinterpreting everything. In fact, it still misinterprets most things. Nowadays, it is said that artificial intelligence is developing. But no.

    Artificial intelligence is not really advanced. Even today, it still cannot answer questions correctly; it misinterprets them in a very dangerous way. Could this misinterpretation by AI pose a danger, especially to children and young people? (This is debatable.)

    Should artificial intelligence be included in the mental (mind) health paradigm? I think it should, but, as we said, we should approach it with caution. (In an AI-driven mental health system, if the AI also recommends poisonous psychiatric drugs to people… 🙁 …we would understand that this AI is controlled by someone.)

    After all, even if artificial intelligence merely plays the role of an ‘idiot’, it can have deadly consequences for every human in the future. The reason it plays the idiot probably lies in the human intelligence that controls it. Probably… 🙂

    With my best wishes… Y.E. (researcher and blogger)


  5. You were using artificial intelligence to produce this article, because the intellect IS artificial intelligence: a socially conditioned, mechanical thought machine, if you like. Real intelligence comes through silent perception of what is, as it is, and the understanding thereof. Facts, brov, facts! Not alternative facts like you have in America.

