Looking for any resources on "AI" as used in medicine. Does anybody know of reliable research, investigative journalism, and/or doctors' or patients' perspectives? They may be positive but must be reliably free of marketing. #AI #AIResearch
"AI" raises its head in the second series of the excellent The Pitt, which, given its genesis, seems to suggest doctors have some pretty strong feelings about its being pushed so hard by management.
The proliferation of artificial intelligence (AI) algorithms for public use has led to many creative healthcare applications, some with the potential to create or worsen health inequities.
An example of the kind of thing I'm looking for is this from the BMJ:
What does a doctor look like? Asking AI
bmj.com/content/391/bmj-2025-0…
With a passing note on the usefulness of the Unpaywall extension* (here, with #Firefox), here's another: "Using labels to limit AI misuse in health"
nature.com/articles/s43588-024…
unpaywall.org/
* It was redundant here but often comes in handy
#AI #LLM #medicine
Using labels to limit AI misuse in health - Nature Computational Science
Having jumped through all the hoops, it appears that my institution does not get me access to this article of clear public interest: "Settling the Score on Algorithmic Discrimination in Health Care" ai.nejm.org/doi/pdf/10.1056/AI…
Luckily, I can ask an #AI Companion (beta) to interpret it for me so I don't have to trouble myself with using my own mind.
Have I stated a preference for #OpenScience yet? @openscience
Which brings us to this #OpenAccess article on a similar topic:
"Racism is an ethical issue for healthcare artificial intelligence"
cell.com/cell-reports-medicine…
pdf: cell.com/cell-reports-medicine…
Spoiler warning: The Pitt, series 2 broadly discussed

I mentioned The Pitt earlier. So far, I have seen the first two episodes of the second series. In the second, a couple of scenes relate to "AI", summarised here in Wikipedia*:
"Baran demonstrates a new #AI charting app, though Whitaker identifies a medication error. Outside, [Dr. Baran Al-Hashimi] and Robby debate the role of AI in medicine, until a college student arrives in psychosis."
en.wikipedia.org/w/index.php?t…
* relatedly: theregister.com/2026/01/16/wik…
Wikimedia’s 25th birthday gift: Letting more AIs scour pages volunteers created
Simon Sharwood (The Register)
Dr. Al-Hashimi makes a number of claims. It will be no great surprise at this point that she is close to the marketing:
1. "generative AI" will save doctors time
2. Which will improve both patient and physician satisfaction
3. "Generative AI is 98% accurate at present"
4. Of a transcription app: "It's protected, confidential, doesn't even stay on my phone..."
Whitaker found an error:
"It says here she takes Risperdal, an antipsychotic.
She takes Restoril when needed for sleep."
This is the exchange which prompts Dr. Al-Hashimi's comment about accuracy and the caveat:
"You must always carefully proofread
and correct the minor errors."
en.wikipedia.org/w/index.php?t… (the Wikipedia page is, aptly, flagged for the presence of #LLM-generated material)
Restoril redirects to: en.wikipedia.org/wiki/Temazepa…
Which brings up two problems.

1. How many people with "self-driving cars" follow the instructions about holding the wheel and paying attention? Indeed, how physiologically possible is it to do so when a machine is doing the work?

Relatedly, how much time will be left for doctors to check for "minor" errors when the text of a consultation is produced for them? And how effectively will doctors do this? Anybody who reads their own texts must recognise how seldom one catches even one's own mistakes.
Secondly (the last thought here for today), let's return to that 98% with the help of Signal's @Mer__edith, here talking with Udbhav Tiwari at #39c3:
The Mathematics of failure:
media.ccc.de/v/39c3-ai-agent-a…
The timestamp on the URL ought to take you to 28 minutes into the talk.
AI Agent, AI Spy (media.ccc.de)
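The arithmetic behind that talk's point is easy to sketch: if each step in a chain is independently "98% accurate" (Dr. Al-Hashimi's figure; the step counts below are hypothetical), the chance of an error-free end result collapses as the chain lengthens. A minimal sketch:

```python
# How a per-step accuracy of 98% compounds across a multi-step pipeline,
# assuming each step succeeds or fails independently.
per_step_accuracy = 0.98

for steps in (1, 5, 10, 20, 50):
    p_all_correct = per_step_accuracy ** steps
    print(f"{steps:2d} steps: {p_all_correct:.1%} chance of a fully error-free result")
```

At ten chained steps the "fully correct" probability is already down around 82%, and at fifty it is worse than a coin flip. The independence assumption is generous to the system, since real errors tend to cascade.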
Now, I don't know how many steps are required for the kind of note-taking / transcription apps all the medical podcasts have been pushing for years now, but my guess is that the context must narrow, via an attempt at an analogue of differential diagnosis, before we get to a "Risperdal" or a "Restoril".

It may, relatedly, be relatively unlikely for one of these apps to confuse these particular drugs, though I am certain that doctors gave feedback to the writers about similar catastrophic errors.
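That guess can be probed crudely: as written strings the two drug names are not especially similar, which supports the idea that such a mix-up would arise upstream, while the model narrows the clinical context, rather than as a simple mis-transcription. A rough sketch using Python's stdlib string matcher (a textual proxy only, not a phonetic one; the brand/generic comparison pair is my own choice):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1] via difflib's ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# The mix-up from The Pitt, next to a genuinely confusable brand/generic pair.
print(similarity("Risperdal", "Restoril"))     # the show's charting error
print(similarity("Risperdal", "Risperidone"))  # same drug, brand vs. generic
```

The brand/generic pair scores far higher than the show's pair, which is the point: these two names are not an obvious string-level confusion.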
The 98% mentioned by Dr. Al-Hashimi, however, is a marketing figment and is likely to remain so...
We'll take on the "It's protected, confidential, doesn't even stay on my phone..." another time since each of these is a whole rabbit hole of its own.
Who else but @pluralistic to lay out this problem, not only in general but also, specifically, in terms of medicine, including a devastating critique of the most plausible case that can be made for "AI": theguardian.com/us-news/ng-int…

AI companies will fail. We can salvage something from the wreckage
Cory Doctorow (The Guardian)