Something a bit worrying to note about using AI in healthcare.
I’ve had two specialist appointments recently, both using AI to transcribe. Both sent report letters with inaccuracies about my diagnoses and past medical history. Even my GP was like, “huh, that directly contradicts what I put in the referrals.”
I have followed up on both and requested amendments (which were made), but if I hadn’t, these inaccuracies could have significantly damaged ongoing care, further treatment, or insurance claims.
Human error has always been a factor, but both doctors were clearly relying on the AI software and assuming what it spat out was correct. They made no other notes during the appointments to cross-reference and double-check. This is how Very Bad Things can happen.
-
@theron29 Genuine question or scepticism? I’m in Aotearoa NZ. The doctors were from two different specialist medical departments. Both used AI software to record the consultation and take notes. The report letters sent to my GP contained multiple discrepancies about the conditions discussed and referenced in the GP referrals. If they had checked before sending, they would have realised mistakes had been made. My GP questioned the content, which was how I became aware. I can provide several specific examples but would rather not on a public forum to a stranger. However, both letters were re-assessed and sent again with corrections on request. Hope this helps.
@bloodflowersburning Genuine question (from central EU). (AI scepticism is expected to come a bit later on...)
Doctors are not using AI here yet. I guess this tech had to be certified and tested before it was admitted into the doctors’ realm? Although this doesn’t seem like the worst use case for where and how to use AI, your detailed explanation raises doubts about whether the tech is actually ready for anything *this* important...

-
@theron29 Agreed. Not the worst-case scenario. For me personally it could have caused issues with further treatment, getting reimbursed by insurance, and confusion when needing ongoing care from other providers. So more an avoidable inconvenience and extra paperwork than a dangerous outcome in this example. I hope that’s the worst possibility across the board, and that people check their notes carefully to catch any inconsistencies.
Mistakes in medical notes have always happened; unfortunately, that’s inevitable. Only time will tell whether this becomes more of an issue if/when AI transcription is used in medical settings more frequently, and whether it generates a higher number of errors than human note-taking. What I think is essential is that we retain a human buffer to assess factual accuracy, rather than simply assuming (hoping?) the software can do it better.
For more info: the software Heidi AI Scribe has been endorsed for use within Health NZ. https://www.tewhatuora.govt.nz/health-services-and-programmes/digital-health/generative-ai-and-large-language-models#naiaeag-endorsed-tools
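To make that “human buffer” idea concrete, here’s a minimal sketch in Python of the kind of pre-send cross-check I mean. It’s purely illustrative: the condition terms, example texts, and function names are all invented here, and this is not how Heidi or any endorsed tool actually works. It just flags conditions that appear in an AI-drafted letter but not in the GP referral (and vice versa), so a human has to look at every mismatch before anything is sent:

```python
# Hypothetical pre-send sanity check for AI-drafted consult letters.
# Flags condition terms that appear in the draft but not in the GP
# referral (and vice versa), so a human reviews every mismatch.
# The term list and example texts below are invented for illustration.

CONDITION_TERMS = {
    "asthma", "diabetes", "hypertension", "migraine",
    "bacterial infection", "viral infection",
}

def mentioned_conditions(text: str) -> set[str]:
    """Return the known condition terms found in free text."""
    lowered = text.lower()
    return {term for term in CONDITION_TERMS if term in lowered}

def discrepancies(referral: str, draft_letter: str) -> dict[str, set[str]]:
    """Compare condition mentions between the referral and the AI draft."""
    in_referral = mentioned_conditions(referral)
    in_draft = mentioned_conditions(draft_letter)
    return {
        "in_draft_only": in_draft - in_referral,     # possible hallucinations
        "in_referral_only": in_referral - in_draft,  # possible omissions
    }

if __name__ == "__main__":
    referral = "Referred for ongoing migraine; history of asthma."
    draft = "Patient discussed migraine management and their diabetes."
    for kind, terms in discrepancies(referral, draft).items():
        for term in sorted(terms):
            print(f"REVIEW ({kind}): {term!r}")
```

Even something this crude would have flagged the contradictions my GP spotted. The point isn’t the specific check; it’s that a mismatch forces a human to read the letter before it goes out.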
-
@JD38 I think most students of all disciplines are now.
@bloodflowersburning Yeah, I was just too lazy to write “multiple disciplines”.

-
@nzJayZee quoted from this article on RNZ: “He said jobseekers were using AI to generate their applications, while employers were using AI to read them.”
The snake is eating its own tail.
-
@bloodflowersburning I like the idea of applicant pushback. For something like $40 NZD/month you can have all the "job application agents" via Claude. Totally agree about the snake eating its own tail. We should be building community resilience instead of data centers, IMO.
@nzJayZee careful now, “community” seems to be a dirty word in some circles. Don’t be that radical lefty reminding people to be kind and care for others.

-
@bloodflowersburning I'd never! The market knows best.
-
@bloodflowersburning When you help someone with their groceries/stairs/anything, or call an ambulance when someone's hurt, the most important thing shouldn't be "How am I compensated?" David Graeber called this (deliberately provocatively) "baseline communism". It's why two people working in a repair shop go "pass me the wrench", "OK", instead of entering into a wrench contract.
-
@bloodflowersburning (I know that's not how NZ works, and I feel sad about it)
-
@bloodflowersburning I also see this as a HIPAA violation
-
@MamaLake Unfortunately HIPAA doesn't apply under New Zealand law. But I think it's covered by the Health Information Privacy Code 2020 (HIPC), as Health NZ has authorised the use of specific tools (Heidi AI Scribe) in healthcare.
-
Please check out https://stopgenai.com
-
@kimcrawley Interesting initiative. Is there any section in particular you’d like me to focus on?
My plan going forward is to refuse the use of AI when recording medical consultations, to record my own notes (as a disability accessibility need), and to keep checking everything for inconsistencies/mistakes.
-
We have a mutual aid fund for people who lost their livelihoods, guides to avoiding Gen AI, upcoming support groups for chatbot addicts, all kinds of stuff.
Share our website. Join us. There are lots of things you can do.
Why just let Gen AI's horrors happen, when you can join forces with us and push back?
-
@bloodflowersburning Thanks for the warning. My last couple of appointments have used it too, and I assumed providers would be double-checking for errors, but maybe not. I'll be on the lookout.

-
@bloodflowersburning this has happened to me recently
-
@bloodflowersburning
Some of their lecturers are too.
The pharmacy school offers free medication reviews. The lecturer I saw used ChatGPT to summarise a paper. Isn't that what the abstract is for?
-
@bloodflowersburning Yikes! That's really bad.
It's a good reminder to always check the notes on record after every appointment.
I think our GP gave us the option to decline use of the AI scribe. That should be the standard for everyone and part of the normal consent process.
-
@bloodflowersburning God, this was inevitable. It can't even narrate a reel on IG without entirely misreading whole words for others. Even official international accounts.
This is terribly dangerous. I hope you email your local government representative about this (cc in the 'other' party representation in your area so they don't ignore it) and also file your concern with the medical ombudsman. Thank you for sharing this.
-
Two friends have told me in the last week that they’ve had similar issues. One had an incorrect diagnosis listed before they had a procedure done. The other’s notes said viral rather than bacterial infection (although they did at least get the medication they needed). I feel like I’m being a pain in the bum going over everything and requesting corrections, but I’m seeing so many mistakes, to the point where any human reading them would immediately say “that doesn’t even make sense”. I worry for those who don’t check these things, or aren’t capable of doing so. Sure, using AI might save the docs 10 minutes per patient in the ER, but is that really worth the risks?
@bloodflowersburning It's amazing NZ would authorise something worse than simple voice-to-text transcription for doctors' notes. But I'm old school; I still do searches and visit sites like Cleveland for medical guidance.