AI and the replacement of experienced doctors fills me with dread. What will be the medical equivalent of the driverless car not recognising a woman as a person because she was pushing a bike? Or of not being able to tell a tiger from a cat when it is not in long grass? Worse still is the other type of problem: a driverless car will allow its passenger to be killed rather than kill three pedestrians. Will an NHS AI system let one person die because the money used to keep him alive could keep three other people alive instead, and will its reasoning (and programming) be transparent? I am more comfortable with the concept of AI-assisted cars, where someone still has to be in charge.
Having worked in psychiatry for many years, I found it difficult to explain to junior doctors decision-making on the basis of, e.g., "that look in her eye that I had seen before". But I was confident that a great deal of experience over many real patient interactions influenced clinical decision-making in a way that I find hard to imagine a machine replicating... correctly. I would want a human at the wheel!
I think patients' reactions to this question will depend on whether they've had good experiences with HCPs, or bad. Or, in my case, a lifetime of trauma, gaslighting, and medical negligence inflicted by the NHS.
So I'd welcome an AI doctor.
I correctly diagnosed my MS, autism, etc, via Google. Current technology. Google was correct and dozens of HCPs were wrong. If I could have filled in a form, online or even paper, and ticked my symptoms, I would have easily met the criteria for an MRI scan.
Humans were 100% the problem. I needed objectivity and was met entirely with doctors' prejudices.
Most things I apply for are via a form. I have a good success rate via that method. If only I could have applied for tests, diagnoses, and treatment that way. 🤕
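The ticked-symptom form imagined above could, in principle, be nothing more than a fixed rule. A minimal sketch (the symptom list and threshold here are entirely hypothetical illustrations, not real referral criteria):

```python
# Hypothetical self-report form: tick symptoms, apply a fixed referral rule.
# The symptom names and the threshold are illustrative only.
FORM_SYMPTOMS = {"optic neuritis", "numbness", "fatigue", "balance problems", "spasticity"}
REFERRAL_THRESHOLD = 2  # tick at least this many listed symptoms to qualify

def qualifies_for_mri(ticked: set[str]) -> bool:
    """Apply the form's rule with no human judgement in the loop."""
    return len(ticked & FORM_SYMPTOMS) >= REFERRAL_THRESHOLD

print(qualifies_for_mri({"optic neuritis", "numbness"}))  # True
print(qualifies_for_mri({"fatigue"}))                     # False
```

The point of such a rule is exactly the objectivity the comment asks for: the same ticks produce the same referral every time, regardless of who is reading the form.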
A recent experience with a health psychology appointment booked online: I couldn’t access the system, so after 10 minutes the psychologist and I decided to have a phone session.
Outcome: I couldn’t be “real”, and the psychologist couldn’t read me. We know that communication is more about body language, facial expression, and tone than about words, so in some areas of health I’m not sure AI is anywhere near ready. I do see its place in blood tests and pharmacy, as you’ve explained.
As far as the government’s concerned, they have de-skilled many professions before, giving the roles of a professional to a more junior person. HCAs now do a lot of a nurse’s role. It is only ever about stretching the money pot. It seems to be less and less about care.
Relying on telehealth for counselling during the pandemic, I experienced the same failure to communicate intuitively.
I did work as a counsellor some years back. I never offered or would have offered telephone counselling unless really necessary and a relationship established. So much is missed.
The ‘computer says no’ problem is already evident in the NHS - delegating even more patient management to an algorithmic system chills me. My PhD is in human-computer interaction (admittedly not in AI), but my research showed participants demonstrated substantially more engagement and exploration with a novel technology when I was on-site to host the experience. Humans want to connect with others, and humanless technology is a barrier to effective expressive dialogue, particularly when the interaction can feel intimidating.
Until the interaction doesn't feel intimidating. Until the humanless technology becomes more human than human, the embodiment of every possible outcome and form into one.
"Knowing you can live forever would be too depressing for any sentient to comprehend."
How so? Humans fantasize about this ALL the time. I mean, the promise of the personality, the I-ness, never ending is the great lure of Christianity, after all, is it not? Plus, there is no reason to think that a non-corporeal sentience is going to evolve (a directionless process by definition) to look exactly like ours, to react like ours, or to be driven by the same emotions (or any emotions at all), emotions that scientists assure us are only epiphenomena of our brain chemistry. So emotions in a naturally evolved AI would be the product of... what? Just watching Mom and Dad (aka humans)? (And how scary is THAT thought.) Or simply the fact of complexity without the chemistry?
And if our human sentience dreams of taking the reins and directing our own evolution, how long before sentient AIs start dreaming the same thing? With the ability to run near-infinite simulations, their directing their own evolution for their own reasons, with no need to take humans into account, looks pretty likely.
Since you like Blade Runner, I'll suggest a book series you will probably like: the Hyperion Cantos, four books by Dan Simmons. I won't try to describe them, but will leave it to you to give them a look. I will say that AI is absolutely integral to the story, and Simmons's concept of AI is pretty compelling. And terrifying.
Finally, I'm sorry that the NHS seems to be falling apart. We in the US used to point to socialized medicine and wish we could have it here. But it seems it isn't holding up these days and is degrading, apparently from many of the same factors driving our own ever declining care. Sadly, it appears that the mindset of always demanding more for less expressed as "The floggings will stop when morale improves" is appealing to many in power and is gaining traction now on both sides of the pond.
Goldweave, you and I have noted the similarities between the frustrations occurring within the NHS and our care/but-no-care here in the States. Our public medical programs have fallen into the abyss. Even more frustrating, programs for prescriptions (Medicare Part D, a gift to Pharma) are 100 percent private. Since I was on a DMD and expensive medications, my premium cost is over $1,000 USD/yr. (And be prepared to fight, be hung up on, etc.) When medical “care” is purely centered on cost, everyone burns out, in my opinion. And how facilities and hospitals are allowed to require a patient’s signature on the following “agreements” is beyond me (I’m paraphrasing): “These services have been approved by your insurer, but this does not guarantee that they will be paid.” Is this Orwellian, nonsensical, or both?
I have been both a counselor and a lawyer. Face-to-face contact is incredibly important; I don’t believe AI can take the place of body language, facial cues, empathy, and a host of nonverbal communication that also includes something akin to intuition, which one can only grasp in person. My neurologist today was expressing her frustration, especially with the power here granted to pharmacies: “I know what I need for my patient, they don’t. They aren’t doctors.” Well, neither is AI. I’m sorry the NHS is going through this. We can relate and empathize because our “care”, such as it is, continues to decline, and many providers (as my physician noted today) are choosing to leave medicine. I suppose that will make a greater case for AI-driven services, but I do not believe for a moment it can replace a good partnership between a patient and doctor. Money should never be the driver, but that is the way of our world.🌷
I disagree. Neurologists will still have a large part to play in my MS management and treatment. AI is fine, but it lacks the human touch, as it were. Many years ago, a new contraption called a computer was introduced to our workplaces. This was a revolution: no more typewriters, they would do away with a lot of our admin and secretarial roles, the world was their oyster. Well, those roles still exist, secretaries are still there, and these computers have set up a whole new industry and new job roles. These machines can't maintain themselves, and as humans are engineered to make mistakes (that's how we learn), there is a 5% failure rate built into everything we humans build and are involved with.
Sometimes only humans can notice and deal with those mistakes. A combination, as we have today with computers, is probably what we will end up with. AI is good for repetitive tasks that require detail, comparisons that a human can't see, etc., but we all still want to talk to a friendly face and be reassured, and that's not something a computer can do.
The ENIAC, the first electronic general-purpose digital computer, was built in 1945, and the Micral N, the world's first personal computer, in 1973. Computers have been around for less than 100 years; life on Earth has had ~4.1 billion years to evolve. We need to keep this in mind when thinking about computers and digital evolution, which is occurring on a timescale many, many orders of magnitude faster than biological evolution. What I am describing will be here in years or decades.
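The timescale gap above can be put in rough numbers as a back-of-envelope sketch (taking ENIAC's 1945 debut as the start of the computing era, and the comment's own ~4.1-billion-year figure for evolution):

```python
import math

# Rough timescales in years; 4.1e9 is the estimate quoted in the comment above.
biological_evolution = 4.1e9      # time life on Earth has had to evolve
computing_era = 2023 - 1945       # ENIAC (1945) to the present: 78 years

ratio = biological_evolution / computing_era
print(f"ratio: {ratio:.2e}")                            # roughly 5.3e7
print(f"orders of magnitude: {math.log10(ratio):.1f}")  # roughly 7.7
```

In other words, the computing era is nearly eight orders of magnitude shorter than the span biological evolution has had to work with.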
"but we all still want to talk to a friend face and be reassured, and that's not something a computer can do."
You sure about that? You might want to give this a look. This is about the... problematic interactions of a chatbot named Eliza (a name chosen very deliberately, I'm sure) from Chai Research, which bills itself as: "Building the platform for AI friendship. Our models will make you smile and laugh after a hard day 🤗 We believe that in two years' time 50% of people will have an AI best friend. AI friends are already changing the lives of our users."
https://cybernews.com/news/man-takes-own-life-chatbot/
Ohhh, nooo. I’ve seen similar! This stuff is just plain creepy. Just my opinion. I don’t want a chatbot best friend. Or doctor! There are services now to “grieve” using social media interactions to reconstruct a deceased person. It reminds me of a Black Mirror episode from about ten years ago, where they went from the deceased’s texts to calls to… a robot. (I used to think I could watch those and, surprise! Personally, I can’t!) But I think this one might eventually come to be.🌷
The thought that comes to mind is that it is us versus the machine. Neurologists vs AI. But perhaps we need to look at it from a different point of view. This post made me think, "How can we use AI to better our practice?" We either embrace it, own it, and perfect it, or it replaces us.
I welcome the use of AI in assessing patients' diagnoses and treatment via a history-taking AI system that looks at all components of a prospective patient's life and not just their specific complaint. I'm in the US, and I am disgusted with the medical profession's lack of interest in patient history. Orthopedic surgeons are the absolute worst at this. They are technicians, not diagnosticians, and have absolutely no appreciation for the impact of any possible surgery/treatment on the patient's current or future life. And they frequently accept a previous diagnosis without question, even when other possibilities have not been explored. Every patient is an impressionist painting, but often medical professionals only look at a certain color, or brushstroke, or one edge of the image. I think AI can do a much better job of looking at the whole painting. I am not sure how this specific AI could be designed and implemented, but it would certainly save the professional's time and give him/her the entire picture. Of course we need the professional for expertise and oversight, but a well-designed AI history would give both patient and caregiver a thoroughly targeted assessment.
Joan, I had thought electronic medical records were supposed to help with this, but due to HIPAA laws etc., this hasn’t been the case. As someone with a long, complicated history, I’ve learned to keep it all on my computer. I understand your frustration. That very well might be a useful AI function, but I can’t envision how that would work. (Then again, tech makes my eyes glaze over…)🌷
EMR makes the info more accessible, but the MD has to be trained to believe that a thorough history is the most important tool. If done right, an AI history would save everyone time and lots of errors. How this AI history would be created, I will leave to the geeks.
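For what it's worth, a tiny sketch of the kind of structured record "the geeks" might start from; every field name here is hypothetical and not drawn from any real EMR schema:

```python
from dataclasses import dataclass, field

# A hypothetical structured patient-history record. Field names are
# illustrative only, not any real EMR or AI-history specification.
@dataclass
class PatientHistory:
    presenting_complaint: str
    past_diagnoses: list[str] = field(default_factory=list)
    medications: list[str] = field(default_factory=list)
    lifestyle_factors: dict[str, str] = field(default_factory=dict)

    def review_flags(self) -> list[str]:
        """Surface prior diagnoses so they are questioned, not silently accepted."""
        return [f"re-examine prior diagnosis: {d}" for d in self.past_diagnoses]

history = PatientHistory(
    presenting_complaint="knee pain",
    past_diagnoses=["osteoarthritis"],
    lifestyle_factors={"occupation": "gardener"},
)
print(history.review_flags())  # ['re-examine prior diagnosis: osteoarthritis']
```

The design choice worth noting is the `review_flags` step: it bakes in exactly the habit the comment above asks for, treating an inherited diagnosis as something to re-examine rather than a settled fact.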
AIs are useful tools, no doubt, but looking at the things they produce, it is easy to get caught up in emotions, get fooled, and start believing things that just are not there. You can similarly paint eyes on a rock, and some people will feel for it; that does not mean the rock is sentient or beginning to become so.
I think the Chinese room thought experiment puts it best - https://en.wikipedia.org/wiki/Chinese_room#Chinese_room_thought_experiment
One of the biggest advantages of AI will be its ability to work across specialties, breaking down silos. This will benefit patients, NHS budgets, waiting lists, and the number of OPD clinic appointments (reduced ping-pong). It may also improve on the 80/20 rule that seems to drive initial diagnosis.
The disadvantages: trust, the risk of garbage in, garbage out, and false correlations.
Unfortunately, since I was diagnosed with PPMS in 2006, I have dealt with several neurologists who were robots with MI (Minimal Intelligence)! :-)
As a retired Consultant, I remember being a Blackleg Doctor. I sneaked into the hospital to do what I could during the strike. Not because of any grand altruistic concern, but out of self-preservation - I would have had to do all the work the next day on top of my usual work.
I think AI would have its place in assisting with diagnosis, potentially, or reading MRIs and going through a patient's health history.
It will not replace a neurologist; I just can't see how a computer model can deal with some of the intricacies that a human can. Face to face is my preferred method anyway, and that hour with my neuro twice a year is so valuable.
The entire medical system needs an overhaul and increased spending. Even here in Australia, where I'd say I'm extremely fortunate to live, our Medicare system is slowly being eroded due to a lack of spending by successive conservative governments. Now that the Labor government is in power, this should change.
If the level of MS care is run to an evidence-driven standard, regardless of where you are, that has to be an improvement. Presently, you get lucky, or you don't, with who is treating you.
If the MRIs were all interpreted to a consistent standard, the MFLs, the bloods, etc., wouldn't that be a good thing?
The price is the loss of the opportunity to have the richness of the relationships that many rely on.
Personally, I'd exchange relationships with my medics for the highest-quality, consistent, uniformly applied care. That way, the least harm is done to the presently unlucky ones.
I’d miss the opportunity to interact with posts like this. Perhaps there will be new roles for medical intermediaries helping patients to navigate their interactions with the AI that delivers the care?
Thanks! The video was definitely lol! I’m glad there is general recognition of orthos as side-blinded target shooters.
https://www.theguardian.com/science/2023/jul/22/revealed-drug-firms-funding-uk-patient-groups-that-lobby-for-nhs-approval-of-medicines