I have started testing a new AI engine for healthcare professionals called Glass Health with MS-related case scenarios. This one covers male erectile dysfunction. What do you think?
The Bot has told me nothing I couldn’t have gleaned from a bit of Googling……obvious.
What it can’t do is pick up little tells, nor recall a particular conversation about a 'weird' thing that Dr A came across with a patient 20 years ago that may apply here…..
Use as an aid, by all means.
Humans are fallible, yes, but computers are not 'gods' (yet)
No need to be concerned Gavin, you are far from redundant! I personally would always prefer the human interaction of a 1:1 appointment with a specialist. A good specialist knows what to ask and has the knowledge and experience to help in ways AI just couldn't. The advice about the 36-year-old misses the psychological aspect of having erectile dysfunction (not necessarily as a cause). What type of dysfunction is it - primary or secondary? The most important step is to have good communication with his partner. AI seems to have missed this out. Jane ☺️
AI currently is perfect for research, brainstorming and first drafts. Sometimes the output can be used as is, but I wouldn't do so unless you are fairly on top of the subject matter.
Even GPT-4 still hallucinates too often (including sources, which is where it gets really bad) to be dependable.
Apr 13, 2023 · edited Apr 13, 2023 · Liked by Gavin Giovannoni
My primary healthcare doctor for most of my life (now retired) told me something I will never forget: "If you listen to the patient, they will tell you what is wrong." As useful as AI is, I hope it never replaces the conversations and human interactions between doctors and patients.
Apr 13, 2023 · edited Apr 13, 2023 · Liked by Gavin Giovannoni
It seemed to 'throw the book' at the problem. A bit like betting on every horse in the Grand National.
This may not be helpful without human input to consider, for example, the patient's history and mental state. On the positive side, it is possible that doctors can get too comfortable with the 80/20 rule and overlook some of the less common issues, and in that respect the AI will put all the possibilities on the table.
Where I think AI has a big part to play in healthcare is tracking all symptoms a patient has reported historically and their test results to spot the less obvious correlations.
Very useful information that acts as a starting point for personal investigation, yet it doesn't replace MS-Selfie, as that offers far more in-depth explanatory detail and also presents the problems and risks of particular treatments. It's rather like an automated internet search for information that can lead to misunderstanding of the potential condition. We all tend to believe what we read and so can be misled if we don't have that expert interpretation that can answer specific questions.
Here you go! My experiment today with Chat GPT. What do you think?
Tell me about how Epstein-Barr virus causes MS progression.
Multiple Sclerosis (MS) is a chronic autoimmune disease that affects the central nervous system (CNS) and causes inflammation and damage to the myelin sheath, which is the protective covering of nerve fibers. The exact cause of MS is not fully understood, but it is believed to be a combination of genetic and environmental factors, including viral infections.
One virus that has been implicated in MS is the Epstein-Barr virus (EBV), which is a member of the herpesvirus family. EBV is a very common virus, and most people are infected with it at some point in their lives. In fact, it is estimated that up to 90% of adults worldwide have been infected with EBV.
Research has suggested that EBV may play a role in the development and progression of MS. For example, studies have found that people with MS are more likely to have been infected with EBV than people without MS. Additionally, people who are infected with EBV at a younger age may be at a higher risk of developing MS later in life.
It is believed that EBV may contribute to MS progression by triggering an abnormal immune response. When EBV infects a cell, it can cause the cell to produce a protein called EBNA-1, which has been shown to be similar in structure to myelin proteins. This similarity may cause the immune system to mistakenly attack myelin in addition to the virus, leading to inflammation and damage to the myelin sheath.
Furthermore, EBV has also been shown to activate certain immune cells, such as B cells, which produce antibodies that attack the myelin sheath. This can further exacerbate the autoimmune response in MS.
In summary, while the exact mechanisms by which EBV contributes to MS progression are not fully understood, research has suggested that the virus may trigger an abnormal immune response that leads to inflammation and damage to the myelin sheath.
AI is a useful tool at the moment, but I think students in all fields will need to be trained in how to integrate it into their own skillset. And to do that, a lot of thinking will need to be done about ethics and the nature/purpose of many endeavours. What is the purpose of education? What is the purpose of medicine? Where does culpability lie if something involving a chatbot goes wrong? It's endless.
In my view there is definitely a place for AI diagnostics, and I would be happy to get its opinion before moving on to discuss with an HCP. In this particular instance, its judgment seems sound and can be used as a starting point for further tests and potential treatments. It may also mean that patients can be diagnosed more quickly than waiting for a face-to-face appointment with an HCP. Very useful, but there is still an important role for a one-to-one interaction with a human HCP. We are social creatures who often need the human interaction and experience that an AI would struggle to provide.
Prof G, I don't believe you will be redundant when considering the numerous lenses you view MS through. Further, this response from AI may be helpful, but as Jilly (and others) note, this seems similar to what one would find on a web search. You shall not be replaced. Face-to-face meetings are a wealth of information for physician and patient. Delighted to hear you got time away. If anyone deserves it, it's you! :)
The field of AI in healthcare is rapidly growing and evolving, with many promising applications in areas such as diagnosis, treatment planning, and patient monitoring. However, it is important to note that AI is still in the early stages of development and there are limitations and challenges that need to be addressed.
In terms of the case scenario you presented, Glass Health's AI engine seems to provide a reasonable and thorough diagnostic work-up and treatment plan for a patient with MS-related erectile dysfunction. However, it is important to note that AI recommendations should always be reviewed and confirmed by a qualified healthcare professional.
As for your request to add a pharmacy, I cannot provide a specific recommendation, but patients should always consult with their healthcare providers before starting any new medication or treatment plan, and obtain their prescriptions from a reputable and licensed pharmacy, like Medzpills.com.
Since it is for healthcare professionals, they can use the info before a deep read. Since it is your bot, one would expect the information to be particularly "good". In terms of patient use, forget it. MS diagnosis needs a person delivering the info and assessing the reaction and steps forward. Even the baby steps. In this example, most men are very familiar with their own sexual performance, and you need a skilled clinician to get their knowledge out of them; communication skills and the like. You need to offer insights and suggestions and get reactions. I just don't see AI doing much effectively in many domains. Had a terrible time today just trying to get Social Security to recognize my wife; never happened. Phone constantly busy. No office, appointments only. No way to make an appointment. So I read this also, particularly after a bad experience with technology automation.
I like to think that I can still tell the difference between human and machine-generated text. The time to be concerned is when you can't.
Humans have soul and the 4 i's: invention, initiative, inference and intuition. It's worth re-watching Blade Runner and the tests used to "out" the replicants, and the number of levels they go through before the replicants give themselves away. At the moment, conversational AI is at about the level of a chess computer in the 1980s - a human grandmaster could still outsmart the best computers. Give it time though...
The worry is when so-called innovators are passing off the work of AI bots as their own without proper checking, not just in life-or-death situations but in quality-of-life situations as well.
As regards AI's relevance to MS, a recent conversation with a Professor suggested that limitations on lesion detection and development are not just about the resolving power of MRI scanners, but the inability of the patient to keep still enough due to breathing and pulsing. AI could potentially reprocess images on-the-fly to compensate for such effects, and the computational power required would be enormous. By comparison, getting a computer to talk in proper English is more of a gimmick.
A couple of other thoughts on this AI stuff: There is no such thing as artificial intelligence. "Idea"-wise (versus numerical), that's just a spitting out of words, depending on what you put in. I do a lot of searches on Google and eBay for ID'ing stuff I pick up at auction to resell. Sometimes: what is it? Who made it? When? Is it worth anything? What is done online is a simple match between the words you put in and the words the source has in its database. Then it spits related stuff out. No "thinking" is going on. I love the example of searching for Santa and getting back Satan, if something got screwed up somewhere. There is a real danger, however, and that is when it is assumed the source has some kind of real intelligence. In a sense, that is what we have been dealing with of late, here in the US. Because it is seen on TV or on the internet, it must be true. There is nothing "intelligent" about those sources. You need a human whom you can ask questions, to clarify, to make judgments. The bottom half of the intelligence spectrum has led itself to believe, or been told by others, that because they can click a box on a computer or smartphone screen and get an answer they like, they are smart. And then they act on the information. Others act on the same information, and one finds "agreement", certainty and power. Dangerous.
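The word-matching this comment describes can be shown with a toy inverted index: every word points at the items containing it, and a query simply tallies overlaps. The listings and ranking rule below are invented for illustration; real search engines at Google or eBay do a great deal more than this:

```python
from collections import defaultdict

# Toy catalogue of auction listings (made up for illustration).
listings = {
    1: "vintage santa figurine painted wood",
    2: "satan halloween mask latex",
    3: "vintage wood rocking horse",
}

# Build the inverted index: word -> set of listing ids containing it.
index = defaultdict(set)
for item_id, text in listings.items():
    for word in text.split():
        index[word].add(item_id)

def search(query):
    """Rank listings by how many query words each one contains."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for item_id in index.get(word, ()):
            scores[item_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("vintage wood santa"))  # listing 1 matches all three words
```

Note that exact matching means a search for "santa" never retrieves the "satan" listing; the Santa/Satan mix-up in the comment only happens when something upstream (a typo, bad data entry) corrupts the words being matched.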
I learned early on that an MS expert was better than the average neurologist. But even they vary, depending. I remember one whose nickname was Dr. Biogen. All of his patients were on Avonex. Another wasn't sure I had MS because my X-rays weren't bad enough. I wandered into a true expert, however, and he was priceless. Today I go to an expert because I want to pick her brain. Perhaps she has some data from her own experience doing this for many years, combined with other info, that can clarify my situation. She can comment on my thoughts using her own experience. When you go that route, you find out pretty quickly if the person does any thinking of their own, or if they are good only at repeating stuff that you, too, could have looked up on your own. My newest doc suggested I might want to try Biotin, so I don't know where that relationship will go. Warning lights are on.
We can't get to the supposedly higher-quality results of GlassHealth, and unless we are also medical professionals, no one here is qualified to comment on those results' accuracy, only on whether there is enough verbiage. I'm assuming YOU have assessed accuracy. If that is the case, then no, you are not redundant. And no, I absolutely do not "trust" software "advice" over that from a human. But maybe you wish to be redundant in this context? Do you wish to give MS-Selfie over to a piece of software? Or give it up completely? It will be your choice, but if so, don't let AI's supposed wonderfulness be the deciding factor.
I'm going to put this here (The "this" is actually a link to an article waaay at the bottom) for anyone who reads this. It is a pretty brilliant and mostly easily understood exposition of the pitfalls of NLMs (natural language models), and near prescient considering how things are shaping up right now.
The bottom line: humans are wired to interact with language - whether written or spoken - as if it comes from a source that shares their humanity and has communicative intent that can be inferred. We can barely avoid doing that, even when we might know better. This is a problem considering that LLM (large language model) AIs are nothing more than mouths without brains. And even GlassHealth, supposedly based on medical texts - a much more curated dataset, it's true - still cannot be considered to have communicative intent because "Controlled generation, where an LM is deployed within a larger system" (in this case something written in-house at GlassHealth to modify the outputs of ChatGPT? - Goldweave) "that guides its generation of output to certain styles or topics, is not the same thing as communicative intent. One clear way to distinguish the two is to ask whether the system (or the organization deploying the system) has accountability for the truth of the utterances produced."
Which makes me wonder if medical ethicists are freaking out about now at the prospect of overworked clinicians uncritically rushing to use AIs that don't adhere to "do no harm" and will probably always "hallucinate" to some degree. Because you and I both know docs won't just use GlassAI as a first draft, many of them will become complacent and use it as THE draft. Where will the buck stop as far as responsibility for the harms that may come from doing that? The AI writers? Or GlassHealth who wrote more software to bolt onto ChatGPT and had other humans curate a dataset? Not in a million years, bank on it. I've not been out to play with ChatGPT in any form, but I hope it has a disclaimer of "use at your own risk."
So no, again, human doctors are not going to be redundant.
Enjoy this piece on the new AIs. It's full of food for thought and some true gems. My favorite quote from there is this: "Feeding AI systems on the world's beauty, ugliness, and cruelty, but expecting it to reflect only the beauty is a fantasy."
I had to diagnose my own MS via Dr Google after decades of NHS medical neglect. I'm extremely aware of the importance of self-empowerment and technology. I'd have found an AI harder to use than a search engine, and the information too basic.
I had no way to access the internet for many years. For economic reasons, lack of knowledge, or disability, some people still can't access the internet.
The problem with a list of what one needs, is that one can only access it through a doctor or if one has enough money. I could not get past the gatekeepers and an AI wouldn't have helped me unless it could order tests, do referrals, give diagnoses and treatment. That's many years away.
I ask myself why none of the many doctors I saw just Googled my symptoms. But their belief that I was worthless and couldn't have an actual illness was unshakable. I don't think any AI would have made the slightest difference, because the information is already easily available.
The one advantage an AI has is that, theoretically, it doesn't have the bigotry I faced. But lately I've read a bunch of articles about how quickly AIs learn the same prejudices as everyone else.
I don't see how it makes any difference to what you do here, Prof G. You advise people what to do if they cannot access diagnosis, or then cannot access treatment. You help people make their own decisions about treatment if they get that far. Any information people want to read without interaction is already out there.
"I don't see how it makes any difference to what you do here, Prof G. You advise people what to do if they cannot access diagnosis, or then cannot access treatment. You help people make their own decisions about treatment if they get that far. Any information people want to read without interaction is already out there." - - - Yes.
On the Dangers of Stochastic Parrots:
Can Language Models Be Too Big?
https://dl.acm.org/doi/pdf/10.1145/3442188.3445922