Several technologies are converging right now that will drastically alter the future of healthcare and how we interact with medical professionals. The question at hand: are we headed for AI doctors or virtual doctors?
Artificial intelligence (AI) and machine learning, live video technology, and voice assistant technology are already making an impact on medical care, although these technologies are evolving so rapidly that it’s not yet clear where it all ends up. Because that future is uncertain, we sometimes get caught up in alarmist concerns like “Robots will soon replace doctors.”
Reality need not be so alarming. Instead, we can expect these technologies to complement each other and create new interaction patterns that improve the patient experience.
These are questions we regularly ponder with our clients and for ourselves. In addition to the custom WebRTC and telehealth work we do for clients, we’ve experimented at hackathons with ideas like virtual triage applications for emergency situations, and we’ve built our own telehealth platform, UniCare, which we customize and license for clients.
What the future patient experience could look like:
- You wake up not feeling well, and notice a growing rash on your body. You ask your home voice AI: “Alexa, what should I do for a rash and fever?”
- Alexa asks you some questions, and you talk back and forth about your symptoms. Alexa is using advanced machine learning and artificial intelligence behind the scenes to converse with you and research information, but ultimately she reminds you that she’s not able to give final medical advice, and would be happy to set up an appointment with your doctor.
- “Alexa, please book an appointment with my doctor for today about this rash.”
- After a pause, Alexa asks for your permission to turn on her camera and connect you with your doctor’s office for a pre-screen. She can also give them your personal information and some background information from the previous conversation. You approve, and then your voice assistant’s video screen lights up with the live video feed of a nurse practitioner at your doctor’s office.
- The nurse asks you a couple quick questions, and you show him the growing rash. “Ok, let’s definitely have you come into the office, how about two hours from now? In the meantime, drink fluids, and get some rest.”
- Your appointment has been set, and two hours later you are in the doctor’s office. While meeting with you, the doctor is dictating notes to her voice assistant, which automatically adds them to your medical record. If a specialist needs to be consulted on your growing rash, your doctor can quickly bring in a dermatologist from a nearby practice, who agrees that you just need a prescription ointment.
- You pick up your prescription, and a day or two later the doctor’s office requests a quick video session with you to see how the rash is doing. You share video from your phone, and the nurse agrees that you are on the mend and no further care is necessary.
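The assistant’s behavior in that scenario follows a simple pattern: collect symptoms, never diagnose, and decide whether to escalate to a live human via video. A minimal sketch of that escalation logic is below; the symptom set, function name, and return values are illustrative assumptions, not any real voice-assistant API:

```python
# Hypothetical escalation logic for the scenario above: the assistant
# gathers symptoms, never diagnoses, and decides whether to offer a
# video pre-screen with the doctor's office. The symptom set and action
# names are illustrative assumptions only.

ESCALATE_SYMPTOMS = {"fever", "rash", "chest pain", "shortness of breath"}

def triage_action(symptoms):
    """Return the assistant's next step: general self-care information,
    or an offer to book a pre-screen video call with a human."""
    reported = {s.lower() for s in symptoms}
    if reported & ESCALATE_SYMPTOMS:
        return "offer_prescreen"    # hand off to a human via live video
    return "share_selfcare_info"    # plain informational search result

print(triage_action(["rash", "fever"]))   # offer_prescreen
print(triage_action(["mild headache"]))   # share_selfcare_info
```

The key design point matches the scenario: the software routes and coordinates, while every actual medical judgment stays with a human.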
Why that future may not be here yet
One of our engineers, Alberto Gonzalez, recently showed me this article: “Alexa is a terrible doctor.” He’s right: at this point she is, and even in my imaginary scenario above I’m not expecting her to give us solid medical advice, even though there are many other things she can do to improve the patient experience. Her discussion of our symptoms is essentially a glorified voice-based search of medical conditions online. Anyone who has been scared to death after googling the symptoms of a minor condition knows that self-diagnosing from internet searches is fraught with danger.
But in the future, a more advanced Alexa could still help us with those sorts of searches and then help coordinate our medical visit, if we integrate her skills into medical systems. Voice assistants have a long way to go before giving prescriptive advice in medical scenarios, but current voice technology could already be used to help us research symptoms, or on the doctor’s side to build dictation software that attaches memos to our medical records. There are many ways we can start using voice assistants in medicine now, even before the artificial intelligence behind them is fully sophisticated.
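Those integration points are plumbing more than intelligence. As a sketch of what a custom voice skill hands back to the voice service, here is a minimal builder for an Alexa-style plain-text response envelope; the reply wording is hypothetical, and a real skill would add session handling and intent routing on top:

```python
import json

def build_speech_response(text, end_session=False):
    """Build a minimal Alexa-style custom-skill response envelope
    containing plain-text speech for the assistant to read back."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

reply = build_speech_response(
    "I can't give medical advice, but I can book an appointment "
    "with your doctor today. Would you like me to?"
)
print(json.dumps(reply, indent=2))
```

Note that the hard part is not this envelope; it is connecting the skill to scheduling and records systems in a compliant way, which is exactly where the regulatory friction discussed below comes in.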
The most mature of the technologies mentioned so far is live video. In the patient experience above, we don’t currently have a way for Amazon Echo devices to use Alexa to call your doctor’s office for you, though I imagine that will be possible in the future. But initiating the video call from your laptop or phone is certainly reasonable, and having the doctor bring in a remote specialist via video while you’re in the office is already happening in hospital settings. Nothing technologically prevents us from having these video interactions with doctors now; it’s just a matter of more hospitals and medical practices being willing to integrate WebRTC video into their existing applications and processes.
How will these technologies intersect?
Machine learning is the furthest behind of these technologies. Voice assistants are here, and their APIs could be expanded to support more of my example scenario above, but the underlying AI and machine learning behind those assistants are not yet ready for a scenario as interactive as the one I’ve described. That will be the last puzzle piece to fall into place.
Even once these technologies intersect more fully, regulatory and privacy concerns mean a patient experience like the one above will take more time to become commonplace.
HIPAA compliance and privacy concerns will understandably slow the arrival of voice assistant software that dials our doctor’s office for us and relays initial information. Our society is still not fully comfortable with turning over all of our private medical information to consumer electronics, so it will be hard to fully implement the example I’ve given above. But that day is coming too.
Human doctors are unlikely to be replaced by AI and machine learning at any point. Instead, the most likely intersection of these trends is what researchers call Collaborative Intelligence1. Basically, this means the two forces, humans and AI, will complement each other and amplify each other’s abilities. Machine learning can help doctors and medical researchers process and understand large data sets, as well as perform routine tasks like updating medical records or setting up appointments. The AI can also “amplify” the humans’ ability to offer medical advice with extra data, but ultimately the decisions require a level of judgment and empathy that only the human can provide.
The point of it all – Improving lives
We are heading towards more use of artificial intelligence and virtual doctor visits, although we shouldn’t worry about how good a doctor Alexa herself can be. Artificial intelligence will play a big role in medical treatment in the future, but it’s unlikely to ever completely replace interactions with medical professionals.
Instead, we are heading towards a future where it will be commonplace to see a combination of artificial intelligence, voice assistants, virtual doctor visits over live video, and in-person doctor visits. It’s not an “all or nothing” proposition for any of these things, but instead the correct combination of them will radically change and improve how we interact with doctors.
1“Collaborative Intelligence: Humans and AI Are Joining Forces,” by H. James Wilson and Paul R. Daugherty, Harvard Business Review, July-August 2018 issue, https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces