Industry insights: how AI will revolutionize healthcare

Sam Gharbi, co-founder and CEO of Arya Health, details how his startup’s adoption of AI could change the industry.

Dr. Sam Gharbi. Photo: VTJ/Kai Jacobson

Vancouver Tech Journal first spoke with physician-founder Dr. Sam Gharbi two years ago, when we detailed his personal journey towards co-founding Arya Health. Fast forward to today, and Gharbi sees the company’s adoption of AI as an antidote to our public healthcare system’s shortage of practitioners. Gharbi shared his thoughts with us on what it’s like to adopt AI in healthcare and the journey ahead for regulation and ethics in the sector.

Vancouver Tech Journal: Can you describe what an average doctor’s relationship with technology is like? 

Sam Gharbi: As a doctor, your main tool now is your EMR [electronic medical record] in a lot of ways. For the longest time, all we did was just write on paper. It's very new for doctors to be using technology in a meaningful way.

Unfortunately, the technology has made us less efficient. Everybody else is saying, ‘Hey, tech is making my work and life better.’ And in medicine, we're saying, ‘I hate using technology,’ which is kind of nuts.

I used to be the Associate Chief Medical Information Officer for Vancouver Coastal. And so when we were doing our Cerner implementation [software for clinical information], people were angry and defiant. And this is ubiquitous when we look at doctors around the world with their tech, which is kind of nuts, because we love our phones, we love our TVs, we love our cars, and everything else, but doctors hate their technology.

VTJ: Why exactly do doctors hate technology?

SG: There's tons of papers that come out every year that say, ‘Doctors spend 40 percent of their time on their computer.’ We're just becoming data-entry monkeys and are unhappy about it, frustrated, and burnt out.

I didn't like working in-clinic, mainly because of the technology. It was so hard to write a prescription. My first day in-clinic, I remember I was using this system, and after hours of training I still couldn't figure out how to put together a prescription in an effective way. So I just started taking pieces of paper and writing. If my parents who are computer illiterate — who are in their 60s and 70s now — can sign up for Facebook and Gmail, then somebody who's done 10 to 12 years of college-level education and healthcare should be able to figure out intuitively how to use these systems. But they're just very poorly designed.

VTJ: You and your co-founders — all physician-founders — started Arya Health in response to this problem. Can you describe what Arya does?

SG: The [existing] tech was built by business folks or developers who've meant well, but ultimately weren't in the trenches, and didn't understand how these tools needed to be built to be effective for doctors. We [at Arya Health] said, ‘Let's build a system where anybody can sit down and [use it] without any training.’ Which seems crazy, because most people need eight hours or more of training to use existing systems.

With Arya, you need half an hour. And most people, in fact, just start using it. It’s that intuitive. We said, ‘Let's start with the building blocks, the basics — how to write a note, electronically create a prescription, essentially create information and share information.’ We see a patient, we then document information about the encounter, and then share that information in a meaningful way. So that's what we've built with Arya at a basic level.

VTJ: AI has taken all industries by storm, and healthtech is no exception. You’ve integrated the Arya AI Co-Pilot, powered by TORTUS. Can you explain what opportunity you saw with the adoption of AI?

SG: There are things that can be done by AI that save the doctor and nurse tons of time, and that AI can do even more reliably.

Talking to a patient and then writing, typing, or dictating your note is what takes the most time and interaction — you can spend 15 minutes putting together that note after you’ve spent up to 20 minutes talking to your patient, doubling the amount of time that you interact with a patient from start to finish. Instead of that, what we’ve done is partnered with TORTUS and integrated AI that listens to the conversation — obviously, if the patient's okay with that.

We'll listen to that conversation, parse out the information, and create your note for you. So all you do is click a button once you're done speaking with the patient, and that 20-minute conversation is turned into a consultation note. That's incredible.

People kill themselves over five or ten percent efficiency improvements in other industries. You have now just saved 50 percent of the time you were spending interacting with your EMR, your paperwork, your documentation.

VTJ: How have doctors responded to this?

SG: Jaws just drop when they see this, in terms of how incredible it is and how reliable it is. It's so much better that I think the new standard medicolegally, which will happen in the coming years, will be [this]. Because right now, you're trusting that the doctor reliably remembered everything the patient told them, and was honest in their documentation. We assume the honesty and we assume the reliability. But let's be honest: we're not perfect as people. We forget things. A lot of doctors write their notes at the end of the day — so they've talked to 10, 15, maybe 40 patients, and then they start documenting at the end of the day. I guarantee you, there are things that they forget to write.

It's safer for the patients. Medicolegally it's better, because now you can say, ‘No, see, I actually did say this; this was the interaction,’ or, ‘You know what, we did miss this,’ right? It's transformational. And that's just the first phase of it.

VTJ: In many industries, AI is set to take away jobs through automation. How do you envision AI will affect healthcare?

SG: In our industry, AI is not going to hurt jobs, because there aren't enough of us. There's a huge shortage of healthcare practitioners. We're harming people because there aren't enough of us to provide good healthcare. People are waiting six months or a year to see different specialists. A quarter of Canadians don't have a family doctor. That's criminal.

And so all of a sudden, every doctor [and] every nurse spending 30 to 40 percent of their time on paperwork can spend maybe five to 10 percent of their time on [that], and see so many more patients. Doctors and nurses are happy, because the existing ones get to make more money and see more patients. Patients are happy, because they now have more access to existing resources. And the system is happy, because it doesn’t need to spend so much more money on folks and training. It's one of those unique situations where it's a win-win-win in our industry. And I think that, for us, we've identified that and tried to lead the charge. We’re the first one in Canada that I'm aware of that has an AI co-pilot integrated within our EMR. We're dedicated to really pushing that forward in a meaningful way.

VTJ: How are you navigating the regulation and privacy around AI, especially coming from a healthcare perspective, where patient privacy and ethics are of the utmost importance?

SG: Within our industry, thankfully, I think that's not so much the case, because we're not trying to cause harm; we're trying to do good. But I think the one thing we need to be careful about is failing to do what we need to do, in terms of our responsibility.

For example, if the AI can now listen to your conversation and create a note, that doesn't mean you don't have to review that note. It doesn't mean that the doctor now has no liability or responsibility to sit down and review the information that's been created by the AI. I think that’s similar to when I dictate a note right now with front-end speech recognition. When I get that dictated note completed, I still have to review it for errors, because the dictation isn't perfect.

There will need to be a process in place, medicolegally, so you don't make mistakes. Let's say somebody comes in and says heart failure, and the AI for whatever reason reads that as kidney failure. That's a big problem, right? I think a lot of the responsibility needs to reside with the healthcare practitioner. Ultimately, as the last line of defense, we need to review that information. I do think that regulation needs to come. And come quick, because things are moving so quickly.

VTJ: How have investors received your product?

SG: All three of us who co-founded Arya are doctors, and all of us still work clinically. [The investors] keep saying, ‘Well, you guys should go full time with Arya — you know, we don't trust the fact that you're not all full time.’ First of all, the analogy I always use is that Rome had two consuls at a time. If two people running Rome was good enough, then at a small company like Arya you could have three people running it as a three-headed CEO, part-time, combined.

And then the other piece is, more importantly, [that] the fact we're still in the trenches allows us to build a better product. The competitive advantage that we have and the beauty of what we do is that we feel the pain of our users. So I can then be in-clinic, and then I'll sit down at the end of the day and talk to our developers and product manager and say, ‘You know what, this doesn't work very well,’ or, ‘This needs to be fixed.’

So we listen to our users, but we are users. It's like those old Hair Club for Men commercials. Back in the ‘80s and ‘90s, this guy was on Hair Club for Men, saying, ‘It's good. And I also use it myself.’ So I think that goes a long way.

Editor’s note: This article has been updated to reflect Arya Health’s partnership with TORTUS through Arya Health AI Co-Pilot.
