Mutuo Health uses artificial intelligence to automatically document interactions between patients and clinicians. Their product, AutoScribe, transcribes the dialogue between a healthcare provider and their patient in real time into a high-quality electronic medical record (EMR). Hands-free speech recognition reduces time-consuming manual clinical documentation and ultimately improves the patient-clinician experience.
“We’re using AI to solve the complex process problems at point of care,” says CEO Noah Crampton. AutoScribe uses speech recognition technology to transcribe the dialogue between patient and clinician. Machine learning then parses the dialogue, and the platform suggests outputs based on the interaction. The clinician can edit and accept the outputs, triggering specific EMR tasks to be performed automatically. AutoScribe learns from each edit, increasing its accuracy with each use. We spoke with Noah to learn more about Mutuo Health and their product.
Let’s start with the beginning of Mutuo Health. How did the company start?
I’m a family doctor working at Sunnybrook Health Sciences Centre. I’ve always been interested in how technology affects my work. At the University of Toronto I completed my Master’s in Health Informatics and met Muhammad Mamdani, the director of a new unit at St. Michael’s Hospital called the Centre for Healthcare Analytics Research and Training (LKS-CHART). Together we decided to explore how technology can help with clinical care. We assembled a team and started a research project to see how feasible it was to extract pertinent clinical information from dialogue between patients and physicians. In the fall of 2018, we began a commercial spinoff of the research, and thus Mutuo Health was born. Our product is AutoScribe, and we’re currently finalizing our prototype by building out its features and user experience.
What is AutoScribe and what are some of its key features?
The vision for AutoScribe is to automate all the processes that doctors hate doing in EMRs. The product uses a microphone in the clinic exam room and a user interface web application. The patient gives consent each time for the physician to use the microphone to listen in on the conversation. There are two parts to the web application: real time direct transcription of the dialogue and parallel updating into a medical note. Future iterations of the product will not only generate a medical note but, at the same time, automatically prepare prescriptions, create appointments and lab requests, send reminders to the patient, and conduct billing.
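The two-part flow described above can be sketched in code. This is an illustrative assumption of how a real-time transcript might feed a draft note in parallel; all class and function names here are hypothetical and do not reflect Mutuo Health's actual implementation.

```python
# Hypothetical sketch of the two-part flow described above:
# (1) consume transcript segments as they arrive, (2) update a draft
# note in parallel. Names are illustrative, not Mutuo's API.
from dataclasses import dataclass, field


@dataclass
class DraftNote:
    """A clinical note built up incrementally from dialogue."""
    lines: list = field(default_factory=list)

    def update(self, utterance: str) -> None:
        # A real system would run NLP to extract clinical facts; here we
        # keep clinically relevant utterances as a stand-in.
        if "prescribe" in utterance.lower() or "diagnos" in utterance.lower():
            self.lines.append(utterance)


def process_dialogue(segments: list) -> DraftNote:
    note = DraftNote()
    for segment in segments:   # in production: a live audio/transcript stream
        note.update(segment)   # transcription and note update run in parallel
    return note


dialogue = [
    "Patient: My cough has lasted two weeks.",
    "Doctor: I'll prescribe an inhaler and order a chest X-ray.",
]
note = process_dialogue(dialogue)
print(note.lines)
```

In a production system the clinician's edits to `note.lines` would also be fed back as training signal, matching the learn-from-each-edit loop described earlier.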
Why do you think this problem in healthcare hasn’t been tackled before?
The way EMRs were developed and deployed in North America in the 2000s was organized around optimizing billing and prioritized what governments or private insurers covered. They were poorly designed for end users such as physicians. Over the last ten years, a large body of research has shown that doctors resent using these systems because they add little value for the effort required. Before EMRs, documentation took about 25% of a doctor’s time; now it takes about 50%, which dramatically reduces face time with patients.
AI has only recently become powerful enough to tackle something like this. Replicating the current documentation process requires modeling a lot of complex cognitive work. Our technology tackles this problem by applying modern neural networks and machine learning to speech recognition and language understanding.
What makes your product better than others on the market?
Well firstly, we have a very strong technical team. We have connections to the Vector Institute, a leader in the transformative field of artificial intelligence. Second, the intellectual property we have developed is world class. We have a very strong understanding of how doctors think and how to generate the best EMR from these dialogue interactions. Our focus is to make the product as flexible as possible, so the data obtained can be customized to any medical specialty. Lastly, we provide benefits for both patients and physicians. We believe patients should receive a personalized experience based on what is said in each visit, and we do this by giving reminders to our patients. We are also heavily focused on consent. We wish to be as transparent as possible about what we do with the data that is collected. We want our patients to have complete control over their data permissions.
Since 2018, what do you feel are some of the major successes the team has gone through?
We’re very happy with the performance of our natural language processing model. On average, we are getting 80–90% accuracy, depending on speaking conditions. We are also very excited because we have partnered with St. Michael’s Hospital. Our partnership allows us to begin the iteration and validation process for our product.
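Transcription accuracy figures like the one above are commonly derived from word error rate (WER): an "85% accurate" transcript roughly corresponds to a WER of 0.15. Below is a generic textbook WER implementation for context; it is not Mutuo Health's evaluation code, and the example sentences are invented.

```python
# Word error rate: Levenshtein edit distance over words, divided by
# the number of words in the reference transcript. Standard metric
# for speech recognition; generic implementation, not Mutuo's code.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table: d[i][j] is the edit distance between
    # the first i reference words and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)


wer = word_error_rate("take two tablets daily", "take tablets twice daily")
print(f"{wer:.2f}")  # 0.50 — two word errors against a four-word reference
```

The "depending on speaking conditions" caveat matters because accents, cross-talk, and background noise in a clinic room all raise WER relative to clean benchmark audio.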
On the reverse, what are some problems the team faced?
I believe this is common to most AI based startups but ultimately, we need data. We’re data hungry. With our investment funds, we’re looking to procure that data and to grow our team of developers.
How has MedStack helped in Mutuo Health’s journey?
MedStack has provided an important service for healthcare startups, especially in a Canadian context, where we need to meet all the health regulatory compliance mandates for collecting personal health information. As a founder, I don’t have to expend my time and resources putting all these elements together. MedStack has created a turnkey solution for startups like mine that are going to be collecting this sensitive data. Their product allows us to easily have processes and procedures in place for how we appropriately access that data.
Looking forward, where is Mutuo Health headed next?
We want to iterate our prototype into an MVP. We are focused on getting more data to train our algorithm. By the end of 2019, we hope to deploy 10 pilots and run them for about four months in both large and small clinical organizations. These pilots will help us learn a lot. If there are errors in the platform, we want to see what level of output accuracy doctors will tolerate. We wouldn’t want editing our platform’s outputs to be more onerous for physicians than their current clinical documentation workflow. Through the pilots we’ll focus on learning where those limits are, gathering testimonials on the pros and cons, and hopefully turning it into a market-ready product by mid-2020.