An introduction to AI-assisted documentation tools

by Wayne Wenske, Senior Marketing Strategist

How artificial intelligence (AI) will affect our lives and work — even our intellectual and cultural development — fuels ongoing (and fascinating) debate. AI has been lauded as the answer to a wide swath of problems – from fighting financial fraud and conducting legal analysis to enhancing education and boosting power systems. 

In health care, strides are already being made to employ AI technology in the development of vaccines and medications; to evaluate diagnostic testing; or to assist physicians with making diagnoses, recommending treatments, and tracking patient progress. 

But for most physicians, the most immediate use of AI is found in minimizing burdensome administrative tasks associated with the electronic health record (EHR). This includes using AI as a medical scribe or transcription tool during patient encounters. In theory, relieving physicians of these routine duties can help them shift their focus away from the screen and more toward spending quality time with patients. 

Voice transcription tools powered by AI can transform dictation or even conversations between the physician, patient, family members, and others into text. Systems are now available that can distinguish different voices and identify medical terminology. Using algorithms that interpret language, these AI tools can even process and understand the context behind what is being said. These tools can be integrated with EHR systems to assist physicians with documentation that immediately and accurately records what is said during a patient encounter. 
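
To make this concrete, below is a minimal sketch of the first step in such a pipeline: basic speech-to-text. It assumes the open-source Python SpeechRecognition package and a hypothetical recording named "encounter.wav"; it is an illustration, not any vendor's product. Commercial clinical tools layer speaker identification, medical vocabularies, and EHR integration on top of this step.

    # A minimal speech-to-text sketch using the open-source SpeechRecognition
    # package. "encounter.wav" is a hypothetical recording of a patient
    # encounter; this is an illustration, not any vendor's product.
    import speech_recognition as sr

    recognizer = sr.Recognizer()

    with sr.AudioFile("encounter.wav") as source:
        audio = recognizer.record(source)  # read the entire recording

    try:
        # Send the audio to a general-purpose speech-to-text service.
        transcript = recognizer.recognize_google(audio)
        print(transcript)
    except sr.UnknownValueError:
        print("Audio could not be understood.")
    except sr.RequestError as err:
        print(f"Speech service unavailable: {err}")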

As with any transcription tool, physicians must review transcribed documentation to ensure accuracy and completeness.

However these technologies come to be used (and regulated), they offer incredible benefits. Yet fast growth also brings new risks, including system errors, improper data sharing, and data that reflects human and systemic biases. 

AI systems learn by scanning thousands of records and data fields. From these scans, the technology learns how to respond to queries, perform tasks, prioritize information, and make decisions. Therefore, an AI system’s performance is dependent on the quality and amount of data it is given. In health care, this data could potentially be pulled from patient records, insurance claims, pharmacy accounts, and other data files created by (fallible) humans across different formats and platforms. 
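
As a simplified illustration of that dependence, the sketch below (Python with scikit-learn, entirely synthetic "records," no real data) trains the same model twice: once on clean labels and once with 30 percent of the training labels corrupted to mimic error-ridden source records. The corrupted version scores measurably worse on held-out data.

    # Illustrative only: how data quality limits an AI system's performance.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for "records"; no real patient data is involved.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Corrupt 30% of the training labels to mimic error-ridden source records.
    rng = np.random.default_rng(0)
    noisy = y_train.copy()
    flip = rng.random(len(noisy)) < 0.30
    noisy[flip] = 1 - noisy[flip]

    clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    noisy_model = LogisticRegression(max_iter=1000).fit(X_train, noisy)

    print("clean-data accuracy:", clean_model.score(X_test, y_test))
    print("noisy-data accuracy:", noisy_model.score(X_test, y_test))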

For AI-assisted EHR systems to work well, they must learn from very large sets of patient data. This data includes detailed descriptions of treatments; how patients responded; and specific data about patient populations, such as vital signs, family histories, behaviors, and genetic data. These requirements raise concerns about patient privacy and about whether AI developers are violating federal and state privacy protections. 

Privacy concerns also arise when considering that AI developers may use health care data (such as electronic protected health information, or ePHI) to develop AI tools in other industries. Patients could see their ePHI being used to determine insurance premiums or influence banking/loan requests.

In addition, AI systems could incorporate any biases and inaccuracies found in the data. “For example, African American patients receive, on average, less treatment for pain than white patients; an AI system learning from health-system records might learn to suggest lower doses of painkillers to African American patients even though that decision reflects systemic bias, not biological reality. Resource-allocation AI systems could also exacerbate inequality by assigning fewer resources to patients considered less desirable or less profitable by health systems for a variety of problematic reasons.” (1)
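
The mechanism is easy to demonstrate. In the deliberately simplified, fully synthetic sketch below (illustrative only; not based on any real data set or clinical system), historical records show one patient group receiving about 20 percent lower doses for the same reported pain score, and a model fit to those records reproduces the disparity in its recommendations.

    # Illustrative only: a model trained on biased records inherits the bias.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 1000
    pain = rng.uniform(1, 10, n)   # reported pain score (synthetic)
    group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B (synthetic)

    # Historical records: group B received ~20% less medication for equal pain.
    dose = pain * 10 * np.where(group == 1, 0.8, 1.0) + rng.normal(0, 2, n)

    model = LinearRegression().fit(np.column_stack([pain, group]), dose)

    # Same pain score (7), different group: the model suggests a lower dose
    # for group B, reflecting the bias in the records, not biology.
    print(model.predict([[7, 0], [7, 1]]))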

The American Medical Association (AMA) recently published “key takeaways” for the health care industry to act upon in the development, use, and regulation of AI. Many of the takeaways address how the industry should approach AI systemically, from federal and state regulation to technology frameworks that support consistent data creation. 

The AMA takeaways also include a few practical actions that physicians can take now: (2)

  • “Promote population-representative data with accessibility, standardization and quality.” “This is the way to ensure accuracy for all populations. While there is a lot of data now, there are issues with data quality, appropriate consent, interoperability and scale of data transfers.” (2)

    For physicians and practices, it is important to create records and document encounters consistently. Ensure that notes captured using AI (such as voice recognition software) are reviewed by a physician or a staff member with medical training to confirm the information is accurate, clinically sound, and logical. 

If using voice recognition software, also ensure that the AI can interpret different dialects and accents; there have been instances where this software had difficulty interpreting male versus female voices. Background noise – including additional voices that could confuse the record – should be filtered out.

The use of electronic health records (EHRs) already provides consistency in the format and content of your documentation. However, take care to keep text structured and to avoid writing notes that do not fit within a defined field. “AI doesn’t really understand human language, let alone the kind of shorthand that a doctor might use.” (3)

  • “Prioritize ethical, equitable and inclusive medical AI while addressing explicit and implicit bias. Underlying biases need to be scrutinized to understand their potential to worsen or address existing inequity and whether and how it should be deployed.” (2)

    Carefully review patient records to ensure they do not reflect biases that could add to health care disparities based on gender, race, financial status, religion, or language proficiency. Such biases can lead AI systems to adopt them and potentially make errors in diagnoses and treatments. It is also important to build cultural competency in your practice; a number of resources on the TMLT Resource Hub can help you and your staff do so.

Sources

  1. Price WN. Risks and remedies for artificial intelligence in health care. Brookings Institution. November 14, 2019. Available at https://www.brookings.edu/articles/risks-and-remedies-for-artificial-intelligence-in-health-care/. Accessed February 26, 2024.
  2. Henry TA. 7 tips for responsible use of health care AI. American Medical Association. March 4, 2021. Available at https://www.ama-assn.org/practice-management/digital/7-tips-responsible-use-health-care-ai. Accessed February 26, 2024.
  3. Desai AN. Artificial Intelligence: Promise, Pitfalls, and Perspective. JAMA. 2020;323(24):2448–2449. doi:10.1001/jama.2020.8737. Accessed February 27, 2024.