Artificial Intelligence in the Provision of Health Care: An American College of Physicians Policy Position Paper

Internal medicine physicians are increasingly interacting with systems that implement artificial intelligence (AI) and machine learning (ML) technologies. Some physicians and healthcare systems are even developing their own AI models, both within and outside of electronic health record (EHR) systems. These technologies have various applications throughout the provision of healthcare, such as clinical documentation, diagnostic image processing, and clinical decision support. With the growing availability of vast amounts of patient data and unprecedented levels of clinician burnout, the proliferation of these technologies is cautiously welcomed by some physicians. Others think it presents challenges to the patient–physician relationship and the professional integrity of physicians.

These dispositions are understandable, given the “black box” nature of some AI models, for which specifications and development methods can be closely guarded or proprietary, along with the lag in, or absence of, appropriate regulatory scrutiny and validation. This American College of Physicians (ACP) position paper describes the College’s foundational positions and recommendations regarding the use of AI- and ML-enabled tools and systems in the provision of healthcare. Many of the College’s positions and recommendations, such as those related to patient-centeredness, privacy, and transparency, are founded on principles in the ACP Ethics Manual. They are also derived from considerations of the tools’ clinical safety and effectiveness, as well as their potential consequences for health disparities. The College calls for more research on the clinical and ethical implications of these technologies and their effects on patient health and well-being.

The applications of artificial intelligence (AI) and machine learning (ML) in medicine have expanded steadily since the 1970s and continue to grow at a rapid rate. From January 2020 to October 2023, the U.S. Food and Drug Administration (FDA) reviewed, approved, authorized, or cleared more AI- and ML-enabled tools than it had in the preceding 25 years. Since November 2022, interest in AI has grown significantly alongside the rise of generative AI tools, such as OpenAI’s ChatGPT, and the corresponding increase in mainstream media coverage.

The healthcare industry has been particularly excited about AI technology and what it may mean for the future of medicine and healthcare delivery. The amount of data that continues to be compiled about persons through various consumer- and patient-facing digital health applications is impossible for even the most astute physicians to sift through and process, let alone apply to clinical decisions. With the worsening national shortage of clinicians and record levels of physician burnout, there is growing enthusiasm about the expansion of seemingly omniscient tools that guide healthcare practitioners in clinical decision-making and assist with common sources of administrative burden. Furthermore, it is expected that AI technologies, which can process vast amounts of patient data from various sources to inform medical decisions, will enable more personalized, data-driven patient care. The expected benefits for patient-centered care and decision-making are among the reasons that AI-enabled tools and systems may not only be expected but required in the future of medicine and healthcare.

Although data-driven care is a cornerstone of modern medicine, data-driven decision-making can be complicated and fraught with error. Similarly, although AI tools can transform the practice of medicine in many beneficial ways, relying on AI output for clinical decisions without a basic understanding of the underlying technology can have serious, even fatal, consequences for patients. For this reason, when AI is used in clinical decision-making, the more appropriate term is “augmented” intelligence: the technology continues to incorporate human intelligence and serves as a tool to assist clinicians rather than replace them. Extensive research is necessary to assess the short- and long-term risks and effects of the clinical use of AI on the quality of care, health disparities, patient safety, healthcare costs, administrative burden, and physician well-being and burnout. It is critical to increase overall awareness of the clinical risks and ethical implications of using AI, including any measures that can be taken to mitigate those risks. Comprehensive educational resources are necessary to help clinicians, both in practice and in training, navigate this rapidly evolving area of technology, including improving their collective understanding of where the technology may be integrated into systems they already use and what its implications are.

Along with best practices, research, regulatory guidance, and oversight are needed to ensure the safe, effective, and ethical use of these technologies. This executive summary provides a synopsis of the American College of Physicians’ (ACP) policy positions on the use of AI in the provision of healthcare. The full background, rationale, and policy recommendations can be found in Appendix 1. The recommendations in this paper are intended to inform the College’s advocacy regarding both predictive and generative AI policies. Although we see great potential for generative AI to benefit both physicians and patients, the landscape for this subset of AI technology is still evolving, and it is too early to comment on its full scope and implications. The ACP will continue to consider this technology as it matures. In addition, ACP recognizes that the dynamic and evolving nature of AI technology may pose challenges to implementing some of these recommendations.


Deepti Pandita
Deepti Pandita, MD, is the VP of Informatics and Chief Medical Information Officer at University of California Irvine Health, and Associate Professor in the Department of Medicine at UCI. Dr. Pandita is board certified in Internal Medicine and in Clinical Informatics.
