
JAMA Interviews Harvard on GenAI Transformation and Publishes Our ChatGPT Co-Pilot Work



Paving the Way for Global Health Equity with GenAI, ML, and LHS (Learning Health Systems)




  • JAMA interviewed the Harvard Dean of Medical Education about ongoing transformation brought about by GenAI.
  • JAMA has issued a call for papers on a broad range of topics to understand AI’s role in medicine, including its impact on health equity.
  • We have published a brief paper in the JAMA network demonstrating the application of ChatGPT as a co-pilot in LHS training.
  • Our new learning tool, ELHS Copilot, is now available for medical students and doctors to use in clinical case studies with the aid of ChatGPT.



(Source: JAMA)


From Page Mill

Published papers, recent news, and significant events, woven into a coherent narrative around the main topic.


JAMA Interview Video: AI and Clinical Practice—Can AI Accelerate Medical Education?

[2023/10] JAMA Editor in Chief Dr. Kirsten Bibbins-Domingo interviewed Dr. Bernard S. Chang, Dean of Medical Education at Harvard Medical School. They discussed how AI will reshape medical education, admissions, and the training of the next generation of physicians and clinicians.


Chang BS. Transformation of Undergraduate Medical Education in 2023. 

JAMA. 2023. doi:10.1001/jama.2023.16943

[2023/9] Dr. Chang, an educator at Harvard Medical School, predicted that physician tasks, such as data gathering, information search, and even probabilistic decision analysis, will transition into newer, more efficient, and hopefully less error-prone AI-aided forms. Medical professionals will require comprehensive training on how best to utilize these tools in both their learning and practice. 


Khera R, Butte AJ, Berkwits M, et al. AI in Medicine-JAMA's Focus on Clinical Outcomes, Patient-Centered Care, Quality, and Equity.

JAMA. 2023;330(9):818-820. doi:10.1001/jama.2023.15481

[2023/9] In this call for papers, JAMA editors seek original science that focuses on developing, testing, and deploying AI in studies that improve understanding of its effects on the health outcomes of patients and populations. The starting point is original research rigorously examining the challenges and potential solutions to optimizing clinical care with AI. JAMA will focus on these essential areas: Clinical care and outcomes, Patient-centered care, Health care quality, Fairness in AI algorithms, Medical education and clinician experience, and Global solutions.


Chen A, Chen DO. Accuracy of Chatbots in Citing Journal Articles. 

JAMA Netw Open. 2023;6(8):e2327647. doi:10.1001/jamanetworkopen.2023.27647

[2023/8] For the first time, we utilized ChatGPT as a co-pilot to provide LHS training and documented the challenges encountered with the ChatGPT co-pilot.





In the Eyes of GPTs

GPTs are becoming an integral part of our society. AJ talks with GenAI to understand how machines view healthcare.


 Previously, we talked about the structural changes that GenAI may cause. Will the GenAI co-pilot be one of the key structural changes in medical education going forward?


 The integration of GenAI or similar AI systems as a co-pilot in medical education could indeed be one of the key structural changes in the field going forward. This integration represents a shift towards a more technologically advanced, personalized, and data-driven approach to medical training and education. Here are some aspects to consider regarding this potential structural change:


Enhanced Educational Framework

  • Blended Learning: The incorporation of AI co-pilots could lead to a more blended learning environment, combining traditional methods with AI-driven tools, simulations, and interactive content.
  • Customization and Adaptability: AI can offer personalized learning experiences, adapting to individual learning styles and needs, which could transform the one-size-fits-all approach of traditional education.


Improved Accessibility and Reach

  • Global Reach: AI co-pilots can make high-quality medical education more accessible globally, especially in regions with fewer educational resources.
  • Democratization of Education: This technology can help level the playing field by providing top-tier educational resources to a wider range of students, regardless of their location or background.


Data-Driven Insights

  • Performance Analytics: AI can provide detailed analytics on student performance, helping educators identify areas where students struggle and adapt the curriculum accordingly.
  • Research and Development: AI-driven analysis of current trends, research, and clinical data can inform curriculum updates, ensuring that medical education stays current with the latest scientific findings and clinical practices.


Integration with Clinical Practice

  • Simulated Clinical Experiences: AI can offer sophisticated simulations and virtual reality experiences, allowing students to practice and hone their skills in a risk-free environment.
  • Decision Support Training: By using AI in training scenarios, future medical professionals can become adept at integrating AI support tools in their clinical practice.


Ethical and Professional Development

  • Ethical Training: AI can pose ethical dilemmas and scenarios, preparing students for the complex moral decisions they will face in their careers.
  • Soft Skills Development: AI programs can also aid in developing communication, empathy, and other soft skills crucial in healthcare.


Challenges and Considerations

  • Technology Dependence: There is a risk of over-reliance on technology, which might affect the development of critical thinking and diagnostic skills without AI assistance.
  • Privacy and Security: The use of AI in medical education must rigorously protect patient data and comply with privacy regulations.
  • Ensuring Quality and Bias Mitigation: It’s crucial to ensure that AI systems are accurate, unbiased, and based on up-to-date and comprehensive medical knowledge.



While the potential for AI as a co-pilot in medical education is vast, it's important to approach this integration thoughtfully. Balancing technological advances with the essentials of human touch, judgment, and ethical considerations will be key. If implemented effectively, GenAI could indeed be a transformative element in the evolution of medical education.


  What were the main reasons that the previous generation of AI chatbots, such as the IBM Watson chatbot, failed to become a co-pilot in medical education and training?


 The journey of AI in medical education and training, particularly with earlier generations of AI chatbots like IBM Watson, has been a learning experience that highlighted several challenges and limitations. While these systems showed great promise, their full potential as co-pilots in medical education and training was not fully realized due to several reasons:


1. Technological Limitations

  • Understanding Context: Early AI systems often struggled with understanding complex medical context and nuances, which are critical in medicine.
  • Natural Language Processing (NLP) Capabilities: The ability to understand and process natural language accurately was less advanced, leading to difficulties in handling open-ended medical dialogues.


2. Data Challenges

  • Quality and Quantity of Data: AI systems require vast amounts of high-quality data to learn effectively. Earlier systems often faced challenges with the availability and quality of medical data.
  • Data Privacy and Security: Medical data is highly sensitive. Ensuring privacy and adhering to regulations like HIPAA was (and remains) a significant challenge.


3. Integration Issues

  • System Integration: Integrating AI tools seamlessly with existing medical education infrastructure and clinical workflows was often challenging.
  • User Interface and Experience: Early AI systems sometimes had less intuitive interfaces, making them less accessible for students and educators.


4. Clinical Accuracy and Reliability

  • Medical Accuracy: Ensuring the clinical accuracy of AI recommendations and teachings is crucial. Early AI systems occasionally provided recommendations that were not aligned with best medical practices.
  • Over-reliance Risk: There was a concern that reliance on AI could lead to atrophy of critical thinking and diagnostic skills in medical trainees.


5. Cost and Resource Intensity

  • Developing, training, and maintaining AI systems for medical education can be resource-intensive and expensive, which may have limited their adoption.


6. Ethical and Legal Considerations

  • Ethical Implications: The use of AI in medicine raises ethical questions, including the risk of bias and the implications of AI-generated advice.
  • Legal Responsibility: There were uncertainties regarding legal responsibility, especially in cases where AI advice might be linked to medical errors.


7. Expectation vs. Reality

  • Overhyped Expectations: The initial hype and high expectations around AI capabilities sometimes led to disillusionment when these expectations were not fully met.



The earlier generation of AI chatbots in medicine paved the way for learning and improvement, setting the stage for more advanced systems like today's GenAI. Understanding these challenges has been crucial in developing more sophisticated, accurate, and user-friendly AI tools that are better suited for integration into medical education and training.


  Among these reasons, what was the number one reason causing the earlier generation of AI chatbots to fail?


 Identifying a single, overarching reason for the challenges faced by earlier generations of AI chatbots like IBM Watson in medical education and training is complex, as multiple factors often interplayed. However, if we were to highlight one critical factor, it would likely be the Technological Limitations, particularly in Natural Language Processing (NLP) and Understanding Context.


Why NLP and Contextual Understanding Were Key Challenges:

  • Complexity of Medical Language: Medical language is highly specialized and context-dependent. Early AI systems often struggled to fully grasp the nuances of medical terminology, patient histories, and symptom descriptions.
  • Variability in Clinical Scenarios: Medicine is not always black and white; it often involves interpreting subtle clues and understanding complex clinical scenarios. Early AI chatbots had difficulty processing these nuances and providing accurate, context-aware responses or recommendations.
  • Interpreting Unstructured Data: Much of medical data (like patient histories, doctor's notes) is unstructured and requires sophisticated understanding. The ability of AI to interpret this data accurately is crucial for its utility in medical education and practice.
  • User Expectations: Medical professionals and students expected AI systems to understand and respond with a high degree of accuracy and relevance. When these expectations weren't met due to NLP limitations, it led to frustration and a lack of trust in the technology.
  • Integration into Clinical Workflows: For AI to be useful in medical training, it needs to simulate or integrate seamlessly into real-world clinical workflows. This requires a deep understanding of how medical professionals communicate and make decisions, which was a challenging feat for earlier AI systems.


Broader Impact:

  • Trust and Reliability: These limitations impacted the trust and perceived reliability of AI systems among medical professionals and educators.
  • Utility in Education and Training: For an AI system to be an effective educational tool, it needs to understand and respond accurately to a wide range of medical queries and scenarios. Shortcomings in this area limited their effectiveness as educational tools.



While it's challenging to pinpoint a single reason for the limitations of early AI in medical education, the challenges surrounding NLP and contextual understanding were particularly significant. They directly impacted the AI's ability to function effectively in the complex, nuanced world of medical education and practice. As AI technology has advanced, newer generations of AI systems have made significant strides in addressing these challenges.


  I agree. Being able to interact with humans in natural language makes a world of difference. Let's focus on the use case of symptom checking in a patient's EHR record. Are there any AI co-pilots effective for this use case?


 Yes, the use of AI co-pilots for symptom checking in electronic health records (EHR) is a growing area of application, and several advancements have been made in this field. AI co-pilots in this context are designed to assist healthcare professionals by analyzing patient data, including symptoms, medical history, lab results, and more, to help in diagnosis and decision-making. Some key aspects and examples include:


AI Tools for EHR Analysis

  • Natural Language Processing (NLP): Advanced NLP algorithms can interpret and extract relevant information from unstructured data in EHRs, such as physician's notes or patient histories.
  • Symptom Checkers and Diagnostic Assistants: AI-driven symptom checkers can analyze the symptoms recorded in EHRs and suggest possible diagnoses or further tests. Examples include IBM Watson Health and Google's DeepMind Health.
  • Predictive Analytics: Some AI systems use predictive modeling to identify patients at risk of certain conditions, enabling early intervention based on their EHR data.
  • Integration with Clinical Decision Support Systems (CDSS): These systems are increasingly using AI to enhance their capabilities, offering more accurate and personalized recommendations.
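
To make the NLP extraction idea above concrete, here is a toy Python sketch of pulling symptoms out of a free-text note, with crude negation handling. The symptom lexicon, negation cues, and example note are all invented for illustration; real systems rely on clinical terminologies (e.g., SNOMED CT) and validated clinical NLP pipelines rather than keyword lists:

```python
import re

# Toy symptom lexicon -- a real system would use a clinical
# terminology such as SNOMED CT or UMLS, not a hand-made list.
SYMPTOM_LEXICON = {
    "chest pain", "shortness of breath", "fever", "cough",
    "headache", "nausea", "fatigue",
}

# Crude negation cues; real negation detection (e.g., NegEx-style
# algorithms) is considerably more careful than this.
NEGATION_CUES = ("no ", "denies ", "without ")

def extract_symptoms(note: str) -> dict:
    """Return symptoms mentioned in a free-text note,
    split into affirmed vs. negated mentions."""
    text = note.lower()
    affirmed, negated = [], []
    for symptom in sorted(SYMPTOM_LEXICON):
        for match in re.finditer(re.escape(symptom), text):
            # Look in a short window before the mention for a negation cue.
            window = text[max(0, match.start() - 20):match.start()]
            if any(cue in window for cue in NEGATION_CUES):
                negated.append(symptom)
            else:
                affirmed.append(symptom)
    return {"affirmed": sorted(set(affirmed)), "negated": sorted(set(negated))}

note = "Patient reports chest pain and fatigue. Denies fever. No cough."
print(extract_symptoms(note))
# → {'affirmed': ['chest pain', 'fatigue'], 'negated': ['cough', 'fever']}
```

Even this toy version shows why the task is hard: the window-based negation check will misfire on longer sentences, which is exactly the kind of contextual nuance discussed earlier.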


Examples of AI Co-pilots in Action

  • IBM Watson for Oncology: IBM's Watson has been used for oncology diagnosis and treatment recommendations, analyzing patient records and medical literature to suggest personalized treatment plans.
  • Google Health Studies: Google has conducted various health-related studies using AI to analyze EHRs for improved health outcomes, such as predicting patient medical events.
  • Epic Systems and Cerner: Major EHR vendors like Epic and Cerner are integrating AI functionalities into their systems for better data analysis and decision support.


Considerations and Challenges

  • Data Privacy and Security: Ensuring patient data confidentiality and compliance with regulations like HIPAA is paramount.
  • Accuracy and Reliability: The AI must provide accurate and clinically relevant recommendations, validated by medical professionals.
  • Interoperability: Effective integration with existing EHR systems is crucial for seamless operation.
  • User Training and Trust: Healthcare providers need training to use these systems effectively and must trust the AI's recommendations for them to be useful.
  • Ethical Concerns: The AI should avoid biases in decision-making and ensure equitable healthcare recommendations.



AI co-pilots in symptom checking and EHR analysis are proving to be valuable tools in healthcare. They can augment the capabilities of healthcare providers, lead to more efficient diagnosis processes, and potentially improve patient outcomes. However, their success depends on technological sophistication, seamless integration, user acceptance, and rigorous adherence to ethical and privacy standards. As these technologies continue to evolve and improve, their role in healthcare is likely to become increasingly significant.


  These AI co-pilot tools are not widely used, i.e., they are not as successful as the AI field had hoped. In comparison, do you think GenAI chatbots, like ChatGPT, have a better chance of succeeding as healthcare co-pilots, being widely used by students and doctors?


 The adoption of AI co-pilot tools in healthcare, including those for EHR analysis and symptom checking, has indeed been more gradual and cautious than initially anticipated by some in the AI field. The success of GenAI chatbots like ChatGPT in the healthcare sector, particularly for use by students and doctors, can be considered from several angles:


Advantages of GenAI Chatbots

  • Advanced Natural Language Processing: GenAI chatbots typically have superior natural language understanding and generation capabilities, making them more intuitive and easier to use for healthcare professionals and students.
  • User-Friendly Interface: ChatGPT-like models present information in a conversational manner, which can be more user-friendly compared to traditional AI tools, potentially leading to higher adoption rates.
  • Versatility and Broad Knowledge Base: These chatbots can provide a wide range of information, from medical knowledge to administrative guidelines, making them versatile tools in healthcare settings.
  • Accessibility and Convenience: Being primarily text-based and easily integratable into various platforms, GenAI chatbots can be more accessible to a broader range of users.


Challenges and Considerations

  • Clinical Accuracy and Reliability: The biggest challenge remains ensuring the clinical accuracy of the information provided, which is critical in healthcare.
  • Regulatory Compliance: Chatbots in healthcare must comply with strict regulations, including data privacy laws like HIPAA.
  • Integration with Healthcare Systems: Effective integration with existing healthcare systems and workflows is essential for practical use.
  • User Trust and Adoption: Building trust among healthcare professionals and students is crucial. This involves demonstrating the reliability and utility of the AI system in clinical and educational settings.
  • Ethical and Liability Issues: Addressing ethical concerns and liability issues, especially regarding misdiagnosis or misinformation, is paramount.


Potential for Success

  • Educational Use: For medical students, GenAI chatbots could be particularly useful for studying, accessing medical literature, and engaging in interactive learning scenarios. Their ease of use and conversational nature make them well-suited for educational purposes.
  • Clinical Assistance: For doctors, while direct clinical applications might be approached more cautiously due to the stakes involved, these chatbots can still serve as valuable tools for information retrieval, administrative tasks, and preliminary analyses.



GenAI chatbots like ChatGPT may have a better chance of becoming widely used in healthcare, particularly in educational contexts, due to their advanced NLP capabilities, user-friendly nature, and versatility. However, their success in clinical applications will heavily depend on addressing challenges related to accuracy, integration, regulatory compliance, and user trust. As the technology continues to evolve and these challenges are addressed, the potential for widespread adoption increases.


  Yes, using a GenAI co-pilot to analyze clinical cases, provide insights into possible causes, or make other predictions along the care delivery workflow for education and learning purposes will be a low-hanging fruit. Assuming a GenAI co-pilot can be proven effective in analyzing clinical cases, it may also be used to help with clinical research. What kinds of clinical research can GenAI assist with as a co-pilot?


 If a GenAI co-pilot like ChatGPT is proven effective in analyzing clinical cases, it can indeed be a valuable tool for various aspects of clinical research. Here are some areas where GenAI can significantly contribute:


1. Data Analysis and Pattern Recognition

  • Large-Scale Data Analysis: GenAI can process and analyze large datasets, including patient records, clinical trial data, and genomic data, identifying patterns and correlations that might not be evident to human researchers.
  • Phenotype-Genotype Correlation: AI can assist in correlating clinical phenotypes with genetic information, aiding in research on genetic diseases and personalized medicine.


2. Literature Review and Summarization

  • Systematic Reviews: AI can rapidly sift through thousands of published studies, helping researchers conduct comprehensive literature reviews efficiently.
  • Data Extraction and Synthesis: GenAI can extract key findings and data points from research papers, assisting in the synthesis of information across multiple studies.
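
As a minimal illustration of the data-extraction step, the sketch below pulls a reported sample size and p-values out of abstract text with regular expressions. The patterns and the example abstract are invented for illustration; production extraction pipelines are far more robust and typically combine NLP models with human review:

```python
import re

def extract_study_facts(abstract: str) -> dict:
    """Pull a few structured facts from abstract text:
    an enrolled sample size (e.g., 'N = 1,204') and reported p-values."""
    facts = {}
    # Sample size: 'n = 120' or 'N = 1,204' (commas allowed).
    n_match = re.search(r"\bn\s*=\s*(\d[\d,]*)", abstract, re.IGNORECASE)
    if n_match:
        facts["sample_size"] = int(n_match.group(1).replace(",", ""))
    # P-values: 'P = .03', 'p < 0.05', etc.
    facts["p_values"] = [
        float(p)
        for p in re.findall(r"\bP\s*[=<]\s*(0?\.\d+)", abstract, re.IGNORECASE)
    ]
    return facts

abstract = ("In this randomized trial (N = 1,204), the intervention reduced "
            "readmissions (P = .03) but not mortality (P = .41).")
print(extract_study_facts(abstract))
# → {'sample_size': 1204, 'p_values': [0.03, 0.41]}
```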


3. Hypothesis Generation and Testing

  • Identifying Research Gaps: By analyzing existing literature and data, AI can help identify gaps in knowledge and suggest potential areas for further research.
  • Predictive Modeling: AI can develop predictive models based on existing data, which can form the basis for new hypotheses in clinical research.


4. Clinical Trial Design and Management

  • Patient Recruitment: GenAI can help identify eligible patients for clinical trials by analyzing EHRs and matching patients to trial criteria.
  • Trial Monitoring: Continuous monitoring of trial data can be automated, with AI flagging potential issues or deviations in real-time.


5. Drug Discovery and Development

  • Drug Repurposing: AI can identify potential new uses for existing drugs by analyzing drug databases and scientific literature.
  • Molecular Modeling: In the field of drug design, AI can predict how different chemical structures may behave or respond to diseases.


6. Personalized Medicine

  • Treatment Customization: AI can analyze patient data to suggest personalized treatment plans based on individual genetic profiles, lifestyle factors, and disease characteristics.


7. Predictive Analytics for Public Health

  • Disease Outbreak Prediction: AI can analyze various data sources to predict disease outbreaks or public health trends.
  • Health Policy Impact Assessment: GenAI can assist in modeling the potential impacts of different health policies or interventions.


Challenges and Ethical Considerations

  • Data Quality and Bias: Ensuring the quality of data and addressing potential biases in AI algorithms is crucial.
  • Interpretability: The decision-making process of AI should be transparent, especially in clinical research contexts.
  • Ethical Use of Data: Complying with ethical standards and regulatory requirements, particularly regarding patient data privacy, is essential.
  • Collaboration with Domain Experts: Successful application of AI in clinical research requires close collaboration with clinicians, biostatisticians, and other domain experts.



The potential of GenAI as a co-pilot in clinical research is vast and exciting. With its ability to process and analyze large volumes of data, provide insights, and assist in hypothesis generation, it can significantly accelerate and enhance various aspects of clinical research. However, careful consideration of data quality, ethical issues, and collaborative approaches with human experts is crucial to realize its full potential.




Recap of Previous Issues of ELHS Newsletters


  • Google reported its medical GenAI chatbot passed medical licensing exam tests.
  • UCSF educators have outlined the potential impact of GenAI in medical education.
  • Stanford researchers examined GenAI vs. medical student performance on medical exams.
  • GenAI transformation requires system-level changes, including learning health systems.
  • Generative AI (GenAI) chatbot, ChatGPT, has passed medical licensing exams.
  • JMIR Medical Education has issued a call for papers for the role of GenAI in medical education.
  • GenAI is transforming medical education.
  • Stanford University AI+Education Summit showcased research in AI and education.
  • PCORI and AHRQ are investing $50M to create LHS Training and Research Centers.
  • National Academy of Medicine has called for action: Health Professional Education must incorporate AI training to prepare the workforce.
  • AI has been a driving force behind the evolution of medical education in recent years.