Paving the Way for Global Health Equity with GenAI, ML, and LHS (Learning Health Systems)
The Historic Coup at OpenAI Revealing the Urgent Need for Government Regulation
Implications for Healthcare Organizations from President Biden's Executive Order on Artificial Intelligence
The Dean of Stanford Medical School Discussing the Promising Applications of AI in Healthcare
What Does Responsible AI Mean in Healthcare?
Comparing the Dangers of Generative AI and Nuclear Weapons
~
On November 17, OpenAI CEO Sam Altman was unexpectedly dismissed by the board of the non-profit organization, causing a stir in Silicon Valley. Not only did Microsoft, its biggest investor and partner, express a desire for Sam's return, but a significant portion of OpenAI’s staff also threatened to resign if he was not reinstated.
On November 21, just a few days later, Sam was reappointed to his position. The exact reasons behind this tumultuous event at OpenAI remain unclear.
However, one clear implication from this historic incident is that financial interests often overshadow ethical concerns, even amid potential threats posed by Generative AI (GenAI) to humanity. This situation serves as a warning that the progress of GenAI may have reached a point where self-regulation by the industry is inadequate to prevent catastrophic outcomes. Consequently, swift governmental intervention is necessary to ensure responsible AI development and application moving forward.
What does this mean for healthcare GenAI? The healthcare industry has faced ethically alarming technologies before, such as human cloning, so it may be better equipped to handle the potential dangers GenAI poses to public health. Medical education should prioritize GenAI ethics, and urgent, robust regulation of GenAI research, development, and practice is essential.
AMA President Dr. Jesse Ehrenfeld, in a recent interview with Healthcare Finance News, predicted that "Physicians who use AI will replace those who don't." He emphasized that healthcare AI must be developed and implemented ethically, equitably, responsibly, and transparently.
As a health scientist with a broad background, I realized about a decade ago that the Learning Health Systems (LHS) vision created by the National Academy of Medicine is the right direction for the future of healthcare. However, it wasn't until the recent advances in machine learning and AI that I found suitable technologies for implementing ML-enabled LHS. In particular, GPT-4, the model behind ChatGPT and a prime example of GenAI, has been shown in our published studies and many others to make acceptable predictions in various healthcare settings. It quickly became clear to me that GenAI is the technology I had been searching for to build more equitable and efficient LHS.
AI’s journey to its current paradoxical situation has been long and complex. Professor Geoffrey Hinton, a deep learning pioneer, began working on neural networks that mimic the human brain about five decades ago. Although he helped develop the backpropagation technique in the 1980s, his neural networks did not show impressive results until the 2010s. His startup was eventually acquired by Google. One of his students, Ilya Sutskever, was recruited by Elon Musk to co-found OpenAI in 2015 and later led the development of ChatGPT.
Shocked by the capabilities of new large language models like GPT-4, Hinton left Google in May 2023 so that he could speak freely about the risks of GenAI and advocate for controlling it before it's too late. “These things are totally different from us,” he said in an interview with MIT Technology Review. “It’s as if aliens had landed, and people haven’t realized because they speak very good English.”
Hinton identified six potential risks from current AI models: bias and discrimination; unemployment; online echo chambers; fake news; “battle robots”; and existential risks to humanity.
Tech mogul Elon Musk has been warning since 2014 that AI poses the greatest existential threat to the human race. He has called for AI regulation.
Urgent need for regulations
At the UK's recent AI Safety Summit, Musk said: “We’re not stronger or faster than other creatures, but we are more intelligent. Now, for the first time in human history, we are faced with something that will be far more intelligent than us. It’s unclear if we can control such a thing, but we can aspire to guide it in a direction beneficial to humanity.”
President Biden highlighted the urgency of this issue by introducing a landmark executive order on AI on October 30, 2023. Over the next 6-12 months, we hope to see the emergence of strong regulations for GenAI, aimed at alleviating widespread concerns.
Keep engaging with AI!
AJ
~
Published papers, recent news, and significant events, woven into a coherent narrative around the main topic.
Mello MM, Shah NH, Char DS. President Biden’s Executive Order on Artificial Intelligence—Implications for Health Care Organizations. JAMA. Published online November 30, 2023. doi:10.1001/jama.2023.25051
[2023/11] “It must be governed,” President Biden remarked on October 30, 2023, introducing a landmark executive order on artificial intelligence (AI). Addressing the explosive promise and grave risks posed by AI, the executive order outlines the Biden administration’s vision for ensuring that AI is developed and used responsibly.
Menz BD, Modi ND, Sorich MJ, Hopkins AM. Health Disinformation Use Case Highlighting the Urgent Need for Artificial Intelligence Vigilance: Weapons of Mass Disinformation. JAMA Intern Med. Published online November 13, 2023. doi:10.1001/jamainternmed.2023.5947
[2023/11] This study examines whether artificial intelligence (AI) models can be manipulated for the generation of targeted health disinformation regarding vaccines and vaping.
Dr. Lloyd Minor, Dean of Stanford Medical School, discusses AI's promising applications in healthcare with David Westin on "Wall Street Week Daily." (Source: Bloomberg)
[2023/11/15] Dean Minor emphasizes AI’s potential contribution to equal access to healthcare. He also thinks AI will have a radical effect on how we educate physicians.
New Stanford initiative: RAISE-Health (Responsible AI for Safe and Equitable Health)
[2023/6] RAISE-Health is a joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of AI across biomedical research, education, and patient care. The initiative will address ethical and safety issues in AI innovation, define standards for the field, and convene experts on the topic.
NAM new project: Health Care Artificial Intelligence Code of Conduct (AICC)
[2023/6] The National Academy of Medicine (NAM) Leadership Consortium is collaborating with leading health, tech, research, and bioethics organizations to develop a Health Care AI Code of Conduct. The AICC aims to provide a guiding framework ensuring that AI algorithms and their applications in health, health care, and biomedical science perform accurately, safely, reliably, and ethically in the service of better health for all.
New Event: HIMSS AI in Healthcare Forum
[2023/12/15] You’ll explore real-world examples of the benefits and risks of AI's application in healthcare and leave equipped with the knowledge and confidence to take the next step in your AI journey. AMA President Dr. Jesse Ehrenfeld will deliver the closing keynote, "Unlocking the Potential: AMA's Initiatives in Advancing AI in Medicine."
~
GPTs are becoming an integral part of our society. AJ talks with GenAI to understand how machines view healthcare.
What is generative AI?
Generative AI refers to a type of artificial intelligence that can generate new content. This content can be in various forms such as text, images, audio, or video. Unlike traditional AI systems that are designed primarily for analysis or pattern recognition, generative AI is capable of producing new data that resembles human-generated content. Here are some key aspects:
Learning from Data: Generative AI systems are trained on large datasets. They analyze and learn from this data to understand patterns, styles, and structures inherent in the content. This training enables them to generate similar data.
Types of Generative AI Models: Popular models include Generative Adversarial Networks (GANs) and Transformer-based models like GPT (Generative Pre-trained Transformer) for text generation and DALL-E for image generation. GANs pit two models (a generator and a discriminator) against each other to improve the output, while transformers use attention mechanisms to generate high-quality, contextually relevant text or images (a minimal code sketch follows this answer).
Applications: Generative AI has a wide range of applications, such as:
Content Creation: Generating art, music, creative writing, and realistic images or videos.
Data Augmentation: Creating additional data for training machine learning models.
Personalization: Tailoring content to individual preferences in marketing or entertainment.
Simulation and Modeling: Creating realistic simulations in fields like architecture, game development, and scientific research.
Ethical Considerations and Challenges: Generative AI poses unique challenges, especially in the realm of ethics. Issues like the potential for generating misleading or harmful content (deepfakes), copyright concerns, and the impact on creative industries are critical areas of ongoing debate and regulation.
In summary, generative AI represents a cutting-edge field in AI that goes beyond interpretation and analysis to the creation of new, original content, opening up numerous possibilities across various sectors.
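To make the transformer-based text generation described above concrete, here is a minimal sketch in Python. It assumes the open-source Hugging Face transformers library (with PyTorch installed) and uses the small public gpt2 checkpoint purely for illustration; it is not the GPT-4 model discussed in this issue.

```python
# Minimal text-generation sketch.
# Assumptions: pip install transformers torch; the small public "gpt2"
# checkpoint is used for illustration only, not for clinical use.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI can support learning health systems by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with new, statistically likely text.
print(outputs[0]["generated_text"])
```

Running this prints the prompt followed by newly generated text; this basic generate-new-content behavior is what all the applications listed above build on.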
Why do Elon Musk and Geoffrey Hinton warn that GenAI poses the greatest existential threat to humanity?
Elon Musk and Geoffrey Hinton have warned that Generative AI (GenAI) could pose significant existential threats to humanity for several reasons:
Superintelligence Risk: Both Musk and Hinton have expressed concerns about the potential for AI to surpass human intelligence. If AI systems become more intelligent than humans, it could become challenging or even impossible to control them. Such superintelligent systems might develop goals misaligned with human values or interests, leading to unforeseen and potentially catastrophic consequences.
Autonomous Weapons: The development of autonomous weapons systems powered by AI could lead to new forms of warfare. These systems could make decisions to engage targets without human intervention, raising ethical concerns and the risk of unintended escalations in conflicts.
Misuse and Malicious Use: AI, especially advanced generative models, can be used to create deepfakes, synthetic media that can be indistinguishable from real content. This technology can be misused to spread misinformation, manipulate public opinion, impersonate individuals, and disrupt democratic processes.
Unemployment and Economic Disruption: AI's ability to automate tasks more efficiently than humans could lead to widespread unemployment. This economic disruption could create significant societal challenges, including increased inequality and social unrest.
Bias and Discrimination: AI systems can inherit and amplify biases present in their training data. These biases can lead to discriminatory outcomes in critical areas like hiring, law enforcement, and loan approvals, perpetuating systemic inequalities.
Loss of Human Autonomy: There's a concern that as AI systems become more integrated into daily life, they might erode human autonomy. People might become overly dependent on AI for decision-making, leading to a loss of critical thinking and decision-making skills.
Existential Risks: Both Musk and Hinton have suggested that the uncontrolled development of AI could lead to existential risks, where the very survival of humanity could be threatened by powerful AI systems with misaligned goals.
These concerns underscore the importance of careful and ethical development and deployment of AI technologies. It is crucial to establish robust ethical guidelines and regulatory frameworks to guide AI development and mitigate these risks.
Given the potential dangers of GenAI, what ethical guidelines and regulatory frameworks are appropriate for controlling the risks?
Given the potential dangers of Generative AI (GenAI), establishing ethical guidelines and regulatory frameworks is crucial for controlling its risks. These should be designed to ensure that AI development is aligned with human values and safety, and to mitigate potential negative impacts on society. Key elements of these guidelines and frameworks include:
Transparency: AI systems, especially those that generate content, should be transparent in their operations. Users should be aware when they are interacting with AI-generated content. Additionally, the processes behind AI decision-making should be as clear as possible to facilitate trust and understanding.
Accountability and Responsibility: Clear guidelines should be established to determine who is responsible for the outcomes of AI decisions. This includes addressing the legal and ethical responsibilities of AI developers, users, and, if applicable, the AI itself.
Privacy Protection: Strong data protection and privacy policies must be in place. AI systems often require vast amounts of data, which can include sensitive personal information. Ensuring that this data is used ethically and protected from misuse is critical.
Bias and Fairness: Efforts should be made to minimize and audit biases in AI systems. AI should be developed and tested to ensure it does not perpetuate existing societal biases or create new forms of discrimination.
Safety and Reliability: AI systems should be designed with robust safety features to prevent unintended consequences. This includes ensuring that AI behaves predictably and does not cause harm due to errors or vulnerabilities.
Public Participation and Inclusiveness: The development and deployment of AI should involve a broad range of stakeholders, including the public, to ensure diverse perspectives are considered. This helps in addressing the societal impact of AI technologies.
Regulation of High-Risk AI: Specific regulations should be put in place for AI applications that pose significant risks, such as autonomous weapons or deepfake technology. These regulations might include restrictions, special oversight, or outright bans.
International Collaboration: Since AI technology transcends borders, international collaboration is vital in creating global standards and regulations. This collaboration can help in addressing the global impact of AI and ensuring a harmonized approach.
Ethical AI Design and Deployment: Guidelines should promote the ethical design and deployment of AI, emphasizing human dignity, rights, freedoms, and cultural diversity.
Continuous Monitoring and Adaptation: As AI technology evolves, regulatory frameworks and ethical guidelines should also be updated. Continuous monitoring of the impacts of AI and adaptations to guidelines will be necessary to address emerging challenges.
These guidelines and frameworks should be developed collaboratively with input from technologists, ethicists, policymakers, industry experts, and other stakeholders to ensure they are comprehensive, practical, and effective in mitigating the risks associated with GenAI.
What does Responsible AI mean in healthcare?
Responsible AI in healthcare refers to the ethical development, deployment, and use of artificial intelligence in a manner that is safe, transparent, equitable, and aligned with the well-being of patients, healthcare providers, and society as a whole. Key aspects of Responsible AI in healthcare include:
Patient Safety and Well-being: The primary focus of AI in healthcare should be to enhance patient care and safety. AI applications should be rigorously tested and validated to ensure they do not pose any harm to patients.
Accuracy and Reliability: AI systems must be accurate and reliable, particularly in diagnostic procedures or treatment recommendations. The consequences of errors in healthcare are significant, so high standards of accuracy and reliability are crucial.
Data Privacy and Security: Patient data is highly sensitive. AI systems in healthcare must adhere to strict data privacy and protection laws, ensuring that patient data is securely stored and processed while maintaining confidentiality.
Bias and Fairness: AI algorithms must be free from biases that could lead to unequal treatment of patients based on race, gender, age, or socioeconomic status. This involves using diverse and representative data sets for training AI models and regularly auditing these models for biases (a minimal audit sketch follows this answer).
Transparency and Explainability: Healthcare providers and patients should be able to understand how AI systems make decisions. This transparency is crucial for trust and for clinicians to make informed decisions about integrating AI recommendations into patient care.
Accountability: There should be clear accountability for AI decisions in healthcare. This includes determining who is responsible for the outcomes of AI-based recommendations – the developers, the healthcare providers, or others.
Regulatory Compliance: AI in healthcare must comply with existing healthcare regulations and standards, including those related to medical devices and patient care.
Ethical Considerations: AI should be used in a way that respects patient autonomy, consent, and ethical medical practices. This includes ensuring that AI does not undermine the patient-clinician relationship.
Inclusivity and Accessibility: AI tools should be accessible to all patients, including those from underrepresented and underserved communities, and should be designed to address a wide range of healthcare needs.
Continuous Monitoring and Improvement: AI systems should be continuously monitored and updated to ensure they remain effective and relevant as medical knowledge and societal norms evolve.
Collaboration and Stakeholder Engagement: The development and deployment of AI in healthcare should involve collaboration with a wide range of stakeholders, including patients, healthcare providers, ethicists, and regulators.
Implementing Responsible AI in healthcare aims to leverage the benefits of AI technologies while minimizing risks and ensuring ethical, equitable, and patient-centered care.
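As one concrete illustration of the bias-auditing point above, a basic audit compares a model's performance across patient subgroups. The following minimal Python sketch uses invented records and group labels (hypothetical, for illustration only) to compute per-group sensitivity; a large gap between groups would be a red flag worth investigating.

```python
# Minimal fairness-audit sketch: compare true-positive rates across groups.
# All records below are hypothetical, for illustration only.
from collections import defaultdict

# (group, true_label, predicted_label) from a toy diagnostic model
records = [
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 1), ("group_b", 0, 1),
]

tp = defaultdict(int)   # correctly flagged positives per group
pos = defaultdict(int)  # actual positives per group

for group, truth, pred in records:
    if truth == 1:
        pos[group] += 1
        if pred == 1:
            tp[group] += 1

# Sensitivity (true-positive rate) per subgroup; large gaps suggest bias.
for group in sorted(pos):
    print(f"{group}: sensitivity = {tp[group] / pos[group]:.2f}")
```

In practice an audit like this would run on held-out clinical data, cover multiple metrics (sensitivity, specificity, calibration), and be repeated whenever the model or data changes.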
Which healthcare technologies in the past have caused a similar level of concern among the public?
Several healthcare technologies in the past have raised concerns similar to those currently surrounding Generative AI. These concerns often revolve around ethical implications, potential for misuse, and the impact on societal norms and individual privacy. Some notable examples include:
Genetic Engineering and CRISPR: The advent of genetic engineering, especially with the development of CRISPR-Cas9 gene-editing technology, raised significant concerns. The potential for "designer babies," unintended genetic consequences, and ethical dilemmas surrounding gene editing in humans have been hotly debated.
Stem Cell Research: Stem cell research, particularly involving embryonic stem cells, generated controversy due to ethical concerns regarding the use of human embryos. The potential for creating new life forms and implications for human identity and dignity were central to these debates.
Cloning: The cloning of Dolly the sheep in 1996 sparked widespread discussion and concern about the possibility of human cloning. Issues related to identity, ethics, and the nature of human reproduction were raised.
IVF and Assisted Reproductive Technologies: In vitro fertilization (IVF) and other assisted reproductive technologies faced ethical and moral scrutiny, especially in their early years. Concerns included the status of unused embryos, the implications of multiple births, and the psychological impacts on children born through these technologies.
Psychopharmacology: The development and use of psychotropic drugs, especially for mental health conditions, have raised concerns about overmedication, dependency, and the impact on the natural range of human emotions and behaviors.
Artificial Organs and Transplantation: Technologies related to artificial organs and transplantation, such as mechanical heart valves and organ transplants, initially faced questions about the nature of human identity and the ethics of prolonging life artificially.
Radiation and Imaging Technologies: The use of X-rays and later developments like CT scans raised concerns about radiation exposure and the potential long-term health effects, leading to debates about the risk-benefit balance in medical imaging.
Human Genome Project: The sequencing of the human genome brought concerns about privacy, genetic discrimination, and the potential for genetic information to be misused by employers, insurers, or governments.
Telemedicine: While not as ethically contentious as others, the rise of telemedicine and digital health raised questions about the quality of care, data security, and the potential loss of the personal touch in the doctor-patient relationship.
Biometric Monitoring Devices: Wearable technologies that monitor health have sparked concerns about privacy, data security, and the potential for constant health monitoring to create anxiety or lead to overdiagnosis.
Each of these technologies has brought significant benefits to healthcare, but they have also required careful consideration and management of ethical, social, and legal implications. Similarly, the ongoing development of AI in healthcare calls for a balanced approach that maximizes benefits while addressing legitimate public concerns.
Some experts think GenAI is more dangerous than nuclear weapons, as GenAI has the potential to become smarter than humans and take control of the human race. Do you agree?
The comparison between Generative AI (GenAI) and nuclear weapons in terms of danger is a complex and subjective matter. Both technologies possess distinct characteristics and potential risks. Here's a breakdown of the concerns and how they differ:
Generative AI Risks
Autonomous Intelligence: The primary fear with GenAI is that it could develop a level of intelligence and autonomy surpassing human control. This could lead to scenarios where AI makes decisions that are not aligned with human values or well-being.
Pervasiveness: Unlike nuclear weapons, which are controlled by specific entities (typically governments), AI technology is more widely accessible and can be integrated into numerous aspects of everyday life, potentially making its impact more pervasive.
Subtle Influence: GenAI can subtly influence public opinion, spread misinformation, or automate tasks in ways that might slowly and fundamentally alter society, potentially going unnoticed until significant changes have occurred.
Misuse and Unintended Consequences: The potential for misuse of AI, either intentionally (e.g., for malicious purposes) or unintentionally (through flawed or biased decision-making processes), is a significant concern.
Nuclear Weapons Risks
Immediate Physical Destruction: The most apparent risk of nuclear weapons is their capacity for immense immediate physical destruction and long-term environmental damage.
Controlled by Few: Nuclear weapons are typically under the control of nation-states, which poses a different kind of risk related to geopolitical tensions and the potential for catastrophic warfare.
Deterrence and Mutually Assured Destruction: The doctrine of mutually assured destruction (MAD) has been a stabilizing, albeit precarious, factor in preventing nuclear war. This kind of deterrent does not directly apply to AI.
Regulation and Treaties: There are international treaties and agreements in place aimed at preventing the proliferation and use of nuclear weapons. Similar comprehensive international regulations for AI are still in development.
Comparison and Perspective
Different Nature of Threats: While nuclear weapons pose a clear and immediate existential threat to humanity, the threats posed by GenAI are more subtle, complex, and multifaceted.
Predictability: Nuclear weapons, while enormously destructive, are somewhat predictable in their effects. The risks of GenAI, especially concerning superintelligence, are less predictable and more speculative.
Regulatory Challenges: Regulating AI poses unique challenges due to its dual-use nature, rapid development, and integration into civilian life.
Agreement or Disagreement
Agreement: If you agree that GenAI is more dangerous, it would be due to its unpredictable nature, potential for widespread misuse, and the difficulty in controlling its development and deployment globally.
Disagreement: If you disagree, it might be because nuclear weapons present a clear, immediate, and catastrophic threat to human survival, whereas the existential threats from AI are more speculative and potentially manageable with proper safeguards.
Ultimately, whether GenAI is more dangerous than nuclear weapons is a matter of perspective, dependent on how one weighs immediate physical threats against more abstract, yet potentially profound, long-term risks. Both require diligent oversight, ethical consideration, and global cooperation to mitigate their respective dangers.
~~~
Democratizing Generative AI in Healthcare to Help Achieve Global Health Equity
info@elhsi.org
Palo Alto, California, USA