The Dual-Edged Sword: Generative AI Health Assistants and the Proliferation of Cyber-Biological Threats

Abdulrahman Humidan O Alsuhaymi (1), Abdulaziz Ibrahim Abdulalrhman Alsarrani (1), Mohammed Saleh Saleem Alraddadi (2), Faris Salamh Aljohani (3), Abdulmajid Maneh Matar Alharbi (4), Wed Ali Alwan Fadhel (5), Mohammed Saleh Ateeq Alharbi (6), Fahed Awad Albalawi (6), Mohmmed Owaidh Almutairi (7), Muteb Ali Alshammari (8), Mohammed Shuwayt Alsubaie (8), Mubarak Fayez Saleh bin omran (9)
(1) Al-Miqaat General Hospital, Almadinah, Ministry of Health, Saudi Arabia,
(2) Oqlat Al-Soqour Hospital, Al-Qassim, Ministry of Health, Saudi Arabia,
(3) King Salman bin Abdulaziz Medical City, Al-Madinah Al-Munawwarah, Ministry of Health, Saudi Arabia,
(4) Al Miqat Hospital City, Al-Madinah Al-Monawara, Ministry of Health, Saudi Arabia,
(5) Sabya General Hospital, Ministry of Health, Saudi Arabia,
(6) Al-Rafai'a General Hospital, Al-Jamsh, Riyadh Region, Third Health Cluster, Ministry of Health, Saudi Arabia,
(7) Al Abdaliyah Primary Health Care Center – Riyadh Second Health Cluster, Ministry of Health, Saudi Arabia,
(8) Maternity & Children Hospital – Hafar Al-Batin, Ministry of Health, Saudi Arabia,
(9) Ramah General Hospital, Ministry of Health, Saudi Arabia

Abstract

Background: The integration of generative artificial intelligence (AI) into healthcare, particularly through AI health assistants for diagnostic support, clinical decision-making, and drug discovery, represents a paradigm shift in medicine. However, these powerful tools, trained on vast biomedical datasets, possess inherent dual-use potential. Their very capabilities—to understand, generate, and optimize complex biological information—could be maliciously repurposed to lower barriers to the creation of biological threats, disseminate dangerous misinformation, or circumvent established biosecurity protocols.


Aim: This narrative review aims to analyze the emerging risk landscape where generative AI health assistants intersect with biosecurity. 


Methods: A comprehensive literature search was conducted across PubMed, IEEE Xplore, ACM Digital Library, and preprint servers (arXiv, bioRxiv) for English-language publications from 2010 to 2024. 


Results: The review identifies three primary threat vectors: the AI-accelerated design of biological pathogens or toxins, the generation of hyper-realistic biomedical misinformation to undermine public health, and the AI-facilitated circumvention of physical and digital biosecurity controls. The analysis highlights critical gaps in governance, technical mitigation, and practitioner awareness.


Conclusion: Generative AI health assistants necessitate a fundamental rethinking of biosecurity in the digital age. Proactive, multidisciplinary collaboration among AI developers, biomedical researchers, security experts, ethicists, and policymakers is essential to develop and implement robust technical, ethical, and regulatory guardrails. Failure to preemptively address this dual-use dilemma could erode the immense benefits of medical AI while introducing unprecedented global catastrophic biological risks.



Primary contact: Abdulrahman Humidan O Alsuhaymi (AbhuAlsuhaymi@moh.gov.sa)