Cybersecurity Challenges and Mitigations for LLMs in DoD Applications
DOI: https://doi.org/10.34190/eccws.24.1.3542
Keywords: Large Language Models, Department of Defense, Cybersecurity Challenges
Abstract
Great power competition has escalated globally, making it increasingly important for the Department of Defense (DoD) to adopt artificial intelligence (AI) technologies that are both advanced and secure. Large language models (LLMs), which generate text, code, images, and other digital content based on the datasets used in training, have gained attention for their potential in DoD applications such as data analysis, intelligence processing, and communication. However, because of their complex architecture and extensive data dependencies, integrating LLMs into defense operations presents unique cybersecurity challenges. If not properly managed, these risks could pose severe threats to national security and mission integrity. This survey paper categorizes these challenges into vulnerability-centric risks, such as data leakage and misinformation, and threat-centric risks, including prompt manipulation and data poisoning, providing a comprehensive framework for understanding the potential risks of LLMs in DoD settings. Each category is reviewed to identify the primary risks, current mitigation strategies, and remaining gaps, ultimately highlighting where further research is needed. By summarizing the state of the art in LLM cybersecurity, this paper offers a foundational understanding of LLM security within the DoD. By advocating for a dual approach that considers both the evolving nature of cyber threats and the operational needs of the DoD, it aims to provide actionable recommendations to guide ongoing research on the integration of LLMs into DoD operations.
License
Copyright (c) 2025 European Conference on Cyber Warfare and Security

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.