
Tackling Infodemics and Malicious LLMs in the Era of AI-Driven Information Warfare

Updated: Oct 17, 2023




An infodemic is an overabundance of information, both accurate and inaccurate, about a particular topic or event, making it difficult for people to find trustworthy sources and reliable information. In the context of public health, an infodemic can occur during a disease outbreak when there is an excess of information, often spread rapidly through social media and other communication channels, that can lead to confusion, panic, and misinformation. Infodemics create challenges for public health officials who are trying to disseminate accurate information and mitigate the spread of a disease. The COVID-19 pandemic, for example, incited an unprecedented infodemic that highlighted the impact of misinformation, disinformation, and information spread with malicious intent on the effectiveness of public health action and national health security.


As I observe the rapid development of Large Language Models (LLMs), I am deeply concerned about the potential weaponization of these technologies by malicious actors to spread even greater misinformation and sow mistrust amid disasters and public health emergencies. State and non-state actors may exploit these advanced AI systems, which can generate content quickly, at volumes capable of overwhelming and gaming algorithmic search engines and social platforms, and which produce text that is virtually indistinguishable from human-written information. There have already been demonstrations of LLMs generating and posting thousands of articles through automated scripts at very low cost. Applied maliciously, this could result in a deluge of misinformation and disinformation – an even worse infodemic than the one experienced during the COVID-19 pandemic – that undermines trust in institutions, destabilizes societies, and polarizes populations. We may see even greater levels of misinformation and mistrust during the forthcoming 2024 Presidential election and in future disasters and public health emergencies.


The current national public health discussions on infodemics and building public trust often focus on the perceived root causes of mistrust, such as misinformation, politicization, economic inconvenience, and inconsistent messaging, and on ways to combat them. The dialogue touches on fostering media literacy, improving communication, building trust with communities, and strengthening institutional transparency as ways to fight back. While these are important aspects, the conversation often fails to consider the actors behind the misinformation, their motives and resources, and their use of emerging tools to sow misinformation at rates far exceeding the capacity of traditional public health communications. In my opinion, these discussions are analogous to treating the symptoms while ignoring, or remaining naïve about, the disease.


An infodemic can be likened to guerrilla warfare, as it involves the deployment of AI-generated content in a highly decentralized and unpredictable manner. This is in contrast to conventional warfare, which typically features organized and hierarchical structures with clear battle lines. Just as guerrilla warfare poses unique challenges for traditional military forces, the infodemic challenges government agencies and citizens to adapt to unconventional threats and develop new strategies to counter the malicious use of LLMs. Governments, public health professionals, and citizens are largely unprepared for the rapidly evolving landscape of AI-generated content.


Much like the early days of the nuclear age, when governments and societies were still grappling with the implications of this powerful new technology, the infodemic exposes gaps in understanding, regulation, and defense. The challenge lies in developing the necessary tools, strategies, and regulations to confront these new threats and protect the integrity of the information ecosystem.


Federal, state, and local governments, including agencies like the CDC, NIH, and ASPR, need to fully leverage LLMs and AI technologies in a counteroffensive against the volume of malicious content that will soon be generated. By harnessing these advanced tools, governments can develop more effective communication strategies, monitor the spread of misinformation, and rapidly counter disinformation with accurate, reliable information. Specific government actions include:

  • Establish specialized teams to monitor, analyze, and quickly respond to disinformation campaigns, including the use of LLM-generated content, in order to mitigate their impact and prevent the spread of false information.

  • Allocate resources to develop and implement advanced detection algorithms and tools capable of identifying malicious content generated by LLMs, and deploy these technologies in partnership with relevant stakeholders (a simple illustration follows this list).

  • Invest in grassroots movements and platforms of concerned citizens, journalists, and experts focused on combating the spread of false information by promoting critical thinking, media literacy, and transparent fact-checking initiatives. Leveraging decentralized and blockchain-based platforms, these movements can create a new, resilient information ecosystem that is less susceptible to manipulation.

  • Partner with social media platforms, search engines, and other online content providers to develop and implement strategies to identify and remove malicious content, as well as to promote credible information sources.

  • Educate the public about the risks and signs of misinformation, disinformation, and malicious use of LLMs. Promote critical thinking and media literacy skills to help people identify and verify information sources.

  • Advance innovation in the fields of AI detection, content authentication, and decentralized information systems, so that new technologies can help users discern between genuine and AI-generated content and empower them to make more informed decisions.
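
As a concrete, if simplified, picture of what the detection item above might involve, the sketch below trains a small text classifier to flag posts whose phrasing resembles machine-generated content. The tiny training set, feature choice, and thresholding are placeholders of my own, not any agency's actual method; real detection systems would draw on much larger labeled corpora and richer signals such as perplexity, stylometry, and account behavior.

```python
# Minimal sketch: flag posts whose phrasing resembles machine-generated text.
# The labeled examples below are invented placeholders; a real system would be
# trained on large corpora of verified human- and AI-written content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: label 1 = suspected AI-generated, 0 = human-written.
texts = [
    "Officials confirmed the clinic will reopen Monday after storm repairs.",
    "Neighbors organized a supply drive for flood victims over the weekend.",
    "In conclusion, it is important to note that the aforementioned topic is important.",
    "This comprehensive overview will discuss the topic in a detailed and comprehensive manner.",
]
labels = [0, 0, 1, 1]

# TF-IDF word/bigram features feeding a logistic regression classifier.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Score an incoming post; values near 1.0 suggest machine-generated phrasing
# and would route the post to a human reviewer rather than trigger removal.
new_post = "It is important to note that this comprehensive overview discusses the topic."
score = detector.predict_proba([new_post])[0][1]
print(f"Probability the post is AI-generated: {score:.2f}")
```

Even a crude scorer like this would be only one input into a human-review workflow; it is meant to show the shape of the tooling, not a finished detector.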

To build on previous discussions, the National Academies is hosting Navigating Infodemics and Building Trust during Public Health Emergencies: A Workshop on April 10 - 11 to generate actionable, targeted insights for state, local, territorial, and tribal agencies and officials to prevent and respond to the negative health effects of an infodemic. This presents an opportunity to discuss not just how to address infodemic symptoms, but how to truly attack an underlying cause of the disease, now and as it may spread and expand in the coming years: the malicious use of LLMs. Some questions that could be explored during this important workshop, and other national discussions, include:

  • How can we identify and assess the potential risks and threats posed by the malicious use of LLMs during public health emergencies and other crises?

  • What are the most effective strategies for detecting and countering AI-generated content, particularly content created with the intent to misinform or manipulate public opinion?

  • How can federal, state, and local governments collaborate with the private sector, academia, and civil society to develop comprehensive approaches to tackle infodemics fueled by the weaponization of LLMs?

  • What role should social media platforms, search engines, and other online content providers play in mitigating the spread of AI-generated disinformation, and how can governments partner with these entities to develop effective countermeasures?

  • How can we improve public awareness and understanding of the risks and signs of misinformation, disinformation, and malicious use of LLMs? What initiatives can be implemented to promote critical thinking and media literacy skills?

  • What new technologies or innovations should be prioritized in the fields of AI detection, content authentication, and decentralized information systems to better protect against the malicious use of LLMs?

  • How can we leverage decentralized and blockchain-based platforms to create a more resilient information ecosystem that is less susceptible to manipulation by malicious actors using LLMs?

  • What ethical considerations and regulatory frameworks should be in place to prevent the misuse of LLMs and ensure responsible AI development?

  • How can international cooperation and global standards be developed to address the weaponization of LLMs and other AI technologies, ensuring a safer and more secure digital landscape for all?

The challenges posed by the weaponization of LLMs and their role in exacerbating infodemics during public health emergencies and other crises must not be underestimated. The rapid advancement and malicious use of these technologies can have dire consequences for trust in institutions, social cohesion, and the effectiveness of public health responses. It is crucial that federal, state, and local governments, in collaboration with the private sector, academia, and civil society, take proactive measures to understand and counter the threats posed by AI-generated content.


To mount a successful counteroffensive against malicious LLM use, we must invest in advanced AI technologies and harness their potential for good. Governments should establish specialized teams to monitor, analyze, and rapidly respond to disinformation campaigns, leveraging LLMs to create targeted, accurate, and persuasive content that counters false narratives. Additionally, partnering with social media platforms, search engines, and other online content providers can enable the development of AI-driven tools to detect and remove malicious content while promoting credible, verifiable information sources; one simple approach to verifying that content actually comes from an official source is sketched below.
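
To make content authentication less abstract, here is a minimal sketch of one approach: an agency signs its bulletins with a private key, and platforms verify the signature against the agency's published public key before labeling a post as official. The key handling, bulletin text, and workflow are illustrative assumptions on my part; a production deployment would rely on managed keys and emerging provenance standards such as C2PA.

```python
# Minimal sketch of content authentication with digital signatures.
# Key generation, the bulletin text, and the verification flow are illustrative;
# real deployments would use managed keys and a provenance standard such as C2PA.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The agency generates a signing key once and publishes the public half.
agency_private_key = Ed25519PrivateKey.generate()
agency_public_key = agency_private_key.public_key()

# The agency signs each official bulletin before distribution.
bulletin = b"Boil-water advisory lifted for the downtown district as of 6 p.m."
signature = agency_private_key.sign(bulletin)

# A platform checks the signature before labeling the post as verified.
try:
    agency_public_key.verify(signature, bulletin)
    print("Signature valid: content originated from the agency.")
except InvalidSignature:
    print("Signature invalid: treat the content as unverified.")
```

Signed provenance does not stop false content from circulating, but it gives platforms and readers a cheap, automatable way to distinguish official guidance from imitations.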


By investing in detection and authentication technologies, promoting media literacy and critical thinking, and fostering collaboration among stakeholders, we can work together to build a more resilient and trustworthy information ecosystem. The upcoming National Academies workshop provides an essential platform for generating actionable insights and strategies to address the risks associated with LLMs and infodemics, ensuring that we are better prepared to navigate the complex digital landscape of the future.
