AI Ethics Learning Toolkit

Does AI Spread Mis/Disinfo?

“The popular emergence of generative AI has deepened uncertainty in an already-skeptical information environment, leading people to suspect that anything could be the product of synthetic media.”

– Charlie Warzel, technology journalist at The Atlantic

From Taylor Swift deepfakes to synthetic robocalls impersonating politicians, AI tools have the potential to turn any message into a viral disinformation campaign. Disinformation refers to “false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit.” Misinformation is similar in nature, but those who share it may do so accidentally and without intent to harm. AI models aren’t human; they possess no intent to harm, deliberate or otherwise. So, what’s the concern? Some argue that the ready availability and low cost of AI tools could help bad (human) actors spread higher-quality mis/disinformation. Others argue that this danger is overblown and that there is insufficient evidence-based research to support the hype. While it may be some time before we fully understand the impact of AI-generated mis/disinformation, students should be aware of the potential ways in which AI could be used to manipulate text, images, video, and information sources, and they should remain critical consumers and sharers of the information they come across.

Learning Activities

🗣️ Conversation Starters: A Few Questions to Get the Discussion Going


  • Have you ever come across a deepfake or other mis/disinformation online? How could you tell?
  • What are some scenarios you can imagine where mis/disinformation could cause real-world harm?
  • Who do you think is most susceptible to being manipulated by mis/disinformation? What makes them more vulnerable to being swayed by it?

💡 Active Learning with AI: Fun Ways to Explore AI’s Strengths and Limitations


  • Students can create fake content with AI for a faux disinformation campaign. Share the prompt and the AI outputs, then have the class react.
  • Students can discuss a case study from the Disinformation Studies syllabus. Case studies include Japanese incarceration, media and the AIDS crisis, the myth of the “Welfare Queen,” and more.
  • Instructors can choose examples from articles tagged #deepfake on Snopes.com for students to discuss. Ask students to reflect on how the Snopes authors fact-checked each claim.

🎓 Disciplinary Extensions: Ideas for Exploring AI’s Impact in Specific Fields

  • Psychology: Research has found that disinformation is more persuasive when it is targeted and personalized. What are some of the psychological phenomena at play in the creation and consumption of disinformation?
  • History: Students could look at the historical roots of disinformation and make connections to modern-day AI narratives.
  • Arts & Humanities: What is the role of visual/media arts in disinformation/propaganda campaigns?
  • Computer Science/Engineering: What are some of the most promising detection technologies that could help identify fake content?

Resources

Scholarly Recommendations


  1. Directorate-General for Communications Networks, Content and Technology (European Commission). (2018). A multi-dimensional approach to disinformation: Report of the independent High level Group on fake news and online disinformation. Publications Office of the European Union. https://data.europa.eu/doi/10.2759/739290
  2. Matz, S. C., Teeny, J. D., Vaid, S. S., Peters, H., Harari, G. M., & Cerf, M. (2024). The potential of generative AI for personalized persuasion at scale. Scientific Reports, 14(1), 4692. https://doi.org/10.1038/s41598-024-53755-2
  3. Simon, F. M., Altay, S., & Mercier, H. (2023). Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-127