AI Ethics Learning Toolkit

Does AI Harm Critical Thinking?

“[AI] could completely reorient our relationship to knowledge, prioritizing rapid, detailed, abridged answers over a deep understanding and the consideration of varied sources and viewpoints.”

Matteo Wong, technology journalist at The Atlantic

Artificial intelligence is increasingly integrated into critical thinking and decision-making across research, government, and industry. While AI enables data analysis at unprecedented speed and scale, overreliance on AI can erode an individual's critical thinking skills. In higher education, researchers have found that university students who used Large Language Models (LLMs) to complete writing and research tasks experienced reduced cognitive load but demonstrated poorer reasoning and argumentation skills than students using traditional search methods. Another study found that students using LLMs focused on a narrower set of ideas, resulting in more biased and superficial analyses. Critical thinking, characterized by the evaluation of information, the questioning of assumptions, and the formation of independent judgments, remains a uniquely human skill that AI cannot fully replicate. Rather than replacing human reasoning, AI should function as a tool to enhance it. Students need to be aware of AI's limitations, biases, and errors so that they do not uncritically outsource their judgment to AI-generated content.

Learning Activities

🗣️ Conversation Starters: A Few Questions to Get the Discussion Going


  • What does ‘critical thinking’ mean to you? Do you think AI will become capable of ‘critical thinking,’ or is it something uniquely human?
  • In what ways could overreliance on AI harm our critical thinking skills in school or at work? Reflect on your own experiences with AI when considering the question.
  • What strategies could students use to balance the benefits of AI with the need to develop their own critical thinking skills? 
  • How do you evaluate the accuracy of AI-generated information? What strategies do you use to fact-check?

💡 Active Learning with AI: Fun Ways to Explore AI’s Strengths and Limitations


  • Students compare AI-generated summaries or arguments with ones they (or a peer) create and discuss differences in depth, nuance, and accuracy. Reflect on your own writing process. How does your writing and thinking process compare to what AI is doing? 
  • Students use AI to generate answers to complex questions, then cross-check with scholarly sources, identifying inconsistencies or biases. What did you discover? What was your process for checking the chatbot’s response?
  • Have students prompt AI to generate arguments for and against a controversial topic, then evaluate the reasoning and identify missing nuances. How might your own biases or views affect your evaluation?
  • No AI Alternative: Provide students with two versions of an argument (one AI-generated, one human-generated) without revealing the source. Discuss which one they found more convincing and why.

🎓 Disciplinary Extensions: Ideas for Exploring AI’s Impact in Specific Fields


  • Writing: Engage with a chatbot as a writing partner for an assignment. Reflect on the process: what role did you play in guiding the chatbot? How did the chatbot’s suggestions influence your writing? How different would it be to engage with a student/peer reviewer, as compared to a chatbot? 
  • Philosophy & Ethics: Explore whether reliance on AI undermines autonomy and personal responsibility in decision-making.
  • Media & Journalism: Examine how AI-generated misinformation influences public opinion.
  • Psychology & Neuroscience: Explore how brain regions associated with trust and decision-making respond to AI-generated information. Investigate whether AI can help reduce cognitive overload or, paradoxically, increase individuals’ dependency and leave them less capable of handling complex decisions without AI assistance. Relevant readings: Kosmyna et al. and Stadler et al.

Resources

Scholarly Recommendations


  1. Wong, M. (2024, November 8). AI Is Killing the Internet’s Curiosity. The Atlantic. 
  2. Stadler, M., Bannert, M., & Sailer, M. (2024). Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry. Computers in Human Behavior, 160, 108386.
  3. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025, June 10). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv. https://arxiv.org/abs/2506.08872v1