“It is inaccurate, and I am worried about the consequences.”

“It can be inaccurate and can raise academic integrity issues.”

“Oversimplification of ideas, wrong information, and unknowing plagiarism.”

“People use AI for the wrong reasons all the time.”

  • 94% of students believe AI is not equally accurate in all areas of study.
  • 90% of students believe AI should be transparent about its limitations when answering prompts.
  • 87% of students believe AI’s effectiveness depends on the quality of user prompts.
  • 75% of students believe AI provides inaccurate answers to prompts.
  • 62% of students believe AI provides overly simplified answers to prompts.

“I’d be interested in discussing ethical ways to use it, if they think there are applications of AI in their course that are not cheating, that contribute to learning rather than take away from it.”

“I would say that AI is developing quickly and rather than completely banning it, I think it makes more sense to give students guidelines of how to use it, have good technology that detects people totally relying on AI and warn about the dangers of relying on AI too heavily (climate implications, how it harms people’s own learning, etc.).”

AI Ethics Learning Toolkit: “Can We Trust AI?”

  • “A major issue is AI hallucination: an instance where an AI model generates misleading, inaccurate, or entirely fabricated content, often without a clear basis in its training data. While some of these errors are easy to spot, others are subtle and difficult to detect, making them potentially dangerous. Even when AI-generated information is cited, a recent study found that leading AI chatbots incorrectly cite their sources 60% of the time.”
  • Ask students: “Who do you think should be responsible for false information generated by AI? The person using the AI, the company that made it, or someone else? Why do you think that?”

Pushing Boundaries: How One Duke Professor is Reimagining Learning with Generative AI

  • “[Brinnae] Bent encourages her students to engage with GenAI tools, but emphasizes the importance of doing it with integrity and intentionality. Her expectations are explicit: students must cite the tools they use… and students should only use them after making their own initial attempts. In her syllabus, she clearly states, ‘These tools should be used to augment your learning, not as a crutch to hasten work you procrastinated on,’ and she warns students that these tools are prone to error.”

Artificial Intelligence Policies: Guidelines and Considerations

  • “Another aspect of AI literacy is understanding the ethical use of AI and its limitations. When students have information about the hallucinations, bias, and inaccuracies of generative AI, it underscores why AI is not a shortcut to good results.”

Recommendations By & For Students

  • Encourage students to cross-verify information: “Always verify synthetic information, especially when AI provides sources or citations. Check these sources to confirm their credibility and ensure AI-generated interpretations align with the original material. If AI outputs do not provide references, cross-check claims against trustworthy human-written resources like academic articles, books, or reputable websites. Never rely on AI output as your sole authority for important information, decisions, or projects.”