Teaching with AI, Grounded in Fundamentals: Lessons from Duke’s Jun Yang

This summer, Duke introduced its AI Suite, a set of new tools that prompted many instructors to experiment with fresh ways of bringing AI into the classroom. One of those leading the charge is Jun Yang, the Knut Schmidt-Nielsen Distinguished Professor of Computer Science at Duke. In our conversation, Yang shared how he has actively integrated generative AI (GenAI) into his courses, both to enrich his students’ learning and to prepare them for the future of the discipline.

Perhaps Yang’s most important insight is that integrating AI into teaching isn’t a one-time shift but an ongoing journey of adaptation. Then again, isn’t that true of all meaningful approaches to teaching and learning?

Advice for Instructors Exploring AI

For educators just beginning to integrate AI into their courses, Yang offers simple but powerful advice: start by familiarizing yourself with the tools and understanding how your students actually use them.

With AI advancing so rapidly, it can be hard for educators to cut through the noise. Yang admits that “we are still figuring it out,” and encourages colleagues to start small by using GenAI to test and refine course materials. 

One practice he has adopted is running his assessments through GenAI to see how a student using these tools might perform. GenAI does very well on some assessments but struggles on others, and he turns those struggles into opportunities to teach students about GenAI’s limitations.
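
For instructors who want to try this themselves, here is one minimal way to do it in Python. This is only a sketch, assuming access to an OpenAI-compatible API through the openai package and an API key in the environment; the model name and the sample question are placeholders we made up, not Yang’s actual setup.

```python
# Minimal sketch: run an assessment question through a GenAI model to
# preview how a student relying on the tool might answer it.
# Assumes the `openai` package and an OPENAI_API_KEY environment
# variable; the model name and question below are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

question = (
    "Write a SQL query that returns the five customers with the "
    "highest total order amounts, given tables customers(id, name) "
    "and orders(customer_id, amount)."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have
    messages=[{"role": "user", "content": question}],
)

# Grade the answer against your own rubric. Where the model does well,
# the assessment may need rethinking; where it stumbles, you have a
# ready-made example of GenAI's limitations to discuss in class.
print(response.choices[0].message.content)
```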

Perhaps most importantly, Yang emphasizes that educators must identify the core skills in their discipline that remain essential in an AI world. These skills, he argues, should guide the learning objectives and shape the way courses evolve moving forward.


Fundamental Skills Are Still Fundamental

Yang stresses that mastering the core skills of computer science remains essential, even in an AI-driven world, and he works hard to instill this belief in his students. While AI can help in many areas, he notes that tasks such as specification, verification, debugging, and critical analysis are still difficult for these technologies.

To drive this point home, Yang is working with his colleague Eric Fouh to redesign his undergraduate course Introduction to Database Systems. The class is experimenting with a new semester-long project in which AI is expected to reduce the “grunt work,” while system design and specification become correspondingly more demanding. Students can use AI at carefully chosen moments and are prohibited from relying on it at others. This approach pushes students to navigate the intersection of AI assistance and their own critical thinking skills.

👉 Which skills in your field are so fundamental that, even if AI can do them, you’d still want your students to learn them? 
Check out LILE’s Artificial Intelligence and Assignment Design considerations.


Conversations About AI with Students

Yang urges instructors to try different approaches to bringing AI into the classroom, but stresses that ignoring it is not an option. Instead, he advocates for open discussions with students about its possibilities and limitations. These conversations could take different forms:

  • Setting clear expectations about when, how, and why AI use is appropriate. 
  • Exploring examples together of human- and AI-generated outputs.
  • Inviting reflection on how AI may shape their future professional practice.
  • Co-creating guidelines with students, so the process feels less imposed and more collaborative.

In his own classroom, students are usually allowed to use AI tools as long as they document and cite how they use them. Yang also demonstrates how AI can lead them astray and gives them opportunities to reflect on their logical understanding of a concept versus AI’s outputs. By surfacing pitfalls such as errors and “hallucinations,” he creates space for students to grapple with AI critically, without fear of being penalized for exploring its use.

This emphasis on openness echoes recent feedback from Duke students themselves. In a new research study conducted by Duke’s Center for Applied Research and Design in Transformative Education (CARADITE), Duke students voiced their wish that faculty would provide clear guidance on how students are expected to use AI responsibly. As one student explained, “I would ask [faculty] to define clear guidelines in the beginning of class about what exactly is considered ‘AI-generated content.’” Another went further, stating, “As an educator, it is your responsibility to educate your students about the ethics of AI, as well as its unreliability.”

These perspectives underscore what Yang promotes: the need for faculty to have open conversations about AI with their students. Instructors should proactively set boundaries and frame ethical considerations from the outset of their course.

👉 Do you want guidance on how to write your own AI policy and talk to students about AI? Check out LILE’s Artificial Intelligence Policies: Guidelines and Considerations. 


Where We Go Next with AI in Education

GenAI already has the potential to enhance learning, but reliability remains a concern. At the same time, Yang notes that humans, including teaching assistants and instructors, make mistakes as well. His advice to educators is to stay abreast of AI’s current strengths and weaknesses and adjust their teaching accordingly.

Yang reminds us that AI is not limited to generative models but includes a wide variety of tools. His research group is currently building systems that combine GenAI with symbolic reasoning techniques, which offer better interpretability and reliability, to scale teaching and provide personalized feedback the way a tutor might.
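
To make the pattern concrete, here is a toy sketch of how a symbolic check might anchor GenAI feedback in a database course. This is our own illustration of the general idea, not Yang’s system; the schema, queries, and function names are invented for the example.

```python
# Toy sketch of pairing symbolic reasoning with GenAI for feedback.
# A deterministic check decides *whether* a student's SQL answer is
# correct; a GenAI call (omitted here) would only be used afterward
# to phrase tutor-style feedback. Not Yang's actual system; the
# schema and queries are invented for illustration.
import sqlite3
from collections import Counter

def results_match(db_path: str, student_sql: str, reference_sql: str) -> bool:
    """Run both queries and compare their result sets as multisets."""
    conn = sqlite3.connect(db_path)
    try:
        student_rows = conn.execute(student_sql).fetchall()
        reference_rows = conn.execute(reference_sql).fetchall()
    finally:
        conn.close()
    return Counter(student_rows) == Counter(reference_rows)

# The symbolic verdict is interpretable and reliable: it either passes
# or yields a concrete mismatch. That verdict, plus the mismatching
# rows, could then be handed to a GenAI model to explain to the
# student *why* their query went wrong.
```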

Looking ahead, Yang predicts that some entry-level coding jobs may disappear while new opportunities are likely to emerge. He shares this perspective with his students to encourage them to concentrate on problems that AI cannot solve. His ultimate hope is that curiosity and passion for the discipline, rather than financial incentives, will guide the next wave of innovation in computer science.

Clearly, AI is transforming higher education, but as Yang’s approach at Duke shows, the fundamentals still matter. Integrating AI into teaching is less about replacing traditional methods and more about redefining what students need to master to thrive in an AI-driven world. By blending experimentation with clear boundaries, openness with rigor, and innovation with timeless skills, educators can prepare students to use AI responsibly and to lead in a world increasingly shaped by it.

👉 How might you redesign your teaching to help students practice skills that AI can’t replace? Check out the Academic Research Center’s AI Toolkit for Students for inspiration.


Interested in exploring AI in your own teaching?

LILE is here to support you. Whether you’re curious about course-integrated chatbots, want help developing AI assignments, or just want to talk through the ethics of AI in the classroom, please reach out to us at lile@duke.edu or attend AI Office Hours every Wednesday from 10 a.m. to 11 a.m. You can also find resources from Duke on teaching with AI, including LILE’s teaching guides on the AI at Duke website.