AI Ethics Learning Toolkit

Who Builds Our AI?

“What can end up happening is that certain populations become the guinea pigs of these technologies, or conversely, they become the cheap labor to power these technologies.”

Dr. Seeta Peña Gangadharan, media scholar

Sam Altman, Mark Zuckerberg, Sundar Pichai, Elon Musk, Jeff Bezos – you’ve likely heard of the Big Tech leaders behind some of the most prominent AI companies. But who actually builds the technology powering AI tools? For starters, a large workforce of computer programmers, data scientists, and software developers designs the algorithms, neural networks, and machine learning models that form the backbone of AI tools. These workers typically have backgrounds in computer or data science, statistics, or engineering. But behind the scenes, another, more hidden or “ghost” workforce performs the often unsavory tasks that make AI actually function – tasks like content moderation, data annotation, data labeling, and model training. These workers, whose jobs Big Tech frequently outsources to Global South countries, endure low wages and exploitative conditions despite their critical role in shaping AI. As AI becomes integrated into daily life, students should critically examine the ethical implications of the hidden labor behind these technologies.

Learning Activities

🗣️ Conversation Starters: A Few Questions to Get the Discussion Going


  • What do you know about how AI technology is built? Who are the key players?
  • Do you think tech companies should be responsible for ensuring safe working conditions for outsourced AI laborers? If so, how?
  • What parallels do you see between the AI workforce and other types of work that rely on outsourced, low-wage workers (ex. gig workers)?

💡 Active Learning with AI: Fun Ways to Explore AI’s Strengths and Limitations


  • Students gain hands-on experience in data labeling by using an open source tool to label a set of images and text.
  • No AI Alternative: Students examine and discuss company statements/policies surrounding outsourced labor for a few major AI companies. Ex. OpenAI, Google
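For instructors who want students to see what annotation work actually involves, the sketch below is a minimal, hypothetical illustration in Python (it assumes no particular labeling tool; a real classroom exercise might use an open source platform such as Label Studio). Two simulated annotators label the same toy dataset, and a simple percent-agreement score shows how labels from human workers become the "ground truth" that models are trained on:

```python
# Minimal sketch of a data-annotation workflow: each "annotator"
# assigns a label to every item, and we measure how often two
# annotators agree -- the repetitive work behind AI training data.

def annotate(items, labeler):
    """Apply a labeling function to each item, returning (item, label) records."""
    return [{"text": t, "label": labeler(t)} for t in items]

def agreement(ann_a, ann_b):
    """Fraction of items on which two annotators agree (simple percent agreement)."""
    matches = sum(a["label"] == b["label"] for a, b in zip(ann_a, ann_b))
    return matches / len(ann_a)

# Hypothetical toy dataset and two "annotators" with slightly different rules.
posts = ["great product!", "terrible service", "it was okay", "love it"]
annotator_1 = lambda t: "positive" if any(w in t for w in ("great", "love")) else "negative"
annotator_2 = lambda t: "positive" if any(w in t for w in ("great", "love", "okay")) else "negative"

labels_1 = annotate(posts, annotator_1)
labels_2 = annotate(posts, annotator_2)
print(agreement(labels_1, labels_2))  # annotators agree on 3 of 4 items -> 0.75
```

The disagreement on "it was okay" is the discussion hook: ambiguous items are where human judgment, fatigue, and cultural context shape the dataset, which is why annotation quality depends so heavily on the workers doing it.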

🎓 Disciplinary Extensions: Ideas for Exploring AI’s Impact in Specific Fields


  • Economics: Students could research the network of firms to which many Big Tech companies outsource data annotation and content moderation for AI (ex. Sama, Scale AI). Alternatively, students could research other forms of “ghost economies” and compare/contrast them with AI ghost work (see Mary L. Gray and Siddharth Suri’s book Ghost Work).
  • Languages/Linguistics: Students could explore how translation and transcription impact data labeling for AI, examining some of the linguistic biases that may exist.
  • Data Science/Global Health: Students could consider a case study of bias in the labeling of medical imaging datasets (ex. skin color disparities in dermatology data).

Resources

Recommendations

  • Related topics → Is AI biased? Who benefits from AI?
  • AI Pedagogy Project (Harvard) Assignments → Filter by theme (e.g. misinformation) and/or subject (e.g. ethics & philosophy)
  • Labor-related Articles from the AI Ethics & Policy News Aggregator sourced by Casey Fiesler. Note: This would be an excellent place to identify recent news stories you could share with students, or incorporate into a case study.

Scholarly

  1. O’Neil, L. (2023, August 12). These Women Tried to Warn Us About AI. Rolling Stone. (Interview)