Why AI Fails to Grasp Your Culture | Dr. Vered Shwartz on Cultural Bias in Language Models
Discover why AI systems often lack cultural understanding and are biased towards Western perspectives. Join Jekaterina Novikova and Malikeh Ehgaghi as they explore the challenges of cultural bias in large language models with Dr. Vered Shwartz.

Women in AI Research (WiAIR)
202 views • Oct 29, 2025

About this video
Are today's AI systems truly global, or just Western by design?
In this episode of Women in AI Research, Jekaterina Novikova and Malikeh Ehgaghi speak with Dr. Vered Shwartz (Assistant Professor at @UBC and @CIFARVideo AI Chair at the @vectorinstituteai) about the cultural blind spots in today's large language and vision-language models.
Don't have time for the full episode? Watch it in parts:
Part 1 - "Lost in Automatic Translation": https://youtu.be/BeQFyY3Dld4
Part 2 - Coming soon...
Part 3 - Coming soon...
CHAPTERS:
00:00 Introduction to Women in AI Research Podcast
00:33 Guest introduction - Dr. Vered Shwartz
02:32 The Importance of Communication Skills in Academia
04:15 Navigating Faculty Roles and Student Supervision
07:52 Personal Experiences with Language Technologies
14:39 Exploring Cultural Representation in AI
20:29 The InfoGap Method and Cultural Information Gaps
22:29 Technical Challenges in Cross-Language Representation
24:02 Cultural Completeness and Wikipedia's Role
26:42 User Interaction with Language Models
37:22 Cross-Cultural Evaluation of Social Norm Biases
38:16 Cultural Alignment of Language Models
49:11 Exploring Vision Language Models
01:02:51 Benchmarking Cultural Bias in AI
01:06:54 Decentralizing AI Development
01:12:01 Addressing Biases in AI Development
01:15:52 Future Directions in AI Research
REFERENCES:
01:10 Vered Shwartz Google Scholar profile (https://scholar.google.ca/citations?user=bbe4ResAAAAJ&hl=en&oi=ao)
07:57 Book "Lost in Automatic Translation" (https://lostinautomatictranslation.com/)
19:25 Elevator Recognition, by The Scottish Comedy Channel (https://youtu.be/HbDnxzrbxn4?si=7YvUpZZ_earptBpN)
20:33 Locating Information Gaps and Narrative Inconsistencies Across Languages: A Case Study of LGBT People Portrayals on Wikipedia (https://arxiv.org/abs/2410.04282)
30:34 ECLeKTic: a Novel Challenge Set for Evaluation of Cross-Lingual Knowledge Transfer (https://arxiv.org/abs/2502.21228)
34:23 WikiGap: Promoting Epistemic Equity by Surfacing Knowledge Gaps Between English Wikipedia and other Language Editions (https://arxiv.org/abs/2505.24195)
37:24 Is It Bad to Work All the Time? Cross-Cultural Evaluation of Social Norm Biases in GPT-4 (https://arxiv.org/abs/2505.18322)
38:39 Towards Measuring the Representation of Subjective Global Opinions in Language Models (https://arxiv.org/abs/2306.16388)
48:09 I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box Generative Language Models (https://arxiv.org/abs/2306.03423)
50:43 From Local Concepts to Universals: Evaluating the Multicultural Understanding of Vision-Language Models (https://arxiv.org/pdf/2407.00263)
01:10:51 CulturalBench: A Robust, Diverse, and Challenging Cultural Benchmark by Human-AI CulturalTeaming (https://aclanthology.org/2025.acl-long.1247.pdf)
Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
WiAIR website:
https://women-in-ai-research.github.io
Follow us at:
LinkedIn: https://www.linkedin.com/company/women-in-ai-research/
Bluesky: https://bsky.app/profile/wiair.bsky.social
X (Twitter): https://x.com/WiAIR_podcast
#AI #NLP #LLMs #CulturalBias #WomenInAI #ExplainableAI #FairnessInAI #AIResearch #EthicalAI #wiair #wiairpodcast