Bridging the Data Gap Between Large Language Models and Children's Learning

Michael Frank (Stanford University) examines why large language models require vastly more training data than children receive, and what this "data gap" reveals about higher-level intelligence.

Simons Institute for the Theory of Computing

About this video

Michael Frank (Stanford University)
https://simons.berkeley.edu/talks/michael-frank-stanford-university-2024-06-24
Understanding Higher-Level Intelligence from AI, Psychology, and Neuroscience Perspectives

In this talk, I'll describe the "data gap" between LLMs and humans (Frank, 2023, TiCS): LLMs are trained on 3-5 orders of magnitude more data than human children receive. I'll review several viewpoints on why this gap exists, including 1) innate knowledge, 2) active and social learning, 3) multimodal information, and 4) evaluation differences. While I can't settle this issue, I'll provide some new data on the richness of multimodal input and the consequences of evaluation differences. In particular, I'll discuss how the cognitive science idea of the competence/performance distinction plays out in LLMs.
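
To make the scale of the gap concrete, here is a rough back-of-envelope calculation in Python. The specific figures (words a child hears per year, tokens in an LLM pretraining corpus) are illustrative assumptions consistent with the 3-5 orders of magnitude cited in the abstract, not numbers taken from the talk itself.

```python
# Back-of-envelope comparison of linguistic input: LLM pretraining vs. a child.
# All figures below are illustrative assumptions, not numbers from the talk.

import math

# Assumed words a child hears per year (published estimates vary roughly
# from ~3M to ~11M words/year; a midpoint is used here).
child_words_per_year = 7e6
child_age_years = 5
child_total_words = child_words_per_year * child_age_years  # ~3.5e7 words

# Assumed pretraining corpus size for a modern LLM (order of 10^12 tokens).
llm_training_tokens = 1e12

gap = llm_training_tokens / child_total_words
print(f"Child input by age {child_age_years}: {child_total_words:.1e} words")
print(f"LLM training data: {llm_training_tokens:.1e} tokens")
print(f"Data gap: ~{math.log10(gap):.1f} orders of magnitude")
```

Under these assumptions the gap comes out to roughly 4.5 orders of magnitude, squarely within the 3-5 range the abstract describes; more conservative child-input estimates or larger training corpora push it toward the top of that range.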

Video Information

Views: 296
Likes: 5
Duration: 48:14
Published: Aug 1, 2024
