
Language learning has become a central theme in both cognitive science and artificial intelligence. The nature of language learning has long interested cognitive scientists, and machine learning now dominates natural language processing (NLP) in modern AI, to the field’s great benefit. However, the learning systems these methods produce often fail to match the efficiency and robustness of human language acquisition. Insights from language acquisition have the potential to help address this problem, but exploring that possibility raises two critical challenges: (1) identifying the innate learning biases that enable fast, robust language learning in humans, and (2) determining how to translate theoretical insights about these biases into effective implementations in NLP systems. This project will tackle both issues, making use of insights from special linguistic populations.

Our first challenge –– identifying innate language learning predispositions –– is driven by the fact that most children are exposed to linguistic input from birth, making it difficult to disentangle innate characteristics from characteristics that are rapidly learned from input. The rare cases in which children do not have usable linguistic input allow us to make important headway in identifying these predispositions. Congenitally deaf children who cannot learn the spoken language that surrounds them, and who have not been exposed to sign language by their hearing families, are in the unique situation of being without language input early in life. These children use their hands to communicate –– they gesture –– and those gestures (called “homesigns”) take on many, but not all, of the forms and functions of languages that have been handed down from generation to generation. The properties of these naturally arising gestures provide evidence for the nature of linguistic predispositions independent of input.

Drawing candidate biases from homesign, we will then tackle the second challenge –– incorporating biases into machine learning systems –– by systematic testing of models against real-world child language acquisition data. The goal of this phase will be to identify effective means of instantiating proposed human biases, and to test whether models incorporating these biases will successfully simulate the learning trajectories exhibited by children. Models with the proposed biases will be compared against minimally-different baseline models lacking the biases; stronger fit to human data will be taken as support that the biases are actual human predispositions. An important priority of this phase will be to balance scientific and engineering needs –– to maintain transparency of the models’ cognitive implications and to simulate human learning patterns as closely as possible, but also to use models that will interface smoothly with modern NLP systems, with promise to scale to larger datasets and broader domains.
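
As a purely illustrative sketch of this model-comparison logic (the child trajectory, the logistic model form, and the bias parameter below are hypothetical placeholders, not the project’s actual data or models): fit a learner that starts with a proposed predisposition and a minimally-different baseline to the same acquisition trajectory, then compare how closely each reproduces it.

```python
# Illustrative sketch only: compare a "biased" learner against a minimally-different
# baseline on a hypothetical child learning trajectory. All numbers are placeholders.

import math

# Hypothetical trajectory: proportion of correct uses of some construction by age (months).
child_data = [(18, 0.10), (21, 0.30), (24, 0.55), (27, 0.75), (30, 0.85), (33, 0.90)]

def learning_curve(age_months, rate, bias=0.0):
    """Logistic learning curve; `bias` acts as an innate head start added at onset.

    The -3.0 offset keeps the unbiased learner near floor when exposure begins.
    """
    onset = 18  # assumed age at which relevant input begins
    exposure = max(age_months - onset, 0)
    return 1.0 / (1.0 + math.exp(-(rate * exposure + bias - 3.0)))

def mse(rate, bias):
    """Mean squared error between model predictions and the child trajectory."""
    errors = [(learning_curve(age, rate, bias) - prop) ** 2 for age, prop in child_data]
    return sum(errors) / len(errors)

def best_fit(bias):
    """Grid-search the learning rate for a fixed bias; returns (best_rate, best_mse)."""
    candidates = [i * 0.01 for i in range(1, 101)]
    return min(((r, mse(r, bias)) for r in candidates), key=lambda pair: pair[1])

baseline_rate, baseline_err = best_fit(bias=0.0)  # no innate head start
biased_rate, biased_err = best_fit(bias=1.0)      # hypothesized predisposition

print(f"baseline: rate={baseline_rate:.2f}, MSE={baseline_err:.4f}")
print(f"biased:   rate={biased_rate:.2f}, MSE={biased_err:.4f}")
# A reliably lower error for the biased model, across many children and constructions,
# is the kind of evidence the project would treat as support for the proposed bias.
```

In practice the comparison would use richer models and real acquisition corpora, but the design choice is the same: hold everything constant except the hypothesized bias, and let fit to children’s trajectories adjudicate.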

Mentor: Susan Goldin-Meadow, Beardsley Ruml Distinguished Service Professor in the Department of Psychology and Committee on Human Development

Susan Goldin-Meadow is the Beardsley Ruml Distinguished Service Professor in the Department of Psychology and Committee on Human Development at the University of Chicago. A year spent at the Piagetian Institute in Geneva while an undergraduate at Smith College piqued her interest in the relationship between language and thought, an interest she continued to pursue in her doctoral work at the University of Pennsylvania (Ph.D. 1975). At Penn, in collaboration with Lila Gleitman and Heidi Feldman, she began her studies exploring whether children who lack a (usable) model for language can nevertheless create a language with their hands. She has found that deaf children whose profound hearing losses prevent them from learning the speech that surrounds them, and whose hearing parents have not exposed them to sign, invent gesture systems that are structured in language-like ways. This interest in how the manual modality can serve the needs of communication and thinking led to her current work on the gestures that accompany speech in hearing individuals. She has found that gesture can convey substantive information – information that is often not expressed in the speech it accompanies. Gesture can thus reveal secrets of the mind to those who pay attention.

Professor Goldin-Meadow’s research has been funded by the National Science Foundation, the Spencer Foundation, the March of Dimes, the National Institute of Child Health and Human Development, and the National Institute of Neurological and Communicative Disorders and Stroke. She has served as a member of the language review panel for NIH, has been a Member-at-Large of the Section on Linguistics and Language Science in AAAS, and was part of the Committee on Integrating the Science of Early Childhood Development, sponsored by the National Research Council and the Institute of Medicine, which led to the book Neurons to Neighborhoods. She is a Fellow of AAAS, APS, and APA (Divisions 3 and 7). In 2001, she was awarded a Guggenheim Fellowship and a James McKeen Cattell Fellowship, which led to her two recently published books, Resilience of Language and Hearing Gesture. In addition, she edited Language in Mind: Advances in the Study of Language and Thought in collaboration with Dedre Gentner. She has received the Burlington Northern Faculty Achievement Award for Graduate Teaching and the Llewellyn John and Harriet Manchester Quantrell Award for Excellence in Undergraduate Teaching at the University of Chicago. She is currently the President of the Cognitive Development Society and the editor of Language Learning and Development, the new journal sponsored by the Society for Language Development. Professor Goldin-Meadow also serves as chair of the developmental area program.
