Program

Anna Borghi: Concepts, abstractness, and social interaction
Catherine Del Negro: Sensitivity to the sequential structure of communication sounds in the songbird's brain
Yair Lakretz: Linking Linguistic Theory and Brain Dynamics with Deep Neural Models
Humans have an innate ability to process language. This unique ability, linguists argue, results from a specific brain function: the recursive building of hierarchical structures. Specifically, a dedicated set of brain regions, known as the Language Network, is thought to iteratively link the successive words of a sentence to build its latent syntactic structure. However, two major obstacles limit the discovery of the neural basis of recursion and nested-tree structures. First, linguistic models are based on discrete symbolic representations and are thus difficult to compare to the vectorial representations of neuronal activity. Second, non-invasive neuroimaging has limited spatial resolution and cannot easily characterize the functions and representations of individual neurons or small neuronal populations. In this talk, we will review recent advances in neuroscience and Artificial Intelligence (AI) that can now help address these issues. In neuroscience, intracranial recordings can now be used to study language down to the single-neuron level, as we have recently shown. In AI, deep-learning architectures trained on large text corpora demonstrate near-human abilities on a variety of language tasks, such as speech and handwriting recognition, language modeling, and even dialogue (ChatGPT). These language models are, like the human brain, based on vectorial representations and, as we will see, provide new opportunities to understand the complex neural computations underlying natural language processing.
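To make the comparison concrete: a language model's internal states are vectors, so they can be regressed directly against recorded brain activity, token by token. The sketch below is a minimal illustration of that idea only, not the speaker's actual analysis pipeline; the choice of GPT-2 (via the Hugging Face transformers library), the synthetic stand-in for neural recordings, and the ridge-regression settings are all assumptions made for the example.

```python
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import GPT2Model, GPT2Tokenizer

# Load a small pretrained language model and its tokenizer.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

# One example sentence; each token receives a 768-dimensional hidden-state vector.
sentence = "The keys that the man holds are on the table"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state.squeeze(0).numpy()  # shape: (n_tokens, 768)

# Stand-in "neural" data: random activity for a 32-unit population, one row per token.
# In a real analysis this would be intracranial recordings aligned to the same tokens.
rng = np.random.default_rng(0)
neural = rng.normal(size=(hidden.shape[0], 32))

# Linear encoding model: predict population activity from the model's vectorial states.
encoder = Ridge(alpha=1.0).fit(hidden, neural)
print("in-sample R^2:", encoder.score(hidden, neural))
```

In a real encoding analysis one would use aligned recordings and held-out data rather than in-sample scores; the point here is only the dimensional match between vectorial model states and population activity.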
Pierre-Yves Oudeyer: Autotelic agents, open-endedness and applications
This presentation will review several strands of research on mechanisms that enable open-ended development in humans and machines. I will focus on autotelic learning (i.e., learning by inventing and sampling one's own goals) and on the role of language and culture in guiding creative exploration. I will describe recent work that uses large language models for autotelic exploration, and then two kinds of applications of these approaches: educational technologies and assisted scientific discovery.
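For readers unfamiliar with the term, the toy sketch below gives a rough intuition of what "inventing and sampling one's own goals" can mean. It is purely illustrative and not material from the talk: an agent on a one-dimensional line repeatedly samples its own target positions, attempts them with a noisy hand-written policy, and keeps track of which self-generated goals it has learned to reach.

```python
import random

LINE_LENGTH = 20   # the "world" is the set of positions 1 .. LINE_LENGTH - 1
MAX_STEPS = 15     # attempts are cut off after this many moves

def attempt(goal):
    """Noisy policy: walk toward the goal from position 0, occasionally slipping."""
    pos = 0
    for _ in range(MAX_STEPS):
        step = 1 if goal > pos else -1
        if random.random() < 0.2:  # 20% chance of moving the wrong way
            step = -step
        pos += step
        if pos == goal:
            return True
    return False

success_by_goal = {}

for episode in range(200):
    goal = random.randrange(1, LINE_LENGTH)  # the agent samples its own goal
    outcome = attempt(goal)                  # ... and tries to achieve it
    success_by_goal.setdefault(goal, []).append(outcome)

# Which self-generated goals has the agent learned to reach reliably?
for goal in sorted(success_by_goal):
    history = success_by_goal[goal]
    rate = sum(history) / len(history)
    print(f"goal {goal:2d}: success rate {rate:.2f} over {len(history)} attempts")
```

Real autotelic agents replace the random goal sampler with richer mechanisms, such as learning-progress-based goal selection or language-generated goals; this loop only shows the overall structure (sample a goal, attempt it, record the outcome).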
Thomas Schatz: On developmental cognitive (neuro)science and artificial intelligence
In this talk, I will reflect on the cross-disciplinary interface between developmental cognitive (neuro)science and artificial intelligence. I will provide my personal perspective on the nature, history and limits of this interface, on some scientific opportunities it currently affords, on associated conceptual and methodological challenges, and on possible solutions to these challenges. To support my argument, I will draw on concrete examples, including from my own work on modeling the development of speech perception using machine learning methods.