This year's theme is: language, development and cognition
6 Dec 2023, Cergy (France)

Program

Anna Borghi

Concepts, abstractness, and social interaction
In my talk, I will present a view of concepts that emphasizes the role of social interaction in their acquisition and use. I will illustrate some studies showing that, for abstract concepts (e.g., “truth”) compared with concrete ones (e.g., “hammer”), people are less confident in their knowledge, use inner speech more frequently, and rely more on other people, especially experts, to acquire and use them. I will also discuss differences between various kinds of concepts, including technology-related ones, in the uncertainty they generate and the social dynamics they trigger.

 

Catherine Del Negro 

Sensitivity to the sequential structure of communication sounds in the songbird’s brain
Over the past decades, songbirds have emerged as an attractive model system for exploring how natural communication signals are encoded in the brain, with research revealing highly selective properties of auditory neurons. Birdsongs, like human speech, are learned sequential behaviours, offering a rare opportunity to investigate how the brain represents structured sequences of sounds. Our current research aims to address this issue by taking advantage of two songbird species, zebra finches and canaries. The ordering of song elements in canary song depends on transitional probabilities and leads to the formation of recurrent sequences.
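The idea that song-element ordering depends on transitional probabilities can be illustrated with a toy first-order Markov model. This is only an illustrative sketch, not the speaker's actual analysis: the phrase labels and transition values below are invented for demonstration.

```python
import random

# Toy transition probabilities between canary song phrase types (A, B, C).
# These values are illustrative, not measured data.
TRANSITIONS = {
    "A": {"A": 0.6, "B": 0.3, "C": 0.1},
    "B": {"B": 0.7, "C": 0.3},
    "C": {"A": 0.5, "C": 0.5},
}

def sample_sequence(start="A", length=10, seed=0):
    """Generate a phrase sequence from first-order transition probabilities."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        options = TRANSITIONS[seq[-1]]
        phrases, probs = zip(*options.items())
        seq.append(rng.choices(phrases, weights=probs, k=1)[0])
    return seq

print("".join(sample_sequence()))
```

Because high-probability transitions are sampled repeatedly, the same short sub-sequences recur across generated songs, which is the intuition behind recurrent sequences in canary song.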

 

Yair Lakretz

Linking Linguistic Theory and Brain Dynamics with Deep Neural Models
Humans have an innate ability to process language. This unique ability, linguists argue, results from a specific brain function: the recursive building of hierarchical structures. Specifically, a dedicated set of brain regions, known as the Language Network, is thought to iteratively link the successive words of a sentence to build its latent syntactic structure. However, two major obstacles limit the discovery of the neural basis of recursion and nested-tree structures. First, linguistic models are based on discrete symbolic representations and are thus difficult to compare to the vectorial representations of neuronal activity. Second, non-invasive neuroimaging has limited spatial resolution and cannot easily characterize the functions and representations of individual neurons or small neuronal populations. In this talk, we will review recent advances in neuroscience and Artificial Intelligence (AI) that can now help address these issues. In neuroscience, intracranial recordings can now be used to study language down to the single-neuron level, as we have recently shown. In AI, deep-learning architectures trained on large text corpora demonstrate near-human abilities on a variety of language tasks such as speech and handwriting recognition, language modeling, and even dialogue (ChatGPT). These new language models are, like the human brain, based on vectorial representations and, as we will see, can provide new opportunities to understand the complex neural computations underlying natural language processing.
 
 
Pierre-Yves Oudeyer

Autotelic agents, open-endedness and applications

This presentation will review various strands of research studying mechanisms enabling open-ended development in humans and machines. I will focus on autotelic learning (= learning by inventing and sampling one’s own goals) and on the role of language and culture to guide creative exploration. I will describe recent work using large language models for autotelic exploration. Then, I will describe two kinds of applications of these approaches: educational technologies and assisted scientific discovery.
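Autotelic learning, as summarized above, means the agent invents and samples its own goals rather than pursuing externally imposed ones. The sketch below is a minimal, hypothetical illustration of that loop: the goal names, the learning-progress-style sampling heuristic, and the simulated environment are all assumptions for demonstration, not the speaker's systems.

```python
import random

class AutotelicAgent:
    """Minimal sketch of an autotelic loop: the agent samples its own goals,
    attempts them, and prefers goals of intermediate competence (a crude
    stand-in for curiosity-driven goal selection)."""

    def __init__(self, goals, seed=0):
        self.rng = random.Random(seed)
        # Running competence estimate per self-generated goal, in [0, 1].
        self.competence = {g: 0.0 for g in goals}

    def sample_goal(self):
        # Weight goals so that neither mastered nor hopeless ones dominate.
        weights = [1.0 - abs(self.competence[g] - 0.5) for g in self.competence]
        return self.rng.choices(list(self.competence), weights=weights, k=1)[0]

    def attempt(self, goal):
        # Stand-in for acting in an environment: success gets likelier
        # with practice on that goal.
        success = self.rng.random() < 0.2 + 0.8 * self.competence[goal]
        # Update the running competence estimate toward the outcome.
        self.competence[goal] += 0.1 * (float(success) - self.competence[goal])
        return success

agent = AutotelicAgent(["stack blocks", "open door", "find key"])
for _ in range(50):
    agent.attempt(agent.sample_goal())
```

In the work reviewed in the talk, the goal space can itself be expressed in language, which is what makes large language models a natural tool for proposing and interpreting goals during exploration.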
 

Thomas Schatz

On developmental cognitive (neuro)science and artificial intelligence
In this talk, I will reflect on the cross-disciplinary interface between developmental cognitive (neuro)science and artificial intelligence. I will provide my personal perspective on the nature, history and limits of this interface, on some scientific opportunities it currently affords, on associated conceptual and methodological challenges, and on possible solutions to these challenges. To support my argument, I will draw on concrete examples, including from my own work on modeling the development of speech perception using machine learning methods.