Plenary Lectures

The Perception Lecture will be given by:

Pascal Mamassian

Laboratoire des Systèmes Perceptifs
École Normale Supérieure, Paris




Pascal Mamassian is director of research at the CNRS and head of the Laboratoire des Systèmes Perceptifs at the École Normale Supérieure in Paris, France. He received a Telecommunication Engineering degree in Paris and a PhD in Experimental Psychology from the University of Minnesota. Using a combination of visual psychophysics and probabilistic modelling, his recent research interests include 3D shape and material perception, multi-sensory integration, motion speed integration, history effects in perception, and visual confidence. With his collaborators, he has co-authored over 70 journal articles, and he currently serves on the editorial boards of seven scientific journals, including Perception and the Journal of Vision. Upon returning to France after lecturing for several years at the University of Glasgow, he received a Chair of Excellence from the French Ministry of Research. He serves on the neuroscience panel of the CNRS, has been elected President of the Vision Sciences Society, and is a past co-ordinator of the European Conference on Visual Perception.


Visual Confidence

Visual confidence refers to our ability to predict the correctness of our perceptual decisions. Knowing the limits of this ability, in terms of both biases (e.g. overconfidence) and sensitivity (e.g. blindsight), is clearly important for a full picture of perceptual decision making. However, established methods to measure visual confidence are prone to at least two major problems. First, they tend to rely on subjective and non-measurable variables, such as boundaries between confidence levels in confidence rating tasks, or criteria to opt out in opt-out paradigms. Second, they can often be accused of measuring perceptual performance with high precision rather than genuinely meta-perceptual performance (the perception of our perception). The second problem is notoriously difficult to resolve, as witnessed by the vast literature on animal cognition that attempts to decide which animal species have the ability to make confidence judgments. But the first problem can be addressed simply by using a confidence forced-choice paradigm, in which observers have to choose which of two perceptual decisions is more likely to be correct. I will review some results obtained with the confidence forced-choice paradigm, discuss the limits of this approach and future directions, and place this paradigm within the theoretical frameworks of signal detection theory and accumulation-of-evidence models.




The Rank Lecture will be given by:

Dora Angelaki

Department of Neuroscience, Baylor College of Medicine; Department of Electrical and Computer Engineering, Rice University



Dr. Angelaki is the Wilhelmina Robertson Professor and Chair of the Department of Neuroscience at Baylor College of Medicine, with a joint appointment in the Departments of Electrical & Computer Engineering and Psychology at Rice University. She holds Diploma and PhD degrees in Electrical and Biomedical Engineering from the National Technical University of Athens and the University of Minnesota. Her general area of interest is computational, cognitive and systems neuroscience. Within this broad field, she specializes in the neural mechanisms of spatial orientation and navigation, using humans and non-human primates as models. She is interested in neural coding and in how complex, cognitive behavior is produced by neuronal populations. She has received many honors and awards, including the inaugural Pradel Award in Neuroscience from the National Academy of Sciences (2012), the Grass Lectureship from the Society for Neuroscience (2011), the Hallpike-Nylén Medal from the Bárány Society (2006) and the Presidential Early Career Award for Scientists and Engineers (1996). Dr. Angelaki maintains a very active research laboratory funded primarily by the National Institutes of Health, and a strong presence in the Society for Neuroscience and other international organizations.


Merging of our senses: A brain challenge for perceptual reality

Navigation and spatial orientation are vital functions in our lives. Sensory information arises from the balance (vestibular) organs in the inner ear, as well as from visual optic flow and other sensory, motor and cognitive cues. As such, a fundamental aspect of our sensory experience is how information from different modalities is often seamlessly integrated into a unified percept. Both theory and behavioral studies have shown that humans and animals combine multiple cues, as well as prior experiences based on the statistics of our environment and our interactions with it, according to a statistically optimal scheme derived from Bayesian probability theory. Using navigational heading perception tasks, we show how multisensory interactions improve precision, reaction time and accuracy. The latter is particularly important when navigational environments include independently moving objects. We study both computational principles and their neural implementations in diverse subcortical and cortical circuits that process visual (optic flow) and vestibular (acceleration) signals.
