Updating 3D world states from the optic flow (UPFLOW)

PI: Joan López-Moliner

Related to lines:

Decision Making in Complex Environments & Psychophysics
Eye Movements & Perception and Action
Neurocomputational Modeling

Description:

The optic flow (i.e. the dynamic pattern of the retinal image) is thought to provide basic information supporting many daily actions (e.g. navigation, collision avoidance) and is becoming theoretically central to building sensors for future drones that pick up simple visual features. Despite the potential of optic flow to provide rich visual information about the space around us, there has been little or no work on processing optic flow signals to recover accurate 3D metrics of displacement or velocity. The complexity and variability of optic variables (e.g. eye movements dramatically change the optic flow produced by the same 3D scene) may be why previous studies have not pursued this path. In this project we aim at (a) testing the capability of our perceptual system to extract accurate and precise metrics from local optic flow signals under different eye movement conditions, which change the retinal stimulation pattern produced by the same 3D layout; and (b) providing the optic flow with spectral structure consistent with natural image statistics. The latter goal involves devising new optic flow stimuli in the hope that they will increase performance with minimal exposure to visual information. By combining psychophysics and Bayesian tools we will characterize the accuracy and precision of 3D metrics recovered from optic flow and reveal the posterior beliefs corresponding to different scenes. The results are potentially relevant for the design of virtual environments that aim at achieving maximally immersive experiences and high performance with minimal visual information.
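To illustrate why eye movements complicate the recovery of 3D metrics, here is a minimal sketch (our own illustration, assuming the classic pinhole-eye flow equations of Longuet-Higgins & Prazdny, 1980; the function name and parameters are hypothetical, not the project's code) of the two components of retinal flow. Only the translational component carries depth information; an eye rotation adds a flow field that is independent of the 3D layout:

```python
import numpy as np

def retinal_flow(x, y, Z, T, omega):
    """Instantaneous retinal flow (u, v) of a point at image position
    (x, y) and depth Z, for observer translation T = (Tx, Ty, Tz) and
    eye rotation omega = (wx, wy, wz).  Pinhole eye, focal length 1
    (Longuet-Higgins & Prazdny, 1980)."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    # Translational component: scales with 1/Z, so it carries depth information.
    u_t = (-Tx + x * Tz) / Z
    v_t = (-Ty + y * Tz) / Z
    # Rotational component: independent of depth Z.
    u_r = wx * x * y - wy * (1.0 + x**2) + wz * y
    v_r = wx * (1.0 + y**2) - wy * x * y - wz * x
    return u_t + u_r, v_t + v_r
```

For the same image position, a pure eye rotation produces identical flow at any depth, whereas translational flow halves when depth doubles; disentangling the two fields is exactly what recovering accurate 3D metrics from the retinal pattern requires.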


Disrupting the prediction of Visual Motion (DIVISMO)

PI: Cristina de la Malla

Related to lines:

Eye Movements & Perception and Action

Description:

There are many tasks in our daily life that involve interacting with moving targets: practicing many sports, crossing a road or driving all require interacting with dynamically changing environments where objects move, and assessing how these objects move is necessary to interact with them successfully. In general, motion is perceived when an object changes its spatial position over time. Such a percept is thought to be achieved via two possible mechanisms that can be disentangled: a displacement-based one (changes in position) and a velocity-based one. Even though it is possible to differentiate between these two sources of information, the relation and interaction between them is undeniable, as demonstrated by different perceptual illusions (e.g. the De Valois illusion) or phenomena such as motion after-effects.

When it comes to processing information about moving stimuli, the time it takes for visual information to reach brain areas and be processed is relevant, because it implies that we are constantly processing information about past positions of objects: by the moment we are able to tell where a moving object is, that object has already moved somewhere else. To deal with such delays in information processing and succeed in our actions, we make predictions about where objects will be in the future. Several studies have suggested that it is information about the velocity at which the object moves that allows us to make these predictions; velocity is thus understood in many cases as a predictive mechanism, or a component for updating position. The main question this project (DIVISMO) addresses is what happens when velocity information, as the predictive component of the motion system, is disrupted. To study this, we will use different paradigms (occlusions, luminance manipulations, and variability in object velocity) that impair perceiving the velocity at which an object moves. Furthermore, this project is set from a perspective in which observers are active components of the scene: observers actively seek information in the environment that can help them succeed in their decisions and actions. As different studies have shown, many of the movements we make (eye, and even head and torso movements) are directed at gathering information about objects or parts of the scene we have to interact with. For this reason, a second main objective of this project is to test whether disrupting velocity information leads to changes in the way we move, as an attempt to compensate for the loss of such information. This would tell us whether observers prefer to update velocity estimates about how targets move rather than relying on predictions.
We will take a multidisciplinary approach including psychophysics and decision-making paradigms as well as different techniques such as eye, head and hand movement recordings. Achieving our research goals will not only advance knowledge on how motion is used by the visual system, but may also have an impact on the development of virtual and immersive environments, one of the fastest-growing industries.
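The delay-compensation idea described above can be made concrete with a toy computation (a hypothetical sketch of our own, not the project's model): if localization is based on input that is `delay` seconds old, a target moving at speed v is mislocalized by v × delay unless a velocity-based extrapolation restores the lost displacement:

```python
def position_estimate(t, v, delay, predict):
    """Localisation of a target that starts at position 0 and moves at
    speed v (deg/s), read out at time t (s) through a processing
    delay (s).  Without prediction the estimate lags the true position
    by v * delay; velocity-based extrapolation cancels that lag."""
    seen = v * (t - delay)  # position that was encoded `delay` seconds ago
    return seen + (v * delay if predict else 0.0)

# Target at 20 deg/s, read out at t = 0.5 s with a 100 ms delay:
true_pos = 20.0 * 0.5                                          # 10 deg
lagged = position_estimate(0.5, 20.0, 0.1, predict=False)      # 8 deg (2 deg behind)
compensated = position_estimate(0.5, 20.0, 0.1, predict=True)  # 10 deg
```

Disrupting the velocity estimate (as the occlusion and luminance paradigms above are designed to do) amounts to corrupting the `v * delay` term, which is why the lag should reappear, or the observer should move to re-acquire velocity information.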


The perceptual response function to contrast in schizophrenia and anti-NMDAR encephalitis

PI: Daniel Linares

Related to lines:

Decision Making in Complex Environments & Psychophysics

Description:

People with schizophrenia show altered visual perception, including impaired contrast detection. It is unknown, however, whether contrast discrimination is also impaired. Assessing contrast discrimination is important because it allows estimating the perceptual response function to contrast, and several lines of evidence suggest this response should be weakened when there is a hypofunction of the glutamatergic system. Given that glutamatergic hypofunction is currently considered a fundamental alteration in schizophrenia, we hypothesize that patients with schizophrenia should have a reduced perceptual response to contrast. Moreover, if glutamatergic hypofunction impairs contrast perception, patients with anti-NMDAR encephalitis, a disease that is defined precisely by such a hypofunction, should have an even more strongly reduced response to contrast. In this project, we will characterize the contrast response function of patients with schizophrenia and anti-NMDAR encephalitis by measuring their performance in perceptual discrimination tasks while recording the electrophysiological response. We will also study whether the hypothesized impairment is caused by an attention deficit. To facilitate perceptual assessment in clinical environments we will use a mobile application that we have recently developed. If the contrast response is, as we hypothesize, a measurable indicator of glutamatergic dysfunction, it could be used to identify the group of patients with schizophrenia that have this alteration.
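For reference, contrast response functions are commonly summarized with the Naka-Rushton (hyperbolic-ratio) equation; the sketch below (our illustration, with arbitrary parameter values, not the project's fitting code) shows how a weakened response of the kind hypothesized here could be expressed as a lower maximum response or a higher semi-saturation contrast:

```python
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
    """Naka-Rushton contrast response: R(c) = r_max * c^n / (c^n + c50^n).
    c: contrast in [0, 1]; c50: semi-saturation contrast (R(c50) = r_max/2);
    n: exponent controlling the steepness of the function."""
    c = np.asarray(c, dtype=float)
    return r_max * c**n / (c**n + c50**n)

contrasts = np.array([0.05, 0.2, 0.8])
typical = naka_rushton(contrasts)
# Hypothetical weakened response: lower r_max and higher c50.
weakened = naka_rushton(contrasts, r_max=0.6, c50=0.35)
```

Contrast discrimination thresholds relate to the local slope of this function, which is why measuring discrimination (and not only detection) constrains the full response function rather than just its foot.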


Synchronization of Ocular Movements and Synchronization of Neurons

PI: Hans Supèr

Related to lines:

Eye Movements & Perception and Action

Description:

The visual system often has to gather different elements into meaningful global features in order to infer the presence of objects. This process is commonly referred to as figure-ground segmentation. Contextual influences carried by feedback have been interpreted as the neural substrate of figure-ground perception (Lamme & Roelfsema, 2000). We have questioned the leading role of feedback projections in figure-ground perception (see Supèr et al., 2010 for a discussion) and proposed that figure-ground segmentation may be achieved in a feedforward manner (Supèr et al., 2010; Supèr & Romeo, 2011).

Synchronization between neural assemblies was suggested to be a core mechanism in the integration of sensory information. The computational functions of synchronous neural rhythms in cognition have been a matter of debate among neuroscientists. Therefore, a key question is how the brain could achieve temporal correlations to gather spatially separate features in order to form objects. 

The oculomotor circuit is an alternative, less studied pathway that may modulate sensory information processing leading to figure-ground perception. During gaze fixation the eyes are never completely still, and small eye movements play various roles in perceptual processing. If the notion of correlation is interpreted in a wider sense, the temporal correlations (or synchrony) of the movements of the two eyes during gaze fixation can be deemed a factor worth including.

Synchronization of eye motion may produce coupled binocular input to the visual system, promoting cortical synchrony; (micro)saccades may provide a mechanism for phase resetting of cortical oscillations, likewise promoting synchrony. Desynchronization of eye motion (see Solé Puig et al., 2013a) may decouple binocular input, reducing existing cortical synchrony and allowing new synchronous assemblies to develop (Supèr et al., 2003; Van der Togt et al., 2006). Thus, the different amounts of cross-correlation between eye velocities seem to point to an underlying, variable form of coupling, perhaps capable of changing the neural transfer functions involved (Bolhasani et al., 2013).

In the present project we aim to study the relation between eye movement synchrony and neural synchrony. We will carry out psychophysical studies on figure-ground perception combined with eye tracking and EEG recordings, and we will use computational modelling to analyze the dynamics between eye and neural synchrony.
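As one concrete way to quantify eye movement synchrony (a sketch of our own, not the project's analysis pipeline), the zero-lag normalized cross-correlation between the two eyes' velocity traces ranges from 1 for perfectly conjugate motion down to around 0 for decoupled motion:

```python
import numpy as np

def velocity_sync(left_vel, right_vel):
    """Zero-lag normalised cross-correlation (Pearson correlation)
    between left- and right-eye velocity traces sampled on the same
    time base.  1 = perfectly coupled (conjugate) motion; values near
    0 indicate decoupled eye motion."""
    l = np.asarray(left_vel, float) - np.mean(left_vel)
    r = np.asarray(right_vel, float) - np.mean(right_vel)
    denom = np.sqrt(np.sum(l**2) * np.sum(r**2))
    return float(np.sum(l * r) / denom) if denom else 0.0
```

Computing this coefficient in sliding windows over fixation data would give the time-varying coupling measure that the eye tracking and EEG analyses above could then be related to.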


A Computational Study of Predictive Coding Mechanisms for Visual Perception of Motion (SilentVision)

PI: Matthias S Keil

Related to lines:

Neurocomputational Modeling

Description:

The project aims at developing computational models of the LGMD neuron of the locust visual system. The LGMD neuron responds selectively to approaching objects. Computationally, detecting a colliding object is relatively easy when the observer (the locust) does not move. But when the observer itself moves, all objects appear to emanate from the focus of expansion of the visual image. An approaching object on a direct collision course therefore has to be distinguished from the rest of the objects, which also move towards the observer but without colliding (background movement). Computationally, this is still an unresolved problem. The idea behind our models is to suppress background movement while not losing sensitivity for detecting collision threats. To this end, we propose efficient mechanisms based on new principles for predictive coding. We expect that our mechanisms will have more general applications as well. In particular, we hope that they will also contribute to a better understanding of the human visual system, and provide powerful alternatives to inference-based approaches to predictive coding.
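As a baseline for the stationary-observer case, LGMD responses to looming are often described by the phenomenological η-function of Hatsopoulos, Gabbiani & Laurent (1995), the product of the expansion rate of the object's angular size and a negative exponential of that size. The sketch below (parameter values are illustrative, and the function is our own transcription of that model, not this project's code) reproduces the characteristic response peak before collision:

```python
import numpy as np

def lgmd_eta(l, v, t, alpha=3.0):
    """Phenomenological LGMD model: eta(t) = theta'(t) * exp(-alpha * theta(t)),
    for an object of half-size l (m) approaching at constant speed v (m/s),
    with collision at t = 0 (so t < 0 before collision)."""
    d = -v * np.asarray(t, float)            # current distance to the eye
    theta = 2.0 * np.arctan2(l, d)           # angular size on the retina (rad)
    theta_dot = 2.0 * l * v / (d**2 + l**2)  # angular expansion rate (rad/s)
    return theta_dot * np.exp(-alpha * theta)

t = np.linspace(-1.0, -0.01, 500)            # last second of approach
eta = lgmd_eta(0.1, 1.0, t)                  # response rises, peaks, then decays
```

The decaying exponential is what makes the response peak at a fixed angular size before contact; the project's question is how to keep this kind of selectivity once self-motion fills the whole field with expansion.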


Transforming Mental Health (Braingaze) EIT Health Headstart programme

PI: Hans Supèr

Related to lines:

Decision Making in Complex Environments & Psychophysics
Eye Movements & Perception and Action

Description:

The lack of technology makes mental health care an ineffective, archaic system. It is a time-consuming process with disproportionately high personnel costs for hospitals and disproportionate travelling for patients, resulting in poor medical service. We want to move mental health care from psychiatry departments in hospitals to first-line care providers and to patients at care centres and at home, through accessible, accurate and objective diagnostic and training tests, with disease progress monitored by simply downloading an app with the option of remote supervision by professionals. We will therefore develop webcam-based eye tracking of Cognitive Vergence that enables patients to self-administer diagnostic tests, relieving hospitals of long waiting lists and reducing costs.