Perceptual decision-making in schizophrenia

PI: Daniel Linares

Related to lines:

Decision Making in Complex Environments & Psychophysics

Description:

I study perception in mental disorders such as schizophrenia. Currently, I have three main interests. 1) Facilitating data collection in clinical environments. To this end, we have created StimuliApp, an application to run psychophysical tests on mobile devices with high precision. 2) Disentangling perceptual from decisional effects in perceptual tasks. To claim that patients with a certain disease experience a perceptual alteration, it is necessary to show that the differences between patients and controls performing a perceptual task are not due to decisional factors. 3) Assessing the role of glutamatergic NMDA receptors in altered perception. To tackle this question, we are assessing perception in patients with anti-NMDAR encephalitis.
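One common way to separate perceptual from decisional contributions in a task is signal detection theory, where sensitivity (d′) indexes the perceptual component and the criterion indexes the decisional one. The sketch below uses hypothetical hit and false-alarm rates, not real data, to illustrate how two groups can differ in raw responses while having the same sensitivity; it shows the general approach rather than the analysis used in our studies.

```python
# Minimal signal-detection sketch: equal sensitivity (d') with different
# criteria produces different hit rates, so a raw performance difference
# between groups need not reflect a perceptual alteration.
from scipy.stats import norm

def sdt_indices(hit_rate, false_alarm_rate):
    """Return sensitivity (d') and criterion (c) for a yes/no task."""
    z_hit = norm.ppf(hit_rate)
    z_fa = norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa              # perceptual sensitivity
    criterion = -0.5 * (z_hit + z_fa)   # decisional bias
    return d_prime, criterion

# Hypothetical rates: both "groups" have the same d' but different criteria.
controls = sdt_indices(hit_rate=0.84, false_alarm_rate=0.16)  # d' ~ 2, c ~ 0
patients = sdt_indices(hit_rate=0.69, false_alarm_rate=0.07)  # d' ~ 2, c ~ 0.5

print(controls)
print(patients)
```

In this toy case the lower hit rate of the second group reflects a more conservative criterion, not reduced sensitivity, which is exactly the distinction the second research line targets.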


Disrupting the prediction of visual motion (DIVISMO)

PI: Cristina de la Malla

Related to lines:

Decision Making in Complex Environments & Psychophysics
Eye Movements & Perception and Action

Description:

Many tasks that we perform in our daily life involve interacting with moving targets. For example, playing sports, crossing a road, or driving require interacting with dynamically changing environments in which objects move. Assessing how these objects move is necessary to interact with them successfully. In general, motion is perceived when an object changes its spatial position over time. This percept is thought to be achieved via two mechanisms that can be disentangled: a displacement-based mechanism that tracks changes in position, and a velocity-based mechanism. Even though these two sources of information can be differentiated, they are clearly related and interact, as demonstrated by perceptual illusions (e.g. the De Valois illusion) and phenomena such as motion after-effects.

When processing information about moving stimuli, the time it takes for visual information to reach brain areas and be processed matters, because it implies that we are constantly processing information about past positions of objects. That is, by the time we can tell where a moving object is, the object has already moved somewhere else. To deal with such processing delays and succeed in our actions, we make predictions about where objects will be in the future. Several studies have suggested that information about the velocity at which an object moves is what allows us to make these predictions; velocity is thus often understood as a predictive component for updating position. The main question this project (DIVISMO) addresses is what happens when velocity information, as the predictive component of the motion system, is disrupted. To study this, we will use different paradigms (occlusions, luminance manipulations, and variability in object velocity) that impair the perception of the velocity at which an object moves.

Furthermore, this project adopts a perspective in which observers are active components of the scene: observers actively seek information in the environment that can help them succeed in their decisions and actions. As different studies have shown, many of the movements we make (eye, and even head and torso, movements) are directed at gathering information about objects or parts of the scene we have to interact with. For this reason, a second main objective of this project is to examine whether disrupting velocity information changes the way we move in order to compensate for the loss of that information. This would tell us whether observers prefer to update their estimates of how targets move rather than rely on predictions. We will take a multidisciplinary approach that includes psychophysics and decision-making paradigms as well as recordings of eye, head, and hand movements. Achieving our research goals will not only advance knowledge of how motion information is used by the visual system but can also have an impact on the development of virtual and immersive environments, one of the fastest-growing industries.
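As a rough illustration of the delay-compensation idea described above, the sketch below extrapolates a target's position from a velocity estimate obtained from delayed samples; the delay value and motion profile are arbitrary assumptions, not parameters of the project.

```python
# Illustrative sketch (not the project's model): compensating a visual delay
# by extrapolating target position from an estimate of its velocity.
import numpy as np

delay = 0.1            # assumed visuomotor delay in seconds
dt = 0.01              # sampling step
t = np.arange(0, 1, dt)
true_pos = 2.0 * t     # target moving at a constant 2 deg/s

# The observer only has access to delayed position samples.
delayed_pos = np.interp(t - delay, t, true_pos, left=np.nan)

# Velocity estimated from the delayed samples and used to extrapolate forward.
est_vel = np.gradient(delayed_pos, dt)
predicted_pos = delayed_pos + est_vel * delay

# With a reliable velocity signal the prediction matches the true position;
# degrading the velocity estimate (e.g. adding noise) reintroduces a lag.
print(np.nanmax(np.abs(predicted_pos - true_pos)))  # ~0 for constant velocity
```

Degrading the velocity estimate in such a scheme (for instance by adding noise or removing samples, as the occlusion and luminance manipulations do experimentally) is what leaves the prediction lagging behind the target.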


The effect of visual variability on perception and action in complex environments (VISVAR)

PI: Cristina de la Malla

Related to lines:

Eye Movements & Perception and Action

Description:

Many of the actions we perform in our daily life, such as driving or playing sports, take place in dynamic, complex environments. Compared with low-variability contexts, where there are fewer perceptual variables to analyze, acting in highly variable environments is harder because making predictions about future states of the world becomes more difficult. To act in these environments we first need to visually explore the scene to gather information about different perceptual variables. For example, crossing a road requires determining how many cars there are, their distance, their direction, and the speed at which they move. This information is used to evaluate the risk of a response and to decide whether and, if so, how and when to act. It follows that perception and decision-making are heavily intertwined in sensorimotor tasks involving risk. Errors in perception are known to lead to errors in action, so it is important to judge perceptual variables accurately to interact successfully with the environment. However, recent studies have shown that increases in the variability within a scene make us more conservative in our decisions about whether to act, even though our ability to estimate the perceptual variables of interest does not worsen. The aim of VISVAR is to investigate in depth, and for the first time, the relative contribution of perceptual and decisional components to risk-taking in sensorimotor tasks. More specifically, we want to investigate how the confidence with which we judge perceptual variables, our own motor variability, and the eye-movement patterns with which we explore the scene influence perceptual and decisional processes in these kinds of tasks. To do so, we will develop paradigms that allow us to disentangle perceptual from decisional components in sensorimotor tasks, and we will use state-of-the-art virtual reality to simulate real-world dynamic environments. Importantly, we will look at common factors that may underlie the change in the criteria we use to act in different situations. Deciphering the processes underlying risk behaviour in sensorimotor tasks is the first step towards unravelling common principles shared across a variety of fields (e.g., economics) and situations. The results and knowledge produced by VISVAR are therefore of potential interest for the development of learning and training programmes, as well as for prevention and intervention plans aimed at reducing dangerous risk behaviour.
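As a rough illustration of why variability can make decisions more conservative without changing perceptual accuracy, the toy model below applies a fixed probability threshold to a road-crossing decision; all numbers are hypothetical and the rule is not the model used in VISVAR.

```python
# Toy sketch (hypothetical numbers): a cross/don't-cross decision based on the
# probability that the time gap to an approaching car exceeds the time needed
# to cross. Higher variability in the estimates lowers that probability for
# the same mean gap, so a fixed probability threshold yields fewer crossings.
from scipy.stats import norm

def p_safe(mean_gap, gap_sd, crossing_time, crossing_sd):
    """P(time gap > crossing time), assuming independent Gaussian estimates."""
    mean_margin = mean_gap - crossing_time
    sd_margin = (gap_sd**2 + crossing_sd**2) ** 0.5
    return 1 - norm.cdf(0, loc=mean_margin, scale=sd_margin)

threshold = 0.95  # only cross when crossing is judged safe with p >= 0.95

low_var = p_safe(mean_gap=4.0, gap_sd=0.3, crossing_time=3.0, crossing_sd=0.2)
high_var = p_safe(mean_gap=4.0, gap_sd=0.8, crossing_time=3.0, crossing_sd=0.5)

print(low_var >= threshold, high_var >= threshold)   # True False
```

In this toy rule the mean estimates are identical in both conditions; only their uncertainty differs, yet the action is withheld in the high-variability case, mirroring the criterion shift the project aims to explain.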

 


Updating 3D world states from the optic flow (UPFLOW)

PI: Joan López-Moliner

Related to lines:

Eye Movements & Perception and Action

Description:

The optic flow (i.e. the dynamic pattern of the retinal image) is thought to provide basic information supporting many daily actions (e.g. navigation, collision avoidance) and is becoming theoretically central to building sensors for future drones that pick up simple visual features. Despite the potential of optic flow to provide rich visual information about the space around us, there has been little or no work on processing optic-flow signals to recover accurate 3D metrics of displacement or velocity. The complexity and variability of optic variables (e.g. eye movements dramatically change the optic flow produced by the same 3D scene) may be why previous studies have not pursued this path. In this project we aim at (a) testing the capabilities of our perceptual system to extract accurate and precise metrics from local optic-flow signals under different conditions of eye-movement behavior, which change the retinal stimulation pattern produced by the same 3D layout; and (b) providing the optic flow with a spectral structure consistent with natural image statistics. The latter goal involves devising new optic-flow stimuli in the hope that they will increase performance with minimal exposure to visual information. By combining psychophysics and Bayesian tools we will be able to characterize the accuracy and precision of the 3D metrics recovered from optic flow and to reveal the posterior beliefs corresponding to different scenes. The results are potentially relevant for the design of virtual environments that aim at achieving maximally immersive experiences and high performance with minimal visual information.
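As a rough illustration of the kind of Bayesian characterization mentioned above, the sketch below combines a noisy flow-based speed measurement with a Gaussian slow-speed prior to obtain a posterior over speed; the prior width and noise levels are illustrative assumptions, not project estimates.

```python
# Toy sketch of a Bayesian characterization: a Gaussian likelihood for the
# speed signalled by local flow, combined with a Gaussian slow-speed prior,
# gives a closed-form posterior over the target's speed.
# Numbers and prior width are illustrative assumptions, not project values.
import numpy as np

def posterior_speed(measured, meas_sd, prior_mean=0.0, prior_sd=2.0):
    """Posterior mean and sd for speed given a noisy flow-based measurement."""
    w = prior_sd**2 / (prior_sd**2 + meas_sd**2)   # reliability weighting
    post_mean = w * measured + (1 - w) * prior_mean
    post_sd = np.sqrt((prior_sd**2 * meas_sd**2) / (prior_sd**2 + meas_sd**2))
    return post_mean, post_sd

# A noisier flow signal (e.g. under retinal stabilization or low contrast)
# pulls the posterior more strongly towards the slow-speed prior.
print(posterior_speed(measured=1.5, meas_sd=0.2))  # close to the measurement
print(posterior_speed(measured=1.5, meas_sd=1.5))  # shrunk towards zero
```

Comparing posteriors of this kind across eye-movement conditions is one simple way to express how accuracy and precision of the recovered metrics change when the retinal flow pattern changes.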