Information available in the optic flow allows people to control interceptive and collision-avoidance actions. From a Gibsonian perspective, all that matters is in the optic array, and encoding optical variables would suffice to control action; recovering physical variables is usually dismissed because the ambiguity of the retinal image makes such computations too complex. I have been developing models in which optical variables are not only encoded but also decoded to provide a minimal functional 3D model of the environment. This 3D model encapsulates prior information about size and other variables relevant for timing actions. For example, when we catch a ball, the haptic sensory consequences provide veridical size information, and this can be used to calibrate the optic flow. Similar principles apply to driving and collision avoidance. We are working on a general model of interception based on these concepts; the future model will incorporate both temporal estimates for timing actions and spatial information about the region of interest for interception.
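The decoding idea can be sketched with a toy computation. Assuming a small-angle approximation and a first-order (tau) estimate of time to contact, a calibrated size prior (e.g. from haptics) lets two optical variables, the visual angle and its rate of expansion, be decoded into distance, time to contact, and approach speed. The function name and the numbers below are illustrative, not part of the actual model:

```python
# Hypothetical sketch: decoding a minimal 3D estimate from optical
# variables using a known-size prior. Not the published model, just
# an illustration of the encode/decode principle.

def decode_from_optics(theta, theta_dot, size_prior):
    """theta: visual angle (rad); theta_dot: its rate of change (rad/s);
    size_prior: calibrated physical size (m), e.g. from haptic feedback."""
    distance = size_prior / theta   # small-angle approximation
    ttc = theta / theta_dot         # tau: first-order time to contact
    approach_speed = distance / ttc
    return distance, ttc, approach_speed

# A ball of ~0.067 m diameter subtending 0.01 rad, expanding at 0.005 rad/s:
d, ttc, v = decode_from_optics(0.01, 0.005, 0.067)
# distance 6.7 m, time to contact 2.0 s, approach speed 3.35 m/s
```

Note that without the size prior only the ratio theta/theta_dot (time to contact) is recoverable; distance and speed require the calibration step.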
In interceptive tasks you can commit spatial or temporal errors, and we address the question of which kind of error people use to correct their movements. Consider the case in which you aim for a moving target but miss because your hand lands at a position different from the one the movement was planned for. Assuming there was no correction during the movement (i.e. an open-loop movement), you have committed a spatial error: you end up to the right of the aimed position.
Now, let us consider a slightly different possibility. As shown in the second animation, your hand lands on the aimed position, so spatially you were correct, but you arrived too early: you made a temporal error. In real-world interception, errors can have both temporal and spatial components, but the two are difficult to disentangle because it is hard to infer which position you were aiming for.
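If the aimed position were known, the decomposition itself would be straightforward. The sketch below (hypothetical function and values, for a target moving at constant speed along one axis) shows the two components; the real difficulty, as noted above, is that the aimed position must be inferred rather than observed:

```python
# Illustrative sketch: separating spatial and temporal error components
# when the aimed position is known. In real data this quantity is the
# hard part to infer.

def decompose_error(aimed_x, hand_x, hand_t, target_x0, target_v):
    """Target trajectory: x(t) = target_x0 + target_v * t (constant speed)."""
    spatial_error = hand_x - aimed_x          # landed vs. planned position
    # time at which the target actually passes the hand's landing position
    t_target_at_hand = (hand_x - target_x0) / target_v
    temporal_error = hand_t - t_target_at_hand  # negative = early, positive = late
    return spatial_error, temporal_error

# Hand lands 0.05 m to the right of the aimed spot, 20 ms before the target:
s_err, t_err = decompose_error(aimed_x=0.30, hand_x=0.35, hand_t=0.68,
                               target_x0=0.0, target_v=0.5)
# spatial error 0.05 m, temporal error -0.02 s (early)
```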
One of our research lines addresses how people adapt to sensory information that arrives delayed (e.g. due to system latencies) after a motor command has been issued. In a couple of publications we have shown that people learn to control their delayed sensory consequences, but this learning does not lead to a general temporal adaptation. This has important consequences for future technologies such as teleoperation or drone control.