The machinery behind the visual perception of motion and the subsequent sensorimotor transformation, such as in the Ocular Following Response (OFR), is confronted with uncertainties which are efficiently resolved by the primate visual system. This response may be understood as an ideal observer in a probabilistic framework using Bayesian theory (Weiss et al., 2002), which we previously showed to successfully model the OFR at different noise levels with full-field gratings (Perrinet, 2005, ECVP; Perrinet, 2006, FENS; Perrinet, 2007, Sec. 2.3).
In general, a Bayesian model is defined by introducing a prior for the inference of a latent state. For motion processing, this takes the form of a prior favoring slow speeds, as these are physically more probable given an observed motion signal. In particular, the dynamics of short-latency behavioral responses suggested that information is processed in a two-pathway Bayesian model that separates 1D cues from 2D cues (Barthélemy, 2007, Vision Research). However, these observations remain rather descriptive, and the function and mechanisms underlying the separation between 1D and 2D cues remain to be discovered.
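For Gaussian likelihood and prior, this slow-speed bias has a simple closed form: the MAP estimate is the observed velocity shrunk toward zero, with a shrinkage that grows as the measurement gets noisier, e.g. at low contrast. The following is a minimal sketch of this idea; the function and parameter names are illustrative and not taken from the original model:

```python
def map_speed(v_obs, sigma_like, sigma_prior=1.0):
    """MAP speed estimate under a zero-centered Gaussian slow-speed prior.

    Posterior: p(v | v_obs) ∝ N(v_obs; v, sigma_like^2) * N(v; 0, sigma_prior^2).
    For Gaussians the MAP (= posterior mean) is a precision-weighted
    average of the observation and the prior mean (here, zero).
    """
    w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
    return w * v_obs

# High contrast -> narrow likelihood: estimate stays close to the observation.
print(map_speed(10.0, sigma_like=0.1))  # ≈ 9.9
# Low contrast -> broad likelihood: the prior pulls the estimate toward slow speeds.
print(map_speed(10.0, sigma_like=2.0))  # → 2.0
```

This reproduces the qualitative prediction of Weiss et al. (2002): as contrast drops, perceived (and tracked) speed is increasingly biased toward zero.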
In that direction, more recent OFR experiments have used disk gratings and bipartite stimuli, which are optimized to study the dynamics of center-surround integration, and for which we extended the previous model using the integration of independent spatiotemporal "modules". These models show behavior similar to physiological data (Perrinet, 2007, Journal of Physiology (Paris); Perrinet, 2008, COSYNE) and may be compared to the Ratio-of-Gaussians model (Perrinet, 2008, AREADNE). We also modeled the dynamical properties of motion perception in the visual flow as the recurrent interaction of elementary inferential processes (Montagnini, 2007).
The emergent properties of the system allow us to predict and understand some aspects of the psychophysical and neurophysiological observations obtained in the DyVA team, and to propose an architecture for understanding the properties of cortical processing for visual functions (see this FACETS presentation). In particular, it makes it possible to compare the relative importance of the feedforward, lateral and feedback streams of information in the visual architecture.

Figure 1. Basic properties of human OFR. Several properties of motion integration driving ocular following, as summarized from our previous work. (a) A leftward drifting grating elicits a brief acceleration of the eye in the leftward direction. Mean eye-velocity profiles illustrate that both response amplitude and latency are affected by the contrast of the sine-wave grating, given by the numbers at the right end of the curves. Quantitative estimates of the sensorimotor transformation are given by measuring the response amplitude (i.e. change in eye position) over a fixed time window at response onset. Relationships between (b) response latency or (c) initial amplitude and contrast are illustrated for the same grating motion condition. These curves define the contrast response function (CRF) of the sensorimotor transformation and are best fitted by a Naka–Rushton function (reprinted from Barthélemy et al., 2007). (d) At fixed contrast, the size of the circular aperture can be varied to probe the spatial summation of the OFR. Response amplitude first grows linearly with stimulus size before reaching an optimal size, the integration zone. For larger stimulus sizes, response amplitudes are lowered (reprinted from Barthélemy et al., 2006). (e) OFRs are recorded for center-alone and center-surround stimuli. The contrast of the center stimulus is varied to measure the contrast response function and compute the contrast gain of the sensorimotor transformation at both an early and a late phase during response onset. Open symbols are data obtained for a center-alone stimulus, similar to those illustrated in (c). When adding a flickering surround, one can see that the late (but not early) contrast gain is lowered, as illustrated by a rightward shift of the contrast response function (Barthélemy et al., 2006).
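The Naka–Rushton function used to fit these contrast response curves is R(c) = Rmax · c^n / (c^n + c50^n), where c50 is the semi-saturation contrast; a surround-induced loss of contrast gain, as in panel (e), appears as a rightward shift of the curve, i.e. a larger c50. A minimal sketch with illustrative parameter values (not the fitted values from the cited studies):

```python
def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
    """Naka-Rushton contrast response function.

    c     : stimulus contrast in [0, 1]
    r_max : saturating response amplitude
    c50   : semi-saturation contrast (response = r_max / 2 at c = c50)
    n     : exponent controlling the steepness of the curve
    """
    return r_max * c**n / (c**n + c50**n)

# By construction, the response at the semi-saturation contrast is half r_max:
print(naka_rushton(0.2))          # → 0.5
# Reduced contrast gain (larger c50) lowers the response at a given contrast:
print(naka_rushton(0.2, c50=0.4))  # → 0.2
```

Fitting this function to measured response amplitudes at each contrast (e.g. with a least-squares routine) yields the early- and late-phase contrast gains compared in panel (e).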
