Predictive processing as a unifying framework to understand cognition
The goal of this project is to challenge the idea that the functional architecture of cognition reflects an imperative to predict the present state of the world (including oneself). We will test this hypothesis by developing new computational paradigms inspired by the architecture and dynamics of the low-level visual system. My goal is to integrate the free-energy formalism developed by Professor Friston in order to generalize probabilistic models of the dynamics of eye movements across different time scales (coding, adaptation, learning). I detail a few key points of this research program below:
I have studied the hypothesis that, in the low-level system dedicated to eye movements, information is represented in a neural population as probabilities, and that its dynamics can be understood as successive inferences (Perrinet, 2012, Neural Computation). This representation has the advantage of being able to encode the precision of different hypotheses (e.g. different luminance values in a spatial region, different velocities, different trajectories). We showed that this fits observations made in the laboratory on eye movements at both the neurophysiological and behavioural levels. In particular, we have shown that such a model may compensate for axonal delays (Khoei, Masson and Perrinet, 2017, PLoS CB).
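The delay-compensation idea above can be sketched with a standard tool: a Kalman filter tracking a moving target from delayed position samples, whose posterior is then extrapolated forward across the sensory delay to estimate the present state. This is a minimal illustration, not the model of the cited papers; the function name and all parameter values (`dt`, `delay`, noise levels) are illustrative assumptions.

```python
import numpy as np

def predict_present(observations, dt=0.01, delay=0.1, q=1e-4, r=1e-2):
    """Track a target from delayed position samples with a
    constant-velocity Kalman filter, then extrapolate the posterior
    mean across the sensory delay (illustrative sketch)."""
    # state: [position, velocity]; constant-velocity dynamics
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])       # we only observe position
    Q = q * np.eye(2)                # process noise covariance
    R = np.array([[r]])              # observation noise covariance
    x = np.zeros((2, 1))
    P = np.eye(2)
    for z in observations:
        # prediction step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step with the (delayed) measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    # extrapolate the estimate across the delay to "now"
    n = int(round(delay / dt))
    Fd = np.linalg.matrix_power(F, n)
    return (Fd @ x).ravel()          # [predicted position, velocity]
```

For a target moving at constant velocity, the extrapolated position lands ahead of the last delayed sample by velocity times delay, which is the sense in which prediction compensates for the latency of the sensory signal.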
However, to date there is no full model of eye movements involving both a probabilistic approach and a learning algorithm for natural scenes. My goal is to integrate the free-energy formalism developed by Professor Friston to study the dynamics of eye movements. The laboratory of Karl Friston is part of the Wellcome Trust Centre for Neuroimaging, a world-renowned research centre. It has expertise at the interface between biology, mathematics and physics, which allows an integrated, multi-disciplinary computational approach to brain function. Professor Friston is an authority in the neuroscience community for his experimental and theoretical contributions. In particular, he developed the technique of Statistical Parametric Mapping (SPM), which is the gold standard for the processing of neuro-imaging data. Most importantly for this collaboration, Prof. Friston established a unifying theory of cognitive processes based on a Bayesian formalism (the "free-energy principle") that is central to this project. An application of this project is to generalize our hypothesis across different time scales, building on accomplished applications of the free-energy principle to various areas, from birdsong analysis to neuro-imaging. In particular, such an approach allows us to work at different temporal scales, from coding (seconds) to adaptation (minutes) and learning (hours to years), and could also explain the optimal selection of some features in evolution (generations). We have contributed such applications to saccades (Friston et al., 2012), schizophrenia (Adams, Perrinet and Friston, 2012) and the dynamics of the oculomotor system (Perrinet, Adams and Friston, 2014).
In probabilistic inference, a critical (and often dismissed) question is to elucidate how internal knowledge (i.e. the prior distributions of Bayesian models) is built from previous experience and how it is updated dynamically. This theoretical question will frame a learning scheme adapting probabilistic representations to the statistics of the observed sequence of events in an experimental block. In particular, we will hypothesize that the agent modeled by the Bayesian model assumes a generative model for the production of events. This generative model produces random events whose sufficient statistics are stationary within each block (for instance the probability of a bias toward a given direction), while each block in the whole sequence has a random length with a fixed average. Then, by extending previous theoretical work by Adams and MacKay (2007), it can be shown that one can infer at each trial the probability of the next outcome using an estimate of the current block length. Crucially, the model also infers the confidence associated with this decision. Such a model can also be extended into a hierarchical model to account for more complex transition probabilities in the sequence of events (i.e. second-order transitions). Preliminary results show that these models provide significantly better fits to the behavioral data (with the same number of parameters) when compared with a fixed-length model (Pasturel, Montagnini and Perrinet, 2017).
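The inference scheme described above can be sketched with Bayesian online changepoint detection in the style of Adams and MacKay (2007), here for a binary (Bernoulli) sequence with a Beta prior on the block's bias: at each trial the model maintains a posterior over the current run (block) length and mixes the per-run-length predictions into a probability for the next outcome. The function name, the constant hazard rate and the prior parameters are illustrative assumptions, not the fitted model of the cited preliminary results.

```python
import numpy as np

def bocd_bernoulli(seq, hazard=1 / 40, a0=1.0, b0=1.0):
    """Online changepoint inference for a Bernoulli sequence
    (after Adams & MacKay, 2007). Beta(a0, b0) prior on the bias,
    constant hazard rate for block endings. Returns the predictive
    probability of outcome 1 before each trial, and the final
    posterior over run lengths (illustrative sketch)."""
    T = len(seq)
    r = np.zeros(T + 1)          # r[l] = P(current run length = l)
    r[0] = 1.0                   # a block starts at t = 0
    a = np.array([a0])           # Beta parameters, one per run length
    b = np.array([b0])
    preds = np.zeros(T)
    for t, x in enumerate(seq):
        # predictive probability of x = 1 under each run length
        p1 = a / (a + b)
        preds[t] = np.dot(r[:t + 1], p1)
        like = p1 if x == 1 else 1.0 - p1
        # run continues (growth) vs. a new block starts (changepoint)
        growth = r[:t + 1] * like * (1 - hazard)
        cp = np.sum(r[:t + 1] * like * hazard)
        r[:t + 2] = np.concatenate(([cp], growth))
        r[:t + 2] /= r[:t + 2].sum()
        # update sufficient statistics for each run length
        a = np.concatenate(([a0], a + x))
        b = np.concatenate(([b0], b + 1 - x))
    return preds, r[:T + 1]
```

The spread of the run-length posterior `r` directly quantifies the confidence mentioned above: a posterior concentrated on one run length signals high confidence in the current block's statistics, while mass spread across many run lengths signals uncertainty about whether a changepoint has just occurred.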