Let's admit it: brains are not computers. Indeed, computers still fall short of perceptual systems. Think, for instance, of an architecture capable of handling the noisy, ambiguous and rapidly varying raw data that reaches your senses, or of an architecture that does so in an autonomous manner...
To narrow the gap between neuroscience and the theory of sensory processing, I am interested in bridging the statistics of geometrical regularities in natural scenes with the properties of neural computations as they are observed in low-level sensory processes or through low-level behavior. For instance, does the predictability of the motion of objects in physical space have an impact on the activity of the neurons that detect it, and subsequently on eye movements? What mechanisms are used to learn such statistical regularities? What happens if these adaptive mechanisms are dysfunctional?
New computational paradigms for vision
Vision exhibits many computational properties that are out of reach of traditional computer architectures: highly distributed representations, fault-tolerant computations, dynamical outputs. To understand the computational principles underlying low-level vision, we study models using probabilistic representations; probabilities act as a universal, distributed representation for statistical inference. For instance, we implement highly connected, large-scale representations of a motion field that implement predictive coding. This shows that computational properties may emerge from simple rules implemented in such complex systems.

A theoretical challenge is to link probabilistic models with dynamical systems modeling neural networks. This step is crucial to connect functional models of information processing with realistic neural networks and to elucidate the contribution of the different scales of recurrent connectivity to the observed neural activity and behavior. In particular, when applying these systems to natural scenes, we study the emergence of hierarchical computational properties once plasticity rules are included. These studies will allow, in the future, the transfer of the probabilistic framework to the range of platforms provided by large-scale simulations of neural networks, such as the aVLSI wafers developed in the BrainScaleS consortium.

Finally, these studies are validated by a decoding strategy that links biology with the models. Indeed, these new computational paradigms are inspired by biology, and it is essential to confront these new classes of algorithms with what is actually observed experimentally in the team. A central tool is information theory, which allows us to estimate from experimental recordings the critical parameters underlying efficient processing. In particular, we focus on the lateral propagation of contextual information. This provides a set of tools that allows hand-in-hand cooperation with neurophysiological and behavioral studies.
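As a minimal illustration of how a probabilistic representation supports inference on motion, the sketch below combines a Gaussian likelihood for a noisy local velocity measurement with a zero-centered "slow speed" prior (a standard Bayesian model of motion perception). All parameter values and the function name are illustrative, not taken from the team's models.

```python
# Hedged sketch: Bayesian read-out of a noisy local velocity measurement,
# combining a Gaussian likelihood with a zero-centered slow-speed prior.
# Parameter values are illustrative.

def posterior_velocity(v_measured, sigma_likelihood, sigma_prior):
    """MAP velocity estimate under conjugate Gaussians.

    The posterior mean is the measurement shrunk toward the prior
    mean (0 here) by a reliability-dependent weight.
    """
    w = sigma_prior**2 / (sigma_prior**2 + sigma_likelihood**2)
    return w * v_measured

# Noisy (e.g. low-contrast) measurements are biased more strongly
# toward slow speeds than reliable ones.
print(posterior_velocity(10.0, sigma_likelihood=1.0, sigma_prior=3.0))  # 9.0
print(posterior_velocity(10.0, sigma_likelihood=3.0, sigma_prior=3.0))  # 5.0
```

This reliability-weighted shrinkage is one simple way such a distributed probabilistic code can produce contrast-dependent perceptual biases.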
Optimal spatio-temporal integration in low-level vision
A central challenge in neuroscience is to explain how local information processed by neurons is integrated into a global response at a higher level, such as the population level. An illustration of this problem is found in the early stages of the visual system, such as area MT, where each neuron only has access to a limited portion of visual space, its receptive field. We propose in this task to explore optimal solutions for information transfer in the context of the spatio-temporal integration of local, noisy and ambiguous data. This first implies a rigorous definition of the probabilistic representation in the neural activity and of how it may be used to reach optimal decisions. In particular, this is tightly coupled with visual resolution, for instance the discrimination of different motions. In the context of coding natural scenes, this representation will emerge from information maximization principles and define priors for building inference in the cortex. We will confront the solutions that emerge from probabilistic models with neurophysiological imaging studies conducted in the primary visual areas and with behavioral results. The long-term objective is a more complete understanding of the receptive fields of neurons at the different scales of the brain, from single neurons to neural populations, areas and behavior.
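To make "optimal integration" concrete, here is a toy sketch of the textbook maximum-likelihood solution: under independent Gaussian noise, pooling local measurements from several receptive fields by inverse-variance weighting yields a global estimate more reliable than any single measurement. The noise levels and seed are invented for illustration.

```python
import numpy as np

# Hedged sketch: optimal pooling of local, noisy motion measurements.
# Under independent Gaussian noise, the maximum-likelihood global
# estimate is the inverse-variance weighted average.

rng = np.random.default_rng(0)
true_velocity = 2.0
sigmas = np.array([0.5, 1.0, 2.0, 4.0])     # per-receptive-field noise levels
samples = true_velocity + sigmas * rng.standard_normal(sigmas.size)

weights = 1.0 / sigmas**2                   # reliability of each measurement
v_hat = np.sum(weights * samples) / np.sum(weights)
sigma_hat = np.sqrt(1.0 / np.sum(weights))  # pooled uncertainty

# The pooled uncertainty is below that of the single best measurement.
print(v_hat, sigma_hat)
```

The same weighting falls out of a probabilistic population code read-out, which is why it is a natural benchmark against which to compare neural and behavioral integration.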
Emergence in the topographical and hierarchical architecture of the low-level visual system
Neural computations diverge from the classical von Neumann architecture that formalizes standard computers through their parallel, asynchronous and dynamic nature. Computations in the low-level visual system are tightly linked to the representation of visual information. We study how the topographical, hierarchical architecture may emerge as an optimal representation (such as in http://topographica.org/). At the scale of neurons, we explore models of sparse coding that may provide a rationale for gain-control mechanisms such as divisive inhibition.
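For concreteness, divisive inhibition is commonly modeled as divisive normalization: each filter's response is divided by the pooled activity of the population. The sketch below is a minimal, generic version of that model; the exponent and semi-saturation constant are illustrative, not fitted values.

```python
import numpy as np

# Hedged sketch of divisive inhibition (normalization): each response is
# divided by the pooled population activity plus a semi-saturation
# constant. Exponent n and sigma are illustrative.

def divisive_normalization(responses, sigma=0.1, n=2.0):
    r = np.abs(responses)**n
    return r / (sigma**n + r.sum())

r = np.array([1.0, 0.5, 0.1])
out = divisive_normalization(r)
print(out)  # relative order preserved, total activity bounded
```

In a sparse coding account, this kind of gain control is interpreted as competition between filters for explaining the input, which keeps the population output bounded while preserving the rank order of responses.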
As an alternative to classical representations in machine-learning algorithms, we explore coding strategies using events, as observed for spiking neurons in the central nervous system. Focusing on the primary visual cortex (V1), we have previously shown that we may define a sparse spike coding scheme by implementing lateral interactions corresponding to a correlation-based inhibition (Perrinet, 2002). This class of algorithms is compatible both with biological constraints and with neurophysiological observations, and yields an efficient event-based computing algorithm. We explore here learning mechanisms to derive, in an unsupervised manner, an optimal overcomplete set of filters based on the previous work of Olshausen and Field, and show its biological relevance. In particular, we have studied the role of homeostasis in the efficiency of the resulting set of receptive fields (Perrinet, 2010).
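A standard way to realize such event-based sparse coding is matching pursuit: greedily select the filter most correlated with the residual, emit an event (filter index, coefficient), and subtract that filter's contribution, so that correlations between filters play the role of lateral inhibition. The toy dictionary and signal below are invented for illustration and are not the published algorithm's actual parameters.

```python
import numpy as np

# Hedged sketch of sparse spike coding via matching pursuit. Each event
# is (filter index, coefficient); subtracting the winning filter's
# contribution implements a correlation-based inhibition of overlapping
# filters. Dictionary and signal are toy examples.

def matching_pursuit(signal, dictionary, n_events=3):
    # dictionary: (n_filters, n_dims), rows normalized to unit length
    residual = signal.astype(float).copy()
    events = []
    for _ in range(n_events):
        corr = dictionary @ residual          # correlation with each filter
        i = int(np.argmax(np.abs(corr)))      # winner-take-all selection
        events.append((i, float(corr[i])))
        residual -= corr[i] * dictionary[i]   # subtract its contribution
    return events, residual

rng = np.random.default_rng(1)
D = rng.standard_normal((8, 16))
D /= np.linalg.norm(D, axis=1, keepdims=True)
x = 2.0 * D[3] - 1.0 * D[5]                   # signal built from two filters
events, res = matching_pursuit(x, D, n_events=5)
print(events)  # residual energy shrinks with each emitted event
```

Because each event removes the projection onto a unit-norm filter, the residual energy decreases monotonically, which is what makes the ranked stream of events an efficient progressive code.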
What is the role of spiking neurons in the neural code?
Information in the central nervous system is propagated by electro-chemical signals which often take the form of short pulses ---or spikes--- especially when connecting neurons over some distance. I focus on studying why this solution was privileged during evolution and the consequences it has on the properties and efficiency of cognitive abilities, such as the visual perception of motion.
What is the representation used by spiking neurons? How are inferences coded in spikes (Perrinet, 2005)?
How can we derive an efficient, plausible model of learning using Sparse Hebbian Learning? Does efficiency translate into the sparseness of spiking representations?
How do these algorithms apply to the visual perception of motion?
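To illustrate the Sparse Hebbian Learning question above, here is a minimal, hypothetical loop: one matching-pursuit event per input, a Hebbian update of the winning filter, and a homeostatic norm constraint. Sizes, the learning rate, and the random inputs are all illustrative stand-ins for natural image patches.

```python
import numpy as np

# Hedged sketch of Sparse Hebbian Learning: alternate a sparse coding
# step (a single matching-pursuit event per input) with a Hebbian update
# of the selected filter, plus a homeostatic norm constraint.

rng = np.random.default_rng(2)
n_filters, n_dims, eta = 6, 12, 0.1
D = rng.standard_normal((n_filters, n_dims))
D /= np.linalg.norm(D, axis=1, keepdims=True)

for _ in range(200):
    x = rng.standard_normal(n_dims)         # stand-in for a natural patch
    corr = D @ x
    i = int(np.argmax(np.abs(corr)))        # sparsest code: one event
    residual = x - corr[i] * D[i]
    D[i] += eta * corr[i] * residual        # Hebbian: pre- x post-activity
    D[i] /= np.linalg.norm(D[i])            # homeostatic norm constraint

print(np.linalg.norm(D, axis=1))            # filters remain unit norm
```

The norm constraint plays the role of the homeostasis studied in (Perrinet, 2010): without it, frequently selected filters would grow without bound and dominate the competition.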
We use Dynamical Neural Networks to analyze and simulate the behavior of neural models. We concentrate on plausible models of the cerebral cortex and on low- to mid-level cognitive abilities. Applied to the visual perception of motion, we study the integration of local spatio-temporal velocity information into more "high-level" motion information. Our main results produce novel ways of creating efficient algorithms designed for parallel machines, such as Sparse Spike Coding, and promote the application of neural algorithms to technology, especially signal and image processing (see for instance (Perrinet, 03, IEEE TNN) or (Fischer, 07)).
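As a minimal example of the kind of dynamical neural model simulated here, a leaky integrate-and-fire unit can be written in a few lines. The time constant, threshold, and input drive below are generic textbook values, not the team's actual model parameters.

```python
# Hedged sketch: a leaky integrate-and-fire unit, the simplest dynamical
# spiking neuron model. Parameters are illustrative textbook values.

def lif_spikes(current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron; return spike times (s)."""
    v, spikes = 0.0, []
    for step, i_t in enumerate(current):
        v += dt / tau * (-v + i_t)          # leaky integration of the input
        if v >= v_thresh:                   # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset                     # reset after the spike
    return spikes

spikes = lif_spikes([1.5] * 200)            # constant supra-threshold drive
print(len(spikes))                          # regular firing at a fixed rate
```

A constant supra-threshold input yields regular firing, the baseline against which event-based codes such as Sparse Spike Coding can be compared.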
Algorithms based on correlation-based inhibition may provide a sparse coding of signals and images (see Publications/Perrinet02sparse);
this type of representation is achievable through Sparse Hebbian Learning;
results on the perception of motion suggest that behavioral responses reflect the spatio-temporal integration of local information, which gets progressively disambiguated.