News & Events

New paper on Active inference, eye movements and oculomotor delays. (July 2014)

I defended my habilitation à diriger des recherches (HDR) on April 17th, 2014.

Our art/science project “Tropique” opened on October 10th, 2013.

New paper using motion-based prediction to predict eye movements during blanks. (Oct 2013)

New paper exhibiting motion-based prediction in spiking neural networks. (Sep 2013)

Our art/science project “Tropique” is entering its final year before being presented (Dec. 2012).

New paper using MotionClouds to dissociate perception and action accepted for publication in Nature Neuroscience: “More is not always better (…)” (September 2012)

New paper modeling saccades using free-energy minimization. (May 2012)

Our paper showing that motion-based prediction is sufficient to solve the aperture problem is accepted! (April 2012)

Our paper on MotionClouds is accepted! (March 2012)

I spoke about motion detection and eye movements at the FIL (London) on Friday, January 27th, 2012.

I spoke about edge statistics in natural images at ANC (Edinburgh) on Thursday, January 24th, 2012.

I spoke about motion-based prediction at UCL (London) on Thursday, January 12th, 2012.

New review paper on the behavioral receptive field underlying motion integration for primate tracking eye movements (March 2011).

FACETS CodeJam Workshop #4 (June 2010).

Contact Information
Business Card
Laurent Perrinet – Team InViBe
Institut de Neurosciences de la Timone UMR 7289
Aix Marseille Université, CNRS, 13385 Marseille cedex 5, France
Researcher
http://invibe.net/LaurentPerrinet

Work
Email

<Laurent DOT Perrinet AT univ-amu DOT fr>

Address

Institut de Neurosciences de la Timone (UMR 7289)
Aix Marseille Université, CNRS
Faculté de Médecine – Bâtiment Neurosciences
27, Bd Jean Moulin
13385 Marseille Cedex 05
France

Phone

+33.491 324 044

Personal
Email

<Laurent DOT Perrinet AT gmail DOT com>

Mobile

+33 6 19 47 81 20

Social networks

CiteULike
Mendeley
SCOPUS
ORCID
Google scholar
ResearcherID
G+
FB

v1_tiger.gif

Figure 1: Progressive reconstruction of the spiking image in the primary visual cortex. To illustrate that the visual information is contained in the spike code, we show the theoretical reconstruction of the Tiger image using the algorithm presented in the paper. The different edges are extracted using a sparse coding scheme that captures the most salient information first. This reconstruction would correspond to the reconstruction of the image in an afferent area using the spiking information only. This particular reconstruction of the 256×256 image used a steerable pyramid with 8 different orientations as the linear transform. The theoretical compression rate is comparable to JPEG at low bpp (Fig. 2) and is more efficient than the retina model (compare with Lena).

Computers are too reliable to truly replace humans.