ANR Horizontal-V1 (2017–2021): Horizontal Connectivity and Prediction of Coherence in Contour and Motion Integration in the Primary Visual Cortex

The Horizontal-V1 project aims at understanding the emergence of sensory predictions linking local shape attributes (orientation, contour) to global cues of movement (direction, speed, trajectory) at the earliest stage of cortical processing (primary visual cortex, i.e. V1). We will study how the long-distance "horizontal" connectivity intrinsic to V1 and the feedback from higher cortical areas contribute to a dynamic local-to-global processing of features as a function of context (e.g. displacement along a trajectory, or reafference changes induced by eye movements). We will seek to characterize the dynamic processes, based on lateral propagation within V1, through which spatio-temporal inferences (continuous motion or apparent-motion sequences) may generate expected future responses, whether spatial ("filling-in") or positional ("flash-lag"). The project will use a variety of animations of local oriented stimuli forming, according to their spatial and temporal coherence, predictable global patterns, apparent-motion sequences and/or continuous trajectories. We will measure cortical dynamics at two scales of neuronal integration, from microscopic (intracellular, SUA) to mesoscopic levels (multi-electrode arrays (MEA) and voltage-sensitive dye imaging (VSDI)), in the anesthetized (cat, marmoset) and awake fixating animal (Macaca mulatta). In a second step, we will combine these multiscale observations to constrain a structuro-functional model of low-level perception integrating the micro- and meso-scale constraints. Two laboratories will collaborate in synergy on the project: UNIC-Gif (Dir. Yves Frégnac, DRCE2 CNRS, coordinator) and INT-Marseille (InVibe Team, Dir. Frederic Chavane, DR2 CNRS).

WP3 - Design of novel visual paradigms, probabilistic model of V1 and data-driven simulations - co-lead UNIC-INT.

Objectives: This WP has two primary goals. The first is theoretically driven and, for the sake of simplicity, will ignore the dynamic features of neural integration (as expected from a statistical model of image analysis). Binding the different features of visual objects at the local scale (contours) as well as at a more global level requires understanding the statistical regularities of the sensory inflow. In particular, quantifying the predictions that can be made at the purely statistical level can be seen as a first pass to identify the critical parameters constraining network behaviour. From these, we will build probabilistic predictive models optimized for edge co-occurrence classification and generate novel visual statistics 1) which obey rules imposed by the anisotropies of the functional horizontal connectivity, such as co-circularity, and 2) which facilitate binding in the orientation domain, such as log-polar planforms. These statistics, generated in the first half of the grant, will be implemented and tested experimentally in the second half. The second goal is more data-driven (as well as phenomenological for the feedback from higher cortical areas, since this feedback will not be explored experimentally within the grant). Since model fitting will depend on close interactions with the WP1 and WP2 measurements, it will be carried out in the second half of the grant.
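As an illustration of the kind of second-order statistics targeted here, the sketch below (a minimal example, not project code) computes an edge co-occurrence histogram from a set of oriented edges; the function name, the binning choices and the (x, y, theta) edge format are assumptions made for the example.

```python
import numpy as np

def edge_cooccurrence_histogram(edges, n_phi=24, n_psi=24, d_max=5.0):
    """Second-order statistics of a set of oriented edges.

    `edges` is an (N, 3) array of (x, y, theta) triplets (theta in radians).
    For every ordered pair of edges we compute:
      d   : Euclidean distance between the two edge centres,
      phi : azimuth of the second edge in the first edge's reference frame,
      psi : difference between the two edge orientations (mod pi).
    Co-circular pairs satisfy psi = 2*phi (mod pi), so co-circularity shows up
    as a diagonal ridge in the (phi, psi) histogram.
    """
    xy, theta = edges[:, :2], edges[:, 2]
    dx = xy[None, :, 0] - xy[:, None, 0]
    dy = xy[None, :, 1] - xy[:, None, 1]
    d = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx) - theta[:, None]          # azimuth in the 1st edge's frame
    psi = (theta[None, :] - theta[:, None]) % np.pi    # relative orientation
    mask = (d > 0) & (d < d_max)                       # drop self-pairs and distant pairs
    hist, _, _ = np.histogram2d(phi[mask] % np.pi, psi[mask],
                                bins=[n_phi, n_psi], range=[[0, np.pi], [0, np.pi]])
    return hist / hist.sum()

# toy usage: random edges give a roughly flat histogram, co-circular chains do not
rng = np.random.default_rng(42)
edges = np.column_stack([rng.uniform(0, 10, (200, 2)), rng.uniform(0, np.pi, 200)])
p = edge_cooccurrence_histogram(edges)
print(p.shape, p.sum())
```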

WP3-Task 1: Theoretically oriented workplan – Lead INT (Laurent Perrinet)

Informed by the generative model of edge co-occurrences studied in subtask 1, we will be able to extend the family of motion-cloud stimuli (Leon et al., 2012; Simoncini et al., 2012) to include joint dependencies between different elements in position or orientation. An exact solution to this problem is hard to achieve, as it involves a combinatorial search over all possible pairs of edges. However, numerous variational approaches are possible and fit well with our probabilistic framework. We will use the convolutional neural network described above, but with a back-propagating stream to generate novel images. Such a representation will then be optimized using an unsupervised learning method, similar to the process used in Generative Adversarial Networks in deep-learning architectures (Radford et al., arXiv). Finally, the regularities observed in static images will be extended to dynamical scenes by observing that a co-occurrence can be implemented by simple geometrical operations applied over time. For instance, co-circularity is easily described as the set of smooth roto-translational transformations of an edge over time, using the group of Galilean transformations (Sarti and Citti, 2006). This theory yields a first prediction: the whole set of possible spatio-temporal co-occurrences of edges can be understood as geodesics in the lifted space of all possible trajectories. We predict that such a decomposition should allow us to better understand the different classes of features that emerged in the first task.
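For concreteness, the sketch below illustrates the motion-cloud principle on which these extensions build: a random-phase texture whose spectral envelope concentrates energy around a mean orientation, a mean spatial frequency and a speed plane. This is a self-contained numpy toy, not the published MotionClouds code, and all parameter names and envelope shapes are illustrative assumptions.

```python
import numpy as np

def motion_cloud(N=128, T=64, f0=0.1, B_f=0.05, theta0=0.0, B_theta=0.3,
                 Vx=1.0, Vy=0.0, B_V=0.2, seed=0):
    """Random-phase 'motion cloud': white noise filtered by a spatio-temporal
    envelope concentrating energy (i) around a mean spatial frequency f0 and
    orientation theta0, and (ii) around the plane ft = -(Vx*fx + Vy*fy) that
    encodes a rigid translation at speed (Vx, Vy)."""
    fx, fy, ft = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N),
                             np.fft.fftfreq(T), indexing='ij')
    f_r = np.sqrt(fx**2 + fy**2) + 1e-12                     # radial spatial frequency
    angle = np.arctan2(fy, fx)
    env_theta = np.exp(np.cos(2 * (angle - theta0)) / B_theta**2)   # pi-periodic orientation band
    env_f = np.exp(-(np.log(f_r / f0))**2 / (2 * (B_f / f0)**2))    # log-normal frequency band
    env_V = np.exp(-(ft + Vx * fx + Vy * fy)**2 / (2 * (B_V * f_r)**2))  # speed plane
    envelope = env_theta * env_f * env_V
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.uniform(size=envelope.shape))    # random phases
    movie = np.fft.ifftn(envelope * phase).real
    return movie / np.abs(movie).max()

movie = motion_cloud()        # shape (128, 128, 64): x, y, time
print(movie.shape, movie.min(), movie.max())
```

Joint dependencies between elements (e.g. co-circular pairs) would then enter as correlations imposed on the phases or on the envelope, which is precisely where the generative model of subtask 1 constrains the design.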

Similarly, we expect the different independent features to decompose at various scales, both in space and in time. For instance, we expect configurational aspects to be more local, while aspects related to motion (Perrinet and Masson, 2012; Khoei et al., 2016) or to global shape (form) should be more global. This translates into a probabilistic hierarchical model that combines dependencies from different cues, in particular through the emergence of differential pathways for form and motion. These quantitative predictions should finally be confronted with data at the modelling and neurophysiological levels.
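As a toy illustration of how such a hierarchical model could combine cues, the sketch below fuses two Gaussian beliefs about a common latent variable (say, the position of a moving contour) by precision weighting; the scenario, the function name and the numbers are purely hypothetical.

```python
import numpy as np

def fuse_gaussian_cues(means, variances):
    """Precision-weighted fusion of independent Gaussian likelihoods.

    Each cue i reports a Gaussian belief N(mean_i, var_i) about the same latent
    variable. Under independence, the posterior is Gaussian with precision equal
    to the sum of the cue precisions and mean equal to the precision-weighted
    average of the cue means."""
    means, variances = np.asarray(means, float), np.asarray(variances, float)
    precisions = 1.0 / variances
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * (precisions * means).sum()
    return post_mean, post_var

# toy example: a local form cue (precise, lagging) and a global motion cue
# (broad, extrapolated ahead of the target); numbers are purely illustrative
mu, var = fuse_gaussian_cues(means=[0.0, 1.0], variances=[0.5, 2.0])
print(f"fused estimate: {mu:.2f} +/- {np.sqrt(var):.2f}")
```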

WP3-Task 2: Data-driven comprehensive model of V1 – Co-lead UNIC and INT

The second task is more data-driven (as well as phenomenological for the feedback circuit part, since it remains largely unknown). Since the simulations will depend on close interactions with the WP1 and WP2 measurements, they will be developed by the WP3 post-doc in the second half of the grant. The task will benefit from existing structuro-functional models that separately address two distinct levels of neural integration: microscopic (conductance-based, in Kremkow et al., 2016; Antolik et al., submitted; Chariker et al., 2016) and mesoscopic (VSD-like mean field, in Rankin and Chavane, submitted). Efforts will be made to merge these models so as to fit, in a unified multiscale biologically realistic model, the cellular and VSD data, with a critical focus on horizontal propagation. The parametrization should be flexible enough to produce a generic cortical architecture, possibly accounting for species specificity (Antolik for cat; Chariker for monkey).
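To give a flavour of the mesoscopic level, the sketch below simulates a toy 1-D rate-based neural field with a Gaussian lateral kernel, showing how a focal input spreads through "horizontal" coupling. It is only a cartoon under strong simplifying assumptions (single population, no conduction delays, arbitrary parameters), not the Rankin and Chavane mean-field model.

```python
import numpy as np

def simulate_neural_field(n=200, dx=0.05, dt=1.0, T=200, tau=10.0,
                          sigma_w=0.5, w0=1.2, I_amp=2.0, I_dur=50):
    """Toy 1-D rate-based neural field:  tau * dA/dt = -A + w0 * (W @ f(A)) + I(t).

    A(x, t) stands in for a VSD-like population signal, W is a Gaussian lateral
    ('horizontal') kernel of width sigma_w (in mm), and a brief focal input I is
    applied at the centre of the field. Activity spreads laterally through the
    kernel, a crude stand-in for horizontally propagated depolarisation."""
    x = np.arange(n) * dx                              # cortical distance (mm)
    D = np.abs(x[:, None] - x[None, :])                # pairwise distances
    W = np.exp(-D**2 / (2 * sigma_w**2))
    W *= dx / (sigma_w * np.sqrt(2 * np.pi))           # normalise the kernel
    f = lambda a: np.tanh(np.clip(a, 0, None))         # rectified saturating rate
    A = np.zeros(n)
    trace = np.zeros((T, n))
    for t in range(T):
        I = np.zeros(n)
        if t < I_dur:
            I[n // 2 - 2:n // 2 + 3] = I_amp           # brief focal stimulus
        A += dt / tau * (-A + w0 * W @ f(A) + I)       # explicit Euler step
        trace[t] = A
    return x, trace

x, trace = simulate_neural_field()
# half-maximum extent of the response at a few time points (lateral spread)
for t in (10, 50, 150):
    active = x[trace[t] > 0.5 * trace[t].max()]
    print(f"t={t:3d}  spread ~ {active.max() - active.min():.2f} mm")
```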




This work was supported by ANR project "Horizontal-V1" N° ANR-XXXXXXX.


