NEuronal OPerations in visual TOpographic maps

A prominent property of sensory and motor cortices is their organization into cortical maps. Along the cortical hierarchy, low-level features such as visual position and visual orientation (but also auditory tone, local somatosensory touch, etc.) and higher-level features such as faces, object viewpoints, etc., are topographically represented on the cortical surface. Thus, a local oriented stimulus whose position and orientation are stationary over time will activate local cortical columns at the proper orientation and retinotopic position in the topographic map (Figure 1).

[Figure 1]

In natural conditions, visual inputs can be seen as a collection of these local features (position, orientation, direction of movement, spatial frequency…), which are, however, distributed across the visual scene at various scales in space and time, sometimes generating ambiguous or illusory percepts. Importantly, these features are also dynamic, in the sense that their values (position, orientation…) can change with time. As a consequence, in the case of a moving object, the cortex will be activated by a dynamic, non-stationary sequence of feedforward inputs along a trajectory dictated by the gradual change of feature(s) (Figure 2).

[Figure 2]

A central question in visual neuroscience is to understand how such dense, dynamic inputs, which locally activate the cortex along the pre-existing functional map (Figure 2), are processed at various stages of the visual system in order to achieve a robust and fast encoding of the visual scene. At NeOpTo, our working hypothesis is that intra- and inter-cortical interactions, which represent the vast majority of synaptic inputs for any cortical neuron, can dynamically shape visual processing to achieve an efficient percept. Our first objective is therefore to understand the neuronal operations that dynamically shape the processing and representation of visual stimuli within these maps, using experimental and computational approaches (Chemla et al 2011, Reynaud et al 2012, Muller et al 2014, Chemla et al 2017, Rankin & Chavane 2017, Zerlaut et al 2017, Deneux et al 2017).

Second, since these operations can occur at multiple spatial and temporal scales, we are constantly developing new methods and signal processing tools to better interpret the results of neuronal activity measurements (Deneux et al 2011, Reynaud et al 2011, Deneux et al 2012, Takerkart et al 2014, Deneux et al 2016). In this respect, the imaging techniques we use to record brain activity allow us to cover multiple scales and different animal models, from rodents (normal and pathological) to non-human primates (marmoset, macaque). This is crucial since we need access to local processing of information within the cortical column (single-unit recording and two-photon microscopy) but also to measurements of activation at the level of cortical maps (multi-electrode arrays, optical imaging), spanning what is called the "mesoscopic scale" (Figure 3), in between the microscopic (the neuron) and the macroscopic (brain areas). We thus constantly work on improving the methods, but also on the signal processing tools needed to extract signals from noisy data at the single-trial level.

[Figure 3]

A third objective of our team is to improve prosthetic vision using cortical imaging. Here, we use the theoretical knowledge and methodological know-how we are acquiring on the representation of visual stimuli within maps in order to optimize prosthetic vision. Retinal prostheses are promising tools for recovering visual function in blind patients but, unfortunately, current technologies still yield poor improvements in visual acuity. Increasing their resolution is thus a key challenge. Tackling it requires understanding the origin of the current bottlenecks faced by human prostheses through their thorough exploration in appropriate animal models (Matonti et al 2015). By recording the functional impact of these prostheses in animal models, we can better understand their physiological impact and detailed functioning in situ, which in turn will allow us to improve their resolution (Pham et al 2013, Roux et al 2016, 2017).


Our project strongly benefits from the complementary expertise of F. Chavane (expert on the visual system, using VSDI and electrophysiology), L. Muller (expert on signal processing methods and computational neuroscience), I. Vanzetta (expert on the visual system, using optical imaging of intrinsic signals and two-photon microscopy), F. Matonti (ophthalmologist, specialist of the retina and prostheses), Daniele Denis (ophthalmologist, specialist of pediatric ophthalmology and amblyopia) and L. Hoffart (ophthalmologist, specialist of the cornea and optical imaging of the anterior segment). This project also relies on the expertise of our collaborators, locally within the INT but also at the national and international level (Figure 4).

[Figure 4]

Linking visual sciences and the arts

For the last five years, Laurent Perrinet has developed a collaboration with Etienne Rey, a visual artist living in Marseille. Rey's work reflects upon how light is diffracted and decomposed by materials, and upon its impact on our percepts. This collaboration has led to several artworks, one of which is permanently exhibited at the Institut de Neurosciences de la Timone.


Élasticité dynamique is an artwork made of three elements: Expansion, Trame and Lignes sonores (© Etienne Rey, Adagp Paris 2015).

  • Laurent Perrinet (CRCN, CNRS) is a computational neuroscientist. He builds models of visual processing that are then compared to empirical data from biological systems. His work helps build better artificial intelligence systems, in particular for sensory processing.