\documentclass[a4paper]{article}
\usepackage{amsmath}%
\usepackage{amsfonts,bm}% http://www.tex.ac.uk/cgi-bin/texfaq2html?label=boldgreek
% Paper Number: 7000-15
% Tracking Number: EPE08-EPE115-96
% http://myspie.org/submission/index.cfm?fuseaction=ManuscriptConfirm
% Color Printing: No
% SVN revision 1060 Repository Root: svn+ssh://hulk.cnrs-mrs.fr/data/svn/dyva
\usepackage{ifpdf}%
% ======== fonts =============
%\usepackage{lmodern,pxfonts}%
%\usepackage{times}%
%\usepackage[T1]{fontenc}%
%\usepackage[latin1]{inputenc}%
% bypass for ArXiV
%\ifthenelse{\equal{\arxiv}{true}}{%true
% portability between LaTeX and pdfLaTeX
%\newif\ispdf
% \ifx\pdfoutput\undefined \pdffalse
%\else \pdfoutput=1 \pdftrue \fi
%
\ifpdf
\usepackage[pdftex]{graphicx}
\usepackage[pdftex, pdfusetitle,colorlinks=false, pdfborder={0 0 0}]{hyperref}%
\DeclareGraphicsExtensions{.png,.pdf}%
\pdfoutput=1 % we are running pdflatex
\pdfcompresslevel=9 % compression level for text and images
\pdftrue
\graphicspath{{./figures_pdf/}}%
\usepackage{microtype}%
\else
\usepackage{graphicx}%
\usepackage[colorlinks=false]{hyperref}%
%\DeclareGraphicsRule{*}{eps}{*}{}
\DeclareGraphicsExtensions{.eps}%
\graphicspath{{./figures/}}%
\fi%
\hypersetup{%
pdftitle={Sparse Spike Coding: applications of Neuroscience to the processing of natural images.},%
pdfsubject={version 2},%
pdfauthor={Laurent Perrinet <Laurent.Perrinet@incm.cnrs-mrs.fr>, INCM/CNRS, 31, ch. Joseph Aiguier, 13402 Marseille Cedex 20, France - http://incm.cnrs-mrs.fr/LaurentPerrinet},%
pdfkeywords={Neural code, spike-event computation, correlation-based inhibition, Adaptive Matching Pursuit, Sparse Spike Coding, Competition Optimized Matching Pursuit (COMP).},%
}%
%============ BIBLIO ===================
\usepackage[sort&compress]{natbib}%
%============ graphics ===================
% symbols used in article
\newcommand{\Ss}{\mathcal{S}} % the space vector of coefficients
\newcommand{\Ps}{\mathcal{I}} % the space of natural images
\newcommand{\Wb}{\mathbf{W}} % the weights
\newcommand{\Cb}{{C}}%\mathbf
\newcommand{\SE}{\mathtt{SE}}
\newcommand{\sv}{\mathbf{s}} % image's hidden param
\newcommand{\nb}{\mathbf{n}} % noise
\newcommand{\sh}{\mathbf{\hat{s}}}
\newcommand{\sProj}{\mbox{Proj}}%}\textbf{
\newcommand{\Rr}{{\protect\mathbb R}}
\newcommand\ra{\rightarrow} %
\newcommand\la{\leftarrow} %
\newcommand{\sparsenet}{{\sc SparseNet}}%
\newcommand{\eqdef}{\stackrel{\rm def}{=}}%
%======= internal refs =====
%\newcommand{\lFig}[1]{\label{fig:#1}}
%\newcommand{\lSec}[1]{\label{sec:#1}}
%\newcommand{\lEq}[1]{\label{eq:#1}}
\newcommand{\seeFig}[1]{Fig.~\ref{fig:#1}}%
\newcommand{\seeSec}[1]{Sec.~\ref{sec:#1}}%
\newcommand{\seeEq}[1]{Eq.~\ref{eq:#1}}%
\newcommand{\seeAnnex}[1]{Annex.~\ref{ann:#1}}%
\def\W{{\cal W}}
\def\x{{\mathbf x}}
%=========== units ======
\usepackage{units}%
%============ end ===================
\title{Sparse Spike Coding: applications of Neuroscience to the processing of natural images}%
\author{Laurent U.~Perrinet\thanks{E-mail: \texttt{Laurent.Perrinet@incm.cnrs-mrs.fr}. Further information may be found at \url{http://incm.cnrs-mrs.fr/LaurentPerrinet}, especially \href{http://incm.cnrs-mrs.fr/LaurentPerrinet/SparseSpikeCoding}{supplementary data} and \href{http://incm.cnrs-mrs.fr/LaurentPerrinet/Publications/Perrinet08spie}{metadata} about this article, as well as the scripts to reproduce the figures.}\\
Institut de Neurosciences Cognitives de la M\'editerran\'ee (INCM) \\ CNRS / University of Provence\\
31, ch. Joseph Aiguier, 13402 Marseille Cedex 20, France }%
\date{}%
%%%%%%%%%%%% The document itself begins here %%%%%%%%%%%%%%%
\begin{document}%
\maketitle %
%: abstract
\begin{abstract}%
Although modern computers may outperform humans at specialized tasks such as playing chess or browsing a large database, they cannot match the efficiency of biological vision at tasks as simple as recognizing and following an object against a complex, cluttered background. We present in this paper our attempt at outlining the dynamical, parallel and event-based representation of vision in the architecture of the central nervous system. We illustrate this on static natural images by showing that, in a signal-matching framework, an L/NL (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal, and we apply this framework to a model retina. However, this code becomes redundant when using an over-complete basis, as is necessary for modeling the primary visual cortex: we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We evaluate the efficiency of this representation in terms of compression, that is, the reconstruction quality as a function of the coding length. This corresponds to a modification of the Matching Pursuit algorithm in which the ArgMax function is optimized for competition, yielding Competition Optimized Matching Pursuit (COMP). We focus in particular on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.%
\end{abstract}%
{\bf Keywords}: Neural population coding, decorrelation, spike-event computation, correlation-based inhibition, Sparse Spike Coding, Competition Optimized Matching Pursuit (COMP)%
% TODO= remove old scripts + docs on the SVN
% TODO= SSC scripts
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction: efficient neural representations}%
\label{sec:intro}%
The architecture of modern-day computers illustrates how we understand intelligence. But if they are good at playing chess or at browsing databases, it is clear that computers are far from rivaling what appear to be simpler aspects of intelligence, such as those demonstrated in vision. Think for instance of something as simple as recognizing an object in natural conditions, such as while walking in the street. This necessarily involves a network of processes, from segmenting its outline and perceiving its global motion to matching its different patterns invariantly to shading, contrast, angle of view or occlusions. Actually, while this seems obvious to us, computers cannot perform this task, and it is a common practical ``Turing Test'' to authenticate humans versus spamming robots by challenging the login with the recognition of, for instance, warped letters on a noisy background (the so-called \href{http://en.wikipedia.org/wiki/CAPTCHA}{CAPTCHAs}).\\%
As the seat of this processing, the Central Nervous System (CNS) is therefore, by its efficiency, clearly different from a classical computer~\citep{Neumann66}, defined as a sequential Turing-like machine with a few very rapid Central Processing Units and a finite, addressable memory. Computational Neuroscience is the branch of neuroscience that specifically studies the structure and function of computations in the CNS, such as the more complex architectures imagined by~\citet{Neumann00}. Numerous successful theories exist to explain the complex dynamics of modern Artificial Neural Networks and how we may use neuro-physiological constraints to build up efficient systems~\citep{Grossberg03} that are ecologically adapted to the statistics of the input~\citep{Atick92}. However, a main challenge involving both neuroscience and computer science is to understand how, and for what class of problems, the CNS outperforms traditional computers. In this paper, I am interested in extracting general principles from the structure of the CNS, both to derive a better understanding of neural functions and to apply these algorithms to signal processing applications.\\%
% complexity of ArgMax in parallel computers
A fundamental difference of the CNS is the fact that 1) information is distributed in parallel over the different neurons, 2) processes are dynamical and interruptible, and 3) information is carried by elementary events, called \emph{spikes}, which may be transmitted over long distances. This is well illustrated by the large class of pyramidal neurons of the neocortex. Put simply, the more a neuron is excited, the quicker and the more often it will emit spikes, with a typical latency of a few milliseconds and a maximum firing frequency of the order of \unit[200]{Hz}. Concentrating on local cortical areas (that is, in humans, of the order of a few square centimeters and a billion neurons), this means that the complexity of a given operation will be different on a computer (a few but very rapid CPUs) and on a population of neurons (a huge number of slow dynamical event generators). For instance, the complexity of the ArgMax operator (finding the sorted indices of a vector) increases as $O(N \log N)$ with the dimension $N$ of the vector, while if we apply the vector as the activation of a neuronal population, the complexity will not increase with the number $N$ of neurons\footnote{Note that in a noisy environment, the output will be given with a certain temporal precision and that this precision may decrease with $N$.}. In addition, the result is given by the generated spike list and is interruptible. \\%
In this paper, we explore how we may apply this class of operators to the processing of natural images by presenting an adaptive Linear/Non-Linear framework and then optimizing its efficiency. We will first draw a rationale for using a linear representation by linking it to a probabilistic representation under the condition of decorrelation. Then we will derive a linear transform adapted to natural images by constructing a simple pyramidal architecture similar to that of~\citet{Burt83} and extend it to Laplacian and Log-Gabor pyramids~\citep{Fischer05a}. In a third section, we will propose that this linear information may be optimally coded by a spike list if we apply a point non-linear operation. Finally, we will define an improvement over Matching Pursuit~\citep{Mallat93} by optimizing the efficiency of the ArgMax operator, which defines Sparse Spike Coding~\citep{Perrinet02sparse,Perrinet04tauc,Perrinet06}.%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Linear filtering and whitening}%
\label{sec:white}%
A first step in the definition of this algorithm is to make explicit the linear operations which are used to transform the input vector into a value representative of the quality of a match. %
Let's define an image as a set of scalar values $\tilde{x}_i$ on a set of positions ${\cal P}$, $i$ being the index of the positions, so that it defines a vector $\tilde{x} \in \mathbb{R}^M$, with $M=card({\cal P})$. As we saw in previous work~\citep{Perrinet04tauc}, the quality of a match between the raw data $\tilde{x}$ and a known image may be linked, in a probabilistic framework, to the correlation coefficient. In fact, the probability of the signal $\tilde{x}$ knowing the ``shape'' $\tilde{h}$ of the signal to find (see Tab.~\ref{tab:linear} for the chosen notation) is:
\begin{eqnarray}
P( \tilde{h} | \tilde{x} ) &=& \frac{1}{P(\tilde{x})} P( \tilde{x} | \tilde{h} ) P(\tilde{h} ) \nonumber\\
&=& \frac{1}{P(\tilde{x})} \frac{1}{(2\pi)^{M/2} |{\bm \Sigma}|^{1/2}} \exp(-\frac{ (\tilde{x} - \tilde{h})^T {\bm \Sigma}^{-1} (\tilde{x} - \tilde{h}) }{2}) \, P( \tilde{h} ) \nonumber\\
\label{eq:log_proba}%
\end{eqnarray}
This is based on the assumption of centered data (that is, $E(\tilde{x})=0$), a Linear Generative Model and a Gaussian noise of covariance matrix ${\bm \Sigma} = E(\tilde{x}\tilde{x}^T)$ (see Chapter~2.1.4 of~\citep{Perrinet06}). In the case where the noise is white (that is, where the covariance matrix is diagonal) and assuming a uniform prior for the scalar value of $h$, this may simply be computed with the correlation coefficient defined by:
\begin{eqnarray}%
\rho = \langle \frac{h}{\|h\|}, \frac{x}{\|x\|} \rangle \eqdef \frac{ \sum_{1\leq i \leq M} x_i h_i }{\sqrt{\sum_{1\leq i \leq M} h_i^{2} } \sqrt{\sum_{1\leq i \leq M} x_i^{2} } }%
\label{eq:coco}%
\end{eqnarray}%
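To make this concrete, here is a minimal NumPy sketch (synthetic vectors, not part of the scripts accompanying this paper) showing that the correlation coefficient of Eq.~\ref{eq:coco} is a normalized dot product, bounded by $1$ in absolute value and close to $1$ for a good match:

```python
import numpy as np

# Correlation coefficient of Eq. (coco): the cosine of the angle between
# the image x and the pattern h, both seen as M-dimensional vectors.
rng = np.random.default_rng(4)
h = rng.standard_normal(64)                 # a pattern ("shape" to find)
x = 2.0 * h + 0.1 * rng.standard_normal(64) # a noisy, scaled match

rho = np.dot(h, x) / (np.linalg.norm(h) * np.linalg.norm(x))
angle = np.arccos(rho)   # close to 0 for a near-full correlation

print(abs(rho) <= 1.0, rho > 0.9)
```

Note that $\rho$ is invariant to the scaling of either vector, which is why the filters may be normalized without loss of generality.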
%-------------
%: Figure 1 : whitening
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=.9\linewidth]{whitening}
\end{tabular}
\end{center}
\caption[whitening]
%>>>> use \label inside caption to get Fig. number with \ref{}
{ \label{fig:white} \rm{Spatial decorrelation. } \emph{(Top-Left)} Sample raw natural image ($M=512^2$). \emph{(Bottom-Left)} Mean pairwise spatial correlation in a set of $1000$ natural images (red is 1, blue is 0). It shows the typical $\frac{1}{f^2}$ decrease of the power spectrum, but also an anisotropy along the vertical and horizontal axes. \emph{(Middle)} Decorrelation filter computed from the method of~\citet{Atick92} (see text). This profile is similar to the interaction profile of bipolar and horizontal cells in the retina. \emph{(Top-Right)} Whitening of the sample image. \emph{(Bottom-Right)} The mean pairwise spatial correlation of $1000$ whitened natural images is highly peaked at the origin and below $0.05$ elsewhere. As is observed in the LGN, the power spectrum is relatively decorrelated by our pre-processing~\citep{Dan96}. See script \texttt{experiment\_whitening.py} to reproduce the figure.}
\end{figure}%
%-------------
%-------------
It should be noted that $\rho_j$ is the cosine of an angle between $M$-dimensional vectors and that its absolute value is therefore bounded by 1. The value of $\mbox{ArcCos}(\rho_j)$ therefore gives the angle of $x$ with the pattern $h_j$; in particular, the angle is equal (modulo $2\pi$) to zero if and only if $\rho_j=1$ (full correlation), to $\pi$ if and only if $\rho_j=-1$ (full anti-correlation) and to $\pm\pi/2$ if $\rho_j=0$ (both vectors are orthogonal, there is no correlation). Also, it is independent of the norm of the filters, and we assume without loss of generality in the following that these are normalized to unity. To achieve this condition, the raw data $\tilde{x}$ has to be preprocessed with a decorrelation filter to obtain a signal $x$ with no mean point-wise correlation\footnote{Of course, this does not necessarily achieve independence, as is often stated.}. To define this filter, we may use for instance the eigenvalue decomposition (EVD) of the covariance matrix:
\begin{eqnarray}
{\bm \Sigma} = \mathbf{V}\mathbf{D}\mathbf{V}^T \nonumber\\
\label{eq:evd}%
\end{eqnarray}
where $\mathbf{V}$ is a rotation (and thus $\mathbf{V}^{-1}=\mathbf{V}^T$) and $\mathbf{D}$ is a diagonal matrix. This decomposition is similar to that achieved by PCA and may be computed for instance by averaging linear correlations, as is done with the linear Hebbian rule~\citep{Oja82}. In particular, the columns of the matrix $\mathbf{V}$ contain the eigenvectors and $\mathbf{D}$ is the diagonal matrix of the corresponding eigenvalues. If we set $\mathbf{W}= \mathbf{D}^{-\frac{1}{2}}\mathbf{V}^T$ and $x= \mathbf{W}\tilde{x}$, then
\begin{eqnarray}
E(xx^T) &=& E(\mathbf{W}\tilde{x} (\mathbf{W}\tilde{x})^T ) \nonumber\\
&=& \mathbf{D}^{-\frac{1}{2}}\mathbf{V}^T E(\tilde{x} \tilde{x}^T) (\mathbf{D}^{-\frac{1}{2}}\mathbf{V}^T)^T \nonumber\\
&=& \mathbf{D}^{-\frac{1}{2}}\mathbf{V}^T {\bm \Sigma} \mathbf{V}\mathbf{D}^{-\frac{1}{2}} \nonumber\\
&=& \mathbf{D}^{-\frac{1}{2}}\mathbf{V}^T \mathbf{V}\mathbf{D}\mathbf{V}^T \mathbf{V}\mathbf{D}^{-\frac{1}{2}} \nonumber\\
&=& \mathbf{I}_{M} \nonumber
\end{eqnarray}
We have therefore proved that this linear transform decorrelates the input data on average. In practice, we used the power spectrum and its relation to the covariance of translation-invariant data such as natural images to compute the whitening filter~\citep{Atick92}. This corresponds to a filter with a gain proportional to the spatial frequency, but with an anisotropy along the vertical and horizontal axes (see Fig.~\ref{fig:white}).\\%
Thanks to this processing, and only when these hypotheses are fulfilled, we may in general use the correlation coefficient (see Eq.~\ref{eq:coco}) as a measure related to the probability of a match of the image with a given pattern. The next step is now to define the best patterns to represent images.%
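The derivation above can be checked numerically. The following sketch (in Python/NumPy, on synthetic correlated data standing in for natural image patches; it is not one of the scripts accompanying this paper) computes $\mathbf{W}= \mathbf{D}^{-\frac{1}{2}}\mathbf{V}^T$ from the EVD and verifies that the whitened data has identity covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "raw images": T correlated Gaussian vectors of dimension M,
# standing in for centered natural image patches (E(x~) = 0).
M, T = 16, 10_000
A = rng.standard_normal((M, M))
x_tilde = rng.standard_normal((T, M)) @ A.T
x_tilde -= x_tilde.mean(axis=0)

# Covariance Sigma = E(x~ x~^T) and its EVD Sigma = V diag(D) V^T
Sigma = x_tilde.T @ x_tilde / T
D, V = np.linalg.eigh(Sigma)

# Whitening matrix W = D^{-1/2} V^T; x = W x~ is decorrelated on average
W = np.diag(D ** -0.5) @ V.T
x = x_tilde @ W.T

# Empirical covariance of the whitened data is the identity matrix
C = x.T @ x / T
print(np.allclose(C, np.eye(M), atol=1e-6))
```

In the paper itself the whitening filter is instead derived from the mean power spectrum, exploiting translation invariance; the EVD version above is the generic matrix form of the same operation.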
%------------- VARIABLES
\begin{table}[hbpt]%
\begin{center} %
\caption{Matrix notation and variables}%
\label{tab:linear}%
\begin{tabular}{|l|c|c|} \hline
Name&Symbol&Description\\\hline\hline %
Pixel positions&${\cal P}$&$ \vec{p} \in {\cal P}, card({\cal P}) = M$ \\ %\hline
Raw image&$\tilde{x}$&$\tilde{x} \in \mathbb{R}^M$, $E(\tilde{x}) =0$ \\ %\hline
Covariance matrix&${\bm \Sigma}$&${\bm \Sigma} \in \mathbb{R}^{M\times M}$\\
Whitening matrix&$\mathbf{ W}$&$\mathbf{ W}\in \mathbb{R}^{M\times M}$\\
Decorrelated image&$x$&$x= \mathbf{W}\tilde{x} \in \mathbb{R}^M$\\
Pattern image&$\tilde{h}_j$&$ \tilde{h}_j \in \mathbb{R}^M, j \in {\cal D}$\\
Overcomplete dictionary&${\cal D}$&$card({\cal D})= N \gg M$\\
Decorrelated pattern image&$h_j$&$h_j= \mathbf{W}\tilde{h}_j \in \mathbb{R}^M$\\
Transform matrix&$\mathbf{H}$&$ \mathbf{H} \in \mathbb{R}^{N\times M}$\\
Correlation coefficient&$\rho_j$&$\rho_j = \frac{\langle h_j,x\rangle }{\|h_j\|\|x\|} \in [ -1,1 ] $\\
\hline\hline%
\end{tabular}
\end{center}%\vspace*{-.4cm}
\end{table}
%-------------
\section{Multiscale representations: the (Golden) Laplacian Pyramid}%
\label{sec:pyramid}%
Multi-scale representations are a popular method to achieve a scale-invariant representation. They correspond to repeating basic shapes at different scales, so that one may easily compute the representation of a scaled image by a simple transformation in the representation space instead of recomputing the whole transform. As a consequence, this representation makes it easier, for instance, to compute the match of a feature at different scales. It is classically implemented by wavelet transforms, but we present here a simple implementation using a recurrent scheme, the Laplacian Pyramid~\citep{Burt83}. This transform has the advantage of being computed by simple down-scaling and up-scaling operations and is easily inverted for the reconstruction of the image. It transforms an image into a list of down-scaled images, or \emph{image pyramid}. Let's define the list $\{ M^k \}$ with $0 \leq k \leq s$ of the sizes of the down-scaled images ($k=0$ corresponds to the ``base'' and $M^0=M$, while $s$ is the level of the smallest image, that is, the summit of the pyramid). Typically, as in wavelets, the size decreases geometrically with an exponent $\gamma$. The most used exponent in image processing is $2$; the pyramid is then called \emph{dyadic}. The corresponding down-scale transform from level $k$ to $k+1$ and up-scale transform from level $k+1$ to $k$ may be defined as ${\cal D}_k$ and ${\cal U}_k$ respectively. We may therefore define the Gaussian pyramid as the recursive transform from the ``base'' of the pyramid to the top as the list of transforms: %
\begin{eqnarray}
{\cal G} = \{ {\cal D}^k \} \mbox{ with } {\cal D}^k = {\cal D}_{k-1} \circ \cdots \circ {\cal D}_0
\label{eq:gpyr}%
\end{eqnarray}
%-------------
%: Figure 2 Golden pyramid
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=.9\linewidth]{SpikeCoding_pyramid}
\end{tabular}
\end{center}
\caption[SC]
%>>>> use \label inside caption to get Fig. number with \ref{}
{\label{fig:golden_pyr} \rm{The Golden Laplacian Pyramid. }%
To represent the edges of the image at different levels, we may use a simple recursive approach constructing progressively a set of images of decreasing sizes, from the base to the summit of a ``pyramid''. Using simple down-scaling and up-scaling operators, we may approximate a Laplacian operator well. This is represented here by stacking the images on a ``Golden Rectangle'', that is, a rectangle whose aspect ratio is the golden section $\phi \eqdef \frac{1+\sqrt{5}}{2}$. We present the base image on the left and the successive levels of the pyramid in a clockwise fashion (for clarity, we stopped at level $8$). Note that we also use $\phi^2$ (that is, $\phi+1$) as the down-scaling factor, so that the resolution of the pyramid images corresponds across scales. Note at last that the coefficients are highly kurtotic: most are near zero, and the distribution of coefficients has ``long tails''. See script \texttt{experiment\_SpikeCoding.py} to reproduce the figure.}%, but that for a better visualization, we used histogram equalization as will be described in Sec.~\ref{sec:SC}.
\end{figure}
%-------------
This means that a down-scaled version of the image, ${\cal D}^k x$, may be obtained by applying all down-scaling transforms sequentially from the base to level $k$. If the elementary operators are linear, the ${\cal G}$ transform is linear. The corresponding filters are approximately Gaussians with increasing radii~\citep{Burt83}, and the images in the pyramid thus correspond to progressively more blurred versions of the ``base'' image. This transform is usually very fast and is very likely to be implemented by the extended dendritic arbor of neurons\footnote{Note however that in the retinal representation of vertebrates, the preferred spatial frequency decreases with eccentricity.}.\\%
The Laplacian Pyramid is defined from the Gaussian Pyramid as the pyramid of images constituted by the residual between the image at one scale and the up-scaled image from the upper level. It is therefore mathematically defined as:
\begin{eqnarray}
{\cal L} = \{ {\cal D}^k -({\cal U}_k \circ {\cal D}^{k+1}) \} \mbox{ with } 0 \leq k \leq s
\label{eq:lpyr}%
\end{eqnarray}
by defining for clarity that ${\cal D}^{0} = 1$ and ${\cal D}^{s+1} = 0$. This transform is still linear, that is, $\forall x, \forall y, \forall \lambda$, ${\cal L}(x+y) = {\cal L}x + {\cal L}y$ and ${\cal L}(\lambda x) = \lambda {\cal L}x$. Since every level corresponds to a residual, it is easy to invert. In fact, if we write ${\cal L}_k x$ for the image at level $k$ and ${\cal U}^k = {\cal U}_0 \circ \cdots \circ {\cal U}_{k-1}$ (with ${\cal U}^{0} = 1$, so that ${\cal U}^k \circ {\cal U}_k = {\cal U}^{k+1}$), then $\forall x$,
\begin{eqnarray}
\sum_{0 \leq k \leq s} {\cal U}^k {\cal L}_k x &=& \sum_{0 \leq k \leq s} {\cal U}^k({\cal D}^k -({\cal U}_k \circ {\cal D}^{k+1})) x \nonumber\\
&=& \sum_{0 \leq k \leq s} {\cal U}^k{\cal D}^k x - \sum_{0 \leq k \leq s} {\cal U}^{k+1} \circ {\cal D}^{k+1} x \nonumber\\
&=& \sum_{0 \leq k \leq s} {\cal U}^k{\cal D}^k x - \sum_{1 \leq k \leq s+1} {\cal U}^k \circ {\cal D}^{k} x = x
\label{eq:lpyr_rec_proof}%
\end{eqnarray}
Therefore the inverse of the Laplacian Pyramid transform is defined as:
\begin{eqnarray}
{\cal L}^{-1} = \sum_{0 \leq k \leq s} {\cal U}^k {\cal L}_k
\label{eq:lpyr_rec}%
\end{eqnarray}
The filters corresponding to the different levels of the pyramid (which are the inverse images by ${\cal L}^{-1}$ of a pyramid of Diracs) are similar to differences of Gaussians (because they are the difference of two successive levels of the Gaussian Pyramid). The exponent $\gamma$ therefore plays the important role of the ratio of the radii of the Gaussians. We choose here the exponent to be equal to the golden ratio, $\gamma = \phi \eqdef \frac{1+\sqrt{5}}{2} \approx 1.618033$, for two reasons. First, it corresponds to a value for which the Difference of Gaussians implemented here approximates well a Laplacian-of-Gaussians. Second, it allows one to construct a natural representation of the whole pyramid in a full Golden Rectangle (see Fig.~\ref{fig:golden_pyr}) where the resolution of each image is constant.\\%
Note the following properties of the pyramid:
\begin{itemize}
\item the over-completeness is equal to $\sum_{0 \leq k \leq s} \frac{1}{\gamma^{2k}} \approx \frac{1}{1-\gamma^{-2}}$, so that it is equal to $\frac{1}{1-\phi^{-2}} = \frac{\phi}{\phi - \phi^{-1}} = \phi$, which is indeed the ratio of the area of the Golden Rectangle to that of the image. It is slightly higher than for a dyadic pyramid (indeed $\frac{1}{1-2^{-2}}=\frac{4}{3} \approx 1.333 < \phi$).
\item since this linear transform is over-complete, there may exist non-zero pyramids whose inverse image is null (that is, $\exists L\neq 0$ such that $ {\cal L}^{-1} L = 0$), but these pyramids are not accessible from any non-null image.
\item one may also implement a simple ``Golden Pyramid'' using the Fourier transform, and one may observe that in both cases the filters correspond to localized filters in frequency space. The whitening (see Sec.~\ref{sec:white}) has an approximately scalar effect that corresponds to an equalization of the variances of the coefficients of natural images at the different spatial frequencies.
\item Finally, once the obtained filters are normalized, the coefficients correspond to the correlation coefficients of the image with edge detectors at different scales, as defined in Eq.~\ref{eq:coco}. The coefficients will therefore, as in wavelet analysis, correspond to the local Lipschitz coefficients of the image~\citep{Perrinet03ieee}. When ordered by decreasing absolute value, they correspond to features of decreasing singularity, from a pure singularity to a smooth transition (such as a luminosity ramp).%
\end{itemize}%
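As a minimal illustration of this recursive scheme, the following Python sketch builds and exactly inverts a Laplacian Pyramid. For simplicity it assumes a dyadic pyramid ($\gamma = 2$ rather than the golden ratio used above) and crude stand-ins for the operators: $2\times 2$ mean pooling for ${\cal D}_k$ and pixel duplication for ${\cal U}_k$, instead of the Gaussian filters of~\citep{Burt83}:

```python
import numpy as np

def down(x):
    """Down-scale by 2 with 2x2 mean pooling (a crude stand-in for D_k)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Up-scale by 2 with pixel duplication (a crude stand-in for U_k)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def laplacian_pyramid(x, levels):
    """L_k = G_k - U_k(G_{k+1}); the last entry stores the low-pass summit."""
    pyr, g = [], x
    for _ in range(levels):
        g_next = down(g)
        pyr.append(g - up(g_next))
        g = g_next
    pyr.append(g)          # summit of the pyramid
    return pyr

def reconstruct(pyr):
    """Invert by up-scaling and adding the residuals, from the summit down."""
    g = pyr[-1]
    for lap in reversed(pyr[:-1]):
        g = up(g) + lap
    return g

rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64))
pyr = laplacian_pyramid(img, levels=4)
print(np.allclose(reconstruct(pyr), img))   # exact up to float error
```

Whatever the choice of the elementary operators, the reconstruction is exact by construction, since each level stores precisely the residual that the up-scaled coarser level misses.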
%------------- VARIABLES
\begin{table}[hbpt]%
\begin{center} %
\caption{Notations used for the Laplacian Pyramid}%
\label{tab:pyramid}%
\begin{tabular}{|l|c|c|} \hline
Name&Symbol&Description\\\hline\hline %
Sizes of the down-scaled images&$\{ M^k \}$& $0 \leq k \leq s$ \\ %\hline
Down-scale operator&${\cal D}_k$ & from level $k$ to $k+1$ \\ %\hline
Up-scale operator& ${\cal U}_k$& from level $k+1$ to $k$ \\
Full down-scale operator&${\cal D}^k$ & ${\cal D}^{0} = 1$ and ${\cal D}^{s+1} = 0$ \\ %\hline
Full up-scale operator& ${\cal U}^k$& \\
Gaussian Pyramid&${\cal G}$&\\
Laplacian Pyramid&${\cal L}$ & ${\cal L} = \{ {\cal L}_k\}$ with $0 \leq k \leq s$\\
\hline\hline%
\end{tabular}
\end{center}%
\vspace*{-.4cm}
\end{table}
%-------------
\section{Spike Coding}%
\label{sec:SC}%
Now that we have defined a linear transform which is suitable for natural images by combining the whitening filters and the Laplacian Pyramid, we wish to transmit this information efficiently using neurons. As we saw in the previous section, higher coefficients correspond to more singular features and therefore to more informative content. Using Integrate-and-Fire (IF) neurons, it is therefore natural to associate a single neuron to every coefficient of the pyramid applied to the image. For the linear leaky-IF neuron, if we associate a driving current to each value $\rho_j$ (with $1 \leq j \leq N$, as noted in Tab.~\ref{tab:linear}), it will elicit spikes with latencies~\citep{Perrinet03ieee}:
\begin{eqnarray}
\lambda_j = \tau \log\frac{1}{1 - \theta / g_j(\rho_j)}
\label{eq:lif}
\end{eqnarray}%
where $\tau$ is the characteristic time constant, $\theta$ is the neuron's threshold and $g_j$ is a monotonically increasing function of $\rho_j$ corresponding to the transformation of the linear value into the driving current. With this architecture, since the latency in Eq.~\ref{eq:lif} is monotonically decreasing with the driving current, one implements a simple ArgMax operator where the output is the list of indices of the neurons corresponding to the ordered list of output spikes.\\%
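This latency-based ArgMax can be sketched in a few lines of Python. Here the driving currents are drawn at random and $\tau$, $\theta$ are arbitrary illustrative values (with $g_j > \theta$ so that every neuron eventually reaches threshold); the point is only that ordering the spikes by latency recovers the ordering of the activations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Driving currents g_j(rho_j) for N model leaky integrate-and-fire neurons;
# tau and theta are arbitrary values, chosen with g_j > theta so that every
# neuron fires.
N, tau, theta = 10, 1.0, 1.0
g = theta * (1.1 + rng.random(N))

# First-spike latencies (cf. Eq. lif): monotonically decreasing in the
# driving current, so the most activated neuron fires first.
latency = tau * np.log(1.0 / (1.0 - theta / g))

# Reading out the spike order at the receiver end yields the full argsort
# (and in particular the ArgMax) of the activation vector.
spike_order = np.argsort(latency)
print(np.array_equal(spike_order, np.argsort(-g)))
```

In a sequential computer this sorting costs $O(N \log N)$; in the neural implementation the ordering simply emerges from the parallel race to threshold.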
However, one may observe that for some linear transforms, the distribution of correlation coefficients may not be similar for all $j$. This contradicts the fact that spikes are similar across the CNS, since it would mean that the probability of the coefficient underlying the emission of a spike is not uniform. To optimize the efficiency of the ArgMax operator, one therefore has to maximize the entropy of the index of the output spikes, and therefore of the driving current. This may be ensured by modifying the functions $g_j$ so that:
\begin{enumerate}
\item for all $j$, the distributions of $g_j(\rho_j)$ are similar,
\item this overall distribution has a shape adapted to the spiking mechanism (for instance by using Eq.~\ref{eq:lif}).
\end{enumerate}
The second point ---finding a global non-linearity $g$--- is beyond the scope of this paper; for the sake of generality, we will only ensure that we find functions $f_j$ (with $g_j = g \circ f_j$) such that the variables $z_j = f_j(\rho_j)$ are uniformly distributed.\\%
This condition is easily enforced by applying a point non-linearity to the different variables $\rho_j$ based on the statistics of natural images~\citep{Atick92}. This method is similar to histogram equalization in image processing and provides an output with maximum entropy for a bounded output: it therefore optimizes the coding efficiency of the representation in terms of compression~\citep{Hateren93} or, dually, the minimization of intrinsic noise~\citep{Srinivasan82}. It may easily be derived from the probability $P$ of the variable $\rho_j$ (bounded in absolute value by $1$) by choosing the non-linearity as the cumulative distribution function
\begin{equation}
f_j(\rho_j)=\int_{-1}^{\rho_j} dP(\rho)%
\label{eq:laughlin}%
\end{equation}
where the symbol $dP(x) = P_X(x) dx$ denotes in general the probability distribution function (pdf) of the random variable $X$. This process has been observed in a variety of species and is for instance perfectly illustrated in the fly~\citep{Laughlin81}. It may evolve dynamically to slowly adapt to varying changes in luminance, such as when the light diminishes at dusk, but also to some more elaborate schemes within a map~\citep{Hosoya05}. As in ``ideal democracies'' where all neurons are ``equal'', this process has to be dynamically updated over some characteristic period so as to achieve an optimal balance. As a consequence, since for all $j$ the pdf of $z_j = f_j(\rho_j)$ is uniform and since the sources are independent, $\{ z_j \}$ may be considered as a random vector drawn from a uniform distribution in $[0, 1]$. Given that the spike generation mechanisms are similar in this class of neurons, every vector $\{ \rho_j \}$ will thus generate a list of spikes $\{ j(1), j(2), \ldots \}$ (with corresponding latencies) where no information is carried \emph{a priori} in the latency pattern, but all is in the relative timing across neurons.\\%
We coded the signal in a spike volley, but how can this spike list be ``decoded'', especially if it is conducted over some distance and therefore with an additional latency? In the case of transient signals, since we coded the vector $\{ \rho_j \}$ using the homeostatic constraint from \seeEq{laughlin}, we may retrieve the analog values from the order of firing of the neurons in the spike list. In fact, knowing the ``address'' of the fiber $j(1)$ corresponding to the first spike to arrive at the receiver end, we may infer that it has been produced by a value in the highest quantile of $P(\rho_{j(1)})$ on the emitting side. We may therefore decode the corresponding value with the best estimate $\hat{\rho}_{j(1)} = f_{j(1)}^{-1}(1 - \frac{1}{N})$, where $N$ is the total number of neurons. This is also true for the following spikes and, if we write $z_{j(k)}=\frac{k}{N}$ for the relative rank of the spike (that is, neuron $j(k)$ fired at rank $k$), we can reconstruct the corresponding value as%
\begin{equation}%
\hat{\rho}_{j(k)}=f^{-1}_{j(k)}(1- z_{j(k)} )%
\label{eq:mod}%
\end{equation}%
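As an illustration, this coding/decoding scheme may be sketched numerically (a minimal sketch with hypothetical names and a stand-in distribution for the $\rho_j$, not the code used for the figures): each $f_j$ is estimated as an empirical cumulative histogram, the signal is coded as a rank-ordered spike list, and the formula above decodes it from the ranks alone.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64      # number of neurons
T = 10**4   # number of samples used to estimate each f_j

# estimate the point non-linearities f_j as empirical cumulative histograms;
# the gamma distribution is only a stand-in for the statistics of the rho_j
samples = np.sort(rng.gamma(2.0, 1.0, size=(T, N)), axis=0)

def f(j, rho):
    """Empirical cdf of neuron j: fraction of samples below rho."""
    return np.searchsorted(samples[:, j], rho) / T

def f_inv(j, z):
    """Empirical inverse cdf (quantile function) of neuron j."""
    return samples[min(int(z * T), T - 1), j]

# coding: the spike list is the neurons sorted by decreasing z_j = f_j(rho_j)
rho = rng.gamma(2.0, 1.0, size=N)
z = np.array([f(j, rho[j]) for j in range(N)])
spike_list = np.argsort(-z)    # addresses j(1), j(2), ...

# decoding: rank k gives z_{j(k)} = k/N, hence rho_hat = f_j^{-1}(1 - k/N)
rho_hat = np.empty(N)
for k, j in enumerate(spike_list, start=1):
    rho_hat[j] = f_inv(j, 1.0 - k / N)
```

Only the ordering of the addresses $j(1), j(2), \ldots$ crosses the channel; the receiver recovers the analog values from the ranks and its own copy of the $f_j$.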
%-------------
\begin{figure}
%: Figure 3 SSC
\begin{center}
\begin{tabular}{c}
\includegraphics[width=\linewidth]{SparseSpikeCoding}
\end{tabular}
\end{center}
\caption[example]
%>>>> use \label inside caption to get Fig. number with \ref{}
{ \label{fig:SSC} \rm{Spike Coding of natural images. }%
We build here a simple framework of pyramidal neurons illustrating the efficiency of neural architectures compared to classical computer architectures. We show how a bundle of L-NL neurons~\citep{Carandini97,Carandini05}, tuned by a simple homeostatic mechanism, allows the transfer of transient information, such as an image, using spikes. (L) The signal to be coded, for instance the match $\rho_j$ of an image patch (the tiger on the bottom left) with a set of filters (edge-like images), may be considered as a stochastic vector defined by the probability distribution function (pdf) of the values $\rho_j$ to be represented. (NL) By using the cumulative function as a point non-linearity $f_j$, one ensures that the probability of $z_j = f_j(\rho_j)$ is uniform, that is, that the entropy is maximal. This non-linearity in the L-NL neuron implements a homeostasis that is controlled only by the time constant with which the cumulative probability function $f_j$ is computed (typically $10^4$ image patches in our case). (S) Any instance of the signal may then be coded by a volley of spikes: a higher value corresponds to a shorter latency and a higher frequency. (D) Conversely, for any vector of spike events, one may estimate the value from the firing frequency or the latency. We may simply use the ordering of the spikes, since the rank provides an estimate of the quantile in the probability distribution function thanks to the equalization. Using the inverse of $f_j$, one retrieves the value in feature space, so that this volley of spikes is decoded (or directly transformed) thanks to the relative timing of the spikes using the modulation (see \seeEq{mod}). This builds a robust information channel where information is solely carried by spikes as binary events. Given this model, the goal of this work is to find the most efficient architecture to code natural images, and in particular to define a coding cost and to derive efficient compression algorithms. 
Note that this scheme is similar to the classical L-NP (linear--non-linear--Poisson) scheme, except that instead of generating a Poisson point process, we use the exact timing. This is allowed by the point non-linearity, which permits coding the value by the timing rather than by the firing frequency.}
\end{figure}
%-------------
This corresponds to a generalized rank coding scheme~\citep{Perrinet99,Perrinet01}. First, it loses the information on the absolute latency of the spike train, which conveys the maximal value of the input vector. This has the particular advantage of making the code invariant to contrast (up to a fixed delay due to the precision loss induced by noise). Second, when normalized by the maximal value, it is a first-order approximation of the vector, which is especially relevant for over-complete representations, where the information contained in the rank vector (which, by Stirling's approximation, is of order $\log_2(N!)= \mathcal{O}(N \log N)$, that is, more than \unit[1600]{bits} for $256$ neurons) is greater than the information contained in the particular quantization of the image\footnote{We are generally unable to detect quantization errors on an image consisting of more than $256$ gray levels, that is, for \unit[8]{bits}.}. On a practical note, we may use the fact that the inverse of $f_j$ may be computed from the mean over trials of the absolute value of the coefficients as a function of their rank. \\%
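The information content of the rank vector can be checked with a short computation (a sanity check added here, not part of the original paper's code): $\log_2(256!)$ is about $1684$ bits, and the leading-order Stirling approximation is within a fraction of a percent of it.

```python
import math

N = 256
# exact information carried by a full ordering of N neurons: log2(N!)
bits_rank = math.lgamma(N + 1) / math.log(2)
# leading-order Stirling approximation: log2(N!) ~= N * log2(N / e)
bits_stirling = N * math.log2(N / math.e)
```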
This code therefore focuses on the particular sequence of neurons that were chosen and loses the information that may be coded in the pattern of individual inter-spike intervals in the assembly. A model accounting for the exact spiking mechanism would correct this information loss, but at the cost of introducing new parameters (hence new information), while this information seems to have a low impact relative to the total information~\citep{Panzeri99}. More generally, one could use different mappings for the transformation of the $z$ values into a spike volley which may be better adapted to continuous flows, but the present scheme corresponds to an extreme case (a transient signal) which is useful to stress the dynamical part of the coding~\citep{Van-Rullen01a} and is mathematically more tractable. In particular, one may show that the coding error is proportional to the variability of the sorted coefficients~\citep{Perrinet03ieee}, the rest of the information being coded in the time intervals between two successive spikes. Thus, the efficiency of information transmission will directly depend on the validity of the hypothesis of independence of the choice of components, and therefore on the statistical model built by the LGM.\\%
It should also be noted that no explicit reconstruction is \emph{necessary} (in the mathematical sense of the term) on the receiver side as we do here, since the goal of the receiver could simply be to manipulate information on, for instance, some subset of the spike list (that is, on some receptive field covering a subpart of the population). In simple terms, there is no reason to have a reconstruction of the image in the CNS. In particular, one may imagine adding some arbitrary global point non-linearity to the $z$ values in order to threshold low values or to quantize them (for instance, setting the values to $1$ for the first $10\%$ of the spikes only). However, this full reconstruction scheme is a general framework for information transmission, and we may then imagine that if, for instance, we pool information over a limited receptive field, the information needed (the ranks in the sub-spike list) will still be available to the receiver directly, without having to compute the full set (in fact, since the pdf of $z$ is uniform, the pdf of a subset of components of $z$ is also uniform).%
\section{Sparse Spike Coding}%
\label{sec:SSC}%
However, as we described before~\citep{Perrinet02sparse,Perrinet04tauc,Perrinet06}, if we use over-complete dictionaries of filters, the resulting spiking code becomes redundant. In fact, unless the dictionary is orthogonal, choosing one component may modify the choice of the other components. If we choose the successive neurons by maximum correlation value, the resulting representation becomes proportionally more redundant as the dictionary becomes more over-complete. We also saw that optimizing the choice leads to a combinatorial explosion~\citep{Perrinet08shl}. To solve this NP-complete problem for realistic representations, such as those arising when modeling the primary visual cortex, one may implement a solution designed after the richly laterally connected architecture of cortical layers~\citep{Fischer05a,Fischer06tip,Fischer07cv}. In fact, an important part of cortical areas consists of a lateral network propagating information in parallel between neurons. We propose here that this NP-complete problem can be approximately solved by using a cross-correlation-based inhibition between neurons.\\%
In fact, as was first proposed in the \emph{Sparse Spike Coding} (SSC) algorithm~\citep{Perrinet02sparse}, one may use a greedy algorithm on the L$_0$-norm cost, which leads to the Matching Pursuit algorithm~\citep{Mallat93}. More generally, let us first define Weighted Matching Pursuit (WMP) by introducing a non-linearity in the choice step. Like Matching Pursuit, it is based on two repeated steps. First, given the signal $x$, we search for the \textit{single} source $ s^\ast_{j^{\ast}} . h_{j^\ast}$ that corresponds to the maximum \textit{a posteriori} (MAP) realization for $x$ (see \seeEq{coco}) transformed by a point non-linearity $f_j$. This Matching step is defined by:%
\begin{equation}
j^\ast = \mbox{ArgMax}_{j} [f_j( \rho_j )]
\label{eq:mp1}
\end{equation}%
where $f_j(.)$ is some gain function that we will describe below (initially, any strictly increasing function) and $\rho_j$ is initialized by Eq.~\ref{eq:coco}. %
In a second step (Pursuit), the information is fed back to correlated sources through:
\begin{equation}
x \la x - s^\ast_{j^{\ast}} . h_{j^{\ast}}
\label{eq:mp2}
\end{equation}
where $s^\ast_{j^{\ast}}$ is the scalar projection $ < x, h_{j^\ast} > $. Equivalently, from the linearity of the scalar product, we may propagate laterally:
\begin{equation}
<x, h_j> \la <x, h_j> - < x, h_{j^{\ast}} > < h_{j^{\ast}}, h_j >
\end{equation}
that is, from \seeEq{coco}:
\begin{equation}
\rho_j \la \rho_j - \rho_{j^{\ast}} < h_{j^{\ast}}, h_j >
\label{eq:mp3}
\end{equation}
For any set of monotonically increasing functions $f_j$, WMP shares many properties with MP, such as the monotonic decrease of the error or the exponential convergence of the coding. The algorithm is iterated with Eq.~\ref{eq:mp1} until some stopping criterion is reached. The signal may then be reconstructed from the spike list as $\hat{x} = \sum_k \hat{\rho}_{j(k)} h_{j(k)}$, where $ \hat{\rho}_{j(k)}$ is the value reconstructed using Eq.~\ref{eq:mod}. %
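The two steps above can be sketched as follows (a minimal NumPy sketch under hypothetical names; with the $f_j$ set to the identity it follows the plain Matching Pursuit recursion on the signed correlations, and plugging in the cumulative pdfs of \seeEq{laughlin} would give the weighted variant):

```python
import numpy as np

def wmp(x, H, f=None, n_spikes=10):
    """Weighted Matching Pursuit (WMP).

    x : signal of length M; H : dictionary with unit-norm rows h_j (N x M);
    f : list of point non-linearities f_j (identity by default). Note that,
    as in Eq. mp1, the choice uses f_j(rho_j) without an absolute value.
    Returns the spike list [(j, s_j), ...] and the final residual.
    """
    N = H.shape[0]
    if f is None:
        f = [lambda r: r] * N
    rho = H @ x        # initial correlations <x, h_j>
    G = H @ H.T        # Gram matrix of cross-correlations <h_j, h_k>
    spikes = []
    for _ in range(n_spikes):
        # Matching step (Eq. mp1): j* = ArgMax_j f_j(rho_j)
        j = max(range(N), key=lambda k: f[k](rho[k]))
        s = rho[j]
        spikes.append((j, s))
        # Pursuit step (Eq. mp3): lateral propagation of the correlations
        rho = rho - s * G[j]
    residual = x - sum(s_j * H[j] for j, s_j in spikes)
    return spikes, residual
```

On an orthogonal dictionary the residual vanishes after as many steps as there are non-zero components, since each Pursuit step then cancels exactly one correlation.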
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}%[ht]%
%: Figure 4 efficiency
% \begin{center}
% \begin{tabular}{c}
\includegraphics[width=.49\textwidth,height=.52\textwidth]{SpikeCoding_Mod_all}%
\hspace*{.019\textwidth}%
\includegraphics[width=.49\textwidth,height=.475\textwidth]{fig_nonhomeo_bits}%
% insets another graph:
\vskip -.47\textwidth % -.49 + .02
\hskip .759\textwidth% . .019 + .49 - .22 -.02
\includegraphics[width=.22\textwidth]{fig_nonhomeo_L0}%
\vskip .25\textwidth% -(.22 -.49 )
%\end{tabular}
%\end{center}
\caption[COMP]
{\label{fig:COMP} \rm{Efficiency of Competition Optimized Matching Pursuit (COMP). }%
Spike Coding and Sparse Spike Coding (using COMP) produce flows of spikes representing the image. By measuring the distance between the original image and its reconstruction, one may quantify the dynamical efficiency of this solution as a function of the number of spikes. \emph{(Left)} When applying the algorithm to a set of natural images, the coefficients exhibited differences in their probability density functions. We show this by plotting the cumulative density functions of the coefficients for different levels of the pyramid. Using these cumulative pdfs, one may transform the pyramids of coefficients into pyramids for which all coefficients are \emph{a priori} equiprobable. This optimizes the ArgMax operator which is at the heart of the Sparse Spike Coding scheme. %
\emph{(Right)} The resulting COMP solution gives a result similar to MP in terms of residual energy as a function of pure $L_0$ sparseness (see inset). In fact, in MP, by taking the maximum absolute value, and since the decrease of energy is proportional to the square of the coefficient (see Chapter~3.1.2 of~\citep{Perrinet06}), one ensures that the decrease of MSE \emph{per coefficient} is optimal for MP. Both are better for that purpose than conjugate gradient. However, when defining the efficiency in terms of the residual energy as a function of the description length of the spiking code word, the proposed COMP model is more efficient than MP because of the quantization errors inherent in the higher variability of the coded coefficients. Thus, including homeostasis improves the efficiency of adaptive Sparse Spike Coding by ensuring that the decrease of MSE \emph{per bit of code} is optimal. It should be noted that the homeostasis mechanism is important during ``learning'' but is not necessary for ``pure'' coding (see Sec.~\ref{sec:SSC}).}%
\label{fig:homeo}%
\end{figure}%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We then define Competition Optimized Matching Pursuit (COMP) as WMP where the point non-linearities are defined by Eq.~\ref{eq:laughlin}, and Sparse Spike Coding (SSC) as the spike coding/decoding algorithm which uses COMP as the coder. As described in~\citep{Perrinet04tauc}, while the Matching step is efficiently performed by LIF neurons driven by the NL input, the Pursuit step could be implemented in a cortical area by a correlation-based inhibition. This type of inhibition is typical of fast-spiking interneurons, though there is no direct evidence of this activity-based synaptic topology. It would correspond to a lateral interaction within the linear (L) neuronal population. In practice, the $f_j$ functions are initialized for all neurons to the identity function (that is, to a MP algorithm) and then evaluated using an online stochastic algorithm with a ``learning'' parameter corresponding to a smooth average, whose effect was controlled. This algorithm is in fact circular, since the choice of $\sv$ is non-linear and depends on the choice of the $f_j$. However, thanks to the exponential convergence of MP, for any set of components the $f_j$ will converge to the correct non-linear functions as defined by \seeEq{laughlin}. This scheme extends the Matching Pursuit (MP) algorithm by linking it to a statistical model which optimally tunes the matching step (in the sense that all choices are statistically equally probable) thanks to the adaptive point non-linearity. In fact, as stated before, thanks to the uniform distribution of the choice of a component, one maximizes the entropy of every match and therefore the computational power of the ArgMax operator. Think \emph{a contrario} of a totally unbalanced network where the match is always a given neuron: the spikes are then totally predictable and the information carried by the spike list drops to zero. 
It therefore optimizes the efficiency of MP for the Sparse Spike Coding problem (see Fig.~\ref{fig:SSC}).\\%
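The online estimation of the non-linearities described above can be sketched as follows (hypothetical names; a sketch of the homeostasis under the stated smoothing scheme, not the authors' exact implementation): each $f_j$ is tracked as a smoothly averaged cumulative histogram, initialized to the identity so that the coder starts as plain MP.

```python
import numpy as np

class Homeostasis:
    """Online estimate of the cumulative pdfs f_j on a fixed grid of values.

    eta plays the role of the "learning" parameter: each cdf is a smooth
    average over roughly 1/eta past samples. The cdfs start at the identity,
    so that the coder initially behaves as plain Matching Pursuit.
    """
    def __init__(self, N, n_bins=100, lo=0.0, hi=1.0, eta=1e-3):
        self.grid = np.linspace(lo, hi, n_bins)
        self.cdf = np.tile(np.linspace(0.0, 1.0, n_bins), (N, 1))
        self.eta = eta

    def update(self, rho):
        # the indicator {rho_j <= value} is a one-sample estimate of the cdf
        sample = (rho[:, None] <= self.grid[None, :]).astype(float)
        self.cdf = (1.0 - self.eta) * self.cdf + self.eta * sample

    def f(self, rho):
        # evaluate z_j = f_j(rho_j) by linear interpolation on the grid
        return np.array([np.interp(r, self.grid, c)
                         for r, c in zip(rho, self.cdf)])

# demo: with uniformly distributed inputs, each f_j converges to the identity
rng = np.random.default_rng(1)
homeo = Homeostasis(N=2, eta=0.01)
for _ in range(2000):
    homeo.update(rng.uniform(0.0, 1.0, size=2))
z = homeo.f(np.array([0.5, 0.9]))
```

The circularity noted in the text appears here as well: the matches selected through $f_j$ feed the very statistics from which the $f_j$ are estimated, and the smoothing constant `eta` sets the characteristic period of this adaptation.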
This type of event-based algorithm may be extended in several directions. First, it extends naturally to the temporal domain: we restricted ourselves here to static, flashed images, but the scheme is easily extended to causal filters (see Ch.~3.4.1 in~\citep{Perrinet06}). It raises, however, the still unsolved problem of a dynamical compromise between the precision and the rapidity of the code. It may also be extended into an adaptive code, showing the emergence of V1-like receptive fields~\citep{Perrinet08shl}. Finally, using long-range interactions in these sparse representations, such as those present in the primary visual cortex, should prove very helpful for resolving generic image processing problems such as denoising.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection*{Reproducible science}%
All algorithms used in this paper were implemented using Python, NumPy, SciPy (FFT and image libraries) and Matplotlib (for the visualization). Scripts are available upon request. \\%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection*{Acknowledgments}%
This work was supported by a grant from the French Research Council (ANR ``NatStats'') and by EC IP project FP6-015879, ``FACETS''.%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\bibliographystyle{plainnat}%
%\bibliography{../../lup/bib/babel}%
\begin{thebibliography}{27}
\providecommand{\natexlab}[1]{#1}
\providecommand{\url}[1]{\texttt{#1}}
\expandafter\ifx\csname urlstyle\endcsname\relax
\providecommand{\doi}[1]{doi: #1}\else
\providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi

\bibitem[Atick(1992)]{Atick92}
Joseph~J. Atick.
\newblock Could information theory provide an ecological theory of sensory
processing?
\newblock \emph{Network: {C}omputation in {N}eural {S}ystems}, 3\penalty0
(2):\penalty0 213--52, 1992.
\newblock URL \url{http://ib.cnea.gov.ar/~redneu/atick92.pdf}.

\bibitem[Burt and Adelson(1983)]{Burt83}
Peter~J. Burt and Edward~H. Adelson.
\newblock The {L}aplacian {P}yramid as a compact image code.
\newblock \emph{I{EEE} {T}ransactions on {C}ommunications}, COM-31,4:\penalty0
532--40, 1983.

\bibitem[Carandini et~al.(1997)Carandini, Heeger, and Movshon]{Carandini97}
Matteo Carandini, David~J. Heeger, and J.~Anthony Movshon.
\newblock Linearity and normalization in simple cells of the macaque primary
visual cortex.
\newblock \emph{Journal of {N}euroscience}, 17\penalty0 (21):\penalty0
8621--44, November 1997.

\bibitem[Carandini et~al.(2005)Carandini, Demb, Mante, Tolhurst, Dan,
Olshausen, Gallant, and Rust]{Carandini05}
Matteo Carandini, Jonathan~B. Demb, Valerio Mante, David~J. Tolhurst, Yang Dan,
Bruno~A. Olshausen, Jack~L. Gallant, and Nicole~C. Rust.
\newblock Do we know what the early visual system does?
\newblock \emph{Journal of {N}euroscience}, 25\penalty0 (46):\penalty0
10577--97, Nov 2005.
\newblock \doi{10.1523/JNEUROSCI.3726-05.2005}.
\newblock URL \url{http://dx.doi.org/10.1523/JNEUROSCI.3726-05.2005}.

\bibitem[Dan et~al.(1996)Dan, Atick, and Reid]{Dan96}
Yang Dan, Joseph~J. Atick, and R.~C. Reid.
\newblock Efficient coding of natural scenes in the lateral geniculate nucleus:
experimental test of a computational theory.
\newblock \emph{Journal of {N}euroscience}, 16\penalty0 (10):\penalty0
3351--62, May 1996.

\bibitem[Fischer et~al.(2005)Fischer, Redondo, Perrinet, and
Crist{\'o}bal]{Fischer05a}
Sylvain Fischer, Rafael Redondo, Laurent~U. Perrinet, and Gabriel
Crist{\'o}bal.
\newblock Sparse {G}abor wavelets by local operations.
\newblock In Gustavo Linan-Cembrano Ricardo A.~Carmona, editor,
\emph{Proceedings {SPIE}}, volume 5839 of \emph{Bioengineered and Bioinspired
Systems II}, pages 75--86, Jun 2005.
\newblock \doi{10.1117/12.608403}.

\bibitem[Fischer et~al.(2006)Fischer, Crist{\'o}bal, and Redondo]{Fischer06tip}
Sylvain Fischer, Gabriel Crist{\'o}bal, and Rafael Redondo.
\newblock Sparse {O}vercomplete {G}abor {W}avelet {R}epresentation {B}ased on
{L}ocal {C}ompetitions.
\newblock \emph{I{EEE} {T}ransactions in {I}mage {P}rocessing}, 15\penalty0
(2):\penalty0 265, February 2006.

\bibitem[Fischer et~al.(2007)Fischer, Sroubek, Perrinet, Redondo, and
Crist{\'o}bal]{Fischer07cv}
Sylvain Fischer, Filip Sroubek, Laurent~U. Perrinet, Rafael Redondo, and
Gabriel Crist{\'o}bal.
\newblock Self-invertible 2{D} log-{G}abor wavelets.
\newblock \emph{International Journal of Computer Vision}, 2007.

\bibitem[Grossberg(2003)]{Grossberg03}
Stephen Grossberg.
\newblock {H}ow does the cerebral cortex work? development, learning,
attention, and 3-d vision by laminar circuits of visual cortex.
\newblock \emph{Behavioral and Cognitive Neuroscience Reviews}, 2\penalty0
(1):\penalty0 47--76, March 2003.

\bibitem[Hosoya et~al.(2005)Hosoya, Baccus, and Meister]{Hosoya05}
Toshihiko Hosoya, Stephen~A. Baccus, and Markus Meister.
\newblock Dynamic predictive coding by the retina.
\newblock \emph{Nature}, 436\penalty0 (7047):\penalty0 71--7, Jul 2005.
\newblock \doi{10.1038/nature03689}.
\newblock URL \url{http://dx.doi.org/10.1038/nature03689}.

\bibitem[Laughlin(1981)]{Laughlin81}
Simon~B. Laughlin.
\newblock A simple coding procedure enhances a neuron's information capacity.
\newblock \emph{Zeitschrift f{\"u}r {N}aturforschung}, 9--10\penalty0
(36):\penalty0 910--2, 1981.

\bibitem[Mallat and Zhang(1993)]{Mallat93}
St{\'e}phane Mallat and Zhifeng Zhang.
\newblock Matching {P}ursuit with time-frequency dictionaries.
\newblock \emph{I{EEE} {T}ransactions on {S}ignal {P}rocessing}, 41\penalty0
(12):\penalty0 3397--3414, 1993.

\bibitem[Oja(1982)]{Oja82}
Erkki Oja.
\newblock A {S}implified {N}euron {M}odel as a {P}rincipal {C}omponent
{A}nalyzer.
\newblock \emph{Journal of {M}athematical biology}, 15:\penalty0 267--273,
1982.

\bibitem[Panzeri et~al.(1999)Panzeri, Treves, Schultz, and Rolls]{Panzeri99}
Stefano Panzeri, Alessandro Treves, Simon Schultz, and Edmund~T. Rolls.
\newblock On decoding the responses of a population of neurons from short time
windows.
\newblock \emph{Neural {C}omputation}, 11\penalty0 (7):\penalty0 1553--1577,
1999.

\bibitem[Perrinet(2004)]{Perrinet04tauc}
Laurent~U. Perrinet.
\newblock Feature detection using spikes: the greedy approach.
\newblock \emph{Journal of {P}hysiology ({P}aris)}, 98\penalty0 (4-6):\penalty0
530--9, July-November 2004.
\newblock \doi{10.1016/j.jphysparis.2005.09.012}.
\newblock URL \url{http://hal.archives-ouvertes.fr/hal-00110801/en/}.

\bibitem[Perrinet(2007)]{Perrinet06}
Laurent~U. Perrinet.
\newblock Dynamical neural networks: modeling low-level vision at short
latencies.
\newblock In \emph{Topics in Dynamical Neural Networks: From Large Scale Neural
Networks to Motor Control and Vision}, volume 142 of \emph{The European
Physical Journal (Special Topics)}, pages 163--225. Springer Berlin /
Heidelberg, mar 2007.
\newblock \doi{10.1140/epjst/e2007-00061-7}.
\newblock URL
\url{http://incm.cnrs-mrs.fr/LaurentPerrinet/Publications/Perrinet06}.

\bibitem[Perrinet(2008)]{Perrinet08shl}
Laurent~U. Perrinet.
\newblock Optimal signal representation in neural spiking codes: A model for
the formation of simple cell receptive fields.
\newblock 2008.
\newblock URL \url{http://fr.arxiv.org/abs/0706.3177}.

\bibitem[Perrinet(1999)]{Perrinet99}
Laurent~U. Perrinet.
\newblock Apprentissage hebbien d'un r{\'e}seau de neurones asynchrone {\`a}
codage par rang.
\newblock Technical report, Rapport de stage du DEA de Sciences Cognitives,
CERT, Toulouse, France, 1999.
\newblock URL \url{http://www.risc.cnrs.fr/detail_memt.php?ID=280}.

\bibitem[Perrinet et~al.(2001)Perrinet, Delorme, Thorpe, and
Samuelides]{Perrinet01}
Laurent~U. Perrinet, Arnaud Delorme, Simon~J. Thorpe, and Manuel Samuelides.
\newblock Network of integrate-and-fire neurons using {R}ank {O}rder {C}oding
{A}: how to implement spike timing dependent plasticity.
\newblock \emph{Neurocomputing}, 38--40\penalty0 (1--4):\penalty0 817--22,
2001.

\bibitem[Perrinet et~al.(2002)Perrinet, Samuelides, and
Thorpe]{Perrinet02sparse}
Laurent~U. Perrinet, Manuel Samuelides, and Simon~J. Thorpe.
\newblock Sparse spike coding in an asynchronous feed-forward multi-layer
neural network using {M}atching {P}ursuit.
\newblock \emph{Neurocomputing}, 57C:\penalty0 125--34, 2002.
\newblock URL
\url{http://incm.cnrs-mrs.fr/LaurentPerrinet/Publications/Perrinet02sparse}.
\newblock Special issue: New Aspects in Neurocomputing: 10th European Symposium
on Artificial Neural Networks 2002 - Edited by T. Villmann.

\bibitem[Perrinet et~al.(2004)Perrinet, Samuelides, and Thorpe]{Perrinet03ieee}
Laurent~U. Perrinet, Manuel Samuelides, and Simon~J. Thorpe.
\newblock Coding static natural images using spiking event times: do neurons
cooperate?
\newblock \emph{I{EEE} {T}ransactions on {N}eural {N}etworks}, 15\penalty0
(5):\penalty0 1164--75, September 2004.
\newblock ISSN 1045-9227.
\newblock \doi{10.1109/TNN.2004.833303}.
\newblock URL \url{http://hal.archives-ouvertes.fr/hal-00110803/en/}.
\newblock {S}pecial issue on '{T}emporal {C}oding for {N}eural {I}nformation
{P}rocessing'.

\bibitem[Srinivasan et~al.(1982)Srinivasan, Laughlin, and Dubs]{Srinivasan82}
Mandyam~V. Srinivasan, Simon~B. Laughlin, and A.~Dubs.
\newblock Predictive coding: {A} fresh view of inhibition in the retina.
\newblock \emph{Proceedings of the Royal Society of London. Series B,
Biological Sciences}, 216\penalty0 (1205):\penalty0 427--59, Nov 1982.

\bibitem[van Hateren(1993)]{Hateren93}
J.~Hans van Hateren.
\newblock Spatiotemporal contrast sensitivity of early vision.
\newblock \emph{Vision {R}esearch}, 33:\penalty0 257--67, 1993.

\bibitem[{van Rullen} and Thorpe(2001)]{Van-Rullen01a}
Rufin {van Rullen} and Simon~J. Thorpe.
\newblock Rate coding versus temporal order coding: what the retina ganglion
cells tell the visual cortex.
\newblock \emph{Neural {C}omputation}, 13\penalty0 (6):\penalty0 1255--83,
2001.

\bibitem[von Neumann(2000)]{Neumann00}
John von Neumann.
\newblock \emph{The Computer and the Brain : Second Edition (Mrs. Hepsa Ely
Silliman Memorial Lectures)}.
\newblock {Yale University Press}, July 2000.
\newblock ISBN 0300084730.

\bibitem[von Neumann(1966)]{Neumann66}
John von Neumann.
\newblock \emph{Theory of {S}elf-{R}eproducing {A}utomata}.
\newblock University of Illinois Press, Champaign, IL, 1966.

\end{thebibliography}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}%

```
