[Figure 4: Fig4.png]

Figure 4: Cooperative homeostasis implements efficient quantization. (Left) When cooperative homeostasis is switched off during learning, the corresponding Sparse Hebbian Learning algorithm, Adaptive Matching Pursuit (AMP), converges to a set of filters that contains some less localized filters and some high-frequency Gabor functions corresponding to more "textural" features (Perrinet, 2003). One may wonder whether these filters are inefficient and capture noise, or whether they correspond to independent features of natural images in the LGM model. (Right, Inset) In fact, when plotting residual energy as a function of L_0 norm sparseness with the MP algorithm (as in Fig. 3, Right), the AMP dictionary gives a slightly worse result than aSSC. (Right) Moreover, representation efficiency should be assessed for the overall coding and decoding algorithm. We therefore compare the efficiency of these dictionaries using the same coding method (SSC) and the same decoding method (rank-quantized coefficients). The representation length for this decoding method is proportional to the L_0 norm, with λ = log(M)/L ≈ 0.032 bits per coefficient and per pixel, as defined in Eq. 1 (see text). We observe that the dictionary obtained by aSSC is more efficient than the one obtained by AMP, while the dictionary obtained with SparseNet (SN) gives an intermediate result thanks to its geometric homeostasis: introducing cooperative homeostasis globally improves the neural representation.
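The per-coefficient cost quoted in the caption can be reproduced with a few lines of arithmetic. The sketch below is illustrative only: since the cost is expressed in bits, the logarithm is taken base 2, and the dictionary size M and patch size L are assumed values (not given in this caption) chosen so that λ comes out near the quoted 0.032. It also assumes the interpretation that, with rank-quantized coefficients, only the addresses of the chosen atoms need to be encoded, the coefficient values being recovered from their rank order.

```python
import numpy as np

# Minimal sketch of the coding-cost factor quoted in the caption:
# each active coefficient is addressed by its rank in the dictionary,
# which costs log2(M) bits; dividing by the number of pixels L gives
# the cost per coefficient and per pixel.
M = 17 * 17   # assumed number of dictionary atoms (not stated in the caption)
L = 16 * 16   # assumed number of pixels per image patch (not stated in the caption)

lam = np.log2(M) / L   # bits per coefficient, per pixel
print(f"lambda = {lam:.3f} bits per coefficient per pixel")  # ~0.032

def representation_length(ell0, lam=lam):
    """Coding cost in bits per pixel for a sparse code with `ell0`
    active coefficients, when only the atom addresses are transmitted
    (rank-quantized decoding)."""
    return lam * ell0

# e.g. a code using 20 active coefficients per patch:
print(f"ell_0 = 20 coefficients -> {representation_length(20):.2f} bits/pixel")
```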
