Figure 1: Simple neural model of sparse coding and the role of homeostasis. (Left) We define the coding model as an information channel constituted by a bundle of Linear/Non-Linear spiking neurons. (L) A given input image patch is coded linearly using the dictionary of filters and transformed by sparse coding (such as Matching Pursuit) into a sparse vector. Each coefficient is transformed into a driving coefficient in the (NL) layer by a point non-linearity, which drives (S) a generic spiking mechanism. (D) On the receiver end (for instance in an efferent neuron), one may then estimate the input from the neural representation pattern. This decoding is progressive, and if we assume that each spike carries a bounded amount of information, the representation cost in this model increases proportionally with the number of activated neurons. (Right) However, for a given dictionary, the distribution of sparse coefficients, and hence the probability of a neuron's activation, is in general not uniform. We show (Lower panel) the log-probability distribution function and (Upper panel) the cumulative distribution of sparse coefficients for a dictionary of edge-like filters with similar selectivity (dotted scatter), except for one filter which was randomized (continuous line). This illustrates a typical situation that may occur during learning, when some components have learned less than others: since their activity is lower, they are less likely to be activated by the spiking mechanism and, by the Hebbian rule, less likely to learn. When selecting an optimal sparse set for a given input, instead of comparing sparse coefficients against a threshold (vertical dashed lines), the comparison should be made on the significance values z_i (horizontal dashed lines): in this particular case, the less selective neuron (a_1 < a_2) is selected by the homeostatic cooperation (z_1 > z_2).
The role of homeostasis during learning is to ensure that, even when the dictionary of filters is not homogeneous, the point non-linearity in (NL) adjusts the sparse coding in (L) so that the probability of a neuron's activation is uniform across the population.
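The selection rule described above can be sketched in code. The following is a minimal, hypothetical NumPy illustration (not the authors' implementation): each filter's significance map z_i is estimated as the empirical cumulative distribution of its linear coefficients over a set of patches, and a greedy Matching Pursuit then picks atoms by their significance z_i rather than by the raw coefficient a_i. All names, dimensions, and the histogram-based CDF estimate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a dictionary D of K unit-norm filters for N-dim patches.
N, K = 64, 128
D = rng.standard_normal((N, K))
D /= np.linalg.norm(D, axis=0)

def empirical_cdfs(D, patches, n_bins=100):
    """Estimate, for each filter, the cumulative distribution of its
    linear coefficients over a set of patches (the z_i significance map)."""
    coeffs = D.T @ patches                      # shape (K, n_patches)
    edges = np.linspace(coeffs.min(), coeffs.max(), n_bins + 1)
    cdfs = []
    for k in range(K):
        hist, _ = np.histogram(coeffs[k], bins=edges)
        cdfs.append(np.cumsum(hist) / hist.sum())
    return np.array(cdfs), edges

def z_scores(a, cdfs, edges):
    """Map raw coefficients a_i to significance values z_i in [0, 1]
    through each filter's own cumulative distribution (point non-linearity)."""
    idx = np.clip(np.searchsorted(edges, a) - 1, 0, cdfs.shape[1] - 1)
    return cdfs[np.arange(len(a)), idx]

def homeostatic_matching_pursuit(x, D, cdfs, edges, n_atoms=10):
    """Greedy Matching Pursuit where atoms are selected by their
    significance z_i instead of their raw coefficient a_i."""
    residual = x.copy()
    code = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        a = D.T @ residual                      # (L) linear coefficients
        k = np.argmax(z_scores(a, cdfs, edges)) # (NL) homeostatic selection
        code[k] += a[k]
        residual -= a[k] * D[:, k]              # subtract the chosen atom
    return code, residual
```

Because the columns of D are unit-norm, each subtraction removes a_k^2 from the squared residual norm, so the pursuit still converges; only the *order* in which atoms are selected changes, equalizing each neuron's probability of activation as described in the caption.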
