# Efficient learning of sparse image representations using homeostatic regulation

This work is a follow-up of (Perrinet, 2010, *Neural Computation*).

The code is available at https://github.com/laurentperrinet/BoutinRuffierPerrinet17spars and builds heavily on https://github.com/bicv/SparseHebbianLearning.

The poster will be presented on Thursday, June 8 at SPARS, Lisbon.

One core advantage of sparse representations is the efficient coding of complex signals using compact codes: any sample can be represented as a combination of a few elements drawn from a large dictionary of basis functions. In the context of the efficient processing of natural images, we propose here that sparse coding can be optimized by designing a proper homeostatic rule regulating the competition between the elements of the dictionary. Indeed, a common design for unsupervised learning rules relies on a gradient descent over a cost that measures representation quality with respect to sparseness. The sparseness constraint introduces a competition which can be optimized by ensuring that each item in the dictionary is selected as often as the others. We implemented this rule by introducing a gain normalization similar to what is observed in biological neural networks. We validated this theoretical insight by running the matching pursuit sparse coding algorithm with the same learning rule, with and without homeostasis. Simulations show that, for a given homeostasis rule, gradient descent learned a dataset of image patches with similar coding accuracy in both cases; however, including homeostasis changed the learned features qualitatively. In particular, homeostasis results in a more homogeneous set of orientation-selective filters, which is closer to what is found in the visual cortex of mammals. To further validate these results, we applied this algorithm to the optimization of a visual system to be embedded in an aerial robot. In summary, this biologically inspired learning rule demonstrates that principles observed in neural computations can help improve real-life machine learning algorithms.
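To make the mechanism concrete, here is a minimal sketch of gain-modulated matching pursuit. This is a hypothetical simplification for illustration, not the actual implementation from SparseHebbianLearning: the function names (`matching_pursuit`, `homeostatic_update`), the random dictionary, and the exponential gain rule are assumptions. The key idea it demonstrates is that a homeostatic gain biases the greedy atom-selection step so that, over many patches, all atoms tend to be selected equally often.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dictionary of unit-norm atoms (columns): 64-dim patches, 128 atoms.
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)

def matching_pursuit(x, D, gain, n_atoms=5):
    """Greedy matching pursuit; the homeostatic gain biases atom selection."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        c = D.T @ residual                    # correlations with all atoms
        k = int(np.argmax(gain * np.abs(c)))  # gain-modulated competition
        coeffs[k] += c[k]
        residual = residual - c[k] * D[:, k]  # remove selected component
    return coeffs, residual

def homeostatic_update(gain, counts, eta=0.05):
    """Lower the gain of over-selected atoms and raise it for under-selected
    ones, so every atom tends to be chosen equally often (assumed rule)."""
    target = counts.mean() + 1e-12
    return gain * np.exp(-eta * (counts - target) / target)

gain = np.ones(D.shape[1])
counts = np.zeros(D.shape[1])
for _ in range(200):                          # stream of random "patches"
    x = rng.standard_normal(64)
    coeffs, residual = matching_pursuit(x, D, gain)
    counts += (coeffs != 0)                   # track how often each atom fires
    gain = homeostatic_update(gain, counts)
```

In a full learning loop, the dictionary `D` would also be updated by Hebbian gradient descent on the reconstruction error; the sketch isolates only the homeostatic competition described above.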

## Reference

- Victor Boutin, Franck Ruffier, Laurent Perrinet. __Efficient learning of sparse image representations using homeostatic regulation__. In *SPARS 2017, Lisbon*, 2017.

All material (c) L. Perrinet. Please check the copyright notice.

This work was supported by the Doc2Amu project, which received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 713750. The project is co-financed by the Conseil Régional Provence-Alpes-Côte d'Azur.