Emergence of filters from natural scenes in a sparse spike coding scheme

As an alternative to the classical representations used in machine learning algorithms, we explore coding strategies based on events, as observed for spiking neurons in the central nervous system. Focusing on the primary visual cortex (V1), we have previously shown that a sparse spike coding scheme can be defined by implementing lateral interactions that correspond to a correlation-based inhibition (Perrinet, 2002). This class of algorithms is compatible both with biological constraints and with neurophysiological observations, and yields an efficient event-based computing scheme. Here, we explore learning mechanisms to derive, in an unsupervised manner, an optimal overcomplete set of filters, building on the previous work of Olshausen and Field, and we show its biological relevance. In particular, we study the role of homeostasis in the efficiency of the resulting set of receptive fields (Perrinet, 2010).
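
To make this concrete, here is a minimal sketch (in Python with NumPy, not the published code) of one coding and learning step: Matching Pursuit lets the filter most correlated with the residual fire and subtracts its contribution (the correlation-based inhibition), a Hebbian rule then moves the active filters toward the data, and a running usage trace stands in, crudely, for the homeostatic gain studied in Perrinet (2010). All parameter values are illustrative.

    # Minimal sketch, not the published code: sparse spike coding by Matching
    # Pursuit plus a Hebbian update with a crude homeostatic gain.
    # Dictionary size, n_spikes, eta and the trace constants are illustrative.
    import numpy as np

    def matching_pursuit(patch, Phi, gain, n_spikes=10):
        """Greedy sparse coding: the filter most correlated with the residual
        'fires'; its contribution is subtracted (correlation-based inhibition)."""
        residual = patch.copy()
        coeffs = np.zeros(Phi.shape[1])
        for _ in range(n_spikes):
            raw = Phi.T @ residual                    # correlations with the residual
            winner = np.argmax(gain * np.abs(raw))    # homeostasis biases the competition
            coeffs[winner] += raw[winner]
            residual -= raw[winner] * Phi[:, winner]  # inhibit (explain away) the winner
        return coeffs, residual

    def hebbian_update(Phi, coeffs, residual, eta=0.01):
        """Sparse Hebbian learning: move the active filters toward the residual,
        then re-normalize every modified filter to unit norm."""
        for i in np.nonzero(coeffs)[0]:
            Phi[:, i] += eta * coeffs[i] * residual
            Phi[:, i] /= np.linalg.norm(Phi[:, i])
        return Phi

    # Toy run on random data, standing in for whitened natural image patches.
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((16 * 16, 32))
    Phi /= np.linalg.norm(Phi, axis=0)                # unit-norm dictionary
    usage = np.ones(Phi.shape[1])                     # running trace of filter usage
    gain = np.ones(Phi.shape[1])
    for _ in range(100):
        patch = rng.standard_normal(16 * 16)
        coeffs, residual = matching_pursuit(patch, Phi, gain)
        Phi = hebbian_update(Phi, coeffs, residual)
        usage = 0.99 * usage + 0.01 * (coeffs != 0)   # low-pass filter of activity
        gain = usage.mean() / usage                   # boost under-used filters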

[Figure: assc.png and sparsenet.png]

Results of learning the sparsification for two different coding strategies: (Left) coding by Matching Pursuit; (Right) coding using Conjugate Gradient as in [Olshausen, 1998] (data from experiment 20081002T123100, available upon request).

[Animation: ssc.gif and cgf.gif]

Evolution in time of the receptive fields' shapes: (Left) coding by Matching Pursuit; (Right) coding using Conjugate Gradient.

References

  • Laurent U. Perrinet. Role of homeostasis in learning sparse representations. Neural Computation, 22(7), 2010.
  • Bruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
  • Bruno A. Olshausen and David J. Field. Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Research, 37(23):3311–3325, 1997.

Reproducible research: Python implementation of SparseHebbianLearning

Animation of the formation of receptive fields during aSSC learning.

  • The latest code is available on GitHub.

Objective

  • This is a collection of Python scripts to test learning strategies for efficiently coding natural image patches. It is restricted here to the framework of the SparseNet algorithm from Bruno Olshausen (a hypothetical usage sketch follows this list).

  • This work was published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl):

    • Laurent U. Perrinet. Role of homeostasis in learning sparse representations. Neural Computation, 22(7), 2010.
  • All comments and bug corrections should be submitted to Laurent Perrinet at Laurent.Perrinet@gmail.com.

  • Find updates at http://invibe.net/LaurentPerrinet/SparseHebbianLearning.
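
  • For orientation, a hypothetical session with the Python scripts might look as follows. The entry points below (SHL, learn_dico, show_dico) and the parameter names are assumptions for illustration, not a documented API; check the README on GitHub for the actual calls:

    # Hypothetical usage sketch: SHL, learn_dico and show_dico are assumed names,
    # not a documented API; check the README on GitHub for the actual entry points.
    from shl_scripts import SHL

    shl = SHL(n_dictionary=324, eta=0.01)  # illustrative parameters
    dico = shl.learn_dico()                # unsupervised learning on image patches
    shl.show_dico(dico)                    # display the learned receptive fields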

Reproducible research: Matlab implementation of SparseHebbianLearning

Objective

  • This is a collection of Matlab scripts to test learning strategies for efficiently coding natural image patches. It is restricted here to the framework of the SparseNet algorithm from Bruno Olshausen.

  • This work was published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl):

    • Laurent U. Perrinet. Role of homeostasis in learning sparse representations. Neural Computation, 22(7), 2010.
  • This includes a set of "experiments" to test the effect of various parameters (and a 'good-night, computer' Contents.m script for running all experiments and generating all figures included in the "report"). I recommend using GNU Screen.

  • All comments and bug corrections should be submitted to Laurent Perrinet at Laurent.Perrinet@gmail.com.

  • Find updates at http://invibe.net/LaurentPerrinet/SparseHebbianLearning.

Get Ready!

  • Be sure to have:
  • a computer (tested on Mac, Linux, Irix, and Windows 2000) with Matlab (tested on R13, R14, 2007, and R2009a) or Octave (version 3.0 or later, to get imwrite.m). You will not need any special toolbox.

  • grab the sources from the zip file. Then:

    • if needed (that is, if the code breaks, complaining that it cannot find the cgf function), compile the cgf routines used by B. Olshausen for your platform (some compiled MEX routines are included),

    • to generate PDFs, you have to get the epstopdf script (see fig2pdf.m)

    • the source files exportfig and ppm2fli may be found in the src folder,

    • to generate the final report, you'll need a TeX distribution with the pdflatex program and the beamer package,

    • you will need a set of decorrelated images in your ./data folder (it is provided in the zip file, but you may make your own using the src/spherize_images.m script; a whitening sketch follows this list),

    • These scripts should be platform-independent; however, there is a heavy bias toward UN*X users when generating figures (I have not tried to generate the figures on Windows systems). In particular, the code is designed to generate figures in the background as PDFs (on a headless cluster), so no MATLAB window should pop up.
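
  • For reference, "decorrelated" (whitened) images have the roughly 1/f amplitude spectrum of natural scenes flattened, so that learning is not dominated by low spatial frequencies. Below is a minimal Python sketch of one common recipe, Fourier-domain whitening with a low-pass cutoff; the cutoff f_0 is an illustrative value, and the actual preprocessing used by this package is in src/spherize_images.m:

    # Minimal sketch of Fourier-domain whitening, one common way to decorrelate
    # images; the preprocessing actually used here is in src/spherize_images.m.
    import numpy as np

    def whiten(image, f_0=0.4):
        """Multiply the spectrum by |f| * exp(-(|f|/f_0)**4): the |f| ramp
        flattens the ~1/f spectrum of natural images and the exponential cuts
        off high-frequency noise. f_0 is in cycles/pixel (illustrative)."""
        n_y, n_x = image.shape
        f_x, f_y = np.meshgrid(np.fft.fftfreq(n_x), np.fft.fftfreq(n_y))
        f = np.sqrt(f_x ** 2 + f_y ** 2)
        filt = f * np.exp(-(f / f_0) ** 4)
        white = np.real(np.fft.ifft2(np.fft.fft2(image) * filt))
        return white / white.std()  # normalize to unit variance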

Instructions for running the experiments / understanding the scripts

  • First, if you just want to experiment with the learning scheme using Competition-Optimized Matching Pursuit, go to the scripts folder and run experiment_simple.m

  • Simply run one of the experiment_*.m files of interest, for example experiment_stability_cgf.m to test the role of parameters in the learning scheme with CGF (or run the whole collection in Contents.m), and edit it to change the parameters of the experiments. This will create a set of PDF figures in a dated folder, depending on your preferences (see default.m).

  • The Contents.m script points to the different experiments. It produces a report using pdflatex: pdflatex results.tex (see results.pdf).

  • Notation is kept from the SparseNet package. Remember, for the variables: n = network; e = experiment; s = stats.

  • on a multicore machine, you may try something like:

    for i in {1..8}; do cd /Volumes/perrinet/sci/dyva/lup/Learning/SHL_scripts/code  && sleep 0.$(( RANDOM%1000 )) ; matlab -nodisplay < Contents20100322T151819.m & done
    for i in {1..6}; do cd /master0/perrinet/sci/dyva/lup/Learning/SHL_scripts/code  && sleep 0.$(( RANDOM%1000 )) ; matlab -nodisplay < Contents20100322T151819.m & done
    for i in {1..4}; do cd /data/work/perrinet/sci/dyva/Learning/SHL_scripts/code && sleep 0.$(( RANDOM%1000 )) ; octave  Contents20100322T151819.m & cexec 'cd /data/work/perrinet/sci/dyva/Learning/SHL_scripts/code && sleep 0.$(( RANDOM%100 )) ;  octave Contents20100322T151819.m' & done 

Contents

  • code: the scripts (see Contents.m for a script pointing to the different experiments)
  • results: the individual experiments
  • data: a folder containing the image files (you can get them independently by downloading data.zip)
  • src: some other packages that may be of use

Some useful code tidbits

  • get the code

    wget "http://invibe.net/LaurentPerrinet/SparseHebbianLearning/ReproducibleResearch?action=AttachFile&do=get&target=SHL_scripts.zip"
    unzip ReproducibleResearch\?action\=AttachFile\&do\=get\&target\=SHL_scripts.zip 
  • get the code from GitHub

    wget https://github.com/bicv/SHL_scripts/archive/master.zip
    unzip master.zip -d SHL_scripts
    cd SHL_scripts/SHL_scripts-master/
  • get to the code

    cd SHL_scripts/matlab_code
  • begin a session using GNU screen:

    screen
  • start multiple MATLAB sessions

    matlab -nodisplay < Contents20100322T151819.m
  • check latest mat files produced

    ls -ltr ../results/20100322T151819/*.mat | tail -n 30
  • check processes running

    top
  • transfer files to another computer

    rsync -av ../../SHL_scripts 10.164.2.49:~/Desktop
  • once finished, compile a report

    pdflatex results.tex
    pdflatex results.tex   # run twice to resolve cross-references
  • remove SSC related files to start over

    rm -f fig* *ssc* MP.mat L* stability_eta.mat stability_homeo.mat hist.mat MP_nonhomeo_lut.mat MP_nonhomeo.mat  MP_no_sym* perturb.mat stability_oc*.mat MP_icabench_decorr.mat MP_yelmo.mat 

Changelog

  • 2.2 <- 2.1, 1-jan-2011

    • waited for official publication
  • 2.1 <- 2.0, 10-feb-2010

    • last code clean-up for publication
  • 2.0 <- 1.5, 10-dec-2009

    • lots of new figures for the reviewers
    • clean-up of the code for publication
    • a lot of bug fixes and speed improvements
    • made it Octave-compatible since I do not have Matlab anymore :-)

  • 1.5 <- 1.4, 10-nov-2007

    • a lot of bug fixes and speed improvements
    • testing binary coding more extensively
    • added scripts to test alpha MP and the perturbation of learning
  • 1.4 <- 1.3

    • bug fixes
    • better image format, script to make your own input data
    • more control experiments
  • 1.3 <- 1.2.1

    • GPL License
    • study adaptive gain (natural gradient) - implemented through switch_ng
    • fixed a lot of little bugs
  • 1.2.1 <- 1.2 <- 1.1, minor corrections, 05-Dec-2004

    • fixed some performance issues
    • added experiments
    • generates a report
  • 1.1 <- 1.0, 10-Nov-2004

    • clean-up / tried to simplify scripts (removed learning.m, allowed mp_fitS to process a whole batch, ...)
    • added experiments _fft.m, _homeo.m, _symmetric.m, modified the others
    • may now load non-square images (to load IMAGES_icabench)
    • better behavior on Windows (duh!)
  • 1.0 : initial release, 10-Nov-2003


Tags: Software, Sparse
