Figure 3: Coding efficiency of SparseNet versus aSSC. We evaluate the quality of both learning schemes by comparing the coding efficiency of their respective coding algorithms, that is, CGF and COMP, each using the dictionary it learnt (see Fig. 1). (Left) We show the probability distribution function of the sparse coefficients obtained by both methods with random dictionaries (respectively 'SN-init' and 'aSSC-init') and with the dictionaries obtained after convergence of the respective learning schemes (respectively 'SN' and 'aSSC'). At convergence, sparse coefficients are more sparsely distributed than initially, with more kurtotic probability distribution functions for aSSC in both cases. (Right) We plot the average residual error (L_2 norm) as a function of the relative number of active (non-zero) coefficients. This provides a measure of the coding efficiency for each dictionary over the set of image patches (error bars are scaled to one standard deviation). The L_0 norm equals the number of coding steps in COMP. The best results are those providing a lower error for a given sparsity (better compression) or a sparser code, that is, fewer active coefficients, for the same error (Occam's razor). We observe similar coding results for aSSC despite its non-parametric definition. This result also holds when the two dictionaries are used with the same OOMP sparse coding algorithm: the dictionaries still have similar coding efficiencies.
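The efficiency curve in the right panel can be reproduced in principle by sparse-coding each patch at a range of sparsity levels and recording the residual L_2 norm. The sketch below uses a plain matching pursuit as a simplified stand-in for the COMP/OOMP greedy coders named in the caption; the dictionary, patch data, and sparsity levels are synthetic placeholders, not the ones used in the figure.

```python
import numpy as np

def matching_pursuit(x, D, n_active):
    """Greedy sparse coding: at each step, pick the unit-norm atom most
    correlated with the residual (simplified stand-in for COMP/OOMP)."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_active):
        correlations = D.T @ residual
        k = np.argmax(np.abs(correlations))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * D[:, k]
    return coeffs, residual

# Synthetic dictionary and patches (placeholders for the learnt
# dictionaries and the image-patch dataset of the paper).
rng = np.random.default_rng(0)
n_dim, n_atoms, n_patches = 64, 128, 200
D = rng.normal(size=(n_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
patches = rng.normal(size=(n_dim, n_patches))
patches /= np.linalg.norm(patches, axis=0)

# Efficiency curve: mean L_2 residual vs relative L_0 sparsity.
for n_active in (2, 8, 32):
    errors = [np.linalg.norm(matching_pursuit(patches[:, i], D, n_active)[1])
              for i in range(n_patches)]
    print(f"L0/M = {n_active / n_atoms:.3f}  mean L2 error = {np.mean(errors):.3f}")
```

As in the figure, a better dictionary is one whose curve lies lower: smaller residual error at the same relative number of active coefficients.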