
Edge co-occurrences can account for rapid categorization of natural versus animal images

Figure 1: Edge co-occurrences. (A) An example image with the list of extracted edges overlaid. Each edge is represented by a red line segment which represents its position (center of segment), orientation, and scale (length of segment). (B) The relationship between a reference edge "A" and another edge "B" can be quantified in terms of the difference between their orientations θ, the ratio of their scales σ, the distance d between their centers, and the difference of azimuth (angular location) φ. Additionally, ψ = φ − θ/2 is symmetric with respect to the choice of the reference edge; in particular, ψ = 0 for co-circular edges. This is used to compute the chevron map in Figure 2.
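As a concrete illustration of the caption's parameterization, here is a minimal Python sketch computing the co-occurrence variables for one pair of edges. The edge tuple layout and the function name are assumptions made for this example, not the authors' implementation; the formula ψ = φ − θ/2 follows the definition in the caption.

```python
import numpy as np

def cooccurrence_geometry(edge_a, edge_b):
    # Each edge is assumed to be (x, y, orientation in radians, scale);
    # this layout is an illustrative convention, not the paper's code.
    xa, ya, ta, sa = edge_a
    xb, yb, tb, sb = edge_b
    d = np.hypot(xb - xa, yb - ya)            # distance between centers
    theta = tb - ta                           # difference of orientations
    sigma = sb / sa                           # ratio of scales
    phi = np.arctan2(yb - ya, xb - xa) - ta   # azimuth of B as seen from A
    psi = phi - theta / 2                     # symmetric in the choice of reference edge
    return d, theta, sigma, phi, psi
```

For two collinear edges of equal scale, all angular variables vanish, consistent with collinear edges being a limiting case of co-circularity (ψ = 0).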

A seminar of the SIS team will take place on Monday, 22 June 2015, at 11:00 in the conference room of the I3S.

Laurent Perrinet, research scientist at the Institut de Neurosciences de la Timone (Marseille, France), will present recent results on image categorization published in Nature Scientific Reports.

Title
Edge co-occurrences can account for rapid categorization of natural versus animal images.
Abstract
Making a judgment about the semantic category of a visual scene, such as whether it contains an animal, is typically assumed to involve high-level associative brain areas. Previous explanations require progressively analyzing the scene hierarchically at increasing levels of abstraction, from edge extraction to mid-level object recognition and then object categorization. Here we show that the statistics of edge co-occurrences alone are sufficient to perform a rough yet robust (translation, scale, and rotation invariant) scene categorization. We first extracted the edges from images using a scale-space analysis coupled with a sparse coding algorithm. We then computed the "association field" for different categories (natural, man-made, or containing an animal) by computing the statistics of edge co-occurrences. These differed strongly, with animal images having more curved configurations. We show that this geometry alone is sufficient for categorization, and that the pattern of errors made by humans is consistent with this procedure. Because these statistics could be measured as early as the primary visual cortex, the results challenge widely held assumptions about the flow of computations in the visual system. The results also suggest new algorithms for image classification and signal processing that exploit correlations between low-level structure and the underlying semantic category.
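The pipeline described in the abstract (accumulate second-order edge statistics per category, then categorize from that geometry alone) can be caricatured in a few lines of Python. Everything below is a hedged illustration: the edge format, the restriction to the (θ, ψ) "chevron" variables, and the nearest-histogram decision rule are assumptions for this sketch, not the published method.

```python
import numpy as np

def association_field(edges, n_bins=12):
    # Histogram of (theta, psi) over all ordered edge pairs in one image:
    # a crude stand-in for the co-occurrence statistics in the abstract.
    # Each edge is assumed to be (x, y, orientation in radians, scale).
    hist = np.zeros((n_bins, n_bins))
    for (xa, ya, ta, _sa) in edges:
        for (xb, yb, tb, _sb) in edges:
            if (xa, ya) == (xb, yb):
                continue  # skip self-pairs
            theta = (tb - ta) % np.pi                        # orientation difference
            phi = (np.arctan2(yb - ya, xb - xa) - ta) % np.pi  # relative azimuth
            psi = (phi - theta / 2) % np.pi                  # chevron variable
            i = int(theta / np.pi * n_bins) % n_bins
            j = int(psi / np.pi * n_bins) % n_bins
            hist[i, j] += 1
    total = hist.sum()
    return hist / total if total else hist

def classify(edges, category_templates):
    # Assign the category whose average association field is closest.
    # Euclidean distance between normalized histograms is an assumption;
    # the paper's actual decision rule may differ.
    h = association_field(edges)
    return min(category_templates,
               key=lambda c: np.linalg.norm(h - category_templates[c]))
```

In practice the templates would be association fields averaged over many labeled images per category; here they merely show the mechanics of comparing an image's edge geometry against category statistics.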
