Can the High-Level Semantics of a Scene be Preserved in the Low-Level Visual Features of that Scene? A Study of Disorder and Naturalness
Abstract
Real-world scenes contain low-level visual features (e.g., edges, colors) and high-level semantic features (e.g., objects and places). Traditional models of visual perception assume that integration of low-level visual features and segmentation of the scene must occur before high-level semantics are perceived. This view implies that the low-level visual features of a scene alone do not carry semantic information about that scene. Here we present evidence that suggests otherwise. We show that high-level semantics can be preserved in low-level visual features, and that different high-level semantics can be preserved in different types of low-level visual features. Specifically, the ‘disorder’ of a scene is preserved in edge features better than in color features, whereas the converse is true for ‘naturalness.’ These findings suggest that semantic processing may start earlier than previously thought, and that integration of low-level visual features and segmentation of the scene may occur after semantic processing has begun, or in parallel with it.
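To make the distinction between the two feature types concrete, the sketch below computes two simple low-level summaries of a scene image: an edge-based measure (Canny edge density) and a color-based measure (mean hue and saturation). This is an illustrative assumption using standard OpenCV operations, not the feature pipeline used in the study; the file name and thresholds are hypothetical.

```python
# Illustrative sketch only: simple edge and color summaries of a scene image.
# Requires OpenCV (cv2) and NumPy; these specific features are assumptions for
# illustration, not the authors' actual analysis.
import cv2
import numpy as np


def edge_density(image_bgr: np.ndarray) -> float:
    """Fraction of pixels marked as edges by a Canny detector."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)  # thresholds are arbitrary here
    return float(np.count_nonzero(edges)) / edges.size


def color_summary(image_bgr: np.ndarray) -> dict:
    """Mean hue and saturation as a stand-in for 'color features'."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hue, sat, _ = cv2.split(hsv)
    return {"mean_hue": float(hue.mean()), "mean_saturation": float(sat.mean())}


if __name__ == "__main__":
    img = cv2.imread("scene.jpg")  # hypothetical input image
    print("edge density:", edge_density(img))
    print("color summary:", color_summary(img))
```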