
A Comparison of Human and Machine Performance in Object Recognition Using the ObjectNet Image Set

Abstract

The best-performing artificial intelligence systems for object recognition are deep neural networks (DNNs). For several years now, engineers and neuroscientists have claimed that DNNs achieve human-level performance in object recognition. However, Barbu et al. (2019) reported accuracies of around 30% for state-of-the-art object recognition systems when tested on their better-controlled image set, ObjectNet. How do humans perform on ObjectNet? In our experiment, we tested 25 undergraduates' ability to classify the ten categories of objects in ObjectNet that Deep Convolutional Neural Networks (DCNNs) found easiest, moderately difficult, and hardest. Although humans and DCNNs had similar overall accuracy levels, there were some everyday, basic-level categories for which machine performance was much lower than human performance. The pattern of errors generated by the DCNNs was about as similar to the human error patterns as the individual human error patterns were to each other. Implications of these results and plans for future work are discussed.
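To make the error-pattern comparison concrete, the sketch below shows one way such a similarity analysis could be set up in Python: represent each observer (human or DCNN) as a vector of per-category error rates, then compare mean pairwise correlations among humans against the mean correlation between the DCNN and each human. The synthetic data, the array shapes, and the choice of Pearson correlation are illustrative assumptions, not the authors' reported method.

```python
import numpy as np

# Illustrative synthetic data: rows = observers, columns = per-category error rates.
# Shapes (25 humans, 30 categories, 1 DCNN) are assumptions for demonstration only.
rng = np.random.default_rng(0)
human_errors = rng.uniform(0.1, 0.6, size=(25, 30))  # 25 human observers
dcnn_errors = rng.uniform(0.1, 0.6, size=(1, 30))    # one DCNN

def pairwise_correlations(a, b):
    """Pearson correlation of each row of `a` with each row of `b`."""
    return np.array([[np.corrcoef(x, y)[0, 1] for y in b] for x in a])

# Human-human similarity: mean correlation over distinct pairs of humans.
hh = pairwise_correlations(human_errors, human_errors)
hh_mean = hh[np.triu_indices_from(hh, k=1)].mean()

# DCNN-human similarity: mean correlation of the model with each human.
dh_mean = pairwise_correlations(dcnn_errors, human_errors).mean()

print(f"mean human-human r = {hh_mean:.2f}, mean DCNN-human r = {dh_mean:.2f}")
```

Under the abstract's finding, these two means would be roughly comparable; with the random data above they carry no empirical meaning and serve only to show the computation's shape.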
