Category learning is a fundamental process in human cognition. Recent efforts have attempted to adapt theories developed in vision to the auditory domain. However, no study has directly compared auditory and visual category learning in the same individuals. Using a fully within-subjects approach, we trained participants on non-speech auditory, visual, and non-native speech categories in a single day. By comparing category learning behavior and the ability to generalize to novel category exemplars, and by leveraging decision bound computational models, we found that while individuals demonstrated similar learning across the auditory and visual modalities, distinct perceptual biases influenced learning of non-speech auditory categories. Further, there were substantial individual differences in performance across the three tasks. This study presents a novel comparison of category learning across modalities in the same individuals and demonstrates that, although commonalities exist, category learning retains some domain-specificity.