Vision systems use a pipeline of feature extraction and analysis to predict the desired output from input image data. With robust features, we can achieve high accuracy using carefully chosen but simple analysis. Learning features and using features effectively are the two problems we focus on in this work. We propose a novel formulation of convolutional sparse coding called spherical sparse coding (SSC). SSC removes the need for the iterative optimization required to compute the sparse codes (features) in the typical least-squares formulation. Using the SSC formulation, we show a clear connection between convolutional sparse coding and convolutional neural networks (CNNs). We extend SSC to a supervised classification method that uses codes biased by a hypothesized class. These class-specific codes can be reconstructed to give images that maximize the classification score of the hypothesized class. Next, we propose using the Siamese network architecture with the multi-channel normalized cross-correlation (MCNCC) similarity metric for cross-domain image matching. We show that our choices in how we use features can have significant performance consequences; even prior to learning any parameters, we achieve state-of-the-art performance using off-the-shelf CNN features with MCNCC on a number of different cross-domain matching problems.
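To make the MCNCC similarity concrete, the following is a minimal NumPy sketch under assumed details: the two feature maps are the same size (the sliding-window case reduces to this at a single offset), each channel is independently zero-mean, unit-norm normalized, and the per-channel correlations are averaged with equal weight. The function name `mcncc` and the `eps` stabilizer are illustrative choices, not the exact implementation described in this work.

```python
import numpy as np

def mcncc(f1, f2, eps=1e-8):
    """Multi-channel normalized cross-correlation between two
    feature maps of shape (channels, height, width).

    Each channel is centered and normalized independently; the
    per-channel correlations are then averaged across channels.
    """
    assert f1.shape == f2.shape, "feature maps must have matching shapes"
    num_channels = f1.shape[0]
    scores = np.empty(num_channels)
    for k in range(num_channels):
        a = f1[k] - f1[k].mean()          # zero-mean per channel
        b = f2[k] - f2[k].mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b) + eps
        scores[k] = (a * b).sum() / denom  # normalized cross-correlation
    return scores.mean()                   # average over channels
```

With this form, identical feature maps score approximately 1 and negated maps approximately -1, regardless of per-channel scale, which is what makes the metric attractive for comparing features from different domains.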