Real-time computer vision applications must execute under strict runtime constraints. A CPU implementation provides a performance baseline, but custom parallel hardware such as graphics processing units (GPUs) and field programmable gate arrays (FPGAs) offers a cost-effective path to greater performance. This additional performance can move an algorithm from non-real-time into the realm of real-time, opening possibilities for interaction that did not exist before. Face detection, for example, can be used to set focus points in cameras when performed in real-time; similarly, real-time body part tracking can serve as input for consumer televisions or video game systems. Acceleration using heterogeneous hardware is attractive because algorithms exhibit different models of computation at different stages of execution, and each stage can be assigned to the platform that executes it most efficiently. However, combining these platforms in a single application is difficult due to the lack of reusable components and communication abstractions for these devices. This work describes the Smart Frame Grabber Framework, a framework that lowers the barrier to accelerating computer vision applications. It is a collection of reusable hardware acceleration components commonly used to accelerate computer vision applications on CPUs and FPGAs, and it allows applications to be easily partitioned across multiple heterogeneous compute devices. At the heart of this framework is a communication and synchronization platform called RIFFA: A Reusable Integration Framework for FPGA Accelerators. Using the Smart Frame Grabber Framework, researchers can design and build a hardware-accelerated computer vision application in considerably less time and with less upfront effort than with existing vendor-provided tools alone.