eScholarship
Open Access Publications from the University of California

Automated Video-Based Fall Detection

  • Author(s): Edgcomb, Alex Daniel
  • Advisor(s): Vahid, Frank
Abstract

Automatically detecting falls is a desired part of caring for seniors who live alone. Researchers have developed various video-based fall detection methods, including moving-region-based and 3D-projection-based methods. We introduce a video-based fall detection method that is simpler and more efficient than previous methods, while being equally or more accurate. The method is based on the moving region, represented as a minimum bounding rectangle (MBR) around the person in the video. The method applies fall detectors, each of which uses a particular feature of the MBR, such as its height or width, to contribute a fall likelihood score. The fall likelihood scores are combined to produce a single-camera fall score, and scores from multiple cameras can be combined to produce a multi-camera fall score. We evaluated our method on a commonly used video data set featuring a middle-aged male actor performing falls and in-home activities. We report accuracy as sensitivity and specificity, and efficiency as frames per second (FPS). The method achieved 0.960 sensitivity and 0.995 specificity with a single camera, and at least 0.990 sensitivity and at least 0.990 specificity with two or more cameras. The method runs at 32.1 FPS while single-threaded on a 3.30 GHz Xeon processor. Our method was more accurate than the state-of-the-art MBR-based methods, while being equally efficient. Our method was also about 10x more efficient than the state-of-the-art projection-based algorithms, while being more accurate with 3 cameras and equally accurate with 4 or more cameras.
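The scoring pipeline described above (per-feature detectors on the MBR, combined into a single-camera score, then combined across cameras) can be sketched as follows. This is a minimal illustration, not the dissertation's actual algorithm: the detector formulas (`height_collapse`, `width_spread`), the mean-based combination rules, and all names are assumptions made for this example.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class MBR:
    """Minimum bounding rectangle around the moving region (the person)."""
    width: float
    height: float


# A detector maps two consecutive MBRs to a fall-likelihood score in [0, 1].
Detector = Callable[[MBR, MBR], float]


def height_collapse(prev: MBR, curr: MBR) -> float:
    """Hypothetical detector: score rises as the MBR height shrinks."""
    if prev.height <= 0:
        return 0.0
    return max(0.0, min(1.0, 1.0 - curr.height / prev.height))


def width_spread(prev: MBR, curr: MBR) -> float:
    """Hypothetical detector: score rises as the MBR widens."""
    if curr.width <= 0:
        return 0.0
    return max(0.0, min(1.0, 1.0 - prev.width / curr.width))


def single_camera_score(prev: MBR, curr: MBR,
                        detectors: List[Detector]) -> float:
    """Combine per-feature likelihoods into one single-camera fall score.

    The mean is one simple choice; a weighted sum would also fit the
    description in the abstract.
    """
    return sum(d(prev, curr) for d in detectors) / len(detectors)


def multi_camera_score(camera_scores: List[float]) -> float:
    """Combine single-camera fall scores from multiple cameras (mean)."""
    return sum(camera_scores) / len(camera_scores)


# Example: a standing person (tall, narrow MBR) falls (short, wide MBR).
standing = MBR(width=40.0, height=180.0)
fallen = MBR(width=160.0, height=40.0)
score = single_camera_score(standing, fallen, [height_collapse, width_spread])
```

With these made-up detectors, the standing-to-fallen transition yields a high single-camera score, while small frame-to-frame MBR changes (normal activity) yield a score near zero.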
