eScholarship
Open Access Publications from the University of California

Using Camera Traps and AI to Improve Efficacy and Reduce Bycatch at Goodnature A24 Rodent Traps in Hawaii

  • Author(s): Crampton, Lisa H.; Gallerani, Erica M.; Reeves, Mari K.; et al.
Abstract

Camera traps provide an unobtrusive means to monitor wildlife presence and behavior, yet there is a steep learning curve associated with their deployment. Camera model, settings, and position, target behavior, and technicians’ skill greatly influence the success of camera trapping. Furthermore, data storage and management are complex, as copious photos occupy considerable storage space. Finally, evaluating large numbers of digital images is time-consuming for low-frequency events; in each of our trials we amassed 10,000-50,000 photos, of which only 6-20% contained target animals. The application of artificial intelligence (AI) to digital image datasets can greatly increase efficiency, but few existing algorithms have been trained on small animals. We embarked on a camera trapping project to assess interactions of target (rodent) and non-target (bird) species with 125 Goodnature A24 rat traps deployed in rainforest sites on Kauai, Hawaii, following several observations of non-target mortality. While our long-term goal was to use camera trap data to suggest modifications to traps that would maintain target kills while minimizing bycatch, the short-term goal presented in this manuscript was to refine our camera trapping program and our AI pipeline for classifying photos of small animals. Specifically, we describe lessons learned regarding 1) the performance of several camera models, 2) camera placement, 3) data management, and 4) artificial neural network training and development. First, we report on field studies assessing Bushnell TrophyCam HD, Bushnell HD, Reconyx HyperFire, and Reconyx HyperFire2 models at a variety of settings, distances, and angles with respect to the traps. Camera model and placement at traps are critical to capturing images amenable to AI development, as is variation in the training dataset. Second, we outline our data management and sharing protocols. Third, we describe the development of preliminary AI models to review and sort camera trap data.
Early models reduced the workload of reviewing camera trap data by correctly classifying photos of rats, birds, humans, pigs, and empty frames. We expect these results to further improve with more training data. These results will greatly enhance the efficacy of several camera trapping studies that we have recently undertaken and help us modify traps to avoid bycatch.
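The triage step described above, in which classifier output is used to reduce the manual review workload, can be sketched as follows. This is an illustrative example only, not the authors' actual pipeline: the class labels come from the abstract (rats, birds, humans, pigs, empty frames), while the function name, the tuple format for classifier predictions, and the confidence threshold are hypothetical assumptions.

```python
# Hypothetical label set taken from the classes named in the abstract.
CLASSES = ["rat", "bird", "human", "pig", "empty"]

def sort_photos(predictions, review_threshold=0.8):
    """Split classifier output into auto-accepted labels and a
    manual-review queue, based on a confidence threshold.

    predictions: list of (filename, label, confidence) tuples, as a
    trained image classifier might produce (format is assumed here).
    Returns (accepted, review): a dict mapping each class to its
    confidently labeled filenames, and a list of filenames that
    still need human review.
    """
    accepted = {c: [] for c in CLASSES}
    review = []
    for filename, label, confidence in predictions:
        if label in CLASSES and confidence >= review_threshold:
            accepted[label].append(filename)   # trust the model
        else:
            review.append(filename)            # low confidence: human checks
    return accepted, review

# Usage: only the low-confidence photo is routed to manual review.
accepted, review = sort_photos([
    ("IMG_0001.jpg", "rat", 0.95),
    ("IMG_0002.jpg", "empty", 0.99),
    ("IMG_0003.jpg", "bird", 0.45),
])
```

With most frames empty or confidently classified, a scheme like this is where the reported workload reduction comes from: reviewers inspect only the uncertain minority of images.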
