eScholarship
Open Access Publications from the University of California

Benchmarking Deep Learning Frameworks with FPGA-suitable Models on a Traffic Sign Dataset

  • Author(s): Lin, Zhongyi
  • Ota, Jeffrey M.
  • Owens, John D.
  • Muyan-Özcelik, Pınar
Abstract

We benchmark several widely used deep-learning frameworks on deep-learning-related automotive tasks (e.g., traffic sign recognition) that must achieve real-time, high-accuracy results with the limited resources available on embedded platforms such as FPGAs. In our benchmarks, we use various input image sizes on models suitable for FPGA deployment, and investigate the training speed and inference accuracy of the selected frameworks for these different sizes on a popular traffic sign recognition dataset. We report results from running the frameworks solely on the CPU as well as with GPU acceleration enabled. We also describe the optimizations we apply to fine-tune the performance of the frameworks. We find that Neon and MXNet deliver the best training speed and inference accuracy in general across all our test cases, while TensorFlow is consistently among the frameworks with the highest inference accuracies. We also observe that on the particular dataset we tested (i.e., GTSRB), the image size of the region of interest does not necessarily affect inference accuracy, and that using deep models, e.g., ResNet-32, which have longer training times, might not improve inference accuracy.
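The benchmark methodology described in the abstract, timing training at several input image sizes and comparing throughput, can be sketched as a small harness. This is purely illustrative and not the paper's actual code: `train_step` is a hypothetical stand-in for a real framework call (e.g., an MXNet or TensorFlow forward/backward pass on an FPGA-suitable model), and the image sizes, batch size, and iteration counts are assumptions, not the paper's configuration.

```python
import time

def train_step(batch):
    # Placeholder workload that merely touches every pixel; a real
    # benchmark would run a framework's forward and backward passes here.
    return sum(sum(row) for img in batch for row in img)

def benchmark(image_sizes=(32, 48, 64), batch_size=8, iters=5):
    """Time `train_step` at each region-of-interest size; return images/sec."""
    results = {}
    for size in image_sizes:
        # Synthetic grayscale batch of size x size images.
        batch = [[[0.5] * size for _ in range(size)] for _ in range(batch_size)]
        start = time.perf_counter()
        for _ in range(iters):
            train_step(batch)
        elapsed = time.perf_counter() - start
        results[size] = batch_size * iters / elapsed  # throughput in images/sec
    return results

if __name__ == "__main__":
    for size, ips in benchmark().items():
        print(f"{size}x{size}: {ips:.1f} images/sec")
```

Repeating such a loop per framework, once CPU-only and once with GPU acceleration enabled, yields the kind of training-speed comparison the abstract reports.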
