eScholarship
Open Access Publications from the University of California

UCSF

UC San Francisco Previously Published Works

Sensitivity and specificity of computer vision classification of eyelid photographs for programmatic trachoma assessment.

  • Author(s): Kim, Matthew C
  • Okada, Kazunori
  • Ryner, Alexander M
  • Amza, Abdou
  • Tadesse, Zerihun
  • Cotter, Sun Y
  • Gaynor, Bruce D
  • Keenan, Jeremy D
  • Lietman, Thomas M
  • Porco, Travis C
  • et al.
Abstract

BACKGROUND/AIMS: Trachoma programs base treatment decisions on the community prevalence of the clinical signs of trachoma, assessed by direct examination of the conjunctiva. Automated assessment could be more standardized and more cost-effective. We tested the hypothesis that an automated algorithm could classify eyelid photographs better than chance.

METHODS: A total of 1,656 field-collected conjunctival images were obtained from clinical trial participants in Niger and Ethiopia. Images were scored for trachomatous inflammation-follicular (TF) and trachomatous inflammation-intense (TI) according to the simplified World Health Organization grading system by expert raters. We developed an automated procedure for image enhancement followed by application of a convolutional neural net classifier for TF and separately for TI. One hundred images were selected for testing TF and TI; these images were not used for training.

RESULTS: The agreement scores for the TF and TI tasks for the automated algorithm relative to expert graders were κ = 0.44 (95% CI: 0.26 to 0.62, P < 0.001) and κ = 0.69 (95% CI: 0.55 to 0.84, P < 0.001), respectively.

DISCUSSION: For assessing the clinical signs of trachoma, a convolutional neural net performed well above chance when tested against expert consensus. Further improvements in specificity may render this method suitable for field use.
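The agreement statistic reported above is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. A minimal sketch of how it is computed for a binary grading task (e.g. TF present/absent) is shown below; the label arrays in the usage example are illustrative only and are not the study's data.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters giving binary labels (0/1).

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's marginal frequencies.
    """
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both say 1, plus probability both say 0
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_o - p_e) / (1 - p_e)


# Illustrative example (not study data): two raters agree on 3 of 4 images
algorithm_grades = [1, 1, 0, 0]
expert_grades = [1, 0, 0, 0]
print(cohens_kappa(algorithm_grades, expert_grades))  # 0.5
```

Kappa values of 0.44 (TF) and 0.69 (TI), as reported in the abstract, are conventionally read as moderate and substantial agreement, respectively.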

