eScholarship
Open Access Publications from the University of California

UC San Diego Previously Published Works

Technical Note: Assessing the performance of monthly CBCT image quality QA

Published Web Location

https://doi.org/10.1002/mp.13535
Abstract

Purpose

To assess the performance of routine cone-beam computed tomography (CBCT) quality assurance (QA) at predicting and diagnosing clinically recognizable linac CBCT image quality issues.

Methods

Monthly automated linac CBCT image quality QA data were acquired on eight Varian linacs (Varian Medical Systems, Palo Alto, CA) using the CATPHAN 500 series phantom (The Phantom Laboratory, Inc., Greenwich, NY) and Total QA software (Image Owl, Inc., Greenwich, NY) over 34 months between July 2014 and May 2017. For each linac, the following image quality metrics were acquired: geometric distortion, spatial resolution, Hounsfield Unit (HU) constancy, uniformity, and noise. Quality control (QC) limits were determined from American Association of Physicists in Medicine (AAPM) expert consensus documents (Task Group reports TG-142 and TG-179) and the manufacturer's acceptance testing procedure. Clinically recognizable CBCT issues were extracted from the in-house incident learning system (ILS) and from service reports. The sensitivity and specificity of CATPHAN QA at predicting clinically recognizable image quality issues were investigated. Sensitivity was defined as the percentage of clinically recognizable CBCT image quality issues that followed a failing CATPHAN QA. QA results were categorized as failing if one or more image quality metrics were outside the QC limits. The specificity of CATPHAN QA was defined as one minus the fraction of failing CATPHAN QA results that did not have a clinically recognizable CBCT image quality issue in the subsequent month. Receiver operating characteristic (ROC) curves were generated for each image quality metric by plotting the true positive rate (TPR) against the false-positive rate (FPR).
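The sensitivity and specificity definitions above can be sketched directly in code. This is a minimal illustration using made-up monthly QA outcomes, not the study's data; the variable names and the toy arrays are assumptions for illustration only.

```python
# Sketch of the abstract's sensitivity/specificity definitions.
# qa_fail[i]  : True if month i's CATPHAN QA had any metric outside QC limits
# incident[i] : True if a clinically recognizable CBCT issue followed month i's QA
# (Both arrays below are invented examples, not the study's data.)

def sensitivity(qa_fail, incident):
    """Fraction of clinical issues that were preceded by a failing QA result."""
    issues = [i for i, inc in enumerate(incident) if inc]
    if not issues:
        return 0.0
    caught = sum(1 for i in issues if qa_fail[i])
    return caught / len(issues)

def specificity(qa_fail, incident):
    """One minus the fraction of failing QA results with no issue the next month,
    per the definition used in the abstract."""
    fails = [i for i, f in enumerate(qa_fail) if f]
    if not fails:
        return 1.0
    false_alarms = sum(1 for i in fails if not incident[i])
    return 1.0 - false_alarms / len(fails)

qa_fail  = [True, False, True, False, True, False]
incident = [True, False, False, True, False, False]

print(sensitivity(qa_fail, incident))  # 1 of 2 issues was flagged -> 0.5
print(specificity(qa_fail, incident))  # 2 of 3 QA failures had no issue -> 1/3
```

Note that the study's specificity definition is tied to the failing-QA results themselves (a false alarm is a QA failure with no subsequent clinical issue), rather than the textbook TN/(TN+FP) form.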

Results

Over the study period, 18 image quality issues were discovered by clinicians while using CBCT for patient setup, and five more were reported prior to x-ray tube repair. The incidents ranged from ring artifacts to uniformity problems. The sensitivity of the TG-142/179 limits was 17% (only four of the 23 clinically recognizable image quality issues were preceded by a failing monthly QC test). The area under the curve (AUC) calculated for each image quality metric ROC curve was: 0.85 for uniformity, 0.66 for spatial resolution, 0.51 for geometric distortion, 0.56 for noise, 0.73 for HU constancy, and 0.59 for contrast resolution.
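An AUC of the kind reported above can be computed without explicitly tracing the ROC curve, via the rank (Mann-Whitney) formulation: the probability that a randomly chosen issue-month scores worse on the metric than a randomly chosen issue-free month. The sketch below uses synthetic metric values, not the study's measurements; the convention that higher values are worse is an assumption for illustration.

```python
# Minimal AUC sketch for one image quality metric.
# metric[i] : monthly QA measurement (assumed convention: higher = worse)
# issue[i]  : True if a clinical image quality issue followed that month
# (Synthetic example values, not the study's data.)

def roc_auc(metric, issue):
    """AUC via the rank (Mann-Whitney) formulation: the probability that a
    positive month's metric exceeds a negative month's, ties counted as 0.5."""
    pos = [m for m, y in zip(metric, issue) if y]
    neg = [m for m, y in zip(metric, issue) if not y]
    if not pos or not neg:
        return 0.5
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

metric = [0.9, 0.2, 0.7, 0.4, 0.8, 0.1]
issue  = [True, False, True, False, False, False]
print(roc_auc(metric, issue))  # 7 of 8 pairwise comparisons won -> 0.875
```

Sweeping a threshold over the metric and plotting TPR against FPR, as described in the Methods, yields the same area by the trapezoidal rule.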

Conclusion

Automated monthly QA is not a good predictor of CBCT image quality issues. Of the available metrics, uniformity has the best predictive performance, but it still has a high FPR and low sensitivity. The poor performance of CATPHAN QA as a predictor of image quality problems is partially due to its reliance on region-of-interest (ROI) based algorithms and the absence of a global measure such as image correlation. Whether image quality issues trend toward failure or occur at random is still not known and should be studied further. CBCT image quality QA should be adapted based on how CBCT is used clinically.
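To illustrate the kind of global measure the conclusion alludes to, the sketch below compares a current phantom scan against a baseline scan with a Pearson correlation over all pixels. This is not the study's implementation; the pixel lists and the 0.95 tolerance are hypothetical, chosen only to show the idea that a whole-image statistic can respond to artifacts (e.g., rings) that ROI averages can miss.

```python
# Sketch: a global image-correlation check to complement ROI-based metrics.
# Pearson correlation between the current phantom scan and a baseline scan;
# a value below a tolerance (0.95 here, a hypothetical choice) would flag a
# global change such as a ring artifact that ROI statistics can miss.
import math

def pearson(a, b):
    """Pearson correlation of two equal-length pixel-value lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Invented 8-pixel "scans" standing in for full CBCT slices.
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
current  = [100, 102, 98, 101, 99, 100, 102, 97]

r = pearson(baseline, current)
print("flag" if r < 0.95 else "pass")  # nearly identical scans -> "pass"
```

A real check would operate on registered 2D slices or 3D volumes (e.g., flattened arrays) rather than short lists, but the statistic is the same.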

