- Yu, Cenji;
- Anakwenze, Chidinma P;
- Zhao, Yao;
- Martin, Rachael M;
- Ludmir, Ethan B;
- Niedzielski, Joshua S;
- Qureshi, Asad;
- Das, Prajnan;
- Holliday, Emma B;
- Raldow, Ann C;
- Nguyen, Callistus M;
- Mumme, Raymond P;
- Netherton, Tucker J;
- Rhee, Dong Joo;
- Gay, Skylar S;
- Yang, Jinzhong;
- Court, Laurence E;
- Cardenas, Carlos E
Manually delineating upper abdominal organs at risk (OARs) is a time-consuming task. To develop a deep-learning-based tool for accurate and robust auto-segmentation of these OARs, we selected forty pancreatic cancer patients with contrast-enhanced breath-hold computed tomography (CT) images. Using the self-configuring nnU-Net framework, we trained a three-dimensional (3D) U-Net ensemble that automatically segments all organ contours concurrently. The tool's performance was quantitatively assessed on a held-out test set of 30 patients, and five radiation oncologists from three institutions rated its contours on a 5-point Likert scale for an additional 75 randomly selected test patients. On contrast-enhanced CT images, the mean (± standard deviation) Dice similarity coefficients between the automatic segmentations and the ground truth were 0.80 ± 0.08 (duodenum), 0.89 ± 0.05 (small bowel), 0.90 ± 0.06 (large bowel), 0.92 ± 0.03 (stomach), 0.96 ± 0.01 (liver), 0.97 ± 0.01 (spleen), 0.96 ± 0.01 (right kidney), and 0.96 ± 0.01 (left kidney). Of the duodenum contours, 89.3% (contrast-enhanced) and 85.3% (non-contrast-enhanced) were scored 3 or above, meaning at most minor edits were required; more than 90% of the contours for all other organs were scored 3 or above. Our tool achieved a high level of clinical acceptability with a small training dataset and provides accurate contours for treatment planning.
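
The quantitative evaluation above relies on the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|), where A and B are the predicted and ground-truth segmentation masks. The following is a minimal NumPy sketch of that metric for 3D binary masks; the function name, the epsilon guard against empty masks, and the toy example are illustrative choices, not taken from the authors' code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary 3D masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    # DSC = 2|A ∩ B| / (|A| + |B|); eps avoids division by zero for empty masks.
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy example: two 8-voxel cuboids sharing 4 voxels -> DSC = 2*4 / (8+8) = 0.5.
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 2:4] = True
print(round(dice_coefficient(a, b), 3))  # 0.5
```

In practice this per-organ computation would be repeated for each of the eight OARs on every test patient to produce the mean ± standard deviation values reported above.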