- Kennedy, Chris;
- Chiu, Catherine;
- Chapman, Allyson;
- Gologorskaya, Oksana;
- Farhan, Hassan;
- Han, Mary;
- Hodgson, MacGregor;
- Lazzareschi, Daniel;
- Ashana, Deepshikha;
- Smith, Alexander;
- Espejo, Edie;
- Boscardin, John;
- Pirracchio, Romain;
- Cobert, Julien;
- Lee, Sei
OBJECTIVES: To develop proof-of-concept algorithms using alternative approaches to capture provider sentiment in ICU notes.
DESIGN: Retrospective observational cohort study.
SETTING: The Medical Information Mart for Intensive Care III (MIMIC-III) and the University of California, San Francisco (UCSF) deidentified notes databases.
PATIENTS: Adult (≥18 yr old) patients admitted to the ICU.
MEASUREMENTS AND MAIN RESULTS: We developed two sentiment models: 1) a keyword-based approach using a consensus-based clinical sentiment lexicon comprising 72 positive and 103 negative phrases, including negations, and 2) a keyword-independent deep learning model based on DeBERTa-v3 (Decoding-enhanced Bidirectional Encoder Representations from Transformers with disentangled attention, version 3) trained on clinical sentiment labels. We applied the models to 198,944 notes across 52,997 ICU admissions in the MIMIC-III database. Analyses were replicated on an external sample of patients admitted to a UCSF ICU from 2018 to 2019. We also labeled sentiment in 1,493 note fragments and compared the predictive accuracy of our tools with that of three popular sentiment classifiers. Clinical sentiment terms were found in 99% of patient visits across 88% of notes. Our two sentiment tools were substantially more predictive of labeled sentiment (Spearman correlations of 0.62-0.84, p values < 0.00001) than the general-language algorithms (0.28-0.46).
CONCLUSION: Our exploratory healthcare-specific sentiment models detect positivity and negativity in clinical notes more accurately than general sentiment tools not designed for clinical use.
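As a rough illustration of the keyword-based approach described in the abstract, the sketch below scores a note fragment against a small lexicon with a simple preceding-token negation check. The phrases, negation cues, scoring rule, and window size are hypothetical placeholders; the study's actual consensus lexicon of 72 positive and 103 negative phrases is not reproduced here.

```python
"""Minimal sketch of a keyword-based clinical sentiment scorer (illustrative only)."""
import re

# Hypothetical lexicon fragments standing in for the paper's consensus-based lexicon.
POSITIVE_PHRASES = ["improving", "stable", "tolerating well", "good prognosis"]
NEGATIVE_PHRASES = ["deteriorating", "poor prognosis", "unresponsive", "grave"]
NEGATION_CUES = ["no", "not", "denies", "without"]


def score_note(text: str, window: int = 3) -> int:
    """Return a crude sentiment score: +1 per positive hit, -1 per negative hit,
    with polarity flipped when a negation cue appears in the preceding `window` tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    joined = " ".join(tokens)
    score = 0
    for phrases, polarity in ((POSITIVE_PHRASES, 1), (NEGATIVE_PHRASES, -1)):
        for phrase in phrases:
            for match in re.finditer(r"\b" + re.escape(phrase) + r"\b", joined):
                # Look at the few tokens immediately before the matched phrase.
                prefix_tokens = joined[:match.start()].split()[-window:]
                negated = any(tok in NEGATION_CUES for tok in prefix_tokens)
                score += -polarity if negated else polarity
    return score


if __name__ == "__main__":
    print(score_note("Patient is not improving; prognosis remains grave."))   # negative score
    print(score_note("Tolerating well, stable overnight, good prognosis."))   # positive score
```

Under the same caveat, the keyword-independent counterpart would amount to fine-tuning a standard DeBERTa-v3 sequence classifier on the labeled note fragments, and the reported comparison with general-purpose sentiment tools could be computed as an ordinary Spearman correlation between predicted and labeled sentiment.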