The current measures of competency for nurse anesthetist recertification are continuing education units earned during each biennial recertification cycle, together with records of current practice and state licensure. Recently, the National Board of Certification and Recertification for Nurse Anesthetists (NBCRNA) adopted new standards that include a written examination every 8 years, an increase of 40 continuing education units per year, and completion of four core competency modules during each four-year recertification cycle. However, little is known about the validity of these competency measures.
The purpose of this study was to determine the relationships among written examination scores, self-assessment scores, and performance scores in a simulated environment. Eighteen nurse anesthetists from three hospitals completed the written examination, the self-assessment, and eight scenarios in the simulation laboratory.
The mean score on the 30-item written examination was 67.22% (SD = 11.42). There were no significant differences in scores among groups of CRNAs employed at different hospitals or of differing age and experience. The mean percentage score across the eight scenarios was 77.28% (SD = 7.35; range, 64.50%-89.00%). The only statistically significant correlation among the three competency measures was a negative correlation between written examination and total performance scores (r = -.407, p = .094). No statistically significant correlations were found between the competency measures and age, years of experience, workplace, or prior exposure to simulation. Self-assessments were completed before and after the multiple-choice examination and the simulation performance tests; scores were lower after the tests but the two sets of self-assessment scores remained correlated (r = .496, p = .036).
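The correlations reported above are Pearson product-moment coefficients. As a minimal sketch of how such a coefficient is computed from paired scores, the following uses entirely hypothetical data (NOT the study's data) with an inverse pattern between exam and simulation scores, mirroring only the direction of the reported r = -.407:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired percentages for five participants (illustrative only):
# written-exam score vs. simulation-performance score.
exam = [60, 75, 80, 55, 70]
sim = [85, 72, 68, 88, 74]

r = pearson_r(exam, sim)  # negative, since higher exam scores pair with lower sim scores
```

In practice a statistics package (e.g., `scipy.stats.pearsonr`) would also return the two-tailed p-value used to judge significance against the study's alpha level.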
These results highlight the need to examine the relationship between knowledge and performance. Using performance assessments that have been validated and shown to be reliable will help improve practice standards and, in turn, the safety of anesthesia patient care.