Students come to science classrooms with ideas informed by their prior instruction and everyday observations. Following constructivist pedagogy, assessments that encourage students to elaborate their ideas, distinguish among them, and link the most promising ones can capture students' potential and help teachers plan their lessons. In this investigation, we study an assessment that engages students in a dialog to refine their responses to a Knowledge Integration (KI) question. Our Research Practice Partnership (RPP) initially trained a Natural Language Processing (NLP) idea detection model on 1218 student responses from 5 schools and identified 13 student ideas. The original model achieved an overall micro-averaged F-score of 0.7634. After classroom testing, three expert RPP teachers, each with more than 10 years of experience, reviewed the classroom data and expanded the model, adding six ideas, including two that they described as precursor ideas because they foreshadowed more sophisticated reasoning. We trained the idea detection model on these 19 ideas using a dataset from 1206 students taught by 13 teachers across 8 public schools. The updated model had a somewhat lower overall micro-averaged F-score of 0.7297. The two precursor ideas were among the top four detected ideas. With the updated model, the assessment guided students to express significantly more ideas, and a regression analysis showed that the updated model was associated with greater KI score gains. Expanding the model thus created an assessment that motivated students to express more ideas and achieve higher KI scores, while providing teachers with deeper insights into their students' understanding of science.
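
To make the reported metric concrete, the sketch below shows how a micro-averaged F-score can be computed for a multi-label idea detection task like the one described, where each student response may contain several of the detectable ideas. This is not the authors' implementation: the function name and the toy label matrices are hypothetical, and the example uses only 4 of the idea labels for brevity.

```python
# Minimal sketch of micro-averaged F1 for multi-label idea detection
# (hypothetical code, not the study's implementation). Micro-averaging
# pools true positives, false positives, and false negatives across all
# idea labels before computing a single precision, recall, and F-score.

def micro_f1(y_true, y_pred):
    """y_true, y_pred: lists of binary label vectors, one per student
    response, each of length n_labels (e.g., 13 or 19 detectable ideas)."""
    tp = fp = fn = 0
    for true_row, pred_row in zip(y_true, y_pred):
        for t, p in zip(true_row, pred_row):
            tp += t and p              # idea present and detected
            fp += (not t) and p        # idea detected but absent
            fn += t and (not p)        # idea present but missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return (2 * precision * recall / (precision + recall)
            if (precision + recall) else 0.0)

# Hypothetical example: 3 responses scored against 4 idea labels.
y_true = [[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]]
y_pred = [[1, 0, 0, 0], [0, 1, 1, 0], [1, 1, 0, 1]]
print(round(micro_f1(y_true, y_pred), 4))  # prints 0.8333
```

Pooling counts across labels before computing the score means that frequent ideas weigh more heavily than rare ones, which is one common reason to report a micro-averaged rather than macro-averaged F-score for unevenly distributed idea labels.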