Recognizing Gaze-Motor Behavioral Patterns in Manual Grinding Tasks
- Author(s): Bales, G.; Das, J.; Linke, B.; Kong, Z.; et al.
- Published Web Location: https://doi.org/10.1016/j.promfg.2016.08.011
This paper reports our progress in developing techniques for “parsing” raw gaze and force data from manual grinding tasks into a principled model. A grinding task, though simple, requires the practitioner to combine elements from the large repertoire of her skillset. Based on the joint gaze and force data collected from a series of experiments, and by extending existing scanpath methods, we develop a visualization method called the Gaze-Motor Space-Time Cube (GMSTC), which helps us gain insight into the joint gaze-motor routines present in complex manual tasks. For instance, there exists a strong correlation between the spectra of a subject's fixation and force distributions; such insight would be hard to extract by examining either the gaze or the force data separately. Furthermore, by comparing data obtained from operators with different levels of skill, we are able to quantitatively describe characteristics of human manual skill. For instance, we find that an experienced subject exhibits longer fixation durations and smaller fixation variations than an intermediate one. A detailed understanding of gaze-motor behavior broadens our knowledge of how a manual task is executed. Our results help to provide this extra insight, and have implications for the way in which knowledge and manual expertise are transferred from one generation of practitioners to the next.
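The abstract does not specify how the spectral correlation between fixation and force distributions is computed. A minimal, purely illustrative sketch of one plausible approach is shown below: compare the magnitude spectra of two uniformly sampled time series (here synthetic stand-ins for fixation and force signals, not the paper's experimental data) via a Pearson correlation. The function name `spectral_correlation` and the 2 Hz shared rhythm are assumptions for illustration only.

```python
import numpy as np

def spectral_correlation(signal_a, signal_b):
    """Pearson correlation between the magnitude spectra of two
    equal-length, uniformly sampled signals (illustrative sketch)."""
    # Remove the DC offset so the zero-frequency bin does not dominate.
    spec_a = np.abs(np.fft.rfft(signal_a - np.mean(signal_a)))
    spec_b = np.abs(np.fft.rfft(signal_b - np.mean(signal_b)))
    return np.corrcoef(spec_a, spec_b)[0, 1]

# Synthetic stand-ins for fixation-duration and grinding-force series,
# sharing a common 2 Hz rhythm plus independent noise (no real data).
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
fixation = np.sin(2 * np.pi * 2 * t) + 0.3 * rng.standard_normal(t.size)
force = 0.8 * np.sin(2 * np.pi * 2 * t + 0.5) + 0.3 * rng.standard_normal(t.size)

r = spectral_correlation(fixation, force)
```

Because both synthetic signals share a dominant frequency component, their magnitude spectra peak at the same bin and `r` comes out strongly positive, mirroring the kind of gaze-force coupling the paper reports.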