eScholarship
Open Access Publications from the University of California

Quantitative Analysis of Human Facial Expression: Moving Towards The Creation of a Virtual Patient

  • Author(s): Lee, Sungah
  • Advisor(s): Moon, Won
Abstract

Master of Science in Oral Biology, University of California, Los Angeles, 2017

Background: The recent introduction of three-dimensional facial imaging gives us access to more information than ever before, creating the potential for more accurate facial evaluation. In orthodontic diagnosis and treatment planning, facial soft tissue analysis has been broadly recognized as a critical factor in successful orthodontic treatment outcomes. Even though facial soft tissue is by nature dynamic, and facial expressions are the dynamic movement of these soft tissues, 2D static photographs have been used for facial analysis in orthodontics. Our overall objective is to develop an innovative method that quantifies the dynamic movements of soft tissue in 3D during facial expressions, which could advance not only the field of orthodontics but also other health care fields.

Methods: A dynamic system to quantify 3D facial soft tissue movement was developed through investigation of physics-based and mathematical modeling. 3dMD facial images of 29 participants were collected at five time points (T1, T2, T3, T4, T5) during smiling, from the start of each facial expression to its end. Smiling patterns were classified, and only homogeneous samples were included in the final analysis. The 3D meshes were processed for vertex correspondence, and 28 landmarks were tracked. Data analyses were performed in MATLAB. Average smiling faces at the five time points were generated. Average displacement vectors between consecutive time points were computed, producing the average smiling movement trajectory. Statistical p values for all landmarks in three dimensions were computed to establish the significance of each displacement. Color-coded displacement vector p maps were generated for the movement of each landmark over the five time points.
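The analysis described above was performed in MATLAB; the core computation (per-interval displacement vectors averaged across subjects, with a per-landmark significance test) can be sketched in Python/NumPy under the assumption that landmark coordinates are arranged as a subjects × time points × landmarks × 3 array. The data below are synthetic placeholders, not the study's data, and the one-sample t-test is an assumed analogue of the thesis's per-landmark p values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in: 10 subjects, 5 time points (T1..T5),
# 28 landmarks, (x, y, z) coordinates in mm.
coords = rng.normal(size=(10, 5, 28, 3)).cumsum(axis=1)

# Per-subject displacement vectors between consecutive time points:
# shape (subjects, 4 intervals, landmarks, 3).
disp = np.diff(coords, axis=1)

# Average displacement vector across subjects, i.e. one step of the
# average smiling movement trajectory: shape (4, 28, 3).
mean_disp = disp.mean(axis=0)

# One-sample t-test against zero displacement, per interval,
# landmark, and axis: t and p both have shape (4, 28, 3).
t, p = stats.ttest_1samp(disp, popmean=0.0, axis=0)

# Magnitude of the average displacement per landmark and interval,
# from which a maximally displaced landmark could be identified.
mag = np.linalg.norm(mean_disp, axis=-1)   # shape (4, 28)
peak = mag.max()
```

In this layout, `np.diff` along the time axis yields one displacement vector per T(i) to T(i+1) interval, so averaging over the subject axis directly produces the trajectory segments described above.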

Results: 3D meshes of 10 participants at the five time points (T1, T2, T3, T4, T5) during smiling were included in the final study. The 28 landmarks were quantitatively tracked and analyzed. Average smiling faces at the five time points, average displacement vectors between consecutive time points, and statistical p values for all landmarks in 3D were generated, along with the average movement trajectory during smiling. The corner of the lip showed the maximum displacement, 6.42 mm (p < 0.01), in the upward and outward directions. Between time points, statistically significant displacements (p < 0.05) were observed at most landmarks of the oral region, but not at landmarks of the nasal, eye, or eyebrow regions.

Conclusion: This is the first study to demonstrate that the dynamic 3D movements of facial expressions can be quantitatively tracked and analyzed, offering an added dimension to the diagnosis and treatment planning of patients. This new approach, which allows us to analyze patients' facial expressions in three dimensions, would shift the diagnostic paradigm currently used in craniofacial analysis from 2D static facial analysis toward dynamic 3D facial expression analysis.
