On Asymptotic Robustness of NT Methods with Missing Data
The literature on asymptotic robustness of normal theory (NT) methods outlines conditions under which the NT estimator remains asymptotically efficient and the NT test statistic retains its chi-square distribution even when the data are nonnormal. These conditions have been stated both abstractly and in terms of properties of specific models. This research discusses issues that arise in extending asymptotic robustness theory to the direct maximum likelihood (ML) estimator and the associated test statistic when data are missing completely at random (MCAR). It is shown that the same abstract condition required for robustness with complete data is also required with incomplete data, whereas properties of specific models (such as mutual independence of the errors and their independence of the factors in a CFA model) no longer ensure robustness when data are incomplete. The resulting lack of robustness is illustrated both mathematically and empirically via a simulation study. The violation becomes more severe when the data are highly nonnormal and when a higher proportion of data is missing.
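The direct ML estimator discussed above maximizes a casewise (full-information) log-likelihood in which each observation contributes the normal log-density of its observed variables only, using the corresponding subvector of the mean and submatrix of the covariance matrix. A minimal NumPy sketch of that casewise likelihood, for orientation only (the function name and NaN coding of missing values are illustrative conventions, not from the paper):

```python
import numpy as np

def fiml_loglik(data, mu, Sigma):
    """Casewise (direct ML) log-likelihood under multivariate normality.

    Each case contributes the normal log-density of its observed
    variables only, using the matching subvector of mu and submatrix
    of Sigma. Missing entries are coded as np.nan.
    """
    ll = 0.0
    for row in np.asarray(data, dtype=float):
        obs = ~np.isnan(row)          # observed-variable pattern for this case
        if not obs.any():
            continue                  # a fully missing case contributes nothing
        x, m = row[obs], mu[obs]
        S = Sigma[np.ix_(obs, obs)]   # submatrix for the observed variables
        _, logdet = np.linalg.slogdet(S)
        d = x - m
        ll += -0.5 * (obs.sum() * np.log(2 * np.pi)
                      + logdet
                      + d @ np.linalg.solve(S, d))
    return ll
```

Maximizing this over a structured mean and covariance, mu(theta) and Sigma(theta), yields the direct ML estimate, and the NT test statistic is the sample-size-scaled likelihood ratio against the saturated model; with no missing data the expression reduces to the ordinary complete-data multivariate normal log-likelihood.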