# Your search: "author:Rabe-Hesketh, Sophia"


## Scholarly Works (36 results)

The Next Generation Science Standards (NGSS Lead States, 2013) define performance targets that will require assessment tasks able to integrate discipline knowledge and cross-cutting ideas with the practices of science. Complex computerized tasks will likely play a large role in assessing these standards, but many questions remain about how best to make use of such tasks within a psychometric framework (National Research Council, 2014). This dissertation explores the use of a more extensive cognitive modeling approach, driven by the extra information contained in action data collected while students interact with complex computerized tasks. Three separate papers are included. In Chapter 2, a mixture IRT model is presented that simultaneously classifies students' understanding of a task while measuring their ability within their class. The model is based on differentially scoring the subtask action data from a complex performance. Simulation studies show that both class membership and class-specific ability can be reasonably estimated given sufficient numbers of items and response alternatives. The model is then applied to empirical data from a food-web task, providing some evidence of feasibility and validity. Chapter 3 explores the potential of using a more complex cognitive model for assessment purposes. Borrowing from the cognitive science domain, student decisions within a strategic task are modeled with a Markov decision process (MDP). Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. In Chapter 4, the MDP measurement model is applied to an educational game to explore the practical benefits and difficulties of using such a model with real-world data. Estimates from the MDP model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
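The abstract does not spell out the model's details. As a rough orientation only, the sketch below combines the two ingredients an MDP-based measurement model needs: optimal action values obtained by value iteration on a toy strategy game, and a softmax response rule whose temperature-like parameter plays the role of a hypothetical ability. The numbers, transition matrices, and parameterization are all illustrative assumptions, not the dissertation's model.

```python
import numpy as np

# Toy 3-state, 2-action strategy game (all values assumed for illustration).
# P[a][s, t] = probability of moving from state s to t under action a;
# R[s, a]    = expected immediate reward for action a in state s.
n_states, gamma = 3, 0.9
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
              [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]])
R = np.array([[0.0, 0.1], [0.0, 0.2], [1.0, 0.5]])

# Value iteration: repeatedly apply the Bellman optimality operator.
V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * np.einsum("ast,t->sa", P, V)  # Q[s, a]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

def action_probs(theta, s):
    # Hypothetical response model: a student with "ability" theta chooses
    # actions by a softmax over optimal Q-values; larger theta means
    # choices closer to the optimal policy.
    z = theta * Q[s]
    e = np.exp(z - z.max())
    return e / e.sum()
```

The softmax link is one simple way to connect observed action data to a latent trait; the dissertation's actual likelihood may differ.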

Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of intractable likelihoods that involve high-dimensional integration over the random effects. The problem is magnified when the random effects have a crossed design, so that the data cannot be reduced to small independent clusters. A variety of methods have been developed for approximating the intractable likelihood functions, but no method yet seems to be both computationally efficient and accurate in a wide range of situations. In this dissertation, I consider new estimation methods and applications of complex GLMMs for measurement and growth. The dissertation consists of three papers:

1. Variational maximization-maximization (MM) algorithm,
2. Monte Carlo local likelihood (MCLL) estimation, and
3. Autoregressive item response theory (IRT) growth model for longitudinal item analysis.

In the first and second papers, I develop two ML methods for estimating GLMMs with crossed random effects. The variational MM algorithm is a modified expectation-maximization (EM) algorithm where a variational density is introduced in the expectation (E) step to approximate the true posterior density of the random effects given the data. The E-step is replaced by another maximization step that minimizes the Kullback-Leibler (KL) divergence between the posterior and the variational density, or equivalently, maximizes the lower bound of the log-likelihood with respect to the variational distribution. The MCLL algorithm uses the posterior samples of model parameters obtained from Markov chain Monte Carlo (MCMC) for likelihood inference. The posterior density is estimated by local likelihood density estimation, and the likelihood function is approximated up to a constant by the local likelihood density estimate of the posterior divided by the prior. The performance of these new algorithms is evaluated in simulation and empirical studies and compared with other ML and Bayesian estimators. In the third paper, a new autoregressive IRT growth model is proposed to take into account serial correlations among responses to the same items over time. The consequences of ignoring serial dependence and the initial conditions problem are investigated using simulations. The new model is applied to longitudinal data on Korean students' self-esteem.
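The MCLL identity described above — the likelihood equals, up to a constant, the posterior density divided by the prior — can be checked on a toy conjugate model where the exact likelihood is available. The normal model, prior, and sample sizes below are illustrative assumptions; the dissertation applies the idea to GLMMs, where no exact likelihood exists.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

# Toy check of L(theta) ∝ posterior(theta) / prior(theta).
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=50)   # data; sigma = 1 known
prior = norm(0.0, 10.0)                       # prior on the mean theta

# Conjugate posterior: theta | y ~ N(m, s2), so we can draw "MCMC" samples
# directly instead of running a sampler.
s2 = 1.0 / (1.0 / 10.0**2 + len(y) / 1.0**2)
m = s2 * y.sum()
post_draws = rng.normal(m, np.sqrt(s2), size=20000)

kde = gaussian_kde(post_draws)                # density estimate of the posterior

def log_lik_hat(theta):
    # MCLL approximation: log-likelihood up to an additive constant
    return np.log(kde(theta)) - prior.logpdf(theta)

def log_lik_exact(theta):
    return norm(theta, 1.0).logpdf(y[:, None]).sum(axis=0)

grid = np.linspace(m - 2 * np.sqrt(s2), m + 2 * np.sqrt(s2), 5)
diff = log_lik_hat(grid) - log_lik_exact(grid)
# diff should be (nearly) the same constant at every grid point,
# namely minus the log marginal likelihood.
```

Here the density estimate is a plain kernel density rather than the local likelihood density estimator the paper develops, but the proportionality being exploited is the same.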

To answer their research questions, quantitative education researchers often make simplifying assumptions about how their data were generated. These assumptions help to reduce the complexity of the problem and allow the researcher to describe the data using a simpler, and oftentimes more interpretable, statistical model. However, making such assumptions when they do not hold can lead to biased estimates and misleading answers. While the standard sets of assumptions associated with commonly used statistical models are usually sufficient in a wide range of contexts, it is always beneficial for education researchers to understand what these assumptions are, when they are reasonable, and how to modify them if necessary.

This dissertation focuses on three of the most common models used in quantitative education research — viz. parametric models such as Linear Models (LMs), Item Response Theory (IRT) models, and Random-Intercept Models (RIMs) — discusses the standard sets of assumptions that accompany these models, and then describes related models with less stringent assumptions. In each of the following three chapters, we either explicitly unpack existing models that are useful but still uncommon in the field of education research, or propose novel models and/or estimation strategies for such models.

We begin in Chapter 1 with a common parametric model known as the Gaussian LM, and use it as a scaffold to better understand semiparametric models and their estimation. We first review how the coefficients of the Gaussian LM are usually estimated using Maximum Likelihood (ML) or Least-Squares (LS). We then introduce the notion of an $m$-estimator as well as that of a Regular Asymptotically Linear estimator, and show how they relate to the ML estimator. In particular, we introduce the notion of influence functions/curves and discuss their geometry together with concepts such as Hilbert spaces and tangent spaces. We then demonstrate, concretely, how to derive the so-called efficient influence function under the Gaussian LM, and show that it is precisely the influence function of the ML and (Ordinary) LS estimators. This shows that the ML estimator (at least under the Gaussian LM) is efficient. Building on this foundation, we move beyond the Gaussian LM by relaxing both the assumption that the residuals are normally distributed and the assumption that they have a constant variance, and define this as the Heteroskedastic Linear Model. Unlike the Gaussian LM, this is a semiparametric model. Where possible, we make use of intuition and analogous results from the parametric setting to help describe the workflow for obtaining an efficient estimator for the coefficients of the Heteroskedastic Linear Model. In particular, we derive the nuisance tangent space for this semiparametric model, and use it to obtain the efficient influence function for our model. We then show how to use the efficient influence function to obtain an efficient estimator (which happens to be the Weighted LS estimator) from the (Ordinary) LS estimator via a one-step approach as well as an estimating equations approach. We conclude by directing readers to more advanced material, including references on more modern approaches to estimating more general semiparametric models, such as Targeted Maximum Likelihood Estimation.
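A small simulation can make the chapter's punchline concrete: under a heteroskedastic linear model with a known variance function, Weighted Least Squares is the efficient estimator, and its sampling variance is visibly smaller than that of OLS. The data-generating design below (uniform covariate, variance proportional to $x^2$) is an assumed toy setup, not one taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, beta = 200, 500, np.array([1.0, 2.0])

def one_rep():
    # Heteroskedastic linear model: Var(e | x) = x^2 (assumed known here)
    x = rng.uniform(1.0, 3.0, n)
    X = np.column_stack([np.ones(n), x])
    sd = x
    y = X @ beta + rng.normal(0.0, sd)
    # Ordinary LS: consistent but inefficient under heteroskedasticity
    ols = np.linalg.lstsq(X, y, rcond=None)[0]
    # Weighted LS with inverse-variance weights: the efficient estimator
    w = 1.0 / sd**2
    Xw = X * w[:, None]
    wls = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return ols[1], wls[1]          # slope estimates only

est = np.array([one_rep() for _ in range(reps)])
var_ols, var_wls = est.var(axis=0)  # WLS variance should be smaller
```

Both estimators are unbiased for the slope; the efficiency gain is entirely in the spread of the sampling distribution, which is what the efficient influence function calculation predicts.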

In Chapter 2, we focus on a class of measurement models known as Item Response Theory models, which are useful for measuring latent traits of a subject based on the subject's responses to items. We relax the condition that the responses are only a result of the individual's latent trait (and possibly an external rater), and propose a dyadic Item Response Theory (dIRT) model for measuring interactions of pairs of individuals when the responses to items represent the actions (or behaviors, perceptions, etc.) of each individual (actor) made within the context of a dyad formed with another individual (partner). Examples of its use in education include the assessment of collaborative problem solving among students, or the evaluation of intra-departmental dynamics among teachers. The dIRT model generalizes both Item Response Theory models for measurement and the Social Relations Model for dyadic data. Here, the responses of an actor when paired with a partner are modeled as a function of not only the actor's inclination to act and the partner's tendency to elicit that action, but also the unique relationship of the pair, represented by two directional, possibly correlated, interaction latent variables. We discuss generalizations such as accommodating triads or larger groups, but focus on demonstrating the key idea in the dyadic case. We show that estimation may be performed using Markov chain Monte Carlo implemented in \texttt{Stan}, making it straightforward to extend the dIRT model in various ways. Specifically, we show how the basic dIRT model can be extended to accommodate latent regressions, random effects, and distal outcomes. We perform a simulation study that demonstrates that our estimation approach performs well.
In the absence of educational data of this form, we demonstrate the usefulness of our proposed approach using speed-dating data instead, and find new evidence of pairwise interactions between participants, describing a mutual attraction that is inadequately characterized by individual properties alone.
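As a generative sketch of the dyadic structure just described (with an assumed toy parameterization, not the paper's exact specification), the response of actor $i$ toward partner $j$ on item $k$ can be simulated from an actor effect, a partner effect, and a correlated pair of directional interaction effects:

```python
import numpy as np

rng = np.random.default_rng(7)
n_persons, n_items = 30, 5
actor = rng.normal(0, 1.0, n_persons)      # inclination to act
partner = rng.normal(0, 0.5, n_persons)    # tendency to elicit the action
difficulty = rng.normal(0, 1.0, n_items)

# Directional interaction effects gamma[i, j] and gamma[j, i] drawn as a
# correlated pair for each unordered dyad (rho and sd are assumed values).
rho, sd_g = 0.4, 0.6
cov = sd_g**2 * np.array([[1.0, rho], [rho, 1.0]])
gamma = np.zeros((n_persons, n_persons))
for i in range(n_persons):
    for j in range(i + 1, n_persons):
        gamma[i, j], gamma[j, i] = rng.multivariate_normal([0.0, 0.0], cov)

def response_prob(i, j, k):
    # Rasch-style logit with unit discriminations (a simplifying assumption)
    eta = actor[i] + partner[j] + gamma[i, j] - difficulty[k]
    return 1.0 / (1.0 + np.exp(-eta))

# Binary responses for every ordered pair and item (in practice the
# self-pairs i == j would be excluded from the design).
Y = np.array([[[rng.random() < response_prob(i, j, k)
                for k in range(n_items)]
               for j in range(n_persons)]
              for i in range(n_persons)])
```

Fitting this structure by MCMC (e.g. in Stan, as the chapter does) amounts to placing priors on the actor, partner, and interaction variances and the interaction correlation, then recovering them from `Y`.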

Finally, in Chapter 3, we consider the often implicit assumption made when estimating the coefficients of structural Random Intercept Models (RIMs) that covariates at all levels do not co-vary with the random intercepts. A violation of this assumption (called cluster-level endogeneity) leads to inconsistent estimates when using standard estimation procedures. For two-level RIMs with such endogeneity, Hausman and Taylor (HT) devised a consistent multi-step instrumental variable estimator using only internal instruments. We instead approach this problem by explicitly modeling the endogeneity using a Structural Equation Model (SEM). In this chapter, we compare, through simulation, the HT and SEM estimators, and evaluate their asymptotic and finite-sample properties. We show that the SEM approach is also flexible enough to deal with different exchangeability assumptions for the covariates (e.g., whether the correlations between pairs of all units in a cluster are the same) and investigate how these exchangeability assumptions affect finite-sample properties of the HT estimator. For the simulations, we propose a new procedure for generating cluster- and unit-level covariates and random intercepts with a fully flexible covariance structure. We also compare our approach to another common approach, known as Multilevel Matching, using data from the High School and Beyond survey.
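A minimal simulation, under an assumed toy design, illustrates why cluster-level endogeneity matters: when the random intercept is correlated with the cluster mean of a covariate, a naive pooled estimator is inconsistent, while the within (fixed-effects) estimator of the unit-level slope remains consistent. Neither estimator below is the HT or SEM estimator studied in the chapter; the sketch only demonstrates the bias those estimators address.

```python
import numpy as np

rng = np.random.default_rng(3)
J, n_j, beta = 500, 20, 1.0   # clusters, units per cluster, true slope

xbar = rng.normal(0, 1, J)
u = 0.8 * xbar + rng.normal(0, 0.5, J)      # endogenous random intercepts
x = xbar[:, None] + rng.normal(0, 1, (J, n_j))
y = beta * x + u[:, None] + rng.normal(0, 1, (J, n_j))

# Pooled estimator (ignores the endogeneity): biased upward here,
# toward beta + cov(x, u) / var(x) = 1.4 in this design.
xc, yc = x.ravel() - x.mean(), y.ravel() - y.mean()
b_pooled = (xc @ yc) / (xc @ xc)

# Within estimator: demeaning by cluster removes u_j entirely.
xw = x - x.mean(axis=1, keepdims=True)
yw = y - y.mean(axis=1, keepdims=True)
b_within = (xw.ravel() @ yw.ravel()) / (xw.ravel() @ xw.ravel())
```

The within transformation, however, cannot recover coefficients of cluster-level covariates, which is precisely the gap that the HT instrumental-variable construction and the SEM formulation are designed to fill.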