# Your search: "author:Steigerwald, Douglas G"

## Filters Applied

## Type of Work

Article (8) Book (0) Theses (2) Multimedia (0)

## Peer Review

Peer-reviewed only (3)

## Supplemental Material

Video (0) Audio (0) Images (0) Zip (0) Other files (1)

## Campus

UC Berkeley (0) UC Davis (0) UC Irvine (0) UCLA (0) UC Merced (0) UC Riverside (0) UC San Diego (0) UCSF (0) UC Santa Barbara (10) UC Santa Cruz (0) UC Office of the President (0) Lawrence Berkeley National Laboratory (0) UC Agriculture & Natural Resources (0)

## Department

Department of Economics (8)

## Discipline

Social and Behavioral Sciences (2)

## Scholarly Works (10 results)

An adaptive estimator is an efficient estimator for a model that is only partially specified.

We study the problem of obtaining accurately sized test statistics in finite samples for linear regression models where the error dependence is of unknown form. With an unknown dependence structure there is traditionally a trade-off between the maximum lag over which the correlation is estimated (the bandwidth) and the decision to introduce conditional heteroskedasticity. In consequence, the correlation at far lags is generally omitted and the resultant inflation of the empirical size of test statistics has long been recognized. To allow for correlation at far lags we study test statistics constructed under the possibly misspecified assumption of conditional homoskedasticity. To improve the accuracy of the test statistics, we employ the second-order asymptotic refinement in Rothenberg (1988) to determine critical values. We find substantial size improvements resulting from the second-order theory across a wide range of specifications, including substantial conditional heteroskedasticity. We also find that the size gains result in only moderate increases in the length of the associated confidence interval, which yields an increase in size-adjusted power. Finally, we note that the proposed test statistics do not require that the researcher specify the bandwidth or the kernel.
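The idea of replacing kernel-and-bandwidth variance estimation with a parametric error model can be illustrated with a simplified sketch. The code below is not the second-order-corrected statistic of the paper (the Rothenberg refinement is omitted); it only shows how a t-statistic built under conditional homoskedasticity with AR(1) errors needs no bandwidth or kernel choice. The function name and the AR(1) specification are illustrative assumptions.

```python
import numpy as np

def ar1_t_stat(y, x):
    """Illustrative slope t-statistic for y = a + b*x + e when the errors
    follow an AR(1) process and conditional homoskedasticity is assumed.
    The error covariance is built from the fitted AR(1) coefficient, so no
    bandwidth or kernel choice is required.  A simplified sketch, not the
    second-order-refined statistic described in the abstract."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    rho = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])      # AR(1) coefficient of residuals
    sigma2 = np.mean((e[1:] - rho * e[:-1]) ** 2)   # innovation variance
    # AR(1) error covariance: Omega[i, j] = sigma2 * rho**|i-j| / (1 - rho**2)
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    omega = sigma2 * rho ** lags / (1.0 - rho ** 2)
    xtx_inv = np.linalg.inv(X.T @ X)
    cov = xtx_inv @ X.T @ omega @ X @ xtx_inv       # sandwich with parametric middle
    return beta[1] / np.sqrt(cov[1, 1])

# simulate a regression with AR(1) errors (rho = 0.6) and a zero true slope
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal()
t_null = ar1_t_stat(e, x)       # moderate in magnitude under the null
```

Because the dependence at all lags is implied by the single AR(1) parameter, correlation at far lags is never truncated away, which is the trade-off the abstract highlights.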

We study the possible impact of daylight-saving time adjustments on stock returns. Previous work reveals that average returns tend to decline following an adjustment. As averages are sensitive to outliers, more recent work has focused on the entire distribution of returns and found little impact following adjustments. Unfortunately, the general nature of the alternative hypothesis reduces the power of the distribution test to detect an effect of adjustments on the location of the distribution. We construct robust tests that are designed to have power to detect a time-adjustment effect on the location of returns. We also develop a more novel test of exponential tilting that is designed to accommodate possible heterogeneity in the return distribution over time. When we apply these tests to S&P 500 stock returns, we are unable to rigorously detect a time-adjustment effect on stock returns.

Microstructure noise contaminates high-frequency estimates of asset price volatility. Recent work has determined a preferred sampling frequency under the assumption that the properties of noise are constant. Given the sampling frequency, the high-frequency observations are given equal weight. While convenient, constant weights are not necessarily efficient. We use the Kalman filter to derive more efficient weights, for any given sampling frequency. We demonstrate the efficacy of the procedure through an extensive simulation exercise, showing that our filter compares favorably to more traditional methods.
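The weighting idea can be sketched with a scalar Kalman filter for a local-level model: the efficient log-price follows a random walk and the observed price adds microstructure noise. The Kalman gain then acts as a data-driven, non-constant weight on each observation. This is a minimal sketch under those assumptions, not the paper's exact procedure; the function name and parameter values are illustrative.

```python
import numpy as np

def kalman_filter_prices(y, q, r, x0=None, p0=1.0):
    """Scalar Kalman filter for a local-level model:
    latent efficient log-price  x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
    observed log-price          y_t = x_t + u_t,      u_t ~ N(0, r)  (noise).
    Returns filtered estimates of x and the Kalman gains, which serve as
    non-constant weights on the high-frequency observations."""
    n = len(y)
    x = np.empty(n)
    gains = np.empty(n)
    x_pred = y[0] if x0 is None else x0
    p_pred = p0
    for t in range(n):
        k = p_pred / (p_pred + r)            # Kalman gain = weight on observation t
        x[t] = x_pred + k * (y[t] - x_pred)  # update with the new observation
        gains[t] = k
        p = (1.0 - k) * p_pred
        x_pred = x[t]                        # random-walk prediction step
        p_pred = p + q
    return x, gains

# simulate a noisy high-frequency price path and filter it
rng = np.random.default_rng(0)
n = 2000
eff = np.cumsum(rng.normal(0, 0.01, n))     # efficient price (random walk)
obs = eff + rng.normal(0, 0.05, n)          # add microstructure noise
xf, gains = kalman_filter_prices(obs, q=0.01 ** 2, r=0.05 ** 2)
```

With these settings the gain settles at an interior steady state, so recent observations are down-weighted rather than equally weighted, and the filtered path tracks the efficient price more closely than the raw observations do.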

In Cho and White (2007) "Testing for Regime Switching" the authors obtain the asymptotic null distribution of a quasi-likelihood ratio (QLR) statistic. The statistic is designed to test the null hypothesis of one regime against the alternative of Markov switching between two regimes. Likelihood ratio statistics are used because the test involves nuisance parameters that are not identified under the null hypothesis, together with other nonstandard features. Cho and White focus on a quasi-likelihood, which ignores certain serial correlation properties but allows for a tractable factorization of the likelihood. While the majority of their paper focuses on asymptotic behavior under the null hypothesis, Theorem 1(b) states that the quasi-maximum likelihood estimator (QMLE) is consistent under the alternative hypothesis. Consistency of the QMLE requires that the expected quasi-log-likelihood attain a global maximum at the population parameter values. This requirement holds for some Markov regime-switching processes but, as we show below, not for an autoregressive process as analyzed in Cho and White.

We study the effect of privately informed traders on measured high-frequency price changes and trades in asset markets. We use a standard market microstructure framework where exogenous news is captured by signals that informed agents receive. We show that the entry and exit of informed traders following the arrival of news accounts for high-frequency serial correlation in squared price changes (stochastic volatility) and in trades. Because the bid-ask spread of the market specialist tends to shrink as individuals trade and reveal their information, the model also accounts for the empirical observation that high-frequency serial correlation is more pronounced in trades than in squared price changes. A calibration test of the model shows that the features of the market microstructure, without serially correlated news, account qualitatively for the serial correlation in the data, but predict less persistence than is present in the data.

This dissertation consists of three works that consider the estimation of economic variables when a spatial component exists. Each essay utilizes different techniques and methodology for working with data that can be grouped into spatial clusters.

In the first essay, I estimate the impact of air pollution events caused by wildfire smoke on respiratory and circulatory health outcomes. Combining California health data with NOAA wildfire smoke data, I estimate the impact of exposure to wildfire smoke on health outcomes for all individuals in California. Using inpatient data, I construct a measure of exposure to wildfire smoke prior to the hospital visit, which allows for the identification of the impact of wildfire smoke exposure on different health outcomes. I find that an additional day of smoke exposure in a month leads, on average, to 11.38 additional hospital admissions for respiratory diagnoses and an additional 3 hospital admissions for circulatory diagnoses. This translates to an annual cost of wildfire smoke exposure in California, due to respiratory and circulatory hospital admissions, of $192,316,498.

The second essay, joint with Travis Cyronek, asks the question: how does the sharing economy affect traditional lodging markets? The advent of platforms such as Airbnb in 2008 has introduced a new channel of market interaction between those with space and those who seek it, allowing transactions of lodging services that might otherwise go underutilized. This essay develops a framework for thinking about how peer-to-peer transactions interact with traditional rental markets, and what this means for property managers and tenants. Specifically, we examine how the introduction of sharing platforms (e.g., Airbnb) affects the listing decisions of vacant-property managers and the lodging choices of dwelling seekers. The model features landlords who choose where to list vacant properties and renters who search for lodging. Renters can be either short-term or long-term, referring to how long they wish to occupy the property. Sharing platforms give landlords the option of accessing short-term renters who would otherwise occupy hotels, affecting traditional long-term renters. We find that Airbnbs decrease hotel prices by about $24 while increasing average rents by $39 per month.

In the third essay, joint with Douglas G. Steigerwald, we study the behavior of cluster-robust test statistics in models with instrumental variables when cluster heterogeneity is present. Inference in a large number of papers using two-stage least squares regressions published in American Economic Association journals is driven by the presence of one or two influential clusters. We link a measure of cluster heterogeneity, the feasible effective number of clusters, to measures of influence. Using simulations, we demonstrate that high levels of cluster heterogeneity lead to coverage of less than 95% for 95% confidence intervals when using instrumental variables with panel data or with data that can be grouped into clusters. Using data from papers with two-stage least squares regressions published in American Economic Association journals, we show that the feasible effective number of clusters can be used as a pre-test for the sensitivity of two-stage least squares inference to influential clusters. We further show that when the feasible effective number of clusters is small, even when the number of clusters is large, the distribution of the test statistic is non-normal. When this severe cluster heterogeneity is present, the restricted wild cluster bootstrap can be used to return coverage to the appropriate level.
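The effective number of clusters can be sketched as G* = G / (1 + Γ), where Γ is the squared coefficient of variation of cluster-specific terms γ_g. The sketch below proxies γ_g by cluster size n_g, which is exact only in a stylized design with identical regressors across clusters; the feasible version studied in the essay builds γ_g from the actual data, so treat this as an illustration of the heterogeneity penalty, not the essay's estimator.

```python
import numpy as np

def effective_number_of_clusters(cluster_sizes):
    """Simplified effective number of clusters: G* = G / (1 + Gamma), with
    Gamma the squared coefficient of variation of the cluster terms gamma_g.
    Here gamma_g is proxied by cluster size n_g (a stylized assumption)."""
    n = np.asarray(cluster_sizes, dtype=float)
    G = len(n)
    gamma_bar = n.mean()
    Gamma = np.mean((n - gamma_bar) ** 2) / gamma_bar ** 2
    return G / (1.0 + Gamma)

# equal-sized clusters: no heterogeneity, so G* equals G
print(effective_number_of_clusters([50] * 20))           # 20.0
# one dominant cluster: G* falls far below G = 20
print(effective_number_of_clusters([1000] + [50] * 19))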

Life Cycle Assessment (LCA) seeks to quantify the environmental impacts of product systems and services from “cradle-to-grave”, or from raw material extraction through the end-of-life. The ideal outcome of this exercise is the identification of actions that can be taken by firms and policymakers to reduce global environmental damage. LCA is quite young relative to the classical academic disciplines, and faces significant challenges in establishing its relevance for decision-making. Mainstream LCA practice seeks to account for environmental damage using a class of frameworks termed Attributional LCA (ALCA). This typically involves the use of normative, technology-focused rules to allocate inputs, outputs and emissions over product systems that interact with each other. The application of such rules can sever cause-effect relationships that strongly influence the environmental consequences of changes to industrial systems. This thesis develops and demonstrates new methodologies pertaining to Consequential LCA (CLCA), which has not been standardized and fully adopted in mainstream practice. In CLCA, I seek to assess the net environmental outcomes of decisions, rather than attribute environmental impacts using a set of normative rules. This leads to an inevitable focus on social dynamics and causal inference, which are scarcely addressed in the LCA field.

The first chapter is an extensive literature review on the history and current state of methods for characterizing the environmental consequences of actions in LCA. I first discuss the major existing differences between ALCA and CLCA in the literature. Then, I provide a detailed review of methods that have been proposed to evolve the structure of CLCA models towards a robust representation of cause-effect relationships. I recommend the use of an iterative framework between structural CLCA models and causal inference analysis, a class of methods largely absent from the LCA literature. The remainder of my dissertation applies this iterative framework and focuses on the integration of LCA with the modelling and quantification of social mechanisms. In Chapter 2, I build a CLCA model of automotive material substitution including parameterized market forces that drive the environmental impacts of changes in scrap generation and recycling activity. I show that market forces contribute significantly to uncertainty in modelling the greenhouse gas consequences of automotive material substitution using local and global sensitivity analysis. I also find that in 16% of trials of a Monte Carlo simulation, substituting aluminum for steel in a fleet of vehicles does not constitute a net decrease in greenhouse gas emissions. This finding contrasts with previous studies on the topic, and is influenced by the incorporation of market forces into the model. Chapter 3 explores the environmental consequences of recycling as an example of these market forces in greater depth. I generalize this concept as a question of the cause-effect relationship between recycling and production of materials from primary resources. For the first time in the industrial ecology literature, I propose the use of difference-in-differences (DID), a quasi-experimental statistical method that classifies observational data into treatment and control groups, to test hypotheses about this key relationship. 
I simulate the application of the DID estimator to the question of whether or not increases in the use of recycled aluminum in the automotive industry would lead to an equivalent reduction in the use of primary aluminum. Finally, in Chapter 4, I exploit the fact that water is used, recycled, and reused in localized units to create treatment and control groups of recycled water users. I design an empirical DID study that explores the question of whether or not increases in wastewater recycling lead to equivalent reductions in potable water usage. I find that in a large urban water district in California, the wastewater recycling program has displaced over 25 million cubic feet of potable water production with a displacement rate of 93.4%. Chapter 4 is the first empirical application of quasi-experimental methods to quantifying the relationship between recycling and primary production, and the first attempt to test hypotheses regarding the potable water savings achieved from wastewater recycling.
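The DID logic used throughout Chapters 3 and 4 can be sketched in its simplest two-group, two-period form: difference out the common trend shared with the control group, leaving the treatment effect under the parallel-trends assumption. The data below are entirely hypothetical (arbitrary units, made-up trend and effect), meant only to show the mechanics of the estimator.

```python
import numpy as np

def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Two-group, two-period difference-in-differences:
    (treated post - treated pre) - (control post - control pre).
    Under parallel trends this recovers the treatment effect."""
    return (np.mean(y_treat_post) - np.mean(y_treat_pre)) \
         - (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre))

# stylized example: water use (arbitrary units) before/after a recycling
# program; the common downward trend is differenced out
rng = np.random.default_rng(2)
trend, effect = -5.0, -10.0
ctrl_pre = 100 + rng.normal(0, 1, 200)
ctrl_post = 100 + trend + rng.normal(0, 1, 200)
treat_pre = 120 + rng.normal(0, 1, 200)
treat_post = 120 + trend + effect + rng.normal(0, 1, 200)
print(did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post))  # close to -10
```

Note that the raw post-minus-pre change for the treated group (-15) overstates the effect; subtracting the control group's change (-5) isolates the -10 attributable to treatment.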

- 1 supplemental file