# Your search: "author:Anderson, Robert M"

## Filters Applied

## Type of Work

Article (16) Book (0) Theses (6) Multimedia (0)

## Peer Review

Peer-reviewed only (15)

## Supplemental Material

Video (0) Audio (0) Images (0) Zip (0) Other files (0)

## Campus

UC Berkeley (21) UC Davis (0) UC Irvine (0) UCLA (1) UC Merced (0) UC Riverside (0) UC San Diego (0) UCSF (0) UC Santa Barbara (0) UC Santa Cruz (0) UC Office of the President (0) Lawrence Berkeley National Laboratory (0) UC Agriculture & Natural Resources (0)

## Department

Center for Risk Management Research (8) Department of Economics (7)

## Discipline

Social and Behavioral Sciences (11) Business (5)

## Scholarly Works (22 results)

My dissertation explores how tail risk and systematic risk affect various aspects of risk management and asset pricing. My research contributions are in econometric and statistical theory, finance theory, and empirical data analysis. In Chapter 1 I develop the statistical inferential theory for high-frequency factor modeling. In Chapter 2 I apply these methods in an extensive empirical study. In Chapter 3 I analyze the effect of jumps on asset pricing in arbitrage-free markets. Chapter 4 develops a general structural credit risk model with endogenous default and tail risk and analyzes the incentive effects of contingent capital. Chapter 5 derives various valuation models for contingent capital with tail risk.

Chapter 1 develops a statistical theory to estimate an unknown factor structure from financial high-frequency data. I derive a new estimator for the number of factors, as well as consistent and asymptotically mixed-normal estimators of the loadings and factors, under the assumption of a large number of cross-sectional and high-frequency observations. The estimation approach can separate factors for normal "continuous" risk and rare jump risk. The estimators for the loadings and factors are based on principal component analysis of the quadratic covariation matrix. The estimator for the number of factors uses a perturbed eigenvalue ratio statistic. The results are obtained under general conditions that allow for a very rich class of stochastic processes and for serial and cross-sectional correlation in the idiosyncratic components.
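The core of this estimation approach can be sketched in a few lines. Below is a minimal illustration on simulated data: the dimensions, the loading distribution, and the perturbation `g` in the eigenvalue-ratio statistic are hypothetical choices for the sketch, not the dissertation's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 100, 500, 3    # assets, high-frequency increments, true factor count

F = rng.normal(0, 1 / np.sqrt(M), (M, K))     # latent factor increments
L = rng.normal(1, 0.5, (N, K))                # factor loadings
E = rng.normal(0, 0.1 / np.sqrt(M), (M, N))   # idiosyncratic increments
X = F @ L.T + E                               # observed return increments

# Estimated quadratic covariation matrix: sum of outer products of increments
QC = X.T @ X

# Principal component analysis of the quadratic covariation matrix
eigvals = np.linalg.eigvalsh(QC)[::-1]        # eigenvalues, descending

# Perturbed eigenvalue-ratio statistic: the ratio spikes at the true K,
# because factor eigenvalues dominate the idiosyncratic bulk
g = 0.5 * np.median(eigvals)                  # illustrative perturbation
ratios = (eigvals[:-1] + g) / (eigvals[1:] + g)
k_hat = int(np.argmax(ratios[: N // 2])) + 1

print("estimated number of factors:", k_hat)
```

The large gap between the K-th and (K+1)-th eigenvalues of the quadratic covariation matrix is what the perturbed eigenvalue ratio picks up; the corresponding top eigenvectors then span the estimated loading space.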

Chapter 2 is an empirical application of my high-frequency factor estimation techniques. Under a large dimensional approximate factor model for asset returns, I use high-frequency data for the S&P 500 firms to estimate the latent continuous and jump factors. I estimate four very persistent continuous systematic factors for 2007 to 2012 and three from 2003 to 2006. These four continuous factors can be approximated very well by a market, an oil, a finance and an electricity portfolio. The value, size and momentum factors play no significant role in explaining these factors. For the time period 2003 to 2006 the finance factor seems to disappear. There exists only one persistent jump factor, namely a market jump factor. Using implied volatilities from option price data, I analyze the systematic factor structure of the volatilities. There is only one persistent market volatility factor, while during the financial crisis an additional temporary banking volatility factor appears. Based on the estimated factors, I can decompose the leverage effect, i.e. the correlation of the asset return with its volatility, into a systematic and an idiosyncratic component. The negative leverage effect is mainly driven by the systematic component, while it can be non-existent for idiosyncratic risk.

In Chapter 3 I analyze the effect of jumps on asset pricing in arbitrage-free markets and I show that jumps have to come as a surprise in an arbitrage-free market. I model asset prices in the most general sensible form as special semimartingales. This approach allows me to also include jumps in the asset price process. I show that the existence of an equivalent martingale measure, which is essentially equivalent to no-arbitrage, implies that the asset prices cannot exhibit predictable jumps. Hence, in arbitrage-free markets the occurrence and the size of any jump of the asset price cannot be known before it happens. In practical applications it is essentially impossible to distinguish between predictable and unpredictable discontinuities in the price process. The empirical literature has typically assumed as an identification condition that there are no predictable jumps. My result shows that this identification condition follows from the existence of an equivalent martingale measure, and hence essentially comes for free in arbitrage-free markets.

Chapter 4 is joint work with Behzad Nouri, Nan Chen and Paul Glasserman. Contingent capital in the form of debt that converts to equity as a bank approaches financial distress offers a potential solution to the problem of banks that are too big to fail. This chapter studies the design of contingent convertible bonds and their incentive effects in a structural model with endogenous default, debt rollover, and tail risk in the form of downward jumps in asset value. We show that once a firm issues contingent convertibles, the shareholders’ optimal bankruptcy boundary can be at one of two levels: a lower level with a lower default risk or a higher level at which default precedes conversion. An increase in the firm’s total debt load can move the firm from the first regime to the second, a phenomenon we call debt-induced collapse because it is accompanied by a sharp drop in equity value. We show that setting the contractual trigger for conversion sufficiently high avoids this hazard. With this condition in place, we investigate the effect of contingent capital and debt maturity on capital structure, debt overhang, and asset substitution. We also calibrate the model to past data on the largest U.S. bank holding companies to see what impact contingent convertible debt might have had under the conditions of the financial crisis.

Chapter 5 develops and compares different modeling approaches for contingent capital with tail risk, debt rollover and endogenous default. In order to apply contingent convertible capital in practice it is desirable to base the conversion on observable market prices that can constantly adjust to new information in contrast to accounting triggers. I show how to use credit spreads and the risk premium of credit default swaps to construct the conversion trigger and to evaluate the contracts under this specification.

We prove existence of equilibrium in a continuous-time securities market in which the securities are potentially dynamically complete: the number of securities is at least one more than the number of independent sources of uncertainty. We prove that dynamic completeness of the candidate equilibrium price process follows from mild exogenous assumptions on the economic primitives of the model. Our result is universal, rather than generic: dynamic completeness of the candidate equilibrium price process and existence of equilibrium follow from the way information is revealed in a Brownian filtration and from a mild exogenous nondegeneracy condition on the terminal security dividends. The nondegeneracy condition, which requires only finding one point at which a determinant of a Jacobian matrix of dividends is nonzero, is very easy to check. We find that the equilibrium prices, consumptions, and trading strategies are well-behaved functions of the stochastic process describing the evolution of information. We prove that equilibria of discrete approximations converge to equilibria of the continuous-time economy.
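Because the nondegeneracy condition only has to hold at a single point, it can be checked numerically. A minimal sketch, with hypothetical dividend functions of a two-dimensional Brownian state (the functions and the evaluation point are made up for illustration):

```python
import numpy as np

# Hypothetical terminal dividends of three securities as functions of a
# two-dimensional state: a riskless payoff plus two risky ones.
def dividends(x):
    x1, x2 = x
    return np.array([1.0, np.exp(x1), np.exp(x2) + 0.5 * x1])

def jacobian(f, x, h=1e-6):
    """Forward-difference Jacobian of f at x."""
    fx = f(x)
    J = np.zeros((len(fx), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (f(xp) - fx) / h
    return J

# The condition only needs ONE point where a Jacobian minor is nonsingular.
x0 = np.array([0.3, -0.2])
J = jacobian(dividends, x0)   # 3 x 2
minor = J[1:, :]              # rows of the two risky securities
det = np.linalg.det(minor)
print("determinant of risky-dividend Jacobian at x0:", det)
```

Here the determinant equals exp(0.3) * exp(-0.2), which is nonzero, so this hypothetical dividend structure would pass the check at x0.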

We apply mathematical techniques in the context of economic decision making. First, we are interested in understanding the behaviors and beliefs of agents playing economic games in which the underlying action spaces are possibly non-compact and the agents’ payoff functions are possibly discontinuous. Under these circumstances, there is no guarantee of the existence of a Nash equilibrium in randomized strategies. In fact, there are games for which no Nash equilibrium exists. To restore equilibrium we allow each agent access to randomized strategies that are not necessarily countably additive. This has the unfortunate side effect of introducing uncertainty into the players’ payoff functions due to the failure of Fubini’s theorem for finitely additive measures. We introduce two ways of resolving this ambiguity and show that for one we are able to recover a general equilibrium existence result.

Next, we turn to the problem that expected utility theory typically assumes that agents use concave utility functions. This is problematic since it implies that agents are risk averse and, consequently, will not gamble. We speculate that non-concavity may be the result of agents' utility functions arising from solving the knapsack problem, a combinatorial optimization problem. We introduce a class of utility of wealth functions, called knapsack utility functions, which are appropriate for agents who must choose an optimal collection of indivisible goods from a countably infinite collection. We find that these functions are pure jump processes. Moreover, we find that localized regions of convexity, and thus a demand for gambling, are the norm, but that the incentive to gamble is much more pronounced at low wealth levels. We consider an intertemporal version of the problem in which the agent faces a credit constraint. We find that the agent's utility of wealth function closely resembles a knapsack utility function when the agent's saving rate is low.
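A knapsack utility function can be sketched directly. The following uses a finite menu of goods as a stand-in for the countably infinite collection, with hypothetical prices and utilities; integer wealth keeps the dynamic program small.

```python
# Hypothetical menu of indivisible goods: (price, utility)
goods = [(3, 4.0), (5, 7.5), (8, 13.0)]

def knapsack_utility(w):
    """Max total utility of a subset of goods with total price <= w (0/1 knapsack)."""
    best = [0.0] * (w + 1)
    for price, util in goods:
        for b in range(w, price - 1, -1):   # iterate budgets downward: 0/1 recursion
            best[b] = max(best[b], best[b - price] + util)
    return best[w]

U = [knapsack_utility(w) for w in range(0, 17)]
print(U)

# Local convexity: if U at a midpoint lies below the chord between two
# wealth levels, the agent there prefers a fair gamble between them.
w0, w1 = 0, 8
chord_mid = 0.5 * (U[w0] + U[w1])
print("U(4) =", U[4], "chord midpoint =", chord_mid)
```

U is a step (pure jump) function that rises only at wealth levels where a better bundle becomes affordable; since U(4) lies below the chord from U(0) to U(8), an agent at wealth 4 would accept a fair 50/50 gamble between 0 and 8, illustrating the local demand for gambling.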

Finally, we turn our attention to the Black-Scholes model of security price movements. Our goal is to understand the beliefs and incentives of individual agents required for the Black-Scholes model to be self-predicting. We consider a model in which each agent believes that the Black-Scholes model is correct. Each agent observes a private stream of information, which she uses to update her beliefs about future movements of the security price. Each agent is then faced with an optimization problem whose solution tells us her optimal portfolio for any given price (i.e. her demand function). Imposing market clearing conditions then determines a price at each point in time. That is, the agents' prior beliefs about the security price process, along with their private information streams, generate a price process. We may then ask under which conditions the distribution of this process matches the agents' prior belief. We find that this condition is fairly restrictive and imposes significant constraints on the drift of the price process when agents are homogeneous and use utility functions with constant absolute risk aversion or constant relative risk aversion.

Low-risk investing refers to a diverse collection of investment strategies that emphasize low-beta, low-volatility, low idiosyncratic risk, downside protection, or risk parity. Since the 2008 financial crisis, there has been heightened interest in low-risk investing and especially in investment strategies that apply leverage to low-risk portfolios in order to enhance expected returns.

In chapter 1, we examine the well-documented low-beta anomaly. We show that despite the fact that low-beta portfolios had lower volatility than the market portfolio, some low-beta portfolios had higher realized Sharpe ratios (over a 22-year horizon) than the market portfolio. This cannot happen in an efficient market, where long-run return is expected to be earned as a reward for bearing risk, if risk is equated with volatility. We expand the notion of risk to include higher moments of the return distribution and show that excess kurtosis can make low-beta stocks and portfolios riskier than higher-beta stocks and portfolios.
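The point that volatility alone understates risk can be illustrated with two stylized monthly return series (hypothetical numbers, not the chapter's data): the "low-beta" series has lower volatility but hides rare crashes in its fourth moment.

```python
import numpy as np

def sharpe(r):
    """Sharpe ratio of a return series (risk-free rate taken as zero here)."""
    return r.mean() / r.std(ddof=1)

def excess_kurtosis(r):
    """Sample excess kurtosis: fourth standardized moment minus 3."""
    z = (r - r.mean()) / r.std(ddof=0)
    return (z ** 4).mean() - 3.0

# Stylized series: steady small gains punctuated by rare crashes vs.
# a higher-volatility, thin-tailed alternating pattern (63 months each).
low_beta = np.array([0.01] * 60 + [-0.10] * 3)
market = np.array([0.03, -0.02] * 31 + [0.01])

print("vol:     ", market.std(ddof=1), low_beta.std(ddof=1))
print("Sharpe:  ", sharpe(market), sharpe(low_beta))
print("ex. kurt:", excess_kurtosis(market), excess_kurtosis(low_beta))
```

Despite its lower volatility, the crash-prone series has far higher excess kurtosis, so ranking the two by volatility alone misstates their riskiness.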

In chapter 2, we show that the cumulative return to a levered strategy is determined by five elements that fit together in a simple, useful formula. A previously undocumented element is the covariance between leverage and excess return to the fully invested source portfolio underlying the strategy. In an empirical study of volatility-targeting strategies over the 84-year period 1929-2012, this covariance accounted for a reduction in return that substantially diminished the Sharpe ratio in all cases.
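The covariance element can already be seen in a single-period arithmetic identity, E[λ·r] = E[λ]·E[r] + Cov(λ, r), where λ is leverage and r the excess return of the source portfolio. A sketch with a hypothetical volatility-targeting rule follows; this demonstrates only the covariance piece, not the chapter's full five-element formula for cumulative levered return.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 480                                      # months of hypothetical data
src_excess = rng.normal(0.004, 0.04, T)      # source-portfolio excess returns

# Illustrative volatility targeting: leverage moves inversely with a
# trailing average of absolute returns, so leverage and returns co-move.
trailing = np.convolve(np.abs(src_excess), np.ones(12) / 12, mode="same")
leverage = 0.10 / trailing

lev_excess = leverage * src_excess           # levered excess return each month

# Decomposition of the mean levered excess return:
#   E[lambda * r] = E[lambda] * E[r] + Cov(lambda, r)
mean_term = leverage.mean() * src_excess.mean()
cov_term = np.cov(leverage, src_excess, ddof=0)[0, 1]
print("mean levered excess:         ", lev_excess.mean())
print("mean term + covariance term: ", mean_term + cov_term)
```

The two printed quantities agree exactly: whenever leverage is not constant, the covariance term is part of the realized levered return, which is the channel through which volatility targeting reduced returns in the empirical study.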

In chapter 3, we gauge the return-generating potential of four investment strategies: value-weighted, 60/40 fixed mix, and unlevered and levered risk parity. We have three main findings. First, even over periods lasting decades, the start and end dates of a backtest can have a material effect on results; second, transaction costs can reverse rankings, especially when leverage is employed; third, a statistically significant return premium does not guarantee outperformance over reasonable investment horizons.

I explicitly derive the optimal dynamic incentive contract in a general continuous time agency problem where inducing static first-best action is not always optimal. My framework generates two dynamic contracts new to the literature: (1) a "quiet-life" arrangement and (2) a suspension-based endogenously renegotiating contract. Both contractual forms induce a mixture of first-best and non-first-best action. These contracts capture common features in many real life arrangements such as "up-or-out", partnership, tenure, hidden compensation and suspension clauses. In applications, I explore the effects of taxes, bargaining and renegotiation on optimal contracting. My technical work produces a new type of incentive scheme I call sticky incentives, which underlies the optimal, infrequent-monitoring approach to inducing a mixture of first-best and non-first-best action. Furthermore, I show how differences in patience between the principal and agent factor into optimal contracting.