We describe a method for direct determination and visualization of the distribution of charge in a composite electrode. Using synchrotron X-ray microdiffraction, state-of-charge profiles in-plane and normal to the current collector were measured. In electrodes charged at high rate, the signatures of nonuniform current distribution were evident. The portion of a prismatic cell electrode closest to the current collector tab had the highest state of charge due to electronic resistance in the composite electrode and supporting foil. In a coin cell electrode, the active material at the electrode surface was more fully charged than that close to the current collector because the limiting factor in this case is ion conduction in the electrolyte contained within the porous electrode.


This paper explicitly solves a dynamic portfolio choice problem in which an investor allocates his wealth between a riskless and a risky asset. The solution shows that insights gained from studying static portfolio choice problems do not necessarily carry over to dynamic settings. For example, even though the risk premium of the risky asset in the problem presented here is strictly positive, holdings of that risky asset might increase with risk aversion. More surprisingly, a risk-averse investor might take a short position in the risky asset. The findings suggest that using stock holdings as a proxy for risk aversion may be inappropriate. Finally, I show that volatility might not prevent a risk-averse investor from holding an infinite amount of a risky asset, contrary to Harry Markowitz's insights on the static portfolio choice problem.
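For context, the static benchmark against which these results are surprising is the classical Merton/Markowitz rule (standard background, not this paper's model): with constant relative risk aversion $\gamma$, drift $\mu$, riskless rate $r$, and volatility $\sigma$, the optimal fraction of wealth in the risky asset is

$$
\pi^{*} \;=\; \frac{\mu - r}{\gamma\,\sigma^{2}},
$$

which is strictly decreasing in risk aversion $\gamma$, always positive when the risk premium $\mu - r$ is positive, and vanishes as $\sigma \to \infty$. The abstract's findings show that each of these monotonicity properties can fail in the dynamic problem.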

### Losing Money on Arbitrages: Optimal Dynamic Portfolio Choice in Markets with Arbitrage Opportunities

In theory, an investor can make infinite profits by taking unlimited positions in an arbitrage. In reality, however, investors must satisfy margin requirements, which completely change the economics of arbitrage. We derive the optimal investment policy for a risk-averse investor in a market where there are arbitrage opportunities. We show that it is often optimal to underinvest in the arbitrage by taking a smaller position than margin constraints allow. In some cases, it is actually optimal for an investor to walk away from a pure arbitrage opportunity. Even when the optimal policy is followed, the arbitrage strategy may underperform the riskless asset or have an unimpressive Sharpe ratio. Furthermore, the arbitrage portfolio typically experiences losses at some point before the final convergence date. These results have important implications for the role of arbitrageurs in financial markets.

This paper studies the valuation of assets with debt tax shields when debt policy is a general time-dependent function of the asset’s unlevered cash flows, value, and history. In a continuous-time setting, it shows that the value of a project’s debt tax shield satisfies a partial differential equation, which simplifies to an easily solved ordinary differential equation for most plausible debt policies. A large class of cases exhibits closed-form solutions for the value of a levered asset, the value of its tax shield, and the appropriate cost of capital for discounting unlevered cash flows so as to account for the value of the tax shield.

Gallant, Hansen and Tauchen (1990) show how to use conditioning information optimally to construct a sharper unconditional variance bound on pricing kernels. The literature predominantly resorts to a simple, sub-optimal procedure that scales returns with predictive instruments and computes standard bounds using the original and scaled returns. This article provides a formal bridge between the two approaches. We propose an optimally scaled bound, which, when the first and second conditional moments are known, coincides with the bound derived by Gallant, Hansen and Tauchen (the GHT bound). When these moments are mis-specified, our optimally scaled bound still yields a valid lower bound on the standard deviation of pricing kernels, unlike the GHT bound. Moreover, the optimally scaled bound can serve as a diagnostic for the specification of the first two conditional moments of asset returns, because it attains its maximum only when the conditional mean and conditional variance are correctly specified. The illustration in this article adds time-varying volatility to the familiar Hansen-Singleton (1983) set-up of an autoregressive model for consumption growth and bond and stock returns. Both an unconstrained version and a version with the restrictions of the standard consumption-based asset pricing model imposed serve as the data-generating processes used to illustrate the behavior of the bounds. In the process, we explore an interesting empirical phenomenon: asymmetric volatility in consumption growth.
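As standard background (not this article's own derivation), the logic of scaling rests on the Hansen-Jagannathan bound: any payoff priced to zero, including an excess return $R^{e}_{t+1}$ scaled by an instrument $z_t$ known at time $t$, delivers a lower bound on pricing-kernel volatility. From $E[m\,z_t R^{e}_{t+1}] = 0$ and the Cauchy-Schwarz inequality,

$$
\frac{\sigma(m)}{E[m]} \;\ge\; \frac{\bigl|E\bigl[z_t\,R^{e}_{t+1}\bigr]\bigr|}{\sigma\bigl(z_t\,R^{e}_{t+1}\bigr)}
\qquad \text{for every instrument } z_t .
$$

The sub-optimal procedure the abstract describes evaluates this bound for fixed, ad hoc choices of $z_t$; the GHT construction instead chooses the scaling optimally using the conditional mean and variance of returns, which is why the resulting bound is sharp when those moments are correctly specified.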

We characterize the joint dynamics of expected returns, stochastic volatility, and prices. In particular, with a given dividend process, one of the processes of the expected return, the stock volatility, or the price-dividend ratio fully determines the other two. For example, the stock volatility determines the expected return and the price-dividend ratio. By parameterizing one, or more, of expected returns, volatility, or prices, common empirical specifications place strong, and sometimes inconsistent, restrictions on the dynamics of the other variables. Our results are useful for understanding the risk-return trade-off, as well as characterizing the predictability of stock returns.

Binding of transcription factors to specific sites on DNA is central to the regulation of gene expression. ChIP-seq technology is a novel tool that combines chromatin immunoprecipitation (ChIP) with next-generation DNA sequencing (seq) to identify transcription factor binding loci on DNA. ChIP-seq has revolutionized the process of biological data acquisition for elucidating fundamental gene regulation mechanisms. However, the resulting large datasets on transcription factor-DNA binding call for analysis with statistical tools, which provide predictions that guide wet-lab biological research. This research is part of a statistical-modeling study of transcription factor-DNA binding, analyzing the various patterns of transcription factor co-clustering on DNA in a ChIP-seq dataset obtained in mouse embryonic stem cells for 15 transcription factors/coregulators. First, we used the chi-square goodness-of-fit test to determine whether the binding-site locations for each transcription factor constitute a Poisson process; the results indicate that a homogeneous Poisson process is unlikely. Second, we studied the correlation among the bindings of the various transcription factors. Third, we analyzed the patterns of clustered sites containing three transcription factors, finding a total of 3353 such sites. The transcription factors Smad1, Tcfcp2l1, Stat3, Klf4 and Esrrb and the coregulator p300 are preferentially co-localized with Nanog, Oct4 and Sox2, while E2f1 and Zfx are preferentially co-localized with n-Myc and c-Myc.
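The first step above can be sketched in a few lines. This is a minimal illustration of the general technique, not the authors' code: under a homogeneous Poisson process, the number of binding sites falling in equal-width genomic windows is Poisson with a common rate, so a chi-square goodness-of-fit test of the window counts against a fitted Poisson law (rate estimated by the sample mean, upper tail pooled into the last cell) detects the clustering the abstract reports.

```python
import numpy as np
from scipy.stats import poisson, chi2

def poisson_gof(counts):
    """Chi-square goodness-of-fit test of per-window site counts vs. Poisson."""
    counts = np.asarray(counts)
    lam = counts.mean()                      # MLE of the Poisson rate
    kmax = int(counts.max())
    # Observed frequency of each count value 0..kmax
    observed = np.bincount(counts, minlength=kmax + 1).astype(float)
    expected = poisson.pmf(np.arange(kmax + 1), lam) * counts.size
    expected[-1] += poisson.sf(kmax, lam) * counts.size   # pool the upper tail
    stat = np.sum((observed - expected) ** 2 / expected)
    df = len(observed) - 2   # one df lost to the total, one to estimating lambda
    return stat, chi2.sf(stat, df)

# Heavily clustered toy data (half empty windows, half dense windows)
# rejects the homogeneous Poisson model with a p-value near zero:
stat, p = poisson_gof(np.array([0] * 100 + [10] * 100))
```

In practice one would also merge any cells with very small expected counts before computing the statistic; the sketch omits that refinement.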

Predicting how and where proteins, especially transcription factors (TFs), interact with DNA is an important problem in biology. We present here a systematic study of predictive modeling approaches to the TF-DNA binding problem, which have frequently been shown to be more effective than methods based only on position-specific weight matrices (PWMs). In these approaches, a statistical relationship between genomic sequences and gene expression or ChIP-binding intensities is inferred through a regression framework, and influential sequence features are identified by variable selection. We examine several state-of-the-art learning methods, including stepwise linear regression, multivariate adaptive regression splines (MARS), neural networks, support vector machines, boosting, and Bayesian additive regression trees (BART). These methods are applied to both simulated datasets and two whole-genome ChIP-chip datasets on the TFs Oct4 and Sox2, respectively, in human embryonic stem cells. We find that, with proper learning methods, predictive modeling approaches can significantly improve predictive power and identify more biologically interesting features, such as TF-TF interactions, than the PWM approach. In particular, BART and boosting show the best and most robust overall performance among all the methods.
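The regression framing above can be sketched with synthetic data. This is a toy stand-in, not the authors' pipeline: scikit-learn's `GradientBoostingRegressor` plays the role of the boosting method, the feature matrix is a hypothetical set of motif-match scores, and the response mimics a ChIP binding intensity driven by two motifs plus their interaction, the kind of TF-TF effect a tree ensemble can capture but a single PWM score cannot.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy data: rows are sequences, columns are hypothetical motif-match scores.
n, p = 400, 20
X = rng.normal(size=(n, p))
# Binding intensity depends on motifs 0 and 1 and on their interaction.
y = (2.0 * X[:, 0] + 1.5 * X[:, 1] + 1.0 * X[:, 0] * X[:, 1]
     + rng.normal(scale=0.5, size=n))

model = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=0)
model.fit(X[:300], y[:300])

r2 = model.score(X[300:], y[300:])                  # held-out R^2
top2 = np.argsort(model.feature_importances_)[-2:]  # most influential features
```

With depth-3 trees the ensemble can represent the multiplicative term directly, and the feature-importance ranking recovers the two informative motifs; a purely additive PWM-style score would miss the interaction.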