This report presents the theoretical development of a method to evaluate differing platoon control strategies and determine each strategy's worst-case behavior under bounded parametric variations. The approach aids a platoon designer in determining how robust a design strategy is in the face of system uncertainties. Index Terms: traffic platooning, safety, automated highways, vehicle dynamics

# Your search: "author:Packard, Andrew"


## Scholarly Works (9 results)

Safety in the execution of the sit-to-stand movement is a key feature for wide adoption of powered lower limb orthoses that assist the mobility of patients with complete paraplegia. This work provides techniques for planning the motion of these medical devices to yield biomechanically sound configurations, designing tracking controllers for the reference trajectories of the movements, evaluating the robustness of the controllers against parameter uncertainty, and assessing the ability of a proxy for the user to coordinate with the control input during rehabilitation and physical therapy sessions. Although our ideas can be applied to analyze any powered orthosis on the market, the featured numerical simulations consider a minimally actuated orthosis at the hips.

The orthosis and its user are modeled as a three-link planar robot. The reference trajectories for the angular position of the links are defined from the desired behavior of the Center of Mass of the system, and the corresponding input trajectory is obtained using a computed torque method with control allocation. With the Jacobian linearization of the dynamics about the reference trajectories, a pool of finite-time-horizon LQR gains is designed assuming that there is control authority over the actuators of the orthosis and over the torque and forces applied by the user. Conducting reachability analysis, we define a performance metric for the robustness of the closed-loop system against parameter uncertainty and choose the gain from the pool that optimizes it. Replacing the presumed controlled actions of the user with an Iterative Learning Control algorithm as a substitute for human experiments, we find that the algorithm obtains torque and forces that result in a successful sit-to-stand movement, regardless of parameter uncertainty and of factors deliberately introduced to hinder learning. Thus we conclude that it is reasonable to expect that the superior cognitive skills of real users will enable them to synchronize with the controller of the hips through training. Further tests are performed to verify the robustness of the system in feedback with the LQR gain in the presence of measurement noise and model uncertainty.
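Finite-horizon LQR gains of the kind described above are typically computed with a backward Riccati recursion. The following is a minimal sketch, not the thesis's implementation: the double-integrator model, weights, and horizon are illustrative stand-ins for the linearized three-link orthosis dynamics.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, QN, N):
    """Backward Riccati recursion; returns time-varying gains K_0..K_{N-1}."""
    P = QN
    gains = []
    for _ in range(N):
        # K_k = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P_k = Q + A' P (A - B K)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # reorder so gains[0] applies at time step 0

# illustrative double-integrator stand-in for the linearized dynamics
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q, R, QN = np.eye(2), np.array([[1.0]]), 10.0 * np.eye(2)
K = finite_horizon_lqr(A, B, Q, R, QN, N=50)
```

Each candidate gain schedule in a pool would then be scored by a reachability-based robustness metric and the best one retained.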

We believe that our tests can set a good benchmark to systematically choose actuators for fitting a large variety of users, and develop a protocol for assessing the robustness of the sit-to-stand movement in clinical trials. This would then help to close the gap between these medical devices and standing wheelchairs, which still remain the most reliable mobility solution for patients with complete paraplegia.

Modern computational tools for stability, performance, and safety certification are not scalable to large nonlinear systems. In this dissertation we propose a compositional analysis approach that takes advantage of the interconnected structure of many modern large-scale systems to solve this problem. Specifically, we pose the certification problem as a distributed optimization that searches over the input-output properties of each subsystem to certify a desired property of the interconnected system. The alternating direction method of multipliers (ADMM), a popular distributed optimization technique, is employed to decompose and solve this problem.
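The ADMM decomposition described above can be illustrated on a toy global-consensus problem. This is a hedged sketch with an invented objective, penalty parameter, and iteration count; it shows only the splitting pattern, not the dissertation's certification program.

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=100):
    """ADMM for min_x sum_i 0.5*(x - a_i)^2 using local copies x_i,
    a global consensus variable z, and scaled dual variables u_i."""
    n = len(a)
    x, z, u = np.zeros(n), 0.0, np.zeros(n)
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # local subproblems, in parallel
        z = np.mean(x + u)                      # coordination (averaging) step
        u = u + x - z                           # dual ascent on the consensus gap
    return z

a = np.array([1.0, 2.0, 6.0])
z = consensus_admm(a)
# z converges to the minimizer, which is the average of a (3.0 here)
```

In the compositional-analysis setting, each "local subproblem" would instead search over a subsystem's input-output properties, with the coordination step enforcing the interconnection-level certificate.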

This approach is very general in that it allows us to search over a wide range of input-output properties for each subsystem. We demonstrate the use of dissipativity, equilibrium independent dissipativity (EID), and integral quadratic constraints (IQCs) to characterize the properties of the individual subsystems and the entire interconnection. Multiple examples showing the applicability and scalability of the approach are presented.

Furthermore, we demonstrate how symmetries in the interconnection topology can be exploited to further improve the computational efficiency and scalability of the distributed optimization problem. Unlike other symmetry-reduction techniques, this approach does not require the subsystems to be identical, only that they share input-output properties. Thus, it can be applied to many real-world systems. We demonstrate these reduction techniques on a large-scale nonlinear example and a vehicle platoon example.

Finally, we present a passivity-based formation control strategy for multiple unmanned aerial vehicles (UAVs) cooperatively carrying a suspended load. This strategy is designed such that the input-output properties of the individual UAVs and the interconnection structure guarantee stability of the system under appropriate conditions. Specifically, we show that the system is stable for any configurations where the cables carrying the suspended load are in tension.

Plants have internal stress-response mechanisms that are activated by environmental stimuli. Measuring the activation of these mechanisms directly gives a clearer picture of how an environmental stimulus is actually affecting the stress state of the plant. This information is valuable in an agricultural context because these internal mechanisms are generally upstream of observable physiological manifestations of stress. But measuring the activity of these pathways is often expensive and time-consuming. We will demonstrate an optical biosensor system reporting the activity of a heat- and drought-sensitive gene pathway in *Arabidopsis thaliana*. The marginal cost per data point of this system is very low, and the rate of data acquisition is faster than the dynamics of the gene pathway, so the biosensor system can easily and quickly detect transient changes in pathway activation, opening a real-time window into the heat- and drought-stress state of the plant. With this real-time information in hand, we proceed to close the loop around the biosensor output using a computer-based feedback policy to maintain a constant biosensor expression level using the temperature in a small greenhouse as our control input. Along the way we will discover that as complicated and unknown as the internal dynamics of this process may be, a mechanistic model is not necessary to design a feedback policy that is robust to biological variability in the plants.
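A model-free feedback loop of the sort described can be sketched in a few lines. The first-order "plant" below is a made-up stand-in for the gene pathway, and the gains and time constants are invented for illustration; the point is only that an integral controller needs no mechanistic model to regulate the output.

```python
import numpy as np

class IntegralController:
    """Model-free integral feedback: requires no mechanistic plant model."""
    def __init__(self, setpoint, ki=0.5, dt=0.1):
        self.setpoint, self.ki, self.dt = setpoint, ki, dt
        self.i = 0.0
    def __call__(self, y):
        self.i += self.ki * (self.setpoint - y) * self.dt
        return self.i  # control input (temperature command, unitless here)

def run_loop(controller, steps=500, dt=0.1):
    """Black-box plant: an unknown first-order response of biosensor
    output y to the temperature input u (crude stand-in for the pathway)."""
    state, ys = 0.0, []
    for _ in range(steps):
        u = controller(state)
        state += dt * (-0.5 * state + 0.3 * u)  # hidden dynamics
        ys.append(state)
    return np.array(ys)

ys = run_loop(IntegralController(setpoint=1.0))
# the integrator drives the biosensor output to the setpoint despite the
# unknown plant gain and time constant
```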

This thesis investigates performance analysis for nonlinear systems, which consist of both known and unknown dynamics and may only be defined locally. We apply combinations of integral quadratic constraints (IQCs), developed by Megretski and Rantzer, and sum-of-squares (SOS) techniques for the analysis.

In this context, analysis of stability and input-output properties is performed in three ways.

If the known portion of the dynamics is linear, the stability test from Megretski and Rantzer, which generalizes early frequency-domain theorems of robust control (Zames, Safonov, Doyle, and others), is well suited. If the known portion of the dynamics is nonlinear, frequency-domain methods are not directly applicable. SOS methods using polynomial storage functions to satisfy dissipation inequalities are used to certify the stability and performance characteristics. However, if the known dynamics are high-dimensional, then this approach to the analysis is (currently) intractable. An alternate approach is proposed here to address this dimensionality issue. The known portion is decomposed into a linear interconnection of smaller nonlinear systems. We derive IQCs satisfied by the nonlinear subsystems, which is computationally feasible. With this library of IQCs coarsely describing the subsystems' behaviors, we apply the techniques from Megretski and Rantzer to the interconnection description involving the known linear part and all of the individual subsystems.
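As a brief sketch of the dissipation-inequality certificate mentioned above (the symbols $V$, $s$, and $\gamma$ are generic, not taken from the thesis): for dynamics $\dot{x} = f(x, w)$ with output $z = h(x, w)$, one searches for a polynomial storage function $V$ satisfying

```latex
V(x) \ge 0, \qquad
\nabla V(x)^{\top} f(x, w) \;\le\; s(w, z), \qquad
s(w, z) = \gamma^{2}\, w^{\top} w - z^{\top} z .
```

When $f$, $h$, and $V$ are polynomial, each inequality can be relaxed to a sum-of-squares constraint and checked by semidefinite programming; feasibility certifies an $L_2$-gain bound of $\gamma$.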

Traditionally, IQCs have been used to cover unknown portions of the dynamics. Our approach is novel in that we cover known nonlinear dynamics with IQCs, by employing SOS methods including novel techniques for estimating the input-output gain of a system. This perspective is a step towards reducing the dimensionality of the analysis of large, interconnected nonlinear systems.

The IQC stability analysis by Megretski and Rantzer is only applicable to systems that are well-posed in the large. This thesis makes contributions towards extending this analysis to systems with more limited notions of well-posedness. We define the notion of a local or "conditional" IQC, and develop a new test to verify stability and performance criteria.

We also study a specific class of interconnected, passive subsystems. If the subsystems also exhibit gain roll-off at high frequencies, one would expect improved analysis results. Indeed, we characterize the gain roll-off property as an integral quadratic constraint and obtain an improved bound on the time delay the interconnection can tolerate while remaining stable. In the case where the interconnection is cyclic, we derive an analytical condition for stability.

Model validation is the process of evaluating how well a computational model represents reality. That is to say, does the model make predictions that adequately agree with the experimental evidence? Both model validation and uncertainty quantification have gained tremendous attention from researchers in engineering, physics, chemistry, and biology. Uncertainty quantification methods have been successfully applied to assessing model predictions of unmeasured quantities of interest and assisting in the development of computationally efficient, yet predictive, reduced-order models. In both cases, experimental data are incorporated into the analysis to refine the uncertainty estimate. However, with the amount of experimental data published and being generated through ongoing scientific endeavors, it is crucial to organize and integrate experimental data with the uncertainty quantification methods.

In this work, I develop tools for uncertainty quantification and construct a validation workflow that seamlessly integrates uncertainty quantification tools with an online database of chemical kinetics validation data. The first part of this dissertation discusses the need for structured experimental data, emphasizing its value towards model validation, and explores how online databases provide structure to data. An optimization-based framework for uncertainty quantification, Bound-to-Bound Data Collaboration, is employed throughout the dissertation to verify the compatibility of models with data. A novel strategy for surrogate modeling using Bound-to-Bound Data Collaboration is developed to guide the fitting procedure towards regions of the parameter space where the model predicts the data accurately. This technique is demonstrated in two simple examples and a solid-fuel combustion example. In the second part of this dissertation, three complex physics-based models are investigated, specifically H2/O2 combustion, a solid-fuel char oxidation model, and a semi-empirical quantum chemistry model. The efficacy of the validation workflow for developing predictive models, and the scientific insights uncovered from the analysis, are discussed.

This dissertation discusses uncertainty quantification as posed in the Data Collaboration framework. Data Collaboration is a methodology for combining experimental data and system models to induce constraints on a set of uncertain system parameters. The framework is summarized, including outlines of notation and techniques. The main techniques include polynomial optimization and surrogate modeling to ascertain the consistency of all data and models as well as propagate uncertainty in the form of a model prediction.

One of the main methods of Data Collaboration is using techniques of nonconvex quadratically constrained quadratic programming (NQCQP) to provide both lower and upper bounds on the various objectives. The Lagrangian dual of the NQCQP provides both an outer bound on the optimal objective and Lagrange multipliers. These multipliers act as sensitivity measures relaying the effects of changes in the parameter constraint bounds on the optimal objective. The multipliers are rewritten to provide the sensitivity of the uncertainty in the response prediction with respect to uncertainty in the parameters and experimental data.

It is often of interest to find a vector of parameters that is both feasible and representative of the current community work and knowledge. This is posed as the problem of finding the minimal number of parameters that must deviate from their literature value to achieve concurrence with all experimental data constraints. This problem is heuristically solved using the l1-norm in place of the cardinality function. A lower bound on the objective is provided through an NQCQP formulation.
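The ℓ1 relaxation of the cardinality objective described above reduces to a linear program after introducing slack variables. A minimal sketch follows; the constraint set and numbers are invented, and `l1_deviation` is a hypothetical helper, not code from the dissertation.

```python
import numpy as np
from scipy.optimize import linprog

def l1_deviation(x0, A_ub, b_ub):
    """Heuristic for min card(x - x0) s.t. A_ub @ x <= b_ub, relaxed to
    min ||x - x0||_1 and solved as an LP over z = [x, t] with |x - x0| <= t."""
    n = len(x0)
    c = np.concatenate([np.zeros(n), np.ones(n)])  # minimize sum of slacks t
    I = np.eye(n)
    A = np.block([[I, -I], [-I, -I]])               # encodes |x - x0| <= t
    b = np.concatenate([x0, -x0])
    A = np.vstack([A, np.hstack([A_ub, np.zeros_like(A_ub)])])
    b = np.concatenate([b, b_ub])
    bounds = [(None, None)] * n + [(0, None)] * n   # x free, t >= 0
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    return res.x[:n], res.fun

# toy instance: stay feasible for x1 + x2 >= 1 while deviating minimally
# (in the l1 sense) from the nominal literature values x0 = (0, 0)
x, dev = l1_deviation(np.array([0.0, 0.0]),
                      np.array([[-1.0, -1.0]]), np.array([-1.0]))
```

LP vertex solutions tend to move only a few coordinates away from `x0`, which is exactly the sparsity the ℓ1 heuristic aims for.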

In order to use the NQCQP techniques, the system models need to have quadratic forms. When they do not, surrogate models are fitted. Surrogate modeling can be difficult for complex models with large numbers of parameters and long simulation times because of the evaluation time required to make a good fit. New techniques are developed for searching for an active subspace of the parameters, and subsequently creating an experiment design on the active subspace that adheres to the original parameter constraints. The active subspace can have a dimension significantly lower than the original parameter dimension, thereby reducing the computational complexity of generating the surrogate model. The technique is demonstrated on several examples from combustion chemistry and biology.
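An active-subspace search can be sketched with the standard gradient-covariance eigendecomposition. The toy function below, which varies along exactly one direction, is invented for illustration; the dissertation's models are far more complex and their gradients would come from simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy model f(x) = (w.x)^2 varies only along the direction w, so its
# active subspace is one-dimensional
w = np.array([1.0, 2.0, 0.5, -1.0])
def grad_f(x):
    return 2.0 * (w @ x) * w

# Monte Carlo estimate of C = E[grad f(x) grad f(x)^T] over the parameter box
X = rng.uniform(-1.0, 1.0, size=(500, 4))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)

# eigenvectors with dominant eigenvalues span the active subspace; a large
# spectral gap after the first eigenvalue indicates a 1-D subspace here
eigvals, eigvecs = np.linalg.eigh(C)
active_dir = eigvecs[:, -1]
```

Surrogate fitting and experiment design can then operate on the low-dimensional coordinate `active_dir @ x` rather than on all original parameters.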

Several other applications of the Data Collaboration framework are presented. They are used to demonstrate the complexity of describing a high dimensional feasible set of parameter values as constrained by experimental data. Approximating the feasible set can lead to a simple description, but the predictive capability of such a set is significantly reduced compared to the actual feasible set. This is demonstrated on an example from combustion chemistry.

The dissertation explores the problem of rigorously quantifying the performance of a fault diagnosis scheme in terms of probabilistic performance metrics. Typically, when the performance of a fault diagnosis scheme is of utmost importance, physical redundancy is used to create a highly reliable system that is easy to analyze. However, in this dissertation, we provide a general framework that applies to more complex analytically redundant or model-based fault diagnosis schemes. For each fault diagnosis problem in this framework, our performance metrics can be computed accurately in polynomial time.

First, we cast the fault diagnosis problem as a sequence of hypothesis tests. At each time, the performance of a fault diagnosis scheme is quantified by the probability that the scheme has chosen the correct hypothesis. The resulting performance metrics are joint probabilities. Using Bayes' rule, we decompose these performance metrics into two parts: marginal probabilities that quantify the reliability of the system, and conditional probabilities that quantify the performance of the fault diagnosis scheme. These conditional probabilities are used to draw connections between fault diagnosis and the fields of medical diagnostic testing, signal detection, and general statistical decision theory.
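The Bayes-rule decomposition of a joint performance metric can be illustrated with invented numbers (none of these values come from the dissertation):

```python
# hypothetical reliability and diagnoser characteristics
p_fault = 0.02        # marginal: P(fault occurs), i.e. system reliability
p_detect = 0.95       # conditional: P(alarm | fault)    (sensitivity)
p_false_alarm = 0.01  # conditional: P(alarm | no fault) (1 - specificity)

# joint metrics via Bayes' rule: P(decision, state) = P(decision | state) P(state)
p_true_positive = p_detect * p_fault
p_true_negative = (1.0 - p_false_alarm) * (1.0 - p_fault)

# probability the scheme chooses the correct hypothesis at this time step
p_correct = p_true_positive + p_true_negative
```

The marginal factor depends only on the system, and the conditional factor only on the diagnoser, which is what lets the two be analyzed separately.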

Second, we examine the problem of computing the performance metrics efficiently and accurately. To solve this problem, we examine each portion of the fault diagnosis problem and specify a set of sufficient assumptions that guarantee efficient computation. In particular, we provide a detailed characterization of the class of finite-state Markov chains that lead to tractable fault parameter models. To demonstrate that these assumptions enable efficient computation, we provide pseudocode algorithms and prove that their running time is indeed polynomial.
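As a sketch of why such Markov fault-parameter models are tractable, the marginal fault probabilities follow from a forward recursion whose cost is polynomial in the horizon and state count. The two-state chain and per-step failure rate below are invented for illustration.

```python
import numpy as np

# two-state fault model: 0 = nominal, 1 = failed (absorbing),
# with an illustrative per-step failure probability of 0.01
P = np.array([[0.99, 0.01],
              [0.00, 1.00]])

def marginals(P, p0, N):
    """Propagate the state distribution forward; O(N * n^2) time."""
    p = np.array(p0, dtype=float)
    out = [p.copy()]
    for _ in range(N):
        p = P.T @ p
        out.append(p.copy())
    return np.array(out)

probs = marginals(P, [1.0, 0.0], 100)
# probability of having failed by step k is 1 - 0.99**k for this chain
```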

Third, we consider fault diagnosis problems involving uncertain systems. The inclusion of uncertainty enlarges the class of systems that may be analyzed with our framework. It also addresses the issue of model mismatch between the actual system and the system used to design the fault diagnosis scheme. For various types of uncertainty, we present convex optimization problems that yield the worst-case performance over the uncertainty set.

Finally, we discuss applications of the performance metrics and compute the performance for two fault diagnosis problems. The first problem is based on a simplified air-data sensor model, and the second problem is based on a linearized vertical take-off and landing aircraft model.