eScholarship
Open Access Publications from the University of California

AI PULSE Papers

About AI PULSE Papers:  The Program on Understanding Law, Science, and Evidence (PULSE) at UCLA School of Law explores the complex, multi-faceted connections between technology, science, and law. 

Max – A Thought Experiment: Could AI Run the Economy Better Than Markets?

(2020)

One of the fundamental critiques against twentieth century experiments in central economic planning, and the main reason for their failures, was the inability of human-directed planning systems to manage the data gathering, analysis, computation, and control necessary to direct the vast complexity of production, allocation, and exchange decisions that make up a modern economy. Rapid recent advances in AI, data, and related technological capabilities have re-opened that old question, and provoked vigorous speculation about the feasibility, benefits, and threats of an AI-directed economy. This paper presents a thought experiment about how this might work, based on assuming a powerful AI agent (whimsically named “Max”) with no binding computational or algorithmic limits on its (his) ability to do the task. The paper’s novel contribution is to make this hitherto under-specified question more concrete and specific. It reasons concretely through how such a system might work under explicit assumptions about contextual conditions; what benefits it might offer relative to present market and mixed-market arrangements; what novel requirements or constraints it would present; what threats and challenges it would pose; and how it inflects long-standing understandings of foundational questions about state, society, and human liberty.

As with smaller-scale regulatory interventions, the concrete implementation of comprehensive central planning can be abstracted as intervening via controlling either quantities or prices. The paper argues that quantity-based approaches would be fundamentally impaired by problems of principal-agent relations and incentives, which hobbled historical planning systems and would persist under arbitrary computational advances. Price-based approaches, as proposed by Oskar Lange, do not necessarily suffer from the same disabilities. More promising than either, however, would be a variant in which Max manages a comprehensive system of price modifications added to emergent market outcomes, equivalent to a comprehensive economy-wide system of Pigovian taxes and subsidies. Such a system, “Pigovian Max,” could in principle realize the information efficiency benefits and liberty interests of decentralized market outcomes, while also comprehensively correcting externalities and controlling inefficient concentration of market power and associated rent-seeking behavior. It could also, under certain additional assumptions, offer the prospect of taxation without deadweight loss, by taking all taxes from inframarginal rents.
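The price-modification scheme described above can be illustrated with a minimal arithmetic sketch. All goods, prices, and externality values below are hypothetical examples, not figures from the paper: the effective price is the emergent market price plus a per-unit Pigovian adjustment equal to the marginal external cost (a tax) or marginal external benefit (a subsidy).

```python
# Minimal sketch of the "Pigovian Max" price-modification idea:
# Max leaves market prices to emerge, then adds a corrective term.
# All numbers below are hypothetical illustrations.

def pigovian_price(market_price: float, marginal_external_cost: float) -> float:
    """Effective price = emergent market price + Pigovian adjustment.

    A positive marginal external cost yields a tax (the price rises);
    a negative value (an external benefit) yields a subsidy (it falls).
    """
    return market_price + marginal_external_cost

# A polluting good is taxed; a good with positive spillovers is subsidized.
coal_price = pigovian_price(market_price=50.0, marginal_external_cost=20.0)      # 70.0
vaccine_price = pigovian_price(market_price=30.0, marginal_external_cost=-10.0)  # 20.0
```

The design point the sketch captures is that Max never sets quantities or base prices directly; decentralized exchange still generates them, and Max only supplies the corrective term.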

Having outlined the basic approach and these potential benefits, the paper discusses several challenges and potential risks presented by such a system. These include Max’s need for data and the potential costs of providing it; the granularity or aggregation of Max’s determinations; the problem of maintaining variety and innovation in an economy directed by Max; the implications of Max for the welfare of human workers, the meaning and extent of property rights, and associated liberty interests; the definition of social welfare that determines Max’s objective function, its compatibility with democratic control, and the resultant stability of the boundary between the state and the economy; and finally, the relationship of Max to AI-enabled trends already underway, with implications for the feasibility of Max being developed and adopted, and the associated risks. In view of the depth and difficulty of these questions, the discussion of each is necessarily preliminary and speculative.

AI & Agency

(2019)

In July of 2019, at the Summer Institute on AI and Society in Edmonton, Canada (co-sponsored by CIFAR and the AI Pulse Project of UCLA Law), scholars from across disciplines came together in an intensive workshop. For the second half of the workshop, the cohort split into smaller working groups to delve into specific topics related to AI and Society.

I proposed a deeper exploration of the topic of “agency,” which is defined differently across domains and cultures, and relates to many of the topics of discussion in AI ethics, including responsibility and accountability. It is also the subject of an ongoing art and research project I’m producing. As a group, we looked at definitions of agency across fields, found paradoxes and incongruities, shared our own questions, and produced a visual map of the conceptual space. We decided that our disparate perspectives were better articulated through a collection of short written pieces, presented as a set, rather than a single essay on the topic. The outputs of this work are shared here.

This set of essays, many of which are framed as provocations, suggests that there remain many open questions and inconsistent assumptions on the topic. Many of the writings include more questions than answers, encouraging readers to revisit their own beliefs about agency. As we further develop AI systems and refer to humans and non-humans as “agents,” we will benefit from a better understanding of what we mean when we call something an “agent” or claim that an action involves “agency.” This work is under development, and many of us will continue to explore it in our ongoing AI work.

– Sarah Newman, Project Lead, August 2019

From Shortcut to Sleight of Hand: Why the Checklist Approach in the EU Guidelines Does Not Work

(2019)

In April 2019, the High-Level Expert Group on Artificial Intelligence (AI) nominated by the EU Commission presented “Ethics Guidelines for Trustworthy Artificial Intelligence,” followed in June 2019 by a second document, “Policy and investment recommendations.”

The Guidelines establish three characteristics (lawful, ethical, and robust) and seven key requirements (Human agency and oversight; Technical Robustness and safety; Privacy and data governance; Transparency; Diversity, non-discrimination and fairness; Societal and environmental well-being; and Accountability) that the development of AI should follow.

The Guidelines are of utmost significance for the international debate over the regulation of AI. Firstly, they aspire to set a universal standard of care for the development of AI in the future. Secondly, they were developed by a group of experts nominated by a regulatory body, and will therefore shape the normative approach in the EU regulation of AI and in its interaction with foreign countries. As the GDPR has shown, the effect of this normative activity extends well beyond the territory of the European Union.

One of the most debated aspects of the Guidelines was the need to find an objective methodology to evaluate conformity with the key requirements. For this purpose, the Expert Group drafted an “assessment checklist” in the last part of the document: the list is supposed to be incorporated into existing practices, as a way for technology developers to consider relevant ethical issues and create more “trustworthy” AI. Our group undertook a critical assessment of the proposed tool from a multidisciplinary perspective, to assess its implications and limitations for global AI development.

Artificial Intelligence’s Societal Impacts, Governance, and Ethics: Introduction to the 2019 Summer Institute on AI and Society and its Rapid Outputs

(2019)

The works assembled here are the initial outputs of the First International Summer Institute on Artificial Intelligence and Society (SAIS). The Summer Institute was convened from July 21 to 24, 2019 at the Alberta Machine Intelligence Institute (Amii) in Edmonton, in conjunction with the 2019 Deep Learning/Reinforcement Learning Summer School. The Summer Institute was jointly sponsored by the AI Pulse project of the UCLA School of Law (funded by a generous grant from the Open Philanthropy Project) and the Canadian Institute for Advanced Research (CIFAR), and was co-organized by Ted Parson (UCLA School of Law), Alona Fyshe (University of Alberta and Amii), and Dan Lizotte (University of Western Ontario). The Summer Institute brought together a distinguished international group of 80 researchers, professionals, and advanced students from a wide range of disciplines and areas of expertise, for three days of intensive mutual instruction and collaborative work on the societal implications of AI, machine learning, and related technologies. The scope of discussions at the Summer Institute was broad, including all aspects of the societal impacts of AI, alternative approaches to their governance, and associated ethical issues.

Mob.ly App Makes Driving Safer by Changing How Drivers Navigate

(2019)

A group of multi-disciplinary researchers from across North America today announced the launch of a new app, Mob.ly, that reduces the incidence of road rage by promoting a driver’s sense of well-being and safety without sacrificing efficiency and access.

On Meaningful Human Control in High-Stakes Machine-Human Partnerships

(2019)

Our team at the Summer Institute was diverse in both skills (including technical computer science, cognitive science, systems innovation, and radiology expertise) and career stage (including faculty, graduate students, and a medical student). We were brought together at the ‘pitch’ stage by a mutual interest in human-machine partnerships in complex, high-stakes domains such as healthcare, transport, and autonomous weapons. We began with a focus on the topic of “meaningful human control” – a term most often applied in the autonomous weapons literature, which refers broadly to human participation in the deployment and operation of potentially autonomous artificial intelligence (AI) systems, such that the human has a meaningful contribution to decisions and outcomes.

AI Without Math: Making AI and ML Comprehensible

(2019)

If we want nontechnical stakeholders to respond to artificial intelligence developments in an informed way, we must help them acquire a more-than-superficial understanding of artificial intelligence (AI) and machine learning (ML). Explanations involving formal mathematical notation will not reach most people who need to make informed decisions about AI. We believe it is possible to teach many AI and ML concepts without slipping into mathematical notation.

Could AI Drive Transformative Social Progress? What Would This Require?

(2019)

The potential societal impacts of artificial intelligence (AI) and related technologies are so vast, they are often likened to those of past transformative technological changes such as the industrial or agricultural revolutions. They are also deeply uncertain, presenting a wide range of possibilities for good or ill – as indeed the diverse technologies lumped under the term AI are themselves diffuse, labile, and uncertain. Speculation about AI’s broad social impacts ranges from full-on utopia to dystopia, both in fictional and non-fiction accounts. Narrowing the field of view from aggregate impacts to particular impacts and their mechanisms, there is substantial (but far from total) agreement on some – e.g., profound disruption of labor markets, with the prospect of unemployment that is novel in scale and breadth – but great uncertainty on others, even as to sign. Will AI concentrate or distribute economic and political power – and if concentrate, then in whom? Will it make human lives and societies more diverse or more uniform? Expand or contract individual liberty? Enrich or degrade human capabilities? On all these points, the range of present speculation is vast.

Siri Humphrey: Design Principles for an AI Policy Analyst

(2019)

This workgroup considered whether the policy analysis function in government could be replaced by an artificial intelligence policy analyst (AIPA) that responds directly to requests for information and decision support from political and administrative leaders. We describe the current model for policy analysis, identify the design criteria for an AIPA, and consider its limitations should it be adopted. A core limitation is the essential human interaction between a decision maker and an analyst/advisor, which extends the meaning and purpose of policy analysis beyond a simple synthesis or technical analysis view (each of which is nonetheless a complex task in its own right). Rather than propose a wholesale replacement of policy analysts with AIPA, we reframe the question, focusing on the use of AI by human policy analysts to augment their current work, what we term intelligence-amplified policy analysis (IAPA). We conclude by considering how policy analysts, schools of public affairs, and institutions of government will need to adapt to the changing nature of policy analysis in an era of increasingly capable AI.

Creating a Tool to Reproducibly Estimate the Ethical Impact of Artificial Intelligence

(2019)

How can an organization systematically and reproducibly measure the ethical impact of its AI-enabled platforms? Organizations that create applications enhanced by artificial intelligence and machine learning (AI/ML) are increasingly asked to review the ethical impact of their work. Governance and oversight organizations are increasingly asked to provide documentation to guide the conduct of ethical impact assessments. This document outlines a draft procedure for organizations to evaluate the ethical impacts of their work. We propose that ethical impact can be evaluated via a principles-based approach when the effects of platforms’ probable uses are interrogated through informative questions, with answers scaled and weighted to produce a multi-layered score. We initially assess ethical impact as the summed score of a project’s potential to protect human rights. However, we do not suggest that the ethical impact of platforms be assessed through preservation of human rights alone, a decidedly difficult concept to measure. Instead, we propose that ethical impact can be measured through a similar procedure assessing conformity with other important principles such as: protection of decisional autonomy, explainability, reduction of bias, assurances of algorithmic competence, or safety. In this initial draft paper, we demonstrate the application of our method for ethical impact assessment to the principles of human rights and bias.
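The scaled-and-weighted scoring procedure described above can be sketched as follows. The principle names, example questions, answer scales, and weights here are hypothetical placeholders, not the paper's actual assessment instrument: each informative question is answered on a fixed scale, weighted, and summed per principle, and the per-principle scores form the multi-layered result.

```python
# Hypothetical sketch of a principles-based ethical impact score:
# scaled answers to informative questions are weighted and summed
# per principle; principle subtotals plus their sum form a two-layer score.

def principle_score(answers, weights):
    """Weighted sum of scaled answers (each answer in [0, 1]) for one principle."""
    assert len(answers) == len(weights)
    return sum(a * w for a, w in zip(answers, weights))

def ethical_impact(assessment):
    """assessment maps principle name -> (answers, weights).

    Returns per-principle scores and their overall sum."""
    layers = {name: principle_score(a, w) for name, (a, w) in assessment.items()}
    return layers, sum(layers.values())

# Illustrative questions only: e.g. "was consent obtained?", "is redress available?"
layers, total = ethical_impact({
    "human_rights": ([1.0, 0.5], [2.0, 1.0]),
    "bias": ([0.25, 0.75], [1.0, 1.0]),
})
# layers == {"human_rights": 2.5, "bias": 1.0}; total == 3.5
```

Swapping in a different principle (decisional autonomy, explainability, safety) only changes the questions and weights, which is the reuse the abstract describes.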