eScholarship
Open Access Publications from the University of California

AI PULSE Papers

About AI PULSE Papers:  The Program on Understanding Law, Science, and Evidence (PULSE) at UCLA School of Law explores the complex, multi-faceted connections between technology, science, and law. 

AI & Agency

(2019)

In July of 2019, at the Summer Institute on AI and Society in Edmonton, Canada (co-sponsored by CIFAR and the AI Pulse Project of UCLA Law), scholars from across disciplines came together in an intensive workshop. For the second half of the workshop, the cohort split into smaller working groups to delve into specific topics related to AI and Society.

I proposed deeper exploration on the topic of “agency,” which is defined differently across domains and cultures, and relates to many of the topics of discussion in AI ethics, including responsibility and accountability. It is also the subject of an ongoing art and research project I’m producing. As a group, we looked at definitions of agency across fields, found paradoxes and incongruities, shared our own questions, and produced a visual map of the conceptual space. We decided that our disparate perspectives were better articulated through a collection of short written pieces, presented as a set, rather than a singular essay on the topic. The outputs of this work are shared here.

This set of essays, many of which are framed as provocations, suggests that many open questions and inconsistent assumptions remain on the topic. Many of the writings include more questions than answers, encouraging readers to revisit their own beliefs about agency. As we further develop AI systems and refer to humans and non-humans as “agents,” we will benefit from a better understanding of what we mean when we call something an “agent” or claim that an action involves “agency.” This work is under development, and many of us will continue to explore it in our ongoing AI work.

– Sarah Newman, Project Lead, August 2019

“Soft Law” Governance of Artificial Intelligence

(2019)

On November 26, 2017, Elon Musk tweeted: “Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA wdn’t [sic] make flying safer. They’re there for good reason.”

In this and other recent pronouncements, Musk is calling for artificial intelligence (AI) to be governed by traditional regulation, just as we regulate foods, drugs, aircraft, and cars. Putting aside the quibble that food, drugs, aircraft, and cars are each regulated very differently, these calls seem to envision one or more federal regulatory agencies adopting binding regulations to ensure the safety of AI. Musk is not alone in calling for “regulation” of AI; some serious AI scholars and policymakers have likewise called for regulating AI through traditional governmental regulatory approaches.

Max – A Thought Experiment: Could AI Run the Economy Better Than Markets?

(2020)

One of the fundamental critiques of twentieth-century experiments in central economic planning, and the main reason for their failures, was the inability of human-directed planning systems to manage the data gathering, analysis, computation, and control necessary to direct the vast complexity of production, allocation, and exchange decisions that make up a modern economy. Rapid recent advances in AI, data, and related technological capabilities have re-opened that old question and provoked vigorous speculation about the feasibility, benefits, and threats of an AI-directed economy. This paper presents a thought experiment about how this might work, based on assuming a powerful AI agent (whimsically named “Max”) with no binding computational or algorithmic limits on its (his) ability to do the task. The paper’s novel contribution is to make this hitherto under-specified question more concrete and specific. It reasons concretely through how such a system might work under explicit assumptions about contextual conditions; what benefits it might offer relative to present market and mixed-market arrangements; what novel requirements or constraints it would present; what threats and challenges it would pose; and how it inflects long-standing understandings of foundational questions about state, society, and human liberty.

As with smaller-scale regulatory interventions, the concrete implementation of comprehensive central planning can be abstracted as intervening via controlling either quantities or prices. The paper argues that quantity-based approaches would be fundamentally impaired by problems of principal-agent relations and incentives, which hobbled historical planning systems and would persist under arbitrary computational advances. Price-based approaches, as proposed by Oskar Lange, do not necessarily suffer from the same disabilities. More promising than either, however, would be a variant in which Max manages a comprehensive system of price modifications added to emergent market outcomes, equivalent to a comprehensive economy-wide system of Pigovian taxes and subsidies. Such a system, “Pigovian Max,” could in principle realize the information efficiency benefits and liberty interests of decentralized market outcomes, while also comprehensively correcting externalities and controlling inefficient concentration of market power and associated rent-seeking behavior. It could also, under certain additional assumptions, offer the prospect of taxation without deadweight loss, by taking all taxes from inframarginal rents.

Having outlined the basic approach and these potential benefits, the paper discusses several challenges and potential risks presented by such a system. These include Max’s need for data and the potential costs of providing it; the granularity or aggregation of Max’s determinations; the problem of maintaining variety and innovation in an economy directed by Max; the implications of Max for the welfare of human workers, the meaning and extent of property rights, and associated liberty interests; the definition of social welfare that determines Max’s objective function, its compatibility with democratic control, and the resultant stability of the boundary between the state and the economy; and finally, the relationship of Max to AI-enabled trends already underway, with implications for the feasibility of Max being developed and adopted, and the associated risks. In view of the depth and difficulty of these questions, the discussion of each is necessarily preliminary and speculative.

From Shortcut to Sleight of Hand: Why the Checklist Approach in the EU Guidelines Does Not Work

(2019)

In April 2019, the High-Level Expert Group on Artificial Intelligence (AI) nominated by the EU Commission presented “Ethics Guidelines for Trustworthy Artificial Intelligence,” followed in June 2019 by a second document, “Policy and investment recommendations.”

The Guidelines establish three characteristics (lawful, ethical, and robust) and seven key requirements (Human agency and oversight; Technical Robustness and safety; Privacy and data governance; Transparency; Diversity, non-discrimination and fairness; Societal and environmental well-being; and Accountability) that the development of AI should follow.

The Guidelines are of utmost significance for the international debate over the regulation of AI. Firstly, they aspire to set a universal standard of care for the future development of AI. Secondly, they were developed by a group of experts nominated by a regulatory body, and will therefore shape the normative approach in the EU regulation of AI and in its interactions with foreign countries. As the GDPR has shown, the effect of such normative activity extends well beyond the territory of the European Union.

One of the most debated aspects of the Guidelines was the need to find an objective methodology to evaluate conformity with the key requirements. For this purpose, the Expert Group drafted an “assessment checklist” in the last part of the document: the list is supposed to be incorporated into existing practices, as a way for technology developers to consider relevant ethical issues and create more “trustworthy” AI. Our group undertook a critical assessment of the proposed tool from a multidisciplinary perspective, to assess its implications and limitations for global AI development.

Bezos World Or Levelers: Can We Choose Our Scenario?

(2019)

Artificial intelligence (AI) augurs changes in society at least as large as those of the industrial revolution.  But much of the policy debate seems narrow – extrapolating current trends and asking how we might manage their rough edges.  This essay instead explores how AI might be used to enable fundamentally different future worlds and how one such future might be enabled by AI algorithms with different goals and functions than those most common today.

Autonomous Weapons And Coercive Threats

(2019)

Governments across the globe have been quick to adapt developments in artificial intelligence to military technologies. Prominent among the many changes recently introduced, autonomous weapon systems pose important new questions for our understanding of conflict generally, and coercive diplomacy in particular. These weapons dramatically decrease the cost of employing military force: in human terms on the battlefield, in financial and material terms, and in political terms for leaders who choose to pursue conflict. In this article, we analyze the implications of these new weapons for coercive diplomacy, exploring how they will influence the course of international crises. We argue that these weapons have different implications for relationships between relatively equal states than they do for unbalanced relationships in which one state vastly overpowers the other. In asymmetric relationships, these weapons exaggerate existing power disparities. In such cases, the strong state can use autonomous weapons to signal credibly, avoiding traditional and more costly signals such as tripwires. At the same time, the introduction of autonomous weapons puts some important forms of signaling out of reach. In symmetric conflicts, where states retain the ability to inflict heavy damage on each other, autonomous weapons will have a relatively small effect on crisis dynamics. Credible signaling will still require traditional forms of high-cost signals, including those that by design put military and civilian populations at risk.

Artificial Intelligence’s Societal Impacts, Governance, and Ethics: Introduction to the 2019 Summer Institute on AI and Society and its Rapid Outputs

(2019)

The works assembled here are the initial outputs of the First International Summer Institute on Artificial Intelligence and Society (SAIS). The Summer Institute was convened from July 21 to 24, 2019 at the Alberta Machine Intelligence Institute (Amii) in Edmonton, in conjunction with the 2019 Deep Learning/Reinforcement Learning Summer School. The Summer Institute was jointly sponsored by the AI Pulse project of the UCLA School of Law (funded by a generous grant from the Open Philanthropy Project) and the Canadian Institute for Advanced Research (CIFAR), and was co-organized by Ted Parson (UCLA School of Law), Alona Fyshe (University of Alberta and Amii), and Dan Lizotte (University of Western Ontario). The Summer Institute brought together a distinguished international group of 80 researchers, professionals, and advanced students from a wide range of disciplines and areas of expertise, for three days of intensive mutual instruction and collaborative work on the societal implications of AI, machine learning, and related technologies. The scope of discussions at the Summer Institute was broad, including all aspects of the societal impacts of AI, alternative approaches to their governance, and associated ethical issues.

Genetically Modified Organisms: A Precautionary Tale for AI Governance 

(2019)

The fruits of a long anticipated technology finally hit the market, with promise to extend human life, revolutionize production, improve consumer welfare, reduce poverty, and inspire countless yet-imagined innovations. A marvel of science and engineering, it reflects the cumulative efforts of a generation of researchers backed by research funding from the U.S. government and private sector investments in (predominantly American) technology companies. Though most scientists and policy elites consider the fruits of this technology to be safe, and the technology itself as a game-changer, there is still widespread acknowledgment that certain applications raise deeply challenging ethical issues, with some commentators even warning that careless or malicious applications could cause planet-wide catastrophes. Indeed, the technology has long been a fixture of science fiction, as an antagonist in allegories about hubris and science run amok—a narrative not lost on policy makers in the United States, Europe and elsewhere as they navigate the challenges and opportunities of this potentially world-changing new technology.

One Shot Learning In AI Innovation

(2019)

Modern algorithmic design far exceeds the limits of human cognition in many ways. Armed with large data sets, programmers promise that their algorithms can better predict which prisoners are most likely to recidivate and where future crimes are likely to occur. Software designers further hope to use large data sets to uncover relationships between genes and disease that would take human researchers much longer to identify.

Mob.ly App Makes Driving Safer by Changing How Drivers Navigate

(2019)

A group of multi-disciplinary researchers from across North America today announced the launch of a new app, Mob.ly, that reduces the incidence of road rage by promoting a driver’s sense of well-being and safety without sacrificing efficiency and access.