eScholarship
Open Access Publications from the University of California

AI PULSE Papers

About AI PULSE Papers:  The Program on Understanding Law, Science, and Evidence (PULSE) at UCLA School of Law explores the complex, multi-faceted connections between technology, science, and law. 


Artificial Intelligence in Strategic Context: An Introduction

(2019)

Artificial intelligence (AI), particularly various methods of machine learning (ML), has achieved landmark advances over the past few years in applications as diverse as playing complex games, language processing, speech recognition and synthesis, image identification, and facial recognition. These breakthroughs have brought a surge of popular, journalistic, and policy attention to the field, including both excitement about anticipated advances and the benefits they promise, and concern about societal impacts and risks – potentially arising through whatever combination of accident, malicious or reckless use, or just social and political disruption from the scale and rapidity of change.


Bezos World Or Levelers: Can We Choose Our Scenario?

(2019)

Artificial intelligence (AI) augurs changes in society at least as large as those of the industrial revolution.  But much of the policy debate seems narrow – extrapolating current trends and asking how we might manage their rough edges.  This essay instead explores how AI might be used to enable fundamentally different future worlds and how one such future might be enabled by AI algorithms with different goals and functions than those most common today.


Autonomous Weapons And Coercive Threats

(2019)

Governments across the globe have been quick to adapt developments in artificial intelligence to military technologies. Prominent among the many changes recently introduced, autonomous weapon systems pose important new questions for our understanding of conflict generally, and coercive diplomacy in particular. These weapons dramatically decrease the cost of employing military force: in human terms on the battlefield, in financial and material terms, and in political terms for leaders who choose to pursue conflict. In this article, we analyze the implications of these new weapons for coercive diplomacy, exploring how they will influence the course of international crises. We argue that drones have different implications for relationships between relatively equal states than they do for unbalanced relationships where one state vastly overpowers the other. In asymmetric relationships, these weapons exaggerate existing power disparities. In these cases, the strong state is able to use autonomous weapons to credibly signal, avoiding traditional and more costly signals such as tripwires. At the same time, the introduction of autonomous weapons puts some important forms of signaling out of reach. In symmetric conflicts where states maintain the ability to inflict heavy damage on each other, autonomous weapons will have a relatively small effect on crisis dynamics. Credible signaling will still require traditional forms of high-cost signals, including those that by design put military and civilian populations at risk.


Technocultural Pluralism

(2019)

At the end of the Cold War, the renowned political scientist Samuel Huntington argued that future conflicts were more likely to stem from cultural frictions – ideologies, social norms, and political systems – than from political or economic frictions. Huntington focused his concern on the future of geopolitics in a rapidly shrinking world. But his argument applies as forcefully (if not more so) to the interaction of technocultures.


“Soft Law” Governance of Artificial Intelligence

(2019)

On November 26, 2017, Elon Musk tweeted: “Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA wdn’t [sic] make flying safer. They’re there for good reason.”

In this and other recent pronouncements, Musk is calling for artificial intelligence (AI) to be regulated by traditional regulation, just as we regulate foods, drugs, aircraft and cars. Putting aside the quibble that food, drugs, aircraft and cars are each regulated very differently, these calls for regulation seem to envision one or more federal regulatory agencies adopting binding regulations to ensure the safety of AI. Musk is not alone in calling for “regulation” of AI, and some serious AI scholars and policymakers have likewise called for regulation of AI using traditional governmental regulatory approaches.


One Shot Learning In AI Innovation

(2019)

Modern algorithmic design far exceeds the limits of human cognition in many ways. Armed with large data sets, programmers promise that their algorithms can better predict which prisoners are most likely to recidivate and where future crimes are likely to occur. Software designers further hope to use large data sets to uncover relationships between genes and disease that would take human researchers much longer to identify.


Genetically Modified Organisms: A Precautionary Tale for AI Governance

(2019)

The fruits of a long anticipated technology finally hit the market, with promise to extend human life, revolutionize production, improve consumer welfare, reduce poverty, and inspire countless yet-imagined innovations. A marvel of science and engineering, it reflects the cumulative efforts of a generation of researchers backed by research funding from the U.S. government and private sector investments in (predominantly American) technology companies. Though most scientists and policy elites consider the fruits of this technology to be safe, and the technology itself as a game-changer, there is still widespread acknowledgment that certain applications raise deeply challenging ethical issues, with some commentators even warning that careless or malicious applications could cause planet-wide catastrophes. Indeed, the technology has long been a fixture of science fiction, as an antagonist in allegories about hubris and science run amok—a narrative not lost on policy makers in the United States, Europe and elsewhere as they navigate the challenges and opportunities of this potentially world-changing new technology.


The Algorithm Dispositif (Notes Towards An Investigation)

(2019)

How can we speak of algorithms as political?

The intuitive answer disposes us to presume that algorithms are not political. They are mathematical functions that operate to accomplish specific tasks. In this regard, algorithms operate independently of a specific belief system or of any one system’s ideological ambitions. They may be used for political ends, in the manner in which census data may be used for voter redistricting, but in and of themselves algorithms don’t do anything political.