Limits and risks of GPT-3 applications

License: Creative Commons Attribution 4.0 (CC BY 4.0)
Abstract

Recent GPT-3 applications have fueled new optimism in AI research. To explore their limits and risks, we investigated how far we can get in developing a so-called digital replica of Daniel Dennett, generating text outputs from a large language model fine-tuned on Dennett's philosophical writing. In consultation with Dennett himself, we compare several fine-tuning strategies and evaluate the outputs. Analyzing the failures and the successes allows us to address technical and ethical issues such as:

1. How accurate can such models be with current technology?
2. To what extent (if at all) might it be acceptable to present a model's outputs as representing an author's views?
3. Should copyright holders have control over such models?
4. How can one address the risks of making such models public (including possible over-reliance on their accuracy)?
5. How good are experts and non-experts at distinguishing humans from their replicas?
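The abstract mentions comparing several fine-tuning strategies on a corpus of Dennett's writing but does not describe how the training data were prepared. As a rough illustration only, the sketch below builds a prompt/completion JSONL file of the kind commonly used for GPT-3 fine-tuning; the directory name `dennett_texts`, the chunk size, and the empty-prompt format are assumptions for the example, not the authors' actual pipeline.

```python
import json
from pathlib import Path

# Hypothetical paths and chunk size -- not taken from the paper.
SOURCE_DIR = Path("dennett_texts")   # plain-text excerpts of the author's writing
OUTPUT_FILE = Path("fine_tune_data.jsonl")
CHUNK_WORDS = 200                    # rough completion length per training example


def chunk_words(text: str, size: int):
    """Yield consecutive word chunks of roughly `size` words."""
    words = text.split()
    for i in range(0, len(words), size):
        yield " ".join(words[i:i + size])


def build_examples():
    """Turn each text file into prompt/completion pairs.

    Here the prompt is left empty and the completion is a passage of the
    author's prose, one common pattern for style fine-tuning; other
    strategies (e.g. question-answer pairs) are equally possible.
    """
    for path in sorted(SOURCE_DIR.glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        for chunk in chunk_words(text, CHUNK_WORDS):
            yield {"prompt": "", "completion": " " + chunk}


if __name__ == "__main__":
    with OUTPUT_FILE.open("w", encoding="utf-8") as f:
        for example in build_examples():
            f.write(json.dumps(example) + "\n")
    print(f"Wrote {OUTPUT_FILE}")
```

The choice of how completions are paired with prompts (empty prompts, questions, interview-style exchanges, etc.) is itself a fine-tuning design decision; the specific strategies the paper compares are not listed in the abstract.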
