From the moment social media companies like Facebook were created, they have been largely immune to suit for the actions they take with respect to user content. This is thanks to Section 230 of the Communications Decency Act, 47 U.S.C. § 230, which offers broad immunity to sites for content posted by users. But seemingly the only thing a deeply divided legislature can agree on is that Section 230 must be amended, and soon. Once that immunity is altered, whether by Congress or the courts, these companies may be liable for the decisions and actions of their algorithmic recommendation systems: artificial intelligence models that, as Facebook whistleblower Frances Haugen explained in her congressional testimony, sometimes amplify the worst in our society.
But what, exactly, will it look like to sue a company for the actions of an algorithm?
Whether such claims sound in tort, like defamation, or arise under statutes such as those aimed at curbing terrorism, the mechanics of bringing them will surely occupy academics and practitioners in the wake of changes to Section 230. To that end, this Article is the first to examine how the issue of algorithmic amplification might be addressed by agency principles of direct and vicarious liability, specifically in the context of holding social media companies accountable. The Article first covers the basics of algorithmic recommendation systems, discussing them in lay terms and explaining why Section 230 reform may spur claims with a profound impact on traditional tort law. It then looks to sex trafficking claims against social media companies, an area already exempted from Section 230's shield, as an early model of how courts might address other claims against these companies. It also examines the hurdles, such as causation, that will remain even after Section 230 is amended. It concludes by offering policy considerations for both lawmakers and jurists.