The following (fictional) opinion of the (fictional) Zootopia Supreme Court of the (fictional) State of Zootopia is designed to highlight one particularly interesting issue raised by Solon Barocas and Andrew Selbst in Big Data's Disparate Impact.1 Their article discusses many ways in which data-intensive algorithmic methods can go wrong when they are used to make employment and other sensitive decisions. Our vignette deals with one in particular: the use of algorithmically derived models that are both predictive of a legitimate goal and have a disparate impact on some individuals. Like Barocas and Selbst, we think it raises fundamental questions about how anti-discrimination law works and about what it ought to do. But we are perhaps slightly more optimistic than they are that the law already has the doctrinal tools it needs to deal appropriately with cases of this sort.

After the statement of facts and procedural history, you will be given a chance to pause and reflect on how the case ought to be decided under existing United States law. Zootopia is south of East Dakota and north of West Carolina. It is a generic law-school hypothetical state, where federal statutes and caselaw apply, but without distracting state-specific variations. The citations to articles, statutes, regulations, and cases are real; RDL v. ZPD and Hopps v. Lionheart are not.2 Otherwise, life in Zootopia is much like life here, with one exception: It is populated entirely by animals.

Published: 7 California Law Review Online 164 (2017) (responding to Solon Barocas and Andrew Selbst, Big Data's Disparate Impact, 104 California Law Review 671 (2016))