CONNECTING PEOPLE TO IDEAS AND TO EACH OTHER
Essay

Computers and Robots Can Copy Your Work, and Get Away With It

So Long as Computers Don’t Understand the Copied Content, Copyright Law Will Stay Focused on People

Copyright has a weird relationship with computers. Sometimes it completely freaks out about them; sometimes it pretends it can’t see them at all. The contrast tells us a lot about copyright—and even more about how we relate to new technologies.

Start with the freak-out. One thing that computers are good for is making copies—lots of copies. Drag your music folder from your hard drive to your backup Dropbox and congratulations, you’ve just duplicated thousands of copyrighted songs. If you look up the section of the Copyright Act that sets out what counts as infringement, the very first Thou Shalt Not is “reproduce the copyrighted work.” In theory, Congress could have added some language saying that putting your music in a Dropbox folder no one else can access isn’t infringement. In practice, well, it’s Congress.

Congressional inaction has meant that the problem of explaining why the Internet isn’t just an infringement machine in need of a good unplugging has been kicked over to the courts. (Yes, the courts staffed by judges who call Dropbox “the Dropbox” and “iDrop.”) And in the process of keeping computers legal, the judges who make copyright law have developed some surprisingly broad rules shielding automatically made copies from liability.

Take, for example, the 2009 case A.V. v. iParadigms, in which high schools compelled students to submit their term papers to Turnitin, a plagiarism-detection site. Turnitin first compares each submitted paper to those already in its database, looking for suspicious similarities; then it stores the paper to compare against future submissions. Four students sued, arguing that these stored copies infringed their copyrights in their papers.

The court disagreed, because of course you shouldn’t be able to use copyright to keep your teachers from finding out whether you cheated on your homework. But its reasoning is fascinating. Turnitin, the court held, made a “transformative” use of the papers because its use was “completely unrelated to expressive content.” Turnitin’s computers might have copied the papers, but they didn’t really read them. The court added, “The archived student works are stored as digital code, and employees of [Turnitin] do not read or review the archived works.”

Courts use similar logic in case after case. It’s not infringement if computers “read or review” the new copies, only if people do. Google famously scanned millions of books. Completely legal, four courts have agreed, because it’s not as though Google is turning the complete books over to people. “Google Books … is not a tool to be used to read books,” wrote one judge. In another strand of the litigation, the parties at one point proposed a settlement that would have allowed “non-consumptive” digital humanities research on the scanned books, defined as “research in which computational analysis is performed on one or more Books, but not research in which a researcher reads or displays substantial portions of a Book to understand the intellectual content presented within the Book.” This was fine, in the view of the author and publisher representatives who negotiated the proposed settlement. Computers can do what they want with books as long as no one actually “understand[s]” their “intellectual content.”

This attitude—computers don’t count—isn’t new, either. A century ago, the cutting edge in artistic robotics was the player piano. The Supreme Court heard a player-piano case in 1908 and held that the paper rolls “read” by the player pianos weren’t infringing. The rolls, Justice William Day reasoned, “[c]onvey[] no meaning, then, to the eye of even an expert musician.” Instead, they “form a part of a machine. … They are a mechanical invention made for the sole purpose of performing tunes mechanically upon a musical instrument.” The anthropocentrism is unmistakable. I’ve cataloged many different settings where copyright law finds ways to overlook copying as long as no humans are in the loop.

On the one hand, this makes perfect sense. Copyright is designed to encourage human creativity for human audiences. If a book falls in a forest and no one reads it, does it make an infringement? It seems like the only sensible answer is “No harm, no foul.” On the other hand, there’s something strange about a rule that tells technologists just to turn the robots loose. It encourages uses that don’t have much to do with human aesthetics while discouraging uses that do.

This hands-off approach to robotic readership stands in sharp contrast to copyright’s surprisingly obsessive fretting about robotic authorship. We’re at the dawn of a golden age of algorithmic authorship. Twitter bots like Olivia Taters and Hottest Startups, simple as they are, are capable of amazing poetry. From Push Button Bertha to Microsoft Songsmith, computer-generated music ranges from beautiful to banal. Special-effects artists and video-game programmers use procedural content generation to make vast imaginary worlds far beyond what any one person could hope to draw or design. And of course spambots and telemarketing robots (and counter-robots) are getting eerily good at mimicking human expression.

If all you knew about copyright was the way it treats computer-generated copies, you might think it would similarly look the other way and ignore computer-generated creativity. But no! No two plays of a video game are the same; the computer produces a new and different sequence of sights and sounds every time through. Copyright doesn’t care; video games are still copyrightable. And of course they are: it would be ridiculous if you could just completely rip off games, and case after case holds that you can’t.

But even as copyright law goes on recognizing copyright in computer-generated works, it can’t help obsessively worrying about them with the same kind of nervous energy it gives to monkey selfies and for the same reason: What if there’s no author? What if a creative work just popped into existence, without being clearly traceable to the artistic vision of a specific human? What then, buddy?

The funny thing is that, just as the player-piano roll shows that mechanical copying long predates computers, algorithmic creativity does too. You know what’s a device for making art according to rigidly specified algorithmic rules? A Spirograph. You know what else is? A Musikalisches Würfelspiel (sometimes apocryphally named for Mozart), a game in which you roll dice to select measures of music to string together into a minuet. Computers are faster and fancier, but for the most part not fundamentally different. There’s no need to futz around speculating about whether your iPhone is a copyright-owning “author” of a Temple Run maze, any more than a Spirograph is the author of a hypotrochoid drawing. Typically either the programmer or the user or both are authors, and that’s good enough.
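The dice-game mechanic is simple enough to sketch in a few lines of code—a toy illustration of the idea, not a reconstruction of any historical Würfelspiel table (the measure labels below are invented placeholders):

```python
import random

# Toy Musikalisches Würfelspiel. Real 18th-century tables mapped the
# sum of two dice (2-12) to one of eleven pre-composed measures for
# each position in the minuet; here the "measures" are just labels.
MEASURE_TABLE = [
    [f"measure-{pos}-{opt}" for opt in range(11)]  # 11 options: sums 2..12
    for pos in range(16)                            # 16 positions in the piece
]

def roll_minuet(rng=random):
    """Roll two dice per position and string the chosen measures together."""
    minuet = []
    for options in MEASURE_TABLE:
        roll = rng.randint(1, 6) + rng.randint(1, 6)  # two-dice sum, 2..12
        minuet.append(options[roll - 2])              # map sum to index 0..10
    return minuet

piece = roll_minuet()
# Every play yields one of 11**16 possible minuets: algorithmic
# creativity, with no computer anywhere in sight.
```

The "author" question looks the same here as in the essay: the composer of the measure table and the player who rolls the dice are the plausible candidates, not the dice.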

There will be harder cases of what Bruce Boyden calls “emergent works” that arise out of unpredictable algorithmic interactions. Where neither the programmer nor the user can reasonably foresee what a computer will do, the case for calling either of them an author is weak; they lack the kind of artistic vision copyright is supposed to promote and reward. But what’s interesting and tricky about these emergent works is not that they come from computers but that they’re unpredictable by anyone involved in their creation.

In an age of police killbots, worrying about whether Futurama’s Bender owns a copyright in his dream about killing all humans may seem a little beside the point. But copyright provides a useful window for thinking about hot-button issues in law and technology, ironically because the stakes are so much lower. There are low-tech precedents for new high-tech puzzles, if we care to see them.

The key is not to treat “computers” or “robots” or “drones” or other new kinds of technologies as unified phenomena we have to figure out all at once, but instead to look at the different kinds of ways they operate and can be used. The Dallas bomb robot was under direct police control at all times; it was a tool for safely delivering lethal force from a distance in the same way that a sniper rifle is. The most important issue it raised was the security of its communications channel—because the last thing you want when you strap a pound of C-4 to a robot is for someone else to hijack the controls. That’s a very different kind of problem than worrying about delegating life-or-death decisions to algorithms with a limited human presence in the loop. Lumping them together as “lethal robots” obscures more than it reveals; it makes it harder to identify which robots are dangerous and how, and harder to figure out what to do about them.

The same is true for copyright, for privacy, for civil rights, and for the dozens of other pressing public policy problems surrounding new technologies. You learn more about augmented reality by thinking about Pokémon Go than vice versa. Technology policy is complicated because the world is complicated.