It seems that more and more companies are beginning to use automated algorithms to detect potential copyright infringement online. This is fine. What is worrisome, however, is that such companies build automated DMCA notice-and-takedown scripts into their detectors, thus eliminating the ability (and, in my opinion, the obligation) of a human decision-maker to determine whether the alleged infringement is in fact infringement.
The Columbia Journalism Review has an interesting article about how documentary filmmaker Steven Rosenbaum had one of his TEDx presentation videos taken down from YouTube because it included a fair-use snippet of one of his earlier documentaries:
In 2004, I’d licensed the DVD rights to Anchor Bay Entertainment. Almost eight years ago. It was a seven-year deal, and thinking back to where the world was in 2004, there was no mention of “streaming rights” or “Web rights.” But when Anchor Bay sold to Starz, Starz determined that it controlled the rights on Netflix, YouTube, Hulu, and other Web video distributors. Putting aside for a moment the question of whether it did or didn’t have those rights, my contract with Starz had ended more than a year earlier.
So what had happened? I presume that Starz had provided YouTube with a “digital fingerprint” of my film—and all the films they’d previously had the rights to. And since there is no way to make a judgment about fair use in an automated system, Starz had discovered the segment of the film inside my talk and issued a takedown.
Certainly the DMCA provides relief to alleged infringers when they are, in fact, in the right. It is worrisome, however, that so many companies have taken to automated monitoring. The monitoring itself is not the problem, but the automated issuance of take-down notices does not, in my opinion, conform with the spirit of the DMCA.