Monday, May 20, 2013
Plaintiffs Discover Risks of Refusing to Participate in Predictive Coding Discovery
Let’s see whether it works: Discovery! Are you excited? How about this: Technology Assisted Review!! Nothing yet? How about: Predictive Coding!!! We gave you three exclamation points for that one. Are you pumped yet?
Yeah, neither are we. But we're going to discuss these things anyway, in particular the way in which the court addressed them in a recent MDL decision in the hip implant litigation. In re Biomet M2A Magnum Hip Implant Prods. Liab. Litig., 2013 WL 1729682 (N.D. Ind. Apr. 18, 2013). Why? Because it’s important for anyone whose practice involves discovery of massive amounts of electronically stored information (ESI) – and mass torts certainly qualify – to understand the potential cost savings for clients presented by technology assisted searches and the legal viability of implementing them.
We’ve blogged about predictive coding before. Look here. In short, predictive coding software “learns” from the user’s selections or preferences and identifies – with greater accuracy as it learns – what the user wants to find. It’s used for many things on the Internet, and it’s now being used to identify electronic documents for production in litigation. The process involves an initial interaction between the software and reviewing attorneys, but at some point the software should be able to take it from there alone (for the most part). Here’s how the MDL court described the process that Biomet used to conduct its review of the 2.5 million documents it selected for review:
Under predictive coding, the software “learns” a user’s preferences or goals; as it learns, the software identifies with greater accuracy just which items the user wants, whether it be a song, a product, or a search topic. Biomet used a predictive coding service called Axelerate and eight contract attorneys to review a sampling of the 2.5 million documents. After one round of “find more like this” interaction between the attorneys and the software, the contract attorneys (together with other software recommended by Biomet’s e-discovery vendor) reviewed documents for relevancy, confidentiality, and privilege.
Id. at *1. While predictive coding can reduce costs, it still isn’t cheap. The review cost Biomet $1.07 million, and Biomet projected that its ultimate costs would total $3.25 million. But a manual attorney review would have cost much more, and what plaintiffs were asking the court to order Biomet to do would have cost millions more.
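For readers curious about the mechanics behind the court’s description, the “find more like this” loop boils down to ranking unreviewed documents by their similarity to documents attorneys have already marked relevant. Here’s a minimal, purely illustrative Python sketch of that idea – this is our toy example, not Biomet’s actual Axelerate workflow, and real predictive-coding tools use far more sophisticated statistical models:

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase a document and keep its alphabetic word tokens."""
    return [w for w in text.lower().split() if w.isalpha()]

def centroid(docs):
    """Average term-frequency vector over a set of documents."""
    total = Counter()
    for d in docs:
        total.update(tokenize(d))
    n = len(docs)
    return {term: count / n for term, count in total.items()}

def cosine(vec_a, vec_b):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(vec_a[t] * vec_b.get(t, 0.0) for t in vec_a)
    norm_a = math.sqrt(sum(v * v for v in vec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in vec_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def rank_by_similarity(seed_relevant, unreviewed):
    """Rank unreviewed documents by similarity to documents that
    reviewers already marked relevant ("find more like this")."""
    seed = centroid(seed_relevant)
    scored = [(cosine(seed, centroid([d])), d) for d in unreviewed]
    return [d for score, d in sorted(scored, reverse=True)]

# Hypothetical documents for illustration only:
seeds = ["hip implant failure metal debris",
         "implant revision surgery report"]
pool = ["quarterly marketing budget",
        "patient implant failure complaint",
        "office picnic schedule"]
print(rank_by_similarity(seeds, pool)[0])
```

Each round of attorney review adds newly coded documents to the seed set, so the ranking sharpens as the software “learns” – which is the iterative interaction the opinion describes.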
You see, Biomet got to the 2.5 million documents by culling a total universe of 19.5 million documents using keyword searches and de-duplication. Id. Only then did Biomet apply predictive coding to search the 2.5 million documents. Plaintiffs, however, claimed that keyword searching wasn’t accurate enough and wanted Biomet to use predictive coding on the entire universe of 19.5 million documents.
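That first-stage cull – keyword screening plus de-duplication – is conceptually simple. As a hedged sketch (again, our illustration, not a description of Biomet’s vendor’s actual software), it amounts to dropping exact duplicates and keeping only documents that hit at least one search term:

```python
import hashlib

def cull(documents, keywords):
    """First-stage culling: de-duplicate exact copies, then keep only
    documents containing at least one keyword (case-insensitive)."""
    seen_hashes = set()
    kept = []
    terms = [k.lower() for k in keywords]
    for doc in documents:
        # De-duplication: skip documents whose content hash was seen before.
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue
        seen_hashes.add(digest)
        # Keyword screen: keep the document if any search term appears.
        text = doc.lower()
        if any(term in text for term in terms):
            kept.append(doc)
    return kept

# Hypothetical documents for illustration only:
docs = ["Implant recall memo", "Implant recall memo", "Lunch menu"]
print(cull(docs, ["recall"]))
```

Plaintiffs’ objection was to exactly this stage: a keyword screen only finds documents that happen to use the chosen terms, which is why they wanted predictive coding run against the full 19.5 million documents instead.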
They may or may not have had a point, but here’s where the plaintiffs found themselves in trouble with the court. Biomet had asked plaintiffs’ steering committee to suggest additional keywords to use in the search of the 19.5 million documents and even to review the (non-privileged) documents that the predictive coding search did not select from the 2.5 million documents – so that plaintiffs could assess the process. Plaintiffs refused both offers. Why? Well, plaintiffs believed that both these efforts, even with their participation, would not “assure proper document production.” Id. They believed that Biomet should have used predictive coding on all 19.5 million documents and that, regardless, Biomet should have waited until the MDL was formed and addressed discovery before reviewing and producing documents. Id.
So plaintiffs asked the court to order Biomet to start over and review the 19.5 million documents using predictive coding. That’s where plaintiffs lost the court.
The court didn’t see the issue as one of whether predictive coding was a better review method than keyword searching. The court was faced with the question of whether Biomet’s process met FRCP 26(b)(2)(C)’s proportionality test (burden and expense versus benefit) and whether Biomet should be required to start over at a cost of millions of dollars. The court came out on Biomet’s side:
The issue before me today isn’t whether predictive coding is a better way of doing things than keyword searching prior to predictive coding. I must decide whether Biomet’s procedure satisfies its discovery obligations and, if so, whether it must also do what the Steering Committee seeks. What Biomet has done complies fully with the requirements of Federal Rules of Civil Procedure 26(b) and 34(b)(2).
Id. at *2.
Plaintiffs had a number of arguments for why they should have won. While Biomet ran confidence tests suggesting that “a comparatively modest number of documents would be found” if it redid things using a predictive coding search on the 19.5 million documents, plaintiffs argued that keyword searches using Boolean methods generally identify less than 25% of relevant documents. But the court found nothing in the record to equate the keyword searches done by Biomet to the type of Boolean search that plaintiffs used as the basis for their argument. Id. And the court was in no way sympathetic to the steering committee’s argument that Biomet should have waited to start its review until after the MDL was formed and addressed discovery. Biomet had discovery obligations in the original district courts, and it was required to meet them:
It might be that the Steering Committee’s argument could carry the day in some cases, but this one doesn’t seem to be such a case. The Steering Committee hasn’t argued (and I assume it can’t argue) that Biomet had no disclosure or document identification obligation in any of the cases that were awaiting a ruling on (or even the filing of) the centralization petition. Until the MDL Panel enters a centralization order under 28 U.S.C. § 1407 (or transfers a tag along pursuant to an earlier centralization order), a transferee court is free to act on pending matters. Indeed, through its conditional transfer orders, the Panel regularly encourages transferee courts to do so. To hold that a party that behaves as the transferee court directs, or that follows the transferee court’s standing procedures, does so only by forfeiture of the proportionality provision of Rule 26(b)(2)(C), seems an uncongenial exercise of whatever discretion I have. It also would seem inconsistent with the purposes of centralization under § 1407.
Id. at *3.
There can be little doubt that choosing to sit idly by while Biomet spent over a million dollars reviewing documents hurt plaintiffs’ chances. Under those circumstances, the court found that ordering Biomet to redo its review to the tune of even more millions of dollars didn’t fit within FRCP 26(b)(2)(C)’s proportionality standard:
It might well be that predictive coding, instead of a keyword search, at Stage Two of the process would unearth additional relevant documents. But it would cost Biomet a million, or millions, of dollars to test the Steering Committee’s theory that predictive coding would produce a significantly greater number of relevant documents. Even in light of the needs of the hundreds of plaintiffs in this case, the very large amount in controversy, the parties’ resources, the importance of the issues at stake, and the importance of this discovery in resolving the issues, I can’t find that the likely benefits of the discovery proposed by the Steering Committee equals or outweighs its additional burden on, and additional expense to, Biomet. Fed. R. Civ. P. 26(b)(2)(C).
Id. at *2.
Now, the plaintiffs’ steering committee will no doubt have more opportunities to address Biomet’s document review methods as the litigation moves forward. But their decision to refuse Biomet’s invitation to participate from the start lost them an opportunity to have input into the original culling of the 19.5 million documents. Plaintiffs may see a lesson in this: participate.