Migrated from eDJGroupInc.com. Author: Greg Buckles. Published: 2010-06-15 09:42:10. Format, images and links may no longer function correctly.

Howard Reissner, CEO of Planet Data, forwarded me a new eDiscovery decision with best practice implications, Mt. Hawley Ins. Co. v. Felman Production, Inc., 2010 WL 1990555 (S.D. W. Va. May 18, 2010). Being elbow deep in an ugly client issue, I did not get around to digesting the case until well after Ralph Losey, Craig Ball and others had properly dissected it. So I missed the scoop and have to settle for chewing over some of the crumbs in one of the more interesting recent discovery decisions. Stepping aside from the legal wrangling about privilege waiver, I always enjoy getting insight into the raw metrics and burden of litigation that can be dissected publicly. Start with the fact that 1,638 GB were collected via forensic imaging from 29 custodians, which works out to roughly 60 GB per custodian. Typical processing at $350-500/GB could have run Felman roughly $575-820k just to get the collection ready to filter and search by their provider, Innovative Discovery. Although the actual file/email count was not given in the opinion, we can roughly guess that it was between 8 and 12 million individual 'documents'. Even assuming that you can drop 50% in system files and the usual filters, Felman was still staring at a multimillion dollar manual review.
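The back-of-the-envelope math is easy to check. A quick sketch, where the collection size and custodian count come from the opinion but the per-GB rates are my own typical-market assumptions:

```python
# Back-of-the-envelope eDiscovery metrics.
# Collection size and custodian count are from the opinion;
# the per-GB processing rates are the article's market assumptions.
collected_gb = 1638
custodians = 29
rate_low, rate_high = 350, 500  # USD per GB, assumed typical rates

gb_per_custodian = collected_gb / custodians
cost_low = collected_gb * rate_low
cost_high = collected_gb * rate_high

print(f"{gb_per_custodian:.1f} GB per custodian")      # ~56.5, i.e. roughly 60
print(f"Processing: ${cost_low:,} to ${cost_high:,}")  # $573,300 to $819,000
```

That is over half a million dollars before a single document has been reviewed, which is why the filtering strategy in the next paragraph matters so much.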

Instead, they tested search terms for responsiveness and potential privilege. This dropped the collection down to 346 GB, roughly an 80% reduction in volume. They reviewed only the potentially privileged items that also hit the relevance criteria and produced the rest of the search results with a blanket "CONFIDENTIAL" stamp. Up to the point of production, Felman was sounding pretty savvy compared to many plaintiffs. The defendant's discovery request was much larger than they had anticipated, so they worked with their counsel and provider to winnow down the collection. From what I can tell, their privilege search actually caught most of the privileged documents on the first pass. They had some kind of index corruption on one of their 13 Concordance databases, and somehow 328 privileged documents made it into the production set, even though some or all were on the privilege log. Another 49 privileged documents were inadvertently produced, but that is attributable to Felman's lack of quality assurance and control checks. The magistrate judge called them out for not running random samples on their relevance and privilege search criteria. Simple metrics checks would have caught the mismatched numbers on the Concordance failure. A cross-check of the tracking IDs from the privilege log against the production set would have caught those items before they went out the door. As Ralph Losey points out, the overall quality of the search criteria was excellent.
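The privilege-log cross-check I am describing is nothing exotic: it is a set intersection between the tracking IDs on the privilege log and the IDs slated for production. A minimal sketch, with hypothetical document IDs for illustration:

```python
# Sketch of the cross-check described above: any tracking ID that
# appears on both the privilege log and the outgoing production set
# should be pulled before the production ships.
# The IDs below are hypothetical illustrations, not from the case.
def find_leaked_privileged(privilege_log_ids, production_ids):
    """Return the privileged IDs that also appear in the production set."""
    return sorted(set(privilege_log_ids) & set(production_ids))

privilege_log = {"DOC-000328", "DOC-001477", "DOC-002901"}
production = {"DOC-000101", "DOC-000328", "DOC-005512"}

leaked = find_leaked_privileged(privilege_log, production)
print(leaked)  # ['DOC-000328'] -> pull this item and document the check
```

A check this cheap, run and logged before every production, is exactly the kind of documented diligence a court wants to see.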

My fundamental takeaway from this case is reinforcement that reasonable diligence includes quality checks on your input, process, software and media throughout the discovery lifecycle. There are a lot of moving pieces at every step, and it only takes one hiccup to inject error into your final production, especially if you are not going to manually review every item. In light of the ever-growing size and complexity of ESI collections, we have to 'trust but verify' if we want to meet the bar of reasonable diligence. Ralph Losey correctly points out that a random sample is unlikely to have caught a potential 377 items within a 5 million item production (roughly a 0.008% error rate). However, documented QA/QC sampling might have given their arguments some chance in front of Magistrate Judge Stanley. A quality process has to be measured and documented in order to demonstrate your efforts.
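Losey's point about sampling is easy to quantify. A short sketch of the probability that a simple random sample would contain at least one of the 377 leaked items; the sample sizes are illustrative choices of mine, not figures from the case:

```python
# With 377 privileged items scattered through a 5,000,000 document
# production, what are the odds a random sample hits at least one?
def p_catch_at_least_one(bad, total, sample_size):
    """Probability a simple random sample contains >= 1 bad item
    (with-replacement approximation, fine at these proportions)."""
    p_bad = bad / total
    return 1 - (1 - p_bad) ** sample_size

# Illustrative sample sizes, not from the case.
for n in (1_500, 10_000, 50_000):
    print(f"sample of {n:>6}: {p_catch_at_least_one(377, 5_000_000, n):.0%}")
```

Under these assumptions, even a 10,000-document sample catches a leaked privileged item only about half the time, which supports Losey's point: sampling validates the process, but it was never likely to flag these specific documents. That is why the documentation of the QA/QC effort, not just the sampling itself, is what matters in front of the court.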
