Migrated from eDJGroupInc.com. Author: Greg Buckles. Published: 2012-03-27.
Last week I was a panelist at the 2012 Masters Series event in Houston and enjoyed the lively, frank discussions about purchasing trends, privacy issues and more that continued into the social gathering afterward. As you might expect, predictive coding and the latest Da Silva filing were hot topics, especially amongst providers of managed review. One remark by Jim Wagner, CEO of DiscoverReady, resonated with me, and I told him that I was going to steal it for a blog. To paraphrase, “The market sees linear review as disorganized review.” He was right on target. Linear review has become synonymous with plowing through millions of randomly ordered emails and documents in the least efficient and effective manner possible. I ask you, “In the last five years, have you reviewed a collection that had not been culled, searched, prioritized, deduplicated, email threaded or otherwise optimized for review batching?”
If you answered “Yes,” then I hope it was a tiny collection. The odds are that you sorted the email by Subject and then Date to at least approximately group email threads, or used other manual strategies to organize your items. The marketing departments behind advanced analytics and review software have been using the old 40 docs/hour standard for the review of scanned paper documents to back their ROI claims. They compare their products to ‘linear review’ of raw collections without defining the label or quantifying the process.
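That subject-and-date sort is worth making concrete. Below is a minimal Python sketch of the manual thread-grouping strategy described above; the sample records, field names and prefix list are illustrative assumptions, not any review platform’s actual schema.

```python
import re
from datetime import datetime

# Hypothetical sample records; real collections come from a review platform export.
emails = [
    {"doc_id": "DOC-002", "subject": "RE: Q3 Forecast", "sent": datetime(2011, 10, 3, 9, 15)},
    {"doc_id": "DOC-001", "subject": "Q3 Forecast", "sent": datetime(2011, 10, 1, 14, 2)},
    {"doc_id": "DOC-003", "subject": "FW: Q3 Forecast", "sent": datetime(2011, 10, 4, 8, 30)},
    {"doc_id": "DOC-004", "subject": "Lunch Friday?", "sent": datetime(2011, 10, 2, 11, 0)},
]

# Reply/forward prefixes to strip; an assumed, non-exhaustive list.
PREFIX = re.compile(r"^\s*(re|fw|fwd)\s*:\s*", re.IGNORECASE)

def normalized_subject(subject: str) -> str:
    """Strip reply/forward prefixes so replies sort next to the original message."""
    while PREFIX.match(subject):
        subject = PREFIX.sub("", subject, count=1)
    return subject.strip().lower()

# Sort by normalized subject, then date: a rough approximation of email threading.
for msg in sorted(emails, key=lambda m: (normalized_subject(m["subject"]), m["sent"])):
    print(msg["doc_id"], msg["sent"].date(), msg["subject"])
```

Even this crude normalization puts a reply and its original side by side in the review queue, which is exactly the approximate threading reviewers have been improvising for years.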
A mature eDiscovery process will leverage a wide variety of functions to filter and organize raw collections prior to review. Technology Assisted Review (TAR) platforms combine these functions with predictive, propagated or rule-based training systems. They definitely can increase review efficiency and accuracy when incorporated into an overall relevance workflow. eDJ interviewed TAR customers, and every one of them used some combination of culling searches, deduplication and similar steps as part of their workflow. The TAR marketing messages (yes – I actually did receive a cookie with “eDiscovery made easy” on it) pitch TAR as replacing linear review, when every one of these systems actually depends on human review to train it.
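To put one of those workflow steps in concrete terms, here is a minimal, hypothetical sketch of hash-based deduplication in Python. The whitespace normalization and MD5 choice are assumptions for illustration only; production platforms typically hash selected metadata fields plus the message body, and no vendor’s actual method is implied.

```python
import hashlib

def content_hash(text: str) -> str:
    """MD5 over whitespace-normalized, lowercased text (an illustrative choice)."""
    normalized = " ".join(text.split()).lower()
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

def deduplicate(documents):
    """Keep the first document seen for each content hash; later copies are culled."""
    seen, unique = set(), []
    for doc in documents:
        digest = content_hash(doc["text"])
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

# Hypothetical sample documents.
docs = [
    {"doc_id": "DOC-001", "text": "Please review the attached contract."},
    {"doc_id": "DOC-002", "text": "Please  review the attached contract. "},  # duplicate copy
    {"doc_id": "DOC-003", "text": "Meeting moved to 3pm."},
]

survivors = deduplicate(docs)
print(f"{len(docs) - len(survivors)} of {len(docs)} documents culled as duplicates")
```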
The old-fashioned, disorganized review of raw collections may be gasping its last breath on some poor paralegal’s desktop, but I believe that optimized, modern review is alive and kicking. The real question is whether counsel will allow a TAR system to designate relevance without confirmation from human eyes. In essence, collection and culling searches have been doing exactly that for years, upstream from the review room. The TAR learning systems are only controversial because counsel now perceives them as infringing on their domain. After all, even if your TAR system excludes critical documents, counsel bears the ultimate responsibility for the reasonableness and completeness of the productions. As long as these technologies were used to shape collection or exclusion criteria, they seemed to escape controversy. Everyone understood the advantages they offered over simply making up Boolean search terms.
In closing, “Linear Review” as we practice it today is very different from what TAR providers are comparing their offerings to. As one of my favorite quotes goes, “You keep using that word. I do not think it means what you think it means.” So what does linear review mean to you?