Technology-Assisted Review (TAR) is not simply a hyped-up market category. In fact, I predicted late last year that 2012 would be the year TAR goes mainstream. That prediction was based on data from an eDJ Group survey, which found that by the end of 2012, market penetration of predictive coding (a TAR technique) should exceed 50%. In getting beyond the hype of the Da Silva Moore and Kleen Products cases, I have had the opportunity to speak with several end users about their experiences with predictive coding and other forms of TAR. Those stories will be featured in an upcoming report from eDJ Group.
There seems to be a notion that TAR requires leaving some documents unreviewed, a notion that eDJ's data suggests makes many practitioners uneasy.
While TAR does promise to allow relevance and privilege calls without necessarily reviewing every single document, it does not require skipping review. In fact, there is a middle-ground use case in which every document is still looked at, but TAR is used to prioritize the order and allocation of review. I had a chance to talk with Mary Ann Benson, Director of Consulting Services at Epiq Systems, about TAR. What made the discussion interesting was that Mary Ann's team has been involved in approximately 60 engagements that used TAR – this is real, on-the-ground perspective. The team at Epiq uses Equivio's Relevance for TAR engagements, and many of their projects have involved the review of all documents, but in a prioritized manner.
Relevance scores each document based on input from an expert. The scores are then used to prioritize the collection, with higher-scoring documents being the most likely to be responsive. This prioritization allows whoever is running the review to allocate the right resources to the right documents: higher-priced reviewers, or the case team itself, take the high-scoring documents, while lower-priced contract attorneys or paralegals take the lower-scoring ones.
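The tiered allocation described above can be sketched in a few lines. This is purely illustrative: the score scale, thresholds, and tier names are my own assumptions, not Equivio Relevance's actual output or API.

```python
# Illustrative sketch of score-based review prioritization.
# The 0-100 score scale, cutoffs, and tier labels are hypothetical assumptions,
# not Equivio Relevance's actual behavior.

def assign_review_tier(score, high_cutoff=80, low_cutoff=30):
    """Route a document to a reviewer tier based on its relevance score (0-100)."""
    if score >= high_cutoff:
        return "case team"           # most likely responsive: senior reviewers
    elif score >= low_cutoff:
        return "contract attorneys"  # middle band: lower-cost first-pass review
    return "paralegals"              # least likely responsive: lowest-cost review

# Example: prioritize a small collection, highest scores reviewed first.
documents = [("doc1", 92), ("doc2", 15), ("doc3", 55), ("doc4", 81)]
prioritized = sorted(documents, key=lambda d: d[1], reverse=True)
for doc_id, score in prioritized:
    print(doc_id, score, assign_review_tier(score))
```

The point of the sketch is that every document still gets reviewed; the scores only change who sees it and when.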
This process requires an expert, knowledgeable about the matter at hand and senior enough to make hard decisions about responsiveness, to train the software to identify responsive and non-responsive content. The key is to get the system to "stability". (Note: stability is a measurement of the agreement between the Relevance predictions on a sample set and the expert's decisions. When the difference between these decisions is stable to the client's satisfaction, the training is "enough".) The software measures and displays stability indicators in the interface as the expert reviews documents; after every 40 incremental documents, the team checks the indicators. Once they show little or no incremental change, it is clear that the software has learned, and the stable system then scores the full collection. Mary Ann cautions that the distribution of scores varies from matter to matter, and that each project requires careful review of the results when determining how best to structure the review workflow.
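One way to picture that stability check is the loop below. This is a loose sketch under my own assumptions: the agreement metric, window, and tolerance are made up for illustration, since the actual Relevance stability calculation is Equivio's own.

```python
# Illustrative sketch of a stability check: after each batch of 40
# expert-reviewed documents, compare the system's predictions against the
# expert's decisions. The agreement metric, window, and tolerance are
# assumptions for illustration, not Equivio's actual stability measure.

BATCH_SIZE = 40

def agreement_rate(predictions, expert_decisions):
    """Fraction of documents where the model's call matches the expert's."""
    matches = sum(p == e for p, e in zip(predictions, expert_decisions))
    return matches / len(predictions)

def is_stable(history, window=3, tolerance=0.02):
    """Declare stability when the last few batch agreement rates barely change."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) <= tolerance

# Example: agreement rates recorded after successive 40-document batches.
history = [0.70, 0.81, 0.88, 0.915, 0.92, 0.925]
print(is_stable(history))  # little incremental change in the last batches
```

The shape matters more than the numbers: training stops not at a fixed document count, but when incremental batches stop changing the picture.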
In practice, many of Epiq's clients have been fairly conservative in their approach to using the scores to decide which documents will be deemed likely non-relevant and sent to lower-cost reviewers. Several, though, have been aggressive, sampling larger sections of the lower-scored documents to assess and confirm non-responsiveness. These clients have seen dramatic cost savings. Others have skipped traditional first-pass review of the highest-scoring documents (thereby reducing cost) and moved them directly into privilege or second-pass review.
The key, from Mary Ann's perspective, is determining the client's level of comfort. Even conservative clients see benefits, because the right documents can be prioritized for the case team to review immediately while lower-cost resources review the remainder of the collection. While eliminating documents below a certain score entirely would yield even greater savings, the prioritization approach is one way to take baby steps with TAR. From there, usage can evolve more aggressively as a client gets comfortable with the practice and the results.