Author: Mikki Tomlinson. Published: 2012-02-02.

The Honorable Andrew J. Peck, United States Magistrate Judge for the Southern District of New York, graciously allowed me to interview him after the LTNY Man vs. Machine: The Promise/Challenge of Predictive Coding & Other Disruptive Technologies session, in which he participated as a panelist. Judge Peck shared the panel with industry luminaries Maura Grossman and Ralph Losey, and moderator Dean Gonsowski. Overall, the session was excellent: very educational and well organized.
When I reached out to Judge Peck last week to request the interview, my intention was to write a review of the session. I prepared questions and took fast and furious notes during the session. However, between the end of the session and the time we sat down for a bite to eat and proceeded with the interview, I realized that a session review is not what would benefit the eDiscovery community most. I decided, instead, to open up a discussion with the community. Below are some of my insights and questions on the session, my post-session discussion with Judge Peck, and the hot topic of Predictive Coding/Technology Assisted Review (“PC-TAR”). Do you agree, disagree, or have something to add? Did you attend this or other PC-TAR sessions? What did you think? Please post your comments.
Key Word Searches Don’t Work? While I waited for Judge Peck after the session, I had an opportunity to visit with my friend and industry veteran Chuck Kellner. Chuck disagrees with blanket statements that key word searches simply don’t work, and with the insinuation that service providers favor the key word method for profit. While a strong advocate of PC-TAR as a major improvement over iterative key word search, Chuck focused his comments on the intent and recommendations of responsible service providers. He expressed that: (1) experienced, quality, ethical service providers have been motivated by client need to reduce the overall size of review and the cost of discovery; and (2) the method of iterative development of key words can be, and has been, useful and defensible when done properly. Chuck went on to discuss how to develop iterative workflows, sampling, and processes to use key words as a means of locating and managing ESI in the discovery process (a minimal sketch of such an iterative sampling loop appears below). We discussed the difference between a solid, iterative process and “guessing” at key words and simply trudging forward down that path. That kind of “guessing” is what drew Judge Peck’s attention in the Gross decision (William A. Gross Constr. Assocs. v. Am. Mfrs. Mut. Ins. Co., 2009 U.S. Dist. LEXIS 22903 (S.D.N.Y. Mar. 19, 2009)).
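To make the difference between guessing and an iterative process concrete, here is a minimal sketch of the kind of sampling loop Chuck described. It is illustrative only: the function name, sample size, and document populations are hypothetical, not drawn from any particular vendor’s workflow.

```python
import random

def sample_for_review(doc_ids, sample_size=200, seed=42):
    """Draw a simple random sample of documents for manual QC review."""
    rng = random.Random(seed)
    return rng.sample(doc_ids, min(sample_size, len(doc_ids)))

# Hypothetical populations from one iteration of key word searching:
hit_set = list(range(10_000))           # documents matching the current terms
null_set = list(range(10_000, 60_000))  # documents matching none of the terms

# Reviewers code each sample; the results drive the next iteration:
# refine terms that produce noise, add terms for topics found in the null set.
precision_sample = sample_for_review(hit_set)  # how many hits are truly responsive?
elusion_sample = sample_for_review(null_set)   # did the terms miss responsive docs?
```

The point is simply that each round of key word refinement is informed by measured results rather than intuition, which is what separates a defensible iterative process from trudging forward on a guess.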
This brings me to my next question…
Will PC-TAR Force the Industry Into Better Workflows? The reality of the state of our industry is that there are still a lot of attorneys and litigants who do not subscribe to well-designed (or any) workflows. Judge Peck told a story during the session that highlights this very fact. If you find yourself before Judge Peck, you will be required to complete a Joint Electronic Discovery Submission and Proposed Order, which is Exhibit “B” to the Judge’s Rule 16 initial pretrial conference (IPTC) scheduling order. In his story, Judge Peck spoke of a case in which the parties agreed that they would print all of their ESI and exchange it in paper form. After he denied the proposal, one of the parties filed a motion for reconsideration. Unbelievable? Not really. I still see this in practice a lot. More commonly than the paper scenario, I see parties blindly selecting key words with nothing to back up the selection (such as asking custodians) and then failing to sample them, or processing all data with no filtering at all (such as an applicable date range). Neither of these methods demonstrates an efficient and effective (or any) workflow.
One of the most common statements we are hearing in discussions surrounding PC-TAR is that if you want to be able to defend its use, you must have a well-developed and solidly documented process that includes appropriate levels of sampling and QC. Wouldn’t you agree that this should apply to all ESI review projects, no matter what technology or approach is being used?
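As a rough illustration of what “appropriate levels of sampling” can mean in practice (my own back-of-the-envelope math, not anything from the panel): the standard sample-size formula for estimating a proportion shows that validating a review at 95% confidence with a ±5% margin of error takes a random sample of only a few hundred documents, regardless of collection size.

```python
import math

def sample_size(z=1.96, margin=0.05, p=0.5):
    """Worst-case simple-random-sample size for estimating a proportion
    (p = 0.5 maximizes variance, so the estimate is conservative)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size())             # 385 docs: 95% confidence, +/-5% margin
print(sample_size(margin=0.02))  # 2401 docs: 95% confidence, +/-2% margin
```

Whatever technology is used, meaningful QC of this kind is cheap relative to the cost of the review itself.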
Despite evidence that PC-TAR proponents argue demonstrates otherwise (see, e.g., TREC, eDiscovery Institute, JOLT), many attorneys and litigants are concerned about the use of such advanced technology and continue to regard an “eyes on every document” approach as superior. There are also many who believe that PC-TAR may be superior but would like to see some case law on the topic before they are willing to attempt it. (Note: in the session, Judge Peck hinted that he may issue a ruling related to PC-TAR in the near future. We will keep a lookout for it and post as soon as we hear more.)
I am hopeful that, as a result of these defensibility discussions, those who are not willing to make the leap to PC-TAR at this point might at least begin to add improved workflows and processes (such as sampling, iteration, filtering, and QC) to their current practice, if they are not already doing so. That is the Pollyanna in me. However, the devil’s advocate in me asks: if you are choosing not to apply well-developed and solidly documented processes to seemingly simpler approaches to collection and review, why would the PC-TAR discussions motivate you to start now? After all, the same discussions took place over key word search, and there is published case law on the topic. What do you think?
Please use the comments section to post your thoughts and questions on this topic, and stay tuned for An Interview with The Honorable Andrew J. Peck – Part Two, which will include discussion of the paradigm shift required for PC-TAR and of community education (bench, bar, and clients).