Migrated from eDJGroupInc.com. Author: Michael Simon. Published: 2014-09-15 20:00:00. Format, images and links may no longer function correctly.

Cooperation?

Beating swords into plowshares . . .

then back into swords


A number of weeks ago, Greg Buckles posted the preliminary results of his survey on analytics usage in eDiscovery but then changed one of the key findings (as being too broadly asked) and instead drew a more specific, accurate conclusion from live interviews:

My best estimation is that only 5-7% of matters that reach the review stage . . .  actually use some form of PC-TAR.

If you haven’t read my initial commentary on the survey, you can do so here.  Reading my first article in this series will also go a long way towards explaining why I am using the term “machine learning” instead of “PC,” “PC-TAR,” “TAR,” “CAR/TAR,” “CAR,” or whatever.  In short, calling machine learning by an accurate term could help with its adoption by cost-shocked corporate counsel and, at the very least, with existing adoption rates in the single digits, how could it hurt?

Clearly, though, the disappointment of what was once the great hope of eDiscovery cost containment cannot be due solely to poor naming.  The idea that a terrible name can hold back an otherwise great product is mostly modern mythology.

Here’s another idea . . . and let’s start it with a story.  Back at the start of the summer, I had the privilege of attending a local event on corporate adoption of machine learning, featuring in-house counsel.  The program included representatives from three huge (all within the top 50 of the Fortune 500), constant, corporate consumers of eDiscovery.  The colloquy between the corporate types, an eDiscovery industry CEO, and the outside counsel moderator was well within the typical banter one usually hears on the eDiscovery rubber chicken circuit–until suddenly it wasn’t.  What happened next changed my view of why machine learning has not taken the eDiscovery world by storm.

It seems mandatory for any session on machine learning to ask its panelists why this technology has not caught on as much as we had all expected, hoped, or maybe gambled our jobs upon.  The mandatory answers for this question are, of course:

  • We still need judicial approval of it
  • The technology is too complex
  • It’s all the fault of those damn Luddite lawyers

If you think about it, though, those problems never held back the ubiquity of the use of key word searching; show me the precedential case law or even a single Rule in the FRCP that approved the use of key words and search technology.  Similarly, these problems haven’t held back other forms of “technology” within “Technology-Assisted Review,” either.  Does anyone go to war over de-duping, de-NISTing, or using email threading?  And, while lawyers may not all love technology, try to take away their email, smartphones, computerized research, on-line docketing and such; you’ll find those “Luddites” are far more likely to throw shoes at you instead of at some machine.

So what happened at this conference I was attending?  When asked the mandatory question, the panelist from a Fortune 20 company didn’t cite one of the mandatory answers and instead blamed cooperation for the lack of use of machine learning.  She said that her experience was that when her company cooperated with the other side on using machine learning technology, their opponents used that cooperation as a sword against them.  The time – and resulting cost when billed by outside counsel by the hour – of the discussions, negotiations, argument and motions – ended up costing more than could be saved by using machine learning.  The other two corporate panelists at first seemed a bit surprised, even taken aback, at this “off-script” answer, but quickly came to agree with her assessment.

One has to wonder just how much actual cooperation there is in Sedona Conference-style “Cooperation” if it can be used so easily as a weapon against the other side.  The Biblical notion of turning swords into plowshares sure seems a lot less peaceful when those plowshares can get turned back into swords at any time.  Leaving aside such concerns about cooperation (whether real or faux), though, let’s turn to another sacrosanct process: corporate budgeting.

For some time, I have been hearing stories, either unconfirmed or not for attribution, about several major corporate eDiscovery consumers that had tried machine learning, but decided that it wasn’t worth the return on investment because of the costs created by the legal disputes that surrounded its use.  Worse, machine learning created unpredictable costs, and being unable to properly predict costs can be viewed as a near-unpardonable sin in the corporate world.

It’s fair to ask why we have to cooperate with opposing counsel before we can use machine learning.  Mostly, this seems to stem from the following lines of Da Silva Moore v. Publicis Groupe & MSL Group, No. 11 Civ. 1279, 2012 WL 607412 (S.D.N.Y. Feb. 24, 2012), the first case to approve machine learning:

While not all experienced ESI counsel believe it necessary to be as transparent as [the respondent] was willing to be, such transparency allows the opposing counsel (and the Court) to be more comfortable with computer-assisted review, reducing fears about the so-called “black box” of the technology.  This Court highly recommends that counsel in future cases be willing to at least discuss, if not agree to, such transparency in the computer-assisted review process.

This recommendation has created a great degree of controversy, though it also seems to have stuck.  Some courts, such as In re Biomet M2a Magnum Hip Implant Prods. Liability Litigation, No. 3:12-MD-2391, 2013 WL 6405156 (N.D. Ind. Aug. 21, 2013), have been unwilling to force the issue, stating in effect that the Sedona Cooperation Proclamation is a good idea, but not binding upon the court or the parties.  Even the Biomet court, though, still strongly questioned a refusal to turn over seed sets, hinting of potential dire repercussions for those parties that could be seen as refusing to cooperate.

Industry experts have been divided about cooperation as well.  Some have argued that eDiscovery process and workflow is not something that any party should be required to disclose, whether due to work product privilege or due more to the traditional restriction on discovery about discovery (i.e., we did not have to disclose our discovery workflow without the “e,” so why does adding that “e” now change everything?).  Others have argued that the work product privilege should not apply.  The most practical approach seems to be that espoused by the Coalition of Technology Resources for Lawyers (“CTRL”) in its Guidelines Regarding the Use of Predictive Coding—an approach that echoes the findings of the court in Biomet: cooperation is not actually required, but refusing to cooperate creates risks for a party and its counsel.

No matter our views on cooperation, we should all be able to agree with a more fundamental point: as long as machine learning requires extensive cooperation between expensive lawyers creating expansive bills, adoption rates will continue to remain low.  Machine learning systems are—and will be—for the foreseeable future, black boxes.  The technology companies that have created such systems aren’t going to put their IP at risk by making it transparent and thus an easy target for reverse engineering.

So what is the alternative?  Perhaps it is time that we accept the box, even embrace the box (more on that in my next article).  Until that time, though, please do me a favor: look around at the technology you use in your work and in your everyday life (computers and toasters alike) and ask yourself, “Could I explain to someone else exactly how this works?”

Michael Simon – eDiscovery Expert Consultant – Seventh Samurai 

Contact Michael at Michael.Simon@Seventhsamurai.com

eDJ publishes content from independent sources and partners. If you have great information, perspective or analysis to share, please contact us for details. 

 
