I do a lot of acceptance and QC testing for clients as their designated 30(b)(6) witness for their discovery systems. In fact, I like to find hidden gotchas and exceptions that I can take back to providers to fix. In over 30 years I have never found a ‘bad’ analytic system. Instead, I frequently encounter users who do not ‘trust but verify’ before they rely on technology not designed for their use cases. Lit support has pithy phrases for these ‘user errors’:

  • PEBMAC – Problem Exists Between Monitor And Chair
  • ID-10-T error – idiot error
  • PICNIC – Problem In Chair, Not In Computer
  • IBM error – Idiot Behind Machine error
  • More for your amusement at https://en.wikipedia.org/wiki/User_error

As the designated deponent or ‘tech translator to counsel’ I am very concerned about potential client liability when relying on TAR/PC/machine learning analytics to determine relevance in discovery. AI Daily’s Increasing Liabilities of AI cites a growing number of class action lawsuits responding to privacy, discrimination and other harms caused by AI algorithms that were misused, poorly trained or just badly conceived. My early (2010-2014) research on analytic adoption in eDiscovery showed very limited use cases and a deep reluctance by counsel to rely on them to make final relevance/privilege decisions. Rob Robinson’s Casting a Wider Net? Predictive Coding Technologies and Protocols Survey – Fall 2020 Results confirms my ‘in the trenches’ perspective that many counsel are now comfortable relying on PC/TAR/CAL technologies to tackle huge collections.

My concern is buried in “Chart 7: What are the areas where you use technology-assisted review technologies, protocols, and workflows?” 94.12% of respondents used TAR to identify relevant documents, while only 50.59% used it for QC. My question to you: how many of those users actually performed any kind of independent acceptance testing for their usage scenario?

Acceptance testing and QC are not technical magic. They do require time and expertise. Unfortunately, many providers are so busy trying to sell their latest tech as an ‘Easy Button’ solution that they do not raise the need to test with known data sets or tell counsel that their solution has not been validated on the client’s unique data types. If you do not have a document that explains your workflow, technology and validation testing to opposing counsel or a skeptical magistrate judge, then you should not be signing a Rule 26(g) certification of completeness.
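One lightweight way to test with a known data set is a seeded ‘tracer’ check: plant documents you already know are responsive into the review population, run your TAR/PC workflow, and measure how many the tool actually flagged. The sketch below is purely illustrative — the function names, IDs and the 85% threshold are my assumptions, not any vendor’s API or a defensibility standard; the right threshold is a judgment call for counsel to document.

```python
def acceptance_recall(tracer_ids, predicted_responsive_ids):
    """Fraction of known-responsive tracer documents the tool recovered."""
    tracers = set(tracer_ids)
    found = tracers & set(predicted_responsive_ids)
    return len(found) / len(tracers)

# Hypothetical run: 20 seeded tracers; the tool flags 18 of them
# along with other documents from the broader collection.
tracers = [f"TRACER-{i:03d}" for i in range(20)]
flagged = tracers[:18] + ["DOC-0042", "DOC-0099"]

recall = acceptance_recall(tracers, flagged)
print(f"Tracer recall: {recall:.0%}")  # 90%

# Threshold is an illustrative assumption, not a rule: document why
# any misses occurred before signing off on completeness.
assert recall >= 0.85, "Investigate misses before certifying completeness"
```

A paper trail of runs like this — known inputs, measured recall, documented follow-up on misses — is exactly the kind of contemporaneous validation record you want in hand before a Rule 26(g) certification.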

Again, the analytic technology is almost never the issue. I have sat in the hot seat before the US Attorney, FBI tech consultants and regulators when a single vital email was not produced in a world famous investigation. We had a clear contemporaneous paper trail demonstrating our reasonable, good-faith diligence and overall process. That early experience shaped my ‘trust but verify’ tenet. So I encourage my peers to take a moment to review your overall discovery workflow and identify your technology dependencies. Make a list of them and ask yourself or your team whether you could defend them if something critical slipped through the net. Sales or consulting assurances from your providers might give you recourse to recover sanctions, but I am betting that your contract or terms of service have clear exclusion clauses.

Greg Buckles wants your feedback, questions or project inquiries at Greg@eDJGroupInc.com. Contact him directly for a free 15 minute ‘Good Karma’ call. He solves problems and creates eDiscovery solutions for enterprise and law firm clients.

Greg’s blog perspectives are personal opinions and should not be interpreted as a professional judgment or advice. Greg is no longer a journalist and all perspectives are based on best public information. Blog content is neither approved nor reviewed by any providers prior to being posted. Do you want to share your own perspective? Greg is looking for practical, professional informative perspectives free of marketing fluff, hidden agendas or personal/product bias. Outside blogs will clearly indicate the author, company and any relevant affiliations. 
