Monthly Archives: January 2024

Zubulake Revisited – Slaying The Ostriches

Any organization that ignores information lifecycle management from this point forward is stupid, plain and simple. To argue that eDiscovery is not a concern only puts an organization in the ostrich category. It’s time to pull their heads out of the sand and start building out an infrastructure that supports finding and collecting potentially relevant information quickly, along with a process for retaining and preserving that information.

No Such Thing as Free Advice – Consultants Part 2

In Consultants Part 1, I explored the different kinds of eDiscovery-related consultants and how they came to be, from the mid-1990s to today. Now we can look at the ethical issues facing subject matter experts in their different roles. Lawyers, accountants, physicians and other traditional advisory experts work within a well-defined framework of legal and ethical standards that define their fiduciary responsibilities to their clients. There are no such regulatory or standards bodies governing eDiscovery experts as yet. In part, this is because such consultants are expected to deliver their advice directly to counsel, who should make the final legal determination.

Proposed: Mandatory C.S.E. for Lawyers

On December 18, 2009, the New Jersey Supreme Court adopted Rule 1:42, which sets forth the mandatory continuing legal education requirements for New Jersey attorneys. The new Rule, which took effect on January 1, 2010, requires all attorneys practicing in the State (including judges, law [...]

Sampling Sizes – No Easy Answers

As inside and outside counsel struggle with ever-larger ESI collections, the question of appropriate sample sizes for quality assurance pops up on the national lists, on conference panels and in social gatherings of law geeks. There are many different statistical theories that can be used to calculate the probability that the results of a sample set can be extrapolated to the total collection. In simpler terms, sampling is used to define how confident we are in an assertion when we have not reviewed, or cannot review, every single item due to scale, availability, cost or time. This is expressed as the Confidence Interval or Confidence Range. Do not worry, I am not a statistician and have no intention of even trying to translate significance levels, variability parameters or estimation errors. Instead, I will talk about how sampling can apply to the discovery process.
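
For readers who want to see the arithmetic behind those confidence figures, here is a minimal sketch in Python using the standard sample-size formula for a proportion, n = z²·p(1−p)/e², with a finite-population correction. The confidence levels, document counts and function name are illustrative assumptions, not figures from the article.

    import math

    # Approximate two-sided z-scores for common confidence levels (assumed values).
    Z_SCORES = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

    def sample_size(population, confidence=0.95, margin=0.05, proportion=0.5):
        """How many documents to review so the observed rate extrapolates to the
        whole collection within +/- margin at the given confidence level.
        proportion=0.5 is the most conservative (largest sample) assumption."""
        z = Z_SCORES[confidence]
        n0 = (z ** 2) * proportion * (1 - proportion) / (margin ** 2)
        # Finite-population correction shrinks the sample for smaller collections.
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    # Hypothetical 250,000-document collection:
    print(sample_size(250_000))              # ~384 docs at 95% confidence, +/- 5%
    print(sample_size(250_000, 0.99, 0.02))  # a tighter interval needs ~4,080 docs

The counterintuitive part, and the reason sample size debates never end, is that the required sample grows with the desired precision far faster than with the size of the collection.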

The Scale and Performance Wars Begin

As IT becomes more and more involved in eDiscovery software purchasing, the scalability and performance of tools will be important decision criteria. But, when every vendor claims to be the most scalable on the market, what should buyers do?

Is the Market Ready for Automated Review? – Part 1

In the weeks following LTNY 2010, I have tried to catch up on the demos and briefings that did not make it into my busy show schedule. I finally managed a look at the new i-Decision automated first-pass review from the team at DiscoverReady. It got me thinking about the entire concept of automated relevance designation. Several years back, H5 introduced automated review to the market using their Hi-Q Platform™. Recommind’s Axcelerate, Equivio’s Relevance and now Xerox Litigation Services’ CategoriX also bring some flavor of automated categorization to the field. Having at least five serious products on the market tells me that customers are paying the relatively high per-item or per-GB rates to bypass a full manual review.

More Evidence of Scale and Performance Wars

Anyone evaluating eDiscovery software is going to have a hard time finding a way to compare tools in an apples-to-apples fashion. And even if we knew how many servers these vendors are using to get the numbers they report, we would still know nothing about the make-up of the data corpus. Processing a batch of loose Word documents is very different from processing terabytes of PST files.
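
To make the apples-to-apples problem concrete, here is a minimal sketch with entirely hypothetical vendor figures; it simply normalizes a claimed ingestion rate by the number of processing nodes and keeps the corpus description next to the number, because a raw GB-per-hour headline means little without both.

    # Hypothetical benchmark claims; real marketing numbers rarely disclose
    # the hardware footprint or the make-up of the test corpus.
    claims = [
        {"vendor": "Vendor A", "gb_per_hour": 1000, "nodes": 20, "corpus": "loose Word/PDF files"},
        {"vendor": "Vendor B", "gb_per_hour": 400, "nodes": 4, "corpus": "PST mail containers"},
    ]

    for c in claims:
        per_node = c["gb_per_hour"] / c["nodes"]
        print(f"{c['vendor']}: {per_node:.0f} GB/hr per node ({c['corpus']})")

    # Vendor A's bigger headline number is actually the slower per-node rate, and
    # the two corpora are not comparable workloads in the first place.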

Inside of Automated Review – Part 2

In Part 1, we defined automated document review and looked at how it has entered the eDiscovery market. Attenex and Stratify both encountered the same slow adoption and educational sales cycles when they brought concept-clustering analytics to the hosted review market. Being on or over the cutting edge can be rough when you have a relatively conservative customer base. Counsel want strategic advantages without corresponding risks, while corporations push for cost containment. In the midst of this pressure cooker, DiscoverReady has launched a new automated first-pass review system called i-Decision™.

Internal Metadata – Hidden Text Lurking in Your ESI

When we talk about metadata for native ESI, we are usually concerned with the Operating System (OS) fields kept in file system structures such as the File Allocation Table (FAT). Different OS formats support a wide variety of fields, such as different dates, attributes, permissions and file name formats (long vs. short). These fields are not usually stored within the actual file, and so they are very vulnerable to alteration or complete loss when items are read or copied. Forensic collection is focused on preserving this ‘envelope’ information so that evidence can be authenticated and the context reconstructed in court. That is only half of the metadata story. Microsoft Office and other programs retain non-displayed information within the header and body of all common file types, especially with the adoption of the XML-based Office 2007 file formats.
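
As a small illustration of how exposed this internal metadata is, the sketch below opens an Office 2007-format .docx, which is just a ZIP archive of XML parts, using only the Python standard library and reads the core document properties. The file name is hypothetical.

    import zipfile
    import xml.etree.ElementTree as ET

    # docProps/core.xml inside a .docx/.xlsx/.pptx holds author, revision and
    # date fields that travel with the file, independent of what the file
    # system reports about it.
    NS = {
        "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
        "dc": "http://purl.org/dc/elements/1.1/",
        "dcterms": "http://purl.org/dc/terms/",
    }

    def core_properties(path):
        with zipfile.ZipFile(path) as zf:
            root = ET.fromstring(zf.read("docProps/core.xml"))
        fields = {
            "creator": "dc:creator",
            "last_modified_by": "cp:lastModifiedBy",
            "created": "dcterms:created",
            "modified": "dcterms:modified",
            "revision": "cp:revision",
        }
        return {name: root.findtext(tag, default="", namespaces=NS)
                for name, tag in fields.items()}

    # Hypothetical file path:
    print(core_properties("contract_draft_v3.docx"))

None of this touches tracked changes, comments or embedded objects, which live in other XML parts of the same archive.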

Another Perspective on the Role of Automation in eDiscovery

In his earlier journal entries – Inside of Automated Review Part 1 and Part 2 – Greg Buckles explored the practice of using content analysis software to enable a level of automation for document review. The growing trend of letting software create clusters of content by concept and other analytics, in an effort to decrease massive review costs, is a good indication that automation is here to stay.

Thankfully, I’m seeing more and more indications that content analytics are becoming accepted in the information governance community. At LegalTech, I participated in a panel, and one of the questions I received was how organizations can better proactively manage information in order to make eDiscovery as efficient as possible. My answer was to use auto-classification to go through legacy content and identify potential records, knowledge assets and other retention-worthy content. This answer was the topic of debate, with some folks thinking that auto-classification will never stand up in court or is simply not advanced enough to work. Others feel that there is no way to effectively classify information manually, and that auto-classification is therefore inevitable.
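
To ground that answer, here is a minimal sketch of the kind of auto-classification being debated, assuming scikit-learn is available: a TF-IDF model trained on a few labeled examples and used to flag retention-worthy legacy content. The training snippets and labels are invented for illustration; a defensible deployment would need far more training data, sampling-based validation and human QC.

    # Minimal auto-classification sketch (not any of the products named above).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented training examples: business records vs. disposable content.
    train_docs = [
        "Master services agreement with term and termination clauses",
        "Invoice and payment remittance detail for Q3 purchase order",
        "Lunch menu for the office holiday party next Friday",
        "Fantasy football league standings and weekly picks",
    ]
    train_labels = ["record", "record", "non-record", "non-record"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(train_docs, train_labels)

    legacy_content = [
        "Signed statement of work and change order for the data center build",
        "Photos and carpool schedule from the team offsite",
    ]
    for doc, label in zip(legacy_content, model.predict(legacy_content)):
        print(f"{label}: {doc}")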
