Migrated from eDJGroupInc.com. Author: Greg Buckles. Published: 2010-01-26 17:39:10  

As legal technology migrates from external service providers to inside the firewall, it is more important than ever to test and understand your rates of collecting, processing, loading, reviewing and producing ESI. Litigation support is a deadline-driven business. Counsel will wait as long as possible before pulling the trigger on discovery, hoping to resolve the matter without incurring further costs. Successfully adapting to this pressure-cooker lifestyle requires that you build a ‘burst capacity’ workflow rather than a normal ‘continuous capacity’ model.

When your counsel appears in your doorway holding a box of imaged custodian hard drives, their first question will be, ‘how long to get it loaded?’ To answer that question and others, you need to have firm baseline metrics defined for all of your priority processes. Missing deadlines has serious consequences in our world, so you want to set expectations based on documented diligence testing, not just your best guess.

I thought that I would at least share a simple testing process from my practice offerings:

  1. Identify priority/key processes – Collection speeds, default processing, load speeds, max review rates and production speeds in GB/hour or items/hour are a good start. Do not forget to add a flat administrative/documentation time to everything.
  2. Define rate-limiting steps/actions – Dig into your software and understand which step really limits your overall rates. For example, storage write speeds can kill your processing rate. The native viewer refresh speed can limit the items/hour for review, but that might actually be a network connectivity issue rather than the software itself.
  3. Define test goals, parameters and metrics to track – Seems trivial, but documenting the test success criteria and what you need to track helps keep you on task. Example: Load testing – need the average GB/hour and items/hour rates for diverse batch sizes and file-type compositions loaded from workstation, remote and server locations.
  4. Assemble test corpus and resources – A good representative test set will have sufficient size and variation to give you a reasonable approximation of your typical collections. This is easier for corporate litsupport than for a firm or service provider. Keep a pristine copy of the test set and document any resources needed or used. This will help you replicate the test when validating system changes, such as moving a server or upgrading to a new version.
  5. Document test execution – Documenting when, where, by whom and how the test was run is as important as writing down your results. If you have the test conditions defined, it is much easier to figure out later what changed to drop your TIFF speed by half.
  6. Evaluate metric impact – Knowing that your production rate averages 2,000 items/hour is good, but you need to extrapolate that information to convey the overall limitation of 40,000 items/day (assuming a four-hour daily backup window); a minimal worked sketch follows this list. Evaluating and reporting your average daily capacities is the first step in getting funding, or at least a clear policy about what has to be outsourced automatically.
  7. Preserve test resources for retesting – Once you have invested in testing, you need to recoup that investment as part of your overall change management system. Repeating tests is easy when you have everything pulled together ahead of time.
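
To make the step 6 arithmetic concrete, here is a minimal sketch of the capacity math in Python. Every figure in it is an assumption modeled on the examples above (2,000 items/hour, a four-hour backup window, a flat administrative allowance from step 1), and the helper names are hypothetical; substitute your own tested rates.

```python
# Illustrative capacity math only; every figure below is an assumption
# modeled on the example rates in this article, not a benchmark.

HOURS_PER_DAY = 24
BACKUP_WINDOW_HOURS = 4            # system unavailable during nightly backup (assumed)
ADMIN_OVERHEAD_HOURS = 1.5         # flat administrative/documentation time per batch (assumed)

production_rate_per_hour = 2_000   # measured average items/hour from your own testing


def daily_capacity(rate_per_hour, downtime_hours=BACKUP_WINDOW_HOURS):
    """Extrapolate an hourly rate into a realistic items/day ceiling."""
    return rate_per_hour * (HOURS_PER_DAY - downtime_hours)


def batch_turnaround_hours(item_count, rate_per_hour, admin_hours=ADMIN_OVERHEAD_HOURS):
    """Estimate wall-clock hours for one batch, including the flat admin time."""
    return item_count / rate_per_hour + admin_hours


print(daily_capacity(production_rate_per_hour))                  # 40000 items/day
print(batch_turnaround_hours(15_000, production_rate_per_hour))  # 9.0 hours
```

The same two helpers answer counsel's ‘how long to get it loaded?’ question for any batch size, provided the measured rate came from a test set that resembles the collection in hand.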


Now that you have an overall framework to adapt to your needs, we can cover a couple of best practice hints. If validation and performance testing are new to you or your department, reach out to your IT department or any other resource that already has acceptance, change or performance testing baked into its work processes. No sense reinventing the wheel if you have a Six Sigma guru hiding in the weeds.

  • Isolate system components to find bottlenecks – When first approaching performance metrics, it pays to trace the flow of data within a given system and make a basic diagram or list of what is happening, and where, at every major step. This makes finding the bottleneck much simpler.
  • Keep tests simple and to the point – Multistep tests are a recipe for confusion. It is better to have 20 simple tests than to sit down, perform a long series of actions and then try to interpret the results. Review performance tests are a good example: start by measuring how long it takes to move sequentially through 60 or 100 items as each item refreshes in the viewer, and only then add a single coding field to see whether recording the changes slows you down (a minimal sketch of this math follows the list).
  • Limit testing to relevant metrics – Test what matters. It is easy to get distracted by running test variations, so keep your overall goals in mind along with your limited resources, time and effort.
  • Interpret and disclose results carefully – You must measure a process in order to know that your changes have actually improved it. You do not have to run a test for 24 hours to calculate a daily average throughput rate or to extrapolate your department's monthly or quarterly capacity (see the second sketch below). Be careful to qualify any reported capabilities with the testing conditions. Remember that your counsel should not make deadline commitments unless you have verified that the collection matches your test collection profile or they have given you the chance to run a fast retest. Discovery system performance testing should be done in consultation with counsel to protect the results if possible. If you have to turn over any documentation, be clear that every matter involves unique challenges and variables.
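
As a minimal sketch of the review-timing math mentioned under ‘Keep tests simple,’ the figures below are assumptions for illustration; the elapsed times would come from a stopwatch while a reviewer pages through the viewer, and the function name is hypothetical.

```python
def review_rate_per_hour(items_reviewed, elapsed_seconds):
    """Convert a stopwatch reading from a simple viewer pass into items/hour."""
    return items_reviewed / elapsed_seconds * 3600


# Pass 1: page through 100 items with no coding (assumed 6.5 minutes on the stopwatch)
baseline = review_rate_per_hour(100, 6.5 * 60)      # ~923 items/hour

# Pass 2: same 100 items, tagging one coding field per item (assumed 8 minutes)
with_coding = review_rate_per_hour(100, 8 * 60)     # 750 items/hour

print(f"Coding overhead: {1 - with_coding / baseline:.0%}")   # ~19% slower
```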
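
And a second sketch showing the roll-up from a measured daily average to monthly and quarterly capacity. The working-day count and the test-conditions string are assumptions; the point is to publish the conditions alongside any capacity number you report.

```python
# Hypothetical reporting roll-up: every value here is an assumption to be
# replaced with your own measured daily average and calendar.

daily_average_items = 40_000        # from your own daily-capacity testing
working_days_per_month = 21         # assumed; adjust for your calendar
test_conditions = "single processing server, default profile, mixed PST/loose-file test set"

monthly_capacity = daily_average_items * working_days_per_month
quarterly_capacity = monthly_capacity * 3

print(f"Monthly capacity:   {monthly_capacity:,} items ({test_conditions})")
print(f"Quarterly capacity: {quarterly_capacity:,} items ({test_conditions})")
```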

Every litsupport tech has an instinctive understanding of how much their system can handle in a given day. The challenge is to quantify this with actual standardized tests and to interpret the results to improve your overall process. Setting realistic deadlines is critical in our industry; meeting them is even more important.
