How to Quickly Measure a Process's Health to Determine Which Migration Method to Use
Updated: Mar 30
Before migrating workflows, you will need to know the “health” of each one. This post explains how to use process mining to categorize your organization’s workflows into one of four migration categories: move as-is, retire, recreate, or improve.
The problem is that categorization has always been highly manual, subjective, and costly. Process mining changes this by automating the task and replacing subjective guesses with empirical data (though expertise is still required to interpret that data). The methodology for including process mining in process migration is described below.
“Lift and Shift” Is a Problem
“Lift and shift” workflow migrations are common. Organizations move everything “as-is” and hope to improve post-migration. This is the most glaring problem with migrations and the most dramatic opportunity for improvement:
Inefficient processes bring their deficiencies into the new environment
Low-value and low-volume processes are migrated at significant cost rather than retired
The highest-value, highest-volume processes get no focused attention
The future state never comes – the commitment to improve post-migration is often lost in the sea of competing corporate priorities and budgets
Because of the difficulty and time it has historically taken to manually sort workflows into categories, it’s understandable why organizations have often chosen “lift and shift.” By automating this manual work, however, process mining changes the equation. Process mining is an emerging capability that can accelerate large-scale migrations, and a growing trend is to use it to sort candidate processes into migration categories. Each workflow can then be addressed appropriately (retired, recreated, improved, or moved as-is). Thousands of labor hours can be repurposed from understanding and documenting the current state to optimizing the future state BEFORE migrating.
The Four Migration Categories
By sorting your processes into these four categories you’ll be able to focus your efforts most efficiently:
Move As-Is: “Lift and shift,” or replicate the current workflows as models in the new workflow system. This approach requires much less involvement from the business units, but it also doesn’t take advantage of opportunities to optimize processes or introduce additional process and task automation. Note that even this approach requires effort: process inventory and analysis work to understand your current flows, and, even if automated model generation works, post-generation cleanup and rework.
Retire: Decommission and retire the lowest-volume and lowest-value workflows where possible.
Recreate: Create optimized workflows and process models that introduce as much automation as possible. It requires a much more intensive upfront effort working with users to understand business processes and optimization opportunities from end to end. It will also require more iteration, testing, and refinement with business unit participation. Prioritize the highest-volume and highest-value workflows and invest most heavily in re-engineering and re-creating them natively in the target platform.
Improve: Incrementally but not drastically improve the remaining processes by standardizing, streamlining, and combining them where possible.
Use the Doculabs Health Index to Sort Your Workflows Into Migration Categories
We use our Workflow Process Migration Solution to assess thousands of workflows according to our Health Index, which uses three general assessment criteria: efficiency, effectiveness, and value. They are defined as follows:
Is the process efficient, with minimal waste and complexity, as measured by automation, volume through variants, activities per case, latency, etc.?
Is the process effective, fulfilling the business purpose of the process (e.g., does this claims process address all the claims it should; does this reregistration process address all the life events it should)?
Is the process valuable to the future state of the business (e.g., is this process trivial or outdated and not worth migrating, or does it have insufficient impact on revenue, cost avoidance, or customer retention, or otherwise fail the business case)?
This article will focus on efficiency as the primary determiner of process health. We will discuss effectiveness and value in more detail in a future post.
The purpose of this exercise is to quickly determine the best migration approach for each of hundreds or thousands of processes according to decision rules. For example:
Move As-Is: If it passes the tests for efficiency, effectiveness, and value
Retire: If it fails efficiency, effectiveness, and value
Recreate: If it passes value, but significantly fails efficiency or effectiveness
Improve: If it passes value, and fails efficiency or effectiveness, but not significantly
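As a minimal sketch, the example decision rules above can be expressed in a few lines of Python. The labels ("pass", "fail", "fail_hard") and the fallback case are our illustrative assumptions, not a production rule engine:

```python
def migration_category(value: str, efficiency: str, effectiveness: str) -> str:
    """Map Health Index assessments to one of the four migration categories.

    Each argument is "pass", "fail", or "fail_hard" (a significant failure).
    The rules mirror the examples in the text; the labels and the
    "needs_review" fallback are illustrative assumptions.
    """
    if value == "pass" and efficiency == "pass" and effectiveness == "pass":
        return "move_as_is"
    if value != "pass" and efficiency != "pass" and effectiveness != "pass":
        return "retire"
    if value == "pass" and "fail_hard" in (efficiency, effectiveness):
        return "recreate"
    if value == "pass":
        return "improve"
    return "needs_review"  # combinations the example rules don't cover
```

In practice the interesting work is in producing the three inputs, which is what the measurements below are for.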
How We Measure Process Efficiency
Let’s start with the efficiency metric. It measures the process’s structural and performance complexity and waste. Structural complexity is about how the process is designed; performance complexity is about how work actually flows through the process at runtime.
Much of the structural complexity can be determined from the graphical depiction of the process: how many steps or activities, variants, loops, and rules the process has. Our migration solution combines the Celonis process mining platform with BPMN workflow tools that graph and compare processes.
There are various ways to set pass/fail thresholds; for example, the process passes if it is in the least complex 75% of processes that are similar to it (its process class) and fails if it is in the most complex 25% of its process class. This is automatic using our solution. To illustrate, say you have 20 workflows that all do the same thing: they move money. The Doculabs Celonis Migration methodology calculates the number of activities and variants for each of the 20 move-money workflows, then ranks them from the fewest activities and variants to the most. The first 15 would pass complexity and workflows 16-20 would fail.
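The ranking step above can be sketched as follows. The combined score (activities plus variants) and the 75% cutoff are stand-ins for whatever weighting and threshold a real project would choose:

```python
def complexity_pass_fail(workflows: dict, pass_fraction: float = 0.75) -> dict:
    """Rank workflows in one process class by structural complexity.

    `workflows` maps workflow name -> (activity_count, variant_count).
    The least complex `pass_fraction` of the class passes; the rest fail.
    Summing activities and variants is an illustrative scoring assumption.
    """
    ranked = sorted(workflows, key=lambda name: sum(workflows[name]))
    cutoff = int(len(ranked) * pass_fraction)
    return {name: ("pass" if i < cutoff else "fail")
            for i, name in enumerate(ranked)}
```

With 20 move-money workflows, this returns "pass" for the 15 least complex and "fail" for the 5 most complex, matching the example.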
There are more complex measurements of structural complexity as well, such as:
The number of people, business units, or external parties involved in the workflow
The number of integrations with IT systems or services
Whether it uses serial (bad) or parallel (good) processing
The number of QA steps (typically bad)
There are several ways to measure performance complexity. These include volume through process variants, time through variants, quality-caused inefficiencies and rework, as well as automation.
An example rule for volume through variants: the process passes if 90% or more of the work-item volume flows through fewer than five variants, and fails if less than 50% of the volume flows through its top 10 variants. Anything between those extremes is neither a pass nor a fail.
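Here is one way to sketch that rule in Python, treating the thresholds as tunable assumptions rather than fixed values:

```python
def volume_through_variants(case_counts, pass_variants=5, pass_share=0.90,
                            fail_variants=10, fail_share=0.50) -> str:
    """Score how concentrated a process's volume is in its top variants.

    `case_counts` holds the number of cases per variant. The default
    thresholds mirror the example rule in the text: pass if >= 90% of
    volume flows through fewer than 5 variants, fail if < 50% flows
    through the top 10; otherwise the result is indeterminate.
    """
    total = sum(case_counts)
    ranked = sorted(case_counts, reverse=True)  # highest-volume variants first

    def top_share(n: int) -> float:
        return sum(ranked[:n]) / total

    if top_share(pass_variants - 1) >= pass_share:  # "fewer than 5" = top 4
        return "pass"
    if top_share(fail_variants) < fail_share:
        return "fail"
    return "indeterminate"
```

A process with one dominant variant passes; a process whose volume is smeared evenly across 100 variants fails.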
Time through variants is about latency, and there are several ways to measure it. For example, the process could pass if its average duration is in the lower 75% of its process class (processes that are similar to it) and fail if it’s in the upper 25% of its process class. A more useful measure would use the duration of the variant that is the median with respect to volume. That is, you could line up the variants according to how many work items (cases) flow through each one. And then you’d measure the duration of the variant in the middle of the lineup, where half the variants are slower, and half the variants are faster. This would provide a better indicator of the behavior of the process with respect to latency.
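One reasonable reading of that volume-median measure can be sketched as follows; the pair-based input format is our assumption about how variant statistics would be extracted from a mining tool:

```python
def volume_median_variant_duration(variants) -> float:
    """Duration of the variant at the volume midpoint of the lineup.

    `variants` is a list of (case_count, avg_duration) pairs. Line the
    variants up by case count, then walk the cumulative case volume until
    half the total is covered; the duration of that variant is the
    "median with respect to volume" described in the text.
    """
    ordered = sorted(variants)                    # by case count, ascending
    half = sum(count for count, _ in variants) / 2
    cumulative = 0
    for count, duration in ordered:
        cumulative += count
        if cumulative >= half:
            return duration
    return ordered[-1][1]
```

Because the walk is weighted by case volume, a single high-volume variant dominates the result, which is exactly why this is a better latency indicator than a simple average over variants.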
Quality-caused inefficiencies are very useful and include NIGOs (items that are not in good order), rework loops, and number of QA steps. Here we typically measure the number of items that are tagged as NIGO (or rework loops or QA steps). We might set the bar at 20%: the process passes if less than 20% are tagged as NIGO, fails if 20% or more are tagged as NIGO, and gets no score if we don’t know.
Automation is an obvious metric, and we might say that the process passes if 50% or more of the activities are automated, fails when fewer than 50% are automated, and gets no score if we don’t know.
How We Measure Process Effectiveness
Effectiveness determines the adequacy of the workflow for the problem it’s designed to solve. Does it do the job? Does it have the required capabilities to fulfill the process objective? This is usually a qualitative rather than quantitative assessment and requires subject matter expertise. For example, a highly efficient paperless vehicle claims workflow would clearly fail in effectiveness if it didn’t address trucks, where the objective of the process is to address all vehicles, not just cars. Or it might fail because it doesn’t meet compliance or business continuity requirements, both of which could be part of the job it’s supposed to do. However, effectiveness can be measured more rigorously, e.g., by using our process mining solution to determine the level of conformance the process has to a designed target process standard.
How We Measure Process Value
Value determines whether the process is worth migrating to the new environment along with how much cost and effort should be expended. Is the process pointless, outdated, or simply doesn’t add value and is not worth migrating? Is the process high or low value in the post-migration future state? As with effectiveness, this is usually a qualitative assessment and requires subject matter expertise. But it can be measured more rigorously with a standard business case that assesses and compares, for example, the projected revenue, cost savings, and customer retention yielded by an aggressively redesigned workflow for paper-based print-and-mail customer communication versus an aggressively redesigned workflow for all-digital customer communication. The business case may show that the workflow for paper communications is not worth migrating – or that it definitely is worth it.
How We Put it All Together
How and what you measure will differ for every migration project. Before you begin, you must decide on your measures and scoring thresholds. For instance, you could keep it simple for efficiency and look only at average number of activities, average number of variants, and latency. Or you could go for more accuracy and also look at the number of business units involved, number of external parties, number of IT system integrations, and number of QA steps. We always combine multiple measurements because more measurements mean more accuracy. However, there is always a threshold where the costs in time and difficulty exceed the value of more accuracy, and that threshold varies from project to project. Do you need help with your workflow migration project? We can help you identify the right mix of metrics for your project.
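To close the loop, the per-metric results have to be rolled up into the single efficiency verdict that feeds the category decision. A simple majority-vote aggregation is sketched below; the voting scheme and the 75% "significant failure" cutoff are assumptions, since every project weights its metrics differently:

```python
def combine_measurements(scores, hard_fail_fraction: float = 0.75) -> str:
    """Roll per-metric "pass"/"fail"/"no_score" results into one verdict.

    Metrics without a score are excluded. Failing a majority of scored
    metrics yields "fail"; failing `hard_fail_fraction` or more of them
    yields "fail_hard" (a significant failure). Both cutoffs are
    illustrative assumptions to tune per project.
    """
    scored = [s for s in scores if s != "no_score"]
    if not scored:
        return "no_score"
    fail_share = scored.count("fail") / len(scored)
    if fail_share >= hard_fail_fraction:
        return "fail_hard"
    return "fail" if fail_share > 0.5 else "pass"
```

Feeding the result (along with effectiveness and value verdicts) into the category decision rules described earlier completes the sorting exercise.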