By Marty Pavlik

Process Monitoring, Prediction, and Simulation: The Next Frontier

Updated: Nov 9, 2023

Process intelligence capabilities are evolving and are more capable of driving continuous process improvement than ever.


In this Two For Tuesday, I once again speak with Scott Opitz, Chief Product & Information Officer for ABBYY (a leading process mining and IDP vendor). In this short, informative interview, Scott highlights three critical new areas:

  1. Process monitoring detects variations and triggers alerts or automated remediation.

  2. Prediction analyzes execution patterns to forecast potential outcomes.

  3. Simulation experiments with process changes pre-deployment using a digital twin.

He also discusses ABBYY Timeline functionality across core pillars like discovery, analytics, monitoring, prediction, and simulation. He notes the combination of process and task mining provides complete visibility. Timeline's interactive process models enable changes to be tested before deployment.


4 Takeaways:

  • Process monitoring alerts users or systems when processes deviate from expected behaviors

  • Prediction analyzes patterns to forecast outcomes and get ahead of potential issues

  • Simulation allows experimenting with process changes before deployment

  • Interactive process models enable robust testing of process changes pre-implementation

You can watch the interview here or scroll down to read a (lightly) edited transcript of our conversation.




Marty: The first question I have: process intelligence has changed so much in the last 12 months. Adoption has been through the roof, and the capabilities have changed as well. Can you talk to the audience about what you're seeing as the new capabilities of process intelligence and what they should be aware of in this space?


Scott: It really is evolving quickly. But I think that evolution is natural in this journey of process improvement and customers trying to figure it out. The way we tend to think about it is that the foundational elements of process intelligence are really discovery and analysis. We start by trying to figure out the current state of the process.


How does it flow? What surprises might we find? And then we move on to analysis to try to figure out the root cause. We say, hey, occasionally we violate an expected behavior. Which person is involved? What products are we dealing with? What customer are we dealing with when that happens? And the goal of all of that is to make improvements. In the end, the reason you're doing all of this is that you want to make your process better.


So one of the first advanced capabilities that gets borne out of that, and one that I think is absolutely critical going forward (and we have some interesting customer examples of this), is that once I've made those improvements, or at least what I hope and think are improvements, the big thing is to have a mechanism to watch for situations where processes don't behave as intended. In other words, you effected a change, or maybe you didn't change anything and you're just trying to make sure that people follow the rules today, even in the current process.


And since nobody has the ability to sit in front of a screen 24/7, we hope that the technology can help with this. This is the whole category that we think of as process monitoring: the ability to have a system in place that can detect the process variations that are of interest or concern to you and, in the simplest case, notify a user who's assigned to deal with those types of issues when they arise.


Even more interesting is to automatically trigger an action in some other technology: maybe spawn a business process to go and remediate it, or launch an RPA bot, or make an API call to some other application to try to fix it automatically and achieve the Holy Grail of getting most of this handled through closed-loop processing.
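
To make that concrete, here is a minimal sketch of what closed-loop process monitoring can look like. It assumes a simple in-memory event log with an invented SLA rule; the event fields, rule, and remediation hook are illustrative only, not ABBYY Timeline's API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical event record; real process mining tools ingest far richer event logs.
@dataclass
class Event:
    case_id: str
    activity: str
    timestamp: datetime

# Example rule: "Approve" must follow "Submit" within 2 hours.
SLA = timedelta(hours=2)

def remediate(case_id: str) -> None:
    # Placeholder for the "closed loop": notify a user, spawn a workflow,
    # launch an RPA bot, or call another application's API.
    print(f"Deviation detected for case {case_id}: triggering remediation")

def monitor(events: list[Event], now: datetime) -> None:
    """Flag cases whose approval is late, or overdue and still missing."""
    submits = {e.case_id: e.timestamp for e in events if e.activity == "Submit"}
    approvals = {e.case_id: e.timestamp for e in events if e.activity == "Approve"}
    for case_id, submitted_at in submits.items():
        finished_or_now = approvals.get(case_id, now)
        if finished_or_now - submitted_at > SLA:
            remediate(case_id)

if __name__ == "__main__":
    now = datetime.now()
    monitor([
        Event("A-1", "Submit", now - timedelta(hours=3)),
        Event("A-1", "Approve", now),                      # approved, but 3 hours late
        Event("A-2", "Submit", now - timedelta(hours=1)),  # no approval yet, still within SLA
    ], now)
```

In a real deployment the remediation hook would be the interesting part: instead of printing, it would hand the case off to a workflow engine, RPA bot, or downstream application.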


So I think that process monitoring is really the big first one. But that raises a question: with process monitoring, by definition, you're being told when something happens. So at that moment it's already history, right?


So the next big win is if there's a way to be able to predict what's going to happen before it happens. And so prediction, and in our case process prediction, is about building on those same ideas, but taking them a step further.


And so prediction related to process means using the patterns of execution that we can see in the long histories we've mined out of those systems, both processes and tasks and everything we've figured out about them, and then constantly reassessing every in-flight process as new activities happen, using those historical behaviors in combination with the new data that's arriving constantly.


Then, on a case-by-case basis, for every process instance, we can predict the likely outcome. And this is valuable for two reasons. Sometimes you may actually be able to see something that's going to happen, like you're going to miss a deadline, or it's likely going to lead to an adverse result at some point that you would really like to avoid. There are cases where you might actually be able to focus on it and prevent it from ever happening.


There are other cases where, well, it's happening for a reason. So you might not be able to change the outcome, but still, knowing about it in advance lets you get to the point where you can actually mitigate it. You can prepare for it to minimize the damage.
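
As a toy illustration of the idea (not ABBYY's neural-network approach), the probability of an adverse outcome for an in-flight case can be re-estimated each time a new activity arrives, based on how often historical cases that started the same way ended badly. All activity names and data below are invented.

```python
# Historical, completed cases: (activity sequence, missed_deadline outcome).
history = [
    (("Submit", "Review", "Approve"), False),
    (("Submit", "Review", "Rework", "Review", "Approve"), True),
    (("Submit", "Review", "Rework", "Review", "Rework"), True),
    (("Submit", "Review", "Approve"), False),
]

def outcome_rate(prefix: tuple[str, ...]) -> float:
    """Share of historical cases starting with `prefix` that ended badly."""
    matches = [bad for seq, bad in history if seq[:len(prefix)] == prefix]
    return sum(matches) / len(matches) if matches else 0.0

# Reassess an in-flight case every time a new activity arrives.
in_flight = ("Submit", "Review", "Rework")
risk = outcome_rate(in_flight)
print(f"Predicted probability of missing the deadline: {risk:.0%}")
if risk > 0.5:
    print("High risk: intervene now or prepare to mitigate the impact.")
```

A production system would replace the frequency lookup with a trained model, but the operational payoff is the same: a per-instance risk score that updates as the process runs, early enough to prevent or mitigate the bad outcome.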


We think that prediction, just from a practical operations sense, is extremely valuable. The last big area, and really the newest (it's new for us as well; we've only rolled it out in the last 7-8 months), is process simulation. Process simulation exists for one very important reason: processes are by definition complex, and it's a mistake to think you could easily estimate the impact of any change you make to a process flow, or to the resource levels you make available, and predict everything that will happen as a result in these types of complex systems.


It's been proven that you really need to use a proper simulation technique, and so we think over time we're going to see simulation become the preferred method of testing every idea for what could be changed, improved, or optimized in a process before anybody does any of the work to make those changes or touches production systems. So I think these three areas, monitoring, prediction, and simulation, are really where the excitement's coming from.
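
For a flavor of what simulating a change before deployment means, here is a small Monte Carlo sketch. It models a single review step as a queue and compares average cycle time with three versus four reviewers. The arrival and service-time figures are made up; a real digital twin would be calibrated from mined process data.

```python
import random
import heapq

def simulate_avg_cycle_time(num_reviewers: int, num_cases: int = 10_000,
                            mean_interarrival: float = 10.0,
                            mean_review_time: float = 25.0,
                            seed: int = 42) -> float:
    """Average cycle time (waiting + review, in minutes) for one review step
    served by `num_reviewers` working in parallel."""
    rng = random.Random(seed)
    free_at = [0.0] * num_reviewers          # when each reviewer is next available
    heapq.heapify(free_at)
    clock, total = 0.0, 0.0
    for _ in range(num_cases):
        clock += rng.expovariate(1.0 / mean_interarrival)   # next case arrives
        start = max(clock, heapq.heappop(free_at))          # waits if everyone is busy
        finish = start + rng.expovariate(1.0 / mean_review_time)
        heapq.heappush(free_at, finish)
        total += finish - clock
    return total / num_cases

# "What if" comparison before touching the production process:
for reviewers in (3, 4):
    print(f"{reviewers} reviewers -> avg cycle time "
          f"{simulate_avg_cycle_time(reviewers):.1f} min")
```

Running the comparison answers the staffing question on the model instead of in production, which is exactly the point: test the change, see the cycle-time impact, and only then decide whether to spend the money.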


Marty: Great, very insightful. I'll ask you one easy question. Why ABBYY Timeline?


Scott: Well, I'm not at all biased here, of course, Marty, but having been with it since the beginning, it's like one of my children. So I am pretty proud of it. Let me tell you the way I think about it.


When we think about a process intelligence platform (and we say ABBYY Timeline fully qualifies as one), there's a set of standards that you have to meet to qualify as a true process intelligence platform: what we call our five pillars.


The first pillar is kind of what you'd expect: process discovery. I have to be able to take whatever data I can find, events in log files, transaction tables, whatever, and reconstruct the processes, basically to tell me, OK, this is what really happened. As part of process discovery, though, we feel very strongly that it has to be both event-based process discovery and task mining, because what we find is that there are gaps that exist between events.


There can still be a lot of work going on on any user's desktop, and we want to understand how that relates to the execution of the process. So you need the combination of process mining and task mining, as well as the ability to handle ad hoc processes, in other words, any kind of process type you can encounter, even those with extremely variable executions from one instance to the next. That first pillar covers process discovery. That's the minimum to be in the conversation.
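
In its simplest textbook form, event-based process discovery starts with something like a directly-follows graph built from an event log. The snippet below shows that bare-bones idea with invented case data; it is not ABBYY Timeline's discovery algorithm, just the underlying intuition of reconstructing "what really happened" from events.

```python
from collections import Counter, defaultdict

# A tiny event log: (case_id, activity), already ordered by timestamp within each case.
event_log = [
    ("C1", "Submit"), ("C1", "Review"), ("C1", "Approve"),
    ("C2", "Submit"), ("C2", "Review"), ("C2", "Rework"),
    ("C2", "Review"), ("C2", "Approve"),
    ("C3", "Submit"), ("C3", "Approve"),   # a surprising shortcut path
]

# Group activities per case, then count directly-follows pairs.
traces = defaultdict(list)
for case_id, activity in event_log:
    traces[case_id].append(activity)

dfg = Counter()
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

for (a, b), count in sorted(dfg.items()):
    print(f"{a} -> {b}: {count}")
```

Even this tiny example surfaces a surprise: case C3 jumps straight from Submit to Approve, skipping Review, which is exactly the kind of deviation discovery is meant to expose.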


We think the second pillar is all about process analytics. And while, yes, that includes some of the things you'd expect, like showing a nice diagram of my process and being able to do some filtering, we take it a lot further than that because we have this ability to deal very effectively with ad hoc, case-management-type processes.


And so there's a whole other set of analysis capabilities that you need to deal with that type of thing. Our view is that to do this effectively, you need to present a complete set of tools designed to answer any kind of question you have the moment the data is loaded into the environment, without any coding. That's absolutely critical for broader adoption of this technology: without a single line of code, users should be able to go to the myriad different types of analysis tools that we provide and get whatever answer they want.


The third pillar is process monitoring, because, again, it's great to do discovery and analysis, but hey, I want to know 24/7, worldwide.


I want to know that my processes are working effectively, right? So we feel that you've got to have that ability as part of it. The fourth is the predictive capability we talked about: being able to use neural networks to mine complex behaviors, particularly when you start talking about these highly variable ad hoc process types, to build a model and then predict what new actions on any process instance will likely result in.


And then finally, with process simulation, I give full marks, pun intended, to Marc Kerremans of Gartner, who really strongly advocated for this idea years ago: the digital twin of a business process. I always liked the idea, but I think you'd agree with me that most vendors who talk about this kind of miss the point.


I mean, digital twins are not about a pretty diagram or a pretty visualization. The real power comes from having a truly interactive process model that you can use to experiment before you actually have to go and touch your real environments. Through our research we found that the only way to really deliver a robust model that lets you play through all these scenarios is by using simulation techniques; we don't think other stochastic models and approaches really work. And the benefit, of course, to bring it full circle, is that when I want to make a change, I can test all those changes before I actually spend the money or incur whatever other disruption making the changes to my running processes might cause. So we think all these capabilities are essential. And when we line ourselves up, I can be proud of our product. I think we check these boxes more effectively than any other product out there.


For more information on process intelligence, follow this blog.


Also visit ABBYY at www.abbyy.com.
