bpmNEXT 2018: Exploiting cloud infrastructure for efficient business process execution
1. Exploiting Cloud Infrastructure for Efficient Business Process Execution
Kris Verlaenen
jBPM Project Lead
Red Hat Business Automation Platform Architect
9. Controller (1)
● Server configuration
• Capabilities (Rule, Process, Planning, etc.)
• Deployed containers, for example
• Project A – v1.0
• Project A – v2.0
• Project B – v1.1
● Server instances
• Contacted by engine on startup and shutdown
Keeping track of server instances
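A minimal Java sketch of the kind of information involved, using the kie-server-client API against a single engine; the URL, credentials and ids below are placeholders. The controller gathers the same identity, capability and container data when an instance registers on startup.

import org.kie.server.api.model.KieContainerResource;
import org.kie.server.api.model.KieServerInfo;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;

public class ServerInfoSketch {
    public static void main(String[] args) {
        // Placeholder URL and credentials for one engine (KIE Server) instance
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:8080/kie-server/services/rest/server", "user", "password");
        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);

        // Server identity and capabilities (Rule, Process, Planning, ...)
        KieServerInfo info = client.getServerInfo().getResult();
        System.out.println(info.getServerId() + " capabilities: " + info.getCapabilities());

        // Containers (deployed projects) currently running on this instance
        for (KieContainerResource c : client.listContainers().getResult().getContainers()) {
            System.out.println(c.getContainerId() + " -> " + c.getReleaseId());
        }
    }
}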
10. Controller (2)
● Only if wanted!
● Update existing server configurations
• Add container
• Remove containers
● Dynamic deployments
• Individual server instances are updated
Managing server deployments
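A sketch, again with the kie-server-client API, of how a container can be added to or removed from a running engine; the GAV coordinates, container ids and connection details are illustrative placeholders.

import org.kie.server.api.model.KieContainerResource;
import org.kie.server.api.model.ReleaseId;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesFactory;

public class ManageContainersSketch {
    public static void main(String[] args) {
        KieServicesClient client = KieServicesFactory.newKieServicesClient(
                KieServicesFactory.newRestConfiguration(
                        "http://localhost:8080/kie-server/services/rest/server", "user", "password"));

        // Deploy a new version of a project (kjar) alongside the existing one
        ReleaseId v2 = new ReleaseId("org.example", "project-a", "2.0");
        client.createContainer("project-a-v2", new KieContainerResource("project-a-v2", v2));

        // Remove a container that is no longer needed
        client.disposeContainer("project-a-v1");
    }
}

When the controller manages the configuration, the same change can be applied once centrally and rolled out to the matching server instances.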
11. Smart Router
● Acts as a server instance
● Delegates requests to the right server instance
• Across different server configurations
• Based on minimal information
● Aggregates data
• From different server instances
Delegate and aggregate
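Because the router behaves like a server instance, a client can talk to it with the same API it would use against any single engine. A sketch, assuming the router is reachable at a placeholder URL and that the container and process ids exist somewhere behind it:

import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.ProcessServicesClient;

public class RouterSketch {
    public static void main(String[] args) {
        // Point the standard client at the smart router instead of an individual engine;
        // the router forwards the request to a server instance that has the container deployed
        KieServicesClient client = KieServicesFactory.newKieServicesClient(
                KieServicesFactory.newRestConfiguration(
                        "http://smart-router:9000", "user", "password"));

        ProcessServicesClient processes = client.getServicesClient(ProcessServicesClient.class);
        Long instanceId = processes.startProcess("project-a-v2", "org.example.order-hardware");
        System.out.println("Started process instance " + instanceId);
    }
}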
Hi, my name is Kris Verlaenen. I'm the jBPM project lead (jBPM is a completely open-source BPM platform) and a platform architect for the Business Automation product line, which is a fully supported offering based on jBPM and other community projects.
Today I will show how we are addressing some of the challenges that we, and our customers, are seeing in the context of BPM and the cloud, whether it's public, private or hybrid.
I probably don't have to explain that, while the cloud builds on decades of IT experience, it also introduces new challenges (like tighter memory and CPU constraints), new paradigms (like DevOps and microservices), and so on.
This has an impact on the entire lifecycle of a business process, from authoring to execution and monitoring, but today I’ll focus mostly on the challenges in the latter two.
Our process execution engine is very lightweight (it even runs on my mobile phone), and that brings a lot of benefits in the cloud context. It can also easily be embedded inside your own applications and cloud architecture.
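As a sketch of that embedding, using the standard KIE API with a placeholder process id packaged on the application's classpath:

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.process.ProcessInstance;

public class EmbeddedEngineSketch {
    public static void main(String[] args) {
        // Load the processes packaged on the application's classpath and create a session
        KieContainer container = KieServices.Factory.get().getKieClasspathContainer();
        KieSession session = container.newKieSession();

        // "org.example.order-hardware" is a placeholder process id
        ProcessInstance instance = session.startProcess("org.example.order-hardware");
        System.out.println("Process state: " + instance.getState());

        session.dispose();
    }
}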
Scalability requires an architecture where a lot of these engines work together, and the processes need to play well in a much larger ecosystem of application development in general, as end user applications do not consist of process logic alone.
While deploying one project in the cloud is trivial, the challenge quickly explodes if you have multiple versions of your project running in parallel, lots of projects and even different environments for development, staging and production.
While there might be a lot of value in having small, independent or even micro services, if you're not careful you will end up with a lot of isolated services that you can't keep track of, let alone manage and monitor in a reasonable way.
So I will show how we're using several components to help overcome those challenges. One of them is what we call the controller. It has a dual purpose: one is keeping track of all the engines out there and which projects are deployed where.
If wanted, it can even help manage those servers, for example by defining which projects should be loaded on startup, or by dynamically adding projects to and removing them from those engines.
Another component doing a lot of the heavy lifting is the smart router, which can route requests to the right server instances, so you or your applications don't have to figure that out yourselves, and it can even create an aggregated view across multiple different servers.
But controllers and smart routers don't make the issues go away, so if something goes wrong in one of the engines, how can your process and task administrators figure out where to look and how to intervene?
The solution we adopted is having a monitoring console that can literally connect to any execution engine out there. Based on the topology collected by the controller, it can get the necessary information from the server instances.
And smart routers are a crucial element to help aggregate information from all these server instances automatically so your administrators don’t have to go through each instance individually.
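A sketch of such an aggregated query, issued through the router with placeholder connection details, so the results span all server instances rather than a single engine:

import java.util.List;
import org.kie.server.api.model.instance.ProcessInstance;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.QueryServicesClient;

public class AggregatedQuerySketch {
    public static void main(String[] args) {
        KieServicesClient client = KieServicesFactory.newKieServicesClient(
                KieServicesFactory.newRestConfiguration(
                        "http://smart-router:9000", "user", "password"));

        // One query, answered by the router with data gathered from the instances behind it
        QueryServicesClient queries = client.getServicesClient(QueryServicesClient.class);
        List<ProcessInstance> instances = queries.findProcessInstances(0, 10);
        instances.forEach(pi ->
                System.out.println(pi.getProcessId() + " #" + pi.getId() + " state=" + pi.getState()));
    }
}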
Deploying new versions of your processes needs to play well with strategies like blue/green deployments, where you set up both projects in parallel and flip the switch at some point, or canary deployments, where you start small and gradually increase.
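A deliberately simplified, client-side illustration of that idea, assuming both versions are already deployed as containers (same placeholder ids as before); in practice the traffic split would typically be handled by the platform or the router rather than in application code:

import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.ProcessServicesClient;

public class CanaryStartSketch {
    public static void main(String[] args) {
        KieServicesClient client = KieServicesFactory.newKieServicesClient(
                KieServicesFactory.newRestConfiguration(
                        "http://smart-router:9000", "user", "password"));
        ProcessServicesClient processes = client.getServicesClient(ProcessServicesClient.class);

        // Send a small fraction of new instances to v2 and raise the ratio as confidence grows;
        // for blue/green the ratio simply flips from 0.0 to 1.0 in one step
        double canaryRatio = 0.1;
        String target = Math.random() < canaryRatio ? "project-a-v2" : "project-a-v1";
        processes.startProcess(target, "org.example.order-hardware");
    }
}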
We provide cloud images for the various services, in a true open-source manner of course. Each image is built by combining layers, allowing customers to configure and customize them if necessary.
But bringing all these services together is where the complexity lies, when running multiple independent projects with HA monitoring, controllers and routers, which is why we provide templates for common architectures like these.
So in the demo I will show a simple application for ordering hardware, being deployed on top of the Red Hat OpenShift Cloud Platform, more specifically a MiniShift instance running on my laptop.
And in the demo we will deploy our application, with our engine embedded, connect our monitoring console to it, and then (the blue part) dynamically deploy an updated version of our business logic into it.
We will monitor the behavior of both versions in parallel using a form of canary deployment, check whether service level agreements are satisfied in both cases and, if not, take appropriate action.