Wednesday, August 10, 2016

KIE Server (jBPM extension) brings document support

Another article in the KIE Server series about what's coming in version 7. This time it's about documents and their use in business processes.

Business processes quite frequently need collaboration around documents (in any meaning of the word), thus it is important to allow users to upload and download documents. jBPM already provided document support in version 6, though it was not exposed on KIE Server for remote interaction.

jBPM 7 will come with support for documents in KIE Server - covering both use within a process context and outside of it, via direct interaction with the underlying document storage.

jBPM and documents

To quickly recap how document support is provided by jBPM:

Documents are considered process variables and, as such, the pluggable persistence strategies apply to them. Persistence strategies allow various backend stores to be used for process variables, instead of always putting them into the jBPM database together with the process instance.

A document is represented by the org.jbpm.document.service.impl.DocumentImpl type and comes with a dedicated marshalling strategy for this type of variable, org.jbpm.document.marshalling.DocumentMarshallingStrategy. In turn, the marshalling strategy relies on org.jbpm.document.service.DocumentStorageService, an implementation specific to the document storage of your choice. jBPM comes with an out-of-the-box implementation of the storage service that simply uses the file system as the underlying storage.
Users can implement an alternative DocumentStorageService to plug in any kind of storage, such as a database or an ECM system.

KIE Server in version 7 provides full support for the usage described above - including pluggable DocumentStorageService implementations - and extends it a bit further. It exposes a REST API on top of org.jbpm.document.service.DocumentStorageService to allow easy access to the underlying documents without always going through process instance variables, while still allowing documents to be accessed from within a process instance.

KIE Server provides the following endpoints to deal with documents:

  • list documents - GET - http://host:port/kie-server/services/rest/server/documents
    • accepts page and pageSize as query parameters to control paging
  • create document - POST - http://host:port/kie-server/services/rest/server/documents
    • body: DocumentInstance representation in one of the supported formats (JSON, JAXB, XStream)
  • delete document - DELETE - http://host:port/kie-server/services/rest/server/documents/{DOC_ID}
  • get document (including content) - GET - http://host:port/kie-server/services/rest/server/documents/{DOC_ID}
  • update document - PUT - http://host:port/kie-server/services/rest/server/documents
    • body: DocumentInstance representation in one of the supported formats (JSON, JAXB, XStream)
  • get content - GET - http://host:port/kie-server/services/rest/server/documents/{DOC_ID}/content

NOTE: The same operations are also supported over JMS.
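To illustrate the create endpoint, here is a minimal sketch (Node.js flavoured, using Buffer for base64) of how a JSON body for POST .../documents could be built. The 'document-*' field names follow the keys seen in the list response used later in this article ('document-instances', 'document-id'); treat the exact set of fields as an assumption and verify it against your KIE Server version.

```javascript
// Sketch only: build a DocumentInstance-style JSON payload for the
// documents endpoint. Field names are assumptions based on the list
// response shown later in this article.
function buildDocumentPayload(name, contentBytes) {
  return {
    "document-name": name,
    "document-size": contentBytes.length,
    "document-last-mod": Date.now(),
    // the binary content travels as base64 text inside the JSON body
    "document-content": Buffer.from(contentBytes).toString("base64")
  };
}

// hypothetical usage with some in-memory content
var payload = buildDocumentPayload("offer.txt", Buffer.from("translated text"));
```

The resulting object can then be sent with XMLHttpRequest (or any HTTP client) together with a Basic Authorization header, the same way the sample client later in this article does.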

Documents in action

Let's see this in action by going over a very simple use case:
  • Deploy the translations project (part of the jbpm-playground repository) to KIE Server
  • Create a new translation process instance from the workbench
  • Create a new translation process instance from a JavaScript client - a simple web page
  • Download and remove documents from the JavaScript client

As can be seen in the screencast above, there is smooth integration between the workbench, KIE Server and the JavaScript client. What's more, KIE Server accepts all the data over a single endpoint - no separate upload of the document followed by a start of the process.

Important note: be really cautious when using the delete operation of the KIE Server documents endpoint, as it removes the document completely, meaning there will be no access to it from the process instance (as presented in the screencast). Moreover, the process instance won't be aware of the removal, as it considers the document storage an external system.

Sample source

For those who would like to try it out themselves, here is the JavaScript client (a simple web page) that was used for the example screencast. Please make sure you deploy it on the KIE Server instance to avoid running into CORS-related issues.

<title>Send document to KIE Server</title>
<style type="text/css">
table.gridtable {
  font-family: verdana,arial,sans-serif;
  border-width: 1px;
  border-color: #666666;
  border-collapse: collapse;
}
table.gridtable th {
  border-width: 1px;
  padding: 8px;
  border-style: solid;
  border-color: #666666;
  background-color: #dedede;
}
table.gridtable td {
  border-width: 1px;
  padding: 8px;
  border-style: solid;
  border-color: #666666;
  background-color: #ffffff;
}
</style>

<script type='text/javascript'>
  var user = "";
  var pwd = "";
  var startTranslationProcessURL = "http://localhost:8230/kie-server/services/rest/server/containers/translations/processes/translations/instances";
  var documentsURL = "http://localhost:8230/kie-server/services/rest/server/documents";

  var srcData = null;
  var fileName = null;
  var fileSize = null;

  function encodeImageFileAsURL() {
    var filesSelected = document.getElementById("inputFileToLoad").files;
    if (filesSelected.length > 0) {
      var fileToLoad = filesSelected[0];
      fileName = fileToLoad.name;
      fileSize = fileToLoad.size;
      var fileReader = new FileReader();

      fileReader.onload = function(fileLoadedEvent) {
        var local = fileLoadedEvent.target.result; // data URL with base64 content
        // strip the "data:<mime>;base64," prefix - KIE Server expects raw base64
        srcData = local.replace(/^data:.*\/.*;base64,/, "");

        console.log("Converted Base64 version is " + srcData);
      };
      fileReader.readAsDataURL(fileToLoad);
    } else {
      alert("Please select a file");
    }
  }

  function startTranslationProcess() {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', startTranslationProcessURL);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.setRequestHeader("Authorization", "Basic " + btoa(user + ":" + pwd));
    xhr.onreadystatechange = function () {
      if (xhr.readyState == 4 && xhr.status == 201) {
        // process instance created - refresh the document list
        loadDocuments();
      }
    };
    var uniqueId = generateUUID();
    xhr.send(JSON.stringify({
      "uploader_name" : document.getElementById("inputName").value,
      "uploader_mail" : document.getElementById("inputEmail").value,
      "original_document" : {
        "DocumentImpl" : {
          "identifier" : uniqueId,
          "name" : fileName,
          "link" : uniqueId,
          "size" : fileSize,
          "lastModified" : Date.now(),
          "content" : srcData,
          "attributes" : null
        }
      }
    }));
  }

  function deleteDoc(docId) {
    var xhr = new XMLHttpRequest();
    xhr.open('DELETE', documentsURL + "/" + docId);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.setRequestHeader("Authorization", "Basic " + btoa(user + ":" + pwd));
    xhr.onreadystatechange = function () {
      if (xhr.readyState == 4 && xhr.status == 204) {
        // document removed - refresh the document list
        loadDocuments();
      }
    };
    xhr.send();
  }

  function loadDocuments() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', documentsURL);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.setRequestHeader("Authorization", "Basic " + btoa(user + ":" + pwd));
    xhr.onreadystatechange = function () {
      if (xhr.readyState == 4 && xhr.status == 200) {
        var divContainer = document.getElementById("docs");
        divContainer.innerHTML = "";
        var documentListJSON = JSON.parse(xhr.responseText);
        var documentsJSON = documentListJSON['document-instances'];
        if (documentsJSON.length == 0) {
          divContainer.innerHTML = "No documents found";
          return;
        }
        // collect the column names from the returned documents
        var col = [];
        for (var i = 0; i < documentsJSON.length; i++) {
          for (var key in documentsJSON[i]) {
            if (col.indexOf(key) === -1) {
              col.push(key);
            }
          }
        }
        var table = document.createElement("table");
        table.className = "gridtable";

        // header row
        var tr = table.insertRow(-1);
        for (var i = 0; i < col.length; i++) {
          var th = document.createElement("th");
          th.innerHTML = col[i];
          tr.appendChild(th);
        }
        var downloadth = document.createElement("th");
        downloadth.innerHTML = 'Download';
        tr.appendChild(downloadth);
        var deleteth = document.createElement("th");
        deleteth.innerHTML = 'Delete';
        tr.appendChild(deleteth);

        // one row per document, plus download/delete buttons
        for (var i = 0; i < documentsJSON.length; i++) {
          tr = table.insertRow(-1);
          for (var j = 0; j < col.length; j++) {
            var tabCell = tr.insertCell(-1);
            tabCell.innerHTML = documentsJSON[i][col[j]];
          }
          var tabCellGet = tr.insertCell(-1);
          tabCellGet.innerHTML = '<button onclick="window.open(\'' + documentsURL + '/' + documentsJSON[i]['document-id'] + '/content\')">Download</button>';

          var tabCellDelete = tr.insertCell(-1);
          tabCellDelete.innerHTML = '<button onclick="deleteDoc(\'' + documentsJSON[i]['document-id'] + '\')">Delete</button>';
        }
        divContainer.appendChild(table);
      }
    };
    xhr.send();
  }

  function generateUUID() {
    var d = new Date().getTime();
    var uuid = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
      var r = (d + Math.random()*16)%16 | 0;
      d = Math.floor(d/16);
      return (c=='x' ? r : (r&0x3|0x8)).toString(16);
    });
    return uuid;
  }
</script>

<h2>Start translation process</h2>
Name: <input name="name" type="text" id="inputName"/><br/><br/>
Email: <input name="email" type="text" id="inputEmail"/><br/><br/>
Document to translate: <input id="inputFileToLoad" type="file" onchange="encodeImageFileAsURL();" /><br/><br/>
<input name="send" type="submit" onclick="startTranslationProcess();" /><br/><br/>
<h2>Available documents</h2>
<button onclick="loadDocuments()">Load documents!</button>
<div id="docs"></div>


And as usual, share your feedback, as that is the best way to get the improvements that are important to you.

Wednesday, July 27, 2016

Knowledge Driven Microservices

In the area of microservices, more and more people are looking into lightweight, domain-focused IT solutions. Regardless of how you look at microservices, the overall idea is to make sure each service does isolated work and doesn't cross the border of the domain it should cover.
That way of thinking made me look into how to leverage the KIE (Knowledge Is Everything) platform to bring in the business aspect and reuse business assets you might already have - that is:
  • business rules
  • business process
  • common data model
  • and possibly more... depending on your domain
In this article I'd like to share the idea that I presented at DevConf.cz and JBCNConf this year. 

Since there is huge support for microservice architectures out there in the open source world, I'd like to present one set of tools you can use to build knowledge driven microservices. Keep in mind that these are just the tools, and they might (and most likely will) be replaced in the future.

Tools box

jBPM - for process management
Drools - for rule evaluation
Vert.x - for the application infrastructure, binding it all together
Hazelcast - for cluster support in distributed environments

Use case - sample

The overall use case was to provide a basic bank loan solution that processes loan applications. The IT solution is partitioned into the following services:

  • Apply for loan service
    • Main entry point to the loan request system
    • Allows the applicant to submit a loan request that consists of: 
      • applicant name 
      • monthly income 
      • loan amount 
      • length in years to pay off the loan

  • Evaluate loan service
    • Rule-based evaluation of incoming loan applications 
      • Low risk loan 
        • when the loan request is for an amount lower than 1000 it’s considered low risk and thus is auto approved 
      • Basic loan 
        • when the amount is higher than 1000 and the length is less than 5 years - requires the clerk approval process 
      • Long term loan 
        • when the amount is higher than 1000 and the length is more than 5 years - requires manager approval and might need special terms to be established
  • Process loan service
    • Depending on the classification of the loan, different bank departments/teams will be involved in decision making about a given loan request 
      • Basic loans department 
        • performs a background check on the applicant and either approves or rejects the loan 
      • Long term loans department 
        • requires management approval to ensure a long term commitment can be accepted for the given application.
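In the demo the evaluation rules are expressed as Drools rules inside the kjar. As a plain-code illustration only (not the demo's actual DRL - and note the source text leaves the boundary at exactly 5 years open, so the choice below is arbitrary), the classification logic boils down to:

```javascript
// Illustrative sketch of the loan classification described above;
// the demo implements this as Drools rules, not JavaScript.
function classifyLoan(amount, lengthInYears) {
  if (amount < 1000) {
    return "low risk";   // auto approved
  }
  if (lengthInYears < 5) {
    return "basic";      // requires clerk approval process
  }
  return "long term";    // requires manager approval, possibly special terms
}
```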


  • Each service is completely self-contained 
  • Knowledge driven services are deployed with a kjar - a knowledge archive that provides the business assets (processes, rules, etc.)
  • Services talk to each other by exchanging data - business data 
  • Services can come and go as we like - dynamically increasing or decreasing the number of instances of a given service 
  • no API in the strict sense of the word - the API is the data

More if you're interested...

Complete presentation from JBCNConf and video from DevConf.cz conference.

Presentation at DevConf.cz

In case you'd like to explore the code or run it yourself, have a look at the complete source code of this demo on GitHub.

Thursday, July 14, 2016

jBPM v7 - workbench and kie server integration

As part of the ongoing development of jBPM version 7, I'd like to give a short preview of one of the changes that are coming - in particular, changes to how the workbench and KIE Server are integrated. In version 6 (when KIE Server was introduced with BPM capability) we had two independent execution servers:

  • one embedded in the workbench 
  • another in KIE Server
In many cases this caused a bit of confusion, as users expected to see processes (and tasks, jobs, etc.) created in KIE Server via the workbench UI. To achieve that, users were pointing the workbench and KIE Server at the same database, which led to a number of unexpected issues, as the two were designed differently and were not intended to work in parallel.

jBPM version 7 is addressing this problem in two ways:
  • it removes the duplication of execution servers - only KIE Server will be available - no execution in the workbench
  • it integrates the workbench with KIE Server(s) so that its runtime views (e.g. process instances, definitions, tasks) can be used with KIE Server as the backend

While the first point is rather clear and obvious, the second takes a bit to see its full power. It's not only about letting users use the workbench UI to start processes or interact with user tasks; it actually allows the flexible architecture of KIE Server to be fully utilized (more on KIE Server can be found in the previous blog series).
In version 6.4 a new Server Management UI was introduced to allow easy and efficient management of KIE Servers. This came with the concept of server templates - a server template is a definition of the runtime environment regardless of how many physical instances will run with that definition. That in turn allows administrators to define a partitioned environment where different server templates represent different parts of the organization or domain coverage.

Server template consists of:
  • name
  • list of assigned containers (kjars)
  • list of available capabilities
Once any KIE Server starts and connects to the workbench (the workbench acts as controller), it will be presented in server management under remote servers. Remote servers reflect the current state of the controller's knowledge - meaning the list only changes upon two events triggered by KIE Servers:
  • start of a KIE Server in managed mode - it connects to the controller and registers itself as a remote server
  • shutdown of a KIE Server in managed mode - it notifies the controller to unregister it from the remote servers
With this setup users can create as many server templates as they need. Moreover, each server template can be backed by as many KIE Server instances as needed. That gives complete control over how individual server templates (and by that, parts of your business domain) scale individually.

So enough of the introduction; let's see how it got integrated for execution. Since there is no longer an execution server embedded in the workbench, all execution will be carried out by KIE Server(s). To accomplish this, the workbench internally relies on two parts:
  • server management (that maintains server templates) to know what is available and where
  • kie server client to interact with remote servers
Server management is used to collect information about:
  • server templates whenever a project is to be deployed - regardless of whether there are any remote servers or not - again, this is just an update to the definition of the server
  • remote servers whenever KIE Server interaction is required - start process, get process instances, etc.
NOTE: in case multiple server templates are available, a selection box is shown on screen so users can decide which server template they are going to interact with. Again, users do not care about individual remote servers, as they represent the same setup; it's not important which server instance handles a given request, as long as one of them is available.
Server templates that do not have any remote servers available won't be visible on the list of server templates.
And when there is only one server template, selection is not required and that one becomes the default - for both deployment and runtime operations.

In the top right corner users can find the server template selection button, in case more than one server template is available. Once selected, the choice is preserved across screen navigation, so it needs to be selected only once.

Build & Deploy has been updated to take advantage of the new server management as well. Whenever users decide to build and deploy:
  • if there is only one server template:
    • it gets selected as default
    • the artifact name is used as the container id
    • by default the container is started
  • if there is more than one server template available, the user is presented with an additional popup window to select:
    • container id
    • server template
    • whether the container should be started or not

That concludes the introduction and the basic integration between KIE Server and the workbench. Let's now look at what's included and what's excluded from the workbench point of view (or the differences users might notice when switching from 6 to 7).

First of all, the majority of runtime operations are supposed to work exactly the same way, which includes:
  • Process definition view
    • process definition list
    • process definition details
    • Operations
      • start process instance (including forms)
      • visualize process definition diagram

  • Process instance view
    • process instance list (both predefined and custom filters)
    • process instance details
    • process instance variables
    • process instance documents
    • process instance log
    • operations
      • start process instance (including forms)
      • signal process instance (including bulk)
      • abort process instance (including bulk)
      • visualize process instance progress via diagram

  • Tasks instance view
    • task list (both predefined and custom filters)
    • task instance details (including forms)
    • life cycle operations of a task (e.g. claim, start, complete)
    • task assignment
    • task comments
    • task log

  • Jobs view
    • jobs list (both predefined and custom filters)
    • job details
    • create new job
    • depending on status cancel or requeue jobs

  • Dashboards view
    • out of the box dashboards for processes and tasks

All of these views retrieve data from the remote KIE Server, which means the workbench does not need any data sources defined. Even the default one that comes with WildFly is not needed - not even for dashboards :) With that we have a very lightweight workbench that comes with excellent authoring and management capabilities.

That leads us to the last section of this article, which explains what changed and what was removed.

Let's start with changes that are worth noting:
  • asynchronous processing of authoring REST operations has been moved from the jBPM executor to the UberFire async service - this makes the biggest difference in clustered setups, where only the cluster member that accepted the request will know its status
  • build & deploy from the project explorer is unified - regardless of whether the project is managed or unmanaged - there are only two options
    • compile - which does only in memory project build
    • build and deploy - which includes build, deploy to maven and provision to server template
Now moving on to what was removed:

  • since there is no jBPM runtime embedded in the workbench, there are no REST or JMS interfaces for jBPM; the REST interfaces for the authoring part are unchanged (create org unit, repository, compile project, etc.)
  • jobs settings are no longer available, as they do not make much sense in the new (distributed) setup - the configuration of KIE Servers is currently done at the server instance level
  • ad hoc tasks are temporarily removed and will be reintroduced as part of case management support, where they actually belong
  • asset management is removed in the form it was known in v6 - the parts that are kept are:
    • managed repositories that allow single or multi module projects
    • automatic branch creation - dev and release
    • repository structure management where users can manage their modules and create additional branches
    • project explorer still supports switching between branches as it used to
  • asset management won't have support for asset promotion, build of projects or release of projects
  • send task reminders - it was a sort of hidden feature, and more of an admin one, so it's going to be added as part of the admin interface for workbench/KIE Server

Enough talking (... writing/reading, depending on your point of view) - it's time to see it in action. Following are two screencasts showing the different use cases covered.

  • Case 1 - from zero to full speed execution
    • Create new repository and project
    • Create data object
    • Create process definition with user task and forms that uses created data object
    • Build and deploy the project
    • Start process instance(s)
    • Work on tasks
    • Visualize the progress of process instance
    • Monitor via dashboards

  • Case 2 - from existing project to document capable processes
    • Server template 1
    • Deploy an already built project (translations)
    • Create process instance that includes document upload
    • Work on tasks
    • Visualize process instance details (including documents)

    • Server template 2
    • Deploy an already built project (async-example)
    • Create a process instance to check the weather in the US based on a zip code
    • Work on tasks 
    • Visualize process instance progress - this project does not have image attached so it comes blank
    • Monitor via dashboards

Before we end, a short note for those who want to try it out. Since we have integration with KIE Server and, as you noticed, it does not require any additional login to KIE Server (the workbench uses the logged-in user), a small bit of WildFly configuration is needed:
the workbench comes with an additional login module as part of kie-security.jar, so to enable smooth integration when it comes to authentication, please declare the following login module in the standalone.xml of your WildFly,

so that the default "other" security domain looks like this:
  <security-domain name="other" cache-type="default">
    <authentication>
      <login-module code="Remoting" flag="optional">
        <module-option name="password-stacking" value="useFirstPass"/>
      </login-module>
      <login-module code="RealmDirect" flag="required">
        <module-option name="password-stacking" value="useFirstPass"/>
      </login-module>
      <login-module code="org.kie.security.jaas.KieLoginModule" flag="optional" module="deployment.kie-wb.war"/>
    </authentication>
  </security-domain>

The important element is the module attribute (module="deployment.kie-wb.war"), as it might differ between environments - it relies on the actual file name of the kie-wb.war. Replace it to match the name in your environment.

NOTE: this is only required for kie-wb and not for kie-drools-wb running on WildFly. The current state is that this works on WildFly/EAP 7 and Tomcat; WebSphere and WebLogic might come later...

That's all for now; comments and ideas are more than welcome.

Friday, April 15, 2016

KIE Server clustering and scalability

This article is another in the KIE Server series, and this time we'll focus on clustering and scalability.

KIE Server is by nature a lightweight and easily scalable component. Compared to the execution environment included in the KIE workbench, it can be summarized as follows:

  • allows partitioning based on deployed containers (kjars)
    • in the workbench all containers are deployed to the same runtime
  • allows individual instances to be scaled independently from each other
    • in the workbench, scaling means scaling all kjars at once
  • can be easily distributed across the network and managed by a controller (the workbench by default)
    • the workbench is both management and execution, which makes it a single point of failure
  • clustering of KIE Server does not require any additional components in the infrastructure 
    • the workbench requires ZooKeeper and Helix for a clustered Git repository
So what does it mean to scale KIE Server?
First of all, it allows administrators to partition knowledge between different KIE Server instances. With that said, HR department processes and rules can run on one set of KIE Server instances, while the Finance department has its own set. Each department's administrator can then easily scale based on need without affecting the others. That gives us a unique opportunity to really focus on the components that require additional processing power and simply add more instances - either on the same server or distributed across your network.

Let's look at the most common runtime architecture for a scalable KIE Server environment.

As described above, the basic runtime architecture consists of multiple independent sets of KIE Servers, where the number of actual server instances can vary. In the diagram above each set has three instances, but in reality they can have as many (or as few) as needed.

The controller in turn has three server templates - HR, Finance and IT. Each server template is defined with an identifier, which KIE Server instances reference via a system property called org.kie.server.id.

In the above screenshot, server templates are defined in the controller (workbench), which becomes the single point of configuration and management of our KIE Servers. Administrators can add or remove, start or stop different containers, and the controller is responsible for notifying all KIE Server instances (that belong to a given server template) about the performed operations. Moreover, when new KIE Server instances are added to the set, they directly receive all containers that should be started, thereby increasing processing power.

As mentioned, this is the basic setup, meaning the server instances are used by calling them directly - each individual KIE Server instance. This is a bit troublesome, as users/callers have to deal with instances that are down, etc. To solve this we can put a load balancer in front of the KIE Servers and let it do the heavy lifting for us. Users then simply call a single URL, which is configured to work with all instances in the back end. One choice of load balancer is Apache HTTP Server with the mod_cluster plugin - an efficient and highly configurable load balancer.

In version 7, the KIE Server client will come with a pluggable load balancer implementation, so when using the KIE Server client users can simply skip the additional load balancer as an infrastructure component. Though it will provide load balancing and failure discovery support, it's a client-side load balancer that has no knowledge of the underlying backend servers, and thus won't be as efficient as mod_cluster can be.

So this covers scalability of KIE Server instances, as they can easily be multiplied to provide more execution power, with distribution both at the network and the knowledge (containers) level. But looking at the diagram, a single point of failure remains: the controller. Remember that in managed mode (where KIE Server instances depend on the controller) they are limited when the controller is down. Let's recap how KIE Server interacts with the controller:

  • when KIE Server starts it attempts to connect to any of the defined controllers (if any)
  • it connects to only one - the first for which the connection succeeds
  • the controller then provides the list of containers to deploy and the configuration
  • based on this information KIE Server deploys the containers and starts to serve requests
But what happens when none of the controllers can be reached when KIE Server starts? KIE Server will be pretty much useless, as it does not know which containers it should deploy. It will keep checking (at predefined intervals) whether a controller is available. Until a controller becomes available, KIE Server has no containers deployed and thus won't process any requests - the most likely response you'll get from KIE Server when trying to use it will be "no container found".

Note: this affects only KIE Servers that start after the controller went down; those that are already running are not affected at all.

So to solve this problem, the workbench (and by that the controller) should be scaled. Here the default configuration of a KIE workbench cluster applies, with Apache ZooKeeper and Apache Helix as part of the infrastructure.

In this diagram, we scale the workbench by using Apache ZooKeeper and Helix for a clustered Git repository. This gives us replication between the server instances that run the workbench, and thus several (synchronized) controller endpoints, ensuring KIE Server instances can reach a controller and collect the configuration and containers to be deployed.

As with the KIE Servers, the controllers can either be reached directly via independent endpoints or be fronted with a load balancer. KIE Server accepts a list of controllers, so a load balancer is not strictly required, though it is recommended, as the workbench is also (or even primarily) used by end users, who benefit from a load-balanced environment as well.

That concludes the description of clustering and scalability of KIE Server. To get the most out of it, let's now take a quick look at what's important when configuring such a setup.


We start with the configuration of the workbench - the controller. Most important for the controller is authentication, so that connecting KIE Server instances will be authorized. By default, KIE Server upon start will send a request with Basic authentication using the following credentials:
  • username: kieserver
  • password: kieserver1!
so to allow KIE Server to connect, make sure such a user exists in the application realm of your application server.

NOTE: the username and password can be changed on KIE Server side by setting following system properties:
  • org.kie.server.controller.user
  • org.kie.server.controller.pwd

This is the only thing needed on application server that hosts KIE workbench.

KIE Server
On the KIE Server side, there are several properties that must be set on each KIE Server instance. Some of them must be the same for all instances representing the same server template defined in the controller.
  • org.kie.server.id - identifier of the KIE Server that corresponds to the server template id; this must be exactly the same for all KIE Server instances that represent a given server template
  • org.kie.server.controller - comma-separated list of absolute URLs to the controller(s); this must be the same for all KIE Server instances that represent a given server template
  • org.kie.server.location - absolute URL where this KIE Server instance can be reached; this must be unique for each KIE Server instance, as it is used by the controller to deliver requested changes (e.g. start/stop container)
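The properties above can be sketched as a launch configuration; for illustration only (host names, ports, the server template id and the kie-wb context path are assumptions to adapt to your environment), a managed KIE Server on WildFly could be started like this:

```shell
./standalone.sh \
  -Dorg.kie.server.id=hr-servers \
  -Dorg.kie.server.controller=http://workbench-host:8080/kie-wb/rest/controller \
  -Dorg.kie.server.location=http://kieserver-host:8180/kie-server/services/rest/server
```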
Similar to how the workbench authenticates requests, KIE Server does the same; so to allow the controller to connect to a KIE Server instance (at the URL given as org.kie.server.location), the application realm of the server where the KIE Server instances run must have a user configured. By default, the workbench (controller) will use the following credentials:
  • username: kieserver
  • password: kieserver1!
so such a user must exist in the application realm. In addition, it must be a member of the kie-server role so that KIE Server will authorize its requests to the REST API.

NOTE: the username and password can be changed on the KIE workbench side by setting the following system properties:
  • org.kie.server.user
  • org.kie.server.pwd
There are other system properties that can be set (and most likely will be needed, depending on the KIE Server configuration you require). For those, look at the documentation.
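As a quick sketch, the three required properties can be assembled into JVM flags like this; the values below are examples for a hypothetical server template called production-servers, not defaults:

```python
# Example values for one KIE Server instance of a "production-servers" template;
# the id and controller list must match across all instances of the template,
# while the location must be unique per instance.
props = {
    "org.kie.server.id": "production-servers",
    "org.kie.server.controller": "http://wb-host:8080/kie-wb/rest/controller",
    "org.kie.server.location": "http://node1:8230/kie-server/services/rest/server",
}

# Turn each property into a -Dkey=value flag for the server startup command
flags = " ".join(f"-D{key}={value}" for key, value in props.items())
print(flags)
```

A second instance of the same template would reuse the same id and controller values but point org.kie.server.location at its own host.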

This configuration applies to any way you run KIE Server - standalone Wildfly, Wildfly domain mode, Tomcat, WAS or WebLogic. It does not really matter; as long as you set the properties above, you'll be ready to go with clustered and scalable KIE Server instances tailored to your domain.

That would be all for today, as usual comments are more than welcome.

Monday, 21 March 2016

Community extension to KIE Server - welcome Apache Thrift

In previous articles about KIE Server I described how it can be extended to bring in more features, starting with enhanced REST endpoints, through building additional transport layers, and finishing with building custom KIE Server client implementations.

It didn't take long and we got official confirmation that it works!!!

Maurice Betzel has done an excellent job and implemented KIE Server extensions that bring Apache Thrift into the picture. That allowed him to bridge the gap between Java and PHP to make use of rule evaluation on KIE Server.

KIE Server with Apache Thrift

I'd like to encourage everyone to look at the detailed description of Maurice's work and take it for a spin to see how powerful it is.

All the credit goes to Maurice, and I'd like to thank him as well for keeping me in the loop and verifying the extension mechanism of KIE Server in real life.

Friday, 4 March 2016

jBPM UI extension on KIE Server

KIE Server, first released in 6.3.0.Final with jBPM capabilities (among others), was purely focused on execution. It was, however, lacking some functionality that BPM users expect:

  • process diagram visualization 
  • process instance diagram visualization
  • process and task forms information 
Since KIE Server is an execution server, it does not come with any UI, so a custom UI needs to be built to interact with it. The technology used to build such a UI does not really matter and is left to developers to choose, though certain parts should be retrievable from KIE Server to improve the UI's capabilities.

One of the most desired use cases is to visualize the state of a given process instance, including graphical annotations about which nodes are active and which are already completed, showing the complete flow of the process instance.

This has been added to KIE Server as part of the jBPM UI extension and provides the following capabilities:
  • display process definition diagram as SVG
  • display annotated process instance diagram as SVG
    • completed nodes are greyed out
    • active nodes are marked in red
  • display structure of process forms
  • display structure of task forms
While displaying process diagrams is self-explanatory, the operations around forms might be a bit confusing, so let's go over them first to understand their usage.

The primary authoring environment is KIE workbench, where users can build various assets such as processes, rules, decision tables, data models and forms. Forms in the workbench are built with the Form Modeler, which integrates well with process and task variables, providing binding between inputs and outputs - how data is taken out of process/task variables and displayed in the form, and vice versa, how form data is put back into process variables.

Since KIE Server does not provide any UI, it cannot render task or process forms. It simply expects to be given data that will be mapped (by name) to process or task variables. While this is completely fine from an execution point of view, it's not so great from a UI and data collection standpoint. So, to ease things a bit in this area, KIE Server can now return the form structure, which can later be used to render the form with whatever UI technology/framework you like.

Let's take it for a test drive. We will use our well-known HR example to guide you through the usage of this jBPM UI extension to KIE Server.

Form operations

The first endpoint we are going to discuss returns the process form for a given process definition - similar to what you get when you start a process instance in the workbench.

  • http://localhost:8230/kie-server/services/rest/server/containers/hr/forms/processes/hiring
  • GET

  • hr - the container id
  • hiring - the process id
When you issue this request, you'll get the following response:

You can notice few important properties there:
  • form/name - hiring-taskform - the name of the form built in the Form Modeler - you'll find it in the workbench under the "Form definitions" section in the Project Explorer
  • form/field/name - the name of the first field on that form
  • under field properties you can find lots of details; depending on your form design you'll see more or less data, with the most important being:
    • fieldName
    • fieldRequired
    • readonly
    • inputBinding
    • outputBinding
This form structure directly translates to what KIE workbench will render when you start the hiring process.
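For illustration, a client could fetch that form structure as follows. This sketch uses Python's urllib with the default kieserver credentials (an assumption - use whatever user exists in your realm) and only builds the request, since actually sending it requires a running server:

```python
import base64
import urllib.request

# Endpoint from the HR example: process form for the "hiring" process
# in the "hr" container
base = "http://localhost:8230/kie-server/services/rest/server"
url = f"{base}/containers/hr/forms/processes/hiring"

req = urllib.request.Request(url, method="GET")
req.add_header("Accept", "application/json")  # use application/xml for XML
creds = base64.b64encode(b"kieserver:kieserver1!").decode("ascii")
req.add_header("Authorization", f"Basic {creds}")

# With a KIE Server running, this line would return the form structure:
# body = urllib.request.urlopen(req).read().decode("utf-8")
print(req.get_full_url())
```

The returned structure can then be fed into whatever form renderer your UI uses.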

A similar thing can be done for task forms, with a slightly different endpoint URL as it refers to (already active) tasks:

  • http://localhost:8230/kie-server/services/rest/server/containers/hr/forms/tasks/123
  • GET
  • hr - the container id
  • 123 - the task id

The same kind of content as for process forms is returned for tasks. You may notice that different data is filled in for different fields - some have inputBinding set, some have outputBinding set. 

So this structure corresponds to the form rendered by the workbench:

So with this you can build a custom renderer based on the same form structure that was designed in the Form Modeler that comes with KIE workbench.

Note: in the above example the content is XML, but by changing the Accept header to application/json you'll get JSON content instead.

Image operations

There are two operations available - get the "pure" process definition diagram, or get the annotated process instance diagram.

To get the process diagram, use the following endpoint: 
  • http://localhost:8230/kie-server/services/rest/server/containers/hr/images/processes/hiring
  • GET
  • hr - the container id
  • hiring - the process id
and this is what you'll get in your browser

To get an annotated process instance diagram, you first need to have an active process instance; once you have its process instance id, you can issue the following: 
  • http://localhost:8230/kie-server/services/rest/server/containers/hr/images/processes/instances/123
  • GET
  • hr - the container id
  • 123 - the process instance id
and you'll get this
Here you can see that the start event is greyed out, which means it was already completed, and the process instance is currently at the HR Interview task.

The content returned by the image operations is SVG, with MIME type application/svg+xml,

so make sure your client is capable of displaying SVG content to properly display the diagrams. Note that all major browsers do support SVG; if you can display a process diagram in KIE workbench with a given browser, you'll be fine.
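As an illustration (assuming the default kieserver user and an active process instance with id 123), a client could download the annotated diagram and save it to a file. The sketch below only builds the request, since executing it needs a running server:

```python
import base64
import urllib.request

base = "http://localhost:8230/kie-server/services/rest/server"
# Annotated diagram of process instance 123 in the "hr" container
url = f"{base}/containers/hr/images/processes/instances/123"

req = urllib.request.Request(url, method="GET")
creds = base64.b64encode(b"kieserver:kieserver1!").decode("ascii")
req.add_header("Authorization", f"Basic {creds}")

# With a KIE Server running, this would store the diagram for your custom UI:
# svg = urllib.request.urlopen(req).read().decode("utf-8")
# with open("instance-123.svg", "w", encoding="utf-8") as f:
#     f.write(svg)
print(req.get_full_url())
```

Dropping the /instances/123 part and using the process id instead gives you the pure definition diagram the same way.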

Now, the most important configuration parameter to enable image operations: KIE workbench by default does not store the SVG version of the process, which means the SVG will not be included in the kjar and thus won't be available to KIE Server. To take advantage of this feature, you need to enable it in the workbench configuration files.

Enable SVG on save in the workbench

Edit the jbpm.xml file, which is stored in (depending on your installation):
  • jbpm installer: 
    • jbpm-console.war/org.kie.workbench.KIEWebapp/profiles/jbpm.xml
  • manual installation 
    • kie-wb{version-container}.war/org.kie.workbench.KIEWebapp/profiles/jbpm.xml
  • Red Hat JBoss BPMS: 
    • business-central.war/org.kie.workbench.KIEWebapp/profiles/jbpm.xml
in this file you need to find 
        <storesvgonsave enabled="false"/>
and set it to true
        <storesvgonsave enabled="true"/>

Once this is enabled, (re)start the workbench and go to your process definition to save it again (any modification will do); that will trigger the SVG file for that process to be generated and stored in the kjar.

Then deploy that kjar to KIE Server and enjoy KIE Server serving process images for your custom UI.

That's it for the jBPM UI extension that is coming with 6.4.0.Final very soon, so stay tuned. 

Wednesday, 2 March 2016

Are you ready to dive into (wildfly) swarm?

KIE Server is a lightweight execution server that comes with various capabilities; out of the box, the following are included:

  • BRM - rules execution (Drools)
  • BPM - business process execution, task management, background jobs (jBPM)
  • BPM-UI - visualize your BPM components at runtime, such as process definitions and instances (since 6.4)
  • BRP - business resource planning (Optaplanner) (since 6.4)
By default it's packaged as a JEE application (web archive) and deployed to various containers, such as:
  • JBoss EAP
  • Wildfly
  • Tomcat
  • WebLogic
  • WebSphere
While all this is already quite good coverage, we don't stay idle and keep working on bringing you more. Let's see what's coming next...

All the hype about microservices is bringing in tons of new stuff that allows an alternative approach to packaging and deploying our systems, or services if you like. Taking into consideration the capabilities KIE Server comes with, it would be a crime not to take advantage of them to start building microservices, instead of rewriting all that functionality in a different way.

It's time to introduce Wildfly Swarm (to those that haven't heard about it yet) ...

Swarm offers an innovative approach to packaging and running JavaEE applications by packaging them with just enough of the platform to "java -jar" your application
So what does Wildfly Swarm mean in the context of KIE Server?

Actually it means a lot:

  • first of all, it allows us to build executable jars that bring KIE Server capabilities to a simple java -jar way of working, with all its power!
  • next, you can have an "executable kjar" just by starting it with an argument that identifies the kjar to be made available for execution (Group Artifact Version)
  • you can still run in managed mode - connected to a controller and managed from within it - but without a need to provision your application server

With this in mind let's take a look at how to use it with Wildfly Swarm.

  • Clone this repository kie-server-swarm into your local environment.
  • Build the project with maven (mvn clean package)
    • Make sure you run it with a recent version of Maven, otherwise you might run into build errors - I tested it with 3.3.9, so it certainly works with that
  • Once it's successfully built, you'll find the following file inside the target folder
    • kie-server-swarm-1.0-swarm.jar
  • Now you're ready to rock with KIE Server on Wildfly Swarm

But before we start our KIE Server on Swarm, let's look at the options we have for the project we just built. This project, same as KIE Server, is modularized and allows us to pick only the things we are interested in. While KIE Server allows disabling extensions at runtime (via system properties), sometimes it does not make sense to bring in lots of dependencies if they are not going to be used.

So you can build the project with following profiles:
  • BRM - includes the BRM capability of KIE Server, which allows rules execution only
    • no server components besides REST are configured
    • build it with - mvn clean package -PBRM
  • BPM - includes both the BRM and BPM capabilities of KIE Server - this is the default profile
    • configures Swarm to have transactions and data sources enabled
    • build it with - mvn clean package -PBPM or mvn clean package
So why is it important to have this done as profiles? Because the size of the resulting file (executable jar) will be smaller. Moreover, it reduces the number of things Swarm is going to configure and boot when we start our system. So keep this in mind, as it might come in handy one day or another :)

Let's get our hands dirty with running KIE Server on Wildfly Swarm

First, let's just start an empty server that will let us manage it manually - creating containers, running rules and processes via the REST API.

Make sure you're in the project folder (where you executed maven build) and then simply run this command:

java -Dorg.kie.server.id=swarm-kie-server -Dorg.kie.server.location=http://localhost:8380/server -Dswarm.port.offset=300 -jar target/kie-server-swarm-1.0-swarm.jar

Wait a while for Wildfly Swarm and KIE Server on it to boot. Once it's completed, you should be able to access it at http://localhost:8380/server

NOTE: since KIE Server requires authentication, whenever you attempt to access its REST endpoints you need to log on - by default you should be able to log on with kieserver/kieserver1!
You can customize users and roles by editing the following files:

Now let's examine a bit what all these parameters mean:

  • -Dorg.kie.server.id=swarm-kie-server - specifies the unique identifier of the KIE Server - it is important when running in managed mode, but it's good to always set it to make it a habit
  • -Dorg.kie.server.location=http://localhost:8380/server - specifies the actual location where our KIE Server is going to be available - this must be a direct URL to the actual instance even if it's behind a load balancer - again, important when running in managed mode
  • -Dswarm.port.offset=300 - sets a global port offset to avoid port conflicts when running many instances of Wildfly on the same machine

Next, let's run our first executable KJAR... to do so, we just extend the command from the first run and add an argument to the execution:

java -Dorg.kie.server.id=swarm-kie-server -Dorg.kie.server.location=http://localhost:8380/server -jar target/kie-server-swarm-1.0-swarm.jar org.jbpm:HR:1.0

as you can see, the only difference is the trailing argument org.jbpm:HR:1.0, which is the GAV of the KJAR that is going to be deployed upon start of KIE Server on Swarm. So with just a single command line we have a fully functional server with BPM capabilities and the HR project deployed to it.

Last but not least, let's run it in a fully managed way - with a controller.
Before you start Wildfly Swarm with KIE Server, make sure you start the controller (KIE workbench), so you'll see how nicely it registers automatically upon start.

Once the controller (workbench) is running, issue the following command:

java -Dorg.kie.server.id=swarm-kie-server -Dorg.kie.server.location=http://localhost:8380/server -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller -jar target/kie-server-swarm-1.0-swarm.jar

Again, there is a single parameter difference from the first command we used to start an empty KIE Server on Swarm - in this case it's the controller URL:
  • -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller
Make sure that this URL matches your deployed controller - it can differ in terms of:

  • host (localhost in this case)
  • port (8080 in this case)
  • context root (kie-wb in this case)
Now you're ready to rock with Wildfly Swarm and KIE Server to build your own microservices backed by business knowledge.

Enjoy your dive into Swarm and as usual comments are more than welcome.