Monday, June 26, 2017

KIE Server welcomes Narayana

KIE Server (with BPM capabilities) requires a database for persistence. That is a well-known fact, though properly managed persistence also requires a transaction manager that ensures the consistency of the data jBPM persists.

Since version 7, KIE Server is the only execution server provided out of the box (there is no execution server in the workbench), so it has received additional attention to make sure it performs in the best possible way.

KIE Server supports the following runtime environments:

  • WildFly 10.x
  • EAP 7.x
  • WebSphere 9
  • WebLogic 12.3
  • Tomcat 8.x

Since all of the above are supported for jBPM usage, they all must provide transaction manager capabilities. For the JEE servers (WildFly, EAP, WebSphere, WebLogic), KIE Server relies on what the application server provides. For Tomcat, though, the story is slightly different...

Tomcat does not have transaction manager capabilities, so to make use of jBPM/KIE Server on it, an external transaction manager had to be configured. Until now the recommendation was Bitronix, as the jBPM test suite was running on it and it provides integration with Tomcat (plus it covered database connection pooling and a JNDI provider for data source look-ups). But this has now changed...

Starting from jBPM 7.1, KIE Server on Tomcat runs with Narayana, the state-of-the-art transaction manager that integrates nicely with Tomcat and makes the configuration much easier than what was needed with Bitronix - and is more native to Tomcat users.

Before I jump into the details of how to configure it on Tomcat, I'd like to take the opportunity to give special thanks to:

Tom Jenkinson and Gytis Trikleris

for their tremendous help and excellent support while working on this change.

Installation notes - with BPM capabilities

Let's see what is actually needed to configure KIE Server on Tomcat with Narayana:
  • (1) Copy the following libraries into TOMCAT_HOME/lib
    • javax.security.jacc:javax.security.jacc-api
    • org.kie:kie-tomcat-integration
    • org.slf4j:slf4j-api
    • org.slf4j:slf4j-jdk14
  • (2) Configure users and roles in tomcat-users.xml (or a different user repository if applicable), for example as shown below
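    A minimal tomcat-users.xml sketch (the user name and password are purely illustrative; kie-server is the role required to call the KIE Server REST API):
     <tomcat-users>
       <role rolename="kie-server"/>
       <user username="kieserver" password="kieserver1!" roles="kie-server"/>
     </tomcat-users>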
  • (3) Configure the JACC Valve for security integration: edit TOMCAT_HOME/conf/server.xml and add the following in the Host section, after the last Valve declaration 
         <Valve className="org.kie.integration.tomcat.JACCValve" />
  • (4) Create setenv.sh|bat in TOMCAT_HOME/bin with the following content
    CATALINA_OPTS="
    -Djbpm.tsr.jndi.lookup=java:comp/env/TransactionSynchronizationRegistry 
    -Dorg.kie.server.persistence.ds=java:comp/env/jdbc/jbpm 
    -Djbpm.tm.jndi.lookup=java:comp/env/TransactionManager 
    -Dorg.kie.server.persistence.tm=JBossTS 
    -Dhibernate.connection.release_mode=after_transaction 
    -Dorg.kie.server.id=tomcat-kieserver 
    -Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server 
    -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller
    "
       The first five properties (jbpm.tsr.jndi.lookup, org.kie.server.persistence.ds, jbpm.tm.jndi.lookup, org.kie.server.persistence.tm and hibernate.connection.release_mode) are related to persistence and transactions.
       The remaining org.kie.server.* properties are general KIE Server parameters needed when running in managed mode.
  • (5) Copy the JDBC driver jar into TOMCAT_HOME/lib, depending on the database of your choice
  • (6) Configure the data source for the jBPM extension of KIE Server 
           Edit TOMCAT_HOME/conf/context.xml and add the following within the Context tags of the file:
     <Resource 
           name="sharedDataSource" 
           auth="Container" 
           type="org.h2.jdbcx.JdbcDataSource" 
           user="sa" 
           password="sa"
           url="jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;MVCC=TRUE" 
           description="H2 Data Source" 
           loginTimeout="0" 
           testOnBorrow="false"
           factory="org.h2.jdbcx.JdbcDataSourceFactory"/>
            This is only an example using H2 as the database; for other databases look at
            Tomcat's configuration docs.

            One important note: please keep the name of the data source as sharedDataSource

  • (7) Last but not least, configure XA recovery 
  • Create an XA recovery file (xa-recovery-properties.xml) next to context.xml, with the following content: 
    <?xml version="1.0" encoding="UTF-8"?> 
    <!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd"> 
    <properties> 
      <entry key="DB_1_DatabaseUser">sa</entry> 
      <entry key="DB_1_DatabasePassword">sa</entry> 
      <entry key="DB_1_DatabaseDynamicClass"></entry> 
      <entry key="DB_1_DatabaseURL">java:comp/env/h2DataSource</entry> 
    </properties> 

    Append the following to CATALINA_OPTS in the setenv.sh|bat file: 
    -Dcom.arjuna.ats.jta.recovery.XAResourceRecovery1=com.arjuna.ats.internal.jdbc.recovery.BasicXARecovery\;abs://$CATALINA_HOME/conf/xa-recovery-properties.xml\ \;1
    BasicXARecovery supports the following parameters: 
    • the path to the properties file 
    • the number of connections defined in the properties file
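
Once Tomcat is started with all of the above in place, you can sanity-check the setup with a plain REST call against the endpoint configured in step 4 (credentials are whatever you set up in step 2):
curl -u kieserver:kieserver1! http://localhost:8080/kie-server/services/rest/server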


Installation notes - without BPM capabilities

In case you want to use KIE Server without BPM capabilities - for instance for Rules or Planning only - you can skip steps 5 through 7 entirely (and in step 4 keep only the general KIE Server parameters) and still run KIE Server on Tomcat.

With that, I'd like to say welcome to Narayana in KIE Server - well done!

Friday, April 28, 2017

Control business rules execution from your processes

Business processes benefit a lot from business rules; in fact, they are practically a must-have in today's constantly changing world. jBPM comes with excellent integration with Drools and provides multiple levels of business rules integration:

  • BPMN2 Business Rule task
  • Conditional events (start, intermediate and boundary)
  • Sequence flow conditions

The default integration with rules is that the process engine shares the same working memory with the rules engine. In many cases that is desired, but the main limitation is that the life cycles of rules and processes are tightly coupled - they are part of the same project (aka kjar), or in other words the same knowledge base. 

This in turn makes it harder to decouple them and manage the life cycle of business rules independently. There are domains out there with a much higher frequency of change in the rules than in the processes, so this tight integration makes it hard to upgrade rules without affecting processes - keep in mind that processes are usually long running and can't easily be migrated to the next version just because of rule changes.

To resolve this, an alternative way of dealing with business rules from within a process has been introduced. It supports only business rule tasks, not conditional events or sequence flow conditions. 

What's in the toolbox?

This new feature introduces two new components that serve different use cases:
  • Business Rule Task, based on a work item handler
  • Remote Business Rule Task, based on a work item handler and the KIE Server Client
Both of them support two types of rule evaluation:
  • DRL
  • DMN 
Let's look at them in detail, starting with the Business Rule Task handler.

Business Rule Task handler

The main use case for this alternative business rule task is to decouple the process knowledge base from the rule knowledge base. That means we will have two projects - two kjars - responsible for handling these two kinds of business assets. The rule project is solely responsible for defining business rules, which can then be updated at any time without affecting processes. 



Next, the process project (kjar) focuses only on business processes and hands rule evaluation over to the rule project. The only common place is the Business Rule Task handler, which defines what kjar (rule project) should be used within the process project. 

Since the alternative business rule task is based on a work item node, it requires a work item handler to be registered in the process project. As usual, the work item handler is registered in the deployment descriptor (accessible via the project editor).


The Business Rule Task handler is implemented by org.jbpm.process.workitem.bpmn2.BusinessRuleTaskHandler and it expects the following arguments (a registration example is shown below the list):
  • groupId - mandatory argument referring to the rule project's groupId
  • artifactId - mandatory argument referring to the rule project's artifactId
  • version - mandatory argument referring to the rule project's version 
  • scanner interval - optional, used to schedule periodic scans for rule updates - in milliseconds
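
A sketch of what that registration could look like in the deployment descriptor (the rule project's GAV org.example:rules:1.0.0, the 10-second scanner interval and the work item name BusinessRuleTask are illustrative assumptions):

<work-item-handlers>
  <work-item-handler>
    <resolver>mvel</resolver>
    <identifier>new org.jbpm.process.workitem.bpmn2.BusinessRuleTaskHandler("org.example", "rules", "1.0.0", 10000)</identifier>
    <parameters/>
    <name>BusinessRuleTask</name>
  </work-item-handler>
</work-item-handlers>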
The Business Rule Task handler supports two types of rule evaluation:
  • DRL - allows the following data inputs to be specified on task level to control its execution (see the rule sketch after this list)
    • Language - set to DRL
    • KieSessionName - name of the kie session as defined in kmodule.xml of the rule project - can be left empty, which means the default kie session is used
    • KieSessionType - stateless or stateful - stateless is the default if not given
    • all other data inputs will be inserted into working memory as facts and thus will be available for rule evaluation. Note that data inputs and outputs should be matched by name to properly retrieve updated facts and put them back into process variables
  • DMN - allows the following data inputs to be specified on task level:
    • Language - set to DMN
    • Namespace - DMN namespace to be used
    • Model - model to be used
    • Decision - optional decision name to be used
    • all other data inputs are added to the DMN context for evaluation. All results from DMNResult are available as data outputs, referred to by name as defined in DMN
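
To make the DRL flavour concrete, a minimal rule in the rule project might look like this (a sketch assuming a shared model class org.example.model.Person with age and adult properties; the fact arrives as a task data input and flows back through a matching data output):

package org.example.rules;

import org.example.model.Person;

// fires for any Person fact inserted from the task's data inputs
rule "Adult check"
when
    $p : Person(age >= 18, adult == false)
then
    // the updated fact is retrieved by name and mapped back to a process variable
    modify($p) { setAdult(true) }
end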
NOTE: Since the projects are decoupled, they need a common data model that both will operate on. Currently that model must be on the parent class loader instead of at kjar level, because classes loaded by different class loaders would prevent rules from matching them properly. In the case of the execution server (KIE Server), the model jar must be added to WEB-INF/lib of the KIE Server app and added to both projects as a dependency in provided scope.


    Remote Business Rule Task handler

    The remote flavour of the handler comes with the KIE Server Client and aims at providing even more decoupling, as it communicates with an external decision service (KIE Server) to evaluate rules. This provides more flexibility and removes the requirement for the common data model to be present on the application level. With the remote business rule task, users can simply define the model project as a project dependency (with default scope) for both projects - rules and processes. Since marshalling is involved, there is no problem with class loaders, and everything works as expected.

    From the project authoring point of view there is only one difference - the Business Rule Task requires an additional data input - ContainerId - that defines which container on the execution server it should target. Obviously this can be a container alias as well, for more flexibility.



    Similar to the Business Rule Task, this handler supports both DRL and DMN. Depending on which type is needed, it supports different data inputs:
    • DRL
      • Language - DRL
      • ContainerId - container id or alias of the container to be used to evaluate rules
      • KieSessionName - name of the kie session on execution server for given container
    • DMN
      • Language - DMN
      • ContainerId - container id or alias of the container to be used to evaluate rules
      • Namespace - DMN namespace to be used
      • Model - model to be used
      • Decision - optional decision name to be used
    Any other data input will be sent as data to the execution server. Results handling is exactly the same as with the Business Rule Task handler.


    The Remote Business Rule Task handler is implemented by org.kie.server.client.integration.RemoteBusinessRuleTaskHandler. The handler expects the following arguments:
    • serverUrl - location of the execution server that this handler should use; it can be a comma-separated list of URLs that will be used by a load balancer
    • username - user to be used to authenticate calls to the execution server
    • password - password to be used to authenticate calls to the execution server
    • classLoader - the project's class loader, to gain access to custom types 

    Registration of the handler is done exactly the same way, via the deployment descriptor, for example as shown below. 
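
    A sketch of the corresponding deployment descriptor entry (server URL and credentials are illustrative; this assumes the descriptor's MVEL resolver exposes the project's class loader as the classLoader variable):

    <work-item-handlers>
      <work-item-handler>
        <resolver>mvel</resolver>
        <identifier>new org.kie.server.client.integration.RemoteBusinessRuleTaskHandler("http://localhost:8080/kie-server/services/rest/server", "kieserver", "kieserver1!", classLoader)</identifier>
        <parameters/>
        <name>BusinessRuleTask</name>
      </work-item-handler>
    </work-item-handlers>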


    That concludes the description of the new capabilities. If you'd like to try it yourself, simply clone this repository to your workbench, then build and run it!

    Here you can see this being done...


    That's all for now, though stay tuned as more will come :)




    Thursday, April 13, 2017

    Email configuration for jBPM

    A quite frequent question from users is how to send emails from within a process instance. This is a rather simple activity, but it requires a bit of configuration upfront.

    So let's start with the application server configuration to provide JavaMail Sessions via JNDI directly to jBPM, so the process engine does not need to be concerned too much with infrastructure details.

    In this article, WildFly 10 is used as the application server and the configuration is done via jboss-cli. Gmail is used as the mail server, as it is quite a common choice.

    Configure WildFly mail session

    This is quite simple, as it only requires configuring a socket binding and then the actual JNDI resource of mail session type:

    /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=jbpm-mail-smtp/:add(host=smtp.gmail.com, port=465)

    /subsystem=mail/mail-session=jbpm/:add(jndi-name=java:/jbpmMailSession, from=username@gmail.com)
    /subsystem=mail/mail-session=jbpm/server=smtp/:add(outbound-socket-binding-ref=jbpm-mail-smtp, ssl=true, username=username@gmail.com, password=password)

    The following items are external references:
    • host - SMTP server host name
    • port - SMTP server port number
    • jndi-name - the JNDI name the mail session will be bound to
    The following items must be replaced with the actual Gmail account details that shall be used when sending emails:
    • from - address that should be a valid email account
    • username - account to be used when logging in to the SMTP server
    • password - account's password to be used when logging in to the SMTP server

    When starting the WildFly server, you have to provide the JNDI name to be used by jBPM to find the mail session; this is given as a system property:

    ./standalone.sh .... -Dorg.kie.mail.session=java:/jbpmMailSession

    This is all that is needed from the application server point of view.

    Configure jBPM for sending emails

    There are two email components that can be used by jBPM to send emails:
    • Email service task (backed by work item handler)
    • User task notifications
    Both of them can rely on application server mail sessions, so configuration of the application server applies to both email capabilities.

    Work item handler configuration

    The Email activity is based on the work item mechanism, so it requires a work item handler to be registered for it. Work item handlers can be registered via the deployment descriptor, either on kjar level or on server level. The screenshot below illustrates the configuration of the deployment descriptor on kjar level, done in the workbench.


    What this does is register org.jbpm.process.workitem.email.EmailWorkItemHandler for the Email work item/service task, and since it should use the mail session from JNDI, we need to reset the handler properties so they won't get in the way (arguments of the EmailWorkItemHandler constructor):
    • host set to null
    • port set to -1
    • username set to null
    • password set to null
    • useTLS set to true
    This is done to make sure all these values are taken from the JNDI mail session of the application server.
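
    For reference, the descriptor entry could look roughly like this (a sketch assuming the constructor variant that takes host, port, username, password and a TLS flag, all passed as strings; Email is the work item name used by the Email service task):

    <work-item-handlers>
      <work-item-handler>
        <resolver>mvel</resolver>
        <identifier>new org.jbpm.process.workitem.email.EmailWorkItemHandler(null, "-1", null, null, "true")</identifier>
        <parameters/>
        <name>Email</name>
      </work-item-handler>
    </work-item-handlers>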

    As mentioned, this email handler registration can also be done on server level, in a global deployment descriptor that all kjars will inherit from. See more about the deployment descriptor in the docs.

    That's all from jBPM configuration point of view. 

    Process with Email task

    The last thing is to actually take advantage of the configured email infrastructure, so let's model the most basic process, where the only thing it does is send an email and complete.


    The Email task can be found in the left-hand side panel - the palette - under the Service Tasks category.

    Once it is placed on the process flow, you need to provide four data inputs to instruct the email work item handler how to compose the email message:
    • From - valid email address
    • To - valid email address of the recipients (to specify multiple addresses, separate them with a semicolon ';')
    • Subject - email subject
    • Body - email body (can include html)
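
    Assuming those data inputs are mapped from process variables of the same names, starting such a process remotely via the KIE Server Java client could look like this (container id, process id and credentials are illustrative):

    import java.util.HashMap;
    import java.util.Map;

    import org.kie.server.api.marshalling.MarshallingFormat;
    import org.kie.server.client.KieServicesClient;
    import org.kie.server.client.KieServicesConfiguration;
    import org.kie.server.client.KieServicesFactory;
    import org.kie.server.client.ProcessServicesClient;

    public class SendEmailExample {
        public static void main(String[] args) {
            KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                    "http://localhost:8080/kie-server/services/rest/server", "kieserver", "kieserver1!");
            config.setMarshallingFormat(MarshallingFormat.JSON);
            KieServicesClient client = KieServicesFactory.newKieServicesClient(config);
            ProcessServicesClient processClient = client.getServicesClient(ProcessServicesClient.class);

            // process variables mapped to the Email task's data inputs
            Map<String, Object> params = new HashMap<>();
            params.put("from", "username@gmail.com");
            params.put("to", "recipient@example.com");
            params.put("subject", "Hello from jBPM");
            params.put("body", "<p>This email was sent from a process instance.</p>");

            Long processInstanceId = processClient.startProcess("email-project", "email-process", params);
            System.out.println("Started process instance " + processInstanceId);
        }
    }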

    That's all - just build and deploy it, and enjoy receiving emails from your processes.

    Thursday, April 6, 2017

    Case management application in workbench

    As part of the coming jBPM version 7, I'd like to present a new component that allows an easy and comprehensive look into case management. As described in the article, case management should be business focused and domain specific to bring the most value to the end users, who in many cases are not technical. But there is also the other side of the coin - administrators or technical business users. They are as important as end users, because in many cases these are the people that keep the apps running and available for end users.

    With jBPM 7 they are certainly not left behind. A new component is available for that audience, allowing them to have a quick and rather complete view of the cases (both definitions and instances). To make it possible to deal with various types of cases, the component was made generic to:

    • bring visibility to the technical users
    • provide insight into where the case instance is
    • allow performing certain operations on a case instance which might not be visible to end users through the case app
    The good thing is that the application can be used standalone, or it can be automatically provisioned by the workbench and accessed from within the workbench UI.

    By launching the Case Management Showcase application you will be transferred to a new window that allows you to operate on cases:
    • start a new instance of a case definition
    • view and administer already active case instances
    • use workbench runtime views (Process Instances and Tasks) to interact with case instance activities

    As can be seen, the example shows the Order IT hardware case instance, the one shown in the case app article. I'd like to show that the exact same capability exposed to end users through the case app can be performed by technical/admin users using the Case Management app and the workbench.




    That's as simple as that. Those who are familiar with the workbench as an environment will find themselves at home and navigate through the screens easily. Again, its target is technical users, as it does not bring the business context and thus might leave end users (non-technical ones) confused and a bit lost. 

    Running workbench with case management app

    So that illustrates how it actually works - but how do you set this up?

    First of all, let's configure the runtime environment for the workbench; there is not much new compared to a standard installation of the workbench:
    • user and roles setup
      • make sure there is an application user that has at least the following roles:
        • kie-server
        • user
      • make sure there is a management user - this will be used by the workbench's provisioning service to automatically deploy the case management application
    • security domain setup 
      • make sure that the security domain used by the workbench (by default it's other) has an additional login module defined (org.kie.security.jaas.KieLoginModule)
    • system properties set for the JVM running the server
      • org.jbpm.casemgmt.showcase.deploy=true - instructs the workbench to deploy the case management showcase app when starting
      • org.kie.server.location=http://localhost:8230/kie-server/services/rest/server - location of the KIE Server that the case management app is going to talk to
      • org.jbpm.casemgmt.showcase.wildfly.username - optional user name (of the management user) in case it's different than admin
      • org.jbpm.casemgmt.showcase.wildfly.password - optional user password (of the management user) in case it's different than admin
      • org.jbpm.casemgmt.showcase.wildfly.port - optional WildFly port that the server is running on - default 8080
      • org.jbpm.casemgmt.showcase.wildfly.management-port - optional WildFly management port - default 9990
      • org.jbpm.casemgmt.showcase.wildfly.management-host - optional WildFly management host name - default localhost
      • org.jbpm.casemgmt.showcase.path - local file path that points to a war file to be deployed; can be used instead of relying on maven to download it
    That should be enough to start the workbench and have the Case Management application automatically provisioned. 

    Assuming most of the defaults are in use, the following command is enough to start the server where the workbench is deployed:
    ./standalone.sh 
    --server-config=standalone-full.xml 
    -Dorg.jbpm.casemgmt.showcase.deploy=true 
    -Dorg.kie.server.location=http://localhost:8230/kie-server/services/rest/server

    As part of the startup you should see the Case Management app being provisioned:
     [org.jbpm.workbench.wi.backend.server.casemgmt.service.CaseProvisioningExecutor] (EJB default - 1) Executing jBPM Case Management Showcase app provisioning...
    ....
    [org.jbpm.workbench.wi.backend.server.casemgmt.service.CaseProvisioningExecutor] (EJB default - 1) jBPM Case Management Showcase app provisioning completed.

    NOTE: the provisioning is based on maven resolution of the jbpm-case-mgmt-showcase artifact, so depending on the download time it can occasionally time out. To overcome this you can download that artifact manually via maven to speed things up.

    Next, clone the Order IT hardware case project into the workbench, then build and deploy it to KIE Server for execution. Once it's built and deployed, it's ready for execution. 



    That concludes how to get started with the brand new stuff coming with version 7 ... which is really around the corner, so start preparing for it!

    Special credits go to Cristiano and Neus for the excellent work that resulted in this new Case Management App.

    Friday, March 31, 2017

    Case management in detail at Red Hat Summit 2017



    I'd like to invite everyone interested in the upcoming jBPM 7 case management feature to the Red Hat Summit talk

    Deep dive on case management

    We'll be talking about the most important things to know about how case management is built in jBPM 7 and how to make the best out of it. A must-attend for those who plan to accelerate their business with new capabilities to make their work even more flexible.

    "Nowadays, the work performed by employees on a day-to-day basis isn't typically a well-defined sequence of tasks, but rather depends much more on the so-called knowledge workers to evaluate the data specific to the case and to collaborate with others to achieve their goals. To be able to better assist these kinds of users, traditional Business Process Management (BPM) Suites have evolved to include much more advanced features related to this case management. On top of that, these workers could benefit a lot from advanced applications, customized to the use case they are trying to solve. In this session, you will receive a deep dive from the experts into what case management is and how we extended our existing framework to better support these kinds of use cases. We will show how you can also develop a customized application on top of Red Hat JBoss BPM Suite more quickly, taking advantage of these advanced capabilities."

    Check out the details of the talk and other sessions and trainings on the Summit agenda.

    Planning to go there? Maybe it will make it easier with a discount code...

    Hope to see you there!!!


    Wednesday, March 29, 2017

    Get ready for jBPM 7 ... migrate from 6.5 to 7

    Yes, that is correct - it's time to get ready for jBPM 7!!!

    As part of the preparation for the final release of community version 7, I'd like to share some thoughts on how to move to it from 6.5.

    First of all, the big change in version 7 is the separation between authoring/management and runtime. There is no execution engine in the workbench any more, and obviously there are no REST/JMS endpoints for it either. There is still a REST API, though, that covers:

    • repository and project management (including maven builds)
    • KIE Server controller
    Both are in version 6.5 as well; they have not changed much and are expected to be backward compatible.

    The focus of this article is migrating from a 6.5 workbench (as execution server) to a 7.0 workbench (as authoring and management) plus KIE Server (as execution server).

    Let's put this article in some context - a very basic one to keep it simple, but it should be enough to understand the concepts:
    • single workbench installation running on WildFly
    • single project deployed to execution engine in workbench - org.jbpm:HR:1.5
    • number of process instances started and active in different activities
    The goal of this exercise is to make sure we can stop 6.5, "migrate", and start 7, and these process instances can move on.

    Prepare for migration

    The first step, before we shut down version 6.5, is to configure a Server Template for the coming version 7 and KIE Server.

    Here we define a server template with the name "myserver" that has a single container configured: "org.jbpm:HR:1.5". The name of the container is important, as it must match the deployment id of the project deployed in the workbench execution server. This container is in started state.

    As can be seen, there are no remote servers attached to it; that's normal, as currently the workbench is responsible for runtime operations.

    Now it's time to shut down the workbench's WildFly server...

    Migrate workbench

    Once the workbench is stopped, it's time to migrate it to version 7 (at the time of writing, the latest one was Beta8). We are going to reuse the same WildFly instance for the new version of the workbench, so:
    • Download version 7 of kie-wb for WildFly 10. 
    • Move the workbench (6.5) war out of the deployments directory of the WildFly server
    • Copy the new war file into the deployments directory - no modifications to the war file are needed - remember, there is no runtime, thus there is no persistence.xml to be edited :)
    • Remove the .index directory that was created by version 6.5 - this is required because version 7 comes with an upgraded version of Lucene that is not compatible with the one used in 6.5 - don't worry, assets will be indexed directly after the workbench starts
    There is one extra step needed - add a new login module to the security domain used by the workbench - this allows smooth integration with KIE Server on behalf of the user logged in to the workbench. Just add one line (the org.kie.security.jaas.KieLoginModule line below) to standalone[-full].xml

    <security-domain name="other" cache-type="default">
        <authentication>
            <login-module code="Remoting" flag="optional">
                <module-option name="password-stacking" value="useFirstPass"/>
            </login-module>
            <login-module code="RealmDirect" flag="required">
                <module-option name="password-stacking" value="useFirstPass"/>
            </login-module>
            <login-module code="org.kie.security.jaas.KieLoginModule" flag="optional" module="deployment.kie-wb.war"/>
        </authentication>
    </security-domain>
    The module attribute should point to the actual name of the war file you copied to the deployments folder.

    That's it, now it's time to start WildFly with workbench running version 7...

    ./standalone.sh
    ... yes, we can start it with the default profile, as there is no runtime in it (mainly JMS), so there is no need for the full profile any more. But you can still use standalone-full.xml as the server profile, especially if you have made any other configuration changes in it - like system properties, security domain, etc.

    After a while the workbench is up, and you can log on to it the same way you used to with 6.5.

    Once logged in, you can navigate to Deploy -> Execution Servers, and the server template defined before the migration should be there.
    When you go to Process Definitions or Process Instances they will be empty, as there is no KIE Server connected to it ... yet.

    NOTE: The workbench uses data set definitions to query data from execution servers. In case you run into errors on data retrieval (process instances, tasks or jobs), make sure you "restore default filters" - a small icon on top of the data table.

    Migrate data base

    As with any major version, there have been some changes to the database model that must be applied to the database that was used by version 6.5. jBPM comes with upgrade scripts as part of the jbpm-installer distribution (alternatively you can find those scripts on github). They are organized by database and the versions they upgrade (from -> to).

    An example from github:
    jbpm-installer/src/main/resources/db/upgrade-scripts/postgresql/jbpm-6.5-to-7.0.sql

    So this means that this upgrade script is for the PostgreSQL database and provides an upgrade from 6.5 to 7. The same can be found for the major databases supported by jBPM.

    Log on to the database and execute the script to perform the migration of the data model.
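
    For PostgreSQL, for instance, that could be as simple as (database name and user are illustrative):
    psql -U jbpm -d jbpmdb -f jbpm-6.5-to-7.0.sql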

    Configure KIE Server

    The last step is to configure a brand new KIE Server version 7. A separate WildFly instance is recommended for it (especially for production-like deployments) to separate authoring/management from runtime servers, so they won't affect each other.

    Most important is to configure the WildFly server so it has:
    • the same security domain as the instance hosting the workbench
    • the same data sources, pointing to the same active database the 6.5 workbench used 
    • and possibly other execution related configuration 
    NOTE: An extra note about security - users who are going to use the workbench to interact with KIE Server (like starting processes or completing tasks) must have the kie-server role assigned in the security domain that is used by KIE Server.

    KIE Server can be started with the following command:
    ./standalone.sh
     --server-config=standalone-full.xml 
    -Djboss.socket.binding.port-offset=150 
    -Dorg.kie.server.id=myserver 
    -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller 
    -Dorg.kie.server.location=http://localhost:8230/kie-server/services/rest/server 
    -Dorg.kie.server.persistence.dialect=org.hibernate.dialect.PostgreSQLDialect 
    -Dorg.kie.server.persistence.ds=java:jboss/datasources/psjbpmDS

    Explanation of each line (skipping the first line, as it's the command to start the server):
    2. selects the WildFly server profile to be used - full is required, as KIE Server comes with JMS support
    3. port offset to avoid port conflicts when running on the same machine as the workbench
    4. server id that must match the server template name defined in the workbench
    5. controller location - workbench URL with the /rest/controller suffix
    6. kie server location - actual location of the kie server (it must include the port offset) 
    7. hibernate dialect to be used
    8. JNDI name of the data source to be used

    NOTE: If the kjar was using the singleton runtime strategy, then the file that keeps track of the ksession id of that project should be copied from the workbench (WILDFLY_HOME/standalone/data/org.jbpm/HR/1.5-jbpmSessionId.ser) to the same location in KIE Server's WildFly instance. This is important when you have facts in the ksession or timers.
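
    Copying it is a one-liner (paths are illustrative; WB_HOME and KIESERVER_HOME stand for the two WildFly installations):
    cp $WB_HOME/standalone/data/org.jbpm/HR/1.5-jbpmSessionId.ser $KIESERVER_HOME/standalone/data/org.jbpm/HR/1.5-jbpmSessionId.ser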

    Once the server is started, it will connect to the workbench and collect the list of containers to be deployed (in our case it's just one: org.jbpm:HR:1.5).


    KIE Server is now connected and ready for execution; you can find it in Deploy -> Execution Servers, where it should now be visible as an active remote server:

    When navigating to Tasks or Process Instances you should find your active instances left over from 6.5.

    And that's it - now you're ready to take advantage of the new features coming in 7 without leaving your 6.5 work behind :)

    Wednesday, March 1, 2017

    Prepare for future ... KIE Server Router extension

    KIE Server Router was built to cover two main areas:

    • hide the complexity of many KIE Server instances hosting different projects (kjars)
    • aggregate information from different KIE Servers regardless of whether they share the same db or host the same projects (kjars)
    As described in previous articles, it can find the correct KIE Server instance for an incoming request based on container information that is sent through the router. So the basic logic is that the router keeps a routing table with the following information:
    • container id mapped to KIE Server location (url)
    • container alias mapped to KIE Server location (url)
    • server id mapped to KIE Server location (url)
    In the context of this article, only the first two are relevant. They are used to find the right server (or one of them) to forward requests to.

    The KIE Servers (Kjar 1 and Kjar 2) can have as many instances as needed; they will be connected to the same db and will use the same set of containers deployed to them.

    Whenever a new request comes in, the router will check if it contains container information and, if so, will attempt to find the route to a server that supports it. The situation is very simple if the request comes with a container id (as opposed to a container alias). As container ids are unique (and should be versioned to provide maximum flexibility), the look-up of the route is easy - just get it from the routing table.
    A bit more complex scenario is when an alias is used. Obviously this is more appealing to end users (client apps), as they don't have to change the URL whenever a new container is deployed. So instead of using the container id in the URL, a container alias can be used, making the client less coupled to the constantly changing environment. 
    By default, KIE Server Router deals with container aliases the same way as with container ids: it looks them up in the routing table to find the server location. Then it's the individual KIE Server that is responsible for finding the concrete container that should be used to handle the request. That is perfectly fine as long as there is no alias shared across different KIE Servers - by different I mean KIE Servers that host different kjars (usually different versions of the kjar).

    As illustrated in the above diagram, in such a case Kjar 2 should take over the entire responsibility for the given container alias and host both versions of the project. As soon as it is up and running, the Kjar 1 server should be removed, and Kjar 2 takes over the entire load. 

    This architecture promotes reusability of the runtime environment for the same domain - usually the versions of a given project are incremental updates that stay in the same domain. So that is what is available by default, and the logic to find the proper container is within the KIE Server itself.

    ... but real world is not as simple ...

    There are more deployment options that would put more responsibility on the KIE Server Router instead - like, for example, finding the latest container for an alias across all available containers. If we want our clients to always start with the latest available version of a process, the client will use a container alias and the router should resolve it before forwarding to a KIE Server. That would mean an architecture where each KIE Server hosts a single kjar and each version brings in a new KIE Server instance connected to the router, but all of them share the same alias.

    The above setup is not covered by default in KIE Server Router, because the alias could resolve to servers that do not have the kjar deployed that a given request is targeting. To cover this use case, KIE Server Router was enhanced to make certain components pluggable:
    • ContainerResolver - component responsible for finding the container id to be used when interacting with backend servers
    • RestrictionPolicy - component responsible for disallowing certain endpoints from being used via KIE Server Router
    • ConfigRepository - component responsible for keeping KIE Server Router configuration, mainly related to the routing table
    Since KIE Server Router can run in various environments, these components can be replaced or enhanced compared to the defaults - for instance, to support the described scenario, the router becomes responsible for finding the latest container and then forwarding the request to that KIE Server. This can be done by implementing a custom ContainerResolver and plugging it into the router. 

    Implementations are discovered by the router via the ServiceLoader utility, so the jar archives bringing in the new implementations need to include the corresponding service descriptors:
    • ContainerResolver 
      • META-INF/services/org.kie.server.router.spi.ContainerResolver
    • RestrictionPolicy
      • META-INF/services/org.kie.server.router.spi.RestrictionPolicy
    • ConfigRepository
      • META-INF/services/org.kie.server.router.spi.ConfigRepository
    A sample implementation of all these components can be found in this project at github.
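
    To illustrate the discovery mechanism itself, this is roughly what the standard ServiceLoader pattern looks like (a sketch of plain java.util.ServiceLoader usage, not the router's actual bootstrap code):

    import java.util.ServiceLoader;

    import org.kie.server.router.spi.ContainerResolver;

    public class ResolverDiscovery {
        public static void main(String[] args) {
            // picks up any implementation listed in
            // META-INF/services/org.kie.server.router.spi.ContainerResolver on the classpath
            ServiceLoader<ContainerResolver> loader = ServiceLoader.load(ContainerResolver.class);
            for (ContainerResolver resolver : loader) {
                System.out.println("Found container resolver: " + resolver.getClass().getName());
            }
        }
    }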

    Since the default KIE Server Router is run as an executable jar, providing it with the extensions requires a slight change to how it is launched:

    java -cp LOCATION/router-ext-0.0.1-SNAPSHOT.jar:kie-server-router-proxy-7.0.0-SNAPSHOT.jar org.kie.server.router.KieServerRouter

    Once the server is started, you will see log output stating which implementations are used for the various components:

    Mar 01, 2017 1:47:10 PM org.kie.server.router.KieServerRouter <init>
    INFO: KIE Server router repository implementation is InMemoryConfigRepository
    Mar 01, 2017 1:47:10 PM org.kie.server.router.proxy.KieServerProxyClient <init>
    INFO: Using 'LatestVersionContainerResolver' container resolver and restriction policy 'ByPassUserNotAllowedRestrictionPolicy'
    Mar 01, 2017 1:47:10 PM org.xnio.Xnio <clinit>
    INFO: XNIO version 3.3.6.Final
    Mar 01, 2017 1:47:10 PM org.xnio.nio.NioXnio <clinit>
    INFO: XNIO NIO Implementation Version 3.3.6.Final
    Mar 01, 2017 1:47:11 PM org.kie.server.router.KieServerRouter start
    INFO: KieServerRouter started on localhost:9000 at Wed Mar 01 13:47:11 CET 2017


    So that would be all for this article. Stay tuned for more updates and hopefully more out-of-the-box implementations for KIE Server Router.