Thursday, March 19, 2015

The Secret Feature of Bucketsets in Oracle Business Rules

In Oracle Business Rules one can use so-called "Bucketsets". I never liked the term, as it is not in the dictionary (did you mean bucketseat?), and I never understood what is wrong with "list of values" (LoV), as that is what it is.

See for example the following bucketset that defines a list of values to be used for some status field:


Anyway, bucketsets are typically used in decision tables to define the set of values to be used in conditions. Unfortunately you cannot use them to define the set of values to be used in actions. At least, that is what the UI seems to suggest, as bucketsets do not appear in the values you can choose from.


But don't get fooled: you are being misguided, as it works after all. Just type the value in as [bucketset_name].[value_name] (for example Status.Rejected, assuming a bucketset named Status with a value Rejected) and be surprised that it works!


Funnily enough, this is not the case for IF-THEN rules.


Let's call it room for improvement.

Sunday, December 07, 2014

OBPM versus BPEL, That's the Question

Recently I was pointed to the so-called Oracle Learning Streams (http://education.oracle.com/streams), which provide short presentations on all kinds of topics.

While ironing my clothes on a Sunday afternoon, I watched one with the title "Leveraging OBPM vs BPEL" by David Mills. An excellent story in which he explains in less than 13 minutes the high-level difference between the two, using a practical example.

One reason why I like this stream is that it is in line with what I have been preaching for years already. Otherwise I would have told you it sucked, obviously.

The main point David makes is that you should use the right tool for the right job. OBPM aims at orchestrating business functions, whereas BPEL aims at orchestrating system functions. The example used is an orchestration of system functions to compose an Update Customer Profile service, which can then be used in a business process that orchestrates business functions, in which one person is involved to approve some update while someone else needs to be informed about it. Watch, and you'll see!

For understandable reasons the presentation does not touch the (technical) details. Without any intention to explain those here, one should think about differences in the language itself (for example, in BPEL you cannot let the flow loop back to an arbitrary earlier point, while in BPMN that is quite normal to do), and also in the area of configuration and tuning (for example, in the case of BPEL there are more threads to tune, and you can do in-memory optimization, etc.).

Maybe I will find some time to give you a more detailed insight into those differences. It would help if you expressed your interest by leaving a comment!

Wednesday, November 19, 2014

Why You Should Create a BPM Suite Business Object Using an element (and not a complexType)

In this article I describe why you should always base your business object upon an element, instead of upon a complexType.

With the Oracle BPM Suite your process data consists of project or process variables. Whenever a variable is based on a component, that component is either defined by some external composite (like a service), or by the BPM composite itself, in which case it will be a Business Object. That Business Object is either created directly or based upon an external schema. Still with me?

When using an external schema you should define the business object based upon an element instead of a complexType. Both are possible, but when you define it based upon a complexType, you will find that any variable using it can neither be used in (XSLT) transformations nor as input to Business Rules.

As an example, see the following schema:

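In essence the schema boils down to something like this (a minimal sketch with hypothetical names, along the lines of the original figure):

  <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
              xmlns:tns="http://www.example.org/sales"
              targetNamespace="http://www.example.org/sales"
              elementFormDefault="qualified">
    <!-- a Business Object based on this element can be used everywhere -->
    <xsd:element name="customer">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="name" type="xsd:string"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:element>
    <!-- a Business Object based directly on this complexType cannot be used
         in (XSLT) transformations, nor as input to Business Rules -->
    <xsd:complexType name="order">
      <xsd:sequence>
        <xsd:element name="product" type="xsd:string"/>
        <xsd:element name="quantity" type="xsd:int"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:schema>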

The customer variable (that is based on an element) can be used in an XSLT transformation, whereas the order variable cannot:

The reason is that XSLT works on elements, and not on complexTypes.

For a similar reason, the customer variable can be used as input to a Business Rule but the order variable cannot:

Of course, if you are a BPEL developer, you probably knew this already, as there you can only create variables based on elements ;-)

Thursday, November 13, 2014

Did You Know that You Can Buy OUM?

The Oracle Unified Method (OUM) Customer Program has been changed in that, next to the already existing option to get it by involving Oracle Consulting, you can now also buy it if (for some reason) you don't want to involve Consulting.

Next to that, there is also the option to purchase a subscription (initially for 3 years, after which it can be renewed annually) that allows you to download updates of OUM.

OUM aims at supporting the entire Enterprise IT lifecycle, including the Cloud.

Thursday, October 30, 2014

Work-around Instance Migration Limits of BPM Suite 11g

The following describes a work-around for 2 situations in which instance patching and migration are not supported: changing the scope of an activity, and removing an embedded sub-process. In short, this work-around consists of re-implementing the activities to move, and emptying the embedded sub-process.

There are a couple of restrictions in the Oracle BPM Suite that can prevent process instances from being patched (deployment using the same revision number) or migrated (deployment using a new revision number, after which instances are moved from the old to the new revision). Two of them are that you cannot change the scope of an activity (like moving it in or out of an embedded sub-process), and that you cannot remove an embedded sub-process. For both situations there is a work-around, which I can demonstrate with one case.

More information about instance patching and migration can be found here for BPM 11g (11.1.1.7) and here for BPM 12c (12.1.3).

The Work-Around


Suppose you have a process (A) like this:





And you want to change it (in real life you may want to move the activity inside it to a totally different location, or even remove the embedded sub-process altogether) to this model (D):
As migration of running instances is not supported:
  1. because of the changed scope of the activity, and
  2. because of the removal of the embedded sub-process,
you will get an error like this when trying to do so:

However, what you can do is change the model like this (C):

The trick is that the Say Goodbye activity has not been moved outside, but re-implemented. In this case I created a new activity with the same name, and reused the existing task definition. So I only had to redo the data mappings.

You won't win any prize for the most beautiful process model with this, but it works. In practice you will want to collapse the embedded sub-process and rename it to something like "Empty".

The Proof of the Pudding

To make sure it actually works for running processes I used an extra step between (A) and (C), being this model (B):

First I started with the first model, and created 2 instances, one in Say Hello and the other in Say Goodbye. I then deployed the last model with "Keep running instances" checked (instance patching). The result was that both instances were automatically migrated (i.e. they just kept on running).

Then I created a start situation of 3 instances, each of them being in a different activity. The most interesting one is the instance in Say Goodbye in the embedded sub-process, as in this case the token is in an activity that is going to be removed.
In line with the documentation, when deploying model (C) all instances were put in the status Pending Migration. Using Alter Flow I was able to migrate the instances in the first and last activity as-is. After that I could successfully complete them.

The interesting one, though, is the instance in the second activity:

I was able to migrate that with Alter Flow by moving the token from the embedded sub-process (not the activity inside the embedded sub-process!) to the last activity:

The engine never knew what hit it ;-)

Monday, November 25, 2013

Browsing the Meta Data Services Repository of the Oracle SOA/BPM Suite 11g

In this article I explain a handy way to browse the MDS on the SOA/BPM server from JDeveloper, as well as how to download its content using Enterprise Manager, and finally an (as far as I know) undocumented feature to look up artifacts using a browser.

This article has been updated on November 26 to include the option regarding downloading the MDS content.

The Meta Data Services (or MDS for short) of Oracle's SOA/BPM Suite is used to manage various types of artifacts like:
  • Process models created with Process Composer,
  • Abstract WSDL's and XSD's,
  • Domain Value Map's (DVM), and even
  • Artifacts of deployed composites.

Browsing the MDS from JDeveloper

To find out what actually is deployed in the MDS you can set up an MDS connection to the server from within JDeveloper. Such a connection can be handy, for example to verify if replacements of variables in the MDS artifacts are properly done when deploying. Using this connection you can open those artifacts in JDeveloper and check the source.

To create an MDS connection go to the Resource Palette -> New Connection -> SOA-MDS. This will pop up a tab from which you can create a database connection to the MDS, for example to the dev_mds schema. Having created the database connection you have to choose the partition to use for the SOA-MDS connection. To be able to check out processes created with Composer from the MDS, or to save them in the MDS, you create a SOA-MDS connection that uses the obpm partition. As the name already suggests, this is a BPM-specific partition. To browse the other artifacts I mentioned above, you use the soa-infra partition, which is shared by both SOA and BPM.

In the figure below you can see the two types of connections, the upper one to the soa-infra partition and the lower one to the obpm partition. In the (soa-infra) apps folder you can find the reusable artifacts that you have deployed explicitly (like abstract WSDL's, XSD's, EDL's).

What you also see is a deployed-composites folder that shows all composites that have been deployed. When expanding a composite, you will find that all its artifacts are shown. This is a much easier way to verify that you do not deploy too many artifacts to the server than by introspecting the SAR file, I would say. Except for .bpmn files (which at the time of writing are not yet recognized by this MDS browser) you can open all plain-text files in JDeveloper.


Downloading the MDS from Enterprise Manager

Now let's assume that you have not been given access to the MDS's DB schema on the environment (perhaps because it is Production), but you do have access to the Enterprise Manager. For this situation my dear colleague Subhashini Gumpula pointed me to the possibility to download the content from the MDS as follows:

soa-infra -> Administration -> MDS Configuration -> and then on the right side of the screen: Export.


This will download a soa-infra_metadata.zip file with its content!

Looking up Artifacts in the MDS Using a Browser

Now let's assume that you also have not been given access to Enterprise Manager on the environment, but that you can access the server using the HTTP protocol. Thanks to my dear colleague Luc Gorrisen I recently learned that you can browse the MDS using part of the URL of the composite, as follows:

http://[server]:[port]/soa-infra/services/[partition]/[composite_name]/apps/[artifact_folder]

For example, to look up the abstract WSDL of some ApplicationService that is used by some StudentRegistration business process, I can use the following URL:

http://capebpm-vm:7001/soa-infra/services/default/StudentRegistration/apps/ApplicationService/1/ApplicationServiceStorage.wsdl

Mind you, this is not restricted to just the WSDL's the composite is using.

Ain't that cool?!

Wednesday, November 20, 2013

How to Make File-Based MDS in JDeveloper Work for both Windows and Linux

In this article I explain how you can modify the JDeveloper adf-config.xml file to make it work for both Windows and Linux.

If in a JDeveloper application you point to artifacts in a file-based MDS residing in a Windows folder, for example "d:\projects\MDS", then JDeveloper will create a new entry in the adf-config.xml file (which you can find under Application Resources -> Descriptors -> ADF META-INF) pointing to the absolute location of that folder:



The problem with this is that if you have colleagues that use Linux instead of Windows, it's not possible to make it work for both by defining some relative location (as you could do with ant), like "../../projects/MDS". So now what?

As in most cases, the solution is so simple that I successfully missed it many times: use an environment variable! What worked was creating an environment variable with the name MDS_HOME, after which I could use it like this:

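The resulting entry in the adf-config.xml then looks something like this (a sketch with hypothetical names; the exact store class name may differ per JDeveloper version):

  <metadata-store-usage id="mstore-usage_1">
    <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
      <!-- ${MDS_HOME} replaces the absolute, OS-specific path -->
      <property name="metadata-path" value="${MDS_HOME}"/>
      <property name="partition-name" value="apps"/>
    </metadata-store>
  </metadata-store-usage>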
Problem solved! Such an environment variable can be created on Windows as well as on Linux.

I have not yet fully tested if this works for everything, like deploying with ant, but if it doesn't, I expect you can easily fix that by modifying the ant-sca-compile.xml file in your JDeveloper installation folder, adding a property "mds_home" there as well.

Customizing (or rather Hacking) Oracle BPM 11g Business Exceptions

In this article I explain how you can add custom attributes to Oracle BPM 11g business exceptions. Mind that this is not officially supported.

One of the fun things about giving a training like the Advanced BPM Suite 11g course that I'm running now, is that students ask questions to which you don't know the answer. But hey, you are the teacher, and that won't do, so off you go!

One question asked yesterday was whether it is possible to have more attributes on a business exception than just "errorInfo". The first question should be: "why do you want that?". Well, let's assume that at some higher level you want to have access to context information that is only available where the exception is thrown, like a local variable. Of course you can concatenate all info into one long semicolon-separated string or something, but then you probably need other logic to transform that back into something readable.

If you look closely at the definition of a business exception, you will notice that it uses a generated WSDL and XSD. What I did was add an extra element "name" to the XSD of a business exception named FailProcess (created for the previous blog article), as follows:

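Conceptually the change amounts to something like this (a sketch; the actual generated XSD contains more than what is shown here):

  <xsd:element name="FailProcess">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="errorInfo" type="xsd:string"/>
        <!-- the manually added element -->
        <xsd:element name="name" type="xsd:string"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>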

I then restarted JDeveloper to see what would happen, and not totally to my surprise: nothing! As in: it works! I also tried it run-time, and no problem there either, as you can see in the following figure:


One warning though. As the generated XSD clearly states, it should not be altered manually, and any change may be overwritten. The hack I show you here obviously is not supported, and you should expect that any change of the business exception via the UI will break your code. Therefore let's hope that in some next version changes like this are supported by JDeveloper as well.

Tuesday, November 19, 2013

Why you may consider NOT using the Terminate End Event in Oracle BPM Suite

In this article I explain why you should try to avoid using the Terminate End Event in Oracle BPM 11g.

According to the BPMN specification, the Terminate End Event is supposed to terminate a process instance at the level at which it is raised, including any ongoing activity of sub-processes. But it should not terminate any higher-level (parent) process.

With OBPM 11g it works differently (at least up to PS6), as raising a Terminate End Event will actually terminate the composite's instance. Except for human tasks, because as clearly stated in the documentation: "Human tasks are independent from BPMN processes. If you terminate a BPMN process while it runs a user task, the associated human tasks keeps running independently". The reason is that the human workflow engine runs separately from the BPM engine.

In the process below this will result in the situation that the Review Data task still shows up in the Workspace, while the associated process instance has already been terminated. That was not the original intention of this model.


This flow shows what actually happens:


Conclusion: before considering using the Terminate End Event, be well aware of its behavior. Consider alternative ways of modeling, like raising a business exception that is caught by the parent process, which then uses an Update Task activity to withdraw all human tasks before actually ending. In this way you can prevent anyone from accidentally doing work for nothing.

Such a model would look like this:


Monday, September 02, 2013

Exceptions, Business as Usual?

In this article I describe some aspects of using BPMN exceptions to handle business exceptions. The conclusion is that you should carefully consider whether doing so is appropriate, or whether using a combination of a gateway and an End event would be a better option.

Bruce Silver writes in his book BPMN Method & Style that he used to use BPMN Exception end events only for technical exceptions. For business exceptions he would use a combination of a gateway and an end-state test.

When I first read that I found it to be a peculiar remark, as - coming from an Oracle BPM 10g direction - I used them for business exceptions all the time! Moreover, as most technical exceptions could be caught and managed in BPL (the 10g, Java-like scripting language), I even advised people to use Exceptions for technical exceptions only if unavoidable, with the argument that the business audience is not interested in technical exceptions, and that therefore they should be left out of the model if possible. After all, wouldn't you agree that the first model (in which exceptions are used for business exceptions) looks simpler than the second one (which is more the gateway/end-state type of solution)?


Using BPMN Error end event

Using gateways

The model presented is inspired by a business process model of similar complexity, but bigger. From a functional point of view both models do the same. Examples like this may explain why Bruce Silver later changed his mind, based on feedback from his students.

With Oracle BPM 10g it was easy to write some generic (technical) business exception handling process with a retry option (using the BACK action). With this retry you could simply return to the happy path even after an Exception end event had occurred.

The first time I started to have second thoughts about using Exceptions for business exceptions was when I found that with 11g this back functionality is no longer possible (although it is supposed to come back in 12c in some way or another).

The second second thoughts came when I realized that throwing an Exception and catching it in an Event Sub-process also has some other peculiarities, as I will explain using the model below:



In this sample model there is an Event Sub-process with a non-interrupting Get Status Message start event, and a Return Status Message end event. This exposes a getStatus service operation that can be called by some 3rd party to find out where the process is, and for that it returns a status that is set by the Set Rejected Status and Set Reconsidering Status Script activities. When the Rejected Error event is thrown, this is caught by the Event Sub-process with the Rejection Error start event. An Event Sub-process that starts with an Error start event is interrupting by definition.

Patterns like this (where some operation or service is exposed to interact with a running instance) are very common in my practice. As a matter of fact, as far as I recall more than half of the models I created have a similar Event Sub-process, either to get or to set some process data on the run.

The issue that I found is that when the order is rejected without the option to reconsider - meaning that the Rejected Error event is thrown - the normal flow of the process is aborted (because of the interrupting nature of Error events). As a result, when the process is in the Confirm activity and the getStatus operation is called, it won't react, because that operation is tied to the normal flow, which is no longer active. In contrast, when the process is in the Reconsider activity this is not a problem.

It is unclear to me what the BPMN specifications say about this. I can imagine that this behavior is in line with the specifications, or that the specifications are not explicit about what the behavior should be. In any case, this is how it works with 11g, which made me realize that throwing Error end events in case of business exceptions has some drawbacks that may make modeling with a gateway plus end state not so peculiar after all.

Friday, August 02, 2013

OUM approved by Open Group as an Architectural Method

A bit late perhaps to write about it, but as of the middle of July the Oracle Unified Method (OUM) has been approved by the Open Group (responsible for TOGAF) as an accepted architectural method!

In case you did not already know: OUM blends an architectural method with a project delivery method based on the Unified Process. It is therefore reasonable to state that OUM is your "One Stop Development Method" (OSDM). Can I claim this acronym somewhere?

Friday, May 17, 2013

Handling Service Calls from Oracle BPM

In this posting I explain why in Oracle BPM Suite 11g you should consider using reusable subprocesses to handle service calls. The example is based on a synchronous service, but the same holds for asynchronous and fire & forget services.

If you have never experienced during process design with BPMN that the final process model became twice, if not three times, more complex than you thought it would be, then you haven't been doing process design for real. You might, for example, have experienced how what initially looked like a simple service call finally exploded in your face. Let me tell you how it did in mine. I made up the example I'm using, but it is not far from reality.

It all started with a simple call to a synchronous service to store an order using some Insert and Update Order Service, or Upsert Order for short. The initial process model looked as simple as this.



This process is kicked off as a service with some initial customer data, after which a SalesRep enters an order that then gets stored using the Upsert Order service. But then I found that to use this service, I had to instantiate some custom request header. So I had to add a Script activity, as you cannot do that in the Service activity itself. The result was this:


But now I was exposing technical aspects in a business process, so to prevent the business audience from getting distracted by that, I hid the complexity inside an embedded Store Order subprocess, like this:



And the inside of the embedded subprocess looked like this:

 
So far so good. But then I found that I also had to create a message id, log the request and response messages, and catch exceptions to forward them to a generic exception handling process, because the business might be involved in solving data issues. So before I knew it, my embedded subprocess looked something like this:


Hurray for the embedded subprocess, which hid all this complexity! But then I had to call the same Upsert Order service a second time, requiring me to do all the wiring again. Arghh!! Which brought me to the "brilliant" idea of pushing all this logic to a reusable subprocess, like this:


So in the business process itself I only have to map the (context-specific) request and response messages to and from what has now become the generic and reusable wiring of the service call, like this:


Now you might ask yourself why we did not do the service wiring in BPEL. After all, implementing logic regarding service calls (like exception handling and more complex data transformations) is easier in BPEL. But not every BPM developer is also good at BPEL, making BPMN the better choice from a development process point of view. However, the next evolution could be to create an even more generic Service Request/Response Handler that is capable of calling any synchronous service, taking an anyType input and output argument.

Wednesday, February 13, 2013

BPM: how to time out synchronous calls to external services?

This article has been updated on November 13, to fix a wrongly named property: it should be oracle.webservices.local.optimization instead of oracle.soa.local.optimization.force.

In the Oracle SOA Suite, the maximum time a composite should wait for a synchronous service to respond can be set using the SyncMaxWaitTime property (which has a default of 45 seconds). For services deployed on the same engine, this also works for the BPM Suite.

However, for the BPM Suite 11.1.1.6 (PS5), and probably all versions before that, this property is ignored for calls to external services (that is, services not deployed on the same engine). A BPMNConfig-specific version of this property is not present (yet?).

A work-around is to set the oracle.webservices.httpReadTimeout binding property on each specific reference of the composite.xml, as shown in the picture below. The weblogic.wsee.wsat.transaction.flowOption comes automatically with that.

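In the composite.xml this looks something like the following sketch (hypothetical reference name and timeout value; the timeout is in milliseconds):

  <reference name="UpsertOrderService" ...>
    <binding.ws port="..." location="...">
      <!-- time out reading the response after 30 seconds -->
      <property name="oracle.webservices.httpReadTimeout"
                type="xs:string" many="false">30000</property>
      <property name="weblogic.wsee.wsat.transaction.flowOption"
                type="xs:string" many="false">WSDLDriven</property>
    </binding.ws>
  </reference>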

It is a bit more work, as you have to do it for each individual reference. On the other hand, this gives you the opportunity to set it differently for one reference than for another.

As a matter of fact, you don't even have to set these properties design-time. By navigating to the deployed composite and then going to the Service/Reference Properties, you will see that you can set this property run-time as well. I haven't checked (yet) which one prevails if you set both; I would guess the run-time one.

A related property is oracle.webservices.local.optimization, which can be set to false to force the SOA Suite not to optimize service calls by using Java-level calls, but to use the SOAP stack instead. You do not need to set this property for external references, as for those SOAP is always used. However, you can use it if you want to set it for service calls within the same SOA server.

Another property is oracle.webservices.httpConnTimeout, which can be used to configure the maximum time the composite should wait to connect to the service. You don't have to set that one to time out waiting for the response.

Interesting reading regarding Oracle Fusion Middleware timeout settings is a recent posting from Manish Kumar Gupta.

Thursday, October 11, 2012

BPMN Naming Conventions

So far I have not found many references discussing BPMN naming conventions, and there is no official one. So I thought I'd share a few of mine that I developed over the years. I hope you can make use of them.

Most of them are captured in the following picture.


Unfortunately I cannot depict them all, so here are a few more:
  • Do not use the component type name in its name; for example, do not use Expenses Process, Submit Expenses Activity, or Expenses Rejected Exception.
  • Name a process so that its name fits in the following phrase: "the [Process Name] process", for example: "the Order Handling process".
  • Give an activity a short, but meaningful name. So:
    • Don't use uncommon abbreviations,
    • To keep the reader focused on the keywords, do not capitalize prepositions and possessive pronouns ("Review and Finalize Order"),
    • Avoid articles and pronouns ("Add Order Lines" instead of "Add all Lines to the Order").
  • Name Catch Events, Business Exceptions, and Signal Events so that the name reads like an answer to the question "What happened?": "Well, the Customer Cancelled".

Like with most naming conventions, the prime reason is to have consistency. So don't treat them like a religion, and don't think that mine are better than yours. Of course they are, but just don't think that. Most of them are more like guidelines.

To help you with your choice, consider this:
  • Using active verb-noun phrases is a psychological thing. Don't ask me why, but somehow it helps your brain to determine the right scope of an activity. For example, one person may interpret Order Handling as one single activity (done by one person as one logical "transaction") while someone else considers it to be a complete process. If you call it Handle Order, that just "sounds" more like one single activity. It is not for nothing that this is also a de facto standard for use cases.
  • Using the question mark for exclusive gateways helps with limiting the amount of text you have to put in the diagram, as all conditions on the outgoing sequence flows read as a simple answer to the question of the gateway.
Let me know if you have some more, or if yours are better than mine!

Monday, April 23, 2012

When to Use Custom XPath Functions in the SOA Suite

If you want to know how to create custom XPath functions, then have a look at, for example, Antony Reynolds' blog entry on the topic. I have little to add to that excellent posting, except for how to add parameters, which is done using the <param> tag as in the following example:

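The function definition would look something like this (a sketch, with a hypothetical namespace and class name):

  <soa-xpath-functions version="11.1.1"
      xmlns="http://xmlns.oracle.com/soa/config/xpath"
      xmlns:cape="http://www.example.org/xpath/functions">
    <function name="cape:compress-guid">
      <className>org.example.xpath.CompressGuid</className>
      <return type="string"/>
      <params>
        <!-- each <param> declares one argument of the function -->
        <param name="guid" type="string"/>
      </params>
      <desc>Compresses a 32 character GUID into 30 characters</desc>
    </function>
  </soa-xpath-functions>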


And the following simple piece of Java code to get the parameter from the list:

  public static Object compressGuid(IXPathContext iXPathContext,
                                    List arguments) {
    // the parameters arrive in the List in the same order as
    // the <param> elements in the function definition
    String guid = (String)arguments.get(0);
    ...
  }



If you want to know when or why to use custom XPath functions, then read on.

Before I come to some sort of a conclusion, let's first discuss some of the specifics of the mechanism of adding custom XPath functions.


One Function, One Class

The first observation is that a custom XPath function is exposed by a class that implements the IXPathFunction interface. This interface enforces the implementation of a call() method that executes the XPath function. As there can be only one call() method in the class, it can realize only one function. So if you want to implement more functions, you will need to create as many classes as there are functions. To support reuse you can let each of those classes extend an (abstract) super class that contains the common behavior and (static) reusable attributes.


No Runtime Versioning

A second observation is that the class implementing an XPath function ends up in a jar file that is deployed in the SOA infrastructure. A class is not a service, so there can be only one implementation of that class at the same time. If you need to deploy a new version of the XPath function while keeping the previous one, you either need to put it in a different package or give it another name. On top of that, you will also have to give the <function> another name (in the example resulting in a compressGuid and a compressGuid2). How ugly do you want it to get? Another obvious aspect is that the XPath function by definition is reusable for everybody that deploys on the SOA engine on which the jar file is deployed.


No Right-Mouse-Click Deployment

By now you will realize that you cannot simply deploy a new version of the jar file on the server by just right-mouse-clicking it and choosing "deploy", and that this is for a reason. You will have to copy the jar file to a specific folder, run an ant script, and restart the server.


Conclusion

Custom XPath functions are not a silver bullet for doing any type of Java embedding in SOA or BPM processes. Especially as a BPM developer you just might have hoped for this, as, unlike BPEL, BPM has no Java Embedding activity (also for a good reason, you might argue, but that's another blog entry).

You should only create XPath functions for specific types of cases, while applying some best practices:

  1. Only use it for XPath functions that clearly fit in between all other already available XPath functions. An example is an XPath function I just created (based on the brilliant algorithm of my dear colleague Tonio Voerman), which compresses the 32 character Oracle GUID into 30 characters (because the column in which we have to store it is only 30 characters long). After all, the (existing) oraext:generate-guid() is also an XPath function.
  2. Be sure that the XPath function indeed is an obvious candidate for reuse (our compress-guid() function will be used by every BPM process). Otherwise, other solutions, like including a Spring Context in your composite, are probably better.
  3. Do your utter best to get the proper requirements for the interface and functionality, to prevent finding yourself in a situation (and trouble) where you have to deploy a new version of the function next to an existing one.
  4. Test the XPath function thoroughly, preferably using some unit-testing framework like JUnit.

Unbelievable, this is actually the first time in my life that I manage to post two entries on the same day. Normally I don't even find the time to do two in a week, or even a month for that matter!

OBPM 11g Multi-Lingual Process Models

Sometimes simple things are simple!

Take for example creating process models that you can show in 2 different languages. With OBPM this is so simple that at first I could not find it, because I was looking in the wrong place.

That is: simple if you realize that you can also access the (BPM) Project Preferences from the BPM Navigator (next to the Project Properties that can be accessed from the Application Navigator). But once found, it's dead simple. Just add an extra language on the Languages tab, as shown in the next figure.



You can then add a second, localized label to the pool, events, activities, and gateways by going to their properties, clicking on the globe, and entering the label.



After that, you can modify the language of your model on the fly by changing the default language, for example from English:


into Dutch:


If you were paying attention, or looked carefully, you will have noticed that I did not mention the role and the sequence flow. Alas, those are not multilingual.

Oh, and eh... there is a small bug. The label of the pool only refreshes after reopening the model.

Wednesday, February 29, 2012

OBPM 11g And Human Workflow Services

In this posting I describe a case in which I demonstrate the working of the Human Workflow Services of the Oracle BPM Suite 11g. Or rather the SOA Suite 11g, as it is used for human workflow in BPEL as well.

But before I do so, a brief explanation of what the Human Workflow Services are.


So What is it?

It probably is the most used part of what is also known as the SOA Suite API. The Human Workflow Services provide services to, how surprising, handle human tasks. But there is also an API for Oracle Business Rules, B2B, and the User Messaging Services, to mention a few others.

Ever worked with OBPM's Workspace or Worklist? Just realize that practically all data you see in there, and all actions you can perform, are provided by the Human Workflow Services. If you ever need to provide a custom worklist, for example to embed it in some 3rd party application, the Human Workflow Services are what you will be using.

The most commonly used services probably are the following:
  • TaskQueryService: provides operations to query tasks, based upon keywords, category, status, business process, attribute values, etc.
  • TaskService: provides single-task operations to manage task state, and to persist, update, complete, reassign, or escalate tasks, etc.
  • IdentityService: provides operations to authenticate users (when logging on to the workspace), and to look up user properties, roles, group memberships, etc.
Other services are the TaskMetaDataService, UserMetaDataService, TaskReportService, RuntimeConfigService, and TaskEvidenceService.

Except for the IdentityService all other services have both a SOAP as well as a remote EJB interface. The IdentityService is a thin SOAP layer on top of WebLogic.


How to Use It?

All services require authentication, except for the IdentityService (obviously). Authentication is based upon a user name/password combination or a previously obtained login token. It is a best practice to authenticate the user only once per session, using a user name/password combination. The engine will instantiate a workflow context (similar to a servlet context) and return a token that references the workflow context. This token can then be used in successive calls to the API. This prevents a new workflow context from having to be instantiated with every call, which is a relatively expensive operation.



It is also a best practice to use identity propagation of an already logged-on user, which in the case of SOAP can be done by passing on a SAML token. All SOAP services (except for the IdentityService) have two different end points: one that requires a SAML token and one that does not. To prevent getting frustrated by problems with authentication, be aware that when using a tool like soapUI, the default end point probably is the SAML one. If you cannot use identity propagation, you will be using the TaskQueryService.authenticate operation to get the workflow context token.

A typical pattern to execute a human task would be as follows:
  1. Obtaining a token using the TaskQueryService.authenticate operation,
  2. Getting a list of tasks based upon some filter using the TaskQueryService.queryTasks operation,
  3. Acquiring (locking) a specific task for execution using the TaskService.acquireTask operation,
  4. Updating the payload of the task using the TaskService.updateTask operation,
  5. Setting the outcome of (committing) the task using the TaskService.updateOutcome operation.
Hereafter I will give an example of this 5-step pattern to execute a human task.

The addresses of the WSDL's are not too consistent, I suppose to prevent us from becoming lazy. The address of the TaskQueryService WSDL is:

http://host:port/integration/services/TaskQueryService/TaskQueryService?WSDL

The address of the TaskService WSDL is:

http://host:port/integration/services/TaskService/TaskServicePort?WSDL


Executing a Human Task Example

Step 1

To get a workflow context token, the following request would give me one for the user "cdoyle" with password "welcome1":


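The request looks something like this (a sketch; check the TaskQueryService WSDL for the exact element names and namespaces):

  <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <soap:Body>
      <tas:credential xmlns:tas="http://xmlns.oracle.com/bpel/workflow/taskQueryService">
        <tas:login>cdoyle</tas:login>
        <tas:password>welcome1</tas:password>
      </tas:credential>
    </soap:Body>
  </soap:Envelope>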

The response will be a <workflowContext> element with a <token> in it that can be used in successive calls.


Step 2
Now I can query tasks for the user cdoyle. Let's assume I want to query all tasks that either are assigned to the group cdoyle belongs to or explicitly have been assigned to cdoyle. I can use the TaskQueryService.queryTasks operation with the following query (the simplest I can think of):


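Such a query looks roughly like this (a sketch; the exact structure can be found in the TaskQueryService WSDL and its XSD's):

  <tas:taskPredicateQuery startRow="0" endRow="0"
      xmlns:tas="http://xmlns.oracle.com/bpel/workflow/taskQueryService"
      xmlns:com="http://xmlns.oracle.com/bpel/workflow/common">
    <com:workflowContext>
      <com:token>...token from Step 1...</com:token>
    </com:workflowContext>
    <tas:predicate>
      <!-- tasks assigned to cdoyle, or to a group cdoyle belongs to -->
      <tas:assignmentFilter>My+Group</tas:assignmentFilter>
    </tas:predicate>
  </tas:taskPredicateQuery>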

As you can see, the token has been copied into the request. To get all tasks I have set the <taskPredicateQuery> attributes startRow and endRow to 0. I could also have left these attributes out altogether. The startRow and endRow attributes support paging of tasks, which can be handy in case of a custom worklist. The response is a <taskListResponse> with a list of <task> elements. Each <task> element has subelements like <title>, <priority>, and <taskId>.


Step 3

The <taskId> uniquely identifies the task instance, and is used in the next call, to the TaskService.acquireTask operation, to lock the task, for example:


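The request boils down to passing the token plus the taskId, roughly like this (a sketch; the exact wrapper element names are in the TaskService WSDL):

  <tas:taskOperation xmlns:tas="http://xmlns.oracle.com/bpel/workflow/taskService"
      xmlns:com="http://xmlns.oracle.com/bpel/workflow/common">
    <com:workflowContext>
      <com:token>...token from Step 1...</com:token>
    </com:workflowContext>
    <tas:taskId>...taskId from Step 2...</tas:taskId>
  </tas:taskOperation>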

The response of this operation will contain a <payload> element, which is of type anyType and contains the human task payload as configured in the process. In my example the payload concerns (how original!) a sales <QuoteRequest> element:



The response also contains <state>, <substate>, and <taskId> elements, which show that the task has been assigned and acquired:


Step 4

Now let's assume that this is the payload of a Review Quote task, and that I want to execute that task by updating the quantity to 2 and setting the status from "new" to "OK".

To update the payload of the task, I use the TaskService.updateTask operation. This operation has a lot of attributes. I will keep it simple, meaning that I don't add attachments or comments, etc., so for me it suffices to copy and paste the complete <task> element from the response of Step 3 into the <task> element of the request:

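In essence the request looks like this (a sketch; the content of the <task> element is omitted for brevity):

  <tas:updateTask xmlns:tas="http://xmlns.oracle.com/bpel/workflow/taskService"
      xmlns:com="http://xmlns.oracle.com/bpel/workflow/common">
    <com:workflowContext>
      <com:token>...token from Step 1...</com:token>
    </com:workflowContext>
    <tas:task>
      <!-- the complete <task> element copied from the Step 3 response,
           with the payload updated (quantity 2, status "OK") -->
    </tas:task>
  </tas:updateTask>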

In its response it will give the updated payload, and (among a lot of other things) it will say who did the update and when:


Step 5

To commit the task, I use the TaskService.updateOutcome operation. You will find that this operation also has a lot of attributes. I will keep it simple again, and copy and paste the complete <task> element from the response of Step 4 into the <task> element of the request of the TaskService.updateOutcome operation (I left out a lot of the elements for reasons of brevity):


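Conceptually the request looks like this (a sketch; most of the <task> content is omitted, and a hypothetical outcome value is used):

  <tas:updateTaskOutcome xmlns:tas="http://xmlns.oracle.com/bpel/workflow/taskService"
      xmlns:com="http://xmlns.oracle.com/bpel/workflow/common">
    <com:workflowContext>
      <com:token>...token from Step 1...</com:token>
    </com:workflowContext>
    <tas:task>
      <!-- trimmed copy of the <task> element from the Step 4 response -->
    </tas:task>
    <!-- must match one of the outcomes defined on the Human Task -->
    <tas:outcome>APPROVE</tas:outcome>
    <tas:taskId>...taskId...</tas:taskId>
  </tas:updateTaskOutcome>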

If you watch carefully, you will see that the <taskId> appears both in the <systemAttributes> subelement of <task> and as a separate <taskId> element of its own (at the bottom). The latter is used to determine the task instance.

The value I entered for the outcome has to correspond with the possible outcomes I defined for the Human Task definition:



In its response this operation will give back the changed payload, plus a number of other elements, among them the <state> element, which will now have the value COMPLETED:


hAPI programming!