Sunday, December 01, 2019

OIC Integration: defining and using constants

Oracle Integration Cloud does not have native support for constants, but it is easy to set up something similar yourself. This post discusses how.

For integrations there are two ways to define constants in the Oracle Integration Cloud:
  • Lookups
  • Variables

Lookups are primarily meant to support mapping values from one domain to another. For example, one domain uses two-letter country codes ("NL") whereas another uses three-letter codes ("NLD"). The lookup can then be used to "translate" the value from one to the other ("NL" <-> "NLD"). The same feature can also be used to support configurable constants by providing a list of name-value pairs. For example, in the following SMColor lookup two different name-value pairs have been stored, "YELLOW" with value "yellow", and "BLUE" with value "blue":
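Conceptually such a lookup is no more than a two-column table of names and values. Exported from OIC it would look roughly like the following sketch (the exact export format may differ):

Name,Value
YELLOW,yellow
BLUE,blue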






In an integration you can use this lookup to get the value by name using an XPath lookup function. As I will show hereafter, there are two different XPath expressions, each being used in a different context.

Variables are set in an integration using the Assign activity. You can define a variable with a specific name and a value. You can also combine the two mechanisms by initializing a variable from a lookup. In the following example you see the variable "red" being defined with value "red", while the variable "blue" is initialized using an XPath expression that in turn gets its value from the lookup name-value pair with name "BLUE":
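For illustration, the two assignments could look roughly like this (the lookup expression uses the non-XSLT form discussed further below):

Variable "red":  "red"
Variable "blue": dvm:lookupValue('oramds:/apps/ICS/DVM/SMColors.dvm','Name','BLUE','Value','not found')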




Now let's have a look at how this can be used in a Mapper activity. In the following picture you see three different elements being mapped from the lookup, the variable, and the combination of both, respectively:
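Behind the scenes the Mapper generates XSLT; a simplified sketch of these three mappings could look like this (element and variable names are illustrative):

<statusFromLookup>
  <xsl:value-of select='dvm:lookupValue("tenant/resources/dvms/SMColors", "Name", "YELLOW", "Value", "not found")'/>
</statusFromLookup>
<statusFromVariable>
  <xsl:value-of select="$red"/>
</statusFromVariable>
<statusFromBoth>
  <xsl:value-of select="$blue"/>
</statusFromBoth>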




The response of my service looks as follows:

{
  "statusFromLookup": "yellow",
  "statusFromVariable": "red",
  "statusFromBoth": "blue"
}

Note that the XPath expression used to initialize the variable differs from the one used in the Mapper activity. Which expression you need depends on whether the lookup function is called from XSLT (as the Mapper activity does) or not.

When used in a Mapper activity, use an expression like the following:

dvm:lookupValue("tenant/resources/dvms/SMColors", "Name", "YELLOW", "Value", "not found")

Otherwise use an expression like this:

dvm:lookupValue('oramds:/apps/ICS/DVM/SMColors.dvm','Name','BLUE','Value','not found')

Now when to use what? Some pointers:
  • To change a variable in an integration, you will have to modify the integration and reactivate it. You can do that as a new, minor version to prevent downtime. This is not a task for just any type of OIC user.
  • The threshold for changing a Lookup is lower. Lookups are therefore more suitable for values that need to be changed at run-time (you don't have to re-activate the integrations that use them), for example by an Application Administrator.
  • It is easy to make a mistake in the XPath expression. So when you have to use a value multiple times in an integration, consider the combination: initialize a variable from the lookup once, as mapping a variable is simple.
  • Executing an XPath expression carries a small, but nonetheless real, performance penalty.

For Structured and Dynamic Processes there is also more than one way, which I will discuss in an upcoming blog post. None of the above solutions supports "versioned" parameters; I will discuss how to do that as well.

Sunday, November 17, 2019

OIC Process: Auto-Mapping Elements in the Data Mapper

Hereafter I describe a 'hidden feature' regarding data mapping in Oracle Integration Cloud - Process.

When mapping data in the Oracle Integration Cloud (or OIC for short) you sometimes discover that elements you want to map from are not available as a source on the left-hand side. As I recently found out (thank you Eduardo Chiocconi!), that does not necessarily mean they are not available for mapping.

An example is including some elements of the request in the title of the process instance. Until now I always did this by including a Data Mapper right after the Start Event. However, I could have achieved the same in the Start Activity itself.

The following figure shows how I map the value of the "title" predefined variable to itself, concatenated with some values from the request (customer name and id):
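In the data association of the Start Activity this comes down to an expression along these lines (a sketch only; the exact syntax depends on the expression editor mode, and the element names are illustrative):

title + " - " + input.customerName + " (" + input.customerId + ")"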



As you can see in the Processes tab of the Workspace, both the title and the concatenated values are visible. Saves you a Data Mapper activity :-)


Monday, October 21, 2019

Microprocess Considerations

In this article I discuss some considerations when applying the Microprocess Architecture, and how those can impact the design of the process.

This article has been updated on November 11 2019 after feedback from Luc Gorissen, and on December 28 after feedback from Sushil Shukla.

As pointed out in the article about the Microprocess Architecture, one should carefully consider whether this is the right architecture for the process application to create. Given the implications (for example, one single business process can end up comprising multiple process applications), it is not a "one-size-fits-all" kind of architecture.

The following guidelines can help you determine if and where it is a fit:
  1. Do process instances have a longer time span during which one must be able to incorporate changes to the process (in one way or another)?
  2. Is the process expected to change often?
  3. Does it concern a complex business process, where business functions can be executed in isolation from each other (and with that potentially be reused)?
  4. Are multiple business units involved in the flow?

If you answered one or more of these questions with yes, it probably is a good candidate. If not, probably not. As I will discuss hereafter, you do not have to apply the Microprocess Architecture to all parts of the application. There are also alternative solutions, like abandoning a running instance and handling it manually, or restarting a process instance. Such alternatives are out of scope for this article though. Maybe I will discuss them at some point in the future.

Before explaining the rationale behind these criteria, let me first explain that instance migration refers to moving a running instance of a process from one version of the application to the next. For this to be possible, the next version needs to be backward compatible with the one in which the instance runs. At the time of writing of this article, the Oracle Integration Cloud (OIC) does not yet support instance migration, but it will soon. But even when it does, there will be limitations. As it is yet to be seen what those are, I cannot say much about them, but you can imagine that an instance which is in a Receive activity (waiting to be called) cannot be migrated to a version from which that activity has been removed.

Now let's discuss in more detail the criteria that make (part of) your application a candidate for the Microprocess Architecture.

Point 1 is a clear indicator, as you cannot assume instances can always be migrated to the new version that has the changes incorporated. An example is a long-running legal process that has to cater for changing procedures and laws. Or a move house process initiated by the customer some time before the move will actually happen, while in the meantime the organization or the customer situation may have changed, requiring the move house to be handled differently. As long as the top-level process is not changed in a non-backward-compatible way, applying the Microprocess Architecture may support this to a great extent.

Point 2 might be less obvious unless you start thinking about what it means when you have changed the process and there are instances running in a previous version that cannot be migrated. You should try to avoid having multiple versions (or revisions, as they are also called) of the same process application deployed, but you may be forced to. This has an impact on the process engine, not only from a performance perspective but also from an operations perspective. Someone who has to analyze the flow of the process will have to be aware that there can be many revisions of (part of) the application that all work differently.

Point 3 addresses the level of functional modularization that can be achieved. Often it is already a pretty natural way of development to implement isolated business functions in modules, or in the context of this article, microprocesses of their own that can also be maintained and deployed in isolation from each other. An example is a generic omni-channel notification feature to inform customers about the status of something like an order, service request, or complaint. Another example is a reusable process to handle technical faults. The microprocesses can be reused, but there still is a top-level process that determines the orchestration or choreography of the microprocesses. In case of a Dynamic Process, the business rules determining the choreography can also be implemented as modules of their own, which (currently) can only be changed dynamically as long as the rules are data-driven and the interfaces of those rules do not change. All in all, the Microprocess Architecture mainly adds flexibility to the microprocesses, and less to the top-level process. One should therefore strive to have as little business logic in the (long-running) top-level process as possible, and delegate this to the microprocesses and rules.

Point 4 is a very clear indicator. Whenever parts of a business process are executed by different business units, it will always be a good idea to group business functions by business unit in such a way that they are isolated from each other, resulting in microprocesses of their own. The obvious rationale is that a business unit can then execute its own roadmap for changes to the process without unnecessary interference from the roadmaps of other business units.

Now let's discuss how this can be applied to a process application or parts of it.

When a process starts there may be quite a few activities that are executed before it has to pause for a longer period of time. Or, said differently, before it reaches a human activity that may take days or weeks before it is executed, or a point where it has to wait for a message from some external application, organization, or organizational unit (in BPM-speak these activities would be in a different pool). The activities up to that point may not need to be implemented as microprocesses of their own. After all, once the process has started, any change to the process can no longer be applied to those activities; they will already be done. In contrast, for all activities coming after that point you still have an opportunity to execute them differently. In other words, any point where the process can be paused for a longer period of time should be considered the end of a microprocess, and the first activity after that the start of a new one.

An activity may represent a business capability that consists of several smaller but strongly related steps. It would be wise to isolate these from the rest of the process, so that this set can be maintained and operated separately. This will then be a microprocess of its own, or even a microservice.

Finally, as argued, changes in the top-level process will impose a challenge at some point. To some extent this can be addressed by letting the cross-over from one phase to the other be the start of a new microprocess. For example, the first phase may concern the sales cycle to a customer. The customer may need some time to consider the offer. Once the product has been sold the delivery process can start. This can be a good opportunity to start a new microprocess, implying a split of the main process into two separate ones, a "Sales" and a "Delivery" process.

Sunday, September 15, 2019

The Process Group Pattern


This article describes the Process Group Pattern, which can be used to correlate process or integration instances that all support the same business process. It is also one of the patterns supporting the Microprocess Architecture.

Updated on 2019-09-16 to include a screenshot of processes in the Workspace.

For a somewhat more complex process, and especially when applying the Microprocess Architecture, you may have more than one process and probably several integration applications that make up the implementation of one single business process. This implies that when executing the business process there will be two or more instances of processes and integrations. Now how can a business user or Applications Administrator correlate all these instances to monitor the flow of the business process?



The on-premise Oracle BPM Suite (and SOA Suite) has the concept of a "flowId", which is an id that is generated by the BPM engine at the start of the first instance and then "passed on" from one instance to the other. By means of the flowId one can easily follow in Enterprise Manager how one process or integration calls the other, and by putting it in the process or integration instance title, also in the Workspace. The Oracle Integration Cloud (OIC) does not have the concept of a flowId, at least not yet. Now what to do? Here comes the Process Group to the rescue.

The Process Group Pattern is relatively simple. It includes a unique "processGroupId" that, like the flowId, is generated at the start of a process flow and then passed on from one process or integration to the other. A robust way to get a processGroupId is by using the GUID you can get from OIC (or the BPM Suite).
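For example, in the mapper this can be as simple as assigning the outcome of the GUID function to the processGroupId element; a sketch, assuming the generate-guid function offered by the OIC mapper (the exact function name and prefix may differ per version):

oraext:generate-guid()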

However, unlike the flowId, the processGroupId is unique across all engine instances you may have. When using a GUID, it is even unique across the world. Uniqueness across engine instances becomes important when at any point in time you have two or more of those, and some of the components of the process application are deployed on a different instance than the one starting it. Also unlike the flowId, the processGroupId is persisted in a custom database and can be kept beyond the life cycle of the business process (which after purging will have disappeared from the engine's database). Finally, you can return the processGroupId as a response to the start operation of the process, allowing the starting application to store a cross-reference to the Process Group instance.


To support querying instances by processGroupId in the Workspace, you can set the title of the instance as a first activity in every individual process application. For OIC integrations you can make it one of the tracking variables. The picture below shows how this would look for process instances in OIC:


Next to the processGroupId, you can also persist more metadata about the business process, ending up with a business object looking like the following:
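As an illustration (the field names are just an example), a Process Group record could hold something like:

{
  "processGroupId": "5e03fa53-6efa-4d8e-9a1a-6f8e2c7a9d42",
  "processGroupType": "MoveHouse",
  "businessId": "CUST-12345",
  "startedOn": "2019-09-15T10:15:00Z",
  "status": "RUNNING"
}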
 

The combination of processGroupType and businessId should be unique, at least for running instances. By storing this combination together with the processGroupId you have a mechanism to prevent duplicate instances of the same business process from being started.

For an even more complex business process consisting of a main process and a few functional subprocesses, you can introduce an extra layer by adding a Process Group Instance business object. This might come in handy, for example, when you have a stand-alone, reusable business process (like a Signing process) that is called by the main process.


Assuming that a functional process application is deployed on only one process engine at a time, the processGroupId might be filled with the technical id of the first instance as generated by the engine.

More formally:

Context:
The implementation of a business process is made up of several components (process or integration applications) and there is no out-of-the-box way to relate the instances of these components to each other in a (custom) Workspace. One might also want to have a record of the metadata of the business process persisted after its instance(s) have been purged from the engine's database. Or one might need a means to prevent duplicate instances of the same business process from being started.

Solution:
A Process Group is the collection of instances of components that make up the flow of one single business process. It includes a unique processGroupId that is generated at the very beginning of the business process and persisted in a custom database, together with a combination of a businessId and processGroupType. There can only be one combination of businessId and processGroupType for any running Process Group instance at any time. The processGroupId can be returned by the start operation to support a cross-reference from the starting application.

For more complex process applications an extra Process Group Instance layer can be added as a child of the Process Group, to support business process applications made up of two or more functional process applications. A Process Group Instance is the collection of one or more instances of tightly coupled components that together make up one single process application, which in principle is reusable.

Implication:
A custom database is required for storing the Process Group and Process Group Instance. A function is required that returns a processGroupId that is guaranteed to be unique across process engine instances when (at some point in the future) components of the business process need to be deployed on two or more engines.

Monday, August 05, 2019

OIC: How to Force Dehydration in Processes

This article describes a trick you can use to force a Structured Process instance in OIC to dehydrate.

OIC Process uses the database to store its state, which is called dehydration. In contrast, restoring that state from the database is called hydration. Dehydration automatically happens at points where the process may have to wait for a 'longer' period of time, for example at a Receive or User activity, or a Timer Catch event. Dehydration is also the point where the transaction of the process instance ends (and a new one starts).

Sometimes you may want to force dehydration. For example, you may have a Structured Process for which the operation to start it should synchronously return some value, while the process performs several other steps before it reaches the first dehydration point and the transaction ends. The out-of-the-box behavior is that the start operation will not return a response before the transaction has ended, which may imply that the consumer has to wait for a relatively long time.

In the following process model some business data is stored synchronously, and then some other process is started synchronously, which means that the transaction does not end before the process has reached the End event:


A simple way to make it return the response sooner is to include a Timer Catch event of 2 seconds (or longer, but 2 suffices). This forces the process engine to dehydrate the state of the instance:


For those who know the (on-premise) Oracle BPM Suite: exactly the same trick as we used there 😉

Sunday, July 28, 2019

OIC: Making a REST Integration Return a 404 instead of a 500

In this article I describe how to return an HTTP 404 (resource Not Found) from a REST integration that in turn calls another REST service that returns a 404.
 
This article is superseded by my article Fault Handling in OIC, which gives you the proper way to do this.

When an integration invokes the GET action on a REST service that returns a 404, the integration will raise an APIInvocationError. As a result, the integration in turn will respond with an HTTP 500 error, which is typically not what you want.

Embedding the invoke in a Scope gives you the option to add a Fault Handler:



Choosing the APIInvocationError gives you the option to configure how any APIInvocationError should be handled. As you can see below, I have configured it to use a Switch, where the top flow will make it return a 404:



In all honesty this is not watertight, because I filter on all APIInvocationErrors where the type is empty (""). The reason is that all the elements (type, title, detail and errorCode) are empty, so there is nothing more specific to filter on.
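The condition of that top flow in the Switch therefore boils down to no more than a check on the empty type element, roughly like this (the actual path and the generated namespace prefixes will differ):

$APIInvocationError/type = ""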



As I found out, you will also run into this situation when the URL of the Connection used for the invoke is wrong, and probably in a few other situations as well. I rely on the assumption that my integration is properly configured, so that the most likely cause of an APIInvocationError is indeed a 404.

To make my integration return a 404, I map this as a hard-coded value to the errorCode:



Besides the errorCode, I have hard-coded the other elements as well. Probably not exactly according to the specifications, but good enough, and especially clear enough, for me:
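The consumer of my integration then receives a fault response along these lines (the values are the ones I hard-coded, so purely illustrative):

{
  "type": "NotFound",
  "title": "Resource not found",
  "detail": "The requested resource could not be found",
  "errorCode": "404"
}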



In the meantime, in the background there is a SOAP fault with a reason containing the HTTP 404, so at some point I hope this will be exposed so that I can filter in a reliable way:


Saturday, July 27, 2019

OIC: Handling Optional Elements in a REST Integration

This article is a follow-up to a previous article in which I discuss how to handle optional elements in case of XML in the Oracle Integration Cloud (OIC). In the following I discuss how to create an integration that invokes a VBCS REST service and works in (almost) the same way as the VBCS REST service itself.

A challenge with mapping is always how to handle optional elements. In the previous posting that I refer to above, I describe a way to deal with this in case of XML messages. As I found out (the hard way), this cannot be applied 100% to JSON.
 

I have made it work for an integration that invokes the REST service on a VBCS business object. As there are challenges especially with numeric fields and references (foreign keys), I have used a simple BO called Detail with a string field, a number field and a reference. With VBCS BOs the latter implies a number field that references the (number) id field of another BO, which in this case is called Master.
 

The Master BO looks as follows (ignore the create/update fields, those are generated by default and for the example not relevant):



The Detail BO looks as follows:



As you can see, name (string), master (reference to Master.id, number) and age (number) are all optional.

I created a single REST integration that, using the OIC Pick Action feature, has a POST, GET and PATCH action to create, get and update a Detail:





Use if-function for Mapping Input

Except for the PATCH action, all mappings to the requests of the invokes use the if-function to check if the source has a value, and only map it to the target if it does:
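In the XSLT that the mapper generates, such a mapping comes down to something like the following sketch for the name element (the paths are simplified for readability):

<xsl:if test="/executeRequest/request-wrapper/name">
  <name>
    <xsl:value-of select="/executeRequest/request-wrapper/name"/>
  </name>
</xsl:if>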



Use string-length() for Mapping Output

The mappings from the invokes to the response of the integration all use the string-length() function to check if the response element of the invoke has a value, and if so map it to the target.


I make use of the fact that internally JSON is transformed into XML and there is no payload validation, so numbers can also be checked using string-length(). This way the element will be left out completely, instead of being returned as an empty string ("") or failing in case of a number. This is not 100% how the VBCS service works (it returns null instead of leaving the field out), but for me that is not an issue when using the integration.
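In XSLT terms this boils down to the following sketch, here for the age element (again with simplified paths):

<xsl:if test="string-length(/executeResponse/response-wrapper/age) > 0">
  <age>
    <xsl:value-of select="/executeResponse/response-wrapper/age"/>
  </age>
</xsl:if>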

Special Case: PATCH

In case of a PATCH I need my integration to work so that left-out elements are not updated (i.e. stay untouched), while I can nullify them by passing null as a value. For the invoke to the VBCS REST service this will fail for number elements (with JSON a number is a primitive type that cannot be null). I therefore apply a trick by using a JSON sample payload that treats all elements as strings, including master and age (both numbers):

{"name": "Huey", "master": "1", "age": "15"}


This, in combination with the if-function when mapping the request to the PATCH invoke, makes it work the same way as the VBCS REST service. For the response I use string-length() as described above.


The picture on the left shows an example of a call to the PATCH action with all elements present; the one on the right shows all elements being nullified:




As you can see, nullifying the master results in a reference to a row with Master.id = 2, which happens to be the only Master row with no name. The VBCS REST service works the same way, so apparently some 'intelligence' is applied here. When I add an extra Master with no name, so that I now have two of those, VBCS can no longer decide which one to take and nullifies the reference to the master altogether:


When I leave out any element in the request, the field in the BO is not touched. It’s a bit boring to see, so I will spare you the screenshots. You will have to trust me on that.




Tuesday, July 09, 2019

The Fault Encapsulation Pattern


This posting discusses an integration pattern in which you return a fault as a message instead of as a fault, to prevent the execution of the integration from being flagged as errored.

 
There are a couple of situations where you may not want a synchronous integration to return a fault to its consumer. Examples are:
  • Some back-end system raises a fault that is not really a fault, but a way to give the consumer a particular outcome. Like a credit limit check that returns OK when the limit is not reached, but otherwise gives a CreditLimitReached fault.
  • A call to the back-end system may time out, telling the integration that the system is not available, which may be a regular state. For example, the integration calls the system to check if it is still running, and if it is, tells it to shut down. When the system is already shut down the call will time out.
The reason you may not want to return a fault to the consumer of your integration, at least not as a fault, is that this flags the execution of the integration as errored. For example, integrations in the Oracle Integration Cloud (OIC) will show up in the Dashboard as errored instances. That in turn should trigger Systems Administration to have a look at why it failed, only to find out it did not fail, as this is normal behavior. Before you know it, Systems Administration stops having a look, even when there is something seriously wrong with your integration.

To prevent this from happening you may want to handle the fault as an alternate flow instead of an exception flow. This is what the Fault Encapsulation pattern is about. 

Fault Encapsulation Pattern

In simple terms, when applying the Fault Encapsulation pattern you don't return an error for business faults, but instead encapsulate the error in a "message" element, which is an optional part of the normal response.

The following "BPEL-ish" diagram shows how this looks. 


The invoke to the back-end system is wrapped in a scope with a catch block that catches the error, wraps it in a normal message and then returns the response. In OIC this works in a similar way.
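As an illustration, the response of a credit limit check that hit the limit could then look something like this (the element names are just an example):

{
  "creditLimitCheck": {
    "approved": false
  },
  "message": {
    "code": "CreditLimitReached",
    "text": "The credit limit has been reached for customer 12345"
  }
}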

More formally:

Context:
A business fault in a synchronous service operation should not stop its processing, to allow returning other information than the fault alone.
A business fault caught by a synchronous service operation that otherwise executed properly should not flag the operation as failed, to prevent false-positive error notifications. Instead, handling of the fault should be part of normal process execution by the consumer.

Solution:
The fault in the synchronous service operation is caught using an exception handler that wraps the fault in a message element. The message element is an optional part of the regular response message of the synchronous service operation. System faults in the processing of the synchronous service operation itself are handled as regular faults, in case of SOAP by raising a SOAP Fault, or in case of REST by returning a 4xx or 5xx HTTP status code.

Implication:
The consumer cannot use any regular fault handling mechanisms to handle the business fault. Instead, it will have to check whether the message element is present in the response and act on that.