Wednesday, September 21, 2011

Tips for Installing Oracle BPM Suite 11g FP4

Edited on October 20, 2011 regarding the location of OPatch (based on further insight, and some comments).

When installing Oracle BPM Suite 11g FP4 I ran into a couple of things that I thought I had better write down for next time. If you are a SOA Suite installation expert, don't bother to read on, as I probably have nothing new to tell you. But if you are new to it, or only do installations occasionally, you might find this saves you considerable time. It would have done so in my case.

You typically install the SOA/BPM Suite on top of WebLogic. The MW_HOME is the (middleware) folder containing the wlserver_10.3 folder (among others). The Common Oracle Home will be [MW_HOME]\oracle_common, while the SOA_ORACLE_HOME will be [MW_HOME]\Oracle_SOA1. In the readme for the PS4 installation ORACLE_HOME refers to both, so read the instructions carefully.

The first instruction is to make sure you have the latest OPatch. When you download it, you will find that the instructions of OPatch itself say you can find OPatch in ORACLE_HOME. There actually is one in both ORACLE_HOMEs. Probably the same one; I did not bother to find out and copied the downloaded OPatch to both folders.

The instructions are written primarily for Unix, so they need some "translation" to make them applicable to Windows:
  • Where it reads "$ORACLE_HOME" you should replace that by "%ORACLE_HOME%" (duh!)
  • To patch the Oracle Common Home it tells you to use the following command:

    $ORACLE_HOME/OPatch/opatch napply -invPtrLoc $ORACLE_HOME/oraInst.loc

    It took me some time to find out that the invPtrLoc option only applies to Unix, and that for Windows you can simply leave it out. On Windows the inventory information is located in the registry instead of the oraInst.loc file. So the command for Windows is simply:

    %ORACLE_HOME%\OPatch\opatch napply
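To summarize the difference, here is a dry-run sketch that only builds and prints both forms of the command, instead of running opatch (the ORACLE_HOME paths are assumptions for a default install):

```shell
# Dry-run sketch: build and print the opatch commands for both platforms.
# The ORACLE_HOME locations below are assumptions for a default install.
UNIX_ORACLE_HOME=/u01/app/oracle/middleware/oracle_common
WIN_ORACLE_HOME='C:\Oracle\Middleware\oracle_common'

# Unix: the inventory pointer file must be passed explicitly.
UNIX_CMD="$UNIX_ORACLE_HOME/OPatch/opatch napply -invPtrLoc $UNIX_ORACLE_HOME/oraInst.loc"

# Windows: the inventory location is read from the registry, so no -invPtrLoc.
WIN_CMD="$WIN_ORACLE_HOME\\OPatch\\opatch napply"

printf '%s\n' "$UNIX_CMD" "$WIN_CMD"
```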

PSA stands for Patch Set Assistant. I skipped the backup of the database, because as a developer I can simply recreate the SOA_INFRA at any time without losing any valuable information.

The instructions will tell you to run PSA from the ORACLE_HOME\bin folder. Now that's a bit confusing, as there is a psa.bat in the bin folder of both ORACLE_HOMEs. But if you read the PSA instructions carefully, you will find that it refers to SOA_ORACLE_HOME ([MW_HOME]\Oracle_SOA1). An educated guess had already made me think so.
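A tiny sketch of the distinction, which just prints the path of the psa.bat you should use (MW_HOME is an assumption for a default install):

```shell
# Sketch: PSA must be run from the SOA home's bin folder, not from the
# common home's. MW_HOME is an assumption; this only prints the path.
MW_HOME='C:\Oracle\Middleware'
SOA_ORACLE_HOME="$MW_HOME\\Oracle_SOA1"      # not %MW_HOME%\oracle_common
PSA="$SOA_ORACLE_HOME\\bin\\psa.bat"
printf '%s\n' "$PSA"
```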

If you are using OTN you may not be referred to the proper post-installation guide. The one it currently points to contains the post-installation steps for Oracle SOA Suite for Healthcare Integration. If you need that, do it first, but then you still have to follow the BPM post-installation instructions.

BTW, this guide discusses the commands for Unix as well as Windows, and specifically uses SOA_ORACLE_HOME. But that's for weenies, not for us developers, as we like finding things out the hard way, right?

While I'm at it I might as well paste the command I used for updating the policy store, as it may just fit your environment as well:

wlst.cmd %soa_oracle_home%\bin\ --username weblogic --password welcome1 --wlsHost localhost --adminServerListenPort 7001

The post-installation instructions tell you to delete DOMAIN_HOME\servers\...\tmp. Many of us developers probably choose to deploy the SOA Suite on the Admin Server, so the only servers tmp folder you then have is DOMAIN_HOME\servers\AdminServer\tmp. According to the documentation about domain configuration files, you can simply delete the contents of that folder, so that is what I did. Have some patience, as that folder contains GBs of files.
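A dry-run sketch of the cleanup, which only prints the Windows commands instead of executing them (DOMAIN_HOME is an assumption; removing and recreating the folder is one way to clear its contents):

```shell
# Dry-run sketch: print the commands that would clear the tmp folder.
# DOMAIN_HOME is an assumption for a default single-server install.
DOMAIN_HOME='C:\Oracle\Middleware\user_projects\domains\soa_domain'
TMP_DIR="$DOMAIN_HOME\\servers\\AdminServer\\tmp"

# Clearing the contents by dropping and recreating the folder itself.
CLEANUP="rmdir /s /q \"$TMP_DIR\" && mkdir \"$TMP_DIR\""
printf '%s\n' "$CLEANUP"
```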

If you followed my example, the next post-installation instruction may make you start cursing, as it will tell you to copy some library into a sub-folder of the tmp folder you just deleted. I, on the contrary, kept my usual cool and just started the server. I still kept my cool when looking at several exceptions, not recalling whether those were new or whether I had seen them before. So I bounced the server and found that the exceptions disappeared. The WebLogic Console and the Enterprise Manager also appear to function normally (phew!).

I stopped the server, continued with task 3 (copying the jar file adflibSOAMgmt.jar), and bounced the server again.

Finally, the readme on OTN mentions a JDeveloper update you should have; the copy I have also mentions another one. Neither instruction mentions that you actually have to install them. I did both in the usual way (JDeveloper -> Help -> etc.).

And with that, I was done.

Tuesday, September 20, 2011

Oracle BPM FP4 Is Out!

Finally, it's here: the long-awaited Feature Pack 4 for the Oracle Business Process Management Suite.

Instructions for Customers to get this are as follows:

  1. This is available as a patch under the following bug id: Patch 12413651: SOA PS4 BPM FEATURES PACK

  2. The patch is password protected and is available to those BPM customers that request it from Support by filing an SR.

  3. It is intended for use by BPM customers only. SOA Suite customers should check with Support and Product Management before requesting it.

Information on PS4FP will be blogged soon.

Documentation is available as well.

Wednesday, September 14, 2011

OBPM 11g: Showing More Detailed Log Info

One of those "too stupid to be discussed" items is how you can make the SOA Suite show detailed information about the payload of activities like script activities, or about what is going on inside a business rule. After all, every SOA/BPM developer is assumed to know how to set log levels, right?

Well, every developer is also assumed to know what they say about assumptions, and how frustrating it is to lose valuable time finding out how to do simple things. So bear with me while I state the obvious.

Showing Detailed Payload Information

When the audit level is set to Production (which seems to be the default), only data associations for asynchronous activities are logged. You can see which level you are using, e.g. in the Audit Trail page, as shown below.

Because I have set the audit level to Development, I not only see detailed information about the payload that left the Handle Time-out sub-process, but also about the instance's system fault.

This audit level can be set in the Enterprise Manager, but (as the pop-up with the information indicates) not in the Log Configuration; instead it is in the SOA Infrastructure Common Properties. You can find it as shown below.

Show What's Going on in Business Rules

Another one of those "too obvious" things is showing detailed information about what is going on inside a business rule.

When you ask, you always get an answer like: "You can turn on debug tracing in the Rules SE by setting the log level to TRACE:32 to get more detailed logging of what is happening." Absolutely a valid answer, but when you have not had too much exposure to setting log levels, it can take you some time to find out how to do it.

This is done by changing the Log Configuration, as follows:
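The same level can also be set from the command line with WLST; here is a hedged sketch that just prints the snippet rather than running it (the logger name, server name, and credentials are assumptions to verify against your own environment and the Fusion Middleware documentation):

```shell
# Sketch: print a WLST snippet that would set the rules logger to TRACE:32.
# Logger name, managed-server name, and credentials are assumptions.
LOGGER='oracle.soa.services.rules'
LEVEL='TRACE:32'
SNIPPET="connect('weblogic', 'welcome1', 't3://localhost:7001')
setLogLevel(target='soa_server1', logger='$LOGGER', level='$LEVEL')"
printf '%s\n' "$SNIPPET"
```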

Friday, August 12, 2011

Oracle SOA/BPM - Searching and Reporting Using Process Payload

For searching business process instances using specific attributes from the payload, the regular way to do so when using the Oracle SOA Suite is by using mapped attributes (formerly known as flex fields).

Using mapped attributes is pretty straightforward. You configure them on the Administration tab of the Workspace, and map a specific element of the payload onto a mapped attribute. Once it has been mapped, the attribute can be added as a column to the task list and is available for filtering. The mapped attributes can also be used in custom code via the API.

Things to Consider

Mapped attributes have some important aspects to consider:
  • They can only be used for simple-type attributes (String, Number, Date),
  • Mappings are task-specific, so to be able to search on, for example, an order.status element throughout the process, you have to map it for each activity,
  • There is a limited number of mapped attributes (20 Strings, 10 Numbers, and 10 Dates),
  • Changes to mapped attributes are only applied to instances instantiated after the mapping took place,
  • When instances are purged, all historical data of those instances will no longer be available. In a production environment, purging of instances is typically done by a Systems Administrator with no (functional) knowledge of specific processes.
Especially the latter two aspects may require a different approach to secure full flexibility regarding searching and reporting on instances. This might for example be the case when there is a requirement that historical data be kept indefinitely, or only be purged in a controlled way by an Applications Administrator. In the case of BPM, the requirement for a flexible approach to searching and reporting on process instances is pretty common.

In such cases an alternative to using mapped attributes (and reporting using the dehydration store) is to have some custom database in which significant updates to process data are stored. The advantages over using flex fields are that:
  • There is no limitation on the number of attributes,
  • The data of old instances can be manipulated, e.g. by providing default values for new attributes,
  • Management of the custom database can be delegated to some Applications Administrator who does have (functional) knowledge about the process.
The advantage of not needing to create a mapping per task is obviously outweighed by the fact that the process has to make a service call every time the data needs to be saved, but again this buys back a lot of flexibility.

Instead of using service calls to save this data, you might consider composite sensors. A composite sensor is a specific type of BPEL Process Manager Sensor. Be aware though that composite sensors can only monitor incoming and outgoing messages, and not changes to the payload within a process instance. For this reason, in most cases this won't be an alternative.

Tuesday, August 09, 2011

Using JAXB for Manipulating Payload of Human Tasks in SOA Suite

In some cases you may want to manipulate the payload of a human task of an OBPM or BPEL process instance using JAXB. An example would be when you are using some framework other than ADF Faces for creating the UI, and you want to work with Java objects instead of manipulating XML programmatically.

To do so with JDeveloper, you can generate the JAXB content model by right-clicking the xsd and choosing "Generate JAXB x.x Content Model".

Be aware that you must do this using the human-task-specific payload xsd, and not the xsd that was used to define the variable that gets passed into the human task; otherwise, to your sad surprise, you will get all kinds of XML validation issues when trying to push the data back to the process. Fortunately, the xsd of the human-task-specific payload in turn imports the original xsd, so when the original xsd changes, the human-task-specific payload automatically changes with it. You still have to regenerate the JAXB content model to make it reflect the changes.

The following picture shows the OrderCreationPayload.xsd of an OrderCreation human task. It imports an Order.xsd. The JAXB content model has to be generated using the OrderCreationPayload instead of the Order.
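Outside JDeveloper, the JDK's xjc tool does the same job; a sketch that just prints the command (the output directory and package name are hypothetical):

```shell
# Sketch: command-line equivalent of "Generate JAXB Content Model" using
# the JDK's xjc tool. Directory and package names are hypothetical; note
# that the task payload xsd is used, not Order.xsd.
XSD='OrderCreationPayload.xsd'
XJC_CMD="xjc -d src/generated -p com.example.order $XSD"
printf '%s\n' "$XJC_CMD"
```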

Friday, June 03, 2011

DWM FT and Hyperion BPM Solution Workbench for Essbase Retired

As of December 2010, OUM provided full support - Estimating, Delivery, and Training - for Business Intelligence (BI) Custom engagements (including Hyperion Essbase). This support represented a major milestone in the evolution of OUM and enabled BI Custom (including Hyperion Essbase) practices to transition from the legacy methods to OUM.

The following methods and their associated estimating models therefore have been officially retired:

  • Oracle Data Warehouse Method Fast Track (DWM FT)
  • Hyperion BPM Solution Workbench for Essbase Engagements

Tuesday, May 24, 2011

Use Cases or User Stories

When teaching use cases, a question that comes up now and then is what the difference is between a use case and a user story as used on agile projects. There is a lot to be found on the internet discussing this. Two useful references and a start for further investigation are mentioned below. What I try to do here is capture my conclusion of these discussions. But before coming to this, let's briefly discuss what is what.

User Story

A user story is a short statement about what a user wants to do, and why. A user story typically fits on an index card. The idea is that the development team uses this user story as a starting point of a conversation with this user about how this should work in practice, to get an understanding of what needs to be done to support that.

On some projects a particular format is used for user stories: "As a ... I want ... so that ...". Reviewing this format, it becomes clear that the first ... can be compared with the actor of a use case, and the last one with the goal of a use case (although in many cases user stories tend to be finer-grained). So the big difference apparently lies in the middle, where it is described what the actor does to achieve that goal.

Use Case

Next to specifying a goal, a use case captures one or more scenarios describing the interaction between an actor and a system to achieve that goal. Scenarios may be captured using a format like:

1. This use case starts when the actor does ...
2. The system responds by doing ...
3. The actor does ...
x. The use case ends when ...

There is one main success scenario (happy path), there may be one or more alternate scenarios (other ways to achieve the same goal), and there may be one or more exception scenarios (describing what happens when that goal is not achieved).

You also capture what triggers the use case (which in many cases will be repeated as the first sentence of the main success scenario), pre-conditions (what needs to be in place when the use case starts), and post-conditions (what will have been achieved when the use case ends).

What's Different?

So, apparently, a use case elaborates more on what the actor does to achieve a goal. A user story has just one statement regarding that and, compared to a use case, concerns just one scenario. Also, compared to a user story, a use case adds a trigger, pre-conditions and post-conditions to that.

Some people say that use cases capture too much detail, and claim that user stories are better for this reason. Other people state that in practice user stories tend to oversimplify things, resulting in the first iterations taking much more time than anticipated. The way to resolve that is by elaborating more on the user story. When you think about it, that makes sense, doesn't it? I mean, at the end of the day, even on agile projects the developers need to deal with details as well. They just may do it in a different way.


In my opinion the core difference has been captured best by a guy called Jim Standley, who stated that a user story is a promise to have a conversation, while a use case is the recording of that conversation.

So rather than arguing whether user stories are better than use cases or vice versa, I think the better question is: "How formal do you need to be?" The more formal, the more appropriate use cases become.

Added to this, you should also think about "When do we need to be this formal?" I can imagine systems being developed starting with user stories, with use cases written after the fact, because some external testing team requires that, or because the customer needs them for system maintenance.

Some Suggestions

If you are on an agile project creating user stories, but you are required to, or may need to, create use cases later on, make sure that each user story matches exactly one scenario, and name them uniquely. That should not be too difficult.

Then, when the user stories have been realized and you need to deliver the use cases, you can use the name of the user story as the name of the corresponding scenario. You combine all user stories that share the same user goal into one use case. Add the trigger, pre-conditions, and post-conditions, stir for one minute, and you're done!

Further Reading

Stellman/Green: User Stories vs Use Cases

Discussion at Alistair Cockburn: A user story is to a use case what a gazelle is to a gazebo.

Tuesday, May 10, 2011

Old Methods Die, Long Live OUM!

Just wanted to let you know that the retirement of the following methods is planned for June 01, 2011:

* Oracle Data Warehouse Method Fast Track (DWM FT)
* Hyperion BPM Solution Workbench for Essbase Engagements

Thursday, March 17, 2011

OUM 5.4 Has Been Released!

Yesterday OUM 5.4 was released. Among other things, the following improvements have been made:
  • Tactical SOA View added
  • Various techniques regarding monitoring and improving SOA (instrumentation)
  • A white paper about how to apply OUM with Scrum
  • Several SOA templates
The Tactical SOA View helps to get up-to-speed with OUM on SOA projects that do not (directly) have an enterprise approach.

The templates concern a Service Contract, a Service Catalog spreadsheet in case you don't have an Enterprise Repository, and some more.

Check it out!

Thursday, March 03, 2011

Installing OBPM Suite 11g PS3 using XE

When installing the OBPM Suite 11g PS3, I ran into an issue with the Metadata Services (MDS) schema on my XE database. The error I got when I tried to start the WLS SOA domain was "ORA-04063: package body "DEV_MDS.MDS_INTERNAL_SHREDDED" has errors". I checked the schema and found that all three existing packages were invalid. Recompile did not solve the issue.

I Googled a bit and found out that I had to re-run the Repository Creation Utility (RCU), as apparently there is an extra environment variable to set before you run it. I fixed it as follows:
  • Open a DOS box in the ..\rcuHome\bin folder
  • Enter: set rcu_jdbc_trim_blocks=true
  • Run rcu.bat
  • Drop the existing repository
  • Create a new one
  • All invalid objects disappeared!
BTW, the tips in a previous posting are still valid. Especially the one regarding the JVM memory setting of the WLS SOA domain:

set DEFAULT_MEM_ARGS=-Xms512m -Xmx768m