Thursday, December 03, 2009

Use Cases, Explaining Main Success, Alternate and Exception Scenarios

Once you understand the difference between a main success scenario, an alternate scenario and an exception scenario, you may wonder why it took you so long to get there. The reason is probably that the explanation was pretty abstract, talking about scenarios that do or don't have the same goal, post-conditions that add to or replace those of the main success scenario, blah, blah, blah...

Having gone down this road myself, I started to wonder if there is a simpler way to explain what should be pretty natural for most of us. And as always, finding an example that relates to day-to-day life did the trick for me to explain it to others. The example is the following.

Suppose that you have to drive from A to B. Your goal is to be in time for a customer meeting. The post-condition is that you have successfully reached your destination in time.

So the main success scenario is that you drive from A to B, without interruptions.

An alternate scenario would be that you have to take a small detour through a gas station to get some gas. An extra post-condition this detour might add to the one of the main success scenario could be that you have to obtain the gas receipt, because otherwise your employer won't cover your expenses.

An exception scenario might be that your car breaks down, and you will never be in time for the customer meeting. A post-condition of this exception scenario might be that you have to inform your customer that you have to cancel the meeting. This post-condition replaces the post-condition of the main success scenario.


Other exception scenarios might concern a serious traffic jam, getting busted for speeding, car-jacking, and so on.
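Written up as a (deliberately informal) use case sketch, the example looks like this:
  • Use case: Drive from A to B. Goal: be in time for the customer meeting.
  • Main success scenario: drive from A to B without interruptions. Post-condition: you reached your destination in time.
  • Alternate scenario: stop at a gas station for gas. Extra post-condition: you obtained the gas receipt.
  • Exception scenario: your car breaks down. Replacing post-condition: you informed the customer that the meeting is cancelled.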

Now that was not too difficult, was it?

Sunday, November 29, 2009

Is There Life After Oracle BPM Studio 10g?

Let me begin with the answer, which is: absolutely!

After Thomas Kurian discussed the positioning of Oracle BPM (fka ALBPM) a year ago (is it that long? Yes it is!), I must admit I was worried about the strength of Oracle BPM being properly appreciated. For quite some time the focus seemed to be solely on the combination of Oracle BPA with BPEL (from the SOA Suite).

But times (and vision?) have changed since then. Recently I was in a conference call in which a preliminary version of BPM 11g was demo-ed. And I was positively surprised. Oracle BPM 11g seems to preserve all that was good in 10g, while at the same time it is really integrated with the rest of the product stack.


Some Things That Caught My Eye

With BPM 11g Oracle managed to continue the ALBPM "experience" of easy process modeling and implementation. In a series of short iterations you transform the BPMN 2.0 process you collaboratively modeled with the business analysts into an executable one, without a paradigm shift like the one you would have when going from BPMN to BPEL, for example. For me this always has been one of the major strong points of BPM. What has been added to this is a so-called zero-code environment for developing screens and including external resources (like services).

Some of the more technical features that I found appealing were:
  • Usage of the common adapter framework, as is already available in the SOA Suite,
  • Support for easy (ADF-Faces) development of rich task forms with AJAX support (without bothersome JSP development),
  • A web-based UI for modeling processes by business analysts,
  • Native integration with Oracle BAM,
  • A composite (SCA) view on business processes, which (among other things) supports native integration with BPEL,
  • Native integration with Oracle Business Rules,
  • Unification of the BPM and BPEL worklist,
  • With all this, an improved support for true Business Process Management.

What About The BPA Suite?

The positioning of BPA has not changed, but has become clearer instead. BPA is typically for a more enterprise-level approach to business analysis, including top-down business process modeling with BPMN. When you have a human-centric business process, you can decide to export it to Oracle BPM 11g. Otherwise, BPEL is the obvious choice.

Boy, I can't wait to get my hands dirty with BPM 11g, which hopefully will be somewhere in March 2010!

Friday, November 27, 2009

OUM 5.3 Has Been Released

On November 13 it was announced that version 5.3 of the Oracle Unified Method has been released. As explained in a previous posting, we do offer OUM to our customers.


What's New?

Although the increase in version number (from 5.2 to 5.3) seems to suggest this to be a minor upgrade, some of us will experience it as a major one.

For example, this version includes initial support for Business Intelligence (BI) and Enterprise Performance Management (EPM) implementations. We also added a view for Software Upgrades, which will help in quickly determining which tasks to consider for an upgrade of Oracle software products, including middleware, database, enterprise application products and Business Intelligence solutions.

Regarding the other, existing views, a lot of improvements have been applied. Especially Envision (the focus area of OUM covering enterprise / business level aspects) has been upgraded, and is rapidly reaching a "mature" state. Two of the topics that I would like to point out are IT Governance and IT Portfolio Planning, mainly because I have been personally involved in those (sorry, too hard to resist)!


Training

Perhaps even more important than having a comprehensive method is being able to provide training on it, and to assist in applying the method. In the last year the Oracle Methods Team has therefore put considerable effort into creating various training modules, from high-level overviews that make you understand what OUM is all about, to more task-oriented modules around requirements gathering or analysis and design.

Customers won't find them in the curriculum of Oracle University (yet), but don't let that keep you from asking for them, as we can deliver customer training.

See you in class!

Wednesday, September 30, 2009

OBPM Business Exceptions and Heirs

Often the solution to a problem lies in having an eye for detail; properly reading an error message, for instance, is half the solution most of the time. Finding out how to create heirs in the Oracle Business Process Modeling Suite (OBPM) appeared to be yet another thing requiring an eye for detail.

A colleague insisted it worked for him. However, no matter how I tried, it simply did not work for me. We checked version numbers and build numbers: both the same. Then another colleague came by the other day, and for him it worked too. Then I started to pay attention to the details and watched how he did it.

What I did was right-click an existing Business Exception -> Create Heir, resulting in an ordinary business object. No matter how hard I kicked it, it refused to become throwable. What he did was this:
  • Create a new business exception, but not as an heir.
  • Then in the properties tab set Type Inheritance to "Behavior Inheritance"
  • And set Inherits Behavior to the Business Exception you want to inherit from
As simple as that.

Monday, September 28, 2009

BPM & Use Case, Who's Counting?

(O)BPM and Use Cases

Business process modeling and implementation using the Oracle BPM Suite should be an iterative process. Because of that you try to postpone the need to create (paper) specifications as long as possible. After all, your running BPM prototype is the specification. Otherwise, while you are iterating, the paper specification would need to be changed as well, resulting in a lot of overhead that would not add much value, but would cost all the more.

However, in some situations it may be necessary to create (preferably nearly after-the-fact) specifications, for any combination of the following reasons:
  • Acquire official approval of the detailed requirements
  • Support change control
  • Provide the base for test scenarios.
When your customer is standardized on use cases, that is a reasonable format for those specifications.

A Quiz

Now let's do a quiz and count use cases in a simple BPMN diagram that has been created with OBPM. For those that are not familiar with use case modeling: a use case captures requirements from the perspective of an actor that wants to achieve some goal using the system. So how many use cases do you count in the following business process, when for now you ignore the Receive Order by JMS activity?


The correct answer is three. In case you had another answer, read on!

Counting Use Cases

The first use case is Place Order. The (primary) actor is the customer, and the goal is placing an order. The global interactive activity Place Order covers that, and will kick off the process. As a result the first thing that will happen is that the system automatically notifies the account manager through the automatic Notify Account Manager activity. You may think this is the second use case, but consider this:
  • The notification is triggered by entering the order by the customer
  • The notification will take place immediately after entering the order, right after the user pressed the submit button.
So from the actor's perspective, sending the notification is part of the same single setting and a direct result of the actor's doing. The account manager is a secondary actor in this use case.

Still Not Convinced?

If your thinking was that the notification is part of the second use case, which by the way is Handle Order, consider this. The receiving of the notification by the account manager may have happened much earlier than the actual handling of the order by that account manager. So the notification itself is not an integral part of the single setting in which the account manager handles the order. So yes, receiving the notification is the trigger of the Handle Order use case, but it is not part of that use case itself (i.e. not the first step of the scenario).

For a similar reason as discussed for the Place Order use case, the (automatic) Create Back-order activity is part of the Handle Order use case. The creation of the back-order actually is handled through an alternate scenario of the Handle Order use case.

The third and final use case obviously is Ship Order.

Requirements vs Solutions

Now let's review the Receive Order by JMS activity, and let's assume that this only supports a different channel through which the same customer can place an order. The JMS queue may for example be used to pass in some SMS message. From a requirements perspective you can state that this is part of the Place Order use case. The argument being that using an OBPM screen or sending a text message are just two different solutions to the same actor goal: placing an order. Supporting different channels is "just" a supplemental requirement for the Place Order use case.

Now if you really think about it, you probably will agree with me that a more detailed analysis of the Place Order use case will reveal several reasons for (in the end) having at least two use cases, one for placing the order through a screen and another one for placing it by SMS. If you are a fancy use case modeler you could even model the Place Order use case as a super-type use case, having two siblings: Place Order through Workspace and Place Order by SMS.

So when you had "four" or "five" as your answer, depending on your reasoning, you might have been correct as well.

Introducing a Timer

Another interesting aspect to consider is what happens when the Notify Account Manager activity is not done directly after the customer issued the order, but is triggered, for example, at 8:00 in the morning. In that case, the notification is no longer part of the Place Order single setting.

Mind that, for the reason explained earlier, this still does not mean that the notification becomes part of the Handle Order use case. But who is the primary actor here, and who has the goal? In this case the primary actor is the timer. It depends on how you formulate the goal, but for reasons of simplicity, let's assume it is the customer, whose goal is having the account manager handle the order as soon as possible. But granted, it is debatable.

Conclusion

So, in conclusion: unless there is a timer involved, all automatic activities are part of the interactive use case that precedes them, which may be outside the OBPM process (in another system sending the message).

Tuesday, July 21, 2009

Oracle Enterprise Repository Installation Issues

The other day I installed the Oracle Enterprise Repository 10.3 on my laptop. In principle the Oracle Enterprise Repository, or OER for short, is a web application featuring a rich user interface that requires Java Web Start.

Apart from the fact that the order of the steps in the installation guide is not exactly the order in which you want to follow them, the installation on Windows XP is relatively easy. However, unlike with simple tools like TextPad or Total Commander, for some reason my installations are not supposed to be bump-free (some higher God is making sure they are not). With the finish line in sight, I already thought this was going to be an exception to that rule.

I started the application and the welcome screen rendered OK. The final thing to do was to click on Edit/Manage Assets, and then ...


Yes, you see that right, the browser equivalent of a blank face. Now what?

As we all know, logging and JavaScript are the perfect couple, so other than checking, double checking and triple checking all the steps made, I had no clue what to do. Except for contacting a colleague who just might have run into the same issue, and surprise surprise, she had! Clever as she is, she had already found out that the Java Web Start of Java 1.6 and OER 10.3 are not the best of friends.

So like in her case I was able to fix the issue by disabling Java 1.6 in the user Java runtime settings, et voilà!


Fortunately for me she has better things to do than blogging about silly installation issues (like taking care of a baby).

Friday, June 19, 2009

OUM & Software

Some time ago I got a question in a comment asking whether the Oracle Unified Method (OUM) contains or will contain any software components like we used to have with CDM (such as CDM RuleFrame and Headstart).

The answer is no. And the reason is as follows.

CDM was restricted to Oracle Designer/Developer and the usage of the Oracle Database with that. OUM is not based upon any specific tool set, simply because there are so many that (currently) it is impossible to do so. Maybe someday, when we have "fused" all our products into one consistent tool stack, who knows ... But I don't see that happening in a future near me.

On the other hand, I hope we will be able to add some technology and even tool specific guidance to OUM for some of the tools we have.

Oracle BPM and Java objects

Why use Java for business objects?

When creating business objects for Oracle BPM (10.1.3), I have a couple of compelling reasons for basing them on a Java class model. The reasons being:
  • Ease of migration
  • Testing
  • Reuse
At the end of the day, Oracle will get rid of the proprietary PBL language and target Java instead. So building your business objects on top of a Java class model might be a good strategy to ease migration to any one of the new versions.

Oracle BPM leverages CUnit (and PUnit), both being proprietary testing frameworks. When you base your business objects on Java, this also offers the opportunity to test your objects using the de facto standard unit testing framework JUnit.
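As a minimal sketch of what that buys you (the Customer class and the test are made up, purely for illustration), a plain JUnit 3 style test of such a Java business object could look like this:

// Two classes, each in its own source file; Customer is a made-up example.
import junit.framework.TestCase;

public class Customer {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class CustomerTest extends TestCase {
    public void testNameIsStored() {
        Customer customer = new Customer();
        customer.setName("ACME");
        assertEquals("ACME", customer.getName());
    }
}

Such a test runs in any IDE or build, without an OBPM engine being involved.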

BPM business objects can only be used by BPM. A Java class model can be reused almost anywhere, including BPEL, and (obviously) Java applications.


How to use Java for Business Objects?

Just follow the instructions for cataloguing components in your catalog. To leverage the extra functionality that BPM business objects offer (including creating object presentations), create a BPM business object for every top-level object type that you need, as an heir of the Java class. Don't do them all, only the ones you need the extra functionality for.

I put the Java objects and BPM objects in the same catalog, and name the BPM objects after the Java classes, post-fixed by "BO", for example CustomerBO for the Customer Java class.

Finally, to ease synchronization between JDeveloper (or your preferred IDE) and Oracle BPM, after first-time cataloguing, generate the jar file with the Java classes directly to the lib folder of your BPM project.

Wednesday, March 18, 2009

Service Oriented Confusion (SOC)

In a previous article I presented a service taxonomy. As I explained over there, deciding upon a proper service taxonomy supports a proper service layering. I should have said this is only one of a set of taxonomies (plural) that you could use, even at the same time.

Moreover, I could also have explained how different service taxonomies can help the process of service discovery. For example, a taxonomy along business domains (like Customer Relationship Management) can help to get an overview of all services provided by a specific business domain.

But I would also like to state a word of caution here. I have seen organizations using an IT-related taxonomy, including classes like data services and application or utility services, in the communication with business analysts. The problem is that to a business analyst such a taxonomy is difficult to understand, and may look pretty arbitrary. How does a data service differ from a business service? Do they not both deal with "data"? Yeah, but eh ...

Apart from that, what value does it add to them? Why should they care? I can't tell you that to be honest. What I can tell you is that I've seen situations where business analysts only thought they understood and started to do analysis. The damage done ...

It is my experience that the only two classes of the taxonomy I presented in my previous article that make sense to the average business analyst are process service (a service implementation of a business process) and business service. From an IT point of view, a business service may translate into one data service, utility service, or some composite service that orchestrates two or more data/utility services, etc.



I've been with an organization that does not strive for business services to be reusable, because in their definition a business service is a service that supports a specific business purpose, which may be unique across the organization (as shown in the picture above). However, from an IT point of view the services used to construct the business service definitely should be reusable.

Now this may or may not work for you, so you may want to use a different taxonomy for layering. As long as you make sure it is clear what kind of taxonomy is used for what kind of stakeholder, and you only use a taxonomy that makes sense in the universe of interest of that stakeholder.

Friday, February 06, 2009

If It Ain't Broken, Fix It!

For more than three years already I have been the Java developer of an Oracle-internal sales application. Up to the point I took over, the application had been maintained by several people who obviously had different development styles.

I'm more the kind of guy that practices principles like coding by intention, responsibility driven design, and refactoring, with the main goal of achieving a simple design that is easy to understand and therefore better to maintain. Those principles had not been at the top of the list of my predecessors. The result being that, to implement a change request or bug fix, you almost always had to debug the code and do extensive code reviews before you even knew where to start. I'm not joking when I say that the same kind of functionality was never implemented twice the same way. There was hardly any pattern to discover, except anti-patterns and a lot of bad practices:
  • lengthy methods, often of 100+ lines
  • names for classes, attributes and methods that do not reflect what they do
  • classes and methods having too many responsibilities
  • no proper layering and separation of concerns
  • and so on, and so forth
At a specific point in time I had (and took) the opportunity to start refactoring key parts of the application, and never stopped since. I did not refactor just for the sake of refactoring, but restricted myself to the code that I needed to touch anyway. While doing so I tried to apply best practices and extrapolated these to other parts of the application to keep the solution consistent.
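To give a flavour of what coding by intention means in practice, here is a made-up, minimal sketch (all names and business rules are invented for illustration): one lengthy method is split so that the top-level method reads like the intention and the details live in small, well-named methods.

// Before: everything in one method, the reader has to reverse-engineer the intention
double invoiceTotal(double[] linePrices, boolean goldCustomer) {
    double t = 0;
    for (double p : linePrices) {
        t += p;
    }
    if (goldCustomer) {
        t = t * 0.9;   // 10% discount, an invented business rule
    }
    return t * 1.19;   // 19% VAT, also invented
}

// After: the top-level method states the intention
double invoiceTotal(double[] linePrices, boolean goldCustomer) {
    return addVat(applyDiscount(sumLines(linePrices), goldCustomer));
}

double sumLines(double[] linePrices) {
    double total = 0;
    for (double price : linePrices) {
        total += price;
    }
    return total;
}

double applyDiscount(double amount, boolean goldCustomer) {
    return goldCustomer ? amount * 0.9 : amount;
}

double addVat(double amount) {
    return amount * 1.19;
}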

The result being that after less than half a year, newcomers needed only half the time to understand the application well enough to do their job. And those parts that had been refactored properly required only half the time to change. Cross my heart: I dare to state that any investment in making the application code base simpler has already paid for itself several times!

Many developers will recognize this situation, and may want to do something about it, but run into a brick wall called "project manager", who is just a plain idiot, not understanding the value of good coding practices, but only focusing on getting the job done in as little time as possible, right? Wrong! The key responsibility of the project manager is exactly that: delivering on time and within budget, and most of them are doing that job pretty well.

Unlike what you may think, the project manager is not the key stakeholder; the customer is. So rather than trying to convince the project manager, you must make the customer aware of how a simple design:
  • will reduce their maintenance costs (which still is about 80% of the original development time) and perhaps even more important,
  • will make their IT better aligned with the business, because it will be quicker to change a well designed system than spaghetti.
Try to convince the customer that to achieve this, a simple design is an absolute must, and therefore from time to time you need to clean the kitchen because otherwise at some point the kitchen will get so dirty that almost any meal you make in there, will be a serious health hazard. That cleaning the kitchen is what refactoring is all about.

Even when the project plan is already finished and the budget already fixed, it is likely that there still are plenty of opportunities to help the project manager with reducing the remaining development time, if only you take the time to find them. The biggest problem for applying best development practices is probably you not spotting these opportunities or not being able to communicate them well.

Monday, February 02, 2009

OUM Available for Customers

Finally, the Oracle Unified Method (OUM) is available for all our customers! Until now it was only available for Oracle Certified and Oracle Certified Advantage Partners, but that has changed now.

Customers that contract Oracle Consulting for an engagement of two weeks or longer can download OUM if they meet some additional criteria. Contact your Oracle Consultancy sales representative about the details.

Be aware that soon we can also deliver training on OUM (we already do so internally), so don't forget to ask for that too!

Friday, December 19, 2008

Service Taxonomy and Requirements

Having seen how organizations can struggle with it has convinced me that before you start to capture the requirements of a serious number of services, you'd better have a proper classification scheme identified up-front. Such a classification scheme is called a service taxonomy, and it is important for a couple of reasons:
  • The requirement attributes are likely to differ somewhat between classes, which means you might want to use different templates,
  • A service taxonomy supports a proper positioning of services in the application and technical architecture,
  • A service taxonomy supports a proper layering of services.
There is no single best taxonomy, as it depends on aspects like the number of services, and the nature of those services. However, a good starting point could be a classification like the following one.

First of all the following (more or less atomic) classes of services can be distinguished:

Business Services that are recognized as such by the business, like Retrieve Customer Profile, Print Order Confirmation.

Data Services that query or store data, like Retrieve Customer, Retrieve Customer Orders, Update Order.

Application Services, being all other services, like Print File and other 'utility' services.

The above classes concern atomic services, meaning that the services have a restricted, clear responsibility and therefore in principle are re-usable. When used in the context of use cases (or business processes) they typically will not cross steps.

Whenever applicable, you can also recognize Process Services as being the type of services that deal with service orchestration. Process services typically have a specific use case / business process as a scope. That use case can be a user-goal or summary use case (spanning multiple lower level use cases).

Batch Services are a specific kind of orchestration services that process multi-record messages in an asynchronous way.

Other Taxonomy Aspects

Your taxonomy can leverage more than one kind of classification, for example one that organizes business and process services by business domain.

Also different levels of services could be distinguished. This is where we talk about service granularity. For example, a higher level data service Retrieve Customer Data can consist of two lower level services Retrieve Customer and Retrieve Customer Address. The level of a service won't have much impact on the way you specify it functionally though. Also, specifying finer grained services is more likely to be the subject of analysis and design rather than requirements definition. The discussion about service granularity for requirements definition is likely to be that of user-goal versus summary use cases.

Be aware that the taxonomy you use during requirements does not necessarily need to be 1:1 with the one that is used for analysis and design. The meta-requirement for the taxonomy used during the requirements definition, is that it should be recognizable by business people. There should be a clear mapping to the taxonomy that you will use during analysis and design, though.

Generic Requirement Attributes

The requirement attributes of all services could consist of the following:
  • name
  • purpose (one, two sentences max)
  • description (what it does functionally, i.e. without getting technical)
  • type (within the taxonomy)
  • triggering events
  • pre-conditions
  • post-conditions
  • synchronous/asynchronous (request/response vs fire & forget)
  • message format in
  • message format out
  • list of values (for attributes where applicable)
  • transformation
  • validation of incoming message
  • list of (logical) errors that can be the result
  • metrics
  • business indicators (e.g. to collect BAM metrics)
Typical standard pre-conditions (that you might decide not to specify but capture as an architectural principle) are:
  • consumer has been authenticated and authorized to use the service
  • message format in complies with the XSD, meaning that failing to do so will result in a technical error
Message format in and out have the following sub-attributes:
  • structure
  • attribute names
  • data type
  • format mask
  • mandatory
Attributes can also have a list of values that applies. Lists of values can be static (i.e. a list of fixed values, typically implemented in an XSD) or dynamic (to be implemented with a DB query). Static lists of values should not be captured in the service description itself (for reasons of maintainability) but in a separate document.

Transformations typically are captured using a two-column format, one column describing the "query" (just one source attribute, or multiple attributes with some operations), the other the target attribute.
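For example (the source and target attributes are made up, just to show the idea):

Source/Transformation                              Target
firstName concatenated with lastName               customerName
orderDate, reformatted to DD-MM-YYYY               orderDate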

The validations to specify concern checks that are not considered to be part of the pre-conditions.

Metrics can have the following sub-attributes:
  • average calls/per time unit
  • peak volumes
  • max response time
A service can call one or more other services, each having their own message format in and out and the transformations involved in that, which then also need to be specified.

Specific Attributes

In case of Data Services also the following attributes can be applicable:
  • data source (name of the database, or other)
  • a (restricted) data model
  • key attributes
  • (logical) query, insert or update statement

Application and Batch Services typically also can have the following attribute:
  • algorithm

Process Services also have a process orchestration that is described using an activity diagram (preferably using the BPMN notation) or a sequence diagram. Process Services often have human interaction involved.

Generic Supplemental Requirements

In case of asynchronous services you are likely to have a need for an "error hospital". Preferably you have some generic mechanism for that. For each individual service you then only need to specify how long and with what frequency a retry needs to be done before it will end up in the error hospital. The requirements for the error hospital itself will be generic, describing how a system or application administrator can cancel or retry the service (when possible).

Other generic supplemental requirements might concern:
  • audit requirements
  • logging
  • security
Per service you only specify how the generic mechanisms apply.

Wednesday, December 10, 2008

Oracle BPM Enterprise 10gR3 for WebLogic on XP

Officially Oracle BPM Enterprise 10gR3 is not supported on Windows XP. When you try to install it using the default path, you probably will experience some exceptions like a ClassDefNotFound or an IllegalStateException. So what to do when you're bound to XP but want to juggle with BPM Enterprise nevertheless? This is how I did it. But remember it's not supported, so don't start complaining when it does not work for you.

What I did was to run the installer up to the point where it asks you to start the configuration wizard, and then quit. The configuration wizard offers the option to install a WebLogic Server (WLS) for you. However, I created a WLS domain myself, with only an admin server and no managed server, and called it OBPM (but you can call it DonaldDuck or whatever you like). Once successfully created, I started the server.

The OBPM Enterprise configuration wizard has an option that lets you choose to change an existing WLS domain rather than to create a new one, to deploy the process engine and web applications on. So I chose my OBPM domain.

The ClassDefNotFound error I prevented by not choosing to install the Oracle BPM transport provider. You can install it later, and perhaps when you concentrate on that, the ClassDefNotFound error can be tackled as well. I did not yet get that far. I explicitly configured the tablespaces of the engine and directory schemas, but had to key in the names in upper case (e.g. in my case USERS and TEMP, instead of users and temp), otherwise the installation of the schemas failed, telling me that the tablespaces did not exist. Weird.

Anyway, using the installation path as I described above resulted in a flawless installation on my XP Professional.

Need to know more? Check out the Oracle BPM 10.3 Configuration Guide.

Friday, December 05, 2008

Dynamically Switching Rules Dictionary Versions

Suppose that you are using Oracle Business Rules in a Java application. And suppose you want to be able to change and test your rule set without bothering other people. Wouldn't it be nice to be able to work in a copy of a dictionary and activate that only for your user session? This can be achieved as follows.

I assume one dictionary of which there can be multiple versions. Of course the same principle can be applied to multiple dictionaries.

First implement some mechanism for storing the default version of the dictionary, which will be retrieved every time before calling the rules repository. I use a properties file for that, because you can change a properties file dynamically. Then have some page from which the user can pick an available version from a list. Once a version has been chosen that differs from the default, that will be the version used for that user session.
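A minimal sketch of such a properties file mechanism could look as follows (the file name and property key are assumptions, pick whatever fits your application):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

public class DictionaryVersionStore {

    // Hypothetical location and key; adapt to your own environment
    private static final String FILE = "rules.properties";
    private static final String KEY = "dictionary.default.version";

    // Read the default dictionary version, re-reading the file on every call
    public String getDefaultVersion() throws IOException {
        return load().getProperty(KEY);
    }

    // Store a new default version without losing other properties in the file
    public void setDefaultVersion(String version) throws IOException {
        Properties props = load();
        props.setProperty(KEY, version);
        FileOutputStream out = new FileOutputStream(FILE);
        try {
            props.store(out, "Default rules dictionary version");
        } finally {
            out.close();
        }
    }

    private Properties load() throws IOException {
        Properties props = new Properties();
        FileInputStream in = new FileInputStream(FILE);
        try {
            props.load(in);
        } finally {
            in.close();
        }
        return props;
    }
}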

Now how to get the versions of a dictionary from a repository? That can be done using the following Java code snippet, which is based on a WebDAV repository:


// repositoryURL is a String containing the URL to your repository
// rulesDictionaryName is a String containing the name of the dictionary

// Register and instantiate a WebDAV-backed rule repository
RepositoryType repositoryType =
    RepositoryManager.getRegisteredRepositoryType("oracle.rules.sdk.store.webdav");
RuleRepository rulesRepository =
    RepositoryManager.createRuleRepositoryInstance(repositoryType);

// Point the repository at the WebDAV URL and initialize it
RepositoryContext repositoryContext = new RepositoryContext();
repositoryContext.setProperty("oracle.rules.sdk.store.webdav.url", repositoryURL);
rulesRepository.init(repositoryContext);

// The marker names of the dictionary are its versions (a java.util.List)
List dictionaryVersions = rulesRepository.getMarkerNames(rulesDictionaryName);
for (int i = 0; i < dictionaryVersions.size(); i++)
{
    // the marker names are already Strings; the cast just makes that explicit
    dictionaryVersions.set(i, (String) dictionaryVersions.get(i));
}


When the test was satisfactory, the chosen version can be stored in the properties file, making it the default for every new user session.

Monday, December 01, 2008

Moving a WLS Domain

The following describes how to move your own sandbox kind of WebLogic Server domain. This method is not supported and should not be used to move any other kind of domain. It is also not guaranteed that any applications deployed on it will move flawlessly with it. And I tried it only on Windows.

So why describe it anyway? Well, just like me, you might not have paid attention when creating a domain, and it therefore may have ended up in your product home directory. And you might, just like me, be too lazy to want to run the wizard and create a new one. Not that it is so much more work to do so. Finally, just like me, you might be interested to know where WLS stores this kind of information.

Moving a domain can be done as follows:
  • Copy the domain directory and everything under it, to the location where you want it to be
  • Search all the files in the domain directory for the fully qualified path name where the domain was located (e.g. d:\oracle\product\bea\domains\MyDomain), and replace that by the fully qualified path name where you want the domain to be (e.g. d:\wls\domains\MyDomain). You will find a couple of files, among them .cmd files.
  • Find the nodemanager.domains file (in my case in the d:\oracle\product\bea\wlserver_10.3\common\nodemanager\ folder). Find your domain in there and modify that manually (e.g. to d\:\\wls\\domains\\MyDomain)
  • Finally, change the short-cuts to start and stop your domain to point to the correct locations.
That did it for me!

Friday, October 24, 2008

To BPA or to BPM, that's the question

The other day I heard Thomas Kurian talking about the future of the Oracle technology stack. Among other things, he mentioned the Oracle BPA Suite and the Oracle BPM Suite (fka BEA AquaLogic BPM). According to Thomas, the "structured" BPA Suite aims at rigorous process modeling and simulation, where the "agile" BPM Suite aims at iterative process modeling.

Methodological me tended to hear that he was saying that the BPA Suite was more suited for plan-based development, where the BPM Suite would better support agile development. Knowing both of these tools, I found that a somewhat rigorous statement by itself, so I dug into the subject somewhat more.

Yes, the Oracle BPA Suite has a huge amount of modelers and an astronomical amount of symbols. And yes, being able to use the tool to its full extent requires a learning curve that looks like the inverse of the stock markets of these days. But does that mean you cannot use it in an agile way? Not really, because being agile is less about how long it takes to learn something, and more about being able to apply it to create new or change existing software.

On the other hand, would the Oracle BPM Suite not be suitable for rigorous process modeling and simulation? Again I don't see why. You could easily have a very rigorous way of developing systems with strict procedures to follow, including an extensive QA process and still use a tool like the BPM suite.

Maybe we need to have a look at the subject from a different angle. Let's start with a limited feature comparison. Limited in that I restrict it to a development environment based upon a SOA with a focus on the high level topics of the models that are supported, how Governance can be enforced, and the type of systems that you can build.


Models and Governance

In a nutshell, with the BPA Suite you can start by defining business processes from a high-level context diagram down to a detailed design. At the higher levels you can use Value (Added) Chain Diagrams, or VACDs for short, that at a specific point map onto BPMN diagrams. Those BPMN diagrams can then be transformed into a BPEL design, which finally can be implemented using JDeveloper. The higher-level modeling can be done by Business Analysts, while the detailed BPMN model typically will be done by someone with a developer's background.

With the BPA Suite you can define roles and restrict access to specific modelers of the tool. This allows you to implement a rigorous Governance model you might have, where the responsibilities for the different roles are well supported by the tooling.

With the BPM Suite you can define processes using BPMN (or a similar notation, each being just another view on the same source). Next to that, three different "profiles" are supported: Business Analyst, Business Architect, and Developer. The only difference being that a Business Analyst sees the least and a Developer the most detail of the same process.

Unless you start doing really fancy things, the BPM Suite does not support a rigorous Governance model as far as responsibilities are concerned, as nothing prevents a Business Analyst from changing his profile to Developer to see what has been coded under the hood, and even change that.

So the BPA Suite supports a rigorous enforcement of Governance and capturing of requirements, where the BPM Suite hardly supports that, if at all. But as such I don't want to call that an "agile" aspect of the BPM Suite.

Regarding Governance, there is one other significant difference. The BPA Suite itself does not result in an executable model. BPMN can be transformed to BPEL, but that is only a blueprint. After sharing the blueprint with development (as it is called), a developer picks up the BPEL blueprint and implements it.

The BPM Suite leverages XPDL, which provides a means to draw business process models but also is an executable model. So for the BPA Suite you could say there is a strict hand-over from design to implementation, where with the BPM Suite there is no such thing.


Type of Systems

Currently the BPA Suite clearly is coupled to BPEL. In our practice BPEL is heavily used to integrate systems, with little to no human workflow involved. But that does not mean that you cannot use it for processes that almost completely involve human interaction, and in practice it probably will be used that way.

Although at first sight the BPM Suite primarily seems to target processes that involve human interaction, nothing will keep you from developing processes that are fully automated. BPM processes can be deployed as services with a WSDL interface, and call other services, allowing it to be used for service orchestration only.

My conclusion is that regarding the type of system you can build, there is not really a significant difference between the two, although with the BPM Suite it is much easier to create a user interface than with the BPEL human workflow. But are processes with a lot of human interaction "agile", and processes without it "rigorous"?


Conclusion

When talking about rigorous versus agile and iterative, the significant difference I see is that with BPM it is hard if not impossible to build a solid wall between analysts and developers. And yes, it is true that agile development methodologies do promote breaking down that wall wherever possible.

By the way did I already say sorry for the lack of pictures this time? No? Sorry!

Thursday, October 09, 2008

AIA & Canonicals, Just Good Friends?

Last time I talked about the Canonical Data Model (CDM) pattern. What I haven't discussed yet, is how this pattern is implemented in the Oracle SOA Suite.

Actually the SOA Suite itself does not explicitly support this pattern, simply because enforcing any specific data model should not be seen as the task of any infrastructure tooling, including the SOA Suite. Nevertheless, the concept of a common data model can very well be implemented for specific industries. The goal being that parties in that industry can communicate more easily, because by using a CDM they actually talk a common language, at least to some extent.

This concept already is being used in various "industries". Examples are CDMs for governments, allowing central and local governments to exchange information with each other about their citizens. CDMs in the health care industry allow doctors, insurance companies and other stakeholders to exchange information about patients.

More recently, Oracle introduced the Application Integration Architecture, or AIA for short. As the communication about it seems to focus on the common process aspect, this is perhaps a little bit of a hidden feature of AIA, but the industry foundation packs of AIA could not exist without the CDMs they are based upon. Perhaps the common data model is of even greater value than the common process.

Once defined, a CDM should be accessible by every process that is making use of it, preferably by means of XML schemas. In case of the SOA Suite, rather than importing (and with that copying) the schemas in each and every BPEL project, a way to do this is by including all coherent XML schemas of a CDM in one single, dummy BPEL process, as explained by Marc Kelderman.

Monday, October 06, 2008

The Case for Canonicals

You might have heard people talking about a "Canonical Data Model", or CDM for short. You might have even heard the rumour that having a CDM is a critical success factor in achieving the true benefits of a Service Oriented Architecture. But in the meantime you have yet to encounter the first situation in which it actually is being used. Is a CDM really a must-have or just another buzzword?

First let me try to explain what a CDM actually is, apart from just being one of the integration design patterns. In short you could say that a canonical data model provides a generic view on the structure of the data that systems deal with, like for example a generic concept of what a Customer is, what attributes it should have, and what the data types and formats of those attributes are.

It might surprise you that having a common view of an entity like "Customer" often is far from common practice. Imagine a big organization like a bank having many systems with different purposes, often from different companies because of mergers. Such an organization can easily have as many definitions of "Customer" in their data dictionary as they have systems that deal with customers.

Now what if such an organization needs to integrate all these systems with SOA, using XML transformations for that? If there are N systems to integrate, then in principle there are N * (N - 1) mappings possible for each type of Customer. In case of 4 systems that need to exchange customer data, that already means 12 mappings, as you can see in the following picture. But if you define one generic definition and map to and from that definition, then the maximum number of mappings is 2 * N. In case of 4 systems that means only 8.


A larger bank easily has hundreds of applications with dozens of different definitions of "Customer", let's say 30. Then the difference is 870 versus 60! And that is only for Customer, and there are plenty of other entity types that need to be exchanged as well, like Account, Address, etc. You get the picture?
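If you want to play with the numbers yourself, the arithmetic boils down to this little sketch:

// Point-to-point: every system may need a mapping to every other system
int pointToPointMappings(int systems) {
    return systems * (systems - 1);
}

// With a canonical data model: each system only maps to and from the CDM
int cdmMappings(int systems) {
    return 2 * systems;
}

// pointToPointMappings(4)  = 12,  cdmMappings(4)  = 8
// pointToPointMappings(30) = 870, cdmMappings(30) = 60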


So the incentive to use the Canonical Data Model design pattern, is to reduce the number of mappings and with that the inter-dependency between systems, the complexity of the overall integration and, last but not least, the maintenance of all that. For larger organizations this can make a huge difference.

Having said all this, it probably never is the case that all systems need to integrate with each other, let alone that this always requires a two-way mapping of every entity involved. Whether an entity should have a definition in a CDM depends on how many mappings it will be involved in. When there are more than three systems that all need to exchange an entity in a bi-directional way, then a canonical definition of the entity starts to make sense.

The case against canonicals is that in some organizations it might prove to be far from trivial to get a common view of what the generic definition of a specific entity should look like.

Friday, August 22, 2008

About Business Processes, Use Cases & SOA (4 & End)

This posting is a follow-up on a previous posting in which I explained what artifacts might be appropriate in case of creating an analysis model out of a set of system use cases for a SOA project.

In this fourth and last posting, I will show how the (platform independent) analysis model can be transformed into a (platform specific) design that leverages BPEL.


Design

The design involves mapping the analysis to artifacts that actually are implemented. Assuming that the Technical Architecture dictates a Service Oriented Architecture that uses BPEL to implement services and service orchestration, the design would at least include the following:

  • WSDLs
  • A diagram design of the BPEL processes to implement
  • A database design (that supports retrieval of resident information and storage and retrieval of parking permits and their applications)
An example of an XSD has already been provided. Including WSDLs in this case would not add much value, and a database design is too obvious. A design of the BPEL processes to implement has in this case sufficiently been provided by the activity diagrams already created. This will probably often be the case.

To show what the implementation might look like, the following two BPEL process diagrams are included.
A part of the ParkingPermitProcess looks as follows:


Not surprisingly the initial business process diagram and the implementing BPEL process flow look very similar.

The same holds true for the user goal level Validate Parking Permit use case that looks as in the following picture (that actually zooms in on the ValidateParkingPermitApplication step of the previous picture):


In practice, you can imagine a human workflow step to be included that supports manual intervention of the outcome by Parking Services.

This ends a short story about the road you could walk from business process modeling to a BPEL implementation, and that with use cases as well. Now was that impressive or what?!

Thursday, August 14, 2008

About Business Processes, Use Cases & SOA (3)

This posting is a follow-up on a previous posting in which I explained how business process models can be drilled down to user-goal level use cases, and how to do service discovery based upon process models.

In this third posting, I will show how the requirements that have been captured so far can be transformed into an analysis model that will provide the basis to build the design on.

Analysis

What kind of artifacts should be created during the Analysis process depends on the nature of the use cases and the architecture that is going to be used to implement them. The most apparent candidates for adding detail are activity or sequence diagrams, class diagrams and (where useful) collaboration diagrams (used to describe how various “components” work together). For SOA projects also message descriptions and message transformations are obvious candidates.

For services that support more than one operation you also need to specify the names of each individual operation, as well as the types of their input and output messages.

Depending on its granularity, a service supports a summary use case (like the Parking Permit process), a user goal use case (like Apply for Parking Permit) or a subfunction use case. No example of the latter has been included, but you can imagine that for each separate channel (letter, email, SMS) a subfunction use case could be created that describes a generic notification service that can be used to send all kinds of notifications to third parties.

Unless parallel development is being done where other (parts of the) system(s) depend on a service to be built, at this point there is not much value in specifying the exact WSDL of the service. For any external service you use, you need to have at least the WSDL location, if not the WSDL itself.

Activity Diagrams

When there is a flow involved, it might be useful to create an activity diagram. An obvious example would be a use case for a service that will be implemented as a BPEL process. As already noted, the activity diagram for the Validate Parking Application is an example of that. But use cases that involve a user interface with a complex screen flow, also are good candidates.

Class Diagrams

When there is data involved, a class diagram seems to be the obvious choice to document that. Many people doing SOA projects never seem to make a class diagram. Realize though that class diagrams add as much value to SOA projects as they for example do to pure Java/XML projects, as messages are about handling data as well. Having class diagrams available can add great value to optimizing the process of defining message formats and transformations.

When analyzing the Parking Permit use case the following classes can be recognized: Resident, Parking Permit Application, and Parking Permit. A distinction has been made between Parking Permit Application and Parking Permit, as not all applications will result in a permit, and the application follows a process with statuses that do not apply to the permit itself and vice versa.

Not very explicit in the description but obviously needed if you think about it, is a notion of an Area. You will need this in the process of determining if there are sufficient parking lots available. You might argue that the reason for not having discovered this earlier is because the way the waiting list is being processed has not been worked out to the proper level of detail.

The class diagram for the Parking Permit use case could look as in the following figure:


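Expressed as a rough Java sketch (just to give an idea; the attributes are assumptions based on the description above, and each class would live in its own file), the model boils down to this:

public class Resident {
    private String name;
    private String address;
}

public class Area {
    private String code;
    private int availableParkingLots;
}

public class ParkingPermitApplication {
    private Resident applicant;
    private Area area;
    private String status;   // e.g. received, on waiting list, granted, rejected
}

public class ParkingPermit {
    private Resident holder;
    private Area area;
    private java.util.Date validFrom;
}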

Message Descriptions

What format to use for a message description probably foremost depends on who needs to validate it. Some people will prefer more "logical" descriptions in Word, while more technically oriented people probably have no problems with XSDs.

Mind that to be able to create XSDs that translate 1:1 to an implementation, it is important to understand the technique of implementing messaging. For example, initially one XSD was created for the Resident and one for the Parking Permit Application, only to discover that it was more practical to create one XSD containing both and use that as the format for the message going from the resident to the Parking Process service. If you pay more attention to detailing the Apply for Parking Permit use case, you would probably find that out up front.

For the sake of example the response message has been kept simple by returning a string stating that the application has been received successfully. The description of the request message looks as follows:


Other message descriptions that need to be made are the request and response messages for the Verify Ownership, Verify Electoral Registers and Verify Residence use cases.
You should keep the message descriptions separate from the use case descriptions, to support reuse. Refer to them from the use case descriptions instead.

Message Transformations

How messages are transformed from one format to the other depends on the situation. For example, in case of BPEL, transformations can be done by doing one or more assign operations or by using an XSLT transformation, which in both cases may involve complex XPath queries or regular expressions.

For this reason transformations probably are best described in text, as has been done in the following example:



The example is trivial in that the transformation consists of using a subset of the fields of the source message and mapping those 1:1 onto fields of the target message.

You can imagine that in practice often more complex transformations need to be done, for example in case of n:m mappings. In practice using a two-column format often suffices, where the (left) Source/Transformation column specifies which source fields are involved and any logic that needs to be applied to them, and where the (right) Target column specifies what the (single) target field is.

As with message formats you should keep the message transformations separate from the use case descriptions to support reuse.

Where to Stop

Once the analysis model has been worked out to a sufficient level of detail, you are ready for the design. What “sufficient” means in this context, depends on many factors, the most important ones being the following:

Nature of Engagement

In some cases a formal process needs to be followed that requires every change of the specifications to be approved up-front. On the other side of the spectrum there is the agile approach by which part of the (detailed) requirements are captured while validating iterations of a working program.

Skills and Habits of Developers

Developers are most comfortable with what they are used to working with. However, you should be aware that the situation in which every developer involved needs to get used to one broadly accepted way of creating specifications almost always outclasses the situation in which developers need to get used to as many different styles as there are other developers. Not to mention the effort that needs to be put into the validation process involved with that.

For most projects that involve requirements analysis, class models are very useful to developers, even in the case of BPEL development.

Application and Technical Architecture

The development of data-oriented systems that depend on a well-structured database can highly benefit from creating a detailed class diagram. In case of use cases that involve a user interface, activity diagrams normally only add value when there is a complex dialog involved.

For SOA projects that primarily deal with messaging it probably suffices to detail classes to the level that all attributes and their types are known. Message formats and transformations often need to be worked out in detail. In general, activity or sequence diagrams add great value to business processes and services, as they support a more effective implementation.

Other architectures, for example identity management/security, in their turn require yet other details.

A final remark that needs to be made at this point is that use case analysis should not be confused with a technical design. Although some people state that class models, and activity diagrams that describe system-internal behavior, are "technical", they actually only capture detailed requirements or validate higher-level requirements from a different angle.

To be continued ....