Wednesday, December 19, 2007

Publish Your (BPEL) Process

As silly things can sometimes take a disproportionate amount of time, I thought I'd share with you what I found out about documenting a BPEL process diagram.

Agile as I am (ahum), I try to put as little as possible in written documentation. The reason is obvious: in general, after its initial publication documentation is hardly ever used again, and its expiry date has passed before you know it. Among the ways to prevent superfluous documentation are iterative development involving end users, simple but effective design, coding standards, an effective naming strategy, and sufficient inline comments.

I think there will not be many occasions that make it worthwhile to publish a BPEL process diagram, as developers would rather open it in JDeveloper and other people probably will not understand it anyway. Actually, the only occasion I can think of is when you want to document how to create a BPEL process, like in a how-to, or like in my case, where I want to document how you can get from business processes via use cases to an actual implementation (more about this subject in some posts to come).

Anyway, if you feel the need for whatever reason, this is how you can do it.

Unlike with UML diagrams, for BPEL process diagrams there is no menu option like Model -> Publish that you can use. In finding out how to do it I of course first tried the hard way: Googled it and read online documentation, to no avail. Only then did I do the obvious and take a second, better look at the interface, where I found a couple of "mystery buttons" at the top of the diagram. Of course, I had seen and actually used two of them before (for validation), but the others? Never gave them a thought.


Obviously there is a print button, but to the right of that sits a camera-shaped icon, Create JPEG Image, which (surprise, surprise!) creates a JPEG image! There are a few other buttons
that make me curious to find out what kind of people actually want to use them, but also another one that I can picture being useful, which is the palette-shaped icon "Diagram Properties".


This pops up a window with various options to "optimize" the diagram (still need to find out what that means, as trying it did everything but that), options for coloring, and so on. An interesting option is the possibility to add annotations that (as the online help states) "provide descriptions in activities in the form of code comments and name and pair value assignments". If you have experience with that, drop me a comment as I sure would like to know.

Tuesday, December 04, 2007

OUM is OUT!

It's official now, the Oracle Unified Method, or OUM for short, is available outside Oracle! Currently only for Certified Partners, but it is a start.

Now why should you care? Well, you might care when you are in an organization that uses (part of) the Oracle product stack and:
  • Already uses the Rational Unified Process, but has difficulty figuring out how to use the Oracle product stack with that, or
  • Is not yet using any formal method, but clearly recognizes the need for one, or
  • Has a need for some method that covers more than "just one project" at a time.
As I always explain, OUM is for Oracle what RUP is for IBM / Rational, and more. Next to covering the Unified Process, OUM also addresses issues that are specific to the Oracle tool stack, and goes beyond the Unified Process by addressing cross-project, enterprise-level issues as well.

Oracle tool stack specific issues are covered in three ways. First there is the concept of supplemental guides. With a supplemental guide, extra guidelines that cover a specific architecture or product are added to the method. For example, a supplemental guide can add extra, specific tasks. Currently there is the SOA Supplemental Guide, and work is being done on Identity Management and Business Intelligence Supplemental Guides.

Secondly, Oracle tool stack specific issues are covered in task guidelines, for example by giving examples of how to execute a task and how the tools fit in. Also, the task guidelines can provide links to more information, for example online documentation or white papers (a few of which have been written by me). This is a bit similar to the so-called tool mentors in RUP.

I hear some of you say, "Duh, you already had something similar with CDM for Designer/Developer, so big surprise (not)!". But then you are not fully appreciating the magnitude of the issue we have with a huge (and still growing) tool stack. It's not just Designer/Developer anymore. And to be honest, because of this magnitude OUM is far from what CDM used to be for Designer/Developer when it comes to offering tool-specific guidance. But we will get there, one day. After all, there is a limited number of IT companies in the world, and although you might think otherwise, also a limited budget, so it will stop somewhere eventually.

As I said, OUM also covers enterprise-level issues. For that we have added what we call "Envision". Envision is not an extra phase. It's more like a collection of processes covering aspects that should not be dealt with only in the context of a single project, but handled at an enterprise level.

The processes currently covered by Envision are:
  • Enterprise Business Analysis
  • Adoption and Learning
  • Enterprise Architecture
  • IT Portfolio Management
  • IT Governance
In the future we might also cover Operations and Support, for which we currently refer to ITIL.

As the following figure illustrates, Envision kind of "folds" around projects.


An Envision process has to start somewhere, ideally in the context of a project of its own. However, in practice it is more likely that the process will be initiated within the context of the first project delivering artifacts that at some point in time are recognized as needing to be at the enterprise level. For example, during some project a technical architecture might be created that turns out to become the start of a reference architecture for following projects.

Finally, Envision also provides the glue between a project and processes or methods that already exist within the enterprise. I hope this glue is strong enough to make this message stick with you.

Friday, November 30, 2007

Check Your SOA

Finally found the time to finish this posting, which I started to write like three months ago! It's about time, as it might not take too long before its content starts to get more and more obsolete.

Some time ago I was asked to check whether the SOA Suite had been properly installed at a customer site. So what do you do when you are asked to check something? You go and look for a checklist! At least that is what I do, and not very surprisingly, I found nothing. As you might recall from an earlier article I have the memory of a goldfish, so to prevent having to reinvent that wheel the next time, I thought I'd write it down. If not for you, then for myself.

The following describes a checklist for an installation of the Oracle SOA Suite 10.1.3.0 or 10.1.3.1 with the patch to version 10.1.3.3 on top of that.

By the way, the current 10.1.3.1 version can be downloaded from OTN. You should also be able to find the patch on OTN, but don't ask me how; going the regular way it pointed me to MetaLink. After some inquiries I got a direct link to the Windows version as well as the Linux version. Don't get confused by the fact that you are actually downloading files to install the Oracle Application Server; installing the SOA Suite is part of that and one of the installation options.

Unless you are running the SOA Suite just for doing some how-to's or creating some "look mommy, my first BPEL process!", you'd rather not install the ORABPEL, ORAESB and ORAWSM schemas in the Oracle Lite instance that comes with the default installation, because of its limitations.

Schemas Log File

One of the first things you can check is the log file of the creation of the schemas. Installation of the schemas is a separate step and will have been done before installation of the Application Server / SOA Suite. The irca[yyyy-mm-dd]_hh-mm-ss[AM|PM].log file will reside in the directory from which the irca.bat or .sh has been run, e.g. [SOA_Oracle_Home]\install\soa_schemas\irca\. There should be no errors reported in this file other than errors related to dropping non-existent database objects (in the case of a first-time installation).

Installation Log Files

The next things you might want to check are the log files of the installation of the Application Server / SOA Suite itself. In case of an installation on a Windows machine there will be two bigger log files in the folder C:\Program Files\Oracle\Inventory\logs\ (dunno where they are on Linux) with names like installActions[yyyy-mm-dd]_hh-mm-ss[AM|PM].log. One is of the installation of 10.1.3.1 and the other one of the installation of 10.1.3.3. As they are big, you'd better scan them for strings like "ERROR" (in uppercase).
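
For example, on Windows you could scan them from the command line like this (just an illustrative one-liner; adjust the path to your own inventory folder):

cd "C:\Program Files\Oracle\Inventory\logs"
findstr "ERROR" installActions*.log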

Finding out the Ports

To check the installation you need to know the OPMN and http port. By default the OPMN (request) port of the Application Server will be 6003. The Integration Server will run on the http port, by default 8888. Both ports might have been configured differently during and after installation.

The ports that have been used during installation are configured in the [SOA_Oracle_Home]\bpel\utilities\ant-orabpel.properties file. Make a note of the values of the opmn.requestport and http.port properties.
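
For example, the relevant lines will look something like this (illustrative values only; the actual values depend on your installation):

opmn.requestport = 6003
http.port = 8888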

In case the port numbers have been changed after installation, the actual OPMN port can be found in the [SOA_Oracle_Home]/opmn/conf/opmn.xml file. Search for the <port> element and note the value of its request attribute.
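
The line in question looks something like this (the values shown are illustrative):

<port local="6100" remote="6200" request="6003"/>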

Whenever the http port has been changed after installation, the actual value can be found in the [SOA_Oracle_Home]/Apache/conf/httpd.conf file using the Port and Listen directives.
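
In that file you are looking for lines like the following (values shown are the defaults; yours may differ):

Port 8888
Listen 8888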

Checking the Consoles

The next thing you can verify is whether every component works properly. The following assumes that the Application Server / SOA Suite has been installed on a machine with an IP address that resolves to the host name "localhost", and using the (default) http port 8888.

The components that you should check are:
  • HTTP Server
  • Application Server Control
  • BPEL Console
  • ESB Console
  • Web Service Manager Control
  • Rule Author

Except for the HTTP Server, every other component will require you to log in. By default the user name will be oc4jadmin. The actual user name can be found in the [SOA_Oracle_Home]/j2ee/home/config/system-jazn-data.xml file. Look for a user with display-name OC4J Administrator. The password will be obfuscated, so if you don't know it, ask your customer.
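
The entry you are looking for will look roughly like this (a trimmed, illustrative fragment; the obfuscated credentials will obviously differ in your file):

<user>
  <name>oc4jadmin</name>
  <display-name>OC4J Administrator</display-name>
  <credentials>{903}...obfuscated...</credentials>
</user>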

HTTP Server
Check if the HTTP Server works using the URL http://localhost:8888. This should show the "Welcome to Oracle SOA Suite 10.1.3.3" page, from now on referred to as the Application Server home page. From there you can browse to the Application Server, BPEL, ESB and Web Services Manager Control.

Application Server Control

Clicking the link to the Application Server Control (or typing http://localhost:8888/em into your browser) should bring you to the login page of the Application Server Control. When logged in, the Cluster Topology page should show, displaying the Application Server itself with at least two OC4J instances in it, one normally called home and the other one normally called oc4j_soa.

When the Application Server Control is not up, it might be that you have run into the issue described in section "7.9 Accessing Oracle Enterprise Manager after Applying Patch". It states that this happens in rare cases, of which my customer apparently was one.

When the control is up, click on the home instance and then on the Applications tab. From there expand the Middleware Services and then Other Services. That should show a WSIL-app application deployed on it. Click that, go to the Administration tab, and on the row that reads View Proprietary Deployment Descriptor click the Go to Task icon. Somewhere at the right it should read deployment-version="10.1.3.3.0".

When patching a 10.1.3.0 installation, one of the (manual) post-application tasks is to redeploy this web application. In the list with post-application tasks it is described that you have to deploy it manually only when patching 10.1.3.0. My customer patched a 10.1.3.1 installation, but for some reason the WSIL-app had not been updated. In this way I discovered my customer had not yet done any of the post-application tasks.

I therefore suggest checking the deployment-version of each web application, just to make sure. It's not that much work anyway. The other web applications you can check are deployed on the oc4j_soa instance and can be found by going to the Applications tab and expanding the Middleware Services. That should list all the SOA Suite components, being:
  • Rules
  • ESB
  • WSM
  • BPEL
Mind that all these SOA Suite components are just web applications that have been deployed on the oc4j_soa instance. You can expand them all, check if they are up and have the correct version number in the same way as I did for the WSIL-app.

BPEL Control
Clicking the link BPEL Control from the Application Server home page should bring you to the login page of the BPEL Control. Normally you log in using the same credentials as for the Application Server Control, but these might have been changed after installation.

When logged in you should see the home page of the BPEL Control. Probably there is not much to see yet, so we move on to the next step. Later on we will deploy a simple BPEL process on the oc4j_soa instance and then you will use this control to test it.

Log out of the BPEL Control. This will bring you back to the login page. Follow the link on the login page to the BPEL Admin console and login. If you have never seen this before, it might be interesting to review and see what is being configured in there. Close the window once you're done.

ESB Control
Clicking the link ESB Control from the Application Server home page should bring you to the login page of the ESB Control. As with the BPEL Control, normally you use the same credentials as for the Application Server Control.

When logged in you should see the home page of the ESB Control. There probably is not much to see here either, so we move on to the next step. Later on we will deploy a simple ESB service on the oc4j_soa instance and then you will use this control to test it.

Web Service Manager Control
Clicking the link Web Services Manager Control from the Application Server home page should bring you to the login page of the Web Service Manager Control. I discovered that it fails to show on my computer, and I can't replay what I saw at my customer's site, so I leave it to your imagination what to do here.

Rule Author
There is no link on the Application Server home page for the Rule Author (at least not in my case) so you have to check that by typing in http://localhost:8888/ruleauthor in your browser. This should bring you to the login page of the Rule Author.

Once logged in you should see the "Welcome to Oracle Rule Author!" page. As the Rule Author help is deployed as a separate web application, follow the "Help" link in the top-right corner and check that the help for Rule Author comes up.

Creating JDeveloper Connections

After installing the SOA Suite and verifying that it runs, you should be able to connect to the Application Server and the Integration Server from JDeveloper, in order to deploy BPEL and ESB processes.

If the connections have not been made already, start JDeveloper (10.1.3.x) and create an Application Server connection using the OPMN port you found earlier (default 6003). After that, create an Integration Server connection using the http port you found earlier (default 8888).

Testing BPEL Deployment

The next thing you can do is check whether you can deploy BPEL and ESB processes. I brought a prepared, simple Hello World BPEL process with me, as shown below.


The assign takes some string (a name) as input and replies "Hello [name]. The time is now [date + time]", as shown in the next picture:


You should be able to deploy that on the Integration Server by right-clicking the project, choosing Deploy and then following the link to the Integration Server. You will probably deploy to the BPEL domain called "default", but your customer might have created other domains as well (which you can find out by going to the BPEL Admin console discussed earlier).

Once the Hello World project has been deployed, you can use the BPEL Control to navigate to that process, which will bring you to the Initiate tab. There you can enter a value for the input and press the Post XML Message button. That should show you a result similar to the following:


Hello jan. The time is now 2007-11-30T11:54:37+01:00


Testing ESB Deployment

Now that the Hello World BPEL process has been deployed, you might want to test the deployment of an ESB service. What I did was create a new ESB project that I called HelloWorldESB and, using the ESB diagrammer, execute the following steps (unless specified otherwise I left everything at its default):
  • Create a directory on the application server, for example d:\input and put in there a plain text file called name.txt with in it one string (the name of your customer or whoever is watching every step you take, for example).
  • Make a copy of the name.txt file. You'll find out why.
  • From the Component Palette -> Adapter Services drag a File Adapter to the diagram and call it readName.
  • Click on the icon next to the WSDL File field and using the Adapter Configuration Wizard create an adapter that reads a file and polls the directory you just created for the file named name.txt with a frequency of 5 seconds. Define a schema for it using the Define Schema for Native Format button and create a new native format of file type Delimited and browse to the name.txt file to sample it. Finally enter a name for the record, e.g. "name" and finish the wizard.
  • Double click the routing service (named readName_rs), expand the Routing Rules and using the green plus button navigate to the Hello World BPEL process you deployed before until you reach and select the process operation.
  • Create a new transformation mapper file by pressing the button next to Transformation Map and connect the C1 field from readName.wsdl to the input field of the HelloWorld.wsdl.

After saving and closing the routing service, the ESB diagram will look like this:


You can now deploy it on the Integration Server by right-clicking the project, choosing Register with ESB, and choosing the Integration Server connection. After a successful registration it should show a window saying:

Registration of Services Successful

BPELSystem.default.HelloWorld.readName_RS created
BPELSystem.default.HelloWorld.readName created

If you navigate to the directory containing the name.txt file, you probably will find that it's gone. No panic, this indicates that the ESB service already is working, and will have read and processed the file!

You should be able to verify this by going to the ESB Control and clicking on the printed-paper-page-looking button at the top-right side (what were they smoking when creating that?). At my customer's site it showed the ESB instance alright. However, when replaying this at home I found out there is a problem with my ESB installation, as it failed to show instances with an ORA-00904 error. Too bad, I don't have the time to fix it. Anyway, let's be positive and assume that your case is similar to my customer's, so that clicking the instance will show you its flow.

You should also be able to go to the BPEL Control and verify that the ESB service actually called the HelloWorld service. Click on the most recent instance of that process and go to the Flow tab. That should show you the flow of the executed BPEL process. When you click on the replyOutput icon, it should pop up a page displaying a message similar to the one you saw earlier when testing the service, this time with the content of the name.txt file.

This is where I concluded at my customer's site that they had installed the SOA Suite alright. Haven't heard otherwise yet, so I assume my conclusion wasn't premature!

Wednesday, October 24, 2007

Demystifying Business Process vs Use Case Modeling

The other day I was part of an interesting discussion that started with the statement that there is a problem with OUM (Oracle Unified Method), or to be more precise with the Unified Process, being use case centric, while these days much development is based on business process modeling. The problem being that, because of this, people involved with business process modeling might think that OUM does not properly fit their needs.

I often hear people talk about use cases, and too often find out they actually do not know what a use case is. I'm convinced that anyone who knows both business process modeling and use case modeling would not say such a thing, as they would realize that a business process model is just another representation of the same thing.

Let me explain and convince you that OUM supports business process modeling quite well, and has done so for as long as it has existed.

Now, I'm not going to explain what use case modeling is all about, but rather point you to the white paper I wrote about that subject. However, what I probably do need to explain is that you can have use cases at different levels. My paper is based on OUM and the original work of Alistair Cockburn, who presents the following levels:



Before I continue I need to make the distinction between the conceptual notion "use case", being the interaction of an actor with a system to achieve a specific goal, and a "use case description", being a narrative description of that interaction.

The key point that I'm trying to make here is that a use case should be specified as a scenario. You might realize that at every level you can describe that scenario using a use case description, an activity diagram, or both, whatever suits your needs. And you might also be aware that at any level above the user-goal use cases, an activity diagram actually is a description of a business process at some level. So there you are ...

What some of you folks that are into the Business Process Modeling Notation (BPMN) might not be aware of, is that UML activity modeling and BPMN are just two different schema techniques for doing the same task, as is clearly explained in a white paper by Stephen A. White. OK, granted, some patterns are more effectively handled by BPMN (like the concept of an ad-hoc process to support the Interleaved Parallel Routing pattern), but that concerns minor details only.

So now that I have shown that a business process model and a use case description can describe the same thing, I hope you realize that the only thing you need to do to transform a business process model into a use case description is to create a narrative description out of it. But let me bring this academic discussion down to a practical level and discuss how to get from business process models to use cases. Otherwise, why bother, right?

Mind that narrating a business process results in a summary use case description. Many people, not being aware that there are different levels of use cases, will probably use the notion "use case" only to mean a user-goal level use case description, a user-goal use case being defined as one that, once finished, lets the primary actor go away happy. If you are not aware of this, it is likely you will have a hard time working with use cases. Just to warn you.

When going from business process models to use cases, you will be aiming at a model of user-goal use cases (and a couple of subfunction use cases going with that), as normally that should be the lowest level at which you capture requirements. Before you can do so, you need to make sure that the lowest-level business process models contain activities at the level of user-goal use cases only. If that is not the case, fix that first.

Once that has been done you need to take just one more step from there. How you do that is up to you of course, but you could create a use case description for each activity in that diagram that is a candidate for being implemented, and start detailing from there. Whenever useful, you can add a UML activity diagram to that. To prevent confusion, you'd probably better not use BPMN for that.

Thursday, September 27, 2007

How to Dump Subversive Revisions

Most things in life are easier to get rid of than an unwanted revision in Subversion. As Paul Simon sang, there are even 50 ways to leave your lover!

Now why is that so hard? It starts with the principle that revisions should build upon each other and, when necessary, can be reversed with another revision, so there should be no need to remove one. Subversion therefore is designed never to lose information. What this philosophy does not take into account is that sometimes people make mistakes, or even worse are clueless, and for example start to make changes and commit them to a branch instead of the trunk. Sometimes that happens not only once, but twice!

Of course you can argue that people should not make mistakes and the clueless should never be allowed to work with Subversion. Hey, I'm with you, but you need to start somewhere, and I found that many people find it hard to really understand how tagging, branching, switching and all works, and sometimes need considerable time to get there. So let's face it, shit happens and then we feel a need to dump that in the sewer.

If you ever need to do so, there are two ways. The first way is the official way, which works as follows. By the way, in my example I'm using TortoiseSVN as a client:
  • Make sure you are in the right trunk, tag or branch for which you want to undo some revision.
  • Using "Show Log" find the revisions you want to undo. You can multi-select revisions.
  • Right-click the selection and choose "Revert changes from this revision". This will revert all committed changes locally.
  • Commit those changes (make sure that this time you are committing to the right URL!)
  • If you switched before you started, don't forget to switch back to the HEAD of the trunk afterwards.
The second way is the "hard way", which consists of making a dump of the repository using the svnadmin dump command of the command line tool. When using that, you can make a dump of a specific range of revisions and leave out the revision(s) you don't want using the -r [lower rev[:upper rev]] switch. After that you create a new repository and load the dump into it using the svnadmin load command.
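
For example (a sketch only; the repository paths and revision numbers are made up), getting rid of an unwanted revision 101 at the tail of the history could look like this:

svnadmin dump /path/to/old-repos -r 1:100 > repos.dump
svnadmin create /path/to/new-repos
svnadmin load /path/to/new-repos < repos.dump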

There are a couple of issues with the hard way. First of all, unless you physically remove the folders of the old repository, the old repository will still be there, and people need to be aware that they need to go to a new repository instead. Of course you can work around that by first deleting the folder of the old repository, then creating a new one using the same name, and only after that loading the dump back again.

Secondly, the hard way as described provides no solution for the situation in which you want to remove a revision from some branch while in the meantime a new revision has been created in the trunk, because you cannot provide a set of ranges while dumping. That can be tweaked by making another dump containing the latest revisions and manually "merging" that with the other dump, but that can get complex.

Finally, the repository might have become a big mama and dumping and loading that can take considerable time.

So my conclusion: you need to have a damn good reason not to do it the official way. I'm therefore most interested in the rumor I've heard that they plan to introduce an svnadmin obliterate command. Hopefully that will allow for removal of a complete revision as well.

Tuesday, September 18, 2007

Business Rules as Usual

There will be some enhancements in the Oracle SOA Suite 11g regarding versioning of rules for Oracle Business Rules. Until then we will have to make do with versioning of dictionaries and repositories. This post addresses some best practices I have for versioning with Oracle Business Rules 10.1.3.x.

Versioning of Dictionaries

Versioning of dictionaries aims at rules administrators, who can be either business analysts or developers. Versioning of a dictionary is done by saving a particular dictionary using another version number. Sounds trivial, no? Think again. The dictionary will probably work in combination with some application calling the rule engine, providing it with a particular version number to work with. So if you want to change something that should be effective immediately, you should save the dictionary using that particular version number. But what if there is a problem and you want to revert to the previous version? Or would you not rather be able to test your changes first before disrupting production? In other words, you want to be able to use at least two different versions: one to be used for testing and one for real.

What I always have is two dictionaries, one for example with version number 2.1.0 for production and another one with number 2.1.x for testing purposes. I can always recognize the test version by the 'x' at the end. Which version the application is supposed to work with is read from a properties file that I can change on the fly. What I could also do is create some preference screen through which the rules administrator can select the version to be used during a particular session, the one from the properties file being the default.

As property files can be written, that screen could easily be enhanced to allow changing the default. In this way the rules administrator will have a way to change existing rules or add new ones and test them first, before bringing them into production. The latter can be done either by overwriting the 2.1.0 dictionary with the 2.1.x version, or by saving the 2.1.x dictionary as 2.1.2 and setting that as the default. The rules administrator will also have the option to activate particular rules for a specific period of time, by putting them in a specific version and setting that as the default only during that period.
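
To give an idea of the properties-file part of this approach, a minimal sketch could look as follows (the file name and property key are made up for illustration; this is plain Java, not the Rules SDK API):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class DictionaryVersion
{
  /**
   * Reads the dictionary version the application should use, so that
   * it can be changed on the fly without touching the application.
   */
  public static String getVersion() throws IOException
  {
    Properties props = new Properties();
    FileInputStream in = new FileInputStream("rules.properties");
    try
    {
      props.load(in);
    }
    finally
    {
      in.close();
    }
    // e.g. dictionary.version=2.1.0 for production, 2.1.x for testing
    return props.getProperty("dictionary.version", "2.1.0");
  }
}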

Versioning of Repositories

Versioning of repositories aims at developers. The prime reason you might want to version repositories is that during maintenance of the application the fact definitions may change. Fact definitions are dictionary specific, so why not version dictionaries instead? Of course you also use a particular dictionary to make the changes, but obviously you want to make sure you keep clear track of which version of a repository works with which version of the application. And of course you are using some proper source control system like Subversion to manage your configurations, right? Right! And as that system probably is file based, you want the dictionary to be in a file you can put under version control, obviously.

So what I do is, with every change (or set of changes) I make that has been properly tested, make an export of the rules repository and commit that to the repository of the version control system, together with the corresponding sources (XSDs or Java classes).

And business rules as usual.

Tuesday, September 11, 2007

Analyze This Business Rule!

Last week I attended a session in which the new-to-come SOA Suite 11g was being presented by Clemens Utschig-Utschig to Oracle Partners as well as internal employees.

Many great new things to come, like:
  • One integrated SOA Suite
    Regarding the versions up to 10.1.3, the SOA Suite consists of (at least) four different products (BPEL, ESB, BAM, OWSM) that happen to be on one installation CD and 'coincidentally' all get installed during one installation. With 11g this will have changed, from an architectural point of view as well as presentation-wise. For example, ESB will have become part of the infrastructure of the SOA Suite (taking care of routing requests to services), rather than a component on its own. And instead of four different consoles (that not only look like they have been created by independent teams, but probably are as well) there will be one integrated console.
  • Many enhancements regarding versioning, deployment and unit-testing of services.
  • Multiple BPEL processes in one project (and the possibility to drill down from one BPEL process to subprocesses)
Just to name a few.

One thing that I have not mentioned but for some reason want to point out in particular, is the fact that JDeveloper 11g will have the Rule Author of Oracle Business Rules integrated in the IDE. That's right folks, with 11g you will be able to connect to and maintain a rules repository using a Swing client rather than the current web client. Clemens did not demo that, so I can't tell you much about how that works, but at least it looked promising. Can't wait to get my hands on that!

But at the same time it raised the question of what this means for having a 'stand-alone' Rule Author aimed at business analysts, who (given the promise of rule engines and the agility they should bring to business rules) you would expect to be an important user group of rule engines. Not that I have seen that work in practice yet, but that is the promise. Will that disappear, meaning that we have concluded that we want business analysts to keep their hands off rules repositories, at least as far as Oracle Business Rules is concerned?

Fortunately, we seem to be working on a browser-based rule authoring tool as well, aimed at business analysts or any other user that for some reason you do not want to get started with JDeveloper. Of course, priorities, hurricanes or Bush eating a pretzel can change that any second, but I have not given up hope for business rules and agility yet!

Monday, September 10, 2007

Oracle Rule Author on Stand-Alone OC4J 10.1.3.x

For quite a while I could not use Rule Author anymore on my stand-alone OC4J, the reason being that I was not able to connect because of the error '[code=CANT_CONNECT_LOOPBACK] Cannot connect due to potential loopback problems'. I thought, as the loopback problems are only potential, why not give it a try and see how far we get. But my computer thought differently and refused to cooperate.

So I Googled this error and found quite a few links, none of them useful for resolving my issue. As issues like this can easily take a lot of time, and as I had the SOA Suite with Rule Author already running in a virtual machine, I decided to leave it as it was and use that instead. That was, until my virtual machines started to freeze every now and then, especially in the heat of the moment. You can run, but you cannot hide, can you?

For some stupid reason it never occurred to me to search for the loopback problem on OTN, probably because I never associated this error message with Oracle software. As such that assumption appeared to be correct, but a colleague of mine did so nevertheless and found this topic on one of the forums. Boy, how silly I felt when I learned that it was related to the proxy exception list of my web browser! I must have checked that at some stage, but probably at the wrong moment. Arrgggh!

Anyway, as I can now connect to my stand-alone OC4J again, I thought it would be nice to be able to avoid crashing virtual machines and all that. So I redeployed Rule Author like I was used to, that is using the ruleauthor.ear and rulehelp.ear files from my SOA Suite installation, only to discover that this did not work at all. Right. So, being a clever guy, I searched OTN to see how to deploy Rule Author on a stand-alone OC4J, only to find ... nothing! But no panic, Google is still in the air, and it helped me to a topic on the IT-eye blog that offered me the missing pieces.

The difference with my situation is that I didn't want to install it on the embedded OC4J that comes with JDeveloper, as I want to be able to upgrade JDeveloper without needing to redeploy Rule Author again. So the following instructions are somewhat different from those of the IT-eye version:

Prerequisites:
  • ruleauthor_s.ear and rulehelp_s.ear
  • rl.jar, rulesdk.jar, webdavrc.jar, jr_dav.jar
The ruleauthor_s.ear and rulehelp_s.ear I got from the /rules folder of the SOA Companion CD 2, the jar files can be found in the [JDEV_HOME]/integration/lib folder of that same CD.

Steps to deploy (all paths relative to /j2ee/home):
  1. Using the Application Server Control, (re)deploy both the Rule Author and the help file with that.
  2. Create a /rules/lib folder and copy the jar files to that
  3. Configure a rules library in the /config/server.xml file of your OC4J instance, by adding the following code snippet to the shared libraries already configured:
    <shared-library name="oracle.rules" version="10.0" 
    compatible="true">
    <code-source path="../rules/lib/"/>
    <import-shared-library name="oracle.http.client"/>
    <import-shared-library name="oracle.xml"/>
    </shared-library>
  4. Add the oracle.rules library just created to the default set of shared libraries available, by adding the following line to the /config/system-application.xml file:
    <imported-shared-libraries>
    ...
    <import-shared-library name="oracle.rules"/>
    </imported-shared-libraries>
  5. (Re)start your OC4J instance
Rule Author should now be up and running!

By the way, did I already tell you that I was able to fix the problem with my virtual machines by upgrading to the latest Workstation 5 version? No? OK, I was able to fix the problem with my virtual machines by upgrading to the latest Workstation 5 version. I hope.

Thursday, September 06, 2007

The End Is Near ...

... of our customers needing to wait for the Oracle Unified Method (OUM for short)!

Last week OUM Release 4.4.0 was made available internally, and I happen to know for a fact that this version will become available to customers. How exactly is yet to be determined officially, but it has already been packaged to be released, so that should not take very long.

And as you have been waiting for more than a year for this, you can bear another few weeks, wouldn't you say?

Thursday, August 23, 2007

White Paper Business Rules in ADF Business Components Revamped!

Finally, the white paper Business Rules in ADF Business Components has been revamped!

As usual it took much more work than anticipated, especially as I made the 'mistake' of asking two subject matter experts (Steve Muench and Sandra Muller from the JHeadstart Team) to review the first draft. If I had not done that, the paper could have been published a month ago! But no, I could not help myself, I had to be thorough, I had to be me, and of course they provided me with insights that have had a significant impact on its contents. But all for the better, otherwise some of it would already have been obsolete the minute it hit the street.

"So what?", those of you who have not seen it before, might ask yourself. Well let me try to explain without copying too much of what already is explained in the paper itself.

First of all, it is our (Oracle Consulting's) experience that analyzing and implementing business rules takes a significant part of the total effort of creating the average application. Although it is a very productive persistence framework, this still holds true for ADF Business Components (which is part of Oracle's ADF), or ADF BC for short. Moreover, despite all our efforts to create the ultimate application, most resources are still being put into maintenance rather than into creating the original. So there is a lot to be gained for business rules in this regard, and that is what the white paper intends to address.

Does the paper present 'the right way' for ADF BC? Well, perhaps there is a better way, I don't know. But it is a good way, as it is consistent and makes use of the best that ADF BC has to offer. And because it is consistent (and well documented), maintenance also becomes easier: when the developers that created the original application used it (and then ran away like lightning to build the next hip thing), they will not have left behind a maintenance nightmare. At least not where the business rules are concerned.

"So, what's new?", those of you who have sleep with the previous version of the paper under their pillow, might ask yourself.

Let me give you this short list and for the rest of it refer you to the paper itself:
  • Capturing business rules using UML class model
    Why? Because plenty of people still want to capture requirements (including business rules) before there are tables and entity objects and therefore cannot make use of an ADF Business Components diagram (see also the article UML Rules! I posted some time ago).
  • Setting up framework extension classes
    Doing so makes introducing generic functionality later on so much easier and therefore is strongly advised in general.
  • Deprecated custom authorization in ADF BC
    This in particular concerns the horizontal authorization rules (restricting the set of rows of an entity you are allowed to insert, update or delete). The reason for doing so is that you would probably rather use Virtual Private Database for that.
  • 'Other Attribute Rules' dropped
    As I discussed in the article How to Pimp ADF BC Exception Handling, there are no compelling arguments anymore for not using built-in validators or method validators, which means that the category Other Attribute Rules could be dropped; we now suggest implementing these rules using such validators.
  • New built-in attribute validators
    ADF BC provides the Length, and Regular Expression validators for attributes.
  • 'Other Instance Rules' dropped, 'Delete Rules' added
    The category Other Instance Rules is dropped for the same reason the Other Attribute Rules category has been dropped, with the exception of rules that make use of the delete() method, which are now in the new category 'Delete Rules'.
  • Registered Rules
    Using Registered Rules you can create generic method validators that you can use for multiple entities. An example is a recurring validation of an end date that must be on or after a begin date.
  • UniqueKey Validator
    Compared with just checking 'Primary Key' for all primary key attributes, this one and only entity-level built-in validator helps to make validation of the primary key predictable and consistent, and also supports providing a user-friendlier error message.
  • 'Change History' and 'Cascade Delete' added
    These two categories are subcategories of Change Event Rules with DML. When using JAAS/JAZN ADF BC offers built-in support for recording date/user created/updated information, which has been documented in the Change History category. As a result of introducing this category, the 'Derivation' category has been renamed to 'Other Derivation'. Furthermore, ADF BC also supports Cascade Delete by defining an association as being a 'composition'.
  • Sending an email using JavaMail API
    The previous version of the paper here and there referred to some 'clex' library which was part of the now long-gone Oracle9iAS MVC Framework for J2EE. If you remember that, you really are an Oracle veteran! Anyway, the classic example of a 'Change Event Rule without DML' is sending an email when somebody changes something in the database; that example is now based on the JavaMail API.
  • Message handling
    Especially this subject has been revised significantly. Among other things you can specify an error message together with the validator, which message will end up in an entity-specific message bundle. By creating custom exceptions you can also use one single message bundle, as I explained in the article How to Pimp ADF BC Exception Handling.
Do you need more in order to get you to download the new white paper? I can hardly imagine.

Friday, August 03, 2007

How to Pimp ADF BC Exception Handling

When you have been doing things in a particular way for a long time, you sometimes find that in the meantime your way has become the slow way. Not necessarily the wrong way, as it works (otherwise you would not have kept doing it all the time, would you?), but you're just not fashionable anymore. Like wearing tight pants in the 60's or wide pants in the 80's. Or using vi instead of a state-of-the-art IDE like JDeveloper or Eclipse for that matter (oh yes, I dare!).

I recently discovered that I had become unfashionable by using the setAttributeXXX() and validateEntity() methods for implementing business rules in ADF BC instead of using Validators. Not that I didn't know of Validators, I just thought that they would not give me proper control over exception and message handling. One of the things I would like to have is one single message bundle in which I can use consistent coding of my messages, like APP-00001, APP-00002, etc. Even more important, the other thing I would like is to be able to translate all messages to Kalaallisut the minute I need to make my application available to the Inuit people of Greenland.

As you might know, up to JDeveloper 10.1.3 ADF BC creates an entity-specific message bundle to store the messages you provide with Validators. So with many entity objects, big chance you end up with many message bundles, which makes keeping error codes consistent a nightmare. And what about all the separate files you need to translate! You might find yourself no longer able to see the messages for the bundles.

But as Steve Muench was very persistent in trying to convince me to use Validators, I finally gave in and tried finding a way to tackle my problem, and succeeded! No worries, don't expect rocket science from me. I hate complex or obscure code, as building maintainable information systems is already hard enough as it is, and therefore always try to practice Simple Design.

I assume that you have created a layer of framework extension classes as described in section 2.5 of the ADF Developers Guide and that your entity base class is called MyAppEntityImpl. If you have not yet created such a layer, do that first and come back after you finished. Chop chop!

Basically the steps are as follows:
  • Create extension classes that extend the exceptions that ADF BC will throw when validation fails
  • Add a constructor that passes your message bundle in the call to super()
  • Override the setAttributeInternal() and validateEntity() methods in the MyAppEntityImpl.java entity base class and make them throw your exceptions instead of the default ones.
Does that sound simple or what? No? OK, let me show you how I did it.

Attribute-level Validators will throw the AttrValException. So what I did was create a MyAppAttrValException as follows:

package myapp.model.exception;

import oracle.jbo.AttrValException;
import myapp.model.ResourceBundle;

public class MyAppAttrValException extends AttrValException
{
  public MyAppAttrValException(String errorCode, Object[] params)
  {
    super(ResourceBundle.class, errorCode, params);
  }

  /**
   * When the message contains a colon, return the message
   * starting with the position after the colon, limiting the
   * text to the message text from the resource bundle.
   *
   * @return the stripped message
   */
  public String getMessage()
  {
    String message = super.getMessage();
    // strip off the product code and error code
    int colon = message.indexOf(":");
    if (colon > 0)
    {
      message = message.substring(colon + 2);
    }
    return message;
  }
}

And this is how I override the setAttributeInternal in the MyAppEntityImpl:

/**
* Overrides the setAttributeInternal of the superclass in order to
* pass in a custom message bundle to MyAppAttrValException subclass
* of the AttrValException. To be able to uniquely identify the
* entries in the message bundle, the error code is extended with the
* fully qualified class name of the Impl.
*/
protected void setAttributeInternal(int index, Object val)
{
  try
  {
    super.setAttributeInternal(index, val);
  }
  catch (AttrValException e)
  {
    String errorCode = new StringBuffer(getClass().getName())
                           .append(".")
                           .append(e.getErrorCode())
                           .toString();
    throw new MyAppAttrValException(errorCode, e.getErrorParameters());
  }
}

In a similar way you can handle entity-instance-level Validators and Method Validators by extending the ValidationException and overriding the validateEntity() method to throw your MyAppValidationException.
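
A minimal sketch of that override (assuming you created a MyAppValidationException analogous to the MyAppAttrValException above) could look like this:

/**
 * Overrides the validateEntity of the superclass in order to pass in
 * a custom message bundle to the MyAppValidationException subclass of
 * the ValidationException, analogous to setAttributeInternal() above.
 */
protected void validateEntity()
{
  try
  {
    super.validateEntity();
  }
  catch (ValidationException e)
  {
    String errorCode = new StringBuffer(getClass().getName())
                           .append(".")
                           .append(e.getErrorCode())
                           .toString();
    throw new MyAppValidationException(errorCode, e.getErrorParameters());
  }
}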

As said, the messages you provide when creating built-in Validators and Method Validators end up in an entity-specific message bundle. I've been told that this is going to change in JDeveloper 11g, but currently there is no stopping ADF BC from doing that.

So, at a convenient point in time (for example when most of your Validators have been implemented and tested), what you need to do is copy the error messages from the entity-specific message bundles to your custom message bundle.

To prevent duplicates in the keys of the custom message bundle, you should extend the key of each message with the fully qualified class name of the Impl.java file of the entity object it is coming from. Otherwise duplicates might occur whenever you have two different entity objects, both with an attribute with the same name and a built-in Validator specified for them. The overridden setAttributeInternal() method assumes you did.

The entries in the message bundle would then look similar to this:

{ "myapp.model.adfbc.businessobject.EmployeeImpl.Salary_Rule_0",
"APP-00001 Employee's Salary may not be below zero" },
...


Well that wasn't rocket science, was it?

Friday, July 13, 2007

We Are Proud of Borg

In case you wonder why I haven't been writing much lately, my wife just got out of the hospital with a so-called Ilizarov frame, with the purpose of trying to fix arthritis in her ankle (caused by an ill-treated complicated fracture many, many years ago).

Believe me, anyone who has seen her was impressed! She looks like an unfinished Robocop, a kind of Transformer that just got out of its egg. When she wants something from you, you just know that resistance is futile. Every time she walks I have a hard time suppressing the urge to make 'uhnk-chuck uhnk-chuck' sounds. Fortunately she doesn't read my blog (I hope) and if she does, I'm just too fast for her as long as I don't let myself get cornered.

What also is impressive is the amount of time you need to spend in the beginning to take care of some one in her condition. People with children might remember the first couple of weeks, how chaotic things can be and you trying to get a grip on the situation and finding a modus operandi to get you through the day without going wacko. Picture that, plus a full-time job and the Dutch health care system of today and you cannot help feeling sorry for me. Fortunately there are plenty of people around me willing to help out, so forgive me for making things look worse than they are. I needed an excuse for not writing that much and thought a little exaggeration could do some good here.

But what is most impressive is the courage of my wife, to go through the operation knowing what she knows about the frame, and that she will have to suffer it for three months. That really makes me proud!

Wednesday, July 04, 2007

How to Prevent Your Rule Gets Fired in ADF BC?

When using Oracle ADF (Application Development Framework), implementing data-related business rules in the persistence layer is good practice. In ADF this business layer is called ADF Business Components (aka BC4J, or Business Components for Java). This article will go briefly into this subject, just enough to give you an appetite for the upcoming revised white paper Business Rules in ADF BC. So don't eat too much of this appetizer, to leave room for the main course to come!

Implementing business rules in ADF Business Components means implementing them in so-called entity objects. For an application that uses a relational database, to a certain extent you can compare an entity object with an EJB entity bean: like an entity bean, the purpose of the entity object is to manage storage of data in a specific table. Unlike EJB entity beans, ADF entity objects provide hooks for implementing business rules that go way beyond what you can do with EJB entity beans.

I won't go into detail about these hooks. If you're interested, enough documentation about the subject can be found on the internet (to begin with Steve Muench's web log), or read the white paper! I will let you know when it is available and where to find it.

Where I do want to go into detail is one aspect: how to prevent rules from getting fired unnecessarily, for example for reasons of performance. I assume some basic knowledge of ADF Business Components, so if you don't have that, this is where you might want to stop reading.

There are the following typical options, provided as methods on any EntityImpl (see the sketch after this list):
  • The isAttributeChanged() method can be used to check if the value of an attribute has actually changed, before firing a rule that only makes sense when this is the case.
  • Furthermore there is the getEntityState() method that can be used to check the status of an entity object in the current transaction, which can either be new, changed since it has been queried, or deleted. You can use this method to make sure a rule only gets fired when, for example, the entity object is new.
  • There is also the getPostState() method, which does a similar thing as getEntityState() but takes into consideration whether the change has been posted to the database.
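
For example, a guard on a rule could look like this (a minimal sketch; the EMAIL attribute constant and the brCheckEmailFormat() rule method are made up for illustration):

protected void validateEntity()
{
  super.validateEntity();
  // only fire the rule when the row has been modified and the
  // (hypothetical) Email attribute actually changed
  if (getEntityState() == STATUS_MODIFIED && isAttributeChanged(EMAIL))
  {
    brCheckEmailFormat(); // hypothetical rule method
  }
}
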
Sometimes knowing whether or not something has changed does not suffice, for example because you need to compare the old value of an attribute with the new one. That typically happens in the case of status attributes with restrictions on the state changes. Normally you would be able to do so using the getPostedAttribute() method, which returns the original value of an attribute as read from or posted to the database. However, that won't work when you are using the beforeCommit() method to fire your rule, as at that time the change has already been posted, so the value getPostedAttribute() returns will not differ from what the getter returns.

"So what", you might think, "when do I ever want to use the beforeCommit()?". Well, you have to as soon as you are dealing with a rule that concerns two or more entity objects that could trigger the rule, as in that case the beforeCommit() is the only hook of which you can be sure that all changes made are reflected by the entity objects involved.

Suppose you have a Project with ProjectAssignments and you want to make sure that the begin and end date of the ProjectAssignments fall within the begin and end date of the Project. A classical example, I dare say. Now the events that could fire this rule are the creation of a ProjectAssignment, the update of the Project its start or end date or the update of the ProjectAssignment its start or end date.

Regarding the creation of the ProjectAssignment, you can verify that by using getEntityState(), which would return STATUS_NEW in that case. Regarding the change of any of the dates, you can check for STATUS_MODIFIED, but that is also the state when any of the other attributes have been changed.

Now suppose that, as there can be multiple ProjectAssignments for one Project, you only want to validate this rule when one of those dates actually did change. The only way to know is by comparing the old values with the new ones. As explained before, because the hook you are using will be the beforeCommit() method, getPostedAttribute() will also return the new value, as at that time the changes have already been posted to the database.

Bugger, what now? Well, I wouldn't have raised the question unless I had some solution to it, would I? The solution involves a bit of coding, yet it is pretty straightforward. I will only show how to solve this for the Project; for the ProjectAssignment the problem can be solved likewise. The whole idea behind the work-around is that you use instance variables to store the old values so that they are still available in the beforeCommit().

Open the ProjectImpl.java and add the following private instance variables:

private Date startDateOldValue;
private Date endDateOldValue;

In general you can use the convention of calling this specific type of custom instance variable [attribute name]OldValue, to clearly reflect its purpose. Now you need to make sure that these variables are initialized at the proper moment. This will not be in the validateEntity(), as that can fire more than once with unpredictable results. No, it should be in the doDML() of the ProjectImpl, as follows:

protected void doDML(int operation, TransactionEvent e)
{
  // store old values to be able to compare them with
  // new ones in beforeCommit()
  startDateOldValue = (Date)getPostedAttribute(STARTDATE);
  endDateOldValue = (Date)getPostedAttribute(ENDDATE);

  super.doDML(operation, e);
}

Now in the beforeCommit() you can compare startDateOldValue with what is returned by getStartDate(), and so on. Although it might not look like it at first sight, this might be the trickiest part, as you need to deal with null values as well. In JHeadstart the following convenience method has been created to tackle this, in the oracle.jheadstart.model.adfbc.AdfbcUtils class:

public static boolean valuesAreDifferent(Object firstValue
              , Object secondValue)
{
  boolean returnValue = false;
  if ((firstValue == null) || (secondValue == null))
  {
    if (( (firstValue == null) && !(secondValue == null))
        ||
        (!(firstValue == null) && (secondValue == null)))
    {
      returnValue = true;
    }
  }
  else
  {
    if (!(firstValue.equals(secondValue)))
    {
      returnValue = true;
    }
  }
  return returnValue;
}

This method is being used in the beforeCommit() of the ProjectImpl, as follows:

public void beforeCommit(TransactionEvent p0)
{
  if ( getEntityState() == STATUS_MODIFIED &&
       ( AdfbcUtils.valuesAreDifferent(startDateOldValue,
              getStartDate()) ||
         AdfbcUtils.valuesAreDifferent(endDateOldValue,
              getEndDate())
       )
    )
  {
    brProjStartEndDate();
  }
  super.beforeCommit(p0);
}

Finally, the actual rule has been implemented as the brProjStartEndDate() method on the ProjectImpl as follows:

public void brProjStartEndDate()
{
  RowIterator projAssignSet = getProjectAssignments();
  ProjectAssignmentImpl projAssign;
  while (projAssignSet.hasNext())
  {
    projAssign =
      (ProjectAssignmentImpl)projAssignSet.next();
    // for brevity only the start date of the assignment is checked
    // here; its end date can be validated in the same way
    if (((getEndDate() == null) ||
         (projAssign.getStartDate().
              compareTo(getEndDate()) <= 0)
        ) &&
         (projAssign.getStartDate().
              compareTo(getStartDate()) >= 0)
       )
    {
      // rule is true
    }
    else
    {
      throw new JboException("Start date of a Project Assignment " +
        "must fall within the start and end date of the Project");
    }
  }
}
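
For completeness, a minimal sketch of how the "likewise" solution could look on the ProjectAssignment side. Mind that getProject() is assumed to be the generated association accessor, getEndDate() is assumed to exist on the ProjectAssignmentImpl, and startDateOldValue and endDateOldValue are instance variables populated in its doDML(), just like in the ProjectImpl:

public void beforeCommit(TransactionEvent p0)
{
  // fire the rule for new assignments, or when one of its dates changed
  if ( getEntityState() == STATUS_NEW ||
       ( getEntityState() == STATUS_MODIFIED &&
         ( AdfbcUtils.valuesAreDifferent(startDateOldValue,
                getStartDate()) ||
           AdfbcUtils.valuesAreDifferent(endDateOldValue,
                getEndDate())
         )
       )
     )
  {
    ((ProjectImpl)getProject()).brProjStartEndDate();
  }
  super.beforeCommit(p0);
}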

Well, that wasn't too difficult, was it?

Thursday, June 21, 2007

Did You Find Your Balance With Java?

When I was young I practised judo for a couple of years. Got myself a brown belt and nothing after that, as I wasn't the competitive kind of guy. I could have managed to get the black belt on technique alone (by doing so-called katas), but that would have required a lot of practice and time, which I did not have. One of the reasons being that at the same time I also practised jujutsu (got myself a green belt for that).

When I went to university I visited a different dojo, again only a couple of times because of lack of time, but long enough to understand what the sensei in the previous dojo had failed to teach me: proper balance. Imagine that during a randori (that is, sparring) you jump around too much, giving your opponent plenty of opportunities to show you every corner of the dojo. Well, that was me.

I have not been doing martial arts for many years now, so I was way beyond any frustration about this. That was until the recent J-Spring of the NL-JUG (the spring conference of the Dutch Java User Group), where I learned about the JavaBlackBelt community. As described on the home page of their web site: "JavaBlackBelt is a community for Java and related technologies certifications. Everybody is welcome to take existing exams and build new ones." I couldn't help it, I had to check it out. Am I still jumping around, or have I found my balance with Java?

Well, I expect it will be a long time before I know, as again I don't have much time. But I managed to get at least a yellow belt! So for now I find some comfort in knowing that at least I'm beyond the level of Java newbie.

Friday, June 15, 2007

Cooking with OUM

The other day I had a really challenging discussion about how to apply OUM (Oracle Unified Method) / UP (Unified Process) to a running project. As with all projects, it had to be decided what needed to be done to deliver a system, no surprise there. The challenging part was, first of all, that we tried to fit an incremental approach into a waterfall-like offering. Secondly, the approach had to be supported by people that had no previous exposure to OUM or UP. So how can you bring that to a success without boiling people's brains?

Let me start with the good news, which is that we reached a point that people have an idea about what they are going to do, and made that plan themselves rather than me telling them what to do. That is exactly how you want things to be in the end, so what more do you want? I should be happy!

The bad news is that something's nagging at the back of my head, being that I made a couple of mistakes along the road, resulting in me not having the feeling the message was understood properly. But as a famous Dutch philosopher once said: "every disadvantage 'as its advantage", in this case meaning that me feeling miserable is not your problem, and we all might learn from it.

In retrospect I think the one and only real mistake was skipping the part where you explain the principles behind OUM and UP, and being so naive as to think I could do that along the road. OK, I jumped on a running train and the approach needed to be there yesterday, so maybe I'm excused, but boy, what a mistake that was, as it made discussions so difficult. But let's skip the whining part and let me explain what I would do differently next time.

First of all I would explain very clearly the difference between a task and a deliverable. When that is clear, I would be ready to explain how iterations work in OUM. Pretty important when you need to explain how exactly you do high-risk activities first (a principle that I luckily did not forget to explain up-front).

Unfortunately, I presented all tasks using the name of what OUM calls 'work products', giving the impression that every task results in a deliverable that actually is going to be presented as such to the customer. As in my proposal there were quite a few tasks to do, it looked like I proposed creating a pile of deliverables, scaring the hell out of people.

In OUM the term deliverable is reserved for a "product" that we actually present to the customer and that you likely will find in the project proposal or project plan. In order to get to that deliverable, you sometimes do multiple tasks, like creating an initial version of some document, reviewing that with the customer, in a next phase adding details using some diagrams, reviewing it again, etc., before you finally come up with one concrete deliverable.

Every (intermediate) task results in a "work product" that might or might not be presented to the customer as a deliverable. The important thing to notice is that, although the tasks might have different names and associated codes, you actually work on multiple iterations of the same deliverable.

Below is an example of how the Use Case Model is created in three iterations (I left out the review tasks between the iterations). The rounded boxes are the tasks, the iterations of the work products are the square ones. By drawing a box around the three Use Case Model iterations I express that these will be combined into one single deliverable.



How many work products you plan to combine into one deliverable is up to you. Three words of advice, though.

First, try to prevent multiple people from working on the same document at the same time, because that unnecessarily introduces the risk of people having to wait for each other. When you anticipate this might happen, that's a strong contra-indication for combining work products.

For this reason, in the example the Use Case Realization has not been combined with the Use Case Model, as in this case the assumption is that the Use Case Realization will be worked out by a Designer, while the Use Case Model has been worked out by a (Business) Analyst, both having different responsibilities.

Secondly, do not iterate the same work product over a long period of time, because you might get the feeling it never finishes. You definitely should not let deliverables iterate across phases that need to be signed off. Not many customers want to put their signature on a document in a state of "draft". I always try to avoid this kind of sign-off way of working, as it can kill agility, but when there is no other option, be aware that after signing off a deliverable can normally only be changed through a formal change request procedure (and that takes valuable time).

In the example, this is one of the reasons the MoSCoW-list has not been combined with the Use Case Model, as the assumption is that the MoSCoW-list has been used to define the scope and priorities of high-level requirements, providing the baseline for the activities during the Elaboration phase.

Finally, wherever you combine work products, keep track of the link to the tasks that create them, and make explicit in what iteration the deliverable is, for the following reasons:

  • When keeping track of the original tasks you can make use of the work breakdown structure the method offers, supporting estimating and progress tracking (for some silly reason some managers like to know how far you are from time to time).
  • Knowing what task you are performing facilitates using templates and guidance the method offers for that task.
  • It allows people outside the project to understand what you're trying to achieve with a specific work product, enabling them to take the right perspective when reviewing it. Otherwise there is a big risk of them expecting the wrong thing and not talking the same language. I have seen this turn nasty a couple of times when the customer brought in their own experts, believe me.
  • Whenever you want to know what a specific work product should look like, you can ask questions like: "does anyone have a good example of an RD.011 High-level Business Process Model for me?" and surprise everyone around you. You might even get what you asked for, rather than some document that you have to study for an hour or so only to conclude it's not what you need. Have you been there, done that?
These are benefits you start to profit from right here, right now. But what about next time? Would it not be nice if you could reuse your deliverables as an example for one of your next projects? Or even better, would you not be proud when one of your deliverables ends up in the company library of outstanding examples, giving you the feeling you have achieved all there is to achieve, so you can quit your job, sell your house and go live like a king in France, as we Dutchmen say?

Hmmm... Suddenly I start to understand why sometimes it is so hard to get good sample deliverables in some organizations. It may be because of some company policy forbidding such a library.

Tuesday, June 05, 2007

UML White Papers Released

Remember the UML white papers I talked about updating? Well, they got published today!

So when you are interested in getting started with:
  • UML use case modeling
  • UML class modeling
  • UML activity modeling
go to the JDeveloper Technical Papers section on OTN and download them from there. I expect the average paper to take no more than one to two hours to read and digest. Happy reading!

Friday, June 01, 2007

Recipe for Stress

When capturing specifications, it is always a challenge to weigh effort against risk. The more you specify, the less risk there is of missing requirements. At least that is what you hope. Unfortunately, experience shows that no matter how hard you try, you never get them all. So where do you stop? When is it time to go home and grab a beer?

The other day I reviewed some use cases. Some of them were worked out in considerable detail. There even was a log-in use case with a (conceptual) screen layout. But as this was only the start of the project, no supplementary requirements had been defined yet. As an answer to the question of how detailed one should get while capturing requirements, I provided the following example based on the log-in use case (which, by the way, is a subfunction rather than a user goal).

Your customer might agree that you can limit the functional requirements for a log-in use case to "the user can log in by providing either a user name and a password, or a customer number and a password", and not ask for a screen layout, as for most people that is pretty straightforward.

However, at the same time supplementary requirements to this use case might be something like:
  • At a minimum passwords must be changed every 2 months
  • Passwords must be at least 8 characters in length, and a mixture of alphabetic and non-alphabetic characters
  • Passwords may not be reused within 5 password changes.

Notice how the functional requirement is very brief: only one short sentence. The supplementary requirements, however, are and should be pretty specific, as it is often far from trivial how to implement them (see the sketch below for just the password-format rule).
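
To give a feeling for that: here is a purely hypothetical sketch of just the password-format rule (at least 8 characters, mixing alphabetic and non-alphabetic characters). The expiry and reuse rules would in addition require storing password history and change dates somewhere.

// hypothetical check for the password-format requirement only
public static boolean isValidPassword(String password)
{
  if (password == null || password.length() < 8)
  {
    return false;
  }
  boolean hasAlphabetic = false;
  boolean hasNonAlphabetic = false;
  for (int i = 0; i < password.length(); i++)
  {
    if (Character.isLetter(password.charAt(i)))
    {
      hasAlphabetic = true;
    }
    else
    {
      hasNonAlphabetic = true;
    }
  }
  return hasAlphabetic && hasNonAlphabetic;
}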

Thinking about something as "simple" as securing passwords might also trigger the users to define other security requirements, like securing web services. When mapping security requirements to an initial technical architecture, you might decide to use Oracle Web Services Manager. That might imply the need for some extra setup and expertise. And it all started with a simple screen with only two items on it!

So what do we learn from this?

Failing to recognize supplementary requirements in the beginning is a good recipe for a lot of stress later on. That is nice to know for the masochistic project managers out there. For those who are not like that: you had better make sure you have discussed the supplementary requirements with your customer in sufficient detail to be able to set a baseline that is at least a little bit reliable.

Not discussing supplementary requirements and thinking it will just blow over is a mistake. Most customers have only a vague idea, if any, and will expect us to tell them what their supplementary requirements should be. Failing to do so might jeopardize your relationship with your customer.

Friday, May 25, 2007

UML rules!

Currently I'm updating the white paper Business Rules in ADF BC. Doing so, it occurred to me that there are still many questions to answer about what to do when you want to record business rules in UML. With this posting I will share some of my ideas. JDeveloper 10.1.3 will be my tool and ADF BC4J the targeted persistence layer, but I expect the ideas to be more generic and also applicable to EJB 3.0 and POJOs, for that matter.

In "the old days" when we were young and still using the function hieararchy diagrammer, business rules were either an intrinsic part of the entity relationship diagram or recorded in function descriptions. As this sometimes resulted in the same business rule being recorded more than once (in different functions) and too often in an inconsistent way, we found it to be a neat idea to record them explicitly, that is as functions of their own, and link them to the functions they applied to. This has been a very succesful approach for quite a few years.

At the same time UML arrived as the modeling language for object-oriented analysis and design, and more recently BPMN for business process modeling. As I have never come upon any information system for which there were no business rules whatsoever, you can imagine that it was quite a shock for me to discover that there was no similar thing in place for UML at that time. Of course, a UML class diagram as such allows for expressing more business rules than an entity relationship diagram, but that covers only part of it, doesn't it? As far as the rest is concerned, we were kind of back to square one, meaning that we had to record business rules as part of use cases, for example. The BPMN specification even explicitly excludes business rules from its scope, so what do you do when using BPM?

Some improvements have arrived since the beginning of UML. At some point (not sure since when) constraints have been added (as a stereotype). Also, OCL has emerged as a formal language to capture business rules. You can use OCL to record a business rule in a constraint. I don't know about you, but I have yet to meet the first person that speaks OCL, let alone the kind of persons we call customers. So in most cases I expect constraints to be recorded in some natural language.

Now let's assume you are using UML to capture functional requirements. Business rules can be found during any stage, for example while doing use case modeling. When starting to create use cases it makes perfect sense to record rules as part of that. Use cases have some properties that can be used for recording business rules, being pre-conditions and post-conditions. Pre-conditions must be met before the use case can be executed. A typical example would be "the user must be logged on", but it could also be something like "the user must be a manager to view detailed employee information".

Post-conditions can be divided into minimal guarantees and success guarantees. Minimal guarantees indicate what must hold true when none of the scenarios of the use case has finished successfully. A success guarantee describes a successful execution of the use case. Examples of success guarantees are:
  • The employee information has been recorded successfully
  • A change of the employee salary must have been logged
  • When the customer has boarded a flight 15 times or more, their frequent flyer status must be set to silver elite.
Then there are so-called invariants, being conditions that should always hold true (before, during and after execution of any use case). Examples would be:
  • End date must be on or after begin date
  • There cannot be a non-vegetarian dish in a vegetarian dinner.
Your use case template might not have a property called invariants. But hey, it's your use case, so why not add it?
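
To make the link with implementation (see also the ADF BC work-around discussed earlier in this blog): a minimal sketch, with made-up class and attribute names, of how an invariant like "End date must be on or after begin date" could eventually end up in the validateEntity() of an entity object:

protected void validateEntity()
{
  super.validateEntity();
  // invariant: end date must be on or after begin date
  // (getBeginDate() and getEndDate() are made-up attribute getters)
  if (getBeginDate() != null && getEndDate() != null &&
      getEndDate().compareTo(getBeginDate()) < 0)
  {
    throw new JboException("End date must be on or after begin date");
  }
}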

By now you might wonder how to deal with the problem I pointed out at the beginning, being that you capture the same business rule more than once for different use cases. You might also find yourself recording business rules that relate to each other, or even contradict each other. Don't worry too much about that in the beginning (using the Unified Process, that would be while doing Requirements Modeling). Just record the business rules when and where you find them. You would only risk confusing the customer if you started recording business rules as artifacts of their own at this stage.

During some next iteration you detail the use cases. By that time you might have a class model or will start to create one (using the Unified Process, that would be during Requirements Analysis). At this point it makes sense to pull business rules out of the use cases and into constraints. Doing so, you are able to remove duplications and inconsistencies.

If your tool supports publishing use cases together with the artifacts linked to them (like classes and constraints), you might consider removing the business rules from the use cases themselves to prevent duplication and inconsistencies.

Using JDeveloper you normally add constraints to UML class diagrams and link them to the classes they relate to. But you can also include them in use case diagrams and link them to use cases. Finally, you can link related business rules together, which is convenient when you want to decompose business rules.

Having done all this you are not only able to track what rules apply to what classes but also to what use cases. Handy for example for testing purposes.

Finally, you can include the same constraints in (for example) ADF Business Components diagrams, link them to entity objects, classify them, or whatever fancy stuff you like to do. Check out the new version of Business Rules in ADF BC for ideas about classifying business rules when using BC4J. I will let you know when it gets published.

Got to stop here, folks. A long and sunny weekend just knocked at my door, and I intend to open...

Wednesday, May 16, 2007

Yet Another Useless Teaser

Yes! Today the Oracle Unified Method (OUM) Release 4.3.0 has been announced!

Ermmm. That was an internal announcement, as we do not yet have it available for customers. Me and my big mouth! Well, now that you know, I should say some more. There is no way back, is there?

To begin with I can tell you that it will not take long before OUM will be made available to customers, as the conditions for that are being discussed almost as we speak. I cannot tell you the exact details yet, as otherwise I would have to kill you. Also, I do not know the details myself so I have to wait as well. So now you still know nothing, do you?

Well, I did write about OUM before, so if you haven't read that article yet, what's keeping you? Furthermore, if it is any consolation to know (which by the way also was the name of a Dutch death metal band that split in 1999), I will tell you what and how as soon as we can deliver. So stay tuned!

Monday, May 14, 2007

JDeveloper Use Case Modeler Revisited

The other day I wrote about the UML Class Modeler of JDeveloper 10.1.3 and concluded that not much has changed since JDeveloper 10.1.2. I cannot say the same for the UML Use Case Modeler of 10.1.3. Well, number-wise there still are not that many enhancements, but importance-wise all the more.

First of all, two new use case objects have been added, being the System Boundary and the Milestone. The System Boundary can be used to organize use cases by subject, and will typically be used to organize use cases by (sub)system. The Milestone can be used to organize use cases, for example by iteration in case you develop the system incrementally. You can put all use cases that are within the scope of a specific increment in the Milestone of that increment.

In the example I have used System Boundaries to organize use cases by subsystem, one subsystem being the front-office and the other one the back-office of some service request system.

System Boundaries and Milestones can be used together and a use case can be in more than one of each. For example, the use case Enter Service Request can be in both the "Service Request Front-Office" System Boundary as well as in the "First Increment" Milestone.

The most appealing change concerns user-defined use case templates. The JDeveloper UI has been changed and now allows you to create new palette pages with user-defined components. There are four different types of palette pages you can create: one for cascading style sheets, one for Java, one for code snippets and one for use case objects. The latter is the one I'm interested in right now.

You can create your own version of every type of use case object (actor, use case, system boundary, or milestone), the most interesting one being that for use cases. Normally on a project you will have two different types of use cases, so-called "casual" ones and "fully dressed" ones. The difference is in the number of properties you specify, a fully dressed one having more properties. The casual one is typically used for simple use cases for which "just a few words" suffice to describe them. For more complex use cases you will use the fully dressed version.

To my taste both use case templates that are shipped with JDeveloper lack a few properties, being:
  • Use Case number (it is a good practice to number use cases for reference)
  • Priority (in general it is good practice to prioritize everything you do in a project)
  • Status (is the use case just started, draft, or more or less stable)
  • Brief Description (describe what the use case is about in just a few sentences).
To adjust the templates to include the above properties, I used to have to adjust the original ones in the appropriate JDeveloper folder (after making a backup copy, of course). But now I can copy them to a project-specific folder (which I can put under version control), rename them to whatever I want, and add them to JDeveloper as components in a newly created palette page.

There is one caveat, though. When you start creating use cases, it might not always be clear whether a use case is simple or complex. Normally you will create use cases in iterations, starting with just a Brief Description and adding the scenario later. It may then turn out that the use case is a lot more complex than you initially thought. Unfortunately you cannot transform a casual use case into a fully dressed one (or vice versa).

But rather than creating every use case as a fully dressed one to be on the safe side, I choose to live dangerously and add extra properties when needed. Under the hood every use case starts as a copy of one of the templates and consists of XML/HTML tags. Using the Source view you can simply copy the extra fully-dressed properties from a fully dressed use case to a casual one and fill them out.

The last thing to say is that the use case editor has become a lot more user-friendly. Also nice to know, wouldn't you say?