Thursday, August 23, 2007

White Paper Business Rules in ADF Business Components Revamped!

Finally, the white paper Business Rules in ADF Business Components has been revamped!

As usual it took much more work than anticipated, especially as I made the 'mistake' of asking two subject experts (Steve Muench and Sandra Muller from the JHeadstart Team) to review the first draft. Had I not done that, the paper could have been published a month ago! But no, I could not help myself, I had to be thorough, I had to be me, and of course they provided me with insights that have had a significant impact on its contents. All for the better, though, as otherwise some of it would already have been obsolete the minute it hit the street.

"So what?", those of you who have not seen it before might ask yourselves. Well, let me try to explain without copying too much of what is already explained in the paper itself.

First of all, it is our (Oracle Consulting's) experience that analyzing and implementing business rules takes a significant part of the total effort of creating the average application. Productive persistence framework though it is, this still holds true for ADF Business Components (part of Oracle's ADF), or ADF BC for short. Moreover, despite all our efforts to create the ultimate application, most resources are still put into maintenance rather than into creating the original. So there is a lot to be gained with business rules in this regard, and that is what the white paper intends to address.

Does the paper present 'the right way' for ADF BC? Well, perhaps there is a better way, I don't know. But it is a good way, as it is consistent and makes use of the best that ADF BC has to offer. And because it is consistent (and well documented), maintenance also becomes easier: when the developers that created the original application used it (and then ran away like lightning to build the next hip thing), they will not have left behind a maintenance nightmare. At least not where the business rules are concerned.

"So, what's new?", those of you who sleep with the previous version of the paper under your pillow might ask yourselves.

Let me give you this short list and for the rest of it refer you to the paper itself:
  • Capturing business rules using a UML class model
    Why? Because plenty of people still want to capture requirements (including business rules) before there are tables and entity objects and therefore cannot make use of an ADF Business Components diagram (see also the article UML Rules! I posted some time ago).
  • Setting up framework extension classes
    Doing so makes introducing generic functionality later on so much easier and therefore is strongly advised in general.
  • Deprecated custom authorization in ADF BC
    This in particular concerns the horizontal authorization rules (restricting the set of rows of an entity you are allowed to insert, update or delete). The reason is that you would probably rather use Virtual Private Database for that.
  • 'Other Attribute Rules' dropped
    As I discussed in the article How to Pimp ADF BC Exception Handling, there are no compelling arguments anymore for not using built-in validators or method validators. As a result the category 'Other Attribute Rules' could be dropped, and we now suggest implementing these rules using validators.
  • New built-in attribute validators
    ADF BC provides the Length and Regular Expression validators for attributes.
  • 'Other Instance Rules' dropped, 'Delete Rules' added
    The category 'Other Instance Rules' has been dropped for the same reason the 'Other Attribute Rules' category has been dropped, with the exception of rules that make use of the delete() method; those rules are now in the new category 'Delete Rules'.
  • Registered Rules
    Using Registered Rules you can create generic method validators that can be used for multiple entities. An example is the recurring validation that an end date must be on or after a begin date.
  • UniqueKey Validator
    Compared with just checking 'Primary Key' for all primary key attributes, this one and only entity-level built-in validator helps to make validation of the primary key predictable and consistent, and also supports providing a more user-friendly error message.
  • 'Change History' and 'Cascade Delete' added
    These two categories are subcategories of Change Event Rules with DML. When using JAAS/JAZN ADF BC offers built-in support for recording date/user created/updated information, which has been documented in the Change History category. As a result of introducing this category, the 'Derivation' category has been renamed to 'Other Derivation'. Furthermore, ADF BC also supports Cascade Delete by defining an association as being a 'composition'.
  • Sending an email using JavaMail API
    The previous version of the paper here and there referred to a 'clex' library, which was part of the now long-gone Oracle9iAS MVC Framework for J2EE. If you remember that, you really are an Oracle veteran! Anyway, the classical example of a 'Change Event Rule without DML' is sending an email when somebody changes something in the database; this example is now based on the JavaMail API.
  • Message handling
    This subject in particular has been revised significantly. Among other things you can specify an error message together with the validator, which message will end up in an entity-specific message bundle. By creating custom exceptions you can also use one single message bundle, as I explained in the article How to Pimp ADF BC Exception Handling.
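To give the flavor of one of those new items: the recurring end-date/begin-date check behind such a Registered Rule boils down to a null-tolerant date comparison. Below is a minimal standalone sketch; the class and method names are mine, not the actual ADF validator API.

```java
import java.util.Calendar;
import java.util.Date;

public class EndAfterBeginRule {
    // Returns true when the rule is satisfied: an empty date is allowed
    // (mandatoryness is a separate rule); otherwise the end date must be
    // on or after the begin date.
    public static boolean isSatisfied(Date beginDate, Date endDate) {
        if (beginDate == null || endDate == null) {
            return true;
        }
        return !endDate.before(beginDate);
    }

    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        Date today = cal.getTime();
        cal.add(Calendar.DATE, 1);
        Date tomorrow = cal.getTime();
        System.out.println(isSatisfied(today, tomorrow)); // true
        System.out.println(isSatisfied(tomorrow, today)); // false
    }
}
```

A Registered Rule would wrap a check like this in a method validator that can then be attached to any entity with a begin/end date pair.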
Do you need more in order to get you to download the new white paper? I can hardly imagine.

Friday, August 03, 2007

How to Pimp ADF BC Exception Handling

When you have been doing things in a particular way for a long time, you sometimes find that in the meantime your way has become the slow way. Not necessarily the wrong way, as it works (otherwise you would not have kept doing it all this time, would you?), but you're just not fashionable anymore. Like wearing tight pants in the 60's or wide pants in the 80's. Or using vi instead of a state-of-the-art IDE like JDeveloper, or Eclipse for that matter (oh yes, I dare!).

I recently discovered that I had become unfashionable by using the setAttributeXXX() and validateEntity() methods for implementing business rules in ADF BC instead of using Validators. Not that I didn't know of Validators, I just thought that they would not give me proper control over exception and message handling. Because one of the things I would like to have is one single message bundle in which I can use a consistent coding of my messages, like APP-00001, APP-00002, etc. Even more important, the other thing I would like is to be able to translate all messages to Kalaallisut the minute I need to make my application available to the Inuit people of Greenland.

As you might know, up till JDeveloper 10.1.3 ADF BC will create an entity-specific message bundle to store the messages you provide with Validators. So with many entity objects, chances are you end up with many message bundles, which makes keeping error codes consistent a nightmare. And what about all the separate files you need to translate! You might find yourself no longer able to tell which message belongs to which bundle.

But as Steve Muench was very persistent in trying to convince me to use Validators, I finally gave in, tried finding a way to tackle my problem, and succeeded! No worries, don't expect rocket science from me. I hate complex or obscure code, as building maintainable information systems is already hard enough as it is, and therefore I always try to practice Simple Design.

I assume that you have created a layer of framework extension classes as described in section 2.5 of the ADF Developer's Guide and that your entity base class is called MyAppEntityImpl. If you have not yet created such a layer, do that first and come back after you have finished. Chop chop!

Basically the steps are as follows:
  • Create extension classes that extend the exceptions that ADF BC will throw when validation fails
  • Override the constructor of the super class and pass in the message bundle in the call to super()
  • Override the setAttributeInternal() and validateEntity() methods in the MyAppEntityImpl.java entity base class and make them throw your exceptions instead of the default ones.
Does that sound simple or what? No? OK, let me show you how I did it.

Attribute-level Validators will throw the AttrValException. So what I did was create a MyAppAttrValException as follows:

package myapp.model.exception;

import oracle.jbo.AttrValException;
import myapp.model.ResourceBundle;

public class MyAppAttrValException extends AttrValException
{
  public MyAppAttrValException(String errorCode, Object[] params)
  {
    super(ResourceBundle.class, errorCode, params);
  }

  /**
   * When the message contains a colon, return the message
   * starting two positions after the colon, limiting the
   * text to the message text from the resource bundle.
   *
   * @return the stripped message
   */
  public String getMessage()
  {
    String message = super.getMessage();
    // strip off product code and error code
    int colon = message.indexOf(":");
    if (colon > 0)
    {
      message = message.substring(colon + 2);
    }
    return message;
  }
}

And this is how I override the setAttributeInternal in the MyAppEntityImpl:

/**
 * Overrides the setAttributeInternal of the superclass in order to
 * pass in a custom message bundle to the MyAppAttrValException subclass
 * of the AttrValException. To be able to uniquely identify the
 * entries in the message bundle, the error code is extended with the
 * fully qualified class name of the Impl.
 */
protected void setAttributeInternal(int index, Object val)
{
  try
  {
    super.setAttributeInternal(index, val);
  }
  catch (AttrValException e)
  {
    String errorCode = new StringBuffer(getClass().getName())
                       .append(".")
                       .append(e.getErrorCode())
                       .toString();
    throw new MyAppAttrValException(errorCode, e.getErrorParameters());
  }
}

In a similar way you can handle entity-instance-level Validators and Method Validators by extending the ValidationException and overriding the validateEntity() method to throw your MyAppValidationException.

As said, the messages you provide when creating built-in Validators and Method Validators end up in an entity-specific message bundle. I've been told that this is going to change in JDeveloper 11g, but currently there is no stopping ADF BC from doing that.

So, at a convenient point in time (for example when most of your Validators have been implemented and tested), what you need to do is copy the error messages from the entity-specific message bundles to your custom message bundle.

To prevent duplicate keys in the custom message bundle, you should extend the key of each message with the fully qualified class name of the Impl.java file of the entity object it is coming from. Otherwise duplicates might occur whenever you have two different entity objects, both with an attribute with the same name and a built-in Validator specified for it. The overridden setAttributeInternal() method assumes you did.

The entries in the message bundle would then look similar to this:

{ "myapp.model.adfbc.businessobject.EmployeeImpl.Salary_Rule_0",
"APP-00001 Employee's Salary may not be below zero" },
...
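Such a custom message bundle is, under the hood, just a ListResourceBundle. Here is a simplified, self-contained sketch of what it could look like; the package-qualified key is taken from the example above, but the class name and everything else is illustrative, not the actual ADF-generated code.

```java
import java.util.ListResourceBundle;

public class MyAppMessageBundle extends ListResourceBundle {
    // Keys are prefixed with the fully qualified Impl class name
    // so that two entities with a same-named attribute cannot clash.
    private static final Object[][] CONTENTS = {
        { "myapp.model.adfbc.businessobject.EmployeeImpl.Salary_Rule_0",
          "APP-00001 Employee's Salary may not be below zero" }
    };

    protected Object[][] getContents() {
        return CONTENTS;
    }

    public static void main(String[] args) {
        MyAppMessageBundle bundle = new MyAppMessageBundle();
        // Look a message up by its entity-qualified key
        System.out.println(bundle.getString(
            "myapp.model.adfbc.businessobject.EmployeeImpl.Salary_Rule_0"));
    }
}
```

This is also the bundle class you would pass to the custom exception's super() call, so all messages resolve against one single file per language.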


Well that wasn't rocket science, was it?

Friday, July 13, 2007

We Are Proud of Borg

In case you wonder why I don't write that much lately: my wife just got out of the hospital with a so-called Ilizarov frame, meant to try and fix arthritis in her ankle (caused by an ill-treated complicated fracture many, many years ago).

Believe me, anyone who has seen her was impressed! She looks like an unfinished Robocop, a kind of Transformer that just got out of its egg. When she wants something from you, you just know that resistance is futile. Every time she walks I have a hard time suppressing the urge to make 'uhnk-chuck uhnk-chuck' sounds. Fortunately she doesn't read my blog (I hope), and if she does, I'm just too fast for her as long as I don't let myself get cornered.

What also is impressive is the amount of time you need to spend in the beginning taking care of someone in her condition. People with children might remember the first couple of weeks: how chaotic things can be, you trying to get a grip on the situation and finding a modus operandi to get you through the day without going wacko. Picture that, plus a full-time job and the Dutch health care system of today, and you cannot help feeling sorry for me. Fortunately there are plenty of people around me willing to help out, so forgive me for making things look worse than they are. I needed an excuse for not writing that much and thought a little exaggeration could do some good here.

But what is most impressive is the courage of my wife, to go through the operation knowing what she knows about the frame, and that she will have to suffer it for three months. That really makes me proud!

Wednesday, July 04, 2007

How to Prevent Your Rule From Getting Fired in ADF BC?

When using Oracle ADF (the Application Development Framework), implementing data-related business rules in the persistence layer is a good practice. In ADF this business layer is called ADF Business Components (aka BC4J, or Business Components for Java). This article will go briefly into this subject, just enough to give you an appetite for the upcoming revised white paper Business Rules in ADF BC. So don't eat too much of this appetizer, to leave room for the main course to come!

Implementing business rules in ADF Business Components means implementing them in so-called entity objects. For an application that uses a relational database, you can to a certain extent compare an entity object with an EJB entity bean, as like an entity bean the purpose of the entity object is to manage the storage of data in a specific table. Unlike EJB entity beans, however, ADF entity objects provide hooks to implement business rules that go way beyond what you can do with EJB entity beans.

I won't go into detail about these hooks. When you're interested, enough documentation about the subject can be found on the internet (beginning with Steve Muench's web log), or read the white paper! I will let you know when it is available and where to find it.

Where I do want to go into detail is one aspect: how to prevent rules from getting fired unnecessarily, for example for reasons of performance. I assume some basic knowledge of ADF Business Components, so if you don't have that, this is where you might want to stop reading.

There are the following typical options, provided as methods on any EntityImpl:
  • The isAttributeChanged() method can be used to check if the value of an attribute has actually changed, before firing a rule that only makes sense when this is the case.
  • Furthermore there is the getEntityState() method that can be used to check the status of an entity object in the current transaction, which can either be new, changed since it has been queried, or deleted. You can use this method to make a rule fire only when, for example, the entity object is new.
  • There is also the getPostState() method that does a similar thing as getEntityState(), but takes into consideration whether the change has been posted to the database.
Sometimes knowing whether or not something has changed does not suffice, for example because you need to compare the old value of an attribute with the new one. That typically happens in case of status attributes with restrictions on the state changes. Normally you would be able to do so using the getPostedAttribute() method, which returns the original value of an attribute as read from or posted to the database. However, that won't work when you are using the beforeCommit() method to fire your rule, as at that time the change has already been posted, so the value getPostedAttribute() returns will not differ from what the getter returns.

"So what", you might think, "when do I ever want to use beforeCommit()?". Well, you have to as soon as you are dealing with a rule that concerns two or more entity objects that could trigger it, as in that case beforeCommit() is the only hook of which you can be sure that all changes made are reflected by the entity objects involved.

Suppose you have a Project with ProjectAssignments and you want to make sure that the begin and end date of the ProjectAssignments fall within the begin and end date of the Project. A classical example, I dare say. Now the events that could fire this rule are the creation of a ProjectAssignment, the update of the Project's start or end date, or the update of the ProjectAssignment's start or end date.

The creation of the ProjectAssignment you can verify by using getEntityState(), which would return STATUS_NEW in that case. Regarding the change of any of the dates, you can check for STATUS_MODIFIED, but that also returns true when any of the other attributes has been changed.

Now suppose that, as there can be multiple ProjectAssignments for one Project, you only want to validate this rule when one of those dates actually did change. The only way to know is by comparing the old values with the new ones. As explained before, since the hook you are using will be the beforeCommit() method, getPostedAttribute() will also return the new value, as at that time the changes have already been posted to the database.

Bugger, what now? Well, I wouldn't have raised the question unless I had some solution to it, would I? The solution involves a bit of, yet pretty straightforward, coding. I will only show how to solve this for the Project; for the ProjectAssignment the problem can be solved likewise. The whole idea behind the work-around is that you use instance variables to store the old values so that they are still available in beforeCommit().

Open the ProjectImpl.java and add the following private instance variables:

private Date startDateOldValue;
private Date endDateOldValue;

In general you can use the convention of calling this specific type of custom instance variable [attribute name]OldValue, to reflect its purpose clearly. Now you need to make sure that these variables are initialized at the proper moment. This will not be in validateEntity(), as that can fire more than once with unpredictable results. No, it should be in the doDML() of the ProjectImpl, as follows:

protected void doDML(int operation, TransactionEvent e)
{
  // store old values to be able to compare them with
  // new ones in beforeCommit()
  startDateOldValue = (Date)getPostedAttribute(STARTDATE);
  endDateOldValue = (Date)getPostedAttribute(ENDDATE);

  super.doDML(operation, e);
}

Now in beforeCommit() you can compare the startDateOldValue with what is returned by getStartDate(), etc. Although it might not look like it at first sight, this might be the trickiest part, as you need to deal with null values as well. In JHeadstart the following convenience method has been created to tackle this, in the oracle.jheadstart.model.adfbc.AdfbcUtils class:

public static boolean valuesAreDifferent(Object firstValue
              , Object secondValue)
{
  boolean returnValue = false;
  if ((firstValue == null) || (secondValue == null))
  {
    if (( (firstValue == null) && !(secondValue == null))
        ||
        (!(firstValue == null) && (secondValue == null)))
    {
      returnValue = true;
    }
  }
  else
  {
    if (!(firstValue.equals(secondValue)))
    {
      returnValue = true;
    }
  }
  return returnValue;
}

This method is being used in the beforeCommit() of the ProjectImpl, as follows:

public void beforeCommit(TransactionEvent p0)
{
  if ( getEntityState() == STATUS_MODIFIED &&
       ( AdfbcUtils.valuesAreDifferent(startDateOldValue,
              getStartDate()) ||
         AdfbcUtils.valuesAreDifferent(endDateOldValue,
              getEndDate())
       )
    )
  {
    brProjStartEndDate();
  }
  super.beforeCommit(p0);
}
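The null handling in valuesAreDifferent() is easy to get wrong, so here is a compact standalone equivalent together with its truth table; the class name is mine, and the one-liner below is just a condensed form of the JHeadstart method shown above.

```java
public class ValuesAreDifferentDemo {
    // Values differ when exactly one of them is null, or when both
    // are non-null and not equal; this matches the longer
    // AdfbcUtils.valuesAreDifferent() implementation above.
    public static boolean valuesAreDifferent(Object first, Object second) {
        if (first == null) {
            return second != null;
        }
        return !first.equals(second);
    }

    public static void main(String[] args) {
        System.out.println(valuesAreDifferent(null, null)); // false
        System.out.println(valuesAreDifferent(null, "x"));  // true
        System.out.println(valuesAreDifferent("x", null));  // true
        System.out.println(valuesAreDifferent("x", "x"));   // false
        System.out.println(valuesAreDifferent("x", "y"));   // true
    }
}
```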

Finally, the actual rule has been implemented as the brProjStartEndDate() method on the ProjectImpl as follows:

public void brProjStartEndDate()
{
  RowIterator projAssignSet = getProjectAssignments();
  ProjectAssignmentImpl projAssign;
  while (projAssignSet.hasNext())
  {
    projAssign =
      (ProjectAssignmentImpl)projAssignSet.next();
    if (((getEndDate() == null) ||
         (projAssign.getStartDate().
              compareTo(getEndDate()) <= 0)
        ) &&
         (projAssign.getStartDate().
              compareTo(getStartDate()) >= 0)
       )
    {
      // rule is true
    }
    else
    {
      throw new JboException("Start date of a Project " +
        "Assignment must fall within the start and " +
        "end date of the Project");
    }
  }
}
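Stripped of the ADF types, the window check inside brProjStartEndDate() comes down to the following comparison. This is a sketch with names of my own choosing, not the actual EntityImpl code, but it lets you exercise the boundary cases (including the open-ended project without an end date) in isolation:

```java
import java.util.Date;
import java.util.GregorianCalendar;

public class ProjectDateWindow {
    // An assignment start date is valid when it is on or after the
    // project start date and, if the project has an end date at all,
    // on or before that end date.
    public static boolean startsWithinProject(Date assignmentStart,
                                              Date projectStart,
                                              Date projectEnd) {
        boolean beforeEnd = (projectEnd == null)
            || assignmentStart.compareTo(projectEnd) <= 0;
        boolean afterStart = assignmentStart.compareTo(projectStart) >= 0;
        return beforeEnd && afterStart;
    }

    public static void main(String[] args) {
        Date jan1 = new GregorianCalendar(2007, 0, 1).getTime();
        Date jun1 = new GregorianCalendar(2007, 5, 1).getTime();
        Date dec31 = new GregorianCalendar(2007, 11, 31).getTime();
        System.out.println(startsWithinProject(jun1, jan1, dec31)); // true
        System.out.println(startsWithinProject(jan1, jun1, dec31)); // false
        System.out.println(startsWithinProject(jun1, jan1, null));  // true
    }
}
```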

Well that wasn't too difficult, was it?

Thursday, June 21, 2007

Did You Find Your Balance With Java?

When I was young I practiced judo for a couple of years. Got myself a brown belt and nothing after that, as I wasn't the competition kind of guy. I could have managed to get the black belt on technique alone (by doing so-called katas), but that would have required a lot of practice and time, which I did not have. One of the reasons being that at the same time I also practiced jujutsu (got myself a green belt for that).

When I went to university I visited a different dojo, again only a couple of times because of lack of time, but long enough to understand what the sensei in the previous dojo had failed to teach me: proper balance. Imagine that during a randori (that is, sparring) you jump around too much, giving the opponent many opportunities to show you every corner of the dojo. Well, that was me.

I have not been doing martial arts for many years now, so I was way beyond any frustration about this. That is, until the recent J-Spring of the NL-JUG (the spring conference of the Dutch Java User Group), where I learned about the JavaBlackBelt community. As is described on the home page of their web site: "JavaBlackBelt is a community for Java and related technologies certifications. Everybody is welcome to take existing exams and build new ones." I couldn't help it, I had to check it out. Am I still jumping around, or have I found my balance with Java?

Well, I expect it will be a long way before I know, as again I don't have much time. But I managed to get at least a yellow belt! So for now I find some comfort knowing that at least I'm beyond the level of Java newbie.

Friday, June 15, 2007

Cooking with OUM

The other day I had a really challenging discussion about how to apply OUM (Oracle Unified Method) / UP (Unified Process) to a running project. As with all projects, it had to be decided what needed to be done to deliver a system; no surprise there. The challenging part was first of all that we tried to fit an incremental approach into a waterfall-like offering. Secondly, the approach had to be supported by people that had had no previous exposure to OUM or UP. So how can you bring that to a success without boiling people's brains?

Let me start with the good news, which is that we reached the point where people have an idea about what they are going to do, and made that plan themselves rather than me telling them what to do. That is exactly how you want things to be in the end, so what more do you want? I should be happy!

The bad news is that something's nagging at the back of my head: I made a couple of mistakes along the road, leaving me with the feeling the message was not understood properly. But as a famous Dutch philosopher once said, "every disadvantage 'as its advantage", in this case being that me feeling miserable is not your problem, and we all might learn from it.

In retrospect I think the one and only real mistake was skipping the part where you explain the principles behind OUM and UP, and being so naive as to think I could do that along the road. OK, I jumped on a running train and the approach needed to be there yesterday, so maybe I'm excused, but boy, what a mistake that was, as it made discussions so difficult. But let's skip the whining part and let me explain what I would do differently next time.

First of all I would explain very well the difference between a task and a deliverable. When that is clear, I would be ready to explain how iterations go in OUM. Pretty important when you need to explain that you do high-risk activities first (a principle that I luckily did not forget to explain up-front).

Unfortunately, I presented all tasks using the name of what OUM calls 'work products', giving the impression that every task results in a deliverable that actually is going to be presented as such to the customer. As in my proposal there were quite a few tasks to do, it looked like I proposed creating a pile of deliverables, scaring the hell out of people.

In OUM the term deliverable is reserved for a "product" that we actually present to the customer and that you likely will find in the project proposal or project plan. In order to get to that deliverable, you sometimes do multiple tasks, like creating an initial version of some document, reviewing that with the customer, in a next phase adding details using some diagrams, reviewing it again, etc., before you finally come up with one concrete deliverable.

Every (intermediate) task results in a "work product" that might or might not be presented to the customer as a deliverable. The important thing to notice is that, although the tasks might have different names and associated codes, you actually work on multiple iterations of the same deliverable.

Below is an example of how the Use Case Model is created in three iterations (I left out any review tasks between the iterations). The rounded boxes are the tasks; the iterations of the work products are the square ones. By drawing a box around the three Use Case Model iterations I express that these will be combined into one single deliverable.



How many work products you plan to combine into one deliverable, is up to you. Three words of advice, though.

First, try to prevent multiple people working on the same document at the same time, because that unnecessarily introduces the risk of people having to wait for each other. When you anticipate this might happen, that's a strong contra-indication for combining work products.

For this reason, in the example the Use Case Realization has not been combined with the Use Case Model, as in this case the assumption is that the Use Case Realization will be worked out by a Designer, while the Use Case Model has been worked out by a (Business) Analyst, both having different responsibilities.

Secondly, do not iterate the same work product over a long period of time, because you might get the feeling it never finishes. You definitely should not let deliverables iterate across phases that need to be signed off. Not many customers want to put their signature on a document in a state of "draft". I always try to prevent this kind of signing-off way of working, as it can be killing for agility, but when there is no other option, be aware that after sign-off a deliverable can normally only be changed through a formal change request procedure (and that takes valuable time).

In the example, this is one of the reasons the MoSCoW-list has not been combined with the Use Case Model, as the assumption is that the MoSCoW-list has been used to define the scope and priorities of high-level requirements, providing the baseline for the activities during the Elaboration phase.

Finally, wherever you combine work products, keep track of the link to the tasks that create them, and make explicit in what iteration the deliverable is, for the following reasons:

  • When keeping track of the original tasks you can make use of the work breakdown structure the method offers, supporting estimating and progress tracking (for some silly reason some managers like to know how far you are from time to time).
  • Knowing what task you are performing facilitates using templates and guidance the method offers for that task.
  • It allows for people outside the project to understand what you're trying to achieve with a specific work product, enabling them to have the right perspective when reviewing it. Otherwise there is a big risk of expecting the wrong thing and not talking the same language. I have seen this going nasty a couple of times when the customer brought in their own experts, believe me.
  • Whenever you want to know what a specific work product should look like, you can ask questions like: "does anyone have a good example of an RD.011 High-level Business Process Model for me?" and surprise everyone around you. You might even get what you asked for, rather than some document that you have to study for an hour or so to finally conclude it's not what you need. Have you been there, done that?
These are benefits you start to profit from right here, right now. But what about next time? Would it not be nice if you could reuse your deliverables as an example for one of your next projects? Or even better, would you not be proud when one of your deliverables ends up in the company library of outstanding examples, giving you the feeling you have achieved all there is to achieve, so you can quit your job, sell your house and go and live like a king in France, as we Dutchmen say?

Hmmm... Suddenly I start to understand why sometimes it is so hard to get good sample deliverables in some organizations. It may be because of some company policy forbidding such a library.

Tuesday, June 05, 2007

UML White Papers Released

Remember the UML white papers I talked about updating? Well, they got published today!

So when you are interested in getting started with:
  • UML use case modeling
  • UML class modeling
  • UML activity modeling
go to the JDeveloper Technical Papers section on OTN and download them from there. I expect the average paper to take no more than one to two hours to read and digest. Happy reading!

Friday, June 01, 2007

Recipe for Stress

When capturing specifications, it is always a challenge to weigh effort against risk. The more you specify, the less risk there is of missing requirements. At least, that is what you hope. Unfortunately, experience shows that no matter how hard you try, you never get them all. So where do you stop? When is it time to go home and grab a beer?

The other day I reviewed some use cases. Some of them were worked out in quite some detail. There even was a log-in use case with a (conceptual) screen layout. But as this was only the start of the project, no supplementary requirements had been defined yet. As an answer to the question of how detailed one should get while capturing requirements, I provided the following example based on the log-in use case (which by the way is a subfunction rather than a user goal).

Your customer might agree that you can leave the functional requirements for a log-in use case to "the user can log in by providing either a user name and a password or a customer number and a password", and not ask for a screen layout as for most people that is pretty straightforward.

However, at the same time supplementary requirements to this use case might be something like:
  • Passwords must be changed at least every 2 months
  • Passwords must be at least 8 characters in length, and a mixture of alphabetic and non-alphabetic characters
  • Passwords may not be reused within 5 password changes.
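Supplementary requirements like these translate almost one-to-one into checks. As an illustration, here is a sketch of how the second rule (length plus character mix) could be verified; the class and method names are my own and not part of any of the papers discussed:

```java
public class PasswordPolicy {
    // Rule: at least 8 characters, mixing alphabetic and
    // non-alphabetic characters.
    public static boolean hasValidFormat(String password) {
        if (password == null || password.length() < 8) {
            return false;
        }
        boolean hasAlpha = false;
        boolean hasNonAlpha = false;
        for (char c : password.toCharArray()) {
            if (Character.isLetter(c)) {
                hasAlpha = true;
            } else {
                hasNonAlpha = true;
            }
        }
        return hasAlpha && hasNonAlpha;
    }

    public static void main(String[] args) {
        System.out.println(hasValidFormat("s3cret-password")); // true
        System.out.println(hasValidFormat("password"));        // false: letters only
        System.out.println(hasValidFormat("p4ss"));            // false: too short
    }
}
```

The other two rules (change interval, reuse history) need stored state and are exactly the kind of thing that makes them far from trivial to implement, which is the point of the example.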

Notice how the functional requirement is very brief, only one short sentence. The supplementary requirements, however, are and should be pretty specific, as it is often far from trivial how to implement them.

Thinking about something "simple" like securing passwords might also trigger the users to define other security requirements as well, like securing web services. When doing a mapping from security requirements to an initial technical architecture, you might decide to use Oracle Web Services Manager. That might imply the need for some extra setup and expertise. And it all started with a simple screen with only two items on it!

So what do we learn from this?

Failing to recognize supplementary requirements in the beginning is a good recipe for a lot of stress later on. That is nice to know for the masochistic project managers out there. For those who are not, you had better make sure you have discussed the supplementary requirements with your customer to a level sufficient to establish a baseline that is at least somewhat reliable.

Not discussing supplementary requirements and thinking it will just blow over is a mistake. Most customers have only a vague idea, if any, and will expect us to tell them what their supplementary requirements should be. Failing to do so might jeopardize your relationship with your customer.

Friday, May 25, 2007

UML rules!

Currently I'm updating the white paper Business Rules in ADF BC. Doing so it occurred to me that there are still many questions to answer about what to do when you want to record business rules in UML. With this posting I will share some of my ideas. JDeveloper 10.1.3 will be my tool and ADF BC4J the targeted persistence layer, but I expect the ideas to be more generic and also applicable to EJB 3.0 and POJOs for that matter.

In "the old days", when we were young and still using the function hierarchy diagrammer, business rules were either an intrinsic part of the entity relationship diagram or recorded in function descriptions. As this sometimes resulted in the same business rule being recorded more than once (in different functions) and too often in an inconsistent way, we found it to be a neat idea to record them explicitly, that is, as functions of their own, and link them to the functions they applied to. This has been a very successful approach for quite a few years.

At the same time UML arrived as the modeling language for object-oriented analysis and design, and more recently BPMN for business process modeling. As I have never come upon any information system for which there were no business rules whatsoever, you can imagine that it was quite a shock for me to discover that there was no similar thing in place for UML at that time. Of course, a UML class diagram as such allows for expressing more business rules than an entity relationship diagram, but that covers only part of it, doesn't it? As far as the rest is concerned, we were kind of back to square one, meaning that we had to record business rules as part of use cases, for example. The BPMN specification even explicitly excludes business rules from their scope, so what do you do when using BPM?

Some improvements have arrived since the beginning of UML. At some point (I am not sure since when) constraints have been added (as a stereotype). Also, OCL has emerged as a formal language to capture business rules; you can use OCL to record a business rule in a constraint. I don't know about you, but I have yet to meet the first person who speaks OCL, let alone the kind of persons we call customers. So in most cases I expect constraints to be recorded in some natural language.

Now let's assume you are using UML to capture functional requirements. Business rules can be found during any stage, for example while doing use case modeling. When starting to create use cases it makes perfect sense to record rules as part of that. Use cases have some properties that can be used for recording business rules, namely pre-conditions and post-conditions. Pre-conditions must be met before the use case can be executed. A typical example would be "the user must be logged on", but it could also be something like "the user must be a manager to view detailed employee information".

Post-conditions can be divided into minimal guarantees and success guarantees. Minimal guarantees indicate what must hold true when none of the scenarios of the use case have finished successfully. A success guarantee describes a successful execution of the use case. Examples of success guarantees are:
  • The employee information has been recorded successfully
  • A change of the employee salary must have been logged
  • When the customer has boarded 15 times or more on a flight, their frequent flyer status must be set to silver elite.
Then there are so-called invariants, conditions that should always hold true (before, during and after execution of any use case). Examples would be:
  • End date must be on or after begin date
  • There cannot be a non-vegetarian dish in a vegetarian dinner.
Your use case template might not have a property called invariants. But hey, it's your use case, so why not add it?
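To make concrete how such guarantees and invariants eventually become executable checks, here is a minimal Python sketch of the frequent flyer success guarantee and the vegetarian dinner invariant. The data shapes are hypothetical, invented purely for illustration:

```python
def frequent_flyer_status(boardings):
    """Success guarantee: 15 or more boardings means silver elite status."""
    return "silver elite" if boardings >= 15 else "standard"

def dinner_satisfies_invariant(is_vegetarian_dinner, dishes):
    """Invariant: a vegetarian dinner may not contain a non-vegetarian dish.

    `dishes` is a list of (name, is_vegetarian) tuples.
    """
    if not is_vegetarian_dinner:
        return True
    return all(is_veg for _name, is_veg in dishes)
```

In an implementation the invariant would typically end up as a validation rule on the entities involved, whereas the success guarantee belongs to one specific use case.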

By now you might wonder how to deal with the problem I pointed out at the beginning, namely that you captured the same business rule more than once for different use cases. You might also find yourself recording business rules that relate to each other, or even contradict each other. Don't worry too much about that in the beginning (using the Unified Process that would be while doing Requirements Modeling). Just record the business rules when and where you find them. You only risk confusing your customer when you start recording business rules as artifacts of their own at this early stage.

During a later iteration, you detail the use cases. By that time you might have a class model or will start to create one (using the Unified Process that would be during Requirements Analysis). At this point it makes sense to pull business rules out of the use cases and into constraints. Doing so you are able to remove duplications and inconsistencies.

If your tool supports publishing use cases together with the artifacts linked to them (like classes and constraints), you might consider removing the business rules from the use cases to prevent duplication and inconsistencies.

Using JDeveloper you normally add constraints to UML class diagrams and link them to the classes they relate to. But you can also include them in use case diagrams and link them to use cases. Finally, you can link related business rules together, which is convenient when you want to decompose business rules.

Having done all this you are not only able to track what rules apply to what classes but also to what use cases. Handy for example for testing purposes.

Finally, you can include the same constraints in (for example) ADF Business Components diagrams, link them to entity objects, classify them, or do whatever fancy stuff you like. Check out the new version of Business Rules in ADF BC for ideas about classifying business rules when using BC4J. I will let you know when it gets published.

Got to stop here folks. A long and sunny weekend just knocked at my door, and I intend to open ...

Wednesday, May 16, 2007

Yet Another Useless Teaser

Yes! Today the Oracle Unified Method (OUM) Release 4.3.0 has been announced!

Ermmm. That was an internal announcement, as we do not yet have it available for customers. Me and my big mouth! Well, now that you know, I should say some more. There is no way back, is there?

To begin with I can tell you that it will not take long before OUM will be made available to customers, as the conditions for that are being discussed almost as we speak. I cannot tell you the exact details yet, as otherwise I would have to kill you. Also, I do not know the details myself so I have to wait as well. So now you still know nothing, do you?

Well, I did write about OUM before, so if you haven't read that article yet, what's keeping you? Furthermore, if it is any consolation to know (which by the way also was the name of a Dutch death metal band that split in 1999), I will tell you what and how as soon as we can deliver. So stay tuned!

Monday, May 14, 2007

JDeveloper Use Case Modeler Revisited

The other day I wrote about the UML Class Modeler of JDeveloper 10.1.3 and concluded that not much has changed since JDeveloper 10.1.2. I cannot say the same for the UML Use Case Modeler of 10.1.3. Well, number-wise there still are not that many enhancements, but importance-wise all the more.

First of all, two new use case objects have been added: the System Boundary and the Milestone. The System Boundary can be used to organize use cases by subject, and typically will be used to organize use cases by (sub)system. The Milestone can be used to organize use cases, for example by iteration in case you develop the system incrementally. You can put all use cases that are within the scope of a specific increment in the Milestone of that increment.

In the example I have used System Boundaries to organize use cases by subsystem, one subsystem being the front-office and the other the back-office of some service request system.

System Boundaries and Milestones can be used together and a use case can be in more than one of each. For example, the use case Enter Service Request can be in both the "Service Request Front-Office" System Boundary as well as in the "First Increment" Milestone.

The most appealing change concerns user-defined use case templates. The JDeveloper UI has been changed and now allows you to create new palette pages with user-defined components. There are four different types of palette pages you can create: one for cascading style sheets, one for Java, one for code snippets and one for use case objects. The latter is the one I'm interested in right now.

You can create your own version of every type of use case object (actor, use case, system boundary, or milestone), the most interesting one being that for use cases. Normally on a project you will have two different types of use cases, so-called "casual" ones and "fully dressed" ones. The difference is in the number of properties you specify, where a fully dressed one has more properties. The casual one is typically used for simple use cases for which "just a few words" will suffice to describe them. For more complex use cases you will use the fully dressed version.

To my taste both use case templates that are shipped with JDeveloper lack a few properties, namely:
  • Use Case number (it is a good practice to number use cases for reference)
  • Priority (in general it is good practice to prioritize everything you do in a project)
  • Status (is the use case just started, draft, or more or less stable)
  • Brief Description (describe what the use case is about in just a few sentences).
To adjust the templates to include the above properties I previously had to adjust the original ones in the appropriate JDeveloper folder (after making a backup copy, of course). But now I can make a copy in a project-specific folder (which I can put under version control), rename them to whatever I want, and add them to JDeveloper as components in a newly created palette page.

There is one caveat though. When you start creating use cases, it might not always be clear whether a use case is simple or complex. Normally you will create use cases in iterations, starting with just a Brief Description and adding the scenario later. It may then turn out that the use case is a lot more complex than you initially thought. Unfortunately you cannot transform a casual use case into a fully dressed one (or vice versa).

But rather than creating every use case as a fully dressed one to be on the safe side, I choose to live dangerously and add extra properties when needed. Under the hood every use case starts as a copy of one of the templates and consists of XML/HTML tags. Using the Source view you can simply copy the extra properties from a fully dressed use case to a casual one and fill them out.

The last thing to say is that the use case editor has become a lot more user-friendly. Also nice to know, wouldn't you say so?

Thursday, May 10, 2007

JDeveloper Class Modeler Revisited

About two years ago I published a couple of white papers about UML, unit testing, and source control, and how to do that using JDeveloper 9.x / 10.1.2. These white papers still can be found on the white paper archive on OTN and are:
  • Getting Started with Use Case Modeling
  • Getting Started with Class Modeling
  • Getting Started with Activity Modeling
  • Getting Started with Unit Testing
  • Getting Started with CVS
As the world has turned around a couple of times since then, I took the time to start revisiting them and bringing them up to date with JDeveloper 10.1.3.

The first paper I have revisited is the one on UML class modeling. To be honest, not much has changed. The only thing I find worth mentioning is that for a class attribute you can now specify if and when it can be updated, using a new property called Changeability. Do I hear a big "Wow!"? No? Hmm.... Well, you know, of all the UML modelers it is the most mature one, as it also forms the basis of the Java class, ADF Business Components, and database modelers, which are (implementation-specific) specializations of the UML class modeler.

The only thing that really bugs me is that you still cannot specify unique identifiers for a UML class. I must admit, it's not in UML 2.0 either, but we are a company that happens to sell relational databases (among other things), and for many of us specifying a unique identifier is like riding a bike is for Lance Armstrong! One of the consequences is that it is still a bad idea to transform a UML class model to ADF Business Components, because (as I explain in the white paper) ADF entity objects require primary keys and the transformer therefore picks the alphabetically first attribute, which is not necessarily the right one. The result is a lot of (error-prone) work fixing the ADF model, which is why I advise against transforming a UML class model to ADF Business Components. And that's too bad, as most people I know use ADF.

"But hey, you are talking JDeveloper 10.1.3, and is that not already an archived version?", I hear you say. Well, almost folks. Recently the JDeveloper 11g - Technology Preview has been released, and YES! the new features overview tells me there is a major change regarding the UML class modeler.

From a feature point of view there still is not much to tell, although one of my enhancement requests from two years ago has been implemented: attributes can now also be visually ordered logically (instead of alphabetically), allowing you, for example, to put the most important ones at the top and attributes that logically belong together near each other. Is that a big deal? Believe me, when you have to review a big UML class model it is!

Yet I do have the impression the UML modelers of JDeveloper are going to be taken more seriously than before, as the new features page also states that the UML class modeler has been rewritten on a new graphical engine that provides better performance and scalability. And not only that (and I quote): "Future releases will see other modelers being re-hosted on this new framework". Initially we will benefit most from this as it will greatly improve the usability of the modelers, not only from an editing point of view but also regarding publishing the models. And whoever has been on a project that involved more than just a few tables knows how important that can be.

Can't wait to get my hands dirty on that! But first I need to revisit the other modelers as well, so watch out for new postings about that. I can already tell you that the UML use case modeler of 10.1.3 has definitely seen more serious changes!

Monday, May 07, 2007

My Agile Project (based on a true story)

What makes a project 'agile'? Are you only doing an agile project when you apply all principles of eXtreme Programming for example? From the way I phrased these questions you probably already concluded that I think differently.

What really makes agile projects agile is best formulated in the Agile Manifesto. Among others, the following principles are included:
  • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Working software is the primary measure of progress.
  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  • Continuous attention to technical excellence and good design enhances agility.
Not specifically included (probably assumed to be intrinsic) is the following principle:
  • Address the biggest risks first.
I think these principles best explain why my latest project has been such a success, meaning that it was delivered on time, within budget, with a happy customer, and last but not least, happy end-users!

But before I explain, let me first tell you a little bit about the project. It concerns maintenance of an existing system that is more than five years old. The system has been built using Oracle Designer / Developer, with a lot of PL/SQL code in the database, of which a considerable part enforces rules and regulations. Those rules and regulations changing significantly was the reason to start the project and adapt the system accordingly.

The major challenges we focused on were these:
  • A first increment should go into production before January (it was the end of October when I first got involved, so we had only two months).
  • The PL/SQL code was set up using a lot of (what in retrospect we may call) bad practices, like one huge package that 'does it all' with tightly coupled procedures, many of which had no clear responsibility.
Although we talk about rules and regulations, many things were open to interpretation. When you read one of my previous posts you know I try to avoid a Big Design Up-Front where feasible. Normally you have to justify this to people that have not yet bought into agile development, but in this case the time constraint made creating such a thing impossible anyway.

So what you gonna do? Well, first of all we (meaning a team of 3 developers, a client project manager and an ambassador user) decided to meet on a daily basis to monitor the status, exchange information and address issues as soon as they emerged. In this setting we made up a high-level MoSCoW list, determined the dependencies between the requirements, and decided upon their priorities (primarily based on dependencies and biggest risks first). Using this as input, we then thought of an approach for how we could best adapt the existing system in an incremental way. We created a low-level MoSCoW list with a scope of two weeks, delivered what we produced and, based on our experience, created the next low-level MoSCoW list.
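The ordering principle we used, MoSCoW category first and within that the biggest risks first, can be sketched as a simple sort. This is a toy illustration; the attribute names and the 1-5 risk scale are invented, not taken from any planning tool:

```python
# Must before Should before Could before Won't; within a category, highest risk first.
MOSCOW_RANK = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

def prioritize(requirements):
    """Each requirement is a dict with 'name', 'moscow' and 'risk' (1-5) keys."""
    return sorted(requirements,
                  key=lambda r: (MOSCOW_RANK[r["moscow"]], -r["risk"]))
```

Dependencies between requirements still have to be honored by hand (or with a topological sort), but even this crude ordering already tells you what to pick up first.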

Sounds easy, wouldn't you agree? Yeh, right...

We are dealing with people here, and in general people have a few pretty persistent habits that must be dealt with. For example, many people find it counter-intuitive to design and implement functionality just to achieve a foreseeable goal, knowing that in the near future they will probably have to change it as a result of evolving insight. The main reason is that we have a tendency to try to do it right the first time, to prevent having to change code later on, which just seems like overhead. Unfortunately, experience shows almost none of us is capable of overseeing everything that needs to be taken into account upfront. Discrepancies between vision and reality emerge the minute we start to build the real thing, and especially become apparent when end-users are confronted with it.

Another tendency we have is starting with a couple of simple things just to get a sense of achievement, while postponing the more complex issues, hoping they will kind of solve themselves as time goes by. But those issues are the ones with the biggest risks associated with them, and when they do not solve themselves, it often is too late and you find yourself in big trouble. I have never done research to find out, but I do believe that addressing high-risk issues too late is the major cause of budget overruns, deadlines not being met, and delivery of considerably less functionality than planned for.

So what agile methods promote is developing small increments that are delivered frequently. This prevents us from going in the wrong direction for too long, as we will notice it soon after it happens. To prevent that changing things later indeed results in a lot of overhead, we maintain a simple design by creating the simplest thing that could possibly work. Whenever the design starts to get complex, we refactor to make it simple again. To allow for refactoring without breaking the functionality, we apply unit testing to prove the system still works afterwards. To reduce risk we ensure that the code one person creates works with the code of others, by continuously integrating everything we produce. And wherever feasible, we start with those aspects where it is least clear to us how to proceed, to find out as soon as possible how they can be resolved and what the impact on other aspects is.
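The "refactor under the protection of unit tests" step can be illustrated with a tiny example, assuming Python's built-in unittest module. The function and the tests are hypothetical; the point is that the tests stay the same while the implementation is refactored:

```python
import unittest

def periods_overlap(start_a, end_a, start_b, end_b):
    """Do two closed periods [start, end] overlap? A candidate for refactoring."""
    return start_a <= end_b and start_b <= end_a

class PeriodsOverlapTest(unittest.TestCase):
    """These tests must keep passing before and after every refactoring."""

    def test_overlapping_periods(self):
        self.assertTrue(periods_overlap(1, 5, 4, 8))

    def test_disjoint_periods(self):
        self.assertFalse(periods_overlap(1, 2, 3, 4))

    def test_touching_periods(self):
        self.assertTrue(periods_overlap(1, 3, 3, 5))
```

Run the tests (for example with `python -m unittest`) after every change; green means the behavior is preserved.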

All this is not just a nice theory. We did just that and proved it works!

We consulted each other at least once a day, keeping each other on track with maintaining a simple design and refactoring when necessary. On average we delivered every two weeks, and from increment to increment added new functionality while at the same time improving the existing poor architecture. The result is a system that not only does what it is supposed to do, but which also is based on a solid architecture with excellence through simplicity!

Only time will tell, but I have a strong feeling that we considerably reduced the cost of maintenance and perhaps ensured that the system will survive for yet another five years.

Tuesday, May 01, 2007

Can I take you(r method) home with me?

For a couple of years now I have been a member of a group within the Oracle Methods Group that is responsible for the Oracle Unified Method, or OUM for short. OUM is based on the Unified Process (UP), as is the Rational Unified Process (RUP), and as such has many similarities with RUP.

"So why yet another method?", you might ask. Well, there are plenty of reasons, but to mention a few important ones:

  • Years ago Oracle created the Custom Development Method. That method has been a great success for ourselves as well as our customers, and many people are used to it. By giving OUM the same kind of structure, we make it relatively easy for people to "migrate" from one method to the other.

  • RUP is optimized for being used with Rational/IBM tooling. As Oracle has its own extensive set of tooling, it makes sense to have a method that takes that into consideration. Of course, to a certain extent RUP is configurable and extendable, but that has its limitations and would not facilitate the first point.

  • Maybe you do not know Oracle for running IT projects, but we do have a lot of experience, especially regarding projects that involve a database or a Service-Oriented Architecture, and we found that there is a lot of room for improvement in how the (Rational) Unified Process addresses those issues, if at all.

  • Last but not least, the Unified Process does not address cross-project, enterprise-level issues very well. Nowadays there is also the Enterprise Unified Process. Although a valuable extension to UP, for us it still has a few shortcomings, among them that it still does not address all aspects we found to be important to cover, like IT Governance.

On very short notice the Oracle Methods Group will make OUM 4.3 available. That version will have a first cut of what we consider necessary to add to UP to make it an enterprise-level development method. For that we added an extra "dimension" to UP, which we call Envision, as well as a couple of enterprise-level processes.

OUM 4.3 will also be the first release that (to a limited extent) is available to our customers as well. As the specifics are yet to be determined, I cannot tell you more yet, but I will keep you posted.

I know, going to the pub telling people that I'm working on OUM "which now has Envision as well!" won't get me the hottest chicks wanting to take me home. But hey, to most girls telling them you are in the IT business is a turn-off anyway, so now you know why I dare to tell you I'm quite excited about this release going to hit the streets!

And as soon as it does, I will let you know ...

Wednesday, April 25, 2007

When the going gets tough ...

Man, have I been busy lately!

Actually that is a very lame excuse for not having posted for a long, long time. The excuse gets even lamer when considering the reasons for being busy, namely being lead developer / project manager for two projects at the same time. So what happened to me using agile development principles and being in control and all that crap? Do I not only talk the talk but also walk the walk?

Let me put it this way. If I hadn't done so I probably would have had a complete burn-out, as just like in real life both projects became the busiest at exactly the same time. And just like the average male, I'm very bad at doing two things at the same time. So I had to go back to my manager and say: "Hey, I only have one brain (and what a fine brain that is, ahem), but you are asking me to do a two-brain job. So if nothing is going to change, I need to make one of those jobs a no-brainer so that I can concentrate on the other one." And as nothing changed, that is what I did.

This made the other project manageable again, but for someone like me it did not make my job much fun anymore, as I like to do things properly. So that was when I decided to go to Costa Rica for a holiday. When I got back, I felt so much better again. I even recovered the energy to start thinking about reviving this blog. It still took two months to actually do it, but what the heck.

So, all you nice people out there, my lesson for today is: when the going gets tough, just walk away and have a vacation!

Well that's not true of course, but it does help. So next time I hope to write about my experience on the better of the two projects that I dare to qualify as a really 'agile' project.

Tuesday, October 03, 2006

How to get agile in an agile way

For some peculiar reason I'm involved with three customers at the same time wanting me to assist them in changing their way of doing systems development. And I say peculiar, because although this has been my area of expertise for many years, there has not been a single customer asking me to help them with this for almost five years. Maybe it's a sign of the times, or maybe it is because I stopped dying my hair, letting my gray hairs come through, so customers finally start to take me seriously. I don't know.

Anyway. At the same time I see many people turning away from Big Design Up-Front, plan-based methodologies or (at the other side of the spectrum) from working without any formal method at all. Both groups have in common that they have an urgent need to get a better grip on (the planning of) the development process, and it is likely they will meet each other somewhere 'in the middle', where the agile methodologies are.

It's funny to notice that getting started with agile methodologies is itself also best done in an agile way. With that I mean starting with a minimal set and, from there, discovering where and when you need to scale up. But although you start with a minimal set, it is still important to be aware of which direction you want to go. It is even more important to ensure that the people that will be doing the actual work agree this is the direction you all want to go. Otherwise the risk of resistance, up to a level where you will fail miserably, will be unacceptably high.

For that reason, many of the techniques I suggest introducing into the development process I actually use for introducing the techniques themselves. Like a meta-method, if you will. Depending on what I think would fit, this could be using the MoSCoW principle for prioritizing the changes, but also making people aware of the similarities between the Onsite Customer practice and their being involved in changing the development process.

This has many advantages. The most important ones are that I verify early in the process whether people actually understand a technique and whether it will work for them. While doing so I have the opportunity to implement a technique in a trial-and-error way, securing it not only in a formal way but also by creating enough support for it to last after I have left the building.

Monday, September 25, 2006

Practice what you preach

Man, I could not have imagined in my wildest dreams that I would write about time boxing and prioritization again this soon. Or couldn't I? Wish it were true, as knowing myself it would be the mother of all lies to say so, as it was almost the day after my last posting that fate struck again. Read on and you will understand why I haven't been updating this blog for more than two weeks now.

What was the case? I was about to move house in two weeks and still had a lot of things to do. One of them was visiting a SOA Suite 'code camp' in Slovakia at which I was to present on Oracle Business Rules, and I still needed to prepare myself. But at the same time many things needed to be done privately to prepare for the move. I had already had some extremely busy weeks before, so I saw going to Slovakia as an opportunity to escape and get some rest before the move itself. And boy, what a mistake that was.

As always it started with too many things to do in too little time. We have a parquet floor in our new house that looked so dull that I nearly fell asleep just by looking at it. So we decided that it needed a different colour. But the ceiling, walls, casing, etc. also needed to be repainted. I hoped that during my stay in Slovakia the painting could be done and the floor sanded. Painting alone would have taken at least two weeks if I had done it myself, but as my brother-in-law was doing it, and since he is a professional, I hoped for some magic here.

And magic he did, but when I got back on Friday the floor still needed to be sanded. I hoped (but was not really expecting) to be able to sand and clean in one day. Two of my best friends helped me out on Saturday, but on Sunday I was on my own. And despite my best efforts it took me until Monday before the sanding was finished, and then I still had to clean, paint and enamel. So despite the fact that I was supposed to go to the office, I was pushing myself to the limit trying to get the floor painted, working for eight hours in a row, with no rest except for a quick coffee.

I had to compensate on Tuesday evening to get some urgent work done, so on Wednesday I still had not packed a single cardboard box. And what a contrast that was with my original planning, which assumed that I would start packing somewhere during Sunday! As a result I had one Wednesday evening and one Thursday left to pack before Friday, the day the removers would come. And I only had that Thursday because I could switch it with Friday, which originally was my day off.

So what you then do is stop thinking, start packing like a madman, and see how far you get. At 01:00 we were almost done! But not quite, so when the removers came at 9:00 it was utter chaos in our old house, and a couple of boxes were carried away while I was still packing them. There was only small comfort in hearing that the removers were used to this, because they saw it happening all the time. Despite my wife pleading with me not to do so, I had to leave for the office at 11:00. Later on my wife told me that that was the moment she knew for sure she was married to a lunatic.

Now what has all this to do with time boxing and prioritization? Everything! It perfectly illustrates a situation we encounter very often: a customer wanting every requirement to be implemented, which in theory makes time-boxing using the MoSCoW principle (leaving out Should Haves when they no longer fit in the time-box and there is a temporary work-around) almost impossible. In my case the requirement was that everything should be packed before the removers came. The time-box was obvious: Friday 9:00 the removers are there, period.

However, no matter what kind of customer you have, in practice MoSCoW can at least be used to determine the best order in which to do things: the most important things with the biggest risks first. In my case that meant preparing the big and heavy things for removal, like securing the washing machine. I got that done. And even though customers say they want every requirement implemented, when no time or budget is left they suddenly turn out to be perfectly capable of making compromises and dropping Should Haves like they couldn't care less. Just like that. In my case the compromise was that the removers left enough stuff behind that, as a work-around, I had to drive to the old house two more times myself to get it empty.
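That ordering idea, most important first and, within the same priority, biggest risk first, can be sketched in a few lines of Python. This is only an illustration: the task names and risk scores below are invented for the example, not part of MoSCoW itself, which only defines the four priority classes.

```python
# MoSCoW priority classes, most important first.
PRIORITY = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

def plan(tasks):
    """Order tasks by MoSCoW class, and by descending risk within a class."""
    return sorted(tasks, key=lambda t: (PRIORITY[t["moscow"]], -t["risk"]))

# Hypothetical moving-day backlog (names and risk scores are made up).
tasks = [
    {"name": "pack books",             "moscow": "Should", "risk": 1},
    {"name": "secure washing machine", "moscow": "Must",   "risk": 9},
    {"name": "label every box",        "moscow": "Could",  "risk": 1},
    {"name": "pack kitchen",           "moscow": "Must",   "risk": 3},
]

for task in plan(tasks):
    print(task["name"])
```

When time runs out, you simply stop working down the list; everything still undone is, by construction, the least important and least risky part.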

Maybe next time I will tell you about the similarities between something all people understand, namely that you don't leave your garbage behind for the new owners of your old house, and the principle of refactoring, which only a few software developers seem to understand.

Wednesday, September 06, 2006

Excuse me, but what are you talking about?

You don't even need to watch carefully to discover that two people who seem to be talking about the same thing are often, in fact, talking about something completely different. "Tell me something new", you might react. And you're right, I'm not telling you anything new; it happens all the time. And that is the saddest thing in my business, because it costs us all a huge amount of time, money and frustration.

Knowing all this, it always amazes me how apathetic we seem to be about it, because it is so easy to do something about it. Try asking people what exactly they mean when they tell you something, that's all. Simple enough, wouldn't you say?

The other day I had a conversation with someone about preventing overruns on activities, and he said he was using the time boxing principle for that. But from the rest of the story it became clear that there was still a considerable track record of missed deadlines. And as always the problem was that the customer kept adding new requirements during the project and, naturally, was not able to prioritize them properly. Now where have I heard that story before?

So I simply asked this guy what he meant by this "time boxing" principle. Well, not exactly like that, I must admit. In practice it is not always that simple, because before you know it people get the impression you think they are stupid or something. Or even worse, they start to think that you are stupid, and you don't want that, do you? So what I asked instead was how they had implemented this "time boxing" principle. Do you notice how different the question sounds, even though in principle I am asking the same thing?

And what he told me is that they had divided the functionality into smaller pieces that, on average, could be implemented in a couple of weeks, and that they had set a final delivery date for every piece of functionality. Plain and simple, right? "So what happens when you find out that something takes more time than anticipated?", I asked. "Well, we have planned for contingency, of course", was the reply. "And what if the customer finds out that the specifications are not complete?" "Well, we create a change request for every extra thing they want and see if we can implement it in the time box. Otherwise we postpone it to the next time box or, if they really, really need it, increase the scope and set a new end date. But that will cost them, of course". "Have you ever heard of the MoSCoW principle?", I asked. "Huh?".

So what I found out with this simple question is that the time boxing principle had been implemented poorly, because they were not applying a proper prioritization principle, without which time boxing is bound to fail.

I'm not going to explain here what time boxing and the MoSCoW principle are all about. You can easily find that out elsewhere. And I'm also not going to tell you that using them is as easy as it sounds, because it's not. Maybe some other time I will tell you what you can try to do about that. Nor will I explain here that normally you don't plan for contingency when you do time boxing, and what a waste of time writing change request forms is in that case.

But I do hope I gave you an example of how asking simple questions can help you prevent people from wasting each other's time, and especially your own precious time.

Wednesday, August 30, 2006

Live longer with source control!

I confess, I made a bad mistake some time ago. I started to work on a project that had no proper source control system in place. Instead, zip files with source code were copied back and forth and merging was done manually using WinMerge. And I did not take a stand and say: "I want a source control system or I'm outta here!". Not that it would have helped, but it is the principle that counts.

As a result I spent at least two hours every week merging code. And then there was the risk that something would go wrong and I would need to spend at least another hour finding out what was going on and correcting the problem! As I have the memory of a goldfish (*), those were the two hours in which I picked up the phone saying "I'll call you back!", only to put it down without taking the effort to notice who was calling. People could have said my boss was standing outside giving free money to whoever asked for it without me hearing them, let alone being interested. By Turing, what stress!

Now I'm in a similar project situation where we do use a proper source control system, in this case Subversion. If they use a source control system in Heaven, I'm sure it is Subversion. It is not flawless, but what a blessing it is! But if you are using CVS, that's OK too. And there are probably more cool source control systems that I don't know of, but since I don't know them, who cares.

By the way, did I already tell you I wrote a white paper about how to use CVS in combination with JDeveloper? No? OK, I wrote a white paper about how to use CVS in combination with JDeveloper. You can download it from OTN. It is based on JDeveloper 9.x and perhaps a little outdated, but it still explains the principles of source control and what it can do for you. JDeveloper 10.1.3 already offers much better integration, also with Subversion. I hope to be able to write a white paper about that too. Maybe it helps if you call my manager and tell him you want me to write it. Just say the word and I'll give you his phone number!

Anyway, the point I am trying to make here is this. The effort of installing and maintaining a source control system like Subversion is only a fraction of the time people spend on things like manual merging. And then there is the risk of things going wrong. Come on, project managers, you can do without that! It is hard to give you exact metrics, but based on experience I dare say you have already crossed the break-even point once two people have worked together for a month. After that, you definitely keep money in your pocket. And the developers are happier too!
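A back-of-the-envelope version of that break-even claim: the two hours a week of manual merging per developer comes from my own experience above, while the setup and maintenance figures are assumptions picked purely for illustration.

```python
# Rough break-even estimate for introducing source control.
# merge_hours_per_week comes from the experience described above;
# setup_hours and maintenance_hours_per_week are assumed figures.
merge_hours_per_week = 2        # manual merging, per developer
developers = 2
setup_hours = 8                 # one-off: install and configure the repository
maintenance_hours_per_week = 1  # ongoing admin for the whole team

def net_hours_saved(weeks):
    """Hours saved by source control minus hours spent on it, after `weeks`."""
    saved = merge_hours_per_week * developers * weeks
    spent = setup_hours + maintenance_hours_per_week * weeks
    return saved - spent

for weeks in (1, 2, 3, 4):
    print(f"after week {weeks}: net {net_hours_saved(weeks):+d} hours")
```

With these (assumed) numbers two developers are already ahead somewhere in week three, which is consistent with the month-or-less claim; with a bigger team the break-even point comes even sooner.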

And once you have it, you suddenly realize it offers opportunities you were not even aware existed. Like going back in an instant to a previous version (for example the one that is in acceptance test), fixing an issue there, and moving back to the current version with almost one click of the mouse. Or finding out who implemented that one great feature, so you can award him or her the programmer-of-the-week medal of honor!

I promised myself that the next time they send me to a project where no proper version control is in place, I will simply refuse, saying that it's bad for my health. I advise you to do the same and tell your project manager I said so. I'm sure no further explanation will be needed.

(*) Actually, the saying 'having the memory of a goldfish' appears to do great injustice to this fellow creature. If what they say at Goldfish pass memory test is true, I would count myself fortunate to have the memory of a goldfish.

Monday, August 28, 2006

Don't ever let me hear you say "non-functional" again

The other day I had a discussion with a colleague about what many people call "non-functional" requirements (as opposed to functional requirements), that is, requirements regarding performance, security and so on. This colleague argued that “non-functional” expressed exactly what these requirements are.

“OK”, I said. “Suppose you want to buy a car. Can you agree with me that its top speed is a non-functional requirement? Especially since normally any car can easily drive over 120 km/hour, which is the speed limit in Holland anyway, right?” “I agree that the top speed is a non-functional requirement, yes”, he replied. “So what you are saying here is that you might not even take a look at the specifications, but just assume that it can drive fast enough. But at the same time you would not take the car when it would not meet functional requirements like having a CD player, enough space to stow your kids and a sun-roof?”, I asked. “Yes”, he said, “that’s what I’m saying”.

“Now what if you are a sailor that happens to live in Munich but who sails from Kiel. Mind you, the distance between these two cities is about 700 km (435 miles). But fortunately - with the exception of a few spots - there is no speed limit in Germany! You have a lovely wife and you want to spend as much time with her as possible. Would you then have a look at its top speed?”, I asked. “Ermmm…”

The point I made here is that what some people consider to be a non-functional requirement could very well be a functional one to somebody else. Logging, for example, would be a non-functional requirement for the average user, while an EDP auditor might have a totally different opinion about it.

So please stop using the term “non-functional”, people! In real life people say something is non-functional when it does not work. The term is confusing and imprecise, whereas the development of information systems is a very precise job. Use “supplementary requirements” instead. The only disadvantage I can think of is that I still have trouble saying "supplementary requirements" without twisting my tongue.