Cool DSL with Groovy

Domain model design has never been confused with 'ease'. From the dawn of its conception, generating executable Unified Modeling Language (UML) diagrams meant sweat and frustration. Generating stubs and skeletons alone led one into the quagmires of Java Enterprise Edition. Yet it is inherently possible, and actually has been for ages, to design a programming language dedicated to solving specific domain problems, effortlessly and quickly.

A DSL is not a foreign thing. Developers are steeped in DSLs, even if unwittingly. Frameworks are DSLs. A macro is also a DSL.
When you write a function? That too is steeped in DSL. But functions are often a dirty business: un-standardized and quirky, so that even domain experts face a daunting task when unraveling, and then being put to the task of updating, such rapscallion boilerplate (that is, if they do not themselves compound the problems).
A programmer worth his reputation would find it necessary to isolate and clean up all non-domain-specific terms within the DSL. This literally means rooting through and setting aside all the setup code of secondary importance, allowing the domain expert to establish or reestablish the primary foundation code.
This is most times done for large money, at the expense of the end user: the helpless customer.

Given an appropriate DSL that fits their needs, customers could write all of the
code that they need themselves, without having to be programmers.

Now let us imagine, or better, take for granted, that a good DSL experience can exist.
That given an appropriate and standardized DSL environment, even the customer,
with a few simple tools and a grasp of the concepts, can write code themselves, gracefully, and
specific to their needs, without the cost and hassle of having to become, or run to,
programmers. And that is where Groovy comes in.

What's Groovy? It's Java as it should be. Easy and intuitive, it offers new features still unknown to its parent (I'm still waiting for release 7), and it comes with the benefits that form the basis of the DSLs we will develop.

A fundamental feature is the MOP (Meta-Object Protocol), the ability to change the properties and behaviour of objects at runtime. It allows us to respond to method calls that do not exist in the class, in other words to "pretend" that these methods exist.
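
A minimal sketch of the MOP at work (the class and method names here are invented for illustration): by implementing methodMissing, an object can respond to calls that were never declared.

class DynamicFinder {
    def methodMissing(String name, args) {
        // pretend that any findByXxx(value) method exists
        if (name.startsWith("findBy")) {
            def property = name["findBy".size()..-1].toLowerCase()
            return "querying by ${property} with ${args}"
        }
        throw new MissingMethodException(name, DynamicFinder, args as Object[])
    }
}

def finder = new DynamicFinder()
println finder.findByName("Paolo")   // querying by name with [Paolo]
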
Another essential to consider is the Closure. It's the real power of Groovy. The extreme flexibility of this kind of object/method allows us to change its behaviour by replacing its delegate on the fly.

Closure delegate. Groovy defines three variables inside each closure that refer to different objects in its scope: this, owner, and delegate.
The this variable refers to the enclosing class. The owner variable is the enclosing object of the closure. The delegate variable is the same as the owner, unless the delegate is substituted.

class MyClass {
    def closure = {
        println "this class:" + this.class.name
        println "delegate class:" + delegate.class.name
        def nested = {
            println "owner nested class:" + owner.class.name
        }
        nested()
    }
}
def tryme = new MyClass().closure
tryme.delegate = this
tryme()
/*
output:
this class:MyClass
delegate class:Script1
owner nested class:MyClass$_closure1
*/

When a closure encounters a method call that it cannot handle itself, it automatically relays the invocation to its owner object. If this fails, it relays the invocation to its delegate. One of the reasons builders work the way they do is that they are able to assign the delegate of a closure to themselves.
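
Groovy's MarkupBuilder is a handy illustration of this trick: the builder assigns itself as the delegate of the closures it receives, so unknown calls such as customers or customer below (the names are arbitrary) are routed to the builder and become XML elements.

import groovy.xml.MarkupBuilder

def writer = new StringWriter()
def xml = new MarkupBuilder(writer)
// each unknown method call inside the closures reaches the builder via its delegate
xml.customers {
    customer(id: 1) {
        name("Paolo")
    }
}
println writer
/*
output:
<customers>
  <customer id='1'>
    <name>Paolo</name>
  </customer>
</customers>
*/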

7 steps to MDA revolution

I didn't believe that such a successful project would be such a rare event in the IT industry; that's why I've never caught another chance to apply the lessons learned. I thought that the experience accrued on Model Driven Architecture would be reusable in other circumstances, though I've never seen concepts such as executable UML or MDA either applied or even mentioned in the engagements I've pursued since.
The idea of this project wasn't conceived by external consultants thirsting to sell their cool technology; instead, it was born and grew up inside the development team. The architectural transition was gradual: little by little, as new automation scenarios penetrated our excited minds, we moved as many development processes as possible under the MDA framework.
Despite my early impressions while considering whether to undertake the project, upper management embraced it and laid down investments, counting on the benefits this new approach would provide to development.
What is difficult to change is the modus operandi of a 300-employee company that offers banking services and applications, engaged by default in one of the most conservative fields in technology and development methodologies. It was a significant jump in service development, and when the PM remarked: “We are developing like dinosaurs; don't you know what the hell happened to them?”, the way to MDA was traced.

The issues we faced with the introduction of modelling notions would be better described as practical contingencies than theoretical or philosophical concerns, foremost the mess in the business layer. Over time it raised reliability and maintenance weaknesses, even security holes, which sounded very bad in a company with plenty of banks as customers.
The hundreds of use cases developed by dozens of engineers turning over through the months in the Java development area had reached critical mass, enough to trigger an explosion/implosion of the whole system. On the other hand, the applications could stay up only at high maintenance costs and with lazy deliveries, due to the difficulty of integrating incoming services with the underlying system.
The application layer managed the data flows between clients at the top and feeds and legacy information systems at the bottom. On their way, the flows were reshaped by business process rules hardcoded in obscure Java classes. Unfortunately, most of those details were lost, because the development policy didn't require documentation, and it was damned annoying to go back and take over old artefacts to maintain or update the rules. Only skilful programmers could extricate the balled-up code. The critical mass had to drop and be brought to lower temperatures quickly. New developments and dozens of incoming features were planned, so a deep refactoring was a must; it could wait no more.

How did the domain layer implementation give rise to the problems highlighted above?
The developers' effort was mainly focused on creating Java classes implementing a Command and defining the service to the framework through an XML descriptor. The input and output of such a command was a raw DOM argument, which was parsed to extract the input data needed by the business transaction; most of the coding concerned parsing and filling the service's response, which was a raw XML document too. I don't think it is agreeable to spend most of the development effort merely on managing input/output data and mapping, but this was the daily job.
Applying MDA takes time; it was a one-year evolution, and it can be summarized as:

  1. XSD barriers. It was necessary to set some boundaries for developers in order to gain a minimum of control over the data flows. Each service had its own formal validation on input/output data, though no restrictions were placed on how to implement the services. Never were elements or attributes allowed that had not been defined in advance by agreement.
  2. POJO. Replace the raw document with a simple POJO as the argument of the call-back methods; this operation combines formal validation with an easy approach to data manipulation. The XML-Java binding isn't a hurdle: it is automatic, and many available libraries can accomplish this step (see the sketch after this list).
  3. First hints of EMF. XSD files are models for XML data; EMF is a Java implementation of MOF, a general abstraction for writing all sorts of models. I won't linger over it now, but it represented a jump towards service modelling. EMF is an open source library included in the Eclipse platform, easy to use and customizable; it aims to keep the abstract model separate from the concrete ground.
  4. Choice of technology. Playing with EMF opened new horizons on modelling facilities. Hooked by this methodology for designing SOA applications, I realized EMF was not enough: UML (whose core principles are inherited from MOF) fits much better my purpose of designing the object model, defining the process flow, and modelling the user experience. UML offers diagrams that you can join together; static and dynamic models can describe most of an application's structure and behaviour.
  5. Executable UML. What do you do with this bunch of diagrams if you can't transform them into real artefacts and plug them into your SOA framework? Not much: keeping UML diagrams without the related transformations and executions is fine for documentation, but not much more than that. At that time the company joined the Rational beta program, and I started to develop Eclipse-compliant plugins which leveraged the power of the Eclipse UML2 implementation.
  6. How to define data mapping? One of the main obstacles encountered was the data mapping between two different structures. It happens when you need to connect two or more components inside a service call, each of which has a different data structure. In this case UML doesn't provide any help, and you have to customize the model with special stereotypes and profiles.
  7. Sequence and state diagrams. Class diagrams were used to generate Java classes, XSD files, and COBOL copybooks. Sequence diagrams, on the other hand, describe the flow of processes and their business rules, even conditional instructions, which may be transformed into BPEL or custom service descriptors. State diagrams show their benefits in modelling the user experience and the steps to complete an operation; they easily track the state of sessions and can be transformed into the MVC system, as well as into whatever rich client form.
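
As a minimal sketch of the XML-Java binding mentioned in step 2 (the class, field, and element names are hypothetical, and xml stands for the incoming request document), JAXB turns the raw DOM handling into plain POJO access:

import java.io.StringReader;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical POJO standing in for a generated service argument; JAXB
// binds it to/from the XML that the service's XSD already validates.
@XmlRootElement(name = "transferRequest")
@XmlAccessorType(XmlAccessType.FIELD)
class TransferRequest {
    String account;
    long amount;
}

// Unmarshalling replaces the hand-written DOM parsing of the old commands:
TransferRequest req = (TransferRequest) JAXBContext
        .newInstance(TransferRequest.class)
        .createUnmarshaller()
        .unmarshal(new StringReader(xml));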

Have your say.

Chain of failures on blocking threads

I came back to Milano a little while ago, and in this new job I've bumped into an API implementation. It will be a library that aims to interact with a remote application through a simple text-based protocol.
The typical process is a sequence of authorization – session initialization – command processing – session disposal, each step enclosed in an atomic request/response interaction. The simplest and most immediate approach is to write the protocol stubs and manage them through simple methods that elaborate such commands at a low level, handling TCP sockets and the client/server handshaking with synchronous calls.
Sometimes the simplest way is the best way, but not this time, especially within multi-layer structured systems, where every component depends on many others, and any of those can fail.
This task rang an alarm bell for me, due to a recent project that looked like this one; I can still remember the effects of hangs and missed responses in a SOA context. Fortunately, the event happened during a load test:

The application was a client interacting with Openfire through XMPP. The investigation uncovered a bug that caused a deadlock in a connection pool under certain conditions. The consequences were easily predictable: fast resource exhaustion, soon causing an application breakdown. The application server was down, but the client side was also unrecoverable, since the unresilient application architecture didn't foresee hung requests.

What is unacceptable is the chain of failures that a problem like this can disseminate along the process path. What about combined systems, where one side does not expect the other side to hang or stop responding?
Domino is a pleasant show: you watch all the pieces tracing doodles as they fall. It's funny, but only as long as it doesn't look like your system at work.

Don't play domino, be skeptical (and use the concurrent package)

Blocking threads may occur every time you attempt to get resources out of a connection pool, deal with caches or registered objects, or make calls to external systems, as in the unfortunate experience above. I mean: be distrustful of each component you query, decoupling systems as much as necessary to skirt failure propagation. If your component is properly protected from its neighbours, the probability of failure clearly drops.
What does this mean in practice?
If you're dealing with sockets, you're unaware of the peer's status except when you send or receive bytes, so check connectivity by polling with fake sends, and use setSoTimeout(int timeout) to prevent blocking reads.
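
For instance, here is a minimal sketch of a read guarded by setSoTimeout; the host, port, and timeout values are placeholders, and the snippet assumes an enclosing method that declares IOException:

import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

Socket socket = new Socket("legacy-host", 7777);  // placeholder peer
try {
  socket.setSoTimeout(2000);        // any blocking read now fails after 2 seconds
  InputStream in = socket.getInputStream();
  int firstByte = in.read();        // throws SocketTimeoutException if the peer stays silent
}
catch (SocketTimeoutException e) {
  // timeout: abort this unit of work instead of hanging the thread
}
finally {
  socket.close();
}
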
However, I find it much more effective to isolate the whole business unit in a single time-boxed job, because delays may also come from huge responses, such as an unbounded result set or file fetching.
If you allow clients to set timeouts, the request thread quits the operation when the call does not complete in time. Easy?
Concurrent programming is hard, it requires high skills, and rolling your own is discouraged: don't reinvent the wheel. The java.util.concurrent package helps you craft your code with timeout controls, as in the following example, where I encapsulate a unit of work (a login) in an ExecutorService.

public class Login implements Callable<Void> {

The Login action implements the Callable interface; unlike Runnable, it may throw checked exceptions when executed.

 ScheduledThreadPoolExecutor exe = new ScheduledThreadPoolExecutor(1);
 Login login = new Login(user, password);
 Future<?> res = exe.submit(login);
 try {
   res.get(commandTimeout, TimeUnit.MILLISECONDS);
 }
 catch (TimeoutException e) {
   log.error("login timed out", e);  // the task exceeded commandTimeout
 }
 catch (InterruptedException e) {
   Thread.currentThread().interrupt();  // restore the interrupt flag
 }
 catch (ExecutionException e) {
   log.error("error on login", e);
 }
 finally {
   res.cancel(true);  // interrupt the worker thread if it is still running
 }

The tip is to launch the Callable through ScheduledThreadPoolExecutor.submit and wait for the task's end through Future.get(long timeout, TimeUnit timeUnit). If the task completes within the specified timeout, get returns normally; otherwise a TimeoutException is thrown.

N.B.: in this last case, when the timeout occurs the ExecutorService doesn't take care of the still-running thread by itself, so don't forget to execute Future.cancel(true) in the finally block.