Archive for the ‘programowanie’ Category

Introduction

In this post I’m going to describe how to use a hibernate id generator to produce a portable solution for generating sequences with user-defined prefixes – part of an issue tracking system (very simple for the sake of brevity).

Requirements

As you might have guessed, the requirements are not that new and they resemble what we all know from other systems in the field. Each ticket is created within a workspace, with a prefix that can be defined per workspace. Ticket numbers should start from 0 and increase. In case the user decides to change the prefix:

  • the old issues should keep the old prefix,
  • while new ones should be named after the new one,
  • and the numbering should not be influenced by the prefix change.

Solution idea

The entity hierarchy below reflects part of the business, with the Client entity used as the root for all the rest of the nodes and having references to Projects (yes, for the simplicity of this post there’s nothing else :)) – you can see it depicted in diagram #1.

Diagram #1

From a technical point of view this did look much like a sequence table where project-id would be used as the primary key and an increasing value in the second column would be used as the ticket number.

Technical details

MySQL is the RDBMS of choice, so it felt natural to use some of its built-in functionality and just wrap it with Hibernate. My idea was to use a composite primary key with an auto-increment column, which would allow keeping an index per project and restarting the counter for each newly inserted project.

The DDL for such a table looks fairly simple:

CREATE TABLE animals (
    grp ENUM('fish','mammal','bird') NOT NULL,
    id MEDIUMINT NOT NULL AUTO_INCREMENT,
    name CHAR(30) NOT NULL,
    PRIMARY KEY (grp,id)
) ENGINE=MyISAM;

And with simple inserts:

INSERT INTO animals (grp,name) VALUES
    ('mammal','dog'),('mammal','cat'),
    ('bird','penguin'),('fish','lax'),('mammal','whale'),
    ('bird','ostrich');

Produces:

+--------+----+---------+
| grp    | id | name    |
+--------+----+---------+
| fish   |  1 | lax     |
| mammal |  1 | dog     |
| mammal |  2 | cat     |
| mammal |  3 | whale   |
| bird   |  1 | penguin |
| bird   |  2 | ostrich |
+--------+----+---------+

Copied from mysql docs

This is exactly the behaviour I wanted, so you might ask why not use it and stop wasting other people’s time with this post. Well, there’s a little detail hidden in the DDL statement above. A careful reader might have noticed that it requires the MyISAM engine/table type, which among other things is non-transactional. If you try to use it with the InnoDB engine, you end up with a global auto-increment despite the composite key.

Hibernate generators

From the very beginning all of this SQL was supposed to be wrapped inside a hibernate generator, but after finding out that the pure MySQL solution was off the table, I decided to analyse what exactly generators are and what they have to offer. This resulted in reading the generator discussion, after which I found out that the hibernate team had almost solved my problem…

MultipleHiLoPerTableGenerator

If you’re like me – then you have probably never heard of MultipleHiLoPerTableGenerator, which can generate a sequence per table and takes care of:

  • creating the table with dialect-specific DDL
  • creating the initial sequence entry if non-existent
  • using row-level locking for concurrent access
  • using a separate table for sequences, which means that read-only queries are not affected
  • retrying in case the sequence was updated in between
  • and last but not least – it’s portable

Everything comes with its price of course – this generator’s code is barely readable, not to mention trying to debug what is actually happening.
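For reference, wiring such a generator through annotations looks roughly like this – a sketch only: the strategy class and parameter names follow Hibernate 3.x’s MultipleHiLoPerTableGenerator, while the entity, table and column values are illustrative assumptions:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.hibernate.annotations.GenericGenerator;
import org.hibernate.annotations.Parameter;

@Entity
public class Ticket {

	// the table/column values below are illustrative, not the post's code
	@Id
	@GeneratedValue(generator = "ticketNumbers")
	@GenericGenerator(
		name = "ticketNumbers",
		strategy = "org.hibernate.id.MultipleHiLoPerTableGenerator",
		parameters = {
			@Parameter(name = "table", value = "sequences"),
			@Parameter(name = "primary_key_column", value = "sequence_name"),
			@Parameter(name = "value_column", value = "next_val"),
			@Parameter(name = "primary_key_value", value = "ticket")
		})
	private Long number;
}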

IdGeneratorWithDynamicKey

So I decided to follow the path shown by the hibernate team, do some copy-pasting of their generator, and start my own. Here’s the step-by-step (a structural sketch follows the list):

  1. An artificial entity called TaskNumber was created.
  2. It contains an embeddable id consisting of Project and number.
  3. The embedded id is translated into a database composite key.
  4. The composite key ensures that there are no two tickets with the same number in the same project.
  5. The id field is annotated with a generator annotation.
  6. The generator takes care of creating the table for storing sequences.
  7. Before each insert of a TaskNumber it increments the per-project sequence.
  8. The newly updated sequence value is applied to the TaskNumber and persisted.
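To make the steps concrete, here’s a minimal structural sketch of the entity and its composite key – the names come from the list above, the Project entity is assumed to exist, and the actual generator wiring is omitted:

import java.io.Serializable;

import javax.persistence.Embeddable;
import javax.persistence.EmbeddedId;
import javax.persistence.Entity;
import javax.persistence.ManyToOne;

@Entity
public class TaskNumber {

	// translated into a composite primary key (project_id, number)
	@EmbeddedId
	private TaskNumberId id;
}

@Embeddable
class TaskNumberId implements Serializable {

	@ManyToOne
	private Project project;

	// per-project sequence value, assigned by the generator before insert
	private long number;
}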

Summary

As one might expect the code is complex and hard to debug – the only positive side is that all the bumps have already been handled by the hibernate team, so it should produce fewer headaches in the future when real concurrency and simultaneous updates occur.

See it for yourself 🙂

Introduction

In this post I’m going to describe how cyclic dependencies can cause headaches after long hours of debugging, and how to solve them using the publisher-subscriber design pattern in the form of a very simple message relay/bus.

I’ll stick to simple environment:

  • jquery 1.10
  • requirejs 2.1
  • jasminejs 1.3.1

There will be some optional dependencies introduced on the way:

  • phantomjs
  • nodejs
  • grunt
  • madgen

After presenting the solution I’m going to show a working example with interchangeable parts on both the notifier and listener sides.

Problem description

So there you have it – you built your application using requirejs, with modules decomposed using the MVC pattern – the model is managed in the DAO layer, which communicates with an online (remote) and an offline (local) database. Configuration for both of them is stored locally in the Settings module, and at some point Settings got saved in the database, which introduced a cyclic dependency:

DAO <-> Settings

So the only way to deal with it in the current architecture was to give DAO knowledge about Settings and Settings knowledge about DAO. This leads to a very complex application structure, which may also be called everyone-knows-everyone.

e.g.


define("settings", ["fs", "dao"], function(fs, dao) {

  var fsSettings = fs.read("settings.json");
  // eval it
  var settings = Object.create(fsSettings, dao.read("settings"));

  function read(key) {
   return settings[key];
  }

  return {
    read: read
  };
});

define("dao", ["settings"], function(settings) {

  function Dao(url){}

  Dao.prototype.init = function daoInit(url) {};

  var dao = new Dao(settings.get('url'));

});

RequireJS solution

You could use the special exports keyword (described here), but this means that your modules preserve the tight-coupling anti-pattern.

Publisher/subscriber – message bus

Publisher-subscriber is a design pattern that allows low coupling between different modules of the system while keeping each module highly cohesive. Simple requirements written in BDD/jasmine style would look like this:

describe("message bus", function() {

  it("should allow adding listener for event", function() {

    mBus.addEventListener("myEvent", function(){});

    expect(mBus.length("myEvent")).toEqual(1);

  });

  it("should allow removing listener for event", function() {

    var fn = function(){};

    mBus.addEventListener("myEvent", fn);

    expect(mBus.length("myEvent")).toEqual(1);

    mBus.removeEventListener(fn);

    expect(mBus.listeners("myEvent").length).toEqual(0);

  });

  it("should allow broadcast", function() {

    var fn = jasmine.createSpy('fn'),

    o =  {prop:11};

    mBus.addEventListener("myEvent", fn);

    expect(mBus.length("myEvent")).toEqual(1);

    mBus.notify("myEvent", o);

    expect(fn.calls.length).toEqual(1);

  });

  it("should allow additional data within broadcast", function() {

    var fn = jasmine.createSpy('fn'),

    o =  {prop:11};

    mBus.addEventListener("myEvent", fn);

    expect(mBus.length("myEvent")).toEqual(1);

    mBus.notify("myEvent", o);

    expect(fn.calls.length).toEqual(1);

    expect(fn).toHaveBeenCalledWith(o);

  });
})

As you can see, it is assumed that apart from holding references to listeners the message bus is stateless, which will make some things down the line easier.

Let’s make these tests pass!


define(function() {

  var listeners = {};

  function addEventListener(event, fn){
    listeners[event] = listeners[event] || [];
    listeners[event].push(fn);
  }

  function removeEventListener(event, fn) {
    if (listeners[event] === undefined) {
      return;
    }
    var id = listeners[event].indexOf(fn);
    if (id !== -1) {
      listeners[event].splice(id, 1);
    }
  }

  function removeAllListeners(event) {
    if (!event) return;
    listeners[event] = [];
  }

  function notify(event, data) {
    if (listeners[event] === undefined) {
      // skip
    } else if (listeners[event].length) {
      listeners[event].forEach(function(fn) {
        fn(data);
      });
    }
  }

  function length(event) {
    return listeners[event] && listeners[event].length || 0;
  }

  function clear() {
    listeners = {};
  }

  return {
    addEventListener: addEventListener,
    removeEventListener: removeEventListener,
    removeAllListeners: removeAllListeners,
    notify: notify,
    clear: clear,
    length: length
  }

});

Rewrite components

So now we have a very basic implementation of MessageBus, which can be applied to solve our problems with cyclic dependencies. We’re ready to rewrite our components in a way that makes them independent of each other.


define(["MessageBus"], function(mBus) {

  mBus.addEventListener("dao:init", dbSettingsReady);

  var settings = readFsSettings();
  settingsReady(settings);

  function readFsSettings(){
    return {};
  }

  function settingReady(key, value) {
    mBus.notify("settingReady:" + key, value);
  }

  function dbSettingsReady(dbSettings) {
    settingsReady(dbSettings);

    settings.__proto__ = dbSettings;
  }

  function settingsReady(s) {
    for (var i in s) {
      settingReady(i, s[i]);
    }
  }

// no direct access to settings!

});
describe(["MessageBus"], function(mBus) {
 var dao = new Dao();
 mBus.addEventListener("settingReady:url", function(url) {
    dao.init(url);
    mBus.notify("dao:init");
 });

 return dao;
});

Testing

As you can see, some of the logic is hidden behind events and there’s no way to access it directly – so how does this make our test cases easier?

Spies to the rescue!

Using the jasminejs spy methods we can pretend there’s an instance of mBus and invoke our logic as if there were real interactions.


describe("Settings", function() {

  // prepare mock dependencies
  var mBusMock = jasmine.createSpyObj("mBus", ["addEventListener"]);

  define("mBusMock", function() {
    return mBusMock;
  });

  var r = require.config({
    map: {
      'Settings' : {
        'MessageBus': 'mBusMock'
      }
    }
  });

  define(["Settings"], function(s) {
    describe("Settings", function() {
      it("should register 1 event listeners", function() {
        expect(mBusMock.addEventListener.calls.length).toEqual(1);
      });
      it("should register for dao:init event", function() {
        expect(mBusMock.addEventListener).toHaveBeenCalledWith("dao:init", jasmine.any(Function));
      });
      it("should read dao settings", function() {
        mBusMock.addEventListener.calls[0].args[1].call();
      });
    });
  });
});

Pros

No cyclic dependencies

As you can see, the modules now have no cyclic dependencies, so we solved our main issue.

Encapsulation

If you look closer at the settings module you’ll notice that currently there’s no way to access its state from the outside world, which gives us real information hiding.

Testing made easier

As you can see in the last requirement for settings – by using jasminejs spies we can access the registered listener and call it on our own, which allows us to test encapsulated logic and doesn’t force the module author to break its contract only for the sake of proper testing.

Cons

Application flow

When first entering the world of indirect events you might be intimidated and lose control over what happens where – fortunately this is easy to tackle, and after a relatively small amount of time, when your way of thinking adjusts to having a “man-in-the-middle”, you’ll discover a completely new world ahead of you – it really is worth a try!

Soft dependencies

As mentioned earlier there are no direct dependencies between modules, so static code analysis is much harder, if possible at all – but in the case of such a dynamic language as JavaScript this should not be an issue. Just prepare yourselves for it by introducing proper conventions and following them consistently.

Exception handling

Since you have no direct access to your callee, you can find it difficult to eliminate bugs from your software. Just prepare yourself for situations where your subscribers throw an error and handle it consistently, e.g. by logging it.

You might also try to post another error event but beware not to create a deadlock.

Event driven lifecycle

This is somewhat similar to point #1 – in the beginning you may find it hard to properly define your module/application lifecycle, because each module will receive different events. After a while you’ll find out that grouping events and hiding them behind the message bus makes your life much, much easier.

Summary

In this article I’ve gone through the headaches cyclic dependencies can cause in a requirejs application.

The next thing was to decompose the modules so they’re totally clueless about one another and communicate only through the message bus.

Then I showed how to tackle testing with jasminejs spies, and how testing was made much easier through real encapsulation of logic inside a module.


Introduction

In this entry I’m going to present a refreshed approach to application versioning in a jee6 environment using an EJB 3.1 MBean. My previous entry worked fine but had some limitations:

  • first and most obvious – it works only under the JBoss 5.1 application server
  • secondly – it requires java source code filtering, for which maven wasn’t designed
  • and third – it’s confusing for everyone looking at a project with a src/main/template directory which is preprocessed by the maven resources filter

Yes, I could rewrite it so that instead of maven it would use an annotation preprocessor, based on experiences from this and this post, but an annotation preprocessor for such a simple task just feels wrong.

So let’s start once again from the requirements, and I’ll walk you through a very simple solution which should shed some new light on the ejb xml descriptor.

Requirements

I want the version of each component to be easily accessible to users. The version should be applied automatically during the build and be impossible to change afterwards. It should also be portable.

Solution description

MBean registration using EJB 3.1 Singleton

Registering an MBean using an ejb3.1 singleton is as simple as Adam Bien describes it, so I’m not going to analyse it further. The only thing to remember is that you either use an MXBean or follow the convention where your MBean interface has the MBean suffix, e.g. ApplicationVersionMBean.
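A minimal sketch of that idea, assuming an ApplicationVersionMBean interface with a single getVersion() method – the class names and the JMX ObjectName are illustrative:

// ApplicationVersionMBean.java -- the name follows the *MBean convention
public interface ApplicationVersionMBean {
	String getVersion();
}

// ApplicationVersion.java
import java.lang.management.ManagementFactory;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.management.ObjectName;

@Singleton
@Startup
public class ApplicationVersion implements ApplicationVersionMBean {

	private String version;
	private ObjectName objectName;

	// injection target -- the value comes from the filtered xml descriptor
	public void setVersion(String version) {
		this.version = version;
	}

	@Override
	public String getVersion() {
		return version;
	}

	@PostConstruct
	void register() {
		try {
			objectName = new ObjectName("myapp:type=ApplicationVersion");
			ManagementFactory.getPlatformMBeanServer().registerMBean(this, objectName);
		} catch (Exception e) {
			throw new IllegalStateException("MBean registration failed", e);
		}
	}

	@PreDestroy
	void unregister() {
		try {
			ManagementFactory.getPlatformMBeanServer().unregisterMBean(objectName);
		} catch (Exception e) {
			// best effort on shutdown
		}
	}
}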

XML descriptor and maven resources filtering

Now something that some readers might consider the harder part – the xml deployment descriptor. Some people hate it, some people love it, but it will let us fulfil all the requirements:

  • it’s portable – it’s part of the spec
  • it has all the gadgets available through annotations
  • it promotes bean to enterprise level transparently
  • by declaring an injection point and using resource filtering, a version number is injected into the bean, where it stays forever 🙂

Direct link for the impatient ones.

Summary/Source code

As you can see, the xml descriptor has its advantages and the solution provided is really funky:

  • it’s compact
  • it’s verbose
  • it doesn’t impose any external deps (no annotations)
  • it goes along with maven philosophy as to what should be filtered and what not
  • it’s compact – oh yeah I mentioned this one already 🙂

So there you have it – source code on github as usual.

Introduction

In this post I’ll revisit the memoization topic described some time ago in this post, with slight modifications:

  • playground – an ejb3.1 compliant container
  • resolving the key-generation problem only touched on in the previous installment.

Requirements

I want to be able to cache calls to my method in a way that is transparent, with a configurable cache duration.

Solution description

EJB3 Interceptor

In an ejb3+ environment, the ability to put a transparent wrapper around a method call is called an interceptor – it lets you access the target method as well as its parameters. It’s even possible to communicate with your target using the context, but we’re not going to need that in this case.

So a CachingInterceptor aggregates both a CacheManager and a KeyGeneratorEJB3, and as you can see both are just interfaces meant to be used with any caching solution and any key-generation algorithm of your choice. Currently CacheManager is a generic interface that lets the user impose limitations on the type of keys used for the cache.
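A hedged sketch of how such an interceptor might look – the two interface shapes are my assumptions based on the description, not the actual project sources:

import javax.ejb.EJB;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

// assumed shapes of the two aggregated interfaces
interface CacheManager<K> {
	Object get(K key);
	void put(K key, Object value);
}

interface KeyGeneratorEJB3 {
	String generate(InvocationContext ctx);
}

public class CachingInterceptor {

	@EJB
	private CacheManager<String> cacheManager;

	@EJB
	private KeyGeneratorEJB3 keyGenerator;

	@AroundInvoke
	public Object cacheCall(InvocationContext ctx) throws Exception {
		String key = keyGenerator.generate(ctx);
		Object cached = cacheManager.get(key);
		if (cached != null) {
			return cached; // cache hit -- the target method is skipped
		}
		Object result = ctx.proceed(); // cache miss -- invoke the real method
		cacheManager.put(key, result);
		return result;
	}
}

The interceptor would then be bound to a bean or method with @Interceptors(CachingInterceptor.class).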

Currently our cache implementation uses EHCache, which gives the user the ability to do configuration in a separate file, so we’re going to stick with that.

As you can see there are different patterns for EHCache integration. We’re going to use cache-aside, because it makes it easiest to replace EHCache with any other caching mechanism.

KeyGeneratorEJB3 defines just one method that should be used as the entry point for interaction with CachingInterceptor.

Cache key generation

As mentioned previously, key generation was not taken under serious consideration in my previous entry – that’s why I ended up with a semantic toString, which was the easy way to go – but when there’s no access to the underlying code, that might be an issue.

So this time I decided to use a slightly modified Visitor pattern* – I want to have a stateless bean, so my key generator is supposed to return a value from its visit method (named generate in my case), which should be used as the key. This implies that all generate methods need to return a string as well, which makes things not so transparent.
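A minimal sketch of that idea under assumed names – each generate method returns its key fragment instead of accumulating state, so the bean can stay stateless:

import java.util.Collection;

public class ToStringKeyVisitor {

	// entry point: build the cache key from all method parameters
	public String generate(Object[] params) {
		StringBuilder key = new StringBuilder();
		for (Object param : params) {
			key.append(generate(param)).append(';');
		}
		return key.toString();
	}

	// "visit" methods return fragments rather than mutating shared state
	public String generate(Object param) {
		if (param instanceof Collection) {
			return generate((Collection<?>) param);
		}
		return String.valueOf(param);
	}

	public String generate(Collection<?> values) {
		StringBuilder fragment = new StringBuilder("[");
		for (Object value : values) {
			fragment.append(generate(value)).append(',');
		}
		return fragment.append(']').toString();
	}
}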

Summary

As you can see, the EJB3 environment gives you even more flexibility to use a caching middle-man, and with the improved key-generation policy it’s finally in the right place.

*I did not put in any hyperlink so it won’t confuse anyone looking for a valid visitor pattern.

Introduction

In this entry I’ll describe how to perform data validation for a GWT 2.3+ application running on JBoss AS 7.1. On both client and server I’ll use jsr-303 annotations (a.k.a. bean validation). The source code for this project is meant to be a startup project for a real-world application and thus contains 6 modules and a dependency on the preprocessor project (also available on github).

Requirements

The base requirement is that the application can handle incoming data or fail gracefully. It means that the user should be notified about invalid data she entered, in such a way that she can correct it and save. On the other hand we still want to guarantee that the server is protected against anyone bypassing this path and adding wrong data. This leaves us with two layers:

  • client-side validation, which can inform the user of what she’s doing wrong and how to correct it
  • a double-check layer on the server side that will do exactly the same and thus maintain data integrity

A careful reader has already noticed that there’s going to be some redundancy, which in the worst case might cause a maintenance nightmare: client and server are completely separate parts of the application and use duplicated pieces of code for doing the same thing, while doing other things differently. To avoid this we’re going to use jsr-303 annotations for defining our validation rules, plus an annotation preprocessor taking care of transforming the annotations into validation rules understood by the client-side layer of the application.

Questions might be raised as to why not use gwt-beanvalidation – that feature was brought to the official GWT branch in the 2.5 release. As a side project it was possible to use it with GWT 2.4, which leaves 2.3 users on their own 🙂

Application flow/Tools

First let’s start with the application data flow, so we can follow it and mark the critical parts in terms of data validation.

Our GwtValidation application is a GWT-RPC application. On the server side data is going to come from an RDBMS through a JPA 2.0 layer (handled by Hibernate), but entities will not be used on the client side. Instead there will be simple POJOs with fields copied from the entities (handled by dozer). The DAO and service layers will be defined as EJB3 stateless beans. Further down the line, DTOs will be handled by a GWT-RPC service (defined as 3.0 servlets) and serialized/deserialized into client-side code.

On the client side data is available as DTOs and transformed into a form acceptable to the user (html).

Since part of our application lives totally on the client side and is clueless about what serves its data (or at least this is assumed), we cannot rely on the client side only. That’s why there are 2 steps of the flow that require validation.

On the other hand we could drop the client side entirely, but this would require connecting errors generated on the server side back to the client, which might get ugly – it’s easier to do real client-side validation and use the server side as a last resort, so its errors might be a little less explanatory.

Introducing jsr303…

Both client and server operate on the same beans – DTOs. On the server side DTOs are converted into entities and back, while the client side operates on DTOs only. That’s why they seem to be the best candidates for placing the jsr-303 annotations.

The idea is that unless validation of a DTO is successful it will not get converted into an entity, so validation should be performed on service-level methods.

…on server-side

Our services are EJB3 stateless beans injected into servlets using EJB DI. To get the job done we could call validation manually in each method, check the results, and in case of errors send them back to the client. But that looks like a lot of duplicated code, so another concept was introduced – BeanValidationInterceptor, which is an EJB3 interceptor that can handle validation through the available Validator object (injected by the container) and, in case there are any errors, stop execution.

This leaves us with a small problem – how to inform the interceptor that we want a certain object to get validated. We could mark the class as validatable, but this is not a really flexible solution, as someone might want to operate on instances instead of classes. So maybe annotate service method parameters with just 2 annotations:
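Presumably along these lines (my guess – UserDTO is a hypothetical type):

public void saveUser(@NotNull @Valid UserDTO user) {
	// conversion to an entity happens only after validation has passed
}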

Sure, we don’t want to perform any validation if the field is null – so why not include it in the Valid handling? Well, there are cases where you want NotNull and not necessarily Valid, so this distinction is just for consistency.
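A hedged sketch of an interceptor reacting to these two annotations (the real project’s code may differ):

import java.lang.annotation.Annotation;
import java.util.HashSet;
import java.util.Set;

import javax.inject.Inject;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.validation.Valid;
import javax.validation.Validator;
import javax.validation.constraints.NotNull;

public class BeanValidationInterceptor {

	@Inject
	private Validator validator;

	@AroundInvoke
	public Object validateParameters(InvocationContext ctx) throws Exception {
		Annotation[][] annotations = ctx.getMethod().getParameterAnnotations();
		Object[] params = ctx.getParameters();
		for (int i = 0; i < params.length; i++) {
			for (Annotation annotation : annotations[i]) {
				if (annotation.annotationType() == NotNull.class && params[i] == null) {
					throw new IllegalArgumentException("parameter " + i + " must not be null");
				}
				if (annotation.annotationType() == Valid.class && params[i] != null) {
					Set<ConstraintViolation<Object>> violations = validator.validate(params[i]);
					if (!violations.isEmpty()) {
						// stop execution and report the violations to the caller
						throw new ConstraintViolationException(
								new HashSet<ConstraintViolation<?>>(violations));
					}
				}
			}
		}
		return ctx.proceed();
	}
}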

This gives us enough flexibility and removes redundant code on the server side. And we get a bonus – originally this was designed only for DTOs, but in the case of search it’s good to have a SearchDTO which can be validated using the same mechanism, e.g. we don’t want to perform a search when we got a null search object.

… on client-side

Here it is somewhat more complicated. First we need to define our validation DSL – this is the responsibility of the Constraints class, which contains methods that reflect the available jsr-303 annotations and form our vocabulary. All of them return a Constraint object with methods used when doing the real validation.

The second part is to convert annotations into executable code – this is the responsibility of Jsr303Processor, which analyzes classes during compilation and generates a class suffixed with Constraints, whose methods are named after the validated fields. So when a field firstName should be validated, there is a corresponding static method firstName() which returns List<Constraint> – these are the validation rules.
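For illustration, the generated companion for a hypothetical UserDTO with a validated firstName field might look like this (the factory method names on Constraints are assumptions):

import java.util.Arrays;
import java.util.List;

public final class UserDTOConstraints {

	// one static method per validated field, named after the field
	public static List<Constraint> firstName() {
		return Arrays.asList(Constraints.notNull());
	}
}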

The third step is applying them. The original idea was to apply them inside java code, but then field value and field validation would be defined in different places, which might be somewhat confusing. That’s why there is an abstract type ValidableRow, extended by TextRow – a proxy to the real form field which is additionally capable of gathering validation rules and passing them further.

The rest is contained in ValidableForm, which contains some boilerplate to handle the save action, apply validation, and either save the data or display validation errors.

Summary

The annotation preprocessor is a really powerful tool, and the author of this post is fully aware that the project described might seem a little childish. Its purpose is to balance the preprocessor and the GWT-RPC project that depends on it. There are many topics completely omitted – e.g. multiselect boxes, different return types, making the pre-processor type-aware and improving code generation (notBlank instead of notNull for String fields/TextBox is a good example), not to mention custom validators, which are fully supported by the jsr-303 spec.

In case you’re interested in any of those please leave some comments and I’ll try to come up with another post describing it in more detail.

Introduction

After spending some time with Liferay and building a bunch of portlets, I noticed that they share some common functionality which was just copy-pasted – which is wrong.

I also knew that Liferay offers its Service system – a service is a combination of an Interface and a Util class that land in tomcat’s lib/ext folder, backed by an implementation located in either a portlet or an ext environment, and some spring xml to wire them up.

A first approach suggested that this is going to be a piece of cake but unfortunately I got NPE 🙁

So I started looking through the Liferay source code to find out how service-builder-generated services fetch their implementation from the global spring context and expose it to the rest of the portlets through the Util class. It all seemed as easy as casting an Object to the given interface, but unfortunately it wasn’t, and I got a ClassCastException (which the spring forums diagnose as not programming to interfaces…)

I was kind of stuck.

So maybe there was something special about classes generated by service-builder which makes them accessible from a non-owning portlet.

Class loader proxy

The answer is yes, and it’s called the class loader proxy – an architecture that grew from the need to allow inter-portlet communication across tomcat class loaders. It transformed into a somewhat complex architecture that requires the following steps:

  1. Create an interface.
  2. Wrap your code in static calls using a service-util class.
  3. Access your impl through the spring bean context.
  4. Wrap the bean acquired from the bean context in a class loader proxy.
  5. Store the proxy inside the local portlet context.

As you can see, the number of layers that need to be written for a single method call can cause a headache. Additionally, CLP uses the MethodKey/MethodInvocation classes internally, which makes it even more complicated.

Exposing a single method through all these layers seemed a rather exaggerated task, but this way I could focus on building all the layers by hand and finally see my implementation class called properly from different portlets.

APT generator for boilerplate

At this point I felt the need for some kind of code generator that would free me from declaring all these service/clp layers by hand, and I remembered that Java 5 offered a tool called APT. APT is an annotation pre-processor used by the JPA2 authors to create the type-safe Criteria API (in java 6 it’s part of the javac compiler). Using it is as simple as placing an annotation on the desired item and declaring the processor being used – your annotation is pre-processed in a phase before compilation and compiled – magic!

Generating Liferay-service code with APT

After this lengthy introduction I can finally say that in this post I’m going to concentrate on building a LiferayService processor that will generate Liferay’s service boilerplate code, using a java interface as the starting point. I’m fully aware of the fact that Liferay’s service builder generates the interface from the implementation, but for most people I spoke with this is counter-intuitive, so I’ll stick to the contract-first approach.

Afterwards I’ll show you how to create a sample ext-plugin project in order to test your new processor in action.

If you don’t have any experience in building annotation processors, I recommend reading Jorge Hidalgo’s annotation series, which starts from the very basics and finishes with writing a simple annotation Processor.

Define annotation

As you can see, the annotation is only retained at source level – that’s all we need for an annotation processor.

@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.TYPE)
public @interface LiferayService {
	public String value() default "portalClassLoader";
	public String initMethod() default "afterPropertiesSet";
}

LiferayService.java

Define view – templates for generated code

Now, following the best practices advised by Jorge in the aforementioned blog entry, I decided to use the velocity template engine as the view. We’re going to need 2 templates: LiferayClp and LiferayUtil.

Their task is not that complicated:

  • add package declaration
  • add class declaration – using original name + suffix (Clp/Util)
  • add declaration for all passed methods
  • add init-method declaration
  • add class loader fetch

Define model – pass pieces of information from processor to view

All the pieces of information required by the view are contained in the processor model:

	public final Name getClassName() {
		return get(KEY_CLASS_NAME);
	}

	public final Name getPackageName() {
		return get(KEY_PACKAGE_NAME);
	}

	public final String getClassLoader() {
		return get(KEY_CLASS_LOADER);
	}

	public final String getInitMethod() {
		return get(KEY_INIT_METHOD);
	}

	public final Map getMethods() {
		return get(KEY_METHODS);
	}

	public final Map getModelMap() {
		return Collections.unmodifiableMap(modelMap);
	}

	public String getQualifiedName() {
		return getPackageName() + "." + getClassName();
	}

	public Set getSuffixes() {
		return EnumSet.allOf(Suffix.class);
	}

The code above is just an excerpt showing the main interface; you can view the whole model here.

Get list of methods that need to be proxied

So now we have to decide which methods are the right ones to use in our service. Using an interface as the entry point makes it a little bit easier, since we don’t need to distinguish between implementation methods and utility methods – getMethodsToProxy can just take all methods that are abstract and non-native:


	private Map<String, Element> getMethodsToProxy(TypeElement classElement) {
		Map<String, Element> methods = new HashMap<String, Element>();

		List<? extends Element> members = processingEnv.getElementUtils().getAllMembers(classElement);

		UniqueNameSupplier uns = new UniqueNameSupplier();

		for (Element member : members) {
			if (isInterfaceMethod(member)) {
				String methodName = uns.supplyUniqueName(member.getSimpleName());
				methods.put(methodName, member);
			}
		}

		return methods;
	}

It uses the UniqueNameSupplier class, which might need some extra attention – the CLP class contains a list of fields that share their names with the proxied method names. So for each interface method we have a method declaration and a field declaration, but there is no mechanism that mimics method overloading for fields, so we just need to make sure that the field names are unique. Uniqueness is achieved by giving the field the same name as the method and, in case of a potential name clash, appending a number that prevents it – not a very sophisticated name-mangling.
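A guess at how UniqueNameSupplier could work, based on the description above – hand out the plain name first and append a counter on every clash:

import java.util.HashMap;
import java.util.Map;

class UniqueNameSupplier {

	private final Map<String, Integer> used = new HashMap<String, Integer>();

	String supplyUniqueName(CharSequence simpleName) {
		String name = simpleName.toString();
		Integer seen = used.get(name);
		if (seen == null) {
			used.put(name, 1);
			return name; // first occurrence keeps the method's own name
		}
		used.put(name, seen + 1);
		return name + seen; // e.g. execute, execute1, execute2...
	}
}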

Combine it all together

In order to use it inside an IDE like Eclipse it’s best to have a single simple archive – that’s why I decided to use the maven assembly plugin with a simple configuration that just filters out all META-INF directories from the dependencies in order to preserve the processor service file.

Summary

Basically this is it – there are some other extension points that you might use, but feel free to explore them on your own :)

Sources and binaries

Sources are available on github.
Binaries in my private repo.

In my next installment I’ll write up a basic usage scenario with an ext-plugin and a sample portlet.

That’s it!

The Observer pattern is one of the core design patterns that every developer should know and use. It lets you create components with low coupling and adhere to the Hollywood principle.

Some pseudo-code for adding a new observer might look like this:

void addObserver(String ev, Observer ob);
producer.addObserver("myEventName", new Observer(){public void observe(Producer producer){}});

But there are at least 2 issues with this design:

  • You have to explicitly define the event name
  • You can only pass a reference to the producer object

You could of course decide that a producer can only dispatch one type of event, so events don’t need explicit names, but usually that’s not the case.

So what if you wanted to define your events using some kind of interface, e.g.:


public interface MyEvent{}
producer.addListener("myEventName", new Observer<MyEvent>(){public void observe(MyEvent me){}});

So this is a little bit better, but you still need to pass the name of the event as a String.

So maybe we could use java generics to get the power of type-safety at compile time and still keep all the benefits of low coupling. Something that could look like this:


public <T> void addListener(Class<T> e, Observer<T> o);
producer.addListener(MyEvent.class, new Observer<MyEvent>(){public void observe(MyEvent me){}});

Here the first parameter serves as the event-type declaration.

This is very close but still has one flaw – it forces redundant code:

  • once, when the event type is passed explicitly as the event definition
  • a second time during observer creation – as the type parameter

So maybe it could be somehow simplified into something like this:

producer.addListener(new Observer<MyEvent>(){public void observe(MyEvent me){}});

Some of you might say that this is impossible in Java due to type erasure. This is all true – but there’s a second part to it – generic type arguments are available at runtime through the getGenericSuperclass/getGenericInterfaces methods.
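A hedged sketch of how the event type can be recovered at runtime from an anonymous Observer<MyEvent> subclass – the repository code may differ:

import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

final class EventTypeResolver {

	// given new Observer<MyEvent>(){}, returns MyEvent.class
	static Class<?> eventTypeOf(Object observer) {
		// the anonymous class either implements a parameterized interface...
		for (Type iface : observer.getClass().getGenericInterfaces()) {
			if (iface instanceof ParameterizedType) {
				return (Class<?>) ((ParameterizedType) iface).getActualTypeArguments()[0];
			}
		}
		// ...or extends a parameterized class, depending on how Observer is declared
		Type superType = observer.getClass().getGenericSuperclass();
		return (Class<?>) ((ParameterizedType) superType).getActualTypeArguments()[0];
	}
}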

You can see the source code for my type-safe event system on github, but I think one thing needs clarification – why do you need that ugly MethodProxy class.

After writing the default implementation of the event dispatcher interface, I found out that the compiler would not allow a call to the listener’s on method with the passed event instance. So I decided to find the proper method using reflection and give up type-safety only internally.

The MethodProxy class creates its proxy upon instantiation, so it will report any problems very close to their cause.

So here’s what you can find already in the repo:

  1. Dispatcher interface with default implementation
  2. Simple event interface used as entry point for all events
  3. Simple listener interface used as entry point for all listeners

I guess a sample usage scenario might be in order.

Declare your event:

public class MyEvent implements ProcessorLifecycleEvent{}

Create an event dispatcher and register your listener:

ProcessorLifecycleEventDispatcher eventDispatcher = new DefaultProcessorLifecycleEventDispatcher();
eventDispatcher.registerListener(new ProcessorLifecycleEventListener<MyEvent>() {
	public void on(MyEvent e) {
		// some logic here
	}
});

Publish a new event:

eventDispatcher.dispatch(new MyEvent(){});

In the sample above MyEvent is very simple, but it could take some data through its constructor and act as a full-blown DTO, which greatly simplifies interactions because the listener code doesn’t have to do any runtime casts – see example.

The whole project is part of another thing – a Liferay service without the service builder – which I’m going to describe soon 🙂

So enjoy.

Here’s the second version of my Dequeue implementation, as the first one contained bugs.
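Before the listing, a quick sketch of how the API below is meant to be used:

Dequeue<String> dq = new Dequeue<String>();
dq.push("a");     // append at the tail
dq.unshift("b");  // prepend at the head

System.out.println(dq.size());  // 2
System.out.println(dq.pop());   // "a" -- taken from the tail
System.out.println(dq.shift()); // "b" -- taken from the head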

package pl.bedkowski.code.amazing;

import java.util.Iterator;
import java.util.NoSuchElementException;

public class Dequeue<E> implements Iterable<E> {
	private Entry tail, head;
	private int size;
	
	public boolean push(E item) {
		tail = new TailEntry(item, tail);
		if (head == null) {
			head = tail;
		}
		++size;
		return true;
	}
	
	public E pop() {
		return cut(false);
	}
	
	private E cutTail() {
		E ret = tail.value;
		tail = tail.prev;
		if (tail != null) {
			tail.next = null; // detach the removed entry so iteration can't reach it
		}
		return ret;
	}
	
	public boolean unshift(E item) {
		head = new HeadEntry(item, head);
		if (tail == null) {
			tail = head;
		}
		++size;
		return true;
	}
	
	public E shift() {
		return cut(true);
	}
	
	private E cutHead() {
		E ret = head.value;
		head = head.next;
		if (head != null) {
			head.prev = null; // detach the removed entry so reverse iteration can't reach it
		}
		return ret;
	}
	
	private E cut(boolean headHead) {
		if (isSizeZero()) {
			return null;
		}
		E ret = headHead ? cutHead() : cutTail();
		--size;
		if (isSizeZero()) {
			tail = null;
			head = null;
		}
		return ret;
	}
	
	public int size() {
		return size;
	}

	/**
	 * Checks both the size and the tail/head references.
	 * 
	 * @return true if size is zero and both tail and head are null
	 */
	public boolean isEmpty() {
		return isSizeZero() && tail == null && head == null;
	}
	
	
	private boolean isSizeZero() {
		return size == 0;
	}
	
	@Override
	public Iterator<E> iterator() {
		return new Iterator<E>() {
			
			private Entry entry = head;

			@Override
			public boolean hasNext() {
				return entry != null;
			}

			@Override
			public E next() {
				if (entry == null) {
					throw new NoSuchElementException();
				}
				E ret = entry.value;
				entry = entry.next;
				return ret;
			}

			@Override
			public void remove() {
				throw new UnsupportedOperationException("remove not supported");
			}
		};
	}
	
	public Iterator<E> reverseIterator() {
		return new Iterator<E>() {
			
			private Entry entry = tail;

			@Override
			public boolean hasNext() {
				return entry != null;
			}

			@Override
			public E next() {
				if (entry == null) {
					throw new NoSuchElementException();
				}
				E ret = entry.value;
				entry = entry.prev;
				return ret;
			}

			@Override
			public void remove() {
				throw new UnsupportedOperationException("remove not supported");
			}
		};
	}
	
	private abstract class Entry {
		private Entry prev,next;
		private E value;
		
		private Entry(E value, Entry next, Entry prev) {
			this.value = value;
			this.next = next;
			this.prev = prev;
		}
		
		@Override
		public String toString() {
			return value.toString();
		}
	}
	
	private class TailEntry extends Entry {
		private TailEntry(E value, Entry prev) {
			super(value, null, prev);
			if (prev != null) {
				prev.next = this;
			}
		}
	}
	
	private class HeadEntry extends Entry {
		private HeadEntry(E value, Entry next) {
			super(value, next, null);
			if (next != null) {
				next.prev = this;
			}
		}
	}
}

Problem description

Recently I joined a project that uses tapestry5 – a very nicely organized framework for web development with its own IoC mechanisms, great error reporting, and high productivity. And to me it still holds closest to its goal – that it should be easy for non-programmers to edit templates.

Let’s get to the point – on one of the pages I needed to display a dropdown, which is quite nicely described in the tapestry documentation, but to make things a little bit more complicated the model for the dropdown had to be persistent across requests. At first it looked like a piece of cake: just add the @Persist annotation to the model field and that’s it. Under jetty (which is our development environment) it all looked fine, but after moving to Glassfish (which is the production environment) a strange error appeared, stating that SelectModel cannot be stored in the session since it doesn’t implement the Serializable interface.

So I asked our tapestry guru what’s going on and why I can’t store a SelectModel in the session, and he said that Tapestry components should not be stored in the session – the only thing I should keep in the session is the backing list, from which the model should be regenerated for each request.

This sounded awkward to me – I envisioned all this repetitive code for converting Lists into SelectModels so they could be properly displayed.

I knew there had to be a better way…

Solution description

Fortunately tapestry gives you a very nice way of hooking into its bytecode manipulation mechanisms by implementing ComponentClassTransformWorker2 (available since v5.3), so I decided on the following solution:

  • each field that is supposed to be displayed as a dropdown should be marked with a specialized annotation (say @SelectModel)
  • there should be some magical way of transforming a list into a SelectModel behind the scenes, so the view gets it without another line of code.

I started googling and found a very similar case – there was a mixin that should be added to every label. This post gave me an idea of how to plug my bytecode manipulation service into tapestry’s plugin mechanism.

Usage

  1. As mentioned in the solution plan, the whole thing starts with annotating a field with the @SelectModel annotation, which requires the name of the field that should be used as the option label.
  2. And you need to register SelectModelPropertyWorker in your AppModule.

And that’s it – now you can start using your list field as a SelectModel in your view under the same name.
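For reference, a guess at the annotation’s shape based on the usage described here (the actual source is linked at the end of the post):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface SelectModel {

	// name of the field/property used as the option label
	String value();
}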

As you may have noticed, SelectModelPropertyWorker injects SelectModelFactory under a weird name so the getter can produce your model. If you know a better way to do it – just leave a comment and I’ll definitely include it.

Known limitations

There are 2 things that you need to keep in mind when working with the @SelectModel annotation:

  1. The field cannot have the @Property annotation, since tapestry’s default mechanism will try to generate a getter and setter, which might give weird exceptions.
  2. Since there cannot be a getter, you cannot add one manually either.

Source code

Source code available on github.

Introduction

Here you can find some seam3 archetypes that will help you get started when generating a new application for tomcat 6/7.

They extend the weld archetypes with some extra pieces:

  • richfaces 4.2
  • JPA2/Hibernate transactional EntityManager
  • tomcat data source pool

Seam3/tomcat archetypes

Because it took me a while to put all of them together, I decided to share a shortcut for those trying to use tomcat + jpa. You can use them directly from my private repository. There are two seam3 archetypes (due to this bug):

Available options

In order to make your work more productive I added some extra options; once you fill them in, you get:

name                 required  default
jdbcUrl              yes
jdbcUser             yes
jdbcPassword         yes
jdbcJndiName         no        jdbc/Seam3Test
jdbcDriverClassName  no        com.mysql.jdbc.Driver
hibernateDialect     no        org.hibernate.dialect.MySQL5InnoDBDialect

Usage

So just add my public repo to your settings.xml:

<repository>
    <id>repo.bedkowski.pl</id>
    <url>http://repo.bedkowski.pl/maven2</url>
</repository>

And for mysql you can generate a project with:

mvn archetype:generate \
        -DarchetypeArtifactId=seam3-rf4-jpa2-tomcat7 \
        -DarchetypeGroupId=pl.bedkowski.archetypes \
        -DarchetypeVersion=1.0 \
        -DgroupId=pl.bedkowski.code \
        -DartifactId=seam3-generated \
        -Dversion=1.0-SNAPSHOT \
        -DjdbcUser=test \
        -DjdbcPassword=test \
        -DjdbcUrl='jdbc:mysql://localhost/test?useUnicode=true&characterEncoding=utf8' \
        -DinteractiveMode=false

Go to your seam3-generated directory and run:

mvn clean install 

And in target directory you can find 2 wars:

  • seam3-generated.war
  • seam3-generated-nolib.war

The first one is obvious, and the second one is just a convenience in case you need to synchronize your war to an external server – sending over the libs each time is time-consuming, so you can store the libs on your server, send just the -nolib.war, and recompress the file with the libs appended.

Source code

As usual, sources are available on github: