Posts tagged ‘java’

The Observer pattern is one of the core design patterns that every developer should know and use. It lets you create components with low coupling and adhere to the Hollywood principle.

Some pseudo-code for adding a new observer might look like this:

void addObserver(String ev, Observer ob);
producer.addObserver("myEventName", new Observer(){public void observe(Producer producer){}});

But there are at least 2 issues with this design:

  • You have to explicitly name the event
  • You can only pass a reference to the producer object.

You could of course decide that a producer dispatches only one type of event, so there's no need to name events explicitly, but usually that's not the case.

So what if you wanted to define your events using some kind of interface, e.g.:


public interface MyEvent{}
producer.addListener("myEventName", new Observer<MyEvent>(){public void observe(MyEvent me){}});

This is a little bit better, but you still need to pass the event name as a String.

So maybe we could use Java generics to get compile-time type safety and still keep all the benefits of low coupling. Something like this:


public <T> void addListener(Class<T> e, Observer<T> o);
producer.addListener(MyEvent.class, new Observer<MyEvent>(){public void observe(MyEvent me){}});

Here the first parameter serves as the event type declaration.

This is very close but still has one flaw – it forces redundant code:

  • once when the event type is passed explicitly as the event definition
  • again during observer creation – as the generic type parameter

So maybe it could be somehow simplified into something like this:

producer.addListener(new Observer<MyEvent>(){public void observe(MyEvent me){}});

Some of you might say that this is impossible in Java due to type erasure. That's true – but there's a second part to the story: the generic type arguments of a class's supertypes are available at runtime through the getGenericSuperclass/getGenericInterfaces methods.
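A minimal sketch of that trick (the names here are illustrative, not the ones from the repo): an anonymous subclass of a generic class keeps its actual type argument in its class metadata, and the constructor can read it back.

```java
import java.lang.reflect.ParameterizedType;

// Generic base class that captures its own type argument at construction time.
abstract class Observer<T> {
    final Class<T> eventType;

    @SuppressWarnings("unchecked")
    protected Observer() {
        // getGenericSuperclass() on the anonymous subclass returns
        // "Observer<MyEvent>" with the actual type argument preserved
        ParameterizedType pt = (ParameterizedType) getClass().getGenericSuperclass();
        eventType = (Class<T>) pt.getActualTypeArguments()[0];
    }

    public abstract void observe(T event);
}

class MyEvent {}

public class TypeTokenDemo {
    static Class<?> capturedType() {
        Observer<MyEvent> o = new Observer<MyEvent>() {
            public void observe(MyEvent me) {}
        };
        return o.eventType;
    }

    public static void main(String[] args) {
        System.out.println(capturedType().getSimpleName()); // MyEvent
    }
}
```

For listeners implemented as interfaces, the same information is available via getGenericInterfaces() instead of getGenericSuperclass().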

You can see the source code for my type-safe event system on github, but I think one thing needs clarification – why the ugly MethodProxy class is needed.
After writing the default implementation of the event dispatcher interface, I found out that the compiler would not allow calling the listener's on method with the passed event instance. So I decided to find the proper method using reflection and give up type safety only internally.
The MethodProxy class creates the proxy upon instantiation, so it reports any problems very close to their cause.

So here’s what you can find already in the repo:

  1. Dispatcher interface with default implementation
  2. Simple event interface used as entry point for all events
  3. Simple listener interface used as entry point for all listeners
Here's a sample usage scenario.
Declare your event:
public class MyEvent implements ProcessorLifecycleEvent{}
Create event dispatcher and register your listener:
ProcessorLifecycleEventDispatcher eventDispatcher = new DefaultProcessorLifecycleEventDispatcher();
eventDispatcher.registerListener(new ProcessorLifecycleEventListener<MyEvent>(){
    public void on(MyEvent e) {
        // some logic here
    }
});

Publish new event:

eventDispatcher.dispatch(new MyEvent(){});
In the sample above MyEvent is very simple, but it could take some data through its constructor and act as a full-blown DTO, which greatly simplifies interactions because the listener code doesn't have to do any runtime casts – see example.
The whole project is part of another thing – Liferay service without service builder, which I’m going to describe soon 🙂

So enjoy.

Here's the second version of my Dequeue implementation, as the first one contained bugs.

package pl.bedkowski.code.amazing;

import java.util.Iterator;

public class Dequeue<E> implements Iterable<E> {
	private Entry tail, head;
	private int size;
	
	public boolean push(E item) {
		tail = new TailEntry(item, tail);
		if (head == null) {
			head = tail;
		}
		++size;
		return true;
	}
	
	public E pop() {
		return cut(false);
	}
	
	private E cutTail() {
		E ret = tail.value;
		tail = tail.prev;
		return ret;
	}
	
	public boolean unshift(E item) {
		head = new HeadEntry(item, head);
		if (tail == null) {
			tail = head;
		}
		++size;
		return true;
	}
	
	public E shift() {
		return cut(true);
	}
	
	private E cutHead() {
		E ret = head.value;
		head = head.next;
		return ret;
	}
	
	private E cut(boolean fromHead) {
		if (isSizeZero()) {
			return null;
		}
		E ret = fromHead ? cutHead() : cutTail();
		--size;
		if (isSizeZero()) {
			tail = null;
			head = null;
		}
		return ret;
	}
	
	public int size() {
		return size;
	}

	/**
	 * Checks both the size and the tail/head references.
	 * 
	 * @return true if size is zero and both tail and head are null
	 */
	public boolean isEmpty() {
		return isSizeZero() && tail == null && head == null;
	}
	
	
	private boolean isSizeZero() {
		return size == 0;
	}
	
	@Override
	public Iterator<E> iterator() {
		return new Iterator<E>() {
			
			private Entry entry = head;

			@Override
			public boolean hasNext() {
				return entry != null;
			}

			@Override
			public E next() {
				if (entry == null) {
					// iterator is exhausted – fail fast instead of returning null
					throw new java.util.NoSuchElementException();
				}
				E ret = entry.value;
				entry = entry.next;
				return ret;
			}

			@Override
			public void remove() {
				throw new UnsupportedOperationException("remove() is not supported");
			}
		};
	}
	
	public Iterator<E> reverseIterator() {
		return new Iterator<E>() {
			
			private Entry entry = tail;

			@Override
			public boolean hasNext() {
				return entry != null;
			}

			@Override
			public E next() {
				if (entry == null) {
					// iterator is exhausted – fail fast instead of returning null
					throw new java.util.NoSuchElementException();
				}
				E ret = entry.value;
				entry = entry.prev;
				return ret;
			}

			@Override
			public void remove() {
				throw new UnsupportedOperationException("remove() is not supported");
			}
		};
	}
	
	private abstract class Entry {
		private Entry prev,next;
		private E value;
		
		private Entry(E value, Entry next, Entry prev) {
			this.value = value;
			this.next = next;
			this.prev = prev;
		}
		
		@Override
		public String toString() {
			return value.toString();
		}
	}
	
	private class TailEntry extends Entry {
		private TailEntry(E value, Entry prev) {
			super(value, null, prev);
			if (prev != null) {
				prev.next = this;
			}
		}
	}
	
	private class HeadEntry extends Entry {
		private HeadEntry(E value, Entry next) {
			super(value, next, null);
			if (next != null) {
				next.prev = this;
			}
		}
	}
}
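The push/pop/unshift/shift naming above mirrors JavaScript arrays; for comparison, the same four operations map directly onto the JDK's java.util.ArrayDeque (this is a sketch of the correspondence, not part of the implementation above):

```java
import java.util.ArrayDeque;

public class DequeDemo {
    public static void main(String[] args) {
        // push/pop work on the tail (addLast/pollLast),
        // unshift/shift on the head (addFirst/pollFirst)
        ArrayDeque<String> dq = new ArrayDeque<>();
        dq.addLast("a");   // push
        dq.addLast("b");   // push
        dq.addFirst("z");  // unshift
        System.out.println(dq.pollFirst()); // shift -> z
        System.out.println(dq.pollLast());  // pop   -> b
        System.out.println(dq.size());      // 1
    }
}
```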

This post is a continuation of my previous entry (Polish only, sorry) in which I presented a solution for hiding certain operations on a collection using the JDK's dynamic proxy mechanism. It consisted of a MethodInterceptor which checked whether a specific method was called and in such a case reported an error using a RuntimeException.

With this entry I'd like to present a different approach using Google Collections' ForwardingObject. It's the parent of all kinds of wrappers – one for each collection type. The advantage of using a wrapper is that they're all designed as abstract classes implementing the interface for the given collection type – in the case of Set it's ForwardingSet (to continue the older example) – and the only thing you need to do as an implementor is tell the wrapper how to find its delegate by providing a delegate method.

Below you can find the previous example rewritten to use ForwardingSet with the clear method disabled. Last but not least – this version is waaay cleaner 🙂 and you also let the compiler do its job. If you check the previous post you'll notice there are two versions of that solution – the original one contained a bug, because I made a typo and the code checked for a call to a clean method instead of clear…

import java.util.Set;
import com.google.common.collect.ForwardingSet;

class StructElement3 {

	private class ForwardingSetNoClear extends ForwardingSet<String> {

		private Set<String> delegate;
		public ForwardingSetNoClear(Set<String> delegate) {
			this.delegate = delegate;
		}

		@Override
		protected Set<String> delegate() {
			return delegate;
		}

		@Override
		public void clear() {
			throw new UnsupportedOperationException("Cannot call clear");
		}
	}

	public StructElement3(Set<String> obj) {
		Set<String> forwardingSet =  new ForwardingSetNoClear(obj);

		// yeah yeah keep talking...
		forwardingSet.clear();
	}

}

Problem description

There is a part of your application that needs to display summaries of certain data stored in a database, and you need to specify the intervals for which summaries should be retrieved.

Requirements

I want to have an easy way of fetching summaries for some columns in certain table for specified interval.

Solution description

Plain SQL

The first solution that comes to mind is to glue some pieces of SQL with sum() together and execute it – yes, it's fast and it works, but it's ugly.

Use HQL

In this solution some of the job is performed by Hibernate – we've got our entity defined and there's a bridge for each SQL dialect, but… we still need to explicitly list the columns and add calls to the sum function in order to retrieve the summaries.

Entity to Criteria mapping

But maybe we could reuse the same entity object already defined for this specific table and, instead of performing a regular fetch, generate SQL based on the @Column definitions. That way we can use the JavaBean property name as the alias for each sum and use a result transformer to get the data back.

The only part that's left is support for omitting some properties – we don't want automated retrieval of SUMs for name or id 🙂
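The idea can be sketched with plain reflection. The stand-in @Column annotation and the Sales entity below are illustrative only; the real code reads javax.persistence.Column from the already-defined entity:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class SumQueryBuilder {
    // Stand-in for javax.persistence.Column, so the sketch is self-contained.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Column { String name(); }

    // Hypothetical entity with column-mapped getters.
    static class Sales {
        @Column(name = "total_net") public long getTotalNet() { return 0; }
        @Column(name = "total_gross") public long getTotalGross() { return 0; }
        @Column(name = "id") public long getId() { return 0; }
    }

    // Builds a SUM projection for every @Column getter except the skipped ones.
    static String buildSumSelect(Class<?> entity, String table, java.util.Set<String> skip) {
        StringBuilder cols = new StringBuilder();
        for (Method m : entity.getMethods()) {
            Column c = m.getAnnotation(Column.class);
            if (c == null || skip.contains(c.name())) continue;
            if (cols.length() > 0) cols.append(", ");
            // the JavaBean property name doubles as the alias for the result transformer
            String prop = Character.toLowerCase(m.getName().charAt(3)) + m.getName().substring(4);
            cols.append("sum(").append(c.name()).append(") as ").append(prop);
        }
        return "select " + cols + " from " + table;
    }

    public static void main(String[] args) {
        System.out.println(buildSumSelect(Sales.class, "sales",
                java.util.Collections.singleton("id")));
    }
}
```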

Source code

The solution consists of two static methods.

There's also a third method, but it's totally optional – it prevents NPEs for returned values of a specific type, and it uses the cglib-jdk5 Enhancer, so you don't need to do explicit casting 🙂

You can see full source code on github.

Problem description

Imagine you have a project with a hierarchical data structure with 4 levels, where the 1st level serves as reference data for the 2nd, and so on. Your task as a developer is to present this on a web form with a checkbox for each value, where "checked" means that it has its own value and "unchecked" means that the value of such a field should be taken from its parent – with the exception of the 1st level, which uses only its own values. Furthermore, the DTO used for the form should pass its state to a remote EJB.

Requirements

So let’s summarize it as requirements:

I want to be able to bind a command object with pairs of fields – one used for the remote EJB and one determining whether the field is available on this level. In case the field is not available, the remote EJB object should be passed a null value.

Solution discussion

Simple if/else

The easiest solution is a regular if/else block – pass the value when the field is available and set null otherwise, e.g.:


if (myCommand.isFieldAvailable()) {
   myRemote.setField(myCommand.getField());
} else {
   myRemote.setField(null);
}

It does the job, but it's quite an overhead when you have 20 fields – all 100 lines look alike:

  • check if the field is available
  • pass its value to the EJB
  • set null otherwise

An experienced OOP developer, which I’m sure you are, sees a needless repetition here and it’s obvious that there’s got to be a better way.

Reflection

Another approach is to use reflection, with property names passed as strings, and a helper method which retrieves them using reflection and applies them accordingly, e.g.:


fromCommandToEJB("field", command, ejb);

Internally this method encapsulates the logic for checking whether the available flag is set and reacts to it. This version is much better, but it still has one flaw – the property name is passed as a string, so the compiler will not warn you when the interface changes.
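A minimal sketch of such a helper, with made-up Command/Ejb beans (the real method and bean names may differ):

```java
import java.lang.reflect.Method;

public class ReflectionCopy {
    // Hypothetical command bean with a value/available pair.
    public static class Command {
        public String getField() { return "hello"; }
        public boolean isFieldAvailable() { return false; }
    }

    // Hypothetical EJB-side bean.
    public static class Ejb {
        private String field = "initial";
        public void setField(String v) { field = v; }
        public String getField() { return field; }
    }

    // Copies command.getProp() into ejb.setProp(), or null when
    // command.isPropAvailable() returns false.
    static void fromCommandToEJB(String prop, Object command, Object ejb) throws Exception {
        String cap = Character.toUpperCase(prop.charAt(0)) + prop.substring(1);
        boolean available =
            (Boolean) command.getClass().getMethod("is" + cap + "Available").invoke(command);
        Object value = available ? command.getClass().getMethod("get" + cap).invoke(command) : null;
        // find the setter by name; a real helper would also match the parameter type
        for (Method m : ejb.getClass().getMethods()) {
            if (m.getName().equals("set" + cap) && m.getParameterTypes().length == 1) {
                m.invoke(ejb, value);
                return;
            }
        }
        throw new NoSuchMethodException("set" + cap);
    }

    public static void main(String[] args) throws Exception {
        Ejb ejb = new Ejb();
        fromCommandToEJB("field", new Command(), ejb);
        System.out.println(ejb.getField()); // null, because the field is unavailable
    }
}
```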

Delegating proxy

Proxy may sound a bit intimidating, but cglib's Enhancer makes it very easy – the only requirement is that you can't use final classes.

So the work-horse of this solution is MethodInterceptor that has 3 tasks:

  1. Intercept call to a method
  2. Determine if the call should be handled (the method should be a JavaBean getter).
  3. Handle the supported method call (find the suffixed method and execute it in order to check what value should be returned).

The first point is handled by cglib internally, so we’re not going to spend any more time on it.

The second point is the decision-making part – a slightly modified version of the isGetter method presented by Jakob Jenkov in his article on reflection. We're going to intercept both get and is methods, but exclude the ones ending with the Available suffix; instead of returning a boolean, our method returns a String, where null means the method should not be handled.

	private String getHandledPropertyName(Method method) {
		String methodName = method.getName();
		if (!(methodName.startsWith("get") ||
			(methodName.startsWith("is") &&
			!methodName.endsWith(suffix)))) {
			return null;
		}
		else if (method.getParameterTypes().length != 0 ||
			void.class == method.getReturnType()) {
			return null;
		}
		else if (methodName.startsWith("get")) {
			return methodName.substring(3);
		} else {
			return methodName.substring(2);
		}
	}

As you can see, the returned String matters twice: it drives the decision making (the get/is prefix) and carries the property name (everything following the is/get prefix).

There is a small overhead – every property needs its own isAvailable method, which in the case of compound properties means that each call must be redirected to the "real" checker.

Let's try an example – your bean contains startDate/endDate properties, and endDate is not valid without startDate. That means there should be one common method checking whether both dates are set before passing them on for further processing. With the proxy, however, you need both isStartDateAvailable and isEndDateAvailable, which might make code analysis a bit harder – so remember to use proper comments to make others' work easier.

The third point is very simple – for a given property name, find the corresponding isAvailable method, check its return value, and either call the original method and pass on its return value, or return null.

	private Object handleGetter(Method method, String propertyName) throws Throwable {
		String conditionPropertyName = StringUtils.uncapitalize(propertyName) + suffix;
		PropertyDescriptor conditionDescriptor = PropertyUtils.getPropertyDescriptor(myBean, conditionPropertyName);
		if (conditionDescriptor == null) {
			throw new NoSuchMethodException("Missing is"+StringUtils.capitalize(conditionPropertyName) + " method for property: " + StringUtils.uncapitalize(propertyName));
		}
		Method condition = conditionDescriptor.getReadMethod();
		if (condition == null) {
			throw new NoSuchMethodException("Missing is"+StringUtils.capitalize(conditionPropertyName) + " method for property: " + StringUtils.uncapitalize(propertyName));
		}
		if ((Boolean) condition.invoke(myBean)) {
			return method.invoke(myBean);
		}
		return null;
	}

I've also added a small static utility method to make wiring the whole thing a bit easier.

	public static <T> T create(T myBean, String suffix) {
		return (T) Enhancer.create(myBean.getClass(), new ConditionalPropertyInterceptor<T>(myBean, suffix));
	}

There's just one last thing that needs attention – how to make sure that each getter has a corresponding isAvailable method. You can take your chances and wait until the application is deployed… but it's much better to have some kind of automated test – in this case I recommend using the dozer library: after excluding transient properties, all the rest should pass.
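Such a check can also be sketched with plain reflection instead of dozer (the bean names below are made up): list every getter that lacks an is&lt;Name&gt;Available counterpart and fail the test when the list is non-empty.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class AvailableCheck {
    // Hypothetical bean that follows the convention.
    static class GoodBean {
        public String getField() { return null; }
        public boolean isFieldAvailable() { return true; }
    }

    // Hypothetical bean missing its checker.
    static class BadBean {
        public String getField() { return null; }
        // no isFieldAvailable()
    }

    // Returns the properties whose isXxxAvailable method is missing.
    static List<String> missingCheckers(Class<?> bean) {
        List<String> missing = new ArrayList<>();
        for (Method m : bean.getDeclaredMethods()) {
            String n = m.getName();
            if (!n.startsWith("get") || m.getParameterTypes().length != 0) continue;
            String prop = n.substring(3);
            try {
                bean.getMethod("is" + prop + "Available");
            } catch (NoSuchMethodException e) {
                missing.add(prop);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        System.out.println(missingCheckers(GoodBean.class)); // []
        System.out.println(missingCheckers(BadBean.class));  // [Field]
    }
}
```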

Sources available on github

Problem description

Another take on the memoization topic mentioned previously came to me quite recently, based on a problem with a webservice call.

Imagine you have a webservice request/response exchange that takes a long time, not only due to network latency but also because your ws-logic performs a complicated LDAP query.

So there are at least 3 reasons that might cause you a headache:

  1. slow internet connection
  2. slow ldap search
  3. a big number of results that need to be downloaded

To improve this situation, the results of a method call can be cached inside the ws-client's code using Ehcache. Cached entries will be identified by the method's signature and the passed arguments. The arguments should implement the Serializable interface, but this is already fulfilled since the objects are passed through webservice calls.

Caching should be transparent to the caller, which should not know whether the results it got were fetched from the remote server or from the local cache. Additionally, it should be configurable which method calls are cached, and last but not least – since cached entries are identified by some key – it should be possible to define a custom key generator, giving the user freedom in defining the key algorithm.

All the above requirements are summarized below:

  • webservice client api defined as interface
  • webservice client implementation that performs call to server
  • methods are chosen based on annotations
  • wraper defined as dynamic proxy so calls to client api are transparently may be cached and user is not aware if results come from cache or real call

Solution description

The solution will be Spring-specific, since Spring supports Ehcache out-of-the-box.

Existing solutions

There already exist at least 3 solutions, and somehow all of them combined fill the picture of what I'd expect from such a library:

Requirements
  1. Easy configuration with as little XML as possible – something like the <context:component-scan> element with reasonable defaults
  2. A single annotation that enables method caching
  3. Including/excluding methods from caching using annotations

Solution description

The approach described in this post is realized by providing 4 main elements:

  1. Custom namespace handler implementing NamespaceHandlerSupport
  2. Custom bean definition parser extending AbstractSingleBeanDefinitionParser
  3. Custom BeanPostProcessor implementing the BeanFactoryAware interface
  4. Custom xsd

All the points above correspond to the requirements:

Points 1 and 2 let you declare usage of the method cache with just one line of XML (after defining the memoize namespace); it creates EhCacheManagerFactoryBean and EhCacheFactoryBean behind the scenes, freeing you from writing these explicitly inside your XML application context.

<memoize:use-method-cache
   config-location="classpath:ehcache.xml"
   cache-name="pl.bedkowski.code.memoize.sample.METHOD_CACHE" />

Point 3 finds classes annotated with the Memoize annotation inside the current context and wraps them in a Memoizer proxy, as well as fetching the EhCacheFactoryBean instance created in the previous step.

@Component("wsClient")
@Memoize
public class WebServiceClientImpl implements WebServiceClient {

The Memoize annotation contains 2 properties:

  • keyGenerator – the key generator class (must implement the KeyGenerator interface)
  • type – one of Type.INCLUDE_ALL/Type.EXCLUDE_ALL

I think the last one needs some further explanation – it lets you define the strategy for handling methods.

  • The default is Type.INCLUDE_ALL – all methods with a non-void return type will be cached unless explicitly marked with the Exclude annotation.
  • The reverse is Type.EXCLUDE_ALL – no method call will be cached unless marked with the Include annotation.
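The core proxy idea can be sketched with a JDK dynamic proxy and a plain HashMap standing in for Ehcache. Treat this as an illustration only – the real Memoizer additionally honours the annotations described above:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MemoizerSketch {
    // Hypothetical client API.
    interface Client {
        int expensiveCall(String arg);
    }

    static class ClientImpl implements Client {
        int invocations = 0;
        public int expensiveCall(String arg) {
            invocations++;          // pretend this is a slow webservice call
            return arg.length();
        }
    }

    @SuppressWarnings("unchecked")
    static <T> T memoize(final T target, Class<T> api) {
        final Map<List<Object>, Object> cache = new HashMap<>();
        return (T) Proxy.newProxyInstance(api.getClassLoader(), new Class<?>[]{api},
            new InvocationHandler() {
                public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                    // key = method name + arguments, as described above
                    List<Object> key = Arrays.asList(method.getName(), Arrays.asList(args));
                    if (!cache.containsKey(key)) {
                        cache.put(key, method.invoke(target, args));
                    }
                    return cache.get(key);
                }
            });
    }

    public static void main(String[] args) {
        ClientImpl impl = new ClientImpl();
        Client client = memoize(impl, Client.class);
        client.expensiveCall("abc");
        client.expensiveCall("abc");          // served from cache
        System.out.println(impl.invocations); // 1
    }
}
```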

Sources, packages

Source available on github
Binaries available in repo

UPDATE: See the updated version using EJB 3.1, compliant with any application server, here.

For my JavaScript project I was looking for an automated tool that would merge all my files into one, which could then be optimized by the YUI Compressor. I found the combiner project developed by nzakas, which used a require-in-comment approach – close to what I needed, since my JS library uses a require function for importing dependencies.

After forking it, I started analyzing the code, as well as the other forks, to see what changes I would need to adjust it to my needs. It turned out my improvement plan had the same points as most of the forks, although I was planning to keep it as simple as possible, yet expandable.

So the plan was:

  • replace jargs with args4j
  • add generics to all collections
  • improve sorting and duplicate handling of the output collection by using a TreeSet with a proper Comparator instead of a List
  • improve cyclic dependency management
  • improve reading files into SourceFile – remove the todo list that was used for data exchange between the processFile/processFiles methods
  • move the file handling functionality into a subclass, enabling fast exchange of the dependency algorithm – and use it for the CSS @import statement

The first two points were no-brainers, so they went smoothly.

Sorting/duplicates removal

My biggest problem was the sorting algorithm (3rd point) – in the original version there was a List with "manual" checks using List.contains, and after the list had been filled with data it was sorted using Collections.sort and an appropriate Comparator.

This all worked fine, but I was sure the List could be replaced with a TreeSet, where both ordering and duplicates would be handled at once. The original Comparator was a problem, though. It was good for sorting but not for insertion – when it found two unrelated files with an equal number of dependencies it returned 0, which in TreeSet terms means the file would not be inserted AT ALL. I checked the TreeSet.add method and found out it uses a binary search algorithm – so there are 2 cases to handle:

  • either there are no more elements to compare against, which means the element gets inserted
  • or the Comparator returned 0, which means the element is treated as a duplicate of an existing one and dropped.

And that was it – I just had to make sure that elements with the same number of dependencies are compared consistently until either there are no more elements or the same element is found. This meant comparing two unrelated elements with the same number of dependencies by name.
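The tie-break can be sketched like this (the field and class names are illustrative, not the ones from the combiner source):

```java
import java.util.Comparator;
import java.util.TreeSet;

public class DependencyOrderDemo {
    static class SourceFile {
        final String name;
        final int dependencyCount;
        SourceFile(String name, int dependencyCount) {
            this.name = name;
            this.dependencyCount = dependencyCount;
        }
    }

    // Compare by dependency count first; break ties by name so that
    // TreeSet never treats two distinct files as equal.
    static final Comparator<SourceFile> BY_DEPS_THEN_NAME = new Comparator<SourceFile>() {
        public int compare(SourceFile a, SourceFile b) {
            int byDeps = Integer.compare(a.dependencyCount, b.dependencyCount);
            // returning 0 here would make TreeSet.add() silently drop the new file
            return byDeps != 0 ? byDeps : a.name.compareTo(b.name);
        }
    };

    public static void main(String[] args) {
        TreeSet<SourceFile> files = new TreeSet<>(BY_DEPS_THEN_NAME);
        files.add(new SourceFile("b.js", 2));
        files.add(new SourceFile("a.js", 2)); // same count, still inserted
        files.add(new SourceFile("c.js", 0));
        for (SourceFile f : files) System.out.println(f.name); // c.js, a.js, b.js
    }
}
```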

Get rid of todo property

The second thing was the processSourceFiles method, which used an extra list to exchange the dependencies found by processSourceFile. This looked weird from the beginning, so I wondered whether adding a return value to processSourceFile would help. Then I noticed that processSourceFiles can be called recursively, which let me drop the todo list.

Simplify cyclic dependency management

Then there was cyclic dependency handling – there used to be an extra loop checking all files one by one to see whether any of them already depended on the currently processed one. I figured it would be much easier to do the check inside addDependency, which meant getting rid of all the overloaded methods and leaving just the one that accepts a SourceFile object. Finally, addDependency checks whether the current object is already a dependency of the added object (passed as parameter), and if it is, it returns false.

Subclass for reading files

After that, removing the file reading algorithm and adding a CSS handler was really easy, because dependency reading was already fleshed out – basically I just had to move it outside the FileComparator class.

Recently a piece of code using the getGenericSuperclass method caught my attention. Until then I had been convinced that there was no way to obtain information about the generic type passed as a type parameter, but it turned out this is not entirely true – such a possibility exists, and Neal Gafter wrote about it a while ago on his blog in a post titled Super Type Tokens (adding a note about its limitations some time later).

As I said – I was completely surprised and thought it was just an academic-style discussion, but it quickly turned out I could use it in my test code 🙂

My unit-test writing process usually starts with creating a class via the Eclipse wizard, choosing the methods to test and opening the generated class. Then I add the object under test as a field, recreated in a method annotated with Before – and so on, over and over.

Sample code might look like this:

public class MyObject {
   public void func(){}
}

public class MyObjectTest {

   private MyObject obj;

   @Before
   public void init() throws Exception {
      obj = new MyObject();
   }

   @Test
   public void testFunc() {
   }
}

And today I decided that this could probably be automated – a generic base class that takes the object under test as a type parameter and "recreates" it before each test:

import java.lang.reflect.ParameterizedType;

import org.junit.Before;

public abstract class MyBaseClass<T> {
   protected T tObj;
   private Class<T> clz;

   @SuppressWarnings("unchecked")
   protected MyBaseClass() {
     clz = (Class<T>)((ParameterizedType)getClass().getGenericSuperclass()).getActualTypeArguments()[0];
   }

   @Before
   public void ____init() throws Exception {
      tObj = clz.newInstance();
   }
}

And a sample test:

public class MyObjectTest extends MyBaseClass<MyObject> {
   // initialization is handled by the base class
   @Test
   public void testFunc() {
   }
}

Recently a project required discarding part of an association's results before sending the object "out into the world" 🙂

Let's start with an overview of possible solutions:

  • the simplest that comes to mind: fetch the association with a getter, walk over the results and pick the "right" ones
  • you can use a join, but a problem appears with OUTER JOINs – Hibernate fetches all elements, and adding the condition in the WHERE clause turns the OUTER into an INNER, so with no matching rows you get an empty result set
  • you can use Hibernate 3.5, whose createCriteria method got an extra parameter which acts precisely as an additional criterion filtering the data
  • for the "unlucky ones" like me, who have to use older versions, there is still hope 🙂

It's this last option I'll focus on today – how to deal with the problem elegantly (without digging into Hibernate's internals). I mean the already mentioned @FilterDef/@Filter annotations. To describe how they work I'll use a simple mapping of two classes – parent and children in a one-to-many relation, using the @OneToMany/@ManyToOne annotations.

Let's start by creating the parent class:


@Entity
@Table(name = "mock_object")
public class MockObject implements Serializable {

	@Id
	@GeneratedValue(strategy = GenerationType.AUTO)
	@Column(name = "id", updatable = false, nullable = false)
	public Long getId() {
		return id;
	}

	@OneToMany(mappedBy = "mockObject", cascade = CascadeType.ALL, fetch = FetchType.EAGER)
        public Set<MockSubObject> getSubObjects() {
		return subObjects;
	}
}

For readability the listing contains only the essential elements – the MockObject entity, which maps to the mock_object table, contains an id field and a set of sub-objects, defined below:

@Entity
@Table(name = "mock_sub_object")
public class MockSubObject implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id", updatable = false, nullable = false)
    public Long getId() {
	return id;
    }

    @Column(name="name", length=50)
    public String getName() {
	return name;
    }

    @ManyToOne(fetch = FetchType.EAGER)
    public MockObject getMockObject() {
	return mockObject;
    }

To keep things from getting entirely boring, I also threw in a name 🙂

Now let's try to write a left join for it:


Criteria crit = session.createCriteria(MockObject.class)
   .createCriteria("subObjects", "su", Criteria.LEFT_JOIN)
   .add(Restrictions.eq("su.id", 1L));

List<MockObject> mo = crit.list();

A query specified this way will produce an empty result set even though the mock_object table is not empty.

This can be remedied with the following steps:

  1. Add a filter definition to the MockObject entity.
  2. Declare a condition for the subObjects collection.
  3. Enable the filter and set its parameter before running the query.
  4. Add the DISTINCT_ROOT_ENTITY transformer.

Let's get to work:


// filter definition

@FilterDef(name = "findByName", parameters = { @ParamDef(name = "name", type = "string") })
public class MockObject implements Serializable {

As you can see, the filter takes one parameter called name, of type string.

Now the actual condition:


@Filter(name = "findByName", condition = "(name = :name)")
 public Set<MockSubObject> getSubObjects() {

And the rest when building the query – which, together with the enabled filter, allows filtering the data by the name attribute:

        Criteria crit = session.createCriteria(MockObject.class)
        	.add(Restrictions.eq("id", 1L))
        	.addOrder(Order.asc("id"));

        session.enableFilter("findByName").setParameter("name", "name1");

        crit.setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY);

        List<MockObject> lMo = crit.list();

To be sure, you can also inspect the generated query:

-- Hibernate:
select this_.id as id0_1_, this_.version as version0_1_, subobjects2_.mockObject_id as mockObject3_3_, subobjects2_.id as id3_, subobjects2_.id as id1_0_, subobjects2_.mockObject_id as mockObject3_1_0_, subobjects2_.name as name1_0_ from mock_object this_ left outer join mock_sub_object subobjects2_ on this_.id=subobjects2_.mockObject_id and (subobjects2_.name = ?) where this_.id=? order by this_.id asc

Looks like the extra condition was taken into account 🙂

Sources, as usual, on GitHub.

Recently, while working on a project, incoming requests had to be rewritten from one transfer object to another in many places (the reasons for this rewriting are entirely irrelevant here), so strikingly similar snippets popped up all over the code:

target.setSomeProperty(source.getSomeProperty())

Which of course made me wonder whether this could be extracted somewhere, but I was held back by the thought that Java is not, say, JavaScript, where such tricks are everyday bread.

The solution came to me via Tomek's article on BeanUtils – it's enough to read the properties from one bean with the describe method and then apply them with populate. All nice in theory, but there are a few nuances that bother me:

  • the whole API is static…
  • adding Converters looks rather clunky
  • there's no way to define a property whose name changed between the beans
  • and if some property doesn't get copied, we won't find out until we try to read it – waaay too late

Hence the idea for a BeanTransformer class, which should allow:

  • defining properties with a changed name
  • defining properties to skip
  • defining a Converter through a generic interface that itself carries the information about which type it applies to
  • cleaning up the converters once the conversion is done

And that's how the BeanTransformer class was born. I'll discuss it by describing its methods and the inner classes that simplify the whole game 🙂

Let's start from the beginning – declare the class:


public class BeanTransformer {

}

We'll need a method that lets us define the properties we won't need, storing them in a Set:

	private Set<String> skipProperites = new HashSet<String>();
	public BeanTransformer skip(String firstProperty, String... propertyNames) {
		skipProperites.add(firstProperty);
		if (propertyNames != null) {
			skipProperites.addAll(Arrays.asList(propertyNames));
		}
		return this;
	}

As you can see, it also allows chaining and adding more than one property in one go.

Next, we need a method to define properties whose name has changed:

	private Map<String, String> renameProperties = new HashMap<String, String>();
	public BeanTransformer rename(String fromProperty, String toProperty) {
		renameProperties.put(fromProperty, toProperty);
		return this;
	}

Now it's enough to walk over the properties, check which maps to which, drop the unneeded ones, check whether anything is left over and report an error if so.

	private Set<String> filterCorrectProperties(final Map<String, Object> properties) {
		// first remove properties that should be skipped
		properties.keySet().removeAll(skipProperites);

		// then go over the rest of them
		Set<String> foundProperties = Sets.filter(renameProperties.keySet(),
				new Predicate<String>() {
					public boolean apply(String fromProperty) {
						if (properties.containsKey(fromProperty)) {
							Object v = properties.remove(fromProperty);
							String newKey = renameProperties.get(fromProperty);
							properties.put(newKey, v);
							return false;
						}
						return true;
					}
				});

		// Guava's Sets.filter returns a lazy view that re-runs the predicate
		// on every iteration; copy it so the renames above happen exactly once
		return Sets.newHashSet(foundProperties);
	}
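The same filtering can be written in plain Java without Guava, which makes the control flow explicit (names below are illustrative):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class FilterDemo {
	static Set<String> filter(Map<String, Object> properties,
			Set<String> skip, Map<String, String> rename) {
		properties.keySet().removeAll(skip); // drop skipped properties
		Set<String> notFound = new HashSet<String>();
		for (Map.Entry<String, String> e : rename.entrySet()) {
			if (properties.containsKey(e.getKey())) {
				// move the value under its new name
				properties.put(e.getValue(), properties.remove(e.getKey()));
			} else {
				notFound.add(e.getKey()); // rename rule never matched
			}
		}
		return notFound;
	}

	public static void main(String[] args) {
		Map<String, Object> props = new HashMap<String, Object>();
		props.put("id", 1);
		props.put("oldName", "x");
		props.put("tmp", true);
		Set<String> left = filter(props,
				new HashSet<String>(Collections.singletonList("tmp")),
				Collections.singletonMap("oldName", "newName"));
		System.out.println(left.isEmpty() + " " + props.containsKey("newName")); // true true
	}
}
```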

Now that we have the filtered set of properties, we can copy them over to the target bean:

	@SuppressWarnings("unchecked")
	public <F, T> T transform(F fromBean, T toBean) throws NotUsedPropertiesException {
		try {
			Map<String, Object> properties = (Map<String, Object>) PropertyUtils.describe(fromBean);

			Set<String> foundProperties = filterCorrectProperties(properties);

			if (!foundProperties.isEmpty()) {
				throw new NotUsedPropertiesException(foundProperties);
			}

			BeanUtils.populate(toBean, properties);

			return toBean;
		} catch (IllegalAccessException e) {
			throw new NotUsedPropertiesException(e);
		} catch (InvocationTargetException e) {
			throw new NotUsedPropertiesException(e);
		} catch (NoSuchMethodException e) {
			throw new NotUsedPropertiesException(e);
		}
	}

We assume the remaining properties copy over 1-1.

Another nice option would be the ability to register your own converter for non-standard types. Here I will again use a generic class: it extends the Converter interface, but its declaration additionally carries the type the given converter handles, which gives us two advantages:

  • the class carries all the information needed for the conversion
  • part of the shared logic can be factored out

Let's start with the interface itself, which is dead simple:

	public static interface Converter<T> extends org.apache.commons.beanutils.Converter {}

Now the method that registers such a converter does not need the class passed in explicitly; it fetches it from the type parameter using getGenericInterfaces:

	private Map<Class<?>, Converter<?>> converters = new HashMap<Class<?>, Converter<?>>();

	@SuppressWarnings("unchecked")
	public <T> BeanTransformer addConverter(Converter<T> c, boolean skipConverterImpl) {
		// the concrete subclass of Converter<T> keeps its type argument
		// in its class file, so it survives erasure and is readable here
		Type[] types = c.getClass().getGenericInterfaces();
		Class<T> clz = (Class<T>) ((ParameterizedType) types[0]).getActualTypeArguments()[0];
		if (!skipConverterImpl) {
			c = new ConverterImpl<T>(clz, c);
		}
		converters.put(clz, c);
		return this;
	}
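The erasure trick used above can be demonstrated on its own: an anonymous (or any concrete) class implementing a generic interface records its type argument in the bytecode, where reflection can read it back. A minimal sketch, with illustrative names:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

class GenericDemo {
	interface Converter<T> {
		T convert(Object o);
	}

	public static void main(String[] args) {
		Converter<Integer> c = new Converter<Integer>() {
			public Integer convert(Object o) {
				return (Integer) o;
			}
		};
		// the anonymous class implements Converter<Integer>, so the type
		// argument Integer is available at runtime despite erasure
		Type iface = c.getClass().getGenericInterfaces()[0];
		Class<?> clz = (Class<?>) ((ParameterizedType) iface).getActualTypeArguments()[0];
		System.out.println(clz.getSimpleName()); // Integer
	}
}
```

Note this only works because an anonymous or named subclass is declared against the parameterized type; a plain `Converter<Integer>` variable assigned from a lambda-free raw instance would not carry the information.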

It only remains to explain the mysterious ConverterImpl class. It is a way of factoring out functionality common to all converters without forcing the user to know about it: it simply decorates the convert method and only gives the concrete implementation a say when the shared implementation does not know how to create the given object. But enough talk:

	private static class ConverterImpl<T> implements Converter<T> {
		private Class<T> self;
		private Converter<T> target;
		public ConverterImpl(Class<T> self, Converter<T> target) {
			this.self = self;
			this.target = target;
		}
		@SuppressWarnings("unchecked")
		public Object convert(Class arg0, Object arg1) {
/*
			corrected after comment by bob
			if (self.isAssignableFrom(arg1.getClass())) {
*/
			if (self.isInstance(arg1)) {
				return arg1;
			}
			return target.convert(arg0, arg1);
		}
	}
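For non-null values `self.isInstance(arg1)` behaves exactly like the commented-out `self.isAssignableFrom(arg1.getClass())`; the practical difference is that isInstance is null-safe, while calling getClass() on a null value throws a NullPointerException. A small standalone illustration:

```java
class InstanceDemo {
	public static void main(String[] args) {
		Class<Number> self = Number.class;
		Object value = null;
		// isInstance simply returns false for null...
		System.out.println(self.isInstance(value));
		// ...and true when the value really is an instance of the class
		System.out.println(self.isInstance(Integer.valueOf(1)));
		// value.getClass() here would have thrown a NullPointerException
	}
}
```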

With the converters in place, we can switch them on before the conversion and off after it:

	private void registerConverters() {
		for (Class<?> clz : converters.keySet()) {
			ConvertUtils.register(converters.get(clz), clz);
		}
	}

	private void deregisterConverters() {
		for (Class<?> clz : converters.keySet()) {
			ConvertUtils.deregister(clz);
		}
	}

// in the transform method we add the calls:
			registerConverters();
			BeanUtils.populate(toBean, properties);
			deregisterConverters();

If this post got you interested, I have pushed the project (unit tests included!) to github.