Tuesday, May 26, 2009

Integrating Author It Compiled Documents with Struts

Integrating the end product of a content management tool with a web application is painful, even when the web application is built on top of an MVC framework. Author It is a CMS tool that lets you build HTML-based documentation. The focus of this discussion is integrating its end product, an online help system, with Struts.

The documentation will allow users to view the help page for the current screen by clicking the Help button on the application's menu bar. The user can also search for any specific help page she or he wants to view.

The integration effort requires several files to be created or edited, as listed below:

1. onlinehelp-config.xml
2. onlinehelp.jsp
3. common.js
4. mainMenuBar.jsp
5. web.xml
6. OnlineHelpProcessor class

along with several classes, as shown in the class diagram.

onlinehelp-config.xml

This file contains the mapping between Struts/Tiles definitions and the online help HTML documentation pages through a numeric code, which is the prefix of the help page file name (e.g., 207.htm). There are two parent nodes in it: nonupdate and update.

The nonupdate node contains mappings for help pages that belong only to non-editable web pages, while the update node maps help pages to editable web pages.

The element key is the Tiles definition name attribute, and its value can be thought of as the help page file name with .htm appended. This configuration is read and cached when the application starts up.
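The exact element names depend on your setup, but the configuration might look something like this (the mapping names and codes below are made up for illustration):

```xml
<onlinehelp-config>
    <nonupdate>
        <!-- Tiles definition name mapped to numeric help code (207 loads 207.htm) -->
        <mapping name="viewUserProfile" code="207"/>
    </nonupdate>
    <update>
        <mapping name="editUserProfile" code="208"/>
    </update>
</onlinehelp-config>
```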

onlinehelp.jsp

This is the original index.htm file generated by the Author It tool, the entry point for all help documentation pages. There are three modifications to this file: the import of the struts-bean.tld taglib, the bean definition of onlineHelpCode, and the SRC attribute value of the frame element.

The bean taglib import is needed to extract the SessionData bean, which lives in the session, read its onlineHelpCode string value, and plug it into the SRC attribute. It is important that this file is located with the rest of the JSP files.
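The modified frameset might look roughly like the sketch below. SessionData and onlineHelpCode are the names used in this application; the frame layout and the bean:define usage are assumptions, not the exact generated markup:

```jsp
<%@ taglib uri="/WEB-INF/struts-bean.tld" prefix="bean" %>

<%-- Pull the help page file name out of the SessionData bean in session scope --%>
<bean:define id="onlineHelpCode" name="SessionData"
             property="onlineHelpCode" type="java.lang.String"/>

<frameset cols="250,*">
    <frame name="toc" src="toc.htm"/>
    <!-- Load the help page that matches the current screen -->
    <frame name="content" src="<%= onlineHelpCode %>"/>
</frameset>
```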

common.js

Of course, we want the help page to open in a new window. This is where we add a new function that opens a pop-up window when the help page is invoked.

mainMenuBar.jsp

This is where the Help button is located, wired to the JavaScript function that opens the online help documentation in a new window. This JSP contains all the buttons that provide functionality to the user.

web.xml

We don't want the onlinehelp-config.xml file to be read each time the help pages are invoked. There are three ways to handle this file:

1. Read this file, convert it to Java Bean/Map and put it in session.
2. Read this file, convert it to Java Bean/Map and put it in cache.
3. Read this file, put attributes in a Map and put it in a singleton object.

I prefer the last option for its simplicity and looser coupling, though all of them promote efficiency. If the bean is put in the session, we introduce tight coupling with the container's session implementation. If a third-party caching mechanism is used, then we are tied to that implementation.

In this example I used the third option and had a servlet initialize the class that reads this file, called OnlineHelpProcessor (a singleton). The initialization method on this class is initializeOnlineHelpCodes(). It basically reads the file and puts the attributes in a Map.

When the application starts up, the singleton class is initialized and all the elements from the config file are put into a Map, ready for access at any time.

OnlineHelpProcessor class

This class reads the onlinehelp-config.xml file, converts its elements into key/value pairs in a Map, and provides a method that takes a Struts forward name and returns the online help code, which is later used to load the right help page.
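A minimal sketch of what such a singleton might look like. The element name ("mapping") and its attributes are assumptions standing in for whatever onlinehelp-config.xml actually uses:

```java
import java.io.InputStream;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

final class OnlineHelpProcessor {

    private static final OnlineHelpProcessor INSTANCE = new OnlineHelpProcessor();
    private final Map<String, String> helpCodes = new ConcurrentHashMap<>();

    private OnlineHelpProcessor() { }

    public static OnlineHelpProcessor getInstance() {
        return INSTANCE;
    }

    // Reads hypothetical elements such as <mapping name="viewUserProfile" code="207"/>
    // and caches them as tilesName -> "207.htm".
    public void initializeOnlineHelpCodes(InputStream config) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(config);
            NodeList mappings = doc.getElementsByTagName("mapping");
            for (int i = 0; i < mappings.getLength(); i++) {
                Element e = (Element) mappings.item(i);
                helpCodes.put(e.getAttribute("name"), e.getAttribute("code") + ".htm");
            }
        } catch (Exception e) {
            throw new IllegalStateException("Could not read online help config", e);
        }
    }

    // Returns the help page file name for a Struts forward / Tiles definition name.
    public String getOnlineHelpCode(String forwardName) {
        return helpCodes.get(forwardName);
    }
}
```

An init servlet (registered with load-on-startup in web.xml) would call getInstance().initializeOnlineHelpCodes(...) once, and the help JSP path would be resolved with getOnlineHelpCode(forwardName) on every request.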

Tuesday, May 12, 2009

Using Chain of Responsibility Pattern

More than a year ago I was presented with a business requirement that came with an accompanying process flow chart. The requirement was basically validating an insurance agent. The following information was used to run business rules/validation on a specific agent to see whether he or she can do business in a certain area, is able to use the current authorization, or has a valid contract.

1. Authorization
2. Profile
3. Contract
4. Managing Agent

Looking at the document, there were sets of processes containing business rules to be performed, but the completion of each process could lead to one or more follow-on processes. This scenario is depicted below.

I chose a pattern to build a framework that allows easy integration of new processes into the mix without complicating the validation implementation. Implemented without a framework, this leads to deeply nested if-else statements and rising cyclomatic complexity. The Chain of Responsibility pattern seemed to fit this requirement perfectly. The pattern comprises several chains all linked together, and each chain knows whether an incoming request is intended for it. If it is, the chain applies whatever processing is needed to the request data; if not, it simply passes the request to the next chain in the link. Each process in the figure above is represented as a chain; in this case we have nine chains in the link.

Each of the nine chains contains thick business-logic validation implementation that takes in an input and redirects the output to a different chain for further validation. The process continues until one of the chains marks the validation as complete and packages the output data into whatever format the requester expects.

This is the Class Diagram that reflects this design. In addition to the chain pattern, the Template Method Pattern was also used as can be seen from the hierarchy relationship of Chain interface, AbstractChain and its subclasses.

There are four things to consider from this diagram - Chain interface, ChainManager, AbstractChain and its implementation(chains).

Chain Interface
Defines the contract for implementing the methods add(Chain chain), processNext(), and isLast(). The last method identifies whether a chain is the final one to be executed.

ChainManager
It is responsible for creating the chains, linking them all together, and kicking off the first chain. The pseudo code below shows this implementation.

Chain process1Chain = new Process1Chain();
Chain process2Chain = new Process2Chain();
Chain process3Chain = new Process3Chain();
Chain process4Chain = new Process4Chain();
Chain process5Chain = new Process5Chain();
Chain process6Chain = new Process6Chain();
Chain process7Chain = new Process7Chain();
Chain process8Chain = new Process8Chain();
Chain process9Chain = new Process9Chain();
DefaultChain defaultChain = new DefaultChain();

// Link the chains together
process1Chain.add(process2Chain);
process2Chain.add(process3Chain);
process3Chain.add(process4Chain);
process4Chain.add(process5Chain);
process5Chain.add(process6Chain);
process6Chain.add(process7Chain);
process7Chain.add(process8Chain);
process8Chain.add(process9Chain);
process9Chain.add(defaultChain);

// Assign the point of entry chain
Chain entryPointChain = process1Chain;

// Kick off the first chain
entryPointChain.processNext();
AbstractChain
The process() method of this class reflects the Template Method pattern. The power of polymorphism lies here: its subclasses plug in the right logic at run time, and the chains stay decoupled from each other.
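One way to sketch how the pieces might fit together. The method names follow the diagram, but the request type, the exact split between the processNext() skeleton and the abstract hook, and the rule logic are assumptions for illustration:

```java
import java.util.Map;

interface Chain {
    void add(Chain next);
    void processNext(Map<String, Object> request);
    boolean isLast();
}

// Template Method: processNext() fixes the skeleton (run this chain's rules,
// then forward to the next chain), while subclasses plug in their own rules.
abstract class AbstractChain implements Chain {

    private Chain next;

    public void add(Chain next) {
        this.next = next;
    }

    public boolean isLast() {
        return next == null;
    }

    public void processNext(Map<String, Object> request) {
        process(request);
        if (!isLast()) {
            next.processNext(request);
        }
    }

    // Each concrete chain supplies its own business rules here.
    protected abstract void process(Map<String, Object> request);
}

// Stand-in for one of the nine validation chains.
class AuthorizationChain extends AbstractChain {
    protected void process(Map<String, Object> request) {
        request.put("authorized", Boolean.TRUE); // placeholder for real rules
    }
}

// Terminal chain: a hook for post-processing cleanup and packaging the output.
class DefaultChain extends AbstractChain {
    protected void process(Map<String, Object> request) {
        request.put("completed", Boolean.TRUE);
    }
}
```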

As you may notice, the DefaultChain is included in the diagram. If you want to do some post-processing cleanup, provide that logic here.

The beauty of this chain pattern is that it's extensible. From the example above, if we need another set of validation logic, all we have to do is create another chain, add it into the link, and provide the business logic in its process() method. If a validation is no longer needed, just remove the chain that implements it from the link, and the application should still run without any side effects.

Wednesday, May 6, 2009

When do we say it's the right technology to use?

When is a technology the right tool to use? Do we influence the feature we want in an application from what is available from the technology or from what is required from the business requirement?

It's always a challenge to marry the right technology with the right requirement to produce a fluid, solid application. The one-stop-shop approach to resolving business issues is becoming less relevant in the global IT market. You pay so much without realizing that only a fraction of the features provided by that technology is essential to what the business requires. A lot of companies are now resorting to a "get only what you need" and "pay only for what you need" mentality. This is where customization comes in. The beauty of customization is that it delivers exactly the solution the problem expects. This strategy relieves companies from paying too much.

When the curse of overengineering starts to sink in, it's a manifestation of a wrong implementation. But what is overengineering? It's a combination of technology misuse and irrelevant, complex implementation. If the implementation looks simple and it solves a business need, then we have just answered the first question above.

The danger comes when technology features mold how the application should behave. This distracts the application from concentrating on its prioritized functionality and might deliver features that are not needed at all. Scope creep is the twin evil of this situation. Looking at the brighter side, this is the best opportunity to step back, review the requirements, and correct and refine the implementation.

Implementing User Sessions for Crystal Reports Java SDK

If you are submitting requests for batch processing in Crystal Reports via its Java SDK, never log in to the server per request. I've seen this implemented, and it just creates unnecessary overhead. Here are the steps to create a session that can serve multiple requests.

1. Log on to Crystal Report Server

Call the method CrystalEnterprise.getSessionMgr(), which returns an object of type ISessionMgr. From there, call the method below on the returned object, which returns an object of type IEnterpriseSession.

logon(String userName, String password, String ceServerName, String secEnterprise);

2. Create Logon Token

The IEnterpriseSession has a method getLogonTokenMgr() that returns ILogonTokenMgr. Call the method below to get the logon token string.

createWCAToken(String clientServerName, int validMinutes, int validNumLogon);

The validMinutes parameter represents how long you want to maintain the session, and validNumLogon represents how many logons the client can perform with the token within those minutes.

3. Logon to Crystal Server with Token

Call the method below on ISessionMgr, passing the logon token string that was previously generated. This returns a new IEnterpriseSession tied to the token. All transactions then use this session object until the minutes or logon counts are exhausted.

logonWithToken(logonTokenString);
Of course, this session will eventually expire, and the client application needs to reconnect. The good news is you just follow the same steps. On the other hand, if the application needs to reset the connection while the session may still be valid, always check whether it has already expired. If it has, release the token and log off in the following sequence, then proceed through steps 1 to 3 again.

releaseToken(logonTokenString); // called from ILogonTokenMgr
ceSessionWithToken.logoff(); // called from IEnterpriseSession with token
ceSession.logoff(); // called from IEnterpriseSession without token

This approach is very useful for batch processing, where you want to keep the pipe full while the report server is processing. With on-demand report processing, you don't want to keep a session open for a long time.

Friday, May 1, 2009

Java Multithreaded Web Service Performance Tips

Ever since the Java 5.0 concurrency model was released, multithreaded Java web service applications have been an interesting subject. It's easy to tweak code that is slowing down the application, but it is always a challenge to tune it at the container level.

A former colleague and I worked to figure out why our web service application was slowing down and always left lots of runaway processes. Here are some of the findings we implemented to keep our application rolling! The sample code below reflects a reporting process where requests to generate a report are made, and shows how the threads handle the status of each request.

Use Java Concurrency API
There are only three types I've used to implement a multithreaded WS application:
Runnable, ScheduledExecutorService, and ConcurrentLinkedQueue.

ScheduledExecutorService extends ExecutorService, which in turn extends Executor. Executor is responsible for abstracting the gory details of implementing threads behind simpler methods. And it works all the time.

ScheduledExecutorService lets you specify the thread to run, its initial delay, the frequency, and the unit of time. It was very easy to use, and you can configure the thread pool size as well. The application I wrote had five threads running concurrently and never had any problems at all. They all shared a ConcurrentLinkedQueue resource, and I didn't need the synchronized keyword to make this object thread safe, since it already is by default.

Using the Runnable interface is better than extending the Thread class; it promotes looser coupling. Here's a sample of how such a thread is plugged into the ScheduledExecutorService.
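A sketch of the idea. The worker class, queue contents, and timing values are made up for illustration; only the scheduling and shutdown calls are the real java.util.concurrent API:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical worker: drains pending report requests and checks their status.
class ReportStatusMonitor implements Runnable {

    private final Queue<String> requestQueue;

    ReportStatusMonitor(Queue<String> requestQueue) {
        this.requestQueue = requestQueue;
    }

    public void run() {
        String requestId;
        while ((requestId = requestQueue.poll()) != null) {
            // Real code would call the report server here.
            System.out.println("Checking status of " + requestId);
        }
    }
}

class ReportScheduler {
    public static void main(String[] args) throws InterruptedException {
        // Thread safe by default; shared by all five pool threads.
        Queue<String> queue = new ConcurrentLinkedQueue<String>();
        queue.add("report-1");

        // Pool of five threads; start immediately, then run every 10 seconds.
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(5);
        scheduler.scheduleAtFixedRate(new ReportStatusMonitor(queue), 0, 10, TimeUnit.SECONDS);

        // Shutdown-hook territory: stop accepting tasks, wait, then force-kill.
        scheduler.shutdown();
        if (!scheduler.awaitTermination(30, TimeUnit.SECONDS)) {
            scheduler.shutdownNow();
        }
    }
}
```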

Implement the ServiceLifeCycle
Implementing ServiceLifeCycle allows you to put in shutdown hooks by implementing its destroy() method, where you control how the spawned threads are shut down or killed. The init(Object context) method must also be implemented.

Provide Shutdown Hooks
The shutDownMonitor() method is called from destroy(); it invokes the ExecutorService's shutdown() and then awaitTermination(), which waits up to a specified time (in seconds, minutes, or hours) for the spawned threads in the WS container to finish gracefully. If they take longer than the specified time, you can force all threads down by calling shutdownNow().

Use "application" Scoping
When you use "request" scope in your wsdd file, you create one instance of the service per request. If several requests are triggered, your application also spawns threads per request, and previously processed requests leave a trail of runaway processes. With "application" scope, only one instance of the service is created to serve multiple requests. Hence, when the application runs it does not create runaway processes, since the shutdown hook kills all spawned threads before the application completely shuts down.
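In an Axis deployment descriptor, for example, the scope is set with a service parameter; the service and class names below are placeholders:

```xml
<service name="ReportService" provider="java:RPC">
  <!-- One service instance for all requests, instead of one per request -->
  <parameter name="scope" value="Application"/>
  <parameter name="className" value="com.example.ReportServiceImpl"/>
  <parameter name="allowedMethods" value="*"/>
</service>
```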

It all depends on what an application is trying to achieve; it might have a more complex business process than this sample code and require more classes from the concurrency API. But in a multithreaded web service application, you definitely want to use application scope and implement a shutdown hook.