
· 2 min read
Alejandro Revilla

When you first get to see jPOS-EE it looks like a puzzle, which is great for our business :) but here you can find very simple instructions to get you going with a basic jPOS-EE UI setup. The basic idea behind the jPOS-EE build system is that we have modules that get merged at compile time, enabling and fostering code reuse among different jPOS-EE applications. We have some standard modules (you can find those in the modules directory) and optional modules (those are in the opt directory). In order to create your application, you need to either copy or symlink selected modules from the opt directory, and you obviously need to add your own modules. These step-by-step instructions use MySQL; you can use another RDBMS, but we suggest that you get the system going with MySQL first and then switch to your preferred one.

  • Get jPOS-EE using subversion

    svn checkout http://jposee.googlecode.com/svn/trunk/ jposee

  • Copy or symlink the following optional modules into your modules directory:

    commons -> ../opt/commons
    eecore3 -> ../opt/eecore3
    eeweb3 -> ../opt/eeweb3
    hibernate3 -> ../opt/hibernate3
    hibernate3_mysql -> ../opt/hibernate3_mysql
    jetty6 -> ../opt/jetty6
    jpublish4 -> ../opt/jpublish4
    status -> ../opt/status
    status_ui -> ../opt/status_ui

  • Create the database and grant permissions
    You can review the file devel.properties and change dbname, dbuser and dbpass, or you can stick with the defaults for this little exercise, i.e.:

    CREATE DATABASE jposee;
    GRANT ALL PRIVILEGES ON jposee.* TO 'sa'@'localhost';

  • Build jPOS-EE
    Call ant (you need a recent version of Ant; we are using 1.7.0, but 1.6.5+ would do).

  • Use a little Java or BeanShell script to create your DB and initial user
    You can use bin/bsh to invoke a BeanShell interpreter. If you are on Windows, you may want to use Cygwin or create yourself a BSH.BAT based on our Unix bin/bsh script, and run:

    import java.util.*;
    import org.jpos.ee.*;
    import org.hibernate.*;

    DB db = new DB();
    db.createSchema (null, true);
    db.open();

    Transaction tx = db.beginTransaction();
    User user = new User();             // user='admin', password='test'
    user.setNick ("admin");
    user.setPassword ("66d4aaa5ea177ac32c69946de3731ec0");
    user.setName ("System Administrator");
    user.grant ("login");
    user.grant ("operator");
    user.grant ("useradmin");
    user.grant ("sysconfig");
    user.grant ("admin");
    db.session().save (user);

    tx.commit();
    db.close();

  • Start jPOS-EE
    You can either call bin/q2, move to the build directory and call java -jar jposee.jar, or just call ant run (if you'd rather start Q2 from your own code, see the small sketch after this list).

  • Point your browser to http://localhost:8080/jposee
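
If you'd rather embed jPOS-EE in your own launcher instead of using the scripts above, a minimal sketch looks like this (assuming the default deploy directory; the exact Q2 constructors available depend on your jPOS version):

    import org.jpos.q2.Q2;

    public class Start {
        public static void main (String[] args) {
            // Q2 watches its deploy directory for XML descriptors and
            // starts/stops the corresponding services as files change.
            Q2 q2 = new Q2 ();   // uses the default "deploy" directory
            q2.start ();
        }
    }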

· 4 min read
Alejandro Revilla


The first few things that new jPOS users want to do within the first five minutes after downloading jPOS are:

  • Build it in Eclipse/Netbeans/IDEA/your-favorite-IDE-here.
  • Run it from a fifth-grade Java program with a lovely 10-page long main program where all components get initialized with hardcoded values for IPs and ports instead of using Q2
  • Get rid of our event log subsystem and use commons logging or log4j

If you don't recognize yourself in the previous list, then you might be interested to know that we have a nice new optional module in jPOS-EE called stloglistener. I came across StringTemplate just a few days ago after reading Florin's blog and decided to give it a spin. StringTemplate is just great, and learning it is addictive. Besides my plans to use it in our JPublish applications (we are currently using Velocity) and some of our mail-sending stuff, I decided to give it a quick run, and it turned out to be a very nice addition to jPOS. Now you can fully customize jPOS logs using the new org.jpos.util.STLogListener. In order to use it, you need to install the stloglistener module (available in the opt directory of the jposee subversion repository) and add it to your logger configuration:

  <log-listener class="org.jpos.util.STLogListener">
    <property name="path" value="cfg" />
    <property name="skin" value="jpos.stg" />
  </log-listener>

path is just the path to your template root directory (it defaults to the cfg directory). skin can be either an ST group (*.stg) or the base path where you place individual *.st files (e.g. cfg/myskin/org/jpos/iso/ISOMsg.st). Here is an example template for a LogEvent:

org_jpos_util_LogEvent(realm,date,tag,evt) ::= <<
### $date;format="dd/MM/yyyy hh:mm:ss.SSS"$ #############################
# $realm$/$tag$
#
$evt:{ $it$$\n$}$
>>

This creates less XML-ish logs with a line of hashes separating each log event:

### 25/08/2007 07:22:42.664 #############################
# org.jpos.simulator.TestRunner/results
#
Echo Test [OK] 47ms.
Card not found [OK] 21961ms.
Invalid processing code [OK] 68ms.
Card ok - expiration mismatch [OK] 218ms.
Card ok - invalid account [OK] 460ms.

You can override how any component is rendered just by adding a template for it. I tried a simple template for ISOMsg that prints the description of every field. I also tried a nice rfc822-like (mbox format) template that can be read by a mailer (nice to search, sort, etc.). You can easily create ANSI-coloured output, HTML output, protect some fields, whatever. ST works nicely with Maps and Beans, so we may have to add some getters to some of our classes in order to make them easier to customize. For example, I made org.jpos.transaction.Context.getMap() public so we can customize the output of the TransactionManager's Context like this:

org_jpos_transaction_Context(o) ::= <<
$o.map.keys:{k| $k$:$o.map.(k)$$\n$}$
>>

Just for fun, I made a VMS skin (told you this is addictive), up to the point that I'm considering turning on our old MicroVAX just to see if I remember my SYSTEM password (not planning on reloading VMS off 20 TK50 tapes... :)). I invite you to play with it and contribute some skins. For those of you not familiar with the jPOS logger subsystem, the main difference from other loggers is that we give the log listeners a reference to the live object, which usually contains multiple log entries associated with a given transaction. The listeners can act as filters by returning a new or modified log event (and that's how we can tweak the content to make it, for example, PCI compliant). I have some ideas for new uses for our new friend. I think we could serialize ISOMsgs in IMF format (you have an IMF, right?) using for example YAML, apply ST templates based on the destination endpoint, and re-load them off YAML (the whole operation can be wrapped in an ISOFilter, which could be less dangerous for an end user to customize than, for example, our BSHFilter).
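
To make the filter idea a bit more concrete, here is a minimal sketch of a custom log listener (the tag-based dropping rule is just an illustration, assuming LogEvent exposes its tag via getTag() and that returning null stops propagation):

import org.jpos.util.LogEvent;
import org.jpos.util.LogListener;

// Minimal filtering listener sketch: drop "debug"-tagged events and
// pass everything else through untouched. Returning null is assumed
// to stop the event from reaching the listeners that follow this one.
public class DropDebugListener implements LogListener {
    public LogEvent log (LogEvent ev) {
        if ("debug".equals (ev.getTag ()))
            return null;    // swallow the event
        return ev;          // let it through unchanged
    }
}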

· One min read
Alejandro Revilla

As of r2541 we have a new Space operation, Space.push. Space.push is similar to Space.out but places the entry at the head of the list. I tried hard to avoid adding new operations to the Space in order to keep it simple, but The Gladiators Team found an extremely interesting use case for it. Under heavy load, when a transaction gets resumed, it competes with new incoming transactions for its CPU slot, but this transaction has already competed once (at the beginning), and it's unfair to put it at the back of the queue again. With the new Space.push operation (implemented in r2541) and changes to the TransactionManager (implemented in r2542), a resumed transaction gets priority over a new one. jPOS-EE has been updated with a binary version of r2542.
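
Here is a minimal sketch of the difference between out and push, assuming the default in-memory space returned by SpaceFactory.getSpace():

import org.jpos.space.Space;
import org.jpos.space.SpaceFactory;

public class PushDemo {
    public static void main (String[] args) {
        Space sp = SpaceFactory.getSpace ();        // default in-memory space
        sp.out  ("queue", "new-transaction");       // appended at the tail
        sp.push ("queue", "resumed-transaction");   // placed at the head
        // 'in' takes entries from the head, so the pushed entry wins
        System.out.println (sp.in ("queue"));       // resumed-transaction
        System.out.println (sp.in ("queue"));       // new-transaction
    }
}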

· One min read
Alejandro Revilla

Here are, IMO, the top seven fallacies of jPOS applications:

  1. My single point of failure is very reliable.
  2. My online application is so fast that I can do most of my inherently batch operations in realtime.
  3. I can trust my RDBMS/Application Server, it has replication/clustering!
  4. My RDBMS is so fast that it will never be the bottleneck, so a caching strategy is not required.
  5. I don't need an offline reporting/querying system; I can retain more data and run my reports and ad-hoc queries against the online database.
  6. Flow and load control are not required; my jPOS application will handle peaks and remote endpoint slowdowns gracefully.
  7. Automated tests and load tests are not required; we coded the application very carefully.

· 2 min read
Alejandro Revilla


I spent last week working with Andy and an amazing team of top-notch developers who are building what I think is the biggest jPOS application ever. Code-named "The Gladiators Team", this group is developing a next-generation acquiring switch capable of delivering 900 tps of continuous throughput. I was glad to see they are pushing jPOS to its limits using the latest and greatest features, such as the TransactionManager's continuations support. We found and fixed some nasty bugs that were not showing up under light load; there's now a Context.resume() operation (you no longer need to re-queue a paused transaction), paused transactions can now have an expiration (they get automatically continued by the transaction manager), and we worked out a new interface for the MUX that will be able to operate asynchronously by using the new ISOResponseListener interface.

They have some really good requirements related to system monitoring, realtime re-configuration and stats collection, so you can expect changes in jPOS on that front during the following weeks (you can follow them in the jpos-commits feed). Working with this team for a full week was a learning experience. I learnt from their system architect (a very realistic, YAGNI-oriented type) that they use stochastics in their load simulation, a technique that allows them to simulate a production environment in a very realistic way. This makes a lot of sense: you don't usually get 900 tps from a single source, you get connections from thousands of different merchants and dozens of acquirers simultaneously producing different kinds of load. You can expect some improvements in our clientsimulator in the future. This is not a 900 tps peak system, it is a 900 tps continuous system (they can probably peak way above the 2000 tps mark). I got word yesterday that they managed to run the system at a full 900 tps for 12 hours (not bad, huh?).

The good news: four members of this team are now jPOS committers and their company is extremely open-source friendly, so we can expect great contributions from them in the near future. Ansar, Tushar, Manoj and Vishnu, welcome! I'm looking forward to seeing you again, guys (especially on Fridays :))

· One min read
Alejandro Revilla

For those of you using jPOS-EE's Web User Interface, you'll be happy to know that Florin is doing CPR on JPublish, the brilliant framework that we use in jPOS-EE. We are excited to know JPublish will continue to be supported, and I can't wait to get my hands dirty with the new JPublish+DWR integration. For those of you looking for a clean and elegant web framework, look no further: go visit JPublish's new home. Kudos, Florin!

· One min read
Alejandro Revilla


Google Techtalks: How Open Source Projects Survive Poisonous People

Hugo van der Zee (with the Cyclos project) pointed me to this nice Google TechTalk Video from the subversion guys.

We are a tiny little project compared with giant projects such as subversion, and we are lucky that we don't have the problems they describe, but it's good to be aware of them before they happen. Interestingly enough, I've identified the power-plant problem a couple of times over the years ("Hey, my new version of jPOS is almost ready!! I will send it for your review soon -- I'll call it jPOS 7").

Understanding that this is actually a problem is a good first step to solving it.

· One min read
Alejandro Revilla

jPOS 1.6.0 (aka jPOS 6) represents almost two years of hard work that includes bugfixes, performance tuning and new components as described in our ChangeLog, as well as a completely redesigned component-oriented build environment pioneered by the jPOS-EE project. jPOS 6, distributed under a GPLv2 license, is considered a stable release and is being used by Fortune 500 companies to process millions of transactions per day. You can download it here. We will move our development version to 1.6.1.

· 2 min read
Alejandro Revilla


I'm happy to announce a new and exciting TransactionManager feature -- drum roll... PAUSE... :) ... CONTINUATIONS!

You can now PAUSE a transaction (at a participant boundary) by returning PREPARED|PAUSE (or ABORTED|PAUSE).

The transaction gets suspended and resumes once you place it again in the TransactionManager's queue (i.e. by calling txnmgr.queue (Context)).

Why is this important to us?

Imagine you have a transaction manager configured to run 100 simultaneous sessions.

Even if 100 is a big number, when you depend on external components (such as a remote host) that can go slow, you can easily exhaust all sessions.

When you have a single remote host and it goes slow, there's nothing you can do about it, but when you connect to multiple acquirers and one of them goes slow, you don't want to penalize the rest.

This problem has been described here, where we use multiple TransactionManagers (one per remote host), each with a reasonable pre-configured number of sessions, and forward transactions from one transaction manager to the next.

With continuations, we can split a participant that has to wait on remote events (i.e. a QueryHost or QueryHSM kind of participant).

The first one just sends the request to the remote system and returns PREPARED|PAUSE.

The session becomes free and ready to process new transactions.

Once we receive a response, we just queue the Context back into the TransactionManager (I'll add a Context.resume() method to ease finding the right space and queue) and the transaction continues processing.
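
Here is a minimal sketch of such a split participant; the async send and its callback are made-up placeholders, the real point is returning PREPARED | PAUSE and re-queuing the Context when the response shows up:

import java.io.Serializable;
import java.util.function.Consumer;
import org.jpos.transaction.Context;
import org.jpos.transaction.TransactionParticipant;

// Sketch of a QueryHost-like participant: fire the remote request,
// pause the transaction (freeing the session), and let the response
// handler put the Context back into the TransactionManager's queue.
public class QueryHost implements TransactionParticipant {
    public int prepare (long id, Serializable context) {
        Context ctx = (Context) context;
        sendToRemoteHost (ctx, response -> {
            ctx.put ("RESPONSE", response);
            // re-queue ctx into the TransactionManager here
            // (or call ctx.resume() once that helper is available)
        });
        return PREPARED | PAUSE;   // free this session while we wait
    }
    public void commit (long id, Serializable context) { }
    public void abort  (long id, Serializable context) { }

    private void sendToRemoteHost (Context ctx, Consumer<Object> callback) {
        // placeholder: wire this to your MUX / channel / HSM driver
    }
}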

· 3 min read
Alejandro Revilla

A nice side effect of going up the food chain (in addition to getting tastier food) is that you get to interact with really smart and knowledgeable people looking for elegant solutions to complex and challenging problems.

We have production systems doing 600k txn/day, but we are now shooting for 2M/day.

We have systems that can do 300 sustained TPS for 30 minutes, but we now face a requirement to do 2000 TPS sustained for five hours.

In a recent meeting, while Andy was doing a walk-through of their OLS.Switch TransactionManager configuration, we were asked if the participants could run in parallel.

Although we tried to explain that the role of the TransactionManager is to assemble the flow of the transaction in an "execution chain" where the order has to be preserved, one of the engineers there actually had a very good point. There are many situations where you want to run things in parallel; for example, you can do some database validations while firing a PIN translation message to the HSM pool.

In a multi-database environment, you may want to simultaneously fire different queries to different databases. I liked the idea and started working on it as soon as the meeting was over. With the new org.jpos.transaction.participant.Join participant, you can convert something like this:

<participant class="com.your.company.DoPinEncryption">
...
</participant>
<participant class="com.your.company.QueryDatabase">
...
</participant>

into:

<participant class="org.jpos.transaction.participant.Join">
  <participant class="com.your.company.DoPinEncryption">
  ...
  </participant>
  <participant class="com.your.company.QueryDatabase">
  ...
  </participant>
</participant>

just by wrapping them inside the Join participant, and both will run in parallel (you are not limited to two inner participants; you can place as many as you want there). Join honors the AbortParticipant contract, so your inner participants get called even if the transaction is meant to abort.
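
For reference, the AbortParticipant contract is a small extension of TransactionParticipant; a sketch of an inner participant that still wants to run on aborting transactions could look like this (the cleanup itself is left as a placeholder):

import java.io.Serializable;
import org.jpos.transaction.AbortParticipant;

// Because this class implements AbortParticipant, prepareForAbort is
// called even when the transaction is already headed for an abort,
// so it still runs when wrapped inside a Join participant.
public class ReleaseResources implements AbortParticipant {
    public int prepare (long id, Serializable context) {
        return PREPARED | READONLY;
    }
    public int prepareForAbort (long id, Serializable context) {
        // release locks, return pooled connections, etc.
        return PREPARED | READONLY;
    }
    public void commit (long id, Serializable context) { }
    public void abort  (long id, Serializable context) { }
}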

I had to make some minor changes to the TransactionManager implementation, as we needed a reference to the QFactory in order to create the inner participants. In order not to break existing participants and to avoid adding a new interface, the TM uses reflection to attempt to push a reference to the TransactionManager to the participant at init time. If your participant has a method:

   public void setTransactionManager (TransactionManager mgr);

then it will get called. You can use that reference to pull a reference to the QFactory, which in turn uses the MBeanServer to instantiate the participants using an appropriate dynamic classloader. If you are still not using the TransactionManager in your jPOS application, you should consider giving it a try; we believe it's the way to go.
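
A minimal sketch of a participant that takes advantage of this hook (the prepare body is just a no-op placeholder):

import java.io.Serializable;
import org.jpos.transaction.TransactionManager;
import org.jpos.transaction.TransactionParticipant;

// Because this class exposes setTransactionManager, the TM pushes a
// reference to itself (via reflection) at init time; from there you
// can reach the QFactory and other TM-level facilities if needed.
public class TMAwareParticipant implements TransactionParticipant {
    private TransactionManager mgr;

    public void setTransactionManager (TransactionManager mgr) {
        this.mgr = mgr;   // called by the TM via reflection
    }
    public int prepare (long id, Serializable context) {
        return PREPARED | READONLY | NO_JOIN;
    }
    public void commit (long id, Serializable context) { }
    public void abort  (long id, Serializable context) { }
}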