Seven Fallacies of jPOS applications

/by apr/ Here are, IMO, the top seven fallacies of jPOS applications:

  1. My single point of failure is very reliable.
  2. My online application is so fast that I can do most of my inherently batch operations in realtime.
  3. I can trust my RDBMS/Application Server; it has replication/clustering!
  4. My RDBMS is so fast that it will never be the bottleneck, so a caching strategy is not required.
  5. I don't need an offline reporting/querying system; I can retain more data and run my reporting and ad-hoc queries against the online database.
  6. Flow and load control are not required; my jPOS application will handle peaks and remote endpoint slowdowns gracefully.
  7. Automated tests and load tests are not required; we coded the application very carefully.

The Gladiators Team

/by apr/

I've spent the last week working with Andy and an amazing team of top-notch developers who are building what I think is the biggest jPOS application ever. Code-named "The Gladiators Team", this group is developing a next-generation acquiring switch capable of delivering 900 tps continuous throughput.

I was glad to see they are pushing jPOS to its limits using the latest and greatest features, such as the TransactionManager's continuations support. We found and fixed some nasty bugs that were not showing up under light load; there's now a Context.resume() operation (you no longer need to re-queue a paused transaction), paused transactions can now have an expiration (they get automatically continued by the transaction manager), and we worked out a new interface for the MUX that will be able to operate asynchronously by using the new ISOResponseListener interface.

They have some really good requirements related to system monitoring, realtime re-configuration and stats collection, so you can expect changes in jPOS on that front during the following weeks (you can follow them in the jpos-commits feed).

Working with this team for a full week was a learning experience. I learnt from their system architect (a very realistic, YAGNI-oriented type) that they use stochastics in their load simulation, a technique that allows them to simulate a production environment in a very realistic way. This makes a lot of sense: you don't usually get 900 tps from a single source, you get connections from thousands of different merchants and dozens of acquirers simultaneously producing different kinds of load. You can expect some improvements in our client simulator in the future.

This is not a 900 tps peak system, it is a 900 tps continuous system (they can probably peak way above the 2000 tps mark). I got word yesterday that they managed to run the system at full 900 tps for 12 hours (not bad, huh?).
The good news: four members of this team are now jPOS committers, and their company is extremely open-source friendly, so we can expect great contributions from them in the near future. Ansar, Tushar, Manoj and Vishnu, welcome! I'm looking forward to seeing you again, guys (especially on Fridays :)).

How Open Source Projects Survive Poisonous People

/by apr/

Google Techtalks: How Open Source Projects Survive Poisonous People

Hugo van der Zee (of the Cyclos project) pointed me to this nice Google TechTalk video from the Subversion guys.

We are a tiny little project compared with giant projects such as Subversion, and we are lucky that we don't have the problems they describe, but it's good to be aware of them before they happen. Interestingly enough, I've identified the power-plant problem a couple of times over these years ("Hey, my new version of jPOS is almost ready!! I'll send it for your review soon – I'll call it jPOS 7").

Understanding that this is actually a problem is a good first step to solving it.

jPOS 1.6.0 has been released

/by apr/ jPOS 1.6.0 (aka jPOS 6) represents almost two years of hard work, including bugfixes, performance tuning and new components as described in our ChangeLog, as well as a completely redesigned component-oriented build environment pioneered by the jPOS-EE project. jPOS 6, distributed under a GPLv2 license, is considered a stable release and is being used by Fortune 500 companies to process millions of transactions per day. You can download it here. We will move our development version to 1.6.1.

TransactionManager Continuations

/by apr/

I’m happy to announce a new and exciting TransactionManager feature – drum roll… PAUSE… :) … CONTINUATIONS!

You can now PAUSE a transaction (at a participant boundary) by returning PREPARED|PAUSE (or ABORTED|PAUSE).

The transaction gets suspended and resumes once you place it again in the TransactionManager’s queue (i.e. by calling txnmgr.queue (Context)).

Why is this important to us?

Imagine you have a transaction manager configured to run 100 simultaneous sessions.

Even if 100 is a big number, when you depend on external components (such as a remote host) that can go slow, you can easily exhaust all sessions.

When you have a single remote host and it goes slow, there’s nothing you can do about it, but when you connect to multiple acquirers and one of them goes slow, you don’t want to penalize the rest.

This problem has been described here where we use multiple TransactionManagers (one per remote host) with a pre-configured reasonable number of sessions and we forward transactions from one transaction manager to the next.

With continuations, we can split a participant that has to wait on remote events (i.e. a QueryHost or QueryHSM kind of participant).

The first one just sends the request to the remote system and returns PREPARED|PAUSE.

The session gets free and ready to process new transactions.

Once we receive a response, we just queue the Context back into the TransactionManager (I’ll add a Context.resume() method to ease finding the right space and queue) and the transaction continues processing.
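The split described above can be sketched in plain Java. Everything below – the flag values, the Participant interface, and the class names QueryHostSend/QueryHostReceive – is an illustrative stand-in for this post's idea, not the real jPOS API:

```java
// Illustrative sketch of splitting a QueryHost-style participant at a
// PAUSE boundary. Names and constant values are stand-ins, not jPOS's.
public class PauseSketch {
    // illustrative result flags; the actual jPOS constant values may differ
    public static final int PREPARED = 1;
    public static final int PAUSE    = 2;

    public interface Participant {
        int prepare (long id, Object ctx);
    }

    // phase 1: fire the request to the remote host, then free the session
    public static class QueryHostSend implements Participant {
        public int prepare (long id, Object ctx) {
            // an async mux.request(...) with a response listener would go
            // here; the listener later queues the Context back on the TM
            return PREPARED | PAUSE;   // suspend the transaction here
        }
    }

    // phase 2: runs after the response handler re-queues the Context
    public static class QueryHostReceive implements Participant {
        public int prepare (long id, Object ctx) {
            // examine the response the listener placed in the Context
            return PREPARED;
        }
    }
}
```

The key point is that between the two phases no TransactionManager session is held, so a slow remote host no longer starves the pool.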

Parallel Processing

/by apr/ A nice side effect of going up the food chain (in addition to getting tastier food) is that you get to interact with really smart and knowledgeable people looking for elegant solutions to complex and challenging problems.

We have production systems now doing 600k txn/day, but we are now shooting for 2M/day.

We have systems that can do 300 sustained TPS for 30 minutes, but we now face a requirement to do 2000 TPS sustained for five hours.

In a recent meeting, while Andy was doing a walk-through of their OLS.Switch TransactionManager’s configuration, we were asked if the participants could run in parallel.

Although we tried to explain that the role of the TransactionManager is to assemble the flow of the transaction in an “execution chain” where order has to be preserved, one of the engineers there actually had a very good point. There are many situations where you want to run things in parallel; for example, you can do some database validations while firing a PIN translation message to the HSM pool.

In a multi-database environment, you may want to simultaneously fire different queries to different databases. I liked the idea and started to work on it as soon as the meeting was over. With the new org.jpos.transaction.participant.Join participant, you can convert something like this:

<participant class="com.your.company.DoPinEncryption">
 ...
</participant>
<participant class="com.your.company.QueryDatabase">
 ...
</participant>

into:

<participant class="org.jpos.transaction.participant.Join">
  <participant class="com.your.company.DoPinEncryption">
   ...
  </participant>
  <participant class="com.your.company.QueryDatabase">
   ...
  </participant>
</participant>

just by wrapping them inside the Join participant, and both run in parallel (you are not limited to just two inner participants; you can place as many as you want there). Join honors the AbortParticipant contract, so your inner participants get called even if the transaction is meant to abort.
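The fan-out/join idea behind this can be sketched in plain Java. This is an illustrative stand-in – the Participant interface, PREPARED constant, and the way results are combined are assumptions for the sketch, not the actual jPOS Join implementation:

```java
import java.util.*;
import java.util.concurrent.*;

// Illustrative sketch of a join over parallel participants: fan each inner
// prepare() out to a thread pool and AND the results, so the join prepares
// only if every inner participant does.
public class JoinSketch {
    public static final int PREPARED = 1;   // stand-in result flag

    public interface Participant {
        int prepare (long id, Map<String,Object> ctx);
    }

    public static int joinPrepare (long id, Map<String,Object> ctx,
                                   List<Participant> inner) {
        ExecutorService pool = Executors.newFixedThreadPool (inner.size());
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (Participant p : inner)
                futures.add (pool.submit (() -> p.prepare (id, ctx)));
            int rc = PREPARED;
            for (Future<Integer> f : futures)
                rc &= f.get();              // all must prepare
            return rc;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException (e);
        } finally {
            pool.shutdown();
        }
    }
}
```

Note that the inner participants share the same Context map, which is why a thread-safe map (such as ConcurrentHashMap) matters when they run concurrently.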

I had to make some minor changes to the TransactionManager implementation, as we needed a reference to the QFactory in order to create the inner participants. In order not to break existing participants and to avoid adding a new interface, the TM uses reflection to attempt to push a reference to the TransactionManager to the participant at init time. If your participant has a method:

public void setTransactionManager (TransactionManager mgr);

then it will get called. You can use that reference to pull a reference to the QFactory, which in turn uses the MBeanServer to instantiate the participants using an appropriate dynamic classloader. If you are still not using the TransactionManager in your jPOS application, you should consider giving it a try; we believe it’s the way to go.
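The reflection-based, opt-in injection described above can be sketched like this; TransactionManager and the participant classes here are local stand-ins, not the real jPOS classes:

```java
import java.lang.reflect.Method;

// Sketch of init-time injection via reflection: try the well-known setter,
// and quietly skip participants that don't declare it.
public class TMInjector {
    public static class TransactionManager { }
    public interface Participant { }

    // a participant opts in simply by declaring the setter
    public static class AwareParticipant implements Participant {
        public TransactionManager mgr;
        public void setTransactionManager (TransactionManager mgr) {
            this.mgr = mgr;
        }
    }

    // a legacy participant without the setter keeps working untouched
    public static class PlainParticipant implements Participant { }

    public static void inject (Participant p, TransactionManager tm) {
        try {
            Method m = p.getClass().getMethod (
                "setTransactionManager", TransactionManager.class);
            m.invoke (p, tm);
        } catch (NoSuchMethodException ignored) {
            // participant is not TM-aware; nothing to do
        } catch (Exception e) {
            throw new RuntimeException (e);
        }
    }
}
```

The design choice is worth noting: reflection keeps old participants binary-compatible, since no new interface has to be implemented.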

jPOS and Cyclos

/by apr/ I was introduced to the Strohalm NGO and the Cyclos project a long time ago by Todd Boyle, but today I was honored by a courtesy visit from the Cyclos project lead, Mr. Hugo van der Zee. I’ve seen an impressive demo of the upcoming Cyclos 3, and Hugo was introduced to jPOS and miniGL. We found many areas of collaboration:

  • Cyclos is going to support [m]POS transactions. jPOS is a natural fit there.
  • Cyclos has its own internal ad-hoc accounting system. They will seriously evaluate the possibility of moving to the more generic miniGL.
  • Both projects are highly aligned regarding the tools and license we use, and complement each other pretty well.

Hugo and I will be working on the integration, but we are also looking for a skilled jPOS developer in our area to work full time on it (interested parties, please contact us). Besides the technical aspects of Cyclos and jPOS, their implementations of community currencies are amazing and something really good for weak economies. Hugo has a background in economics, and I was lucky to get a first-hand explanation of the rationale behind them. I was proud to learn that our government, through our major bank BROU, sponsored by EMPRETEC, is actively supporting an implementation here. I'm sure the collaboration and integration will be good for both projects, so welcome, Cyclos!

miniGL multi-currency support

/by apr/ As a follow-up to the recent post about miniGL layers, we have removed support for foreign amounts at the transaction entry level; foreign currencies are now handled in their own layer. That means that if you have a checking account and a savings account (we usually figure out the account type based on the content of the processing code field) in multiple currencies, you don't need a separate set of accounts (one per type): the same account can hold transactions in multiple currencies.

As mentioned in the previous post, we use layers to deal with pending versus real transactions (think pre-auth/completion), and we found an easy way to add multi-currency support by defining a PENDING_OFFSET constant. We are still working on the layers plan for our jCard system, but as an initial take, we have defined PENDING_OFFSET=1000 and we store transactions in their corresponding layer using the ISO 4217 numeric currency code, so e.g. we store real dollars in layer 840 and pending dollars in layer 1840 (the same goes for other currencies). So far the new setup looks extremely simple and straightforward.

A close friend and CPA has contributed a very nice idea to the mix: conversion layers. I'm still not sure if we want to handle them with the existing database schema or whether we want some supporting tables for that; it has been good food for thought for now.
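The layer convention described above boils down to simple arithmetic. The helper below is just an illustration of that convention (the class and method names are made up, not part of miniGL):

```java
// Illustrative helper for the layer plan described in the post:
// real amounts live in the layer named by the ISO 4217 numeric currency
// code, pending (pre-auth) amounts live PENDING_OFFSET above it.
public class LayerPlan {
    public static final int PENDING_OFFSET = 1000;

    // e.g. real US dollars -> layer 840
    public static int realLayer (int iso4217NumericCode) {
        return iso4217NumericCode;
    }

    // e.g. pending US dollars -> layer 1840
    public static int pendingLayer (int iso4217NumericCode) {
        return iso4217NumericCode + PENDING_OFFSET;
    }
}
```

Because both layers are derived from the same currency code, adding a new currency requires no schema change – only a new pair of layer numbers.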