I've recently embarked on an ambitious project to develop new jPOS documentation, which is currently in its nascent stages. This tutorial is designed to evolve in tandem with ongoing jPOS development, and in time it will replace the existing documentation to become a dynamic, live tutorial and reference.
As it stands, the tutorial is in its early stages, but it already offers valuable insights for newcomers to jPOS, particularly with the setup primer and QBean information. Veterans of jPOS will find the sections on 'Environments' insightful, especially for applications in cloud deployments. This is also a prime opportunity to test and experience the jPOS Gradle Plugin in action.
Alongside the tutorial, we have a companion project on GitHub. It's structured with distinct branches for each part of the tutorial, enabling you to access and execute the examples easily.
Since this is a work in progress and will be evolving alongside jPOS, your feedback is incredibly valuable. We encourage you to share your thoughts, suggestions, and insights, which will be instrumental in shaping this tutorial into a comprehensive and up-to-date resource for all things jPOS.
Here is the link.
]]>As applications grow and we move from a handful of participants to several dozen, or even hundreds, it becomes very useful to put some constraints on what a participant can or can't do with its Context.
On large jPOS applications, created by disparate teams, the TransactionManager configuration file is usually an excellent self-documenting starting point in order to figure out which transactions are supported, which participants are involved and how they are configured.
In addition, it is common for a participant's implementation, before doing its job, to check the Context for the presence of its needed input variables. CheckFields, for example, needs a REQUEST object; SendResponse needs a RESPONSE object.
To solve this, we've introduced a new requires element, verified by the TransactionManager before calling the participant's prepare method, that looks like this:
<participant class="org.jpos.transaction.CheckFields">
<requires>REQUEST</requires>
</participant>
In this case, the TM checks for the presence of a REQUEST entry in the Context. If it’s not present, the transaction aborts right away (as if the participant had returned ABORTED).
This is just syntactic sugar, as the participant can (and actually does) check that, but there's a little twist.
The participant does not receive the whole Context; it just receives a clone of it with JUST the entries described in the <requires> element (which, BTW, can be a comma-separated list).
In addition to the requires element, we support an optional element, i.e.:
<participant class="org.jpos.transaction.CheckFields">
<requires>REQUEST</requires>
<optional>TIMESTAMP, TIMEZONE</optional>
</participant>
You get the idea: if the optional TIMESTAMP and TIMEZONE entries are present, they get copied to the mini-Context used to call the participant.
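Conceptually, building that mini-Context is straightforward. Here's a plain-Java sketch of the idea (Map-based, with a hypothetical helper name; this is not the actual TransactionManager code):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MiniContext {
    /**
     * Build the reduced context handed to a participant: all required
     * entries must be present (otherwise return null, i.e. the transaction
     * aborts as if the participant had returned ABORTED); optional entries
     * are copied only when present.
     */
    public static Map<String, Object> build(
            Map<String, Object> ctx, List<String> requires, List<String> optional) {
        Map<String, Object> mini = new HashMap<>();
        for (String key : requires) {
            if (!ctx.containsKey(key))
                return null;                  // missing required entry => abort
            mini.put(key, ctx.get(key));
        }
        for (String key : optional) {
            if (ctx.containsKey(key))         // copied only if present
                mini.put(key, ctx.get(key));
        }
        return mini;
    }
}
```

Note how entries not listed in either element simply never reach the participant.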
A similar approach is used to selectively capture the output of a participant. We use the provides element like this:
<participant class="org.jpos.transaction.CreateDSRequest">
<requires>REQUEST</requires>
<provides>RESPONSE</provides>
</participant>
The TransactionManager, which has access to both the original Context and the mini-Context offered to the participant, merges back ONLY the entries listed in the provides element.
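The merge-back side can be sketched the same way, in plain Java with a hypothetical helper (again, not the real TransactionManager code): only the declared entries survive, anything else the participant put in its mini-Context is dropped.

```java
import java.util.List;
import java.util.Map;

public class ProvidesMerge {
    /**
     * Merge only the declared "provides" entries from the participant's
     * mini-Context back into the original Context; scratch entries the
     * participant may have created are silently discarded.
     */
    public static void mergeBack(
            Map<String, Object> ctx, Map<String, Object> mini, List<String> provides) {
        for (String key : provides) {
            if (mini.containsKey(key))
                ctx.put(key, mini.get(key));
        }
    }
}
```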
This is an experimental preview feature implemented in jPOS 3.0 (next). It needs to be battle-tested in order to see whether it makes sense to keep it, or it's just a useless idea. If we like it, and we continue supporting it, we may choose to enforce it on every participant. In a second phase we could define a TM-level promiscuous property, so that anyone who wants the old behavior has to set it to true, either globally at the TM config level or at the participant level, perhaps with a promiscuous="true" attribute.
See ChangeLog for details.
We will continue to maintain the 2.x.x series for a long while, so you can consider the 2.x.x series an LTS version, but all the fun and new features will happen in 3.x.x.
]]>In order to simplify its configuration, Q2 now supports templates that can be launched from a template deployer, or programmatically using Q2's deployTemplate method.
A template is identical to a regular QBean XML descriptor, and can be placed anywhere in the filesystem or classpath (including the dynamic classpath), except that it contains a special keyword, __PREFIX__, which is replaced at deploy time.
Q2's deployTemplate
method has the following signature:
deployTemplate(String resource, String filename, String prefix);
- resource is the resource location (e.g.: templates/channel.xml). If the resource starts with jar:, Q2 fetches it from the classpath (e.g.: jar:META-INF/q2/templates/channel.xml).
- filename is the QBean descriptor filename, e.g.: 10_acme_channel.xml.
- prefix is the Environment prefix (see below).

Imagine you have an environment file (cfg/default.yml) that looks like this:
acme:
  host: 192.168.1.1
  port: 8000
emca:
  host: 192.168.1.2
  port: 8001
In a regular deployment, you could have two channel configurations like this:
10_channel_acme.xml
<channel-adaptor name="acme-channel-adaptor" ...>
<channel class="${acme.channel}" packager=...>
<property name="host" value="${acme.host}" />
<property name="port" value="${acme.port}" />
...
...
</channel>
<in>acme-send</in>
<out>acme-receive</out>
</channel-adaptor>
10_channel_emca.xml
<channel-adaptor name="emca-channel-adaptor" ...>
<channel class="${emca.channel}" packager=...>
<property name="host" value="${emca.host}" />
<property name="port" value="${emca.port}" />
...
...
</channel>
<in>emca-send</in>
<out>emca-receive</out>
</channel-adaptor>
With templates, you can have a single template like this somewhere in your
filesystem or classpath (say templates/channel.xml
).
<channel-adaptor name="__PREFIX__-channel-adaptor"
enabled="${__PREFIX__.enabled:false}"
class="org.jpos.q2.iso.ChannelAdaptor"
logger="${__PREFIX__.logger:Q2}">
<channel class="${__PREFIX__.channel}"
logger="${__PREFIX__.logger:Q2}"
packager="${__PREFIX__.packager:org.jpos.iso.packager.GenericPackager}">
<property name="host" value="${__PREFIX__.host}" />
<property name="port" value="${__PREFIX__.port}" />
<property name="header" value="${__PREFIX__.header}" />
<property name="connect-timeout" value="${__PREFIX__.timeout:5000}" />
<property name="packager-config" value="${__PREFIX__.packagerConfig}" />
</channel>
<in>__PREFIX__-send</in>
<out>__PREFIX__-receive</out>
<reconnect-delay>${__PREFIX__.reconnect:10000}</reconnect-delay>
<wait-for-workers-on-stop>${__PREFIX__.wait:yes}</wait-for-workers-on-stop>
</channel-adaptor>
and perhaps a mux.xml
connected to it:
----
<mux name="__PREFIX__" class="org.jpos.q2.iso.QMUX"
enabled="${__PREFIX__.enabled:false}"
logger="${__PREFIX__.logger:Q2}" realm="__PREFIX__">
<key>${__PREFIX__.mux.key:41,11}</key>
<in>__PREFIX__-receive</in>
<out>__PREFIX__-send</out>
<ready>__PREFIX__-channel.ready</ready>
</mux>
----
Then you can deploy a QBean descriptor like this:
<templates>
<template resource="templates/channel.xml" descriptor-prefix="10_channel_">
acme,emca
</template>
<template resource="templates/mux.xml" descriptor-prefix="20_mux_">
acme,emca
</template>
</templates>
The special text __PREFIX__ is replaced by Q2 in each file using the prefixes acme and emca, so the properties:
<property name="host" value="${__PREFIX__.host}" />
<property name="port" value="${__PREFIX__.port}" />
get converted to:
<property name="host" value="${acme.host}" />
<property name="port" value="${acme.port}" />
Very simple, it's just configuration sugar, but it comes in very handy in deployments with hundreds of services.
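At its core, the mechanism is just text substitution. A minimal plain-Java sketch of the expansion step (hypothetical helper names; not Q2's actual implementation):

```java
public class TemplateExpander {
    /** Replace the special __PREFIX__ keyword with the given prefix. */
    public static String expand(String template, String prefix) {
        return template.replace("__PREFIX__", prefix);
    }

    /** Descriptor filename for a given prefix, e.g. "10_channel_" + "acme" + ".xml". */
    public static String descriptorName(String descriptorPrefix, String prefix) {
        return descriptorPrefix + prefix + ".xml";
    }
}
```

Applying `expand` once per prefix listed in the template element yields one deployed descriptor per prefix.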
This is available in jPOS Next (series 3.x.x) starting in 3.0.0-SNAPSHOT, available in the jPOS Maven repo, as well as the GitHub next branch.
It is common practice to store a TIMESTAMP entry at the beginning of the transaction, and then have each participant check how it's going, so to speak. If the transaction has spent more than a threshold amount of time, it's better to abort the transaction right away, because the client (i.e. a POS device) has probably expired the transaction already and is not waiting for our response anymore.
We can do that on each participant, or we can have a general-purpose timeout participant that we place here and there.
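The decision such a participant makes can be sketched in plain Java (the names and constants below are stand-ins, and the jPOS wiring is omitted; this is just the elapsed-time logic):

```java
public class TimeoutCheck {
    // Stand-ins for the jPOS participant return values
    public static final int PREPARED = 1;
    public static final int ABORTED  = 2;

    /**
     * Given the TIMESTAMP stored at the beginning of the transaction,
     * abort when the elapsed time exceeds the client's patience
     * (maxTimeMillis); otherwise let the transaction continue.
     */
    public static int check(long txnStartMillis, long nowMillis, long maxTimeMillis) {
        return (nowMillis - txnStartMillis) > maxTimeMillis ? ABORTED : PREPARED;
    }
}
```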
But because this is a common use case, to encourage its use, in jPOS Next the TransactionManager honors a couple of new attributes at the participant level:
- timeout
- max-time
e.g.:
...
...
<participant class="org.jpos.jcard.CheckFields">
<property name="mandatory" value="PCODE,7,11,12,AMOUNT,PAN,41" />
</participant>
<participant class="org.jpos.jcard.SelectCurrency" />
<participant class="org.jpos.jcard.CreateTranLog" timeout="1000" max-time="5000">
<property name="capture-date" value="capture-date" />
<property name="checkpoint" value="create-tranlog" />
<property name="node" value="${jcard.node:99}" />
</participant>
...
...
In this example, three things can happen:

- If, by the time the TransactionManager is about to call the CreateTranLog participant, the transaction has taken more than 5000ms (max-time) thus far, the participant is not called, and the transaction continues with the ABORTED route.
- If the participant takes longer than its 1000ms timeout and returns PREPARED, we coerce the response to ABORTED.
- If the participant returns ABORTED on its own, the result is, of course, ABORTED.

When Java started its new release cadence, we kept the baseline source compatibility at Java 8, while making sure jPOS was able to run on newer JDKs, but that prevented us from taking advantage of new language features. There were features, like the long-awaited multi-line Strings, that were nice to have but one could still live without; and then there was one very interesting one for us, Project Loom (formerly fibers), which almost made it into Java 17, and which we want to test at some large sites along with the TransactionManager.
Sealed classes, Records, switch pattern matching, and preparing for JEP 411 (which deprecates the Security Manager) are all things we want to play with.
In addition, there are some changes we want to try in jPOS that would certainly rock the boat, related to audit logging (yes, we want to be aggregator/JSON/ELK friendly), using Strings instead of Integers as ISOMsg keys (so one doesn't have to create an arbitrary mapping on EMV-related messages), Fluent APIs to create components for those who hate XML (funny, people find XML configs archaic but still love pom.xml), in-memory deployments, per-participant profiling in the TransactionManager, etc.
In addition, we need to get rid of some baggage. The compat_1_5_2 module will remain available in the repo as a reference, but it doesn't have to live in the main tree. Some AWT/Swing-based classes may go back to jPOS-EE as a module, and we may add a sibling module using JavaFX in order to see if it gains traction (JavaFX looks promising).
The CLI is up for a refactor. It's super useful for implementing jPOS commands, but the API, initially very tied to JLine, has proven to be somewhat problematic once we added SSH support. We can do better on that side.
So... here is what we are planning to do:
For the time being, there's a next branch (already available on GitHub) that builds jPOS 3.0.0-SNAPSHOT.
The master branch currently has jPOS 2.1.8-SNAPSHOT, but once we release 2.1.8, we'll switch next to become master (or take the opportunity to rename it to main) and use 2.1.8 as the forking point for an LTS v2 (or similar name) branch.
See ChangeLog for details.
Work has been started toward jPOS 3.0.0 targeting Java 17 as a baseline.
We will continue to maintain the 2.x.x series for a long while, so you can consider the 2.x.x series an LTS version, but all the fun and new features will happen in 3.x.x.
]]>New development version is 2.2.8-SNAPSHOT
.
ChangeLog here.
]]>Please see the ChangeLog.
See Resources Page for details.
]]>Please see the ChangeLog.
See Resources Page for details.
]]>Recently we spent two or three days hunting down a bunch of weird behaviors on a client's project.
It was a typical jPTS setup, and we were testing a Source Station (SS), talking to the central jPTS, talking to a destination station (DS). The different subsystems were chained together with the standard set of QueryHost
+ QMUX
+ ChannelAdaptor
pointing to the QServer
down the line.
The client wanted to do some stress load testing, and was using JMeter ^*^ with the jPOS-based ISO-8583 plugin (sending a very high load of messages to the SS).
Despite being a "standard setup", there were several custom features that were giving us all kinds of trouble (it's never a standard setup, is it?). The initial implementation was performing very badly with respect to transactions per second (TPS), and was also losing many transactions due to timeouts. We had to isolate the different sources of the problems (extra queries to a very slow parallel database, a DB connection pool too small for the expected concurrency, an HSM that would choke when being hit with some ridiculously high number of TPS...). Things got much better after some refactoring, but we still got many responses with a timeout error code, or just dropped transactions, for which we never got a response. When analyzing the logs further, we saw some warnings from the multiplexers about duplicate keys, and we decided it was better to use a different key set than QMUX's default 11, 41 (the terminal id, field 41, would often be constant, so we decided to add field 37, the Retrieval Reference Number, to the key set).
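The effect of widening the key set can be illustrated with a toy sketch (plain Java with a hypothetical helper; QMUX builds its real matching key from the configured ISO-8583 fields):

```java
import java.util.Map;

public class MuxKey {
    /** Build a matching key by concatenating the values of the given fields. */
    public static String key(Map<Integer, String> fields, int... fieldNumbers) {
        StringBuilder sb = new StringBuilder();
        for (int f : fieldNumbers)
            sb.append(fields.getOrDefault(f, "")).append('.');
        return sb.toString();
    }
}
```

With a constant terminal id in field 41, two different requests sharing a STAN collide under the default 11, 41 key set, but become distinguishable once field 37 is added.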
But still, many transactions were timing out... until we detected something even stranger: RRNs, or other fields, would change in the response, returning a value that belonged to a different request. Our ISOMsg's were undergoing some weird genetic recombination!
Now, we should clarify that this system was developed mostly by the client, so we weren't familiar with all the details of their code and logic hidden in some of their custom classes. We were told that the DS was supposed to talk to a Mastercard remote node, but that at the moment it only had some "autoresponder simulator" in order to perform these tests. We imagined a simple RequestListener
that just changed and added a couple of fields, with a standard success response in field 39
.
In reality, the "autoresponder simulator" was a full transaction manager with several participants, one of which was a not-so-simple equivalent of the following (here, extremely simplified, for clarity and brevity):
public class SomeParticipant implements TransactionParticipant, Configurable {
    private String requestKey;  // GOOD
    private ISOMsg theMsg;      // BAD; DON'T DO!

    private void readRequestFromContext(Context ctx) {
(1)     theMsg = (ISOMsg) ctx.get(requestKey);  // BAD; DON'T DO!
        // ...manipulate theMsg...
    }

    public int prepare(long id, Serializable context) {
        Context ctx = (Context) context;
(2)     readRequestFromContext(ctx);             // OOPS!
(3)     theMsg.set(38, getApprovalCode());       // is theMsg still the same object...
(4)     theMsg.set(39, getResponseCode());       // ...and now?
(5)     ctx.put(Constants.DS_RESPONSE, theMsg);  // what about here?
        return PREPARED | NO_JOIN;
    }

    public void setConfiguration(Configuration cfg) throws ConfigurationException {
        requestKey = cfg.get("request-key", "REQUEST"); // GOOD
    }
}
Can you spot the problem? The participant is using an instance variable, called theMsg
, to store a reference to the request message in line (1)
. And what's wrong with that? As explained in Chapter 9 of the jPOS Programmer’s Guide, the TransactionManager
creates only one instance of each participant, but there may be multiple concurrent transactions (i.e. TransactionManager
sessions) running. So, by using an instance member variable to store the value retrieved in line (1)
, each concurrent transaction (thread) being processed may be overwriting that value with a different ISOMsg
. In fact, at each step where we use theMsg
, such as in lines (2)
, (3)
, and (4)
, we could be working with a different ISOMsg
!
Also, notice how it's perfectly OK to use instance variables to store configuration options for this participant. These are usually read at the time of participant instantiation and initialization, and set in the setConfiguration()
method. Those values shouldn't normally change during the life of the participant.
So, what should we do? Always use local variables, pass the values as method arguments, or store them in the Context object and retrieve them from there.
Never use the instance variables of a participant to store transaction state, or you may incur hard-to-debug race conditions.
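The failure mode is easy to reproduce deterministically by simulating how two concurrent sessions interleave. The classes below are plain-Java stand-ins (not the jPOS types), but the bug is the same:

```java
import java.util.Map;

public class SharedStateDemo {
    /** BAD: stores per-transaction state in an instance field. */
    public static class BadParticipant {
        private String theMsg;                           // shared by ALL sessions!
        public void readRequest(Map<String, Object> ctx) {
            theMsg = (String) ctx.get("REQUEST");
        }
        public String respond() {
            return theMsg + "/approved";
        }
    }

    /** GOOD: per-transaction state stays in local variables (or the Context). */
    public static class GoodParticipant {
        public String prepare(Map<String, Object> ctx) {
            String msg = (String) ctx.get("REQUEST");    // local: one copy per call
            return msg + "/approved";
        }
    }

    /** Simulate session B preempting session A between read and respond. */
    public static String badInterleaved(Map<String, Object> ctxA, Map<String, Object> ctxB) {
        BadParticipant p = new BadParticipant();         // single shared instance, as in the TM
        p.readRequest(ctxA);                             // session A reads its request...
        p.readRequest(ctxB);                             // ...session B overwrites theMsg...
        return p.respond();                              // ...and A gets B's data back
    }
}
```

The bad participant answers session A with session B's message; the good one cannot, because each call owns its own copy.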
(*) We may have a blog post about using JMeter with jPOS in the near future.
]]>Please see the ChangeLog.
See Resources Page for details.
]]>Please see the ChangeLog.
See Resources Page for details.
]]>Please see the ChangeLog.
See Resources Page for details.
]]>Among the many helpful jPOS developers in our community, @marklsalter stands out for his professional, accurate, and detailed answers, but he has his standards when it comes to how you ask questions. You need to ask a smart question. jPOS has Mark (and Victor, and Andy, and Dave, and Matias, and Chhil, and Barzi, and, and, and), but if you go to any other open source community asking for free advice, you'll find another Mark, or, worse than that, you'll find no Mark, and your question will just get ignored and you won't even know why.
This is what you should expect as a response if you don't do your homework and ask a smart question (from a recent reply in jpos-users, in this case related to a vague question about the Transaction Manager; it could have been anything else).
Please always start by asking a smart question.
Please read this now :-
http://www.catb.org/esr/faqs/smart-questions.html
Yes, the whole thing, go on, treat yourself, it will take 5-10 minutes and save us both hours going forward.
Preparing to ask a smart question should cause you to read the available documentation, to understand it and enable you to make sure you include all the relevant details needed for another remote person to help you, but should also make sure you have understood the documentation and how it applies to your need.
I can honestly say that on this opening post that I could see that it was going to be another thread that would drag on - without you ...
By the way, I understand that a TM configuration might not be obvious straight away, but often the best things are worth the effort. I still refer to the documentation again and again (and again).
I will include some comments below too, in the hope that you read them and take the time next time to ask a smart question.
Remember as you read through that I am not taking the piss out of you, but trying to highlight why this is a terrible opening question and how you can (hopefully) help yourself next time.
]]>Please see the ChangeLog.
See [Resources Page][2] for details.
]]>The cryptoservice module uses AES-256 to encrypt sensitive data, such as primary account numbers, and protects the encryption key using PGP.
At start-up time, and at regular intervals, the crypto service generates a new AES-256 key, encrypts it using PGP for one or more recipient ids, and stores the resulting encrypted message in the sysconfig table, using the "key." prefix and a unique key UUID, i.e.:
id: key.f55fe6ec-ed9e-47a1-a0fe-c63dcbf128cb
value:
-----BEGIN PGP MESSAGE-----
Version: BCPG v1.56
hQEMA6Nw6GrTY6BpAQgAs1pUIK3n2FkMyNmfxSZgpPMNFKz39TcfExiwDRtuw+Zg
wRgFw86SJiL1BB+IE+mPAeCz4hrUkzliiu/760NiXHQysIasWEvUZZqFRA+ecNrk
zARgB8vgGTNgxPHoYPafVD5TrxY9LdRpJcO//Wm2fEVw0xc4Q7vxbH7e9gDQfiuA
gcNYk96rVCdbZFKxyMC8fpM9ng6M4V9lxp5TXihzJQEKHWavctIrU2rBolE1WCY2
Oobs1hELW4rfMpVwfGQDtxcFSNDYkd9IO/WnFTtTAxGHs0u1/miRVxNHadLINdke
wXx6au9vq12tqlYaJY+BAEtJaAInwwT5/irHj5dlwtJ0AW2wO3Mwh+A+pGJvSd2T
xyep1pNtm7tMbisZyms0TiGz+6BX6F5ZKCG5UuvsIvTHd/VLp2uajE5NVPe92Y1F
lLbbMyUfxzBwNhwhdfOEWwRAmrt7AbMyAQHUCZAXgwXn7SXsdh8TTzLMsssViD9+
h7lfP9w=
=YyZk
-----END PGP MESSAGE-----
The key is used to encrypt subsequent data for a given period of time (defaults to one day) until a new key is automatically generated.
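The data-encryption half of this envelope scheme can be sketched with the JDK's own crypto APIs. The PGP wrapping of the key (which the real service does via BouncyCastle, per the BCPG header above) is omitted here, and the class name is hypothetical:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class EnvelopeSketch {
    /** Generate a transient AES-256 data key (the part the real service PGP-encrypts). */
    public static SecretKey newDataKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            return kg.generateKey();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static byte[] randomIv() {
        byte[] iv = new byte[12];                       // 96-bit IV, standard for GCM
        new SecureRandom().nextBytes(iv);
        return iv;
    }

    public static byte[] encrypt(SecretKey key, byte[] iv, byte[] clear) {
        return run(Cipher.ENCRYPT_MODE, key, iv, clear);
    }

    public static byte[] decrypt(SecretKey key, byte[] iv, byte[] cryptogram) {
        return run(Cipher.DECRYPT_MODE, key, iv, cryptogram);
    }

    private static byte[] run(int mode, SecretKey key, byte[] iv, byte[] data) {
        try {
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(mode, key, new GCMParameterSpec(128, iv)); // 128-bit auth tag
            return c.doFinal(data);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Note that the mode of operation here (GCM) is an assumption for the sketch; the point is only that a fresh symmetric key encrypts the data while an asymmetric layer protects the key itself.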
Here is a sample usage:
private void encryptCardData (TLCapture tl, Card card) [1]
throws Exception {
Map<String,String> m = new HashMap<>();
m.put ("P", card.getPan());
m.put ("E", card.getExp());
SecureData sd = getCryptoService().aesEncrypt( [2]
Serializer.serializeStringMap(m)
);
tl.setKid(sd.getId()); [3]
tl.setSecureData(sd.getEncoded()); [4]
}
- getCryptoService() just locates the CryptoService using the NameRegistrar.
- kid stands for Key ID; we store the key UUID here.
- secureData is a general-purpose blob.

The crypto service can be configured using a QBean descriptor like this:
<crypto-service class='org.jpos.crypto.CryptoService' logger='Q2'>
<property name="custodian" value='demo@jpos.org' /> [1]
<property name="pubkeyring" value='cfg/keyring.pub' /> [2]
<property name="privkeyring" value='cfg/keyring.priv' /> [3]
<property name="lazy" value="false" /> [4]
<property name="keylength" value="256" /> [5]
<property name="duration" value="86400000" /> [6]
</crypto-service>
- [1] one or more custodian entries may be specified.
- [4] when lazy is true, the key is created on the first call to aesEncrypt; otherwise, a new one is created at service start.

This allows jPOS nodes to encrypt data securely without storing the encryption key on disk.
NOTE: The transient encryption key is still in memory, so core dumps and swap should be disabled at the operating system level. This approach is still more secure than obfuscating encryption keys.
Decryption -- which can of course run on a different node, at a different time -- requires access to the private keyring, with its optional password. Said password can be entered manually, obtained from a remote service or HSM, etc., and it's a two-step process.
First, the key has to be loaded into memory using the loadKey method. Once the key is loaded, aesDecrypt can be called.
These are the methods' signatures:
public void loadKey (String jobId, String keyId, char[] password) throws Exception;
public byte[] aesDecrypt (String jobId, String keyId, byte[] encoded) throws Exception;
Here keyId, password, and the encoded cryptogram don't require much explanation, but jobId does, and here is the rationale. We could have a one-shot aesDecrypt method accepting the private key password, but decrypting the AES-256 key using PGP is an expensive operation. In situations where you have to extract a daily file, probably encrypted by just a handful of keys, you don't want to decrypt the key on every aesDecrypt call. We don't want to expose the key to the caller either, so the CryptoService keeps it in a private field. In order to do that, loadKey caches the key (until it's unloaded), so it's cheap to call loadKey followed by aesDecrypt: after the first call, where the key is actually decrypted, subsequent calls will be pretty fast.
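The caching pattern described above can be shown in isolation with a toy key cache, where the expensive PGP-style unwrap runs at most once per jobId/keyId pair (all names here are hypothetical stand-ins, not the CryptoService API):

```java
import java.util.HashMap;
import java.util.Map;

public class KeyCache {
    private final Map<String, String> cache = new HashMap<>(); // (jobId:keyId) -> clear key
    private int expensiveUnwraps = 0;                          // counts PGP-style decrypts

    /** Stand-in for the expensive PGP decryption of the stored AES key. */
    private String unwrap(String keyId, char[] password) {
        expensiveUnwraps++;
        return "clear-" + keyId;
    }

    /** Decrypts (unwraps) the key on first use; cached afterwards. */
    public void loadKey(String jobId, String keyId, char[] password) {
        cache.computeIfAbsent(jobId + ":" + keyId, k -> unwrap(keyId, password));
    }

    /** Cheap after the first loadKey for this jobId/keyId. */
    public String keyFor(String jobId, String keyId) {
        String key = cache.get(jobId + ":" + keyId);
        if (key == null)
            throw new IllegalStateException("key not loaded for this job");
        return key;
    }

    /** Drop every key this job loaded. */
    public void unloadAll(String jobId) {
        cache.keySet().removeIf(k -> k.startsWith(jobId + ":"));
    }

    public int unwrapCount() { return expensiveUnwraps; }
}
```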
In order to protect different clients from accessing keys loaded by other ones, we use a jobId that can be something as simple as a UUID, or any nonce known only to the caller. That jobId can then be used to unload those keys, using the unloadKey and unloadAll methods:
public boolean unloadKey (String jobId, String keyId);
public void unloadAll(String jobId);
There's also a no-args unloadAll()
that unloads all keys, and should be used with care.
NOTE: In order to simplify development and testing, and eventually to troubleshoot problems, we've also created a couple of CLI commands: aesencrypt
and aesdecrypt
.
TIP: If you're accessing the CLI using the command line q2 --cli
, remember that the default deployDir
is deploy-cli
instead of deploy
. You need a copy (or symlink) of 25_cryptoservice.xml
in that directory. If you ssh
to a running Q2 to reach the CLI, then you can ignore this tip.
For up-to-date information about this CryptoService module, please see the jPOS-EE guide.
]]>There's a new org.jpos.transaction.TxnId class in the jPOS-EE txn module that can be used to generate transaction ids in multi-node systems.
The id is composed of:

- the year in the century (e.g.: 017)
- the day of the year (e.g.: 300)
- the second of the day (e.g.: 07026)
- a node id (e.g.: 000)
- a transaction id (e.g.: 00001)

A typical ID long value would look like this: 173000702600000001; the toString() method would show it as 017-300-07026-000-00001, and the toRrn() method would return 1bbfmplq9la9.
TxnId also has a handy toRrn() method that can be used to create (and parse) 12-character strings suitable for use as retrieval reference numbers.
TxnId can be used instead of UUIDs. It puts less pressure on the database index and provides chronological order.
NOTE: The last two groups, node-id and transaction-id, are supposed to be unique. transaction-id is easy to get from the transaction manager; node-id is the tricky one: the user has to ensure each node has a unique node-id to avoid collisions.
Sample usage:
TxnId txnId = TxnId.create(DateTime.now(), 0, id);
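A back-of-the-envelope sketch of how such an id packs its components into a single long (illustrative only; the real implementation lives in the jPOS-EE txn module's TxnId class):

```java
public class TxnIdSketch {
    /** Pack (year-in-century, day-of-year, second-of-day, node, serial) into one long. */
    public static long pack(int yy, int dayOfYear, int secondOfDay, int node, long serial) {
        long id = yy;
        id = id * 1000L   + dayOfYear;    // 001..366
        id = id * 100000L + secondOfDay;  // 00000..86399
        id = id * 1000L   + node;         // must be unique per node!
        id = id * 100000L + serial;       // transaction id, modulo 100000
        return id;
    }

    /** Human-readable rendering, mirroring the grouped toString() form. */
    public static String format(int yy, int dayOfYear, int secondOfDay, int node, long serial) {
        return String.format("%03d-%03d-%05d-%03d-%05d", yy, dayOfYear, secondOfDay, node, serial);
    }
}
```

Because the most significant digits are the date and time, numerically increasing ids come out in chronological order, which is exactly why this scheme is kinder to a database index than random UUIDs.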
Please see the ChangeLog.
Remember we are using Semantic Versioning, so the change from 2.0.10 to 2.1.0 means a full rebuild has to be done in your applications. Some of the most notable changes involve the use of Map<String,Object> instead of the old Map<Object,Object>, so that needs review.
Other than those two minor changes, jPOS 2.1.0 has a large number of improvements, including TransactionManager metrics, a new org.jpos.rc package, bug fixes, and improved TransactionManager capacity.
jPOS-EE 2.2.4 has been released as well, new development versions are jPOS 2.1.0-SNAPSHOT
and jPOS-EE 2.2.5-SNAPSHOT
.
See Resources Page for details.
]]>People download jPOS just to use the ISO-8583 packing/unpacking, and sometimes they don't even get to use the channels, multiplexers, servers, or the transaction manager. I see developers trying to stay away from Q2, probably because they don't know it, but it's quite simple to use.
So I wrote a little tutorial, http://jpos.org/doc/tutorials/jpos-gateway.pdf that walks you through the process of installing jPOS and writing a production grade gateway capable of processing thousands of transactions per second.
The tutorial has two parts: first you get it running (takes about 5 minutes), then the second part explains why that very simple configuration works.
The second part has plenty of links to the jPOS Programmer's Guide documentation (freely available); while it's just a couple of pages, understanding it and following the links may take some more time, and may raise some questions that we'll be happy to answer.
]]>