jPOS 2.1.2 has been released, new development version is now 2.1.3-SNAPSHOT
Please see the ChangeLog.
See Resources Page for details.
by apr
Among the many helpful jPOS developers in our community, @marklsalter stands out for his professional, accurate, and detailed answers, but he has his standards when it comes to how you ask questions. You need to ask a smart question. jPOS has Mark (and Victor, and Andy, and Dave, and Matias, and Chhil, and Barzi, and, and, and), but if you go to any other open source community asking for free advice, you’ll find another Mark, or worse than that, you’ll find no Mark and your question will just get ignored and you won’t even know why.
This is what you should expect as a response if you don’t do your homework and ask a smart question (from a recent reply in jpos-users, in this case related to a vague question about the Transaction Manager; it could have been anything else).
Please always start by asking a smart question.
Please read this now :-
http://www.catb.org/esr/faqs/smart-questions.html
Yes, the whole thing, go on, treat yourself; it will take 5-10 minutes and save us both hours going forward.
Preparing to ask a smart question should lead you to read the available documentation and understand it, so that you include all the relevant details another remote person needs in order to help you, and so that you are sure you have understood how the documentation applies to your need.
I can honestly say that from this opening post I could see it was going to be another thread that would drag on – without you …
By the way, I understand that a TM configuration might not be obvious straight away, but often the best things are worth the effort. I still refer to the documentation again and again (and again).
I will include some comments below too, in the hope that you read them and take the time next time to ask a smart question.
Remember as you read through that I am not taking the piss out of you, but trying to highlight why this is a terrible opening question and how you can (hopefully) help yourself next time.
jPOS 2.1.1 has been released, new development version is now 2.1.2-SNAPSHOT
Please see the ChangeLog.
See Resources Page for details.
In many jPOS systems, we secure sensitive data using ANS X9.24 DUKPT as described in the Encrypting sensitive data post. The approach served us well, but now we believe we have a better one, using PKI and AES-256.
The cryptoservice module uses AES-256 to encrypt sensitive data, such as primary account numbers, and protects the encryption key using PGP.
At start-up time, and at regular intervals, the crypto service generates a new AES-256 key, encrypts it with PGP for one or more recipient ids, and stores the resulting encrypted message in the sysconfig table, using the “key.” prefix and a unique key UUID, i.e.:
id: key.f55fe6ec-ed9e-47a1-a0fe-c63dcbf128cb
value:
-----BEGIN PGP MESSAGE-----
Version: BCPG v1.56
hQEMA6Nw6GrTY6BpAQgAs1pUIK3n2FkMyNmfxSZgpPMNFKz39TcfExiwDRtuw+Zg
wRgFw86SJiL1BB+IE+mPAeCz4hrUkzliiu/760NiXHQysIasWEvUZZqFRA+ecNrk
zARgB8vgGTNgxPHoYPafVD5TrxY9LdRpJcO//Wm2fEVw0xc4Q7vxbH7e9gDQfiuA
gcNYk96rVCdbZFKxyMC8fpM9ng6M4V9lxp5TXihzJQEKHWavctIrU2rBolE1WCY2
Oobs1hELW4rfMpVwfGQDtxcFSNDYkd9IO/WnFTtTAxGHs0u1/miRVxNHadLINdke
wXx6au9vq12tqlYaJY+BAEtJaAInwwT5/irHj5dlwtJ0AW2wO3Mwh+A+pGJvSd2T
xyep1pNtm7tMbisZyms0TiGz+6BX6F5ZKCG5UuvsIvTHd/VLp2uajE5NVPe92Y1F
lLbbMyUfxzBwNhwhdfOEWwRAmrt7AbMyAQHUCZAXgwXn7SXsdh8TTzLMsssViD9+
h7lfP9w=
=YyZk
-----END PGP MESSAGE-----
The key is used to encrypt subsequent data for a given period of time (defaults to one day) until a new key is automatically generated.
Here is a sample usage:
private void encryptCardData (TLCapture tl, Card card)     <1>
  throws Exception {
    Map<String,String> m = new HashMap<>();
    m.put ("P", card.getPan());
    m.put ("E", card.getExp());
    SecureData sd = getCryptoService().aesEncrypt(         <2>
      Serializer.serializeStringMap(m)
    );
    tl.setKid(sd.getId());                                 <3>
    tl.setSecureData(sd.getEncoded());                     <4>
}
getCryptoService() just locates the CryptoService using the NameRegistrar.
kid stands for Key ID; we store the key UUID here.
secureData is a general purpose blob.
The crypto service can be configured using a QBean descriptor like this:
<crypto-service class='org.jpos.crypto.CryptoService' logger='Q2'>
<property name="custodian" value='demo@jpos.org' /> <1>
<property name="pubkeyring" value='cfg/keyring.pub' /> <2>
<property name="privkeyring" value='cfg/keyring.priv' /> <3>
<property name="lazy" value="false" /> <4>
<property name="keylength" value="256" /> <5>
<property name="duration" value="86400000" /> <6>
</crypto-service>
Multiple custodian entries can be specified. When lazy is true, the key is generated on the first call to aesEncrypt; otherwise, a new one is created at service start.
This allows jPOS nodes to encrypt data securely without storing the encryption key to disk.
NOTE: The transient encryption key is still in memory, so core dumps and swap should be
disabled at the operating system level. This approach is still more secure
than obfuscating encryption keys.
Decryption, which can of course run in a different node at a different time, requires access to the private keyring, with its optional password. Said password can be entered manually, obtained from a remote service or HSM, etc., and decryption itself is a two-step process.
First the key has to be loaded into memory, using the loadKey method. Once the key is loaded, aesDecrypt can be called.
These are the method’s signatures:
public void loadKey (String jobId, String keyId, char[] password) throws Exception;
public byte[] aesDecrypt (String jobId, String keyId, byte[] encoded) throws Exception;
Here keyId, password, and the encoded cryptogram don’t require much explanation, but jobId does; here is the rationale. We could have a one-shot aesDecrypt method accepting the private key password, but decrypting the AES-256 key using PGP is an expensive operation. In situations where you have to extract a daily file, probably encrypted by just a handful of keys, you don’t want to decrypt the key on every aesDecrypt call. We don’t want to expose the key to the caller either, so the CryptoService keeps it in a private field. To make that work, loadKey caches the key (until it’s unloaded), so it’s cheap to call loadKey followed by aesDecrypt; after the first call, where the key is actually decrypted, subsequent calls will be pretty fast.
In order to protect different clients from accessing keys loaded by other ones, we use a jobId that can be something as simple as a UUID, or any nonce known only to the caller. That jobId can then be used to unload those keys, using the unloadKey and unloadAll methods:
public boolean unloadKey (String jobId, String keyId);
public void unloadAll(String jobId);
There’s also a no-args unloadAll()
that unloads all keys, and should be used with care.
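The jobId-scoped caching rationale described above can be sketched with a small, self-contained class. This is a hypothetical illustration only, not the actual CryptoService internals; the class, field, and method names are made up, and the expensive PGP decryption is replaced by a counter so the effect of the cache is visible.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a per-job key cache showing why loadKey is a
// separate, cacheable step instead of a one-shot aesDecrypt(password).
public class KeyCacheSketch {
    private final Map<String, byte[]> keys = new ConcurrentHashMap<>();
    public final AtomicInteger expensiveDecrypts = new AtomicInteger();

    // stands in for the costly PGP decryption of the stored AES key
    private byte[] pgpDecrypt (String keyId, char[] password) {
        expensiveDecrypts.incrementAndGet();
        return ("AES-key-for-" + keyId).getBytes();
    }

    // decrypts the key at most once per (jobId, keyId) pair
    public void loadKey (String jobId, String keyId, char[] password) {
        keys.computeIfAbsent(jobId + "." + keyId, k -> pgpDecrypt(keyId, password));
    }

    // callers scoped to another jobId can't see this key
    public byte[] getKey (String jobId, String keyId) {
        byte[] k = keys.get(jobId + "." + keyId);
        if (k == null)
            throw new IllegalStateException("key not loaded for this job");
        return k;
    }

    public boolean unloadKey (String jobId, String keyId) {
        return keys.remove(jobId + "." + keyId) != null;
    }
}
```

Calling loadKey twice with the same jobId and keyId performs the expensive decryption only once, which is the behavior the real API is optimized for.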
NOTE: In order to simplify development and testing, and eventually to troubleshoot problems, we’ve also created a couple of CLI commands: aesencrypt
and aesdecrypt
.
TIP: If you’re accessing the CLI using the command line q2 --cli
, remember that the default deployDir
is deploy-cli
instead of deploy
. You need a copy (or symlink) of 25_cryptoservice.xml
in that directory. If you ssh
to a running Q2 to reach the CLI, then you can ignore this tip.
For up-to-date information about this CryptoService module, please see the jPOS-EE guide.
There’s a new handy org.jpos.transaction.TxnId
class in the jPOS-EE txn
module that can be used to generate transaction ids in multi-node systems.
The id is composed of five groups: the year (3 digits), the day of year (3 digits), the second of day (5 digits), the node id (3 digits), and the transaction id (5 digits).
A typical ID long value would look like this: 173000702600000001, the toString() method would show it as 017-300-07026-000-00001, and the toRrn() method would return 1bbfmplq9la9.
TxnId also has a handy toRrn() method that can be used to create (and parse) 12-character strings suitable for use as retrieval reference numbers.
TxnId can be used instead of UUIDs. It puts less pressure on the database index and provides chronological order.
NOTE: The last two groups, node-id and transaction-id, are supposed to be unique. transaction-id is easy to get from the transaction manager, but node-id is a tricky one: the user has to ensure each node has a unique node-id to avoid collisions.
Sample usage:
TxnId txnId = TxnId.create(DateTime.now(), 0, id);
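The grouping implied by the example values above can be reproduced with a small, self-contained sketch. This is an illustration of the layout suggested by the sample id, not the actual org.jpos.transaction.TxnId source; the class and method names are made up.

```java
// Hypothetical sketch of how a TxnId-like long packs its groups:
// year (3 digits), day of year (3), second of day (5), node id (3),
// transaction id (5). The real TxnId implementation may differ.
public class TxnIdSketch {
    // decimal packing: each group gets a fixed number of digits
    public static long pack (int year3, int dayOfYear, int secondOfDay, int node, long txn) {
        return ((((year3 * 1000L + dayOfYear) * 100000L + secondOfDay)
                  * 1000L + node) * 100000L + txn % 100000L);
    }
    // 017-300-07026-000-00001 style, like toString()
    public static String pretty (long id) {
        String s = String.format("%019d", id);
        return String.format("%s-%s-%s-%s-%s",
            s.substring(0, 3), s.substring(3, 6), s.substring(6, 11),
            s.substring(11, 14), s.substring(14));
    }
    // 12-character base-36 form, like toRrn()
    public static String toRrn (long id) {
        String s = Long.toString(id, 36);
        return "000000000000".substring(s.length()) + s;
    }
}
```

Packing year 017, day 300, second 07026, node 000 and transaction id 00001 yields the long 173000702600000001 from the example, and its base-36 form is the 12-character string 1bbfmplq9la9.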
jPOS 2.1.0 has been released, new development version is now 2.1.1-SNAPSHOT
Please see the ChangeLog.
Remember we are using Semantic Versioning, so the change from 2.0.10 to 2.1.0 means a full rebuild has to be done in your applications. Some of the most notable changes are:
Map<String,Object> is now used instead of the old Map<Object,Object>, so that needs review.
Other than those two minor changes, jPOS 2.1.0 has a large number of improvements, including TransactionManager metrics, the new org.jpos.rc package, bug fixes, and improved TransactionManager capacity.
jPOS-EE 2.2.4 has been released as well; new development versions are jPOS 2.1.1-SNAPSHOT and jPOS-EE 2.2.5-SNAPSHOT.
See Resources Page for details.
I get to see dozens of third-party jPOS gateway implementations using just 5% of jPOS capabilities.
People download jPOS just to use the ISO-8583 packing/unpacking, and sometimes they don’t even get to use the channels, multiplexers, servers, or transaction manager. I see developers trying to stay away from Q2, probably because they don’t know it, but it’s quite simple to use.
So I wrote a little tutorial, http://jpos.org/doc/tutorials/jpos-gateway.pdf that walks you through the process of installing jPOS and writing a production grade gateway capable of processing thousands of transactions per second.
The tutorial has two parts: first you get it running (takes about 5 minutes), then the second part explains why that very simple configuration works.
The second part has plenty of links to the jPOS programmer’s guide documentation (freely available); while it’s just a couple of pages, understanding it and following the links may take some more time and may raise some questions that we’ll be happy to answer.
Every once in a while we receive a request from an investor or private equity firm who, in the process of conducting due-diligence on an existing or established potential portfolio company, encounters an unlicensed copy of jPOS. Often the company has released a commercial product which incorporates this unlicensed software. Investors want to know, how should we proceed and what are the responsibilities of the company vis-à-vis the license and source code?
The easiest and first answer is of course that the company should purchase a license and not rely on pirated software to conduct their business. The more subtle question is: what does it imply when a start-up is willing to pirate software that is intended to be reasonably priced and positioned to benefit the authors and the Fintech software community as a whole?
Isn’t it risky to partner with a company that disregards good business practices, either because of bad faith or negligence?
If it were me, I’d invest somewhere else!
jPOS-EE has a new binlog module, a general purpose binary log that can be used as a reliable event store.
The jPOS BinLog has the following features: a simple, future-proof on-disk format; crash resilience; support for multiple concurrent readers and writers, even from different JVMs; and end-of-day cutover support.
Here is a sample Writer:
File dir = new File("/tmp/binlog");
try (BinLogWriter bl = new BinLogWriter(dir)) {
bl.add( ... ); // byte array
bl.add( ... ); // byte array
bl.add( ... ); // byte array
}
and a sample Reader:
File dir = new File("/tmp/binlog");
try (BinLogReader bl = new BinLogReader(dir)) {
while (bl.hasNext()) {
byte[] b = bl.next().get();
// do something with the byte[]
}
}
The BinLogReader implements an Iterator<BinLog.Entry>. Each BinLog.Entry has two main methods:
BinLog.Ref ref()
byte[] get()
While iterating over a BinLog, it might make sense to persistently store its BinLog.Ref in order to be able to restart the iterator at a given point if required (this is useful when using the BinLog to implement a store and forward).
The BinLogReader
has two constructors:
BinLogReader(File dir)
BinLogReader(File dir, BinLog.Ref ref)
The latter can be used to restart the iterator at a given reference point obtained from a previous run.
In addition to the standard hasNext()
method required by the Iterator
implementation, BinLogReader
also has a hasNext(long millis)
method that waits a given number of milliseconds once it reaches the end of the log, attempting to wait for a new entry to be available.
The goal behind the BinLog implementation is to have a future-proof file format, easy to read from any language 10 years down the road. We found that the Mastercard simple IPM file format, which is basically a two-byte message length followed by the message itself, was suitable for that. The payload on each record can be ISO-8583 (like Mastercard), JSON, FSDMsg based, Protocol Buffers, or whatever format the user chooses.
But that format isn’t crash proof: if a system crashes while a record is being written to disk, the file can easily get corrupted. So we picked some ideas from Square’s tape project, which implements a highly crash-resilient on-disk persistent circular queue using a very small header. Tape is great and we encourage you to consider it instead of this binlog for some use cases, but we didn’t want a circular queue; we wanted a place to securely store events for audit or store and forward purposes, and we also wanted to be able to access the same binlog from multiple JVMs with access to the same file-system, so we had to write our own.
The on-disk file format looks like this:
Format:
256 bytes Header
... Data
... Data
Header format (256 bytes):
4 bytes header length
2 bytes version
2 bytes Status (00=open, 01=closed)
8 bytes Last element position
4 bytes this log number
4 bytes next log number
232 bytes reserved
Element:
4 bytes Data length
... Data
Each record has a length prefix (four bytes in network byte order) followed by its data. The header has a fixed length of 256 bytes but we found useful to make it look like a regular record too by providing its length at the very beginning. An implementation in any language reading a jPOS binlog can just be programmed to skip the first record.
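The record layout above can be sketched with a small, self-contained reader/writer. This is an illustration of the length-prefixed framing only, not the real BinLog code: it treats the 256-byte header as a regular record whose length field covers the remaining 252 bytes (so that skip-the-first-record works), and it ignores the header fields, locking, and sync behavior the real implementation maintains.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch of the length-prefixed record framing:
// each record is a 4-byte length in network byte order, then data.
public class RecordFormatSketch {
    static final int HEADER_SIZE = 256;

    public static byte[] writeLog (byte[][] records) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(baos);
        byte[] header = new byte[HEADER_SIZE - 4];
        out.writeInt(header.length);   // header looks like a regular record
        out.write(header);             // header fields not modeled here
        for (byte[] r : records) {
            out.writeInt(r.length);    // 4-byte length, network byte order
            out.write(r);
        }
        return baos.toByteArray();
    }

    public static int countRecords (byte[] log) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(log));
        int len = in.readInt();        // skip the header "record"
        in.skipBytes(len);
        int n = 0;
        while (in.available() >= 4) {
            len = in.readInt();
            in.skipBytes(len);
            n++;
        }
        return n;
    }
}
```

A reader in any language only needs readInt/skip in a loop, which is what makes the format easy to parse a decade from now.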
At any given time (usually at end of day), a process can request a cutover by calling the BinLogWriter.cutover() method; in that case, all writers and readers will close the current file and move to the next one (readers can choose not to follow to the next file, for example while producing daily extracts).
In order to achieve file crash resilience, each write does the following: it appends the length-prefixed record and syncs it to disk, and then updates the last element position in the header with a second write, syncing again.
On an MBP with SSD we’ve managed to achieve approximately 6000 writes per second. On an iMac with a regular disk, the numbers go down to approximately 1500 writes per second for regular ISO-8583 message lengths (500..1000 bytes per record).
Because the header is small enough to fit in an operating system block, the second write, where we place the last element position, happens to be atomic. While this works fine for readers and writers accessing the file from different JVMs, that’s not the case for readers and writers running on the same JVM: even if they use different file descriptors to open the file, the operating system stack has early access to the header, which under high concurrency can lead to garbage values. That’s the reason the code synchronizes on a mutex object at specific places.
The binlog CLI command is a subsystem that currently has two commands.
binlog accepts a parameter with the binlog’s path, i.e.: binlog /tmp/binlog
So a cutover can be triggered from cron using the following command:
q2 --command="binlog /tmp/binlog; cutover; exit; shutdown --force"
jPOS 2.0.10 has been released, new development version is now 2.1.0-SNAPSHOT
This release upgrades JLine to 3.0.0, adds new CLI commands and subsystems, upgrades Gradle to 3.1.0, allows license to be read from external file and removes double-logging in PADChannel. The main reason for this small release is to flush small enhancements before we make some bigger changes to the TransactionManager with improved audit logging (those are already available in 2.1.0-SNAPSHOT). It also updates the copyright year to 2017 and changes the copyright owner to the new entity jPOS Software SRL.
Starting with this release we’ll move to semantic versioning, so next release is going to be 2.1.0 and requires a full clean build of your projects in order to make sure some of the API changes don’t break your system.
See the ChangeLog for details.