142 posts tagged with "jpos"

Pause Timeout

· 4 min read
Alejandro Revilla
jPOS project founder

In 2007, a long time ago, when hardware was not as fast as it is today, we introduced CONTINUATIONS that had the ability to PAUSE (or park) a given transaction while waiting for a response from a remote server. Think of it as some kind of reactive programming.

If you have, say, a 4 CPU machine and need to process one thousand simultaneous transactions, without continuations you would need to set your TransactionManager sessions to a disproportionately large number (such as 1000), which has no relation to your hardware, making the entire system slower due to the context switch overhead.

With continuations, a small number of sessions (e.g., twice the number of CPUs you have) can handle a large number of transactions, because those waiting for a response are offloaded from the transaction manager until the response arrives.

The typical candidate for continuations is the QueryHost participant, whose prepare method ends like this:

// inside QueryHost's prepare (long id, Serializable context)
mux.request(m, t, this, ctx);
return PREPARED | READONLY | PAUSE | NO_JOIN;

The PAUSE modifier tells the TM to park that particular transaction until the Context is resumed through its Pausable interface implementation. For the record, Pausable looks like this:

public interface Pausable {
    void setPausedTransaction(PausedTransaction p);
    ...
    void resume();
}

and the standard TransactionManager’s Context (org.jpos.transaction.Context) implements it.

When we call the asynchronous mux.request implementation, passing an ISOResponseListener (QueryHost implements ISOResponseListener) and the current Context as a handback object, the transaction is resumed and queued back with high priority when a response is received or the MUX times out, so it can be picked up by the next available session in the TransactionManager.

Futures were not popular in those days. Otherwise, the void request(ISOMsg m, long timeout, ISOResponseListener r, Object handBack) signature of the MUX’s asynchronous call would have probably been implemented using them.
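Had futures been available, adapting that callback would have been trivial. Here is a minimal sketch, not part of jPOS, that wraps the callback-style call in a CompletableFuture; it assumes the ISOResponseListener callbacks responseReceived and expired, and the hypothetical FutureMUX class exists only for illustration:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeoutException;
import org.jpos.iso.ISOMsg;
import org.jpos.iso.ISOResponseListener;
import org.jpos.iso.MUX;

// Hypothetical adapter, just an illustration
public class FutureMUX {
    public static CompletableFuture<ISOMsg> request(MUX mux, ISOMsg m, long timeout) {
        CompletableFuture<ISOMsg> future = new CompletableFuture<>();
        try {
            mux.request(m, timeout, new ISOResponseListener() {
                @Override
                public void responseReceived(ISOMsg resp, Object handBack) {
                    future.complete(resp);
                }
                @Override
                public void expired(Object handBack) {
                    // MUX timeout: complete exceptionally so callers can handle it
                    future.completeExceptionally(new TimeoutException("MUX timeout"));
                }
            }, null);
        } catch (Exception e) {
            future.completeExceptionally(e);
        }
        return future;
    }
}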

QueryHost, which uses QMUX, as well as many other participants such as HttpQuery in jPOS-EE, use this facility to handle a large number of simultaneous transactions without requiring a large number of sessions (each backed by a platform thread) in their transaction manager, and they are guaranteed to resume the Context so that the transaction can continue its route within the TransactionManager.

This delicate mechanism works very well at production sites processing billions of transactions. However, we can't guarantee that user-implemented participants will always, sooner or later, time out and resume, completing the transaction and avoiding a memory leak.

And here comes the reason for this post: In situations where participants may pause a transaction and not resume it after a timeout, we have a pause-timeout optional property that you can configure in the TransactionManager, e.g.:

<property name="pause-timeout" value="300000" />

This is a safety net for those situations. If for some reason your participant doesn't resume, the TransactionManager will auto-resume the transaction after a reasonable amount of time to prevent a memory leak. Contexts are lightweight and consume little memory, so these leaks manifest themselves only after several days or even weeks, depending on your system load, which makes 5 minutes a reasonable default.

But the pause-timeout feature comes at a price. For every Context that we pause, a TimerTask is created to cancel the transaction. TimerTasks, popular 20 years ago, are quite brittle: they rely on a single platform thread that calls every TimerTask Runnable WITHOUT CATCHING unchecked exceptions, so a single misbehaving task can bring down the entire Timer facility and create significant problems for other components, such as the TSpace's garbage collector.
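To illustrate how brittle that is, consider this standalone snippet (plain java.util.Timer, not jPOS code): one task throwing an unchecked exception kills the Timer's single thread, and every other task scheduled on that Timer dies with it.

import java.util.Timer;
import java.util.TimerTask;

public class TimerPitfall {
    public static void main(String[] args) {
        Timer timer = new Timer();
        timer.schedule(new TimerTask() {
            @Override public void run() {
                throw new IllegalStateException("boom"); // kills the Timer's single thread
            }
        }, 100L);
        timer.schedule(new TimerTask() {
            @Override public void run() {
                System.out.println("never printed"); // the shared thread is dead by now
            }
        }, 200L);
        // once the first task throws, the Timer behaves as if cancel() had been
        // called: pending tasks never run, and new schedule() calls throw
        // IllegalStateException
    }
}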

Takeaways

  • The pause-timeout feature can be used if you fear your pausable participants may not resume. It's usually used for debugging purposes after you see warnings related to high in-transit transactions. A high in-transit count indicates that at least one transaction got stuck without ever finishing. You can see this in the log by noticing that the tail ID gets stuck and doesn't move up.

  • If you decide to use a pause-timeout, remember that this is a safety net that should never be necessary. Setting a short timeout that conflicts with the timeout you use at QueryHost (e.g., 15 or 30 seconds) is not advisable, because it increases the chances of a race condition between the normal resume caused by the QueryHost expiration and the resume caused by the TimerTask runnable trying to do the same. If you use a pause-timeout, make it a long one.

Final Comment

All this goes away in jPOS 3. Continuations make no sense in jPOS 3’s TransactionManager because sessions are handled by virtual threads, making it entirely feasible to have a large number of sessions in the 100k range. All the delicate asynchronous programming we required almost 20 years ago is now magically handled by Project Loom’s virtual threads.

Structured Audit Log

· 13 min read
Alejandro Revilla
jPOS project founder

jPOS Structured Audit Log (SAL)

DRAFT

Unlike most blog posts published here that are final, this one is being updated in parallel with the work we are doing on the structured audit log effort for jPOS. It will be finalized once this notice is removed.

The jPOS XML logs, written a quarter of a century ago, were intended to serve as an audit trail capable of being reprocessed later in the event of a major system failure. Over the years, they proved to be very useful, saving the day during migrations, testing, and even production issues.

Originally, these logs were structured audit logs, similar to what we aim to achieve now. However, as the system grew, the highly flexible Loggeable interface was sometimes improperly implemented, occasionally resulting in invalid XML. While still reasonably intelligible to the human eye, this made it difficult to parse the XML content automatically. Moreover, PCI requirements have become stricter over the years (there was no PCI 25 years ago, it was the Wild West), and data that might be needed to reprocess a transaction is now entirely masked.

The term log, much like byte, is widely overused in the field. With the advent of logging frameworks such as log4j, j.u.logging, logback, and SLF4J, and then log aggregators that ingest loads of unstructured data into tools like Splunk, Datadog, or ElasticSearch/Kibana, there is plenty of confusion. People often conflate a log that outputs debug information line by line, which is undoubtedly useful, with the different problem we aim to solve here.

When I initially wrote the jPOS logger subsystem, large projects such as JBoss were looking for logging alternatives. I even had an email exchange with JBoss' author. He evaluated it but didn’t like it because it was too XMLish, which I found amusing since the whole JBoss configuration back in the day was heavily XML-based. Log4j was an early project hosted, if I recall correctly, at IBM’s alphaWorks, and that was his choice. Had I had that exchange a few weeks earlier, I would have adopted Log4j right away, as it was a promising project that proved to be the right way to go.

However, my concern and the focus of this post is not about the logger itself and its extensibility through SPIs, which are great. This post is about what one ends up putting in the log. It’s about the content and how to structure it in such a way that it becomes useful not only to the naked eye but also to other applications that ingest it.

In jPOS, it’s very common to encounter log events like the following:

<log realm="channel/186.48.107.208:58949" at="2024-04-29T13:06:13.086" lifespan="228ms">
  <receive>
    <isomsg direction="incoming">
      <!-- org.jpos.iso.packager.XMLPackager -->
      <field id="0" value="0800"/>
      <field id="11" value="000001"/>
      <field id="41" value="00000001"/>
      <field id="70" value="301"/>
    </isomsg>
  </receive>
</log>
<log realm="channel/186.48.107.208:58949" at="2024-04-29T13:06:13.618" lifespan="4ms">
  <send>
    <isomsg>
      <!-- org.jpos.iso.packager.XMLPackager -->
      <field id="0" value="0810"/>
      <field id="11" value="000001"/>
      <field id="37" value="743110"/>
      <field id="38" value="319046"/>
      <field id="39" value="0000"/>
      <field id="41" value="00000001"/>
      <field id="70" value="301"/>
    </isomsg>
  </send>
</log>

Now consider the following output:

2024-04-29 13:06:13.086 INFO Channel:23 - <log realm="channel/186.48.107.208:58949" at="2024-04-29T13:06:13.086" lifespan="228ms">
2024-04-29 13:06:13.086 INFO Channel:23 - <receive>
2024-04-29 13:06:13.086 INFO Channel:23 - <isomsg direction="incoming">
2024-04-29 13:06:13.086 INFO Channel:23 - <!-- org.jpos.iso.packager.XMLPackager -->
2024-04-29 13:06:13.086 INFO Channel:23 - <field id="0" value="0800"/>
2024-04-29 13:06:13.086 INFO Channel:23 - <field id="11" value="000001"/>
2024-04-29 13:06:13.086 INFO Channel:23 - <field id="41" value="00000001"/>
2024-04-29 13:06:13.086 INFO Channel:23 - <field id="70" value="301"/>
2024-04-29 13:06:13.086 INFO Channel:23 - </isomsg>
2024-04-29 13:06:13.086 INFO Channel:23 - </receive>
2024-04-29 13:06:13.086 INFO Channel:23 - </log>
2024-04-29 13:06:13.086 INFO Channel:23 - <log realm="channel/186.48.107.208:58949" at="2024-04-29T13:06:13.618" lifespan="4ms">
2024-04-29 13:06:13.086 INFO Channel:23 - <send>
2024-04-29 13:06:13.087 INFO Channel:23 - <isomsg>
2024-04-29 13:06:13.087 INFO Channel:23 - <!-- org.jpos.iso.packager.XMLPackager -->
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="0" value="0810"/>
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="11" value="000001"/>
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="37" value="743110"/>
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="38" value="319046"/>
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="39" value="0000"/>
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="41" value="00000001"/>
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="70" value="301"/>
2024-04-29 13:06:13.087 INFO Channel:23 - </isomsg>
2024-04-29 13:06:13.087 INFO Channel:23 - </send>
2024-04-29 13:06:13.087 INFO Channel:23 - </log>

This is actually a single log event and should go in a single log line. Frameworks support this, but most implementations I’ve seen in the wild, probably due to the fact that the event contains linefeeds, end up generating several dozen lines of logs, each with its own timestamp.

This output format presents several challenges:

  1. Readability: The output is difficult to read and requires additional string manipulation to parse effectively, which complicates debugging and analysis. Yes, you can always use cut to take away the first chunk, but that's an extra step.
  2. Excessive Detail: The granularity of timestamps down to milliseconds (e.g., 086 vs. 087) is unnecessary and consumes resources without adding value to the debug information.
  3. Instance-Based Configuration Limitations: In jPOS, a single Channel class may connect to numerous endpoints as both a client and a server in multi-tenant applications. It's important to manage logs separately per instance, yet typical line loggers tie configuration to the class rather than to instances. You can, of course, use parameterized logger names, but that's not the standard way people tend to log information in their applications, and there's no standard approach for doing this, so different teams implement it in various ways.
  4. Intermingling of Outputs: Line loggers mix outputs from multiple threads, necessitating the use of MDC (Mapped Diagnostic Context) to group together a logical log event. This approach increases code complexity and log verbosity.
  5. Operational Inefficiency: Analyzing logs often involves navigating VPNs and two-factor authentication, complicating access to third-party log aggregator interfaces. This is reasonable for an operations team in charge of an application, but it presents quite a problem for external development teams trying to assist in case of issues.

The jPOS Structured Audit Log intends to improve the existing jPOS logging system by addressing some of its flaws and enhancing the original goals. It supports compact, structured output that is suitable for ingestion by log aggregators, while still retaining the capability for easy, text-based log manipulation for debugging and emergency situations.

AuditLogEvent

To prevent conflicts with the existing system, we have introduced a new package named org.jpos.log, where this new feature is implemented. The main class within this package is AuditLogEvent. The intention is for AuditLogEvent to eventually replace the existing org.jpos.util.LogEvent.

AuditLogEvent is a very simple sealed interface:

public sealed interface AuditLogEvent permits ... { }
tip

We use a sealed interface in order to get compile-time errors if we fail to handle an AuditLogEvent, but there's extensibility provided by ServiceLoaders so that you can create your own.

Q2 Start event

When a jPOS log starts, we usually get the following XML preamble:

<log realm="Q2.system" at="2024-06-06T16:48:44.290350" lifespan="1ms">
<info>
Q2 started, deployDir=/opt/local/jpos/deploy, environment=default
</info>
</log>

The equivalent structured audit log event is:

{
  "ts" : "2024-06-06T19:49:30.577151Z",
  "realm" : "Q2.system",
  "tag" : "info",
  "trace-id" : "dbd2e6b6-693a-4ff5-9c51-5b2b643a8c8d",
  "evts" : [ {
    "t" : "start",
    "ts" : "2024-06-06T19:49:30.576244Z",
    "q2" : "6969be9c-dce9-4919-ad7e-69b5faf1ef33",
    "version" : "3.0.0-SNAPSHOT",
    "deploy" : "/opt/local/jpos/deploy",
    "env" : "default"
  } ]
}

We aim to keep these log messages as short and compression-friendly as possible. Given that JSON is verbose, we want to avoid extra boilerplate, especially for transaction messages that are repeated sometimes hundreds of millions of times daily. Therefore, we prefer using readable abbreviations like ts for timestamp and tx for transmission, etc. Here, we are showing pretty-printed output; however, the actual output will be a compact one-liner. The encoding is assumed to be UTF-8.

Each JSON object needs to be easily unmarshalled by different applications. To distinguish between various types of messages, every SAL message includes a unique type property (abbreviated as t) to facilitate polymorphic deserialization.
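As an illustration of how a type property enables polymorphic deserialization, here's how it could be wired with Jackson. This is a sketch, narrowed to just Start and Stop for brevity; the annotation wiring is an assumption, not necessarily how jPOS does it:

import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.databind.ObjectMapper;

// "t" selects the concrete AuditLogEvent implementation at read time
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "t")
@JsonSubTypes({
    @JsonSubTypes.Type(value = Start.class, name = "start"),
    @JsonSubTypes.Type(value = Stop.class, name = "stop")
})
public sealed interface AuditLogEvent permits Start, Stop { }

// elsewhere: new ObjectMapper().readValue(json, AuditLogEvent.class)
// yields a Start or a Stop instance, depending on the "t" property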

In jPOS SAL, there are two types of audit log records:

  1. jPOS native audit log records.
  2. Application-specific audit log extensions (those that you can load via service loaders).

SAL Native audit log records are defined under the new org.jpos.log.evt package and are part of the AuditLogEvent sealed interface.

Application-specific audit log messages are loaded via the ServiceLoader (META-INF/services/org.jpos.log.AuditLogEvent).

The Start AuditLogEvent is defined like this:

public record Start(Instant ts, UUID q2, String version, String appVersion, String deploy, String env) implements AuditLogEvent { }
  • ts is the event’s timestamp
  • q2 is Q2’s instance Id
  • version is Q2’s version
  • appVersion is the application’s version
  • deploy is the deploy directory
  • env is the environment name

Q2 Stop event

Instead of the old </logger> closing element, we'll have a stop JSON event.

The Stop AuditLogEvent is defined as:

public record Stop(Instant ts, UUID id, Duration uptime) implements AuditLogEvent { }

The JSON output looks like this:

{
  "ts" : "2024-06-06T19:49:38.050499Z",
  "realm" : "Q2.system",
  "tag" : "info",
  "trace-id" : "446c7ed1-ba61-4f5c-9f13-3edf1699242f",
  "evts" : [ {
    "t" : "stop",
    "ts" : "2024-06-06T19:49:38.050480Z",
    "id" : "6969be9c-dce9-4919-ad7e-69b5faf1ef33",
    "uptime" : 7.827676000
  } ]
}
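With Start and Stop in place, the compile-time benefit of the sealed interface mentioned earlier becomes concrete: a consumer can switch exhaustively over events. The sketch below is hypothetical and assumes only Start and Stop are permitted; the real hierarchy is larger:

// fails to compile if a permitted AuditLogEvent subtype is left unhandled
static String describe(AuditLogEvent evt) {
    return switch (evt) {
        case Start s -> "Q2 %s started from %s".formatted(s.q2(), s.deploy());
        case Stop s -> "Q2 %s stopped after %s".formatted(s.id(), s.uptime());
    };
}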

LogWriters, LogRenderers, AuditLogEvents

jPOS users are familiar with channels, packagers, and ISOMsgs, and there's an analogy between these components.

An AuditLogEvent is analogous to an ISOMsg. It is an object that encapsulates information.

The LogRenderer functions like a packager. It converts the AuditLogEvent into a wire format. Just as you can have multiple packagers to handle different ISO-8583 variants, you can have multiple LogRenderers to convert AuditLogEvents into different formats.

We currently handle four formats:

  • TXT: A new format suitable for injection into regular line loggers.
  • XML: Equivalent to the existing format.
  • JSON: As shown in the examples above.
  • MARKDOWN: Suitable for use during development/tests or for rendering in web-based UIs.

Finally, the LogWriter is like the channel: it's in charge of writing the already rendered AuditLogEvent to the log's PrintStream.

note

Please note that a LogListener can use a regular logger, for example by means of the LogbackLogListener. Using LogbackLogListener with a TXT renderer provides very predictable output.

LogWriter

The LogWriter interface, contributed by Alwyn not so long ago, is a great addition. It plugs into a LogListener and is very easy to configure:

<log-listener class="org.jpos.util.SimpleLogListener" enabled="true">
  <writer class="org.jpos.util.JsonLogWriter" enabled="true" />
</log-listener>

If we do not specify a writer, the log listener behaves like traditional jPOS: it outputs all Loggeable objects using their dump method. We have JsonLogWriter, MarkdownLogWriter, TxtLogWriter, and XmlLogWriter.

tip

The XmlLogWriter's output is slightly different from the standard no-writer Loggeable dumps as it enforces XML output to be provided via Jackson's ObjectMapper, hence producing guaranteed XML.

LogRenderer

The LogRenderer interface looks like this:

public interface LogRenderer<T> {
    void render (T obj, PrintStream ps, String indent);
    Class<?> clazz();
    Type type();
    ...
}

The renderers are then loaded via the org.jpos.log.LogRenderer service loader that looks like this:

org.jpos.log.render.json.LogEventJsonLogRenderer
org.jpos.log.render.json.LoggeableJsonLogRenderer
org.jpos.log.render.json.AuditLogEventJsonLogRenderer

org.jpos.log.render.xml.LoggeableXmlLogRenderer

org.jpos.log.render.markdown.ProfilerMarkdownRenderer
org.jpos.log.render.markdown.ContextMarkdownRenderer
org.jpos.log.render.markdown.LoggeableMarkdownLogRenderer
org.jpos.log.render.markdown.LogEventMarkdownRenderer
org.jpos.log.render.markdown.StringMarkdownLogRenderer
org.jpos.log.render.markdown.ObjectMarkdownLogRenderer
org.jpos.log.render.markdown.ByteArrayMarkdownLogRenderer
org.jpos.log.render.markdown.ThrowableMarkdownLogRenderer
org.jpos.log.render.markdown.ElementMarkdownLogRenderer
org.jpos.log.render.markdown.TransactionManagerTraceMarkdownLogRenderer
org.jpos.log.render.markdown.TransactionManagerTraceArrayMarkdownLogRenderer
...
...

There's a handy LogRendererRegistry that can get you a renderer for any given object. It properly traverses the class hierarchy and implementation hierarchy to get the most suitable renderer for an object if one is not defined for a given class. Because that reflection-based traversal is expensive, all results are cached and readily available for the next call.

Renderer implementations are very easy to write, and we encourage users to create specific ones for their objects. We aim to provide a large number of renderers for jPOS objects. Here is an example of a ByteArray renderer that produces Markdown output:

public final class ByteArrayMarkdownLogRenderer implements LogRenderer<byte[]> {
    @Override
    public void render(byte[] b, PrintStream ps, String indent) {
        if (b.length > 16) {
            ps.printf("```%n%s%n```%n", indent(indent, ISOUtil.hexdump(b)));
        } else {
            ps.printf("`%s`%n", indent(indent, ISOUtil.hexString(b)));
        }
    }
    @Override
    public Class<?> clazz() {
        return byte[].class;
    }
    @Override
    public Type type() {
        return Type.MARKDOWN;
    }
}

This is a just-for-fun proof of concept that outputs a jPOS profiler as a Markdown table:

public final class ProfilerMarkdownRenderer implements LogRenderer<Profiler> {
    public ProfilerMarkdownRenderer() { }

    @Override
    public void render(Profiler prof, PrintStream ps, String indent) {
        var events = prof.getEvents();
        int width = maxLength(events.keySet());
        final String fmt = STR."| %-\{width}s | %10.10s | %10.10s |%n";
        ps.print(row(fmt, "Checkpoint", "Elapsed", "Total"));
        ps.print(row(fmt, "-".repeat(width), "---------:", "-------:"));
        StringBuilder graph = new StringBuilder();
        events.forEach((key, v) -> {
            ps.print(row(fmt, v.getEventName(), toMillis(v.getDurationInNanos()), toMillis(v.getTotalDurationInNanos())));
            graph.append(" \"%s\" : %s%n".formatted(key, toMillis(v.getDurationInNanos())));
        });
        ps.println();
        ps.println("```mermaid");
        ps.println("pie title Profiler");
        ps.println(graph);
        ps.println("```");
    }

    @Override
    public Class<?> clazz() {
        return Profiler.class;
    }

    @Override
    public Type type() {
        return Type.MARKDOWN;
    }

    private String row(String fmt, String c1, String c2, String c3) {
        return fmt.formatted(c1, c2, c3);
    }

    private String toMillis(long nanos) {
        return FMT."%d\{nanos / Profiler.TO_MILLIS}.%03d\{nanos % Profiler.TO_MILLIS % 1000}";
    }

    private int maxLength(Set<String> keys) {
        return keys.stream()
            .mapToInt(String::length)
            .max()
            .orElse(0);
    }
}
note

StringTemplates look nice!

With the previous renderer, the regular profiler output:

<profiler>
  prepare: o.j.t.p.Debug [8.8/8.8]
  prepare: o.j.t.p.Debug-1 [0.7/9.6]
  prepare: o.j.t.p.Delay [2.4/12.0]
  prepare: o.j.t.p.Trace [0.6/12.7]
  commit: o.j.t.p.Debug [1.9/14.7]
  commit: o.j.t.p.Debug-1 [0.4/15.1]
  commit: o.j.t.p.Delay [0.4/15.6]
  commit: o.j.t.p.Trace [0.5/16.1]
  end [7.6/23.7]
</profiler>

Looks like this:

| Checkpoint               |    Elapsed |      Total |
| ------------------------ | ---------: | ---------: |
| prepare: o.j.t.p.Debug   |      8.083 |      8.083 |
| prepare: o.j.t.p.Debug-1 |      0.500 |      9.583 |
| prepare: o.j.t.p.Delay   |      2.000 |     12.583 |
| prepare: o.j.t.p.Trace   |      0.875 |     12.458 |
| commit: o.j.t.p.Debug    |      1.042 |     14.500 |
| commit: o.j.t.p.Debug-1  |      0.375 |     15.875 |
| commit: o.j.t.p.Delay    |      0.458 |     15.333 |
| commit: o.j.t.p.Trace    |      0.500 |     16.833 |

Why-oh-why

Why, you may ask? You just want good JSON output to feed your aggregator, and we have you covered. But once little pieces like this start to fit perfectly into the system, we are tempted to play with additional options. In a production environment, you want a compact JSON format from the AuditLogEvents. When you're in the development phase—which we are most of the time—having a nice Markdown output ready to be copied and pasted into an issue, email, or discussion is a good thing to have. And if it includes awesome Mermaid graphics, even better!

Recap on why we are doing this

People often use Grok, Fluentd, Splunk regexp-based filters, or just grep/AWK manipulation to make sense of log messages produced by applications. These applications sometimes decide to change the message format from:

Starting

to:

 ____  _             _   _             
/ ___|| |_ __ _ _ __| |_(_)_ __ __ _
\___ \| __/ _` | '__| __| | '_ \ / _` |
___) | || (_| | | | |_| | | | | (_| |
|____/ \__\__,_|_| \__|_|_| |_|\__, |
|___/

Because banners are cool. An innocent change like this inadvertently tricks all those regexps.

We want jPOS audit logs to be precise, compact, and suitable for storage (e.g., in jPOS' binlog) for audit purposes.

Stay tuned as this is coming soon to jPOS next.

Config Smell

· 5 min read
Alejandro Revilla
jPOS project founder

jPOS configuration is very flexible, but that flexibility can sometimes lead to confusion.

We have two main categories:

  • Build-time static token replacement (jPOS build target).
  • Runtime environments.

Build-time static token replacement

This is described in detail in the Configuration Tutorial, but the basic idea is that at build time, when you call gradle installApp or gradle dist, you have the ability to define a compile-time target.

The default target is called devel and reads an optional property file called devel.properties.

If your devel.properties file looks like this:

ABC=AlphaBravoCharlie

And then you have a QBean configuration like this:

<property name="ABC" value="@ABC@" />

The @ABC@ will be expanded to AlphaBravoCharlie in the final file that will land in the build directory.

If you go to build/install/xxx/deploy/yourxml.xml, you’ll see something like this:

<property name="ABC" value="AlphaBravoCharlie" />

The jPOS build system performs that replacement in XML files in the deploy directory, as well as in configuration files (.cfg, .properties) in the cfg directory.

A typical use case, very useful during development, is defining database connection parameters.

You can have a devel.properties file with tokens like this:

dbhost=localhost
dbname=jposee
dbuser=jpos
dbpass=password
dbport=5432

And then have a db.properties file (in src/dist/cfg/db.properties) that looks like this:

connection.url=jdbc:postgresql://@dbhost@:@dbport@/@dbname@

When you call gradle installApp, you’ll see that the destination db.properties file in the build/install/xxx/cfg directory will look something like this:

connection.url=jdbc:postgresql://localhost:5432/jposee

When you run locally, in development mode, your devel.properties file, which happens to be the default one, is the one providing tokens for replacement.

But you can call gradle -Ptarget=ci installApp. In that case, tokens get read from a file called ci.properties that you can use in your continuous integration environment.

CONFIG SMELL 1

If you have some handy tokens in the devel.properties file to make configuration tweaks for a development versus continuous integration target, or you have a local, test, or stresstest environment, that’s fine. However, if you have a prod target, there might be a configuration smell. It can be acceptable, but beware that you are not leaking sensitive information such as database names, URLs, access tokens, etc.

prod.properties should probably be reviewed by your security team.

Runtime Environments

The runtime environments are explained here, but the basic idea is that you can define a variable either in a YAML file, a Java property, or an OS environment variable and use it as part of the configuration using property expansion. Imagine you have an OS environment variable like this:

export DB_PASSWORD=mysupersecurepassword

Then you can configure a QBean, or config file with something like:

<property name="dbpass" value="${db.password}" />

When you call cfg.get("dbpass") from your QBean or participant implementation, you will get the string "mysupersecurepassword".

There are variations you can use, such as defaults like ${db.password:nopassword}, in which case, if the DB_PASSWORD OS environment variable is not available or not exported, a call to cfg.get("dbpass") would return nopassword.
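For instance, a participant can pick up the resolved value through the standard Configurable mechanism. This is a minimal sketch; MyParticipant is hypothetical, and the dbpass property name comes from the example above:

import java.io.Serializable;
import org.jpos.core.Configurable;
import org.jpos.core.Configuration;
import org.jpos.core.ConfigurationException;
import org.jpos.transaction.TransactionParticipant;

public class MyParticipant implements TransactionParticipant, Configurable {
    private String dbpass;

    @Override
    public void setConfiguration(Configuration cfg) throws ConfigurationException {
        // ${db.password:nopassword} has already been de-referenced at this point,
        // so this returns the resolved value (or "nopassword" as a fallback)
        dbpass = cfg.get("dbpass");
    }

    @Override
    public int prepare(long id, Serializable context) {
        return PREPARED | READONLY | NO_JOIN;
    }
}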

From the jPOS Programmer’s Guide Section 3.3 (Configuration)

SimpleConfiguration recognizes and de-references properties with the format: ${xxx} and searches for a system property, or operating system environment variable under the xxx name. As a fallback mechanism of last resort, the property can be resolved from an environment file, in YAML or properties file syntax, found in the cfg directory (default filename default.yml or default.cfg, which can be overridden by the jpos.env system property).

You can add a default value within the expression itself, using the : (colon) separator. For example ${xxx:def_value}. If the property is not found, the default value will be used.

The format $sys{xxx} de-references just from system properties, $env{xxx} just from the operating system environment, and $cfg{xxx} just from the environment file (the default.yml/default.cfg described above).

In the rare case where a value with the format ${...} is required, the $verb{${...}} format (verbatim) can be used.

In addition, a property named xx.yy.zz can be overridden by the environment variable XX_YY_ZZ (note that dots are replaced by underscores, and the property name is converted to uppercase).

Things start to get confusing when we hit Config Smell 2:

CONFIG SMELL 2

We often encounter situations where a build-time target configuration looks like this:

dbpass=${db.password}

and then the dbpass property in a given QBean or configuration file looks like this:

<property name="dbpass" value="@dbpass@" />

That totally works. The resulting file in the build directory after a gradle installApp will look like this:

<property name="dbpass" value="${db.password}" />

as we want, and then, at runtime, the ${db.password} value will be de-referenced using the DB_PASSWORD environment variable.

That's fine, everything works, but why oh why!

When you see a build-time target configured like this, new users might get confused about when ${db.password} gets de-referenced: does it happen at build time, based on a build-time environment variable, or at runtime? It's very confusing, and even an experienced jPOS developer can get caught off guard. Don't do it.

We mentioned CONFIG SMELL 1 and CONFIG SMELL 2, but there's CONFIG SMELL 0!

CONFIG SMELL 0!

Configuration doesn’t belong in the same repository as the source code. That’s why we have a distnc (distribution, no-config) task in jPOS’ Gradle plugin that only ships the main jar and all dependencies in the lib directory, leaving the cfg and deploy directories to be fetched by other means. We tend to use GitOps for that, even before GitOps was a term; we did this in the old Subversion days. Having a deploy and cfg directory in your development repo is handy for development and testing, but we recommend keeping it separate from production.

New Documentation

· One min read
Alejandro Revilla
jPOS project founder

I've recently embarked on an ambitious project to develop a new jPOS documentation, which is currently in its nascent stages. This tutorial is designed to evolve in tandem with ongoing jPOS development, and in time, it will replace existing documentation to become a dynamic, live tutorial and reference.

As it stands, the tutorial is in its early stages, but it already offers valuable insights for newcomers to jPOS, particularly with the setup primer and QBean information. Veterans of jPOS will find the sections on 'Environments' insightful, especially for applications in cloud deployments. This is also a prime opportunity to test and experience the jPOS Gradle Plugin in action.

Alongside the tutorial, we have a companion project on Github. It’s structured with distinct branches for each part of the tutorial, enabling you to access and execute the examples easily.

Since this is a work in progress and will be evolving alongside jPOS, your feedback is incredibly valuable. We encourage you to share your thoughts, suggestions, and insights, which will be instrumental in shaping this tutorial into a comprehensive and up-to-date resource for all things jPOS.

Here is the link.

Restricted Context

· 3 min read
Alejandro Revilla
jPOS project founder

The jPOS TransactionManager Context, as it traverses all the participants, is quite promiscuous, as promiscuous as the global state of any application. It relies on the participants to do the right thing and not step on each other, by mistake or on purpose.

As applications grow and we move from a handful of participants to several dozen, or even hundreds, it becomes very useful to put some constraints on what a participant can or can't do with its Context.

On large jPOS applications, created by disparate teams, the TransactionManager configuration file is usually an excellent self-documenting starting point in order to figure out which transactions are supported, which participants are involved and how they are configured.

In addition, it is common for a participant's implementation, before doing its job, to check the Context for the presence of its required input variables. CheckFields, for example, needs a REQUEST object; SendResponse needs a RESPONSE object.

To solve this, we've introduced a new requires element, verified by the TransactionManager before calling the participant's prepare method, that looks like this:

<participant class="org.jpos.transaction.CheckFields">
  <requires>REQUEST</requires>
</participant>

In this case, the TM checks for the presence of a REQUEST entry in the Context. If it’s not present, the transaction aborts right away (as if the participant had returned ABORTED).

This is just syntactic sugar, as the participant can (and actually does) check that, but there's a little twist.
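To put that sugar in perspective, this is roughly the check a participant would otherwise perform by hand at the top of its prepare method (a hypothetical sketch):

@Override
public int prepare(long id, Serializable context) {
    Context ctx = (Context) context;
    if (ctx.get("REQUEST") == null)   // what <requires>REQUEST</requires> does for us
        return ABORTED;
    // ... actual work ...
    return PREPARED;
}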

The participant does not receive the whole Context; it receives a clone of it with JUST the entries listed in the <requires> element (which, BTW, can be a comma-separated list).

In addition to the requires element, we support an optional element, e.g.:

<participant class="org.jpos.transaction.CheckFields">
  <requires>REQUEST</requires>
  <optional>TIMESTAMP, TIMEZONE</optional>
</participant>

You get the idea: if the optional TIMESTAMP and TIMEZONE entries are present, they get copied to the mini-Context used to call the participant.

A similar approach is used to selectively merge back the output of a participant. We use the provides element like this:

<participant class="org.jpos.transaction.CreateDSRequest">
  <requires>REQUEST</requires>
  <provides>RESPONSE</provides>
</participant>

The TransactionManager, which has access to both the original Context and the mini-Context offered to the participant, merges back ONLY the entries listed in the provides element.

This is an experimental preview feature implemented in jPOS 3.0 (next). It needs to be battle-tested in order to see if it makes sense to keep it, or whether it's just a useless idea. If we like it and continue supporting it, we may choose to enforce it on every participant. In a second phase we could define a TM-level promiscuous property, so that choosing the old behavior requires setting it to true, either globally at the TM config level or at the participant level, perhaps with a promiscuous="true" attribute.

jPOS 2.1.8 has been released

· One min read
Alejandro Revilla
jPOS project founder

jPOS 2.1.8 has been released. New development version is 2.1.9-SNAPSHOT and jPOS 3.0.0 (aka 'next').

See ChangeLog for details.

We will continue to maintain the 2.x.x series for a long while, so you can consider the 2.x.x series an LTS version, but all the fun and new features will happen in 3.x.x.

Q2 templates

· 3 min read
Alejandro Revilla
jPOS project founder

Typical jPOS applications have many similar servers, muxes, and channels. Sometimes hundreds of them.

In order to simplify this configuration, Q2 now supports templates that can be launched from a template deployer, or programmatically using Q2's deployTemplate method.

A template is identical to a regular QBean XML descriptor, can be placed anywhere in the filesystem or classpath (including the dynamic classpath), and contains a special keyword, __PREFIX__, that is replaced at deploy time.

Q2's deployTemplate method has the following signature:

deployTemplate(String resource, String filename, String prefix);
  • resource is the resource location (e.g.: templates/channel.xml). If the resource starts with jar:, Q2 fetches it from the classpath (e.g.: jar:META-INF/q2/templates/channel.xml).
  • filename is the QBean descriptor filename, e.g.: 10_acme_channel.xml
  • prefix is the Environment prefix (see below).
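For example, assuming a running Q2 instance is in scope as q2, the calls below deploy two descriptors generated from the same template:

// deploys templates/channel.xml as deploy/10_channel_acme.xml,
// replacing __PREFIX__ with "acme" at deploy time
q2.deployTemplate("templates/channel.xml", "10_channel_acme.xml", "acme");

// same idea, fetching the template from the classpath instead
q2.deployTemplate("jar:META-INF/q2/templates/channel.xml", "10_channel_emca.xml", "emca");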

Imagine you have an environment (cfg/default.yml) file that looks like this:

acme:
  host: 192.168.1.1
  port: 8000
emca:
  host: 192.168.1.2
  port: 8001

In a regular deployment, you could have two channel configurations like this:

10_channel_acme.xml

<channel-adaptor name="acme-channel-adaptor" ...>
<channel class="${acme.channel}" packager=...>
<property name="host" value="${acme.host}" />
<property name="port" value="${acme.port}" />
...
...
</channel>
<in>acme-send</in>
<out>acme-receive</out>
</channel-adaptor>

10_channel_emca.xml

<channel-adaptor name="emca-channel-adaptor" ...>
<channel class="${emca.channel}" packager=...>
<property name="host" value="${emca.host}" />
<property name="port" value="${emca.port}" />
...
...
</channel>
<in>emca-send</in>
<out>emca-receive</out>
</channel-adaptor>

With templates, you can have a single template like this somewhere in your filesystem or classpath (say templates/channel.xml).

<channel-adaptor name="__PREFIX__-channel-adaptor" 
enabled="${__PREFIX__.enabled:false}"
class="org.jpos.q2.iso.ChannelAdaptor"
logger="${__PREFIX__.logger:Q2}">
<channel class="${__PREFIX__.channel}"
logger="${__PREFIX__.logger:Q2}"
packager="${__PREFIX__.packager:org.jpos.iso.packager.GenericPackager}">
<property name="host" value="${__PREFIX__.host}" />
<property name="port" value="${__PREFIX__.port}" />
<property name="header" value="${__PREFIX__.header}" />
<property name="connect-timeout" value="${__PREFIX__.timeout:5000}" />
<property name="packager-config" value="${__PREFIX__.packagerConfig}" />
</channel>
<in>__PREFIX__-send</in>
<out>__PREFIX__-receive</out>
<reconnect-delay>${__PREFIX__.reconnect:10000}</reconnect-delay>
<wait-for-workers-on-stop>${__PREFIX__.wait:yes}</wait-for-workers-on-stop>
</channel-adaptor>

and perhaps a mux.xml connected to it:

<mux name="__PREFIX__" class="org.jpos.q2.iso.QMUX"
     enabled="${__PREFIX__.enabled:false}"
     logger="${__PREFIX__.logger:Q2}" realm="__PREFIX__">
  <key>${__PREFIX__.mux.key:41,11}</key>

  <in>__PREFIX__-receive</in>
  <out>__PREFIX__-send</out>

  <ready>__PREFIX__-channel.ready</ready>
</mux>

Then you can deploy a QBean descriptor like this:

<templates>
  <template resource="templates/channel.xml" descriptor-prefix="10_channel_">
    acme,emca
  </template>
  <template resource="templates/mux.xml" descriptor-prefix="20_mux_">
    acme,emca
  </template>
</templates>

The special text __PREFIX__ is replaced by Q2 in each file using the prefixes acme and emca, so the properties:

<property name="host" value="${__PREFIX__.host}" />
<property name="port" value="${__PREFIX__.port}" />

get converted to

<property name="host" value="${acme.host}" />
<property name="port" value="${acme.port}" />

It's very simple, just configuration sugar, but it comes in very handy in deployments with hundreds of services.

This is available in jPOS Next (series 3.x.x) starting with 3.0.0-SNAPSHOT, available in the jPOS Maven repository as well as in the Github next branch.

TM timeout and max-time

· 2 min read
Alejandro Revilla
jPOS project founder

The TransactionManager's participants provide easy ways to control per-transaction timeouts. One can add a TIMESTAMP entry at the beginning of the transaction and then have each participant check how the transaction is doing, so to speak. If the transaction has already spent more than a threshold amount of time, it's better to abort right away, because the client (i.e., a POS device) has probably expired the transaction already and is no longer waiting for our response.

We can do that on each participant, or we can have a general-purpose timeout participant that we place here and there.
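Such a general-purpose participant could look roughly like this. This is a sketch, not jPOS code: the CheckElapsed name, the TIMESTAMP Instant, and the 5000ms threshold are assumptions following the convention described above:

import java.io.Serializable;
import java.time.Duration;
import java.time.Instant;
import org.jpos.transaction.Context;
import org.jpos.transaction.TransactionParticipant;

public class CheckElapsed implements TransactionParticipant {
    private static final long THRESHOLD_MS = 5000L;

    @Override
    public int prepare(long id, Serializable context) {
        Context ctx = (Context) context;
        // placed in the Context at the beginning of the transaction
        Instant start = (Instant) ctx.get("TIMESTAMP");
        if (start == null || Duration.between(start, Instant.now()).toMillis() > THRESHOLD_MS)
            return ABORTED | READONLY | NO_JOIN; // client has most likely given up already
        return PREPARED | READONLY | NO_JOIN;
    }
}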

But because this is a common use case, to encourage its use, in jPOS Next the TransactionManager honors a couple of new attributes at the participant level:

  • timeout
  • max-time

e.g.:

...
<participant class="org.jpos.jcard.CheckFields">
  <property name="mandatory" value="PCODE,7,11,12,AMOUNT,PAN,41" />
</participant>
<participant class="org.jpos.jcard.SelectCurrency" />
<participant class="org.jpos.jcard.CreateTranLog" timeout="1000" max-time="5000">
  <property name="capture-date" value="capture-date" />
  <property name="checkpoint" value="create-tranlog" />
  <property name="node" value="${jcard.node:99}" />
</participant>
...

In this example, three things can happen:

  • If, by the time the TM reaches the CreateTranLog participant, the transaction has taken more than 5000ms so far, the participant is not called and the transaction continues with the ABORTED route.
  • If the participant takes more than 1000ms to process, even if it has returned PREPARED, we coerce the response to ABORTED.
  • If the total elapsed time at the end of the participant is greater than 5000ms (max-time), we also coerce the response to ABORTED.

jPOS Next

· 3 min read
Alejandro Revilla
jPOS project founder

jPOS has evolved based on simple principles:

  • Latest version is always production ready
  • We don't rock the boat. All changes are backward compatible.

When Java started its new release cadence, we kept the baseline source compatibility at Java 8 while making sure jPOS was able to run on newer JDKs, but that prevented us from taking advantage of new language features. Some of them, like the long-awaited multi-line Strings, were nice to have but easy to live without; then there was one very interesting to us, Project Loom (formerly fibers), which almost made it into Java 17, and which we want to test at some large sites along with the TransactionManager.

Sealed classes, Records, switch pattern matching, and being prepared for JEP 411 (which deprecates the Security Manager) are all things we want to play with.

In addition, there are some changes we want to try in jPOS that would certainly rock the boat: changes to audit logging (yes, we want to be aggregator/JSON/ELK friendly), using Strings instead of Integers as ISOMsg keys (so one doesn't have to create an arbitrary mapping for EMV-related messages), Fluent APIs to create components for those who hate XML (funny, people find XML configs archaic but still love pom.xml), in-memory deployments, per-participant profiling in the TransactionManager, etc.

In addition, we need to get rid of some baggage. The compat_1_5_2 module will remain available in the repo as a reference, but it doesn't have to exist in the main tree. Some AWT/Swing based classes may go back to jPOS-EE as a module, and we may have a sibling module using JavaFX in order to see if it gains traction (JavaFX looks promising).

The CLI is up for a refactor. It's super useful for implementing jPOS commands, but the API, initially tied closely to JLine, has proven to be somewhat problematic once we added SSH support. We can do better on that side.

So... here is what we are planning to do:

  • jPOS 2.x.x series will become an LTS version with just security patches.
  • All the fun will happen in the jPOS 3.x.x series, which currently has a baseline of Java 17 (but that may change if our Loom tests go well and we need a newer one as baseline).

For the time being, there's a next branch (already available in Github) that builds jPOS 3.0.0-SNAPSHOT.

The master branch currently has jPOS 2.1.8-SNAPSHOT but once we release 2.1.8, we'll switch next to become master (or take the opportunity to rename it to main) and use 2.1.8 as the forking point for an LTS v2 (or similar name) branch.

[Diagram: jpos-branch-plan]

jPOS 2.1.7 has been released

· One min read
Alejandro Revilla
jPOS project founder

jPOS 2.1.7 has been released. New development version is 2.1.8-SNAPSHOT.

See ChangeLog for details.

Work has started toward jPOS 3.0.0, targeting Java 17 as a baseline.

We will continue to maintain the 2.x.x series for a long while, so you can consider the 2.x.x series an LTS version, but all the fun and new features will happen in 3.x.x.