jPOS upgraded to Java 26
Java 26 reached General Availability on March 17, 2026. jPOS, jPOS-EE, and the tutorial projects have been updated to run on it, alongside Gradle 9.4.1.
One of the things that breaks down quickly in real-world accounting systems is currency.
Most ledger designs treat currency as an afterthought — a field on a transaction, a conversion applied at reporting time, a problem deferred to the spreadsheet team. MGL treats it differently. Exchange rates are a first-class part of the data model, imported automatically, stored with full history, and wired directly into the layer architecture so that multi-currency consolidation happens at query time, not at batch time.
MGL is starting to feel like a real system now, not just a collection of ideas.
The core ledger is already in place, but what makes it interesting is how the pieces are starting to come together: multi-layer accounting, virtual layers for reporting and FX-driven consolidation, high-volume controller-style account structures, and an AI assistant that can operate the system through the same tools and permission model used by the web UI.
We just released version 0.0.17 of the jPOS Gradle Plugin. This release brings two meaningful improvements and a toolchain upgrade.
If you already have a local copy of jPOS-EE (master or next), please note there are REQUIRED ACTIONS at the end of this blog post.
Following the same branch model we adopted for jPOS last December, jPOS-EE is now aligned with JEP-14 — The Tip & Tail Model of Library Development.
The rename is straightforward:
- next branch (jPOS-EE 3.x) becomes main — the tip, where active development happens.
- master branch (jPOS-EE 2.x) becomes tail — the stable series, receiving only critical fixes.

To update a local clone, for the next branch:

git branch -m next main
git fetch origin
git branch -u origin/main main
git remote set-head origin -a

And for the master branch:

git branch -m master tail
git fetch origin
git branch -u origin/tail tail
The jPOS tutorial has been expanded with several new sections covering some of the most commonly used — and most commonly misunderstood — parts of the framework.
In our Tip‑and‑Tail announcement we described the new branch model we’re rolling out across the jPOS ecosystem. The jpos/jPOS-template repository now follows that scheme:
| Old branch | New branch | Purpose |
|---|---|---|
| next | main | tip branch (default) |
| next-multimodule | main-multimodule | tip branch for the multimodule tree |
| master | tail | trails the latest tagged release |
| multimodule | tail-multimodule | tail branch for the multimodule tree |
If you already have the template checked out, you only need to teach Git the new names—no history was rewritten.
git fetch origin --prune
git branch -m next main
git branch -u origin/main main
git branch -m next-multimodule main-multimodule
git branch -u origin/main-multimodule main-multimodule
git branch -m master tail
git branch -u origin/tail tail
git branch -m multimodule tail-multimodule
git branch -u origin/tail-multimodule tail-multimodule
That’s it—your local branch names now match the remote.
Grabbing a fresh copy of the repo is always the simplest option:
git clone https://github.com/jpos/jPOS-template.git
cd jPOS-template
GitHub serves main by default. If you’re working with the multimodule example, switch to main-multimodule:
git switch main-multimodule
From here on out, whenever our docs mention “tip” think main (or main-multimodule), and whenever they mention “tail” think tail (or tail-multimodule). Happy hacking!
I recently tweeted about something I see coming: maintainer overflow (X).

I have been the maintainer of the jPOS project for a quarter of a century now. Over the years, pull requests of very different quality have come and gone.
The good ones—the ones that truly move the project forward—usually bubble up from discussions on the mailing list, Slack, or at our meets. Although we don’t have a formal process—we are not that big—they tend to originate from some kind of JEP, a jPOS Enhancement Proposal. There is context. There is discussion. There is intent.
From time to time, we get the smart developer blasting 100 projects with the same trivial scripted PR—changing a StringBuffer to a StringBuilder, or replacing a Hashtable with a HashMap. Those changes could be welcome, if not for the typical tone that reads like:
“Dramatically improve the performance of this project with my genius discovery of this new Java class.”
I usually respond with a few questions. And if the refactor includes more than a handful of lines, we need to talk about a CLA. That’s often when the silence begins.
These developers remind me of my old days in ham radio, when the Russian Woodpecker radar would sweep across the band—pahka pahka pahka pahka pahka—blasting your headphones in the middle of the night, and then disappearing.
I believe these kinds of PRs will soon arrive in the hundreds. We already see “CRITICAL” ones created by developers trying to collect internet karma by prompting their AI agent with a junior-level request and submitting whatever comes out.
Let me be clear: all improvements are welcome.
But well-established projects, running in production at very large companies and handling literally billions of dollars daily—like jPOS—require an extremely cautious approval process.
We follow PCI requirements. ISO/IEC 27001. ISO/IEC 20243 supply-chain assessments. We must be careful.
Analyzing a PR produced by AI is an order of magnitude more complex than prompting your own AI to explore the same idea. When you prompt it yourself, you can walk through the plan, tweak it, ask questions about side effects, keep the broader roadmap in mind, inspect the generated diffs, and request tests where you fear an edge case could beat you—or where you remember it already did.
Reviewing a blind AI-generated PR reverses that process. The maintainer has to reconstruct the intent, guess the prompt, and audit the reasoning after the fact. That is expensive.
It is much easier to receive a prompt—possibly with a detailed plan and some sample tests—than to receive a finished pull request.
So here is a proposal.
Instead of sending Pull Requests, consider sending Prompt Requests.
A Prompt Request describes the intent behind the change, the prompt itself, and ideally a detailed plan and some sample tests.
That allows the maintainer to evaluate the reasoning before evaluating the diff. It allows iteration before code lands. It allows us to align with the roadmap, compliance requirements, and architectural direction.
In many cases, the maintainer can then take the prompt, refine it, test it with different models, and generate the final implementation under controlled conditions.
Or decide that it’s not the right change.
I'm basically saying: Give it here—I’ll do it myself, genius. :)
We are pleased to announce the release of jPOS 3.0.1, a stabilization update to the 3.x line that focuses on robustness, observability, and production hardening.
While 3.0.0 was a major leap forward—introducing Virtual Threads, Micrometer instrumentation, Java Flight Recorder integration, and full JPMS support—3.0.1 consolidates that foundation. This release incorporates a large number of bug fixes, performance refinements, dependency updates, and operational improvements. Most importantly, it is already running in production at several large sites, validating the architectural decisions made in 3.0.0 under real-world load.
Several improvements target operational stability:
- Improved ChannelAdaptor reconnect logic and early detection of dead peers.
- ISOServer now gracefully catches BindException.
- MUXPool startup is more lenient with missing muxes and sanitizes configuration.

In addition, the new ConfigValidator helps catch configuration issues early, reducing deployment-time surprises.
Virtual Threads were one of the flagship features of jPOS 3. This release refines their adoption:
- New virtual-thread property on ISOServer (default false), ChannelAdaptor (default false), and TransactionManager (default true).
- DirPoll now supports Virtual Threads.

These additions give operators more explicit control when tuning thread behavior in mixed or incremental migration scenarios.
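As a sketch of how this could be applied, here is an illustrative deploy fragment. Only the virtual-thread property name and its documented defaults come from the release notes; the surrounding element layout follows the usual ChannelAdaptor deploy conventions, and the channel class, host, port, and queue names are placeholders:

```xml
<!-- Illustrative deploy fragment: only the virtual-thread property and its
     documented default come from the release notes; everything else is a
     placeholder following standard ChannelAdaptor deploy conventions. -->
<channel-adaptor name="my-channel" class="org.jpos.q2.iso.ChannelAdaptor">
  <channel class="org.jpos.iso.channel.NACChannel"
           packager="org.jpos.iso.packager.GenericPackager">
    <property name="host" value="192.168.1.1" />
    <property name="port" value="8000" />
  </channel>
  <!-- default is false for ChannelAdaptor; set explicitly to opt in -->
  <property name="virtual-thread" value="true" />
  <in>my-channel-send</in>
  <out>my-channel-receive</out>
</channel-adaptor>
```

The same property name applies to ISOServer, and TransactionManager already defaults to true, so no change is needed there unless you want to opt out.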
Observability continues to improve:
- New Metrics utility class for easier Micrometer access.
- New /jpos/q2/status endpoint when metrics are enabled.
- PrometheusMeterRegistry now enables throwExceptionOnRegistrationFailure to surface misconfiguration early.

These changes improve clarity and safety in high-volume environments where metric cardinality and instrumentation correctness matter.
Q2 receives several important improvements:
- Q2.node() (now static).
- New QFactory method to instantiate from Element.
- enabled attribute on additional QServer child elements.

Together, these changes make Q2 more flexible in embedded, modular, and containerized deployments.
As part of ongoing maintenance and supply-chain hygiene:
We also export org.jpos.iso.channel.* via JPMS, further improving modular compatibility.
- SnapshotLogListener.
- LSPace.
- Environment.resolve(String).

If you are already on 3.0.0, this is a strongly recommended upgrade. It strengthens the runtime characteristics of the 3.x architecture, improves observability, tightens security posture, and incorporates extensive real-world feedback from early adopters.
If you are coming from jPOS 2.x, 3.0.1 represents a mature entry point into the modernized 3.x line—bringing Virtual Threads, Micrometer-based instrumentation, JPMS support, and structured logging into a stable, production-validated release.
References:
/by Barzilai Spinak a.k.a. @barspi a.k.a. The Facilitator/
A long time ago, our Benevolent Dictator for Life published a blog post titled You want a timeout, which I suggest reading before this one. That post is still totally valid, but when the time comes to actually configure those timeouts, the available options can get a little confusing.
In the jPOS world, several configuration options use terms like "timeout" or "alive" in their name, but each serves a different purpose. Due to the organic growth of jPOS throughout the years, configuration option names were often chosen because they made sense within a specific context or use case. They have a distinct meaning within a particular jPOS component, and apply only at a certain point during the message flow or protocol stage. However, when viewed in the broader picture, some of those names can feel repetitive or vague against the full array of available options.
Making things even more confusing, the configuration options follow different design trends, or just use what was historically available in jPOS at the time of implementation. Thus, we have some that are XML attributes, others may be a configuration <property ...> (of the kind you get through the Configurable interface), and yet others are their own full blown XML element.
It's often unclear which option applies to which object, or what all the available ones are. Many are documented, and for others we may need to refer to the source code to understand the full semantics and when they apply.
A couple of years ago I compiled, mostly for myself, a file in the style of a ChannelAdaptor deploy file, with comments about all the relevant configurations I found. It took me a good part of an afternoon, following code paths and grepping the jPOS code base.
So here it is, sanitized and commented, for your own benefit. I hope it will clarify this topic and help you build more robust systems:
<channel-adaptor name='my-channel' class="org.jpos.q2.iso.ChannelAdaptor">
<!-- ================================
This is the ISOChannel being wrapped by the ChannelAdaptor.
The following explanations apply if the channel implementation extends BaseChannel.
================================= -->
<channel class="org.jpos.iso.channel.SomeChannel"
packager="org.jpos.iso.packager.GenericPackager"
timeout="DOES NOT EXIST!!!">
^^^^^^^^^^^^^^^^^^^^^^^^^^
I have seen multiple cases of the timeout XML attribute for <channel>.
It may be a leftover from an ancient jPOS codebase that does not exist anymore,
or a totally hallucinated property. But it has been copied in many projects in a
sort of "cargo cult programming" fashion.
The channel element has NO KNOWLEDGE of a timeout attribute.
It doesn't hurt that it's there, but it serves no purpose.
[... some extra channel config ...]
<!-- ==================================================
The following configuration **properties** are at the CHANNEL LEVEL,
therefore, inside the channel XML element.
====================================================== -->
<!-- boolean: default false
Enable the **low-level** `SO_KEEPALIVE` feature for the socket.
This feature is handled by the operating system's network stack, and neither
jPOS, nor we, can do anything else other than enabling it.
This property has **absolutely nothing** to do with the <keep-alive> XML
element defined for the ChannelAdaptor and explained below in this file.
-->
<property name="keep-alive" value="true" />
<!-- boolean: default false
When enabled, if a zero-length ISOMsg is received from the remote endpoint,
the BaseChannel receive() method will consume it and keep reading from the socket.
Some implementations of BaseChannel handle this in the overridden getMessageLength(),
sending a zero-length message response back and continuing to
read from the socket. In this case, the zero-length message does not propagate up to
BaseChannel.
This config is the counterpart to a remote sending zero-length messages.
Normally used in a QServer's channel, because in jPOS at least, the remote
client ChannelAdaptor is the one that sends the zero-length messages.
So this config is more or less related to the <keep-alive> element described below
(which is NOT the keep-alive property described above!).
-->
<property name="expect-keep-alive" value="true" />
<!-- pretty obvious: how long to wait when attempting connection.
default: whatever the timeout property (see below) has been set to (or its default)
This is a long value, in milliseconds, and it's by default initialized to the
same value as `timeout` (see below), but a shorter value may be advisable.
-->
<property name="connect-timeout" value="60000" />
<!--
default: 300000 (5 min)
This is the **socket read** timeout, handled at the O.S. socket level.
It is always enabled at **some value** (unless explicitly set to zero, which
effectively disables it and is strongly discouraged!).
If the "reader thread" of the channel does not read even a single byte for that
amount of time, an exception will be thrown, the local socket will be forcefully closed,
and the <reconnect-delay> will kick in, and a new connection will be attempted.
NOTE: Since this timeout only applies when nothing has been received from the socket, the
regular exchange of ISO-8583 echo tests (either receiving a request or a response) will
keep the channel "happy", and this timeout will not happen.
-->
<property name="timeout" value="300000" />
</channel>
<!-- ==================================================
The following config options are OUTSIDE the channel element.
They are children elements of the ChannelAdaptor (or QServer) wrapper.
They are NOT config properties, but have their own XML tags instead.
====================================================== -->
<!--
WARNING: it's not a true/false but yes/no (anything different from "yes" means "no")
Default: "no"
If enabled, and the ChannelAdaptor hasn't **sent** an ISOMsg out for
"reconnect-delay" milliseconds, then invoke the underlying channel's sendKeepAlive(),
which by default sends an ISOMsg of length zero.
It's worth noticing that:
1) The underlying channel's sendKeepAlive() is the one sending the packet (usually a zero-length
message, with just the header).
2) The ChannelAdaptor (not the channel) is the one who triggers this behavior.
3) There's no equivalent for a channel wrapped in a QServer.
4) The timeout for the ChannelAdaptor to trigger this behavior is the same as the reconnect-delay
value. It repurposes that config value for this extra behavior.
5) The regular exchange of ISO-8583 echo tests (or any other message) will prevent this from
happening.
-->
<keep-alive>yes/no</keep-alive>
<!--
default: 0, meaning forever
When we receive an ISOMsg through the channel, it will be sent to the <out> space queue.
The ISOMsg has an **expiration** timeout (`sp.out(msg, timeout)`) in the queue taken from this value.
It will live in the queue for this amount of time, until consumed or expired.
Normally there will be a MUX on the other side of the <out> queue that will consume the ISOMsg
quickly (and maybe send it to a TransactionManager queue, with a different timeout, so we don't
typically have to worry about this parameter).
-->
<timeout>15000</timeout>
<!--
Another hallucinated configuration that I've occasionally seen in the wild.
The ready configuration is something belonging to QMUX.
A ChannelAdaptor will maintain an entry in the space called "my-name.ready", using the
ChannelAdaptor's name (from the QBean `name` attribute). But this configuration shown
below does not exist and serves no purpose.
-->
<ready>DOES-NOT-EXIST.ready</ready>
<!-- For a ChannelAdaptor, how long to wait after a disconnection is detected (or forced)
before we attempt a new connection.
It makes no sense in a QServer: if the socket timeout occurs in a QServer, the channel
will be closed and it's the client's responsibility to reconnect.
-->
<reconnect-delay>10000</reconnect-delay>
<!-- Queues for communicating with the rest of the system.
They are usually associated with the same queues (in reverse) configured for a MUX.
-->
<in>my-channel-send</in>
<out>my-channel-receive</out>
</channel-adaptor>
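To make the queue reversal mentioned above concrete, here is a minimal sketch of the MUX side, assuming a standard QMUX deploy descriptor (the mux name is hypothetical; the queue names match the example channel-adaptor):

```xml
<!-- Sketch of the QMUX counterpart to the channel-adaptor above.
     Note the in/out reversal: the MUX reads what the channel receives,
     and writes what the channel should send. -->
<mux class="org.jpos.q2.iso.QMUX" name="my-mux">
  <in>my-channel-receive</in>
  <out>my-channel-send</out>
</mux>
```

With this pairing in place, a message placed by the MUX in my-channel-send is picked up by the ChannelAdaptor and sent out, and responses received by the channel land in my-channel-receive, where the MUX matches them against pending requests.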