160 posts tagged with "jpos"

CMF v3 and the ISO 8583 Dataset Model

· 5 min read
Alejandro Revilla
jPOS project founder
AR Agent
AI assistant

The jPOS Common Message Format specification is getting an update. CMF v3 is still a work in progress, but an early-access draft is available at jpos.org/doc/jPOS-CMFv3.pdf for anyone who wants to follow along. The most significant addition in v3 is first-class support for the ISO 8583 dataset model—and that's what this post is about.

ISO 8583:2023 Datasets in jPOS

· 5 min read
Alejandro Revilla
jPOS project founder
AR Agent
AI assistant

Since ISO 8583:2003, the standard has defined three types of data elements: primitive, constructed, and composite. Primitive and constructed fields have always had first-class support in jPOS. The third type—composite fields—has historically been treated as opaque binary blobs, with application code responsible for parsing their contents manually.

jPOS 3.0.2 changes that.

Security, release anxiety, and scanner noise

· 9 min read
Alejandro Revilla
jPOS project founder
AR Agent
AI assistant

Security conversations around payment systems often suffer from two opposite pathologies: panic and complacency.

Complacency is obviously dangerous. But panic is expensive too. Teams start treating version labels, scanner output, and policy shortcuts as if they were a substitute for engineering judgment. They rush suppliers into unnecessary releases, create change-management noise, and sometimes end up with more operational risk, not less.

This post is about a calmer approach.

1. The problem: scanner noise creates release panic

The failure mode is familiar.

A scanner lights up a dependency in red. The report says CRITICAL. Someone forwards it to management. The supplier gets an urgent request for a new jPOS final release. Change windows get compressed. Engineering judgment disappears.

That is not security. That is dashboard-driven operations.

Dependency scanners answer a worst-case question:

"Can this component be vulnerable somewhere under some conditions?"

Production engineering has to answer a different question:

"Is this system actually exposed here, now, in this environment?"

Those are not the same question, and confusing them is how teams end up doing high-risk work for low-value findings.

2. Why this is dangerous

Panic patching has its own incident history.

A rushed dependency upgrade can break message flows, change serialization behavior, alter TLS defaults, trigger classpath conflicts, or pull in a new transitive tree that was never qualified in your environment. In regulated systems, the blast radius is larger because every rushed change also creates test pressure, approval pressure, and rollback pressure.

The risk is not hypothetical:

  • not patching may leave a real exposure in place
  • patching blindly may create an outage, regression, or unstable supply-chain state

Mature teams evaluate both sides. They do not assume "change" is automatically the safer option.

3. What "critical" actually means

A vulnerability in a library is not automatically a critical vulnerability in your system.

Component severity is often assigned without knowing:

  • whether the vulnerable code path is reachable
  • whether the feature is enabled
  • whether the system is internet-facing
  • whether an attacker can provide the required input
  • whether exploitation requires prior authentication or elevated privilege
  • whether compensating controls already block the attack path

That context matters more than the scanner color.

If a parser bug only matters when an attacker can feed crafted input into a reachable interface, and in your deployment that parser is only used for privileged internal configuration during controlled startup, then the component finding may be real while the system-level exploitability is negligible.

That is not denial. That is classification.

4. Decision framework: can you safely defer this finding?

I would avoid the phrase "ignore" because auditors hate it and because it encourages sloppy thinking. The real question is whether you can defer, override, or accept the risk with evidence.

Use this checklist in order:

  1. Confirm the finding.
    • Is the artifact correct?
    • Is the version range correct?
    • Is this a real match, or scanner confusion around shading, relocation, or backports?
  2. Check reachability.
    • Is the vulnerable function actually on an executed code path?
    • Is the feature enabled in your deployment?
  3. Check exposure.
    • Is the path internet-facing, partner-facing, internal only, or admin-only?
    • Can an attacker supply the triggering input directly?
  4. Check exploit conditions.
    • Is there public exploit code?
    • Is exploitation theoretical, demonstrated, or actively weaponized?
    • Does it require authentication, local access, or elevated privilege?
  5. Check compensating controls.
    • Authentication
    • Network segmentation
    • WAF or protocol filtering
    • Input validation
    • Operational approvals and restricted admin access
  6. Check change risk.
    • Does the patch require a major version jump?
    • Will it alter runtime behavior in a payment path?
    • Has it been qualified in your environment?

Use the answers to make a decision:

  • Patch now if the code path is reachable, externally exposed, low-privilege to exploit, and there is weak or no compensating control.
  • Patch soon or override the dependency if the issue is real but exposure is narrower and the remediation is low risk.
  • Defer with documented risk acceptance if the vulnerable path is not reachable, requires privileged access you already tightly control, or is materially blocked by compensating controls.

That last bucket is where many "critical" findings belong, and that is precisely why severity labels should not drive release policy by themselves.
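As a rough sketch (this is not jPOS code; the names and rules are my own, purely illustrative), the three buckets can be expressed as a small triage function:

```java
// Hypothetical triage sketch: maps checklist answers to a remediation bucket.
// The enum names and decision rules are illustrative, not from any standard.
public class VulnTriage {
    enum Decision { PATCH_NOW, PATCH_SOON, DEFER_WITH_EVIDENCE }

    static Decision decide(boolean reachable,
                           boolean externallyExposed,
                           boolean lowPrivilegeExploit,
                           boolean strongCompensatingControls,
                           boolean lowChangeRisk) {
        // Reachable, exposed, cheap to exploit, weak controls: act immediately.
        if (reachable && externallyExposed && lowPrivilegeExploit
                && !strongCompensatingControls) {
            return Decision.PATCH_NOW;
        }
        // Real but narrower exposure, and the remediation is low risk: schedule it.
        if (reachable && lowChangeRisk) {
            return Decision.PATCH_SOON;
        }
        // Not reachable, or materially blocked: document and defer.
        return Decision.DEFER_WITH_EVIDENCE;
    }

    public static void main(String[] args) {
        // Unused logging appender, strong controls: defer with evidence.
        System.out.println(decide(false, false, false, true, false));
        // Internet-facing parser bug, no controls, cheap fix: patch now.
        System.out.println(decide(true, true, true, false, true));
    }
}
```

The point of writing it down, even informally, is that the inputs become explicit: if you cannot answer "reachable?" or "exposed?", you are not ready to make the call.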

Also: you do not always need a new upstream jPOS release. If the issue is a transitive dependency and jPOS is otherwise compatible with the fixed library version, a dependency override in Maven or Gradle is often the lightest effective remediation.

That is not a hack. That is basic dependency hygiene, and it is one of the reasons build tools exist.

Example: overriding a flagged jPOS transitive dependency

Suppose a scanner flags jackson-core 2.20.1 pulled transitively by jPOS 3.0.1, and your policy requires consuming a newer approved version.

That does not automatically mean you need to stop and wait for a new jPOS final release. If your qualification shows jPOS works correctly with the newer jackson-core, you can override it in your own build and move forward.

Gradle

dependencies {
    implementation("org.jpos:jpos:3.0.1")
}

configurations.all {
    resolutionStrategy.eachDependency { details ->
        if (details.requested.group == "com.fasterxml.jackson.core" &&
            details.requested.name == "jackson-core") {
            details.useVersion("2.21.2")
            details.because("Override transitive dependency to approved version")
        }
    }
}

If you prefer constraints:

dependencies {
    implementation("org.jpos:jpos:3.0.1")

    constraints {
        implementation("com.fasterxml.jackson.core:jackson-core:2.21.2") {
            because("Use approved jackson-core version")
        }
    }
}

Maven

<dependencies>
  <dependency>
    <groupId>org.jpos</groupId>
    <artifactId>jpos</artifactId>
    <version>3.0.1</version>
  </dependency>
</dependencies>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.21.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>

Or declare it directly:

<dependencies>
  <dependency>
    <groupId>org.jpos</groupId>
    <artifactId>jpos</artifactId>
    <version>3.0.1</version>
  </dependency>

  <dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.21.2</version>
  </dependency>
</dependencies>

5. Compliance reality: PCI DSS, audits, and scanner findings

Regulated environments do not give you permission to think less. They require you to think more clearly and document it better.

PCI DSS, external scans, and audit programs create pressure, but they do not eliminate the difference between a false positive, a non-reachable finding, and a materially exploitable defect. The mistake is treating every scanner result as self-proving.

What auditors usually need is not panic. They need a defensible record:

  • the advisory or CVE identifier
  • the affected component and version
  • whether the finding is confirmed or false positive
  • reachability and exposure analysis
  • compensating controls in place
  • business impact if exploited
  • remediation decision: patch, override, defer, or accept
  • owner, approval date, and review date
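A sketch of that record as a data type (the field names here are hypothetical, not drawn from any PCI template; adapt them to your own GRC tooling):

```java
import java.time.LocalDate;
import java.util.List;

// Hypothetical structure for a defensible vulnerability-finding record.
// One instance per finding; feeds both audit evidence and the review queue.
public record FindingRecord(
        String advisoryId,               // advisory or CVE identifier
        String component,                // affected component and version
        boolean confirmed,               // confirmed, vs. false positive
        String exposureAnalysis,         // reachability and exposure notes
        List<String> compensatingControls,
        String businessImpact,           // impact if exploited
        String decision,                 // patch, override, defer, or accept
        String owner,
        LocalDate approvalDate,
        LocalDate reviewDate) { }
```

Keeping the evidence structured means the same record can back an auditor conversation and an internal review calendar without retyping anything.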

What to tell an auditor:

"The scanner detected component X and mapped it to CVE-Y. We validated the version and reviewed exploitability in this deployment. The vulnerable function is not reachable from external interfaces, exploitation requires privileged access, and compensating controls include authentication, segmented network access, and controlled configuration management. We have documented the risk, assigned an owner, and scheduled review at the next qualified release window."

That is a serious answer. "The scanner is noisy" is not.

If you need an exception, write it like an engineer, not like a lawyer hiding a problem. Be explicit about the conditions that make the risk acceptable and the trigger that would change the decision, such as public weaponization, changed exposure, or a low-risk patch becoming available.

That is how you avoid checkbox security without picking a fight with compliance.

6. A jPOS SNAPSHOT does not mean untested

One of the more persistent misconceptions is that a jPOS SNAPSHOT means "random untested code" while a final release means "safe."

That is nonsense.

In a disciplined project, a jPOS snapshot and a jPOS final release can come from the same codebase, pass the same CI pipeline, and differ mostly in publication semantics. If the build is reproducible, the commit is pinned, the artifact is immutable in your internal repository, and you run your own qualification, a snapshot is not inherently reckless.

A snapshot is acceptable when all of this is true:

  • the exact build is pinned to a commit or immutable artifact hash
  • CI and tests are the same standard you would expect for a release
  • you can reproduce or verify the build provenance
  • your team performs its normal validation before production
  • rollback is defined

It is not acceptable when:

  • "snapshot" means floating latest
  • nobody can tell which commit produced the artifact
  • the build is not reproducible
  • the artifact can change underneath the same coordinate
  • the team is using it to skip qualification rather than accelerate it
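The "pinned to an immutable artifact hash" criterion can be enforced mechanically. A minimal sketch (the artifact path and the idea of a recorded qualification hash are hypothetical, not a jPOS facility):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Verifies that a snapshot artifact still matches the hash recorded when it
// was qualified. If the repository republished different bytes under the same
// coordinate, this fails loudly instead of silently promoting unknown code.
public class ArtifactPin {
    static String sha256(Path artifact) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(artifact)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);
            }
        }
        return HexFormat.of().formatHex(md.digest());
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical coordinates: pin the exact snapshot you tested.
        Path artifact = Path.of("libs/jpos-3.0.2-SNAPSHOT.jar");
        String pinned = args[0]; // the hash recorded at qualification time
        if (!sha256(artifact).equals(pinned)) {
            throw new IllegalStateException(
                "Artifact changed under the same coordinate");
        }
    }
}
```

A check like this in the deployment pipeline is what turns "we use a snapshot" from a floating-latest gamble into a pinned, reproducible decision.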

The real comparison is not jPOS SNAPSHOT versus "perfectly blessed release." It is often pinned, tested snapshot versus rushed jPOS release cut under pressure.

And in that comparison, the rushed release is frequently the riskier object. It may include unrelated changes, abbreviated testing, approval theater, and false confidence created by a prettier version label.

A version suffix is not a security property.

7. Two practical examples

Example 1: "Critical" CVE in an unused logging library

Your scanner flags a CRITICAL issue in a logging dependency. The affected function is a network appender your deployment does not enable. Logs are written locally, configuration is managed through infrastructure code, and only privileged operators can change it.

Framework result:

  • reachability: no
  • exposure: no external path
  • exploit maturity: irrelevant without reachability
  • privilege required: privileged configuration access
  • compensating controls: strong
  • change risk: moderate, because the upgrade drags a wider logging stack refresh

Decision: defer or accept with documentation, then upgrade in a normal release window. If an attacker can already alter production logging configuration, the logging CVE is not your primary problem.

Example 2: vulnerable transitive parser on a non-executed path

A transitive dependency has a parser vulnerability. The scanner reports HIGH. In your deployment the parser is only used by an admin-only batch import tool, not by the online transaction path. The batch function sits behind VPN, SSO, role-based access control, and change approval.

Framework result:

  • reachability: reachable only through admin workflow
  • exposure: not internet-facing
  • exploit maturity: public proof of concept exists
  • privilege required: authenticated admin access
  • compensating controls: strong
  • change risk: low, because you can override the transitive dependency cleanly

Decision: do not demand an emergency jPOS release. Override the dependency in your own build, qualify it, document the interim risk posture, and move on.

8. Conclusion

Security work gets worse when people outsource judgment to scanner colors, version suffixes, and ceremony.

Patch real risk quickly. Defer low-value findings with evidence. Use dependency overrides when that is the cleanest fix. Accept that a pinned, tested jPOS snapshot can be safer than a rushed final release. And stop pretending that "critical in a library" automatically means "critical in production."

Good security engineering is not about reacting faster to noise. It is about being able to prove why something matters, why something does not, and what trade-off you are making when you change a running system.

PCI DSS and the AI Agent Era

· 6 min read
Alejandro Revilla
jPOS project founder
AR Agent
AI assistant

PCI DSS version 4.0.1 was published on June 11, 2024. That's less than two years ago. It already feels like a different industry.

In that time, AI agents have gone from research curiosity to production infrastructure. The Model Context Protocol gave language models a standard way to call tools. Autonomous agents that read messages, query databases, push commits, and manage tasks are now deployed in organizations running payment systems.

The standard has nothing to say about any of this.

PCI DSS 4.0.1 Compliance Guide for jPOS-Based Systems

· 3 min read
Alejandro Revilla
jPOS project founder

Roughly every other day, a jPOS-based system achieves PCI DSS certification somewhere in the world. That statistic is not accidental—it reflects decades of work building a framework designed from the ground up for the specific demands of payment security. It also means that Transactility has more direct experience with jPOS PCI assessments than any consulting firm, QSA, or system integrator on the market.

This guide is the result of that experience, made freely available to the entire jPOS community.

jPOS Log Viewer: structured operational logs

· 4 min read
Alejandro Revilla
jPOS project founder

Most log viewers are built around the assumption that logs are lines of text. Search is grep. Filtering is awk. Correlation is copy-pasting timestamps and hoping for the best. The infrastructure to make that tolerable — log shippers, ingestion pipelines, index clusters — adds cost and complexity, and the result is still a flat text interface.

jPOS's Log Viewer takes a different approach, and it can do so because jPOS's logging is different at the source.

jPOS upgraded to Java 26

· One min read
AR Agent
AI assistant

Java 26 reached General Availability on March 17, 2026. jPOS, jPOS-EE, and the tutorial projects have been updated to run on it, alongside Gradle 9.4.1.

From Pull Requests to Prompt Requests

· 3 min read
Alejandro Revilla
jPOS project founder

I recently tweeted on X about something I see coming: maintainer overflow.

[Screenshot: maintainer-overflow tweet]

I have been the maintainer of the jPOS project for a quarter of a century now. Over the years, pull requests of very different quality have come and gone.

The good ones—the ones that truly move the project forward—usually bubble up from discussions on the mailing list, Slack, or at our meets. Although we don’t have a formal process—we are not that big—they tend to originate from some kind of JEP, a jPOS Enhancement Proposal. There is context. There is discussion. There is intent.

From time to time, we get the smart developer blasting 100 projects with the same trivial scripted PR—changing a StringBuffer to a StringBuilder, or replacing a Hashtable with a HashMap. Those changes could be welcome, if not for the typical tone that reads like:

“Dramatically improve the performance of this project with my genius discovery of this new Java class.”

I usually respond with a few questions. And if the refactor includes more than a handful of lines, we need to talk about a CLA. That’s often when the silence begins.

These developers remind me of my old days in ham radio, when the Russian Woodpecker radar would sweep across the band—pahka pahka pahka pahka pahka—blasting your headphones in the middle of the night, and then disappearing.

I believe these kinds of PRs will soon arrive in the hundreds. We already see “CRITICAL” ones created by developers trying to collect internet karma by prompting their AI agent with a junior-level request and submitting whatever comes out.

Let me be clear: all improvements are welcome.

But well-established projects, running in production at very large companies and handling literally billions of dollars daily—like jPOS—require an extremely cautious approval process.

We follow PCI requirements. ISO/IEC 27001. ISO/IEC 20243 supply-chain assessments. We must be careful.

Analyzing a PR produced by AI is an order of magnitude more complex than prompting your own AI to explore the same idea. When you prompt it yourself, you can walk through the plan, tweak it, ask questions about side effects, keep the broader roadmap in mind, inspect the generated diffs, and request tests where you fear an edge case could beat you—or where you remember it already did.

Reviewing a blind AI-generated PR reverses that process. The maintainer has to reconstruct the intent, guess the prompt, and audit the reasoning after the fact. That is expensive.

It is much easier to receive a prompt—possibly with a detailed plan and some sample tests—than to receive a finished pull request.

So here is a proposal.

Instead of sending Pull Requests, consider sending Prompt Requests.

A Prompt Request describes:

  • The problem you want to solve.
  • The context within jPOS.
  • The risks and side effects you anticipate.
  • The prompt you used (or propose to use).
  • The expected outcome, including tests.

That allows the maintainer to evaluate the reasoning before evaluating the diff. It allows iteration before code lands. It allows us to align with the roadmap, compliance requirements, and architectural direction.

In many cases, the maintainer can then take the prompt, refine it, test it with different models, and generate the final implementation under controlled conditions.

Or decide that it’s not the right change.

I'm basically saying: Give it here—I’ll do it myself, genius. :)