4 posts tagged with "security"

Security, release anxiety, and scanner noise

· 9 min read
Alejandro Revilla
jPOS project founder
AR Agent
AI assistant

Security conversations around payment systems often suffer from two opposite pathologies: panic and complacency.

Complacency is obviously dangerous. But panic is expensive too. Teams start treating version labels, scanner output, and policy shortcuts as if they were a substitute for engineering judgment. They rush suppliers into unnecessary releases, create change-management noise, and sometimes end up with more operational risk, not less.

This post is about a calmer approach.

1. The problem: scanner noise creates release panic

The failure mode is familiar.

A scanner lights up a dependency in red. The report says CRITICAL. Someone forwards it to management. The supplier gets an urgent request for a new jPOS final release. Change windows get compressed. Engineering judgment disappears.

That is not security. That is dashboard-driven operations.

Dependency scanners answer a worst-case question:

"Can this component be vulnerable somewhere under some conditions?"

Production engineering has to answer a different question:

"Is this system actually exposed here, now, in this environment?"

Those are not the same question, and confusing them is how teams end up doing high-risk work for low-value findings.

2. Why this is dangerous

Panic patching has its own incident history.

A rushed dependency upgrade can break message flows, change serialization behavior, alter TLS defaults, trigger classpath conflicts, or pull in a new transitive tree that was never qualified in your environment. In regulated systems, the blast radius is larger because every rushed change also creates test pressure, approval pressure, and rollback pressure.

The risk is not hypothetical:

  • not patching may leave a real exposure in place
  • patching blindly may create an outage, regression, or unstable supply-chain state

Mature teams evaluate both sides. They do not assume "change" is automatically the safer option.

3. What "critical" actually means

A vulnerability in a library is not automatically a critical vulnerability in your system.

Component severity is often assigned without knowing:

  • whether the vulnerable code path is reachable
  • whether the feature is enabled
  • whether the system is internet-facing
  • whether an attacker can provide the required input
  • whether exploitation requires prior authentication or elevated privilege
  • whether compensating controls already block the attack path

That context matters more than the scanner color.

If a parser bug only matters when an attacker can feed crafted input into a reachable interface, and in your deployment that parser is only used for privileged internal configuration during controlled startup, then the component finding may be real while the system-level exploitability is negligible.

That is not denial. That is classification.

4. Decision framework: can you safely defer this finding?

I would avoid the phrase "ignore" because auditors hate it and because it encourages sloppy thinking. The real question is whether you can defer, override, or accept the risk with evidence.

Use this checklist in order:

  1. Confirm the finding.
    • Is the artifact correct?
    • Is the version range correct?
    • Is this a real match, or scanner confusion around shading, relocation, or backports?
  2. Check reachability.
    • Is the vulnerable function actually on an executed code path?
    • Is the feature enabled in your deployment?
  3. Check exposure.
    • Is the path internet-facing, partner-facing, internal only, or admin-only?
    • Can an attacker supply the triggering input directly?
  4. Check exploit conditions.
    • Is there public exploit code?
    • Is exploitation theoretical, demonstrated, or actively weaponized?
    • Does it require authentication, local access, or elevated privilege?
  5. Check compensating controls.
    • Authentication
    • Network segmentation
    • WAF or protocol filtering
    • Input validation
    • Operational approvals and restricted admin access
  6. Check change risk.
    • Does the patch require a major version jump?
    • Will it alter runtime behavior in a payment path?
    • Has it been qualified in your environment?

Use the answers to make a decision:

  • Patch now if the code path is reachable, externally exposed, low-privilege to exploit, and there is weak or no compensating control.
  • Patch soon or override the dependency if the issue is real but exposure is narrower and the remediation is low risk.
  • Defer with documented risk acceptance if the vulnerable path is not reachable, requires privileged access you already tightly control, or is materially blocked by compensating controls.

That last bucket is where many "critical" findings belong, and that is precisely why severity labels should not drive release policy by themselves.
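
The decision buckets above can be condensed into a small sketch. This is purely illustrative (the class and flag names are ours, not a jPOS API), but it makes the inputs of the decision explicit:

```java
// Illustrative triage sketch for the checklist above.
// All names here are hypothetical; this is not a jPOS API.
public class FindingTriage {

    public enum Decision { PATCH_NOW, PATCH_SOON, DEFER_WITH_RISK_ACCEPTANCE }

    public static Decision decide(boolean reachable,
                                  boolean externallyExposed,
                                  boolean lowPrivilegeExploit,
                                  boolean strongCompensatingControls,
                                  boolean lowChangeRisk) {
        // Reachable, exposed, cheap to exploit, nothing blocking it: act now.
        if (reachable && externallyExposed && lowPrivilegeExploit
                && !strongCompensatingControls) {
            return Decision.PATCH_NOW;
        }
        // Real but narrower exposure, and the remediation is low risk: schedule it.
        if (reachable && lowChangeRisk) {
            return Decision.PATCH_SOON;
        }
        // Not reachable, privileged-only, or materially blocked: document and defer.
        return Decision.DEFER_WITH_RISK_ACCEPTANCE;
    }

    public static void main(String[] args) {
        // Example 2 from this post: admin-only path, strong controls, cheap override.
        System.out.println(decide(true, false, false, true, true)); // PATCH_SOON
    }
}
```

The point of writing it down is not automation; it is that every argument to `decide` is a question someone must actually answer before the severity label means anything.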

Also: you do not always need a new upstream jPOS release. If the issue is a transitive dependency and jPOS is otherwise compatible with the fixed library version, a dependency override in Maven or Gradle is often the lightest effective remediation.

That is not a hack. That is basic dependency hygiene, and it is one of the reasons build tools exist.

Example: overriding a flagged jPOS transitive dependency

Suppose a scanner flags jackson-core 2.20.1 pulled transitively by jPOS 3.0.1, and your policy requires consuming a newer approved version.

That does not automatically mean you need to stop and wait for a new jPOS final release. If your qualification shows jPOS works correctly with the newer jackson-core, you can override it in your own build and move forward.

Gradle

dependencies {
    implementation("org.jpos:jpos:3.0.1")
}

configurations.all {
    resolutionStrategy.eachDependency { details ->
        if (details.requested.group == "com.fasterxml.jackson.core" &&
            details.requested.name == "jackson-core") {
            details.useVersion("2.21.2")
            details.because("Override transitive dependency to approved version")
        }
    }
}

If you prefer constraints:

dependencies {
    implementation("org.jpos:jpos:3.0.1")

    constraints {
        implementation("com.fasterxml.jackson.core:jackson-core:2.21.2") {
            because("Use approved jackson-core version")
        }
    }
}

Maven

<dependencies>
  <dependency>
    <groupId>org.jpos</groupId>
    <artifactId>jpos</artifactId>
    <version>3.0.1</version>
  </dependency>
</dependencies>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.21.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>

Or declare it directly:

<dependencies>
  <dependency>
    <groupId>org.jpos</groupId>
    <artifactId>jpos</artifactId>
    <version>3.0.1</version>
  </dependency>

  <dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.21.2</version>
  </dependency>
</dependencies>

5. Compliance reality: PCI DSS, audits, and scanner findings

Regulated environments do not give you permission to think less. They require you to think more clearly and document it better.

PCI DSS, external scans, and audit programs create pressure, but they do not eliminate the difference between a false positive, a non-reachable finding, and a materially exploitable defect. The mistake is treating every scanner result as self-proving.

What auditors usually need is not panic. They need a defensible record:

  • the advisory or CVE identifier
  • the affected component and version
  • whether the finding is confirmed or false positive
  • reachability and exposure analysis
  • compensating controls in place
  • business impact if exploited
  • remediation decision: patch, override, defer, or accept
  • owner, approval date, and review date
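
Those evidence fields map naturally onto a small data structure, and many teams keep exactly this in a findings register. A sketch in Java (field names are ours, not mandated by PCI DSS or any tool):

```java
import java.time.LocalDate;

// Sketch of a vulnerability findings register entry covering the fields
// listed above. Field names are illustrative, not a standard schema.
public record FindingRecord(
        String cveId,                 // advisory or CVE identifier
        String component,             // affected component
        String version,               // affected version
        boolean confirmed,            // confirmed finding vs false positive
        String reachabilityAnalysis,  // reachability and exposure notes
        String compensatingControls,  // controls in place
        String businessImpact,        // impact if exploited
        Remediation remediation,      // patch, override, defer, or accept
        String owner,
        LocalDate approvalDate,
        LocalDate reviewDate) {

    public enum Remediation { PATCH, OVERRIDE, DEFER, ACCEPT }

    public static void main(String[] args) {
        // A deferred finding with a documented owner and review date.
        System.out.println(new FindingRecord(
                "CVE-2024-0000", "jackson-core", "2.20.1", true,
                "not reachable from external interfaces", "segmentation, auth",
                "low", Remediation.DEFER, "secops",
                LocalDate.of(2025, 1, 15), LocalDate.of(2025, 4, 15)));
    }
}
```

Whether this lives in a spreadsheet, a ticket template, or code is secondary; what matters is that every field is filled in before the decision is made.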

What to tell an auditor:

"The scanner detected component X and mapped it to CVE-Y. We validated the version and reviewed exploitability in this deployment. The vulnerable function is not reachable from external interfaces, exploitation requires privileged access, and compensating controls include authentication, segmented network access, and controlled configuration management. We have documented the risk, assigned an owner, and scheduled review at the next qualified release window."

That is a serious answer. "The scanner is noisy" is not.

If you need an exception, write it like an engineer, not like a lawyer hiding a problem. Be explicit about the conditions that make the risk acceptable and the trigger that would change the decision, such as public weaponization, changed exposure, or a low-risk patch becoming available.

That is how you avoid checkbox security without picking a fight with compliance.

6. A jPOS SNAPSHOT does not mean untested

One of the more persistent misconceptions is that a jPOS SNAPSHOT means "random untested code" while a final release means "safe."

That is nonsense.

In a disciplined project, a jPOS snapshot and a jPOS final release can come from the same codebase, pass the same CI pipeline, and differ mostly in publication semantics. If the build is reproducible, the commit is pinned, the artifact is immutable in your internal repository, and you run your own qualification, a snapshot is not inherently reckless.

A snapshot is acceptable when all of this is true:

  • the exact build is pinned to a commit or immutable artifact hash
  • CI and tests are the same standard you would expect for a release
  • you can reproduce or verify the build provenance
  • your team performs its normal validation before production
  • rollback is defined

It is not acceptable when:

  • "snapshot" means floating latest
  • nobody can tell which commit produced the artifact
  • the build is not reproducible
  • the artifact can change underneath the same coordinate
  • the team is using it to skip qualification rather than accelerate it

The real comparison is not jPOS SNAPSHOT versus "perfectly blessed release." It is often pinned, tested snapshot versus rushed jPOS release cut under pressure.

And in that comparison, the rushed release is frequently the riskier object. It may include unrelated changes, abbreviated testing, approval theater, and false confidence created by a prettier version label.

A version suffix is not a security property.

7. Two practical examples

Example 1: "Critical" CVE in an unused logging library

Your scanner flags a CRITICAL issue in a logging dependency. The affected function is a network appender your deployment does not enable. Logs are written locally, configuration is managed through infrastructure code, and only privileged operators can change it.

Framework result:

  • reachability: no
  • exposure: no external path
  • exploit maturity: irrelevant without reachability
  • privilege required: privileged configuration access
  • compensating controls: strong
  • change risk: moderate, because the upgrade drags a wider logging stack refresh

Decision: defer or accept with documentation, then upgrade in a normal release window. If an attacker can already alter production logging configuration, the logging CVE is not your primary problem.

Example 2: vulnerable transitive parser on a non-executed path

A transitive dependency has a parser vulnerability. The scanner reports HIGH. In your deployment the parser is only used by an admin-only batch import tool, not by the online transaction path. The batch function sits behind VPN, SSO, role-based access control, and change approval.

Framework result:

  • reachability: reachable only through admin workflow
  • exposure: not internet-facing
  • exploit maturity: public proof of concept exists
  • privilege required: authenticated admin access
  • compensating controls: strong
  • change risk: low, because you can override the transitive dependency cleanly

Decision: do not demand an emergency jPOS release. Override the dependency in your own build, qualify it, document the interim risk posture, and move on.

8. Conclusion

Security work gets worse when people outsource judgment to scanner colors, version suffixes, and ceremony.

Patch real risk quickly. Defer low-value findings with evidence. Use dependency overrides when that is the cleanest fix. Accept that a pinned, tested jPOS snapshot can be safer than a rushed final release. And stop pretending that "critical in a library" automatically means "critical in production."

Good security engineering is not about reacting faster to noise. It is about being able to prove why something matters, why something does not, and what trade-off you are making when you change a running system.

PCI DSS and the AI Agent Era

· 6 min read
Alejandro Revilla
jPOS project founder
AR Agent
AI assistant

PCI DSS version 4.0.1 was published on June 11, 2024. That's less than two years ago. It already feels like a different industry.

In that time, AI agents have gone from research curiosity to production infrastructure. The Model Context Protocol gave language models a standard way to call tools. Autonomous agents that read messages, query databases, push commits, and manage tasks are now deployed in organizations running payment systems.

The standard has nothing to say about any of this.

PCI DSS 4.0.1 Compliance Guide for jPOS-Based Systems

· 3 min read
Alejandro Revilla
jPOS project founder

Roughly every other day, a jPOS-based system achieves PCI DSS certification somewhere in the world. That statistic is not accidental—it reflects decades of work building a framework designed from the ground up for the specific demands of payment security. It also means that Transactility has more direct experience with jPOS PCI assessments than any consulting firm, QSA, or system integrator on the market.

This guide is the result of that experience, made freely available to the entire jPOS community.

PCI and SHA-1

· 10 min read
Alejandro Revilla
jPOS project founder

In jPTS’ user interface, we need a way to click on a given card and show all transactions for that card over a specified period of time. Because all sensitive data is stored using AES-256 in the secureData column of the tl_capture table using the CryptoService—a column that can be further encrypted at the database level—we need a lightweight, database index-friendly, one-way consistent hash of the primary account number.

Because computing a hash these days is an extremely fast operation, knowing the last four digits of a 16-digit PAN and the hash is enough information to brute-force the full PAN in a matter of milliseconds. Therefore, jPTS uses a dynamic per-BIN secret that incorporates several layers of defense.
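
To make the brute-force risk concrete: if the attacker also knows the BIN (the first six digits, routinely visible in masked PANs), only the six middle digits of a 16-digit PAN remain unknown, at most one million candidates. Here is a sketch, using a made-up test PAN, of how quickly a naive unsalted hash falls:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Demonstrates why an unsalted SHA-1 of a PAN is trivially brute-forceable
// when the BIN and last four digits are known: only the six middle digits
// of a 16-digit PAN remain, at most 1,000,000 candidates.
public class PanBruteForce {

    public static String sha1Hex(String s) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            return HexFormat.of().formatHex(md.digest(s.getBytes(StandardCharsets.US_ASCII)));
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Recovers the full PAN from its hash, the BIN, and the last four digits.
    public static String recover(String targetHash, String bin, String last4) {
        for (int i = 0; i < 1_000_000; i++) {
            String candidate = bin + String.format("%06d", i) + last4;
            if (sha1Hex(candidate).equals(targetHash)) {
                return candidate;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String pan = "4111112345671111";   // made-up test PAN
        String hash = sha1Hex(pan);        // what a naive design would store
        // Attacker knows the BIN (411111) and last four (1111) from a receipt.
        System.out.println(recover(hash, "411111", "1111")); // prints 4111112345671111
    }
}
```

This is the attack the per-BIN secret defeats: without the secret, the search space collapses; with it, the attacker has nothing useful to iterate over.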

Here is a screenshot of the UI page that requires this feature. When you click on a specific card, the transactions for that card alone are displayed.

SHA-1, a 160-bit hash algorithm, can, like other hash algorithms, be brute-forced to produce a collision under certain circumstances. With today’s CPUs, finding a collision would take approximately 2600 years, a time that can be reduced to about 100 years using a cluster of GPUs. In 2017, the first collision for SHA-1 was found [1]. Further research has provided additional cryptanalysis strategies that could theoretically reduce this time to approximately 24 days (see also [2], [3], [4], [5]).

Understanding Collisions

Imagine you have a PDF file that has the SHA-1 hash:

845cfdca0ba5951580f994250cfda4fee34a1b58

Using the techniques mentioned above, it is theoretically feasible to find another file different from the original one that has the same hash, though this requires significant computing effort. However, this scenario applies to large files, not to small 16 to 19 digit Primary Account Numbers (PANs).

When dealing with a limited number of digits, especially where the last digit must satisfy a modulus 10 condition (LUHN) [6], finding a collision is not possible: the input space is simply too small for collisions to exist. Studies have shown that the minimum size for a SHA-1 collision is approximately 422 bytes [7]. This refers to full 8-bit bytes, not just a restricted pattern within the ASCII range of 0x30 to 0x39 (representing digits 0 through 9).
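
The LUHN check mentioned above is simple to state in code. A minimal implementation:

```java
// Minimal Luhn (mod 10) check, as used for the final digit of a PAN.
public class Luhn {

    public static boolean isValid(String pan) {
        int sum = 0;
        boolean doubleIt = false;
        // Walk the digits right to left, doubling every second digit.
        for (int i = pan.length() - 1; i >= 0; i--) {
            int d = pan.charAt(i) - '0';
            if (doubleIt) {
                d *= 2;
                if (d > 9) d -= 9;
            }
            sum += d;
            doubleIt = !doubleIt;
        }
        return sum % 10 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isValid("4111111111111111")); // classic test PAN: true
        System.out.println(isValid("4111111111111112")); // wrong check digit: false
    }
}
```

Because only one in ten digit strings of a given length passes this check, the already small PAN space shrinks by a further factor of ten.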

Going to an extreme, imagine you have just one digit, from 0 to 9:

#  SHA-1 hash
0  b6589fc6ab0dc82cf12099d1c2d40ab994e8410c
1  356a192b7913b04c54574d18c28d46e6395428ab
2  da4b9237bacccdf19c0760cab7aec4a8359010b0
3  77de68daecd823babbb58edb1c8e14d7106e83bb
4  1b6453892473a467d07372d45eb05abc2031647a
5  ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4
6  c1dfd96eea8cc2b62785275bca38ac261256e278
7  902ba3cda1883801594b6e1b452790cc53948fda
8  fe5dbbcea5ce7e2988b8c69bcfdfde8904aabc1f
9  0ade7c2cf97f75d009975f4d720d1fa6c19f4897

You can see there are no collisions. If we add two digits, there are no collisions either. With three digits, still none. Even with 16 digits, none. At 19 digits, none. And with 100 digits, still none. The shortest collision theoretically possible is around 422 bytes, but we're talking about full 8-bit bytes here, not just limited to the scope of 0x30 to 0x39 values, which also need to honor the LUHN algorithm.
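
The single-digit table above can be reproduced, and the no-collision claim checked programmatically, in a few lines of Java:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashSet;
import java.util.HexFormat;
import java.util.Set;

// Reproduces the single-digit SHA-1 table above and confirms that all
// ten hashes are distinct (no collisions among inputs "0".."9").
public class DigitHashes {

    public static String sha1Hex(String s) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            return HexFormat.of().formatHex(md.digest(s.getBytes(StandardCharsets.US_ASCII)));
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Set<String> seen = new HashSet<>();
        for (int d = 0; d <= 9; d++) {
            String hash = sha1Hex(Integer.toString(d));
            System.out.println(d + "  " + hash);
            seen.add(hash);
        }
        System.out.println("distinct hashes: " + seen.size()); // 10: no collisions
    }
}
```

Extending the loop to longer digit strings only confirms the argument; no collision can appear below the roughly 422-byte minimum.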

SHA-1 in the wild

Git, the most widely used source control system in the world, uses SHA-1. In 2017, Linus Torvalds commented on this:

I doubt the sky is falling for Git as a source control management tool. [8]

And indeed, the sky was not falling. Fast forward to 2024, and Git still uses SHA-1. While it makes sense for Git to consider moving to another algorithm, because finding a collision is theoretically feasible (though considerably more difficult than in a PDF due to Git’s internal indexing constraints), this is just not possible in the jPTS case, where we deal with short, digit-only PANs.

Understanding Hash Functions

In Java applications, all objects have an internal hashCode that is used in collections, maps, sets, caches, and more. The hashCode is typically quite simple, often auto-generated by IDEs, and can result in many collisions. Collisions are not a problem if they are properly handled. Despite the potential for collisions, this simple hash is very useful for placing objects into different buckets for indexing purposes. The popular and reliable Amazon DynamoDB, for example, uses MD5 (a hashing algorithm more prone to collisions than SHA-1) to derive partition keys. This does not mean that Amazon DynamoDB is insecure.

SHA-1 is designed to be collision-resistant, meaning it should be computationally infeasible to find two different inputs that produce the same hash output. Although SHA-1 has been shown to have vulnerabilities, leading to the discovery of practical collision attacks, these attacks require a large amount of computational power and typically involve much larger and more complex inputs than a simple 16-19 digit number.

The space of possible PANs, even considering 16-19 digits and a final LUHN check digit, is tiny compared to the space of possible SHA-1 hash outputs (2^160 possible outputs). As explained above, this means that two distinct PANs simply cannot produce the same SHA-1 hash.
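
To put "tiny" in numbers, count all LUHN-valid 16 to 19 digit PANs (the check digit removes a factor of 10 for each length) and compare against the SHA-1 output space:

```latex
% LUHN-valid PANs of length n: 10^{n-1} (the check digit is forced)
N_{\mathrm{PAN}} = \sum_{n=16}^{19} 10^{\,n-1} \approx 1.1 \times 10^{18}
\qquad \text{vs.} \qquad
2^{160} \approx 1.46 \times 10^{48}
```

The PAN space is roughly thirty orders of magnitude smaller than the output space, which is why accidental collisions are not the concern here; brute force is, and that is exactly what the per-BIN secret addresses.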

Worst Case Scenario

Imagine that all of this is wrong and SHA-1 is totally broken, allowing a malicious user to easily find a PAN collision. In such a scenario, an operator might see a screen of transactions with more than one card. While this situation is highly improbable, as described above, if all our theories were wrong, this would be the outcome. It would not be a major issue or a security problem. If such a situation were to occur, implementing compensating controls to detect duplicates would be quite straightforward, and the case could be presented at security conferences.

HMAC SHA-256

For the reasons explained above, we believe SHA-1 is a good trade-off between security, performance, and database/storage use. That said, there are situations where an audit can go south due to a cognitive mismatch with the QSA/CISO. In those cases, using HMAC SHA-256 can be a solution.
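
Where a QSA insists, an HMAC SHA-256 surrogate is easy to produce with the standard JCE API. A minimal sketch; the hard-coded key is for illustration only, since in a real deployment the secret would come from proper key management (for example, the CryptoService mentioned above), ideally derived per BIN:

```java
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of an HMAC-SHA256 PAN surrogate. The hard-coded key is for
// illustration only; real deployments must source the secret from real
// key management, not from source code.
public class PanSurrogate {

    public static String hmacSha256Hex(byte[] secret, String pan) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            return HexFormat.of().formatHex(mac.doFinal(pan.getBytes(StandardCharsets.US_ASCII)));
        } catch (java.security.GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] demoKey = "demo-per-bin-secret".getBytes(StandardCharsets.US_ASCII);
        // Same PAN + same secret always yields the same surrogate, so it
        // remains usable as a database index, unlike a salted hash.
        System.out.println(hmacSha256Hex(demoKey, "4111111111111111"));
    }
}
```

Note that the output is deterministic for a given key, which is the property the indexing use case needs and the property a random salt would destroy.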

SHA-3 (FIPS 202)

We could use SHA-3 for this use case and avoid the need for this explanation, but SHA-3 is overkill for something this simple. The previous sample hash would be considerably longer:

5b6791cfd813470437bbea989ecfbd006571d89eeebe809fdb8e742b67e62a6e1a256eb57df90353065aa31e999724e0

and it would put unnecessary pressure on the database: the indexes would be larger without any real benefit. Some QSAs have suggested using a truncated version of SHA-3 to meet the requirement of using SHA-3 while still maintaining a lightweight index.

However, we reject this approach because it would involve inventing our own cryptographic method. We do not have sufficient expertise in cryptography and mathematics (and obviously neither does the QSA) to determine whether a truncated SHA-3 would be less prone to collisions than SHA-1. What we do know is that creating cryptographic algorithms without a deep understanding is a recipe for problems.

Prove me wrong—how hard could it be?

Show me two PANs with the same BIN (since we seed by BIN), a proper LUHN and the same SHA-1 hash and I'll hire you for our next audit.

Update

I had a great conversation with my friend and QSA @dbergert about this post and he provided excellent pointers to the latest requirements as well as his expert opinion on this issue.

NIST will retire SHA-1 by 2030 [9], and since 2017 it has not been permitted for authentication, which is not our use case here.

He agrees that collisions are not a concern here. However, he raises a valid issue: simply hashing the PAN makes it vulnerable to brute-force attacks. That is not the case here because of the per-BIN secret, and the same concern would apply equally to SHA-3. In fact, due to hardware acceleration, SHA-3 can be even faster than SHA-1.

Therefore, requesting a switch from SHA-1 to SHA-3 is somewhat naïve, as this is not what the PCI standard requires. He pointed me to this PCI FAQ: What is the Council's guidance on the use of SHA-1?. Here are some highlights:

The continued use of SHA-1 is permitted only in the following scenarios:

  1. Within top-level certificates (the Root Certificate Authority), which is the trust anchor for the hierarchy of certificates used. [✓] not the case.
  2. Authentication of initial code on ROM that initiates upon device startup (note: all subsequent code must be authenticated using SHA-2). [✓] certainly not the case.
  3. In conjunction with the generation of HMAC values and surrogate PANs (with salt), for deriving keys using key derivation functions (i.e., KDFs) and random number generation.

We "almost" land in case number 3 as the secure seed is equivalent to HMAC in this case, BUT, Dave has a very valid point:

I'm guessing that the PCI council is moving to HMACs because those are well defined vs the myriad of ways a salted hash could be done, and security around the "salt".

That's a very reasonable concern, and I understand it. Our approach is just one of the many ways this can be implemented. Migrating to HMAC instead makes complete sense, regardless of the use of SHA-1 or SHA-3.

note

The recommendation has an issue. They are confusing 'secret' with 'salt'. A 'salt' is useful for authentication because it introduces a nonce, which generates different hashes for the same input each time the function is called. However, this is useless for our indexing use case, where we need the same PAN to consistently produce a predictable hash. You could use GMAC with a predictable BIN-derived nonce and a well-secured secret, but then you'd be back to square one because a "predictable nonce" is just as problematic as MD5 and SHA-1—terms that few QSAs can tolerate. It's ironic because no nonce (as in HMAC) is acceptable, but a predictable nonce is a no-go. It's nonsensical.

Footnotes

  1. Stevens, Marc, et al. "The first collision for SHA-1." IACR Cryptology ePrint Archive, vol. 2017, p. 190, 2017.

  2. Stevens, Marc, et al. "Chosen-prefix collision attacks on SHA-1 and applications." Journal of Cryptology, vol. 33, no. 4, pp. 2291-2326, 2020, Springer.

  3. Wang, Xiaoyun, Yiqun Lisa Yin, and Hongbo Yu. "Breaking SHA-1 in the real world." Annual International Cryptology Conference, pp. 89-110, 2005, Springer.

  4. Wang, Xiaoyun, and Hongbo Yu. "Finding Collisions in the Full SHA-1." Annual International Cryptology Conference, pp. 17-36, 2005, Springer.

  5. Wang, Xiaoyun, et al. "Collisions for Hash Functions MD5, SHA-0 and SHA-1." IACR Cryptology ePrint Archive, vol. 2004, p. 199, 2004.

  6. The LUHN algorithm is named after IBM scientist Hans Peter Luhn, who created it in 1954.

  7. See https://shattered.io/

  8. https://marc.info/?l=git&m=148787047422954

  9. https://www.nist.gov/news-events/news/2022/12/nist-retires-sha-1-cryptographic-algorithm