
jPOS Tip and Tail

· 3 min read
Alejandro Revilla
jPOS project founder
REQUIRED ACTIONS

If you already have a local copy of jPOS (master or next), please note there are REQUIRED ACTIONS at the end of this blog post

When we started work on jPOS 3.x on the new next branch, the plan was to rename the existing master branch (the jPOS 2 series) to v2 and migrate the next branch to main upon the release of jPOS 3.0.0, aligning with the industry's new default branch naming convention (main).

Now that jPOS 3.0.0 has been released, we can see how it aligns perfectly with JEP-14, The Tip & Tail Model of Library Development, which fits our approach seamlessly.

TIP & TAIL

(from JEP-14 summary)

… The tip release of a library contains new features and bug fixes, while tail releases contain only critical bug fixes. As little as possible is backported from the tip to the tails. The JDK has used tip & tail since 2018 to deliver new features at a faster pace, as well as to provide reliable and predictable updates for users focused on stability.

So, we slightly adjusted our branch plan as follows:

  • The current master branch (jPOS 2) will be renamed to tail.
  • The current next branch (jPOS 3) will be renamed to main.
note

We briefly considered naming the main branch tip to better align with JEP-14 terminology. However, most developers are already familiar with the term main, and migrating projects from master to main for political correctness was challenging enough. Since main is now the default for new repositories, we’ve decided not to deviate further to avoid unnecessary confusion.

Bottom line: We have two important branches now. main is our Tip (jPOS 3), where all the exciting development goes. tail is currently the latest 2-series. The branching plan looks like this:

REQUIRED ACTIONS

On your existing next branch:

git branch -m next main
git fetch origin
git branch -u origin/main main
git remote set-head origin -a

On your existing master branch:

git branch -m master tail
git fetch origin
git branch -u origin/tail tail
tip

Once we test this new branch plan, we'll use it in jPOS-EE as well.

jPOS 3.0.0 has been released

· 7 min read
Alejandro Revilla
jPOS project founder

We are excited to announce the release of jPOS 3.0.0, marking a significant step forward in the evolution of the project. This release delivers substantial improvements, feature additions, and modernization efforts aimed at enhancing flexibility, performance, and security.

For a long time, jPOS has evolved version after version, maintaining almost 100% backward compatibility with previous releases—never rocking the boat, so to speak. However, to preserve this backward compatibility, we’ve had to hold back from fully embracing advancements in the Java ecosystem, especially the rapid development and release pace of OpenJDK, which is delivering incredible results and exciting new features.

In 2022, we announced jPOS Next, introducing a new branch naming convention: master and next. Unintentionally, we ended up implementing something closely aligned with the JEP 14: Tip & Tail release model that was recently published. Our current next branch functions as the Tip, while master serves as the Tail. In the future, we may adjust this naming to better align with the JEP 14 terminology.

From Tip & Tail Summary

Tip & tail is a release model for software libraries that gives application developers a better experience while helping library developers innovate faster. The tip release of a library contains new features and bug fixes, while tail releases contain only critical bug fixes. As little as possible is backported from the tip to the tails. The JDK has used tip & tail since 2018 to deliver new features at a faster pace, as well as to provide reliable and predictable updates for users focused on stability.

Key Features and Changes

Platform Modernization

  • Java Baseline Upgrade: The baseline is now Java 23, embracing the latest language features and performance enhancements.
  • Gradle Upgrades: Build system upgraded to Gradle 8.11.1, incorporating Version Catalogs for dependency management.
  • Dependency Updates: Major dependency bumps include BouncyCastle, JLine, Mockito, JUnit, and SnakeYAML, addressing security vulnerabilities and improving stability.

Project Loom - Virtual Threads

If we had to highlight one key feature of jPOS 3, it would be the use of Virtual Threads in critical components like the TransactionManager, QServer, ChannelAdaptor, and more. This innovation is also one of the reasons the release took longer than expected. While we initially aimed to launch alongside Java 21, that version had some issues with pinned platform threads that were later improved in Java 22 and 23. We are now eagerly anticipating Java 24 for even better performance.

With jPOS 3, the TransactionManager continuations—once a groundbreaking reactive programming feature (long before "reactive programming" became a trend)—are now obsolete. Thanks to Virtual Threads, we can safely handle 100k or more concurrent sessions in the TM without any issues. Virtual Threads are incredibly lightweight and efficient, marking a significant leap forward for jPOS.

As an example, consider a 15-second SLA with a sustained load of 5,000 transactions per second (TPS). That results in 75,000 in-flight transactions—a load that would be unreasonable to handle with traditional platform threads due to resource constraints. However, with Virtual Threads, this scenario becomes entirely manageable, showcasing their incredible efficiency and scalability for high-throughput environments.
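To get a sense of how cheap virtual threads are, here is a minimal, self-contained sketch using only standard JDK 21+ APIs (not jPOS code) that parks 75,000 virtual threads for 15 seconds each, a load that would be prohibitive with platform threads:

import java.time.Duration;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        int inFlight = 75_000; // 5,000 TPS x 15-second SLA
        CountDownLatch done = new CountDownLatch(inFlight);
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < inFlight; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(Duration.ofSeconds(15)); // simulate waiting for a remote response
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.countDown();
                });
            }
            done.await(); // all 75,000 simulated in-flight transactions completed
        }
    }
}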

Higher loads introduce new challenges. While managing a system with a few hundred threads was feasible using jPOS’s internal logging tools, like the SystemMonitor and the Q2 log, it becomes unmanageable with 100k+ threads. To address this, we’ve implemented new structured audit logging, integrated Java Flight Recorder, and introduced comprehensive Micrometer-based instrumentation to provide better visibility and control.

Micrometer instrumentation

In jPOS 2, we used HdrHistogram directly to handle certain TransactionManager-specific metrics. While it was a useful tool for profiling edge cases and offering limited visibility into remote production systems, we fell short in providing robust additional tooling. In jPOS 3, we still rely on HdrHistogram, but instead of using it directly, we now leverage Micrometer—an outstanding, well-designed library that integrates seamlessly with standard tools like Prometheus and OpenTelemetry.

jPOS 3 also includes an embedded Prometheus endpoint server, which can be used out of the box (see q2 --help, in particular the --metrics-port and --metrics-path switches). This provides lightweight yet powerful visibility into channels, servers, multiplexers, and, of course, the TransactionManager and its participants.

Java Flight Recorder

jPOS 3 incorporates Java Flight Recorder (JFR) instrumentation at key points within the TransactionManager, the Space and other sensitive components. This provides deeper insights into application use cases and generates valuable profiling information, enabling us to implement continuous performance improvements effectively.

Structured Audit Log

The Structured Audit Log supports TXT, XML, JSON, and Markdown output and is set to eventually phase out the current jPOS XML-flavored log that has served us well over the years. While the existing log is excellent for development, QA, and small tests, it struggles to handle massive data volumes efficiently. The new structured audit log, built on Java Records that extend the sealed interface AuditLogEvent, is designed to be both human-readable and compact for log aggregation, offering a much more scalable and user-friendly solution. In previous versions of jPOS, we used the ProtectedLogListener, which operated on live LogEvent objects by cloning and modifying certain fields. With the new approach, we’ve laid the groundwork for implementing annotation-based security directly on the log records. This ensures that sensitive data is never leaked under any circumstances.
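Conceptually, the new log events are plain Java records implementing a sealed interface. The following is only a schematic sketch (the real org.jpos.log.AuditLogEvent permits many more event types and its records carry additional fields, as described in the Structured Audit Log post):

import java.time.Duration;
import java.time.Instant;

// Sketch only: simplified versions of the jPOS 3 Start/Stop audit events.
public sealed interface AuditLogEvent permits Start, Stop { }

record Start(Instant ts, String version, String deployDir) implements AuditLogEvent { }
record Stop(Instant ts, Duration uptime) implements AuditLogEvent { }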

Full Support for Java Modules with module-info

jPOS 3 fully embraces the Java Platform Module System (JPMS) by including module-info descriptors in its core modules. This allows developers to take full advantage of Java’s modular architecture, ensuring strong encapsulation, improved security, and better dependency management. By supporting Java modules, jPOS integrates seamlessly into modern modularized applications, making it easier to define clear module boundaries, minimize classpath conflicts, and optimize runtime performance. This alignment with the latest Java standards positions jPOS as a forward-looking solution for today’s demanding payment environments.
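For application code this means a regular module-info.java can now express its dependency on jPOS explicitly. The sketch below is purely illustrative; the module names are placeholders, so check the module-info descriptors shipped with jPOS 3 for the actual ones:

// module-info.java (illustrative only; module names are hypothetical)
module com.example.payments {
    requires org.jpos.jpos;                     // placeholder for the jPOS core module name
    exports com.example.payments.participants;  // your own participants/QBeans
}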

As part of this transition, we have deprecated OSGi support due to its low traction over the years. Despite a handful of deployments on platforms like Apache Karaf and IBM CICS/Liberty, feedback was minimal, and adoption remained low. The shift to Java Modules provides a more robust and future-proof solution for modularity, aligning jPOS with modern Java standards.

Security Enhancements

  • Aligned with JEP 411, deprecating the SecurityManager for removal, reflecting industry best practices.
  • Upgraded cryptographic libraries and introduced new cryptogram padding methods for enhanced security.
  • Introduced KeyUsage serialization and improved handling of sensitive data, such as track data in ISOUtil.

Configuration and Deployment

  • New options for environment property replacement across configuration files.
  • Support for complex fields in IsoTagStringFieldPackager and auto-configuration of enum fields.
  • Added Q2 server command-line switches, including shutdown delay and hook overrides useful on K8S deployments.

Performance and Monitoring

  • Added GC stats dumping and enhanced thread dump capabilities in SystemMonitor.
  • Improved metrics tracking with precise histogram timestamps.
  • Optimized handling of multi-level nested ISOMsgs and monotonic time usage in TSpace.

ISO Standards and Protocol Support

  • Introduced new packager types, including IFB_LLLLLLCHAR, IFB_LLLHEX, and IFEPE_LLBINARY.
  • Added support for CMF DE 027 - POS Capability and other ISO-related enhancements.
  • Improvements in EuroSubFieldPackager and BER/TLV field handling.

New Utilities and Tools

  • Added helper methods for message class identification and new utilities for working with ISO dates and durations.
  • Enhanced the RotateLogListener with ZoneId support for better log management.
  • Added a NoCardValidator and support for SSH remote resize (WINCH).

Breaking Changes

  • Removed legacy modules, including compat_1_5_2 and QNode, aligning the codebase with modern usage patterns.
  • Deprecated TM Continuations, reflecting a shift to more streamlined transaction management (still supported via VirtualThread simulation).

Bug Fixes and Stability

  • Addressed key issues, such as proper BER/TLV field handling and ensuring monotonic time for better reliability.
  • Fixed parsing errors in ISODate and improved support for track data generation.

Why Upgrade to jPOS 3.0.0?

This release represents a forward-looking approach, embracing modern Java standards, improving security, and delivering powerful new tools for developers. Whether you're building cutting-edge payment solutions or maintaining existing systems, jPOS 3.0.0 offers the foundation needed to innovate and scale with confidence.

The jPOS Gradle Plugin, as of version 0.0.12, has been updated to support jPOS 3.


QBean Eager Start

· 2 min read
Alejandro Revilla
jPOS project founder

QBeans have a very simple lifecycle:

  • init
  • start
  • stop
  • destroy

When Q2 starts, it reads all the QBeans available in the deploy directory and calls init on all of them, then start, using alphabetical order. That's the reason we have an order convention: 00* for the loggers, 10* for the channels, 20* for the MUXes, 30* for the TransactionManagers, etc.
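For reference, a QBean typically extends QBeanSupport and hooks into that lifecycle through its service callbacks; a minimal sketch:

import org.jpos.q2.QBeanSupport;

public class MyService extends QBeanSupport {
    @Override
    protected void initService() throws Exception {
        getLog().info("init: allocate resources, nothing running yet");
    }
    @Override
    protected void startService() throws Exception {
        getLog().info("start: ready to do work");
    }
    @Override
    protected void stopService() throws Exception {
        getLog().info("stop: stop doing work");
    }
    @Override
    protected void destroyService() throws Exception {
        getLog().info("destroy: release resources");
    }
}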

This startup lifecycle has served us well for a very long time, but when we start getting creative with the Environment and the Environment Providers, things start to get awkward.

With the environment providers, we can dereference configuration properties from YAML files, Java properties, or operating system variables. These variables can have a prefix that indicates we need to call a provider loaded via a ServiceLoader. For example, an HSM environment provider allows you to use properties like this: hsm::xxxxxx, where xxxxxx is a cryptogram encrypted under the HSM's LMKs.

To decrypt xxxxxx, we need to call the HSM driver, typically configured using a QBean.

Imagine this deploy directory:

20_hsm_driver.xml
25_cryptoservice.xml

The cryptoservice wants to pick some configuration properties (such as its PGP private key passphrase) using the HSM, but it needs the HSM driver to be already started.

Up until now, Q2 would have called:

  • hsm_driver: init method
  • cryptoservice: init method

But the cryptoservice, at init time, needs to block until the hsm_driver is fully operational, which requires its start callback to be called.

The solution is simple: we now have an eager-start attribute:

<qbean name="xxx" class="org.jpos.xxx.QBeanXXX" eager-start="true">
...
...
</qbean>

When Q2 sees a QBean with the eager-start attribute set to true, it starts it right away.

PCI and SHA-1

· 9 min read
Alejandro Revilla
jPOS project founder

In jPTS’ user interface, we need a way to click on a given card and show all transactions for that card over a specified period of time. Because all sensitive data is stored using AES-256 in the secureData column of the tl_capture table using the CryptoService—a column that can be further encrypted at the database level—we need a lightweight, database index-friendly, one-way consistent hash of the primary account number.

Because computing a hash these days is an extremely fast operation, knowing the last four digits of a 16-digit PAN and the hash is enough information to brute-force the full PAN in a matter of milliseconds. Therefore, jPTS uses a dynamic per-BIN secret that incorporates several layers of defense.
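As an illustration only (this is not jPTS's actual scheme), mixing a per-BIN secret into the digest is enough to defeat the "last four digits plus hash" brute-force attack, while keeping the result deterministic and index-friendly. The binSecret lookup is hypothetical here:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PanIndexHash {
    // 'binSecret' would come from a protected key store (e.g. the CryptoService), keyed by BIN.
    public static String hash(String pan, byte[] binSecret) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        md.update(binSecret);                                  // per-BIN secret seeds the digest
        md.update(pan.getBytes(StandardCharsets.ISO_8859_1));  // then the PAN itself
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest())
            sb.append(String.format("%02x", b));
        return sb.toString();                                  // same PAN + secret => same value
    }
}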

Here is a screenshot of the UI page that requires this feature. When you click on a specific card, the transactions for that card alone are displayed.

The 160-bit SHA-1 algorithm, like other hash algorithms, can be brute-forced to produce a collision under certain circumstances. With today’s CPUs, finding a collision would take approximately 2600 years, a time that can be reduced to about 100 years using a cluster of GPUs. In 2017, the first collision for SHA-1 was found [1]. Further research has provided additional cryptanalysis strategies that could theoretically reduce this time to approximately 24 days (see also [2], [3], [4], [5]).

Understanding Collisions

Imagine you have a PDF file that has the SHA-1 hash:

845cfdca0ba5951580f994250cfda4fee34a1b58

Using the techniques mentioned above, it is theoretically feasible to find another file different from the original one that has the same hash, though this requires significant computing effort. However, this scenario applies to large files, not to small 16 to 19 digit Primary Account Numbers (PANs).

When dealing with a limited number of digits, especially where the last digit must satisfy a modulus-10 condition (LUHN) [6], finding a collision is simply not possible: there are no collisions in such a small input space. Studies have shown that the minimum size for a collision is approximately 422 bytes [7]. That refers to full 8-bit bytes, not just a restricted pattern within the ASCII range of 0x30 to 0x39 (representing digits 0 through 9).

Going to an extreme, imagine you have just one digit, from 0 to 9:

#   SHA-1 hash
0   b6589fc6ab0dc82cf12099d1c2d40ab994e8410c
1   356a192b7913b04c54574d18c28d46e6395428ab
2   da4b9237bacccdf19c0760cab7aec4a8359010b0
3   77de68daecd823babbb58edb1c8e14d7106e83bb
4   1b6453892473a467d07372d45eb05abc2031647a
5   ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4
6   c1dfd96eea8cc2b62785275bca38ac261256e278
7   902ba3cda1883801594b6e1b452790cc53948fda
8   fe5dbbcea5ce7e2988b8c69bcfdfde8904aabc1f
9   0ade7c2cf97f75d009975f4d720d1fa6c19f4897

You can see there are no collisions. If we add two digits, there are no collisions either. With three digits, still none. Even with 16 digits, none. At 19 digits, none. And with 100 digits, still none. The shortest collision theoretically possible is around 422 bytes, but we're talking about full 8-bit bytes here, not just limited to the scope of 0x30 to 0x39 values, which also need to honor the LUHN algorithm.
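The single-digit table above is easy to reproduce with a few lines of Java, using the JDK's MessageDigest and jPOS's ISOUtil for hex encoding:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import org.jpos.iso.ISOUtil;

public class DigitHashes {
    public static void main(String[] args) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        for (int i = 0; i <= 9; i++) {
            // hash the ASCII digit, exactly as in the table above
            byte[] digest = sha1.digest(Integer.toString(i).getBytes(StandardCharsets.ISO_8859_1));
            System.out.printf("%d %s%n", i, ISOUtil.hexString(digest).toLowerCase());
        }
    }
}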

SHA-1 in the wild

Git, the most widely used source control system in the world, uses SHA-1. In 2017, Linus Torvalds commented on this:

I doubt the sky is falling for Git as a source control management tool. [8]

And indeed, the sky was not falling. Fast forward to 2024, and Git still uses SHA-1. While it makes sense for Git to consider moving to another algorithm because finding a collision is theoretically feasible (though considerably more difficult than in a PDF due to Git's internal indexing constraints), this is just not possible in the jPTS case, where we deal with short, constrained PANs.

Understanding Hash Functions

In Java applications, all objects have an internal hashCode that is used in collections, maps, sets, caches, and more. The hashCode is typically quite simple, often auto-generated by IDEs, and can result in many collisions. Collisions are not a problem if they are properly handled. Despite the potential for collisions, this simple hash is very useful for placing objects into different buckets for indexing purposes. The popular and reliable Amazon DynamoDB, for example, uses MD5 (a hashing algorithm more prone to collisions than SHA-1) to derive partition keys. This does not mean that Amazon DynamoDB is insecure.

SHA-1 is designed to be collision-resistant, meaning it should be computationally infeasible to find two different inputs that produce the same hash output. Although SHA-1 has been shown to have vulnerabilities, leading to the discovery of practical collision attacks, these attacks require a large amount of computational power and typically involve much larger and more complex inputs than a simple 16-19 digit number.

The space of possible PANs, even considering 16-19 digits and a final LUHN check digit, is minuscule compared to the space of possible SHA-1 hash outputs (2^160 possible outputs). This means that, as discussed above, two distinct PANs simply cannot produce the same SHA-1 hash.

Worst Case Scenario

Imagine that all of this is wrong and SHA-1 is totally broken, allowing a malicious user to easily find a PAN collision. In such a scenario, an operator might see a screen of transactions with more than one card. While this situation is highly improbable, as described above, if all our theories were wrong, this would be the outcome. It would not be a major issue or a security problem. If such a situation were to occur, implementing compensating controls to detect duplicates would be quite straightforward, and the case could be presented at security conferences.

HMAC SHA-256

For the reasons explained above, we believe SHA-1 is a good trade-off between security, performance, and database/storage use. That said, there are situations where an audit can go south due to a cognitive mismatch with the QSA/CISO. In those cases, using HMAC SHA-256 can be a solution.
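For completeness, here is a minimal JDK-only sketch of the HMAC-SHA-256 variant; the key would normally live in an HSM or be handled by the CryptoService, and is just a parameter here:

import java.nio.charset.StandardCharsets;
import java.util.HexFormat;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class PanHmac {
    public static String hmac(String pan, byte[] key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] out = mac.doFinal(pan.getBytes(StandardCharsets.ISO_8859_1));
        return HexFormat.of().formatHex(out); // deterministic: same PAN + key => same value
    }
}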

SHA-3 (FIPS 202)

We could use SHA-3 for this use case and avoid the need for this explanation, but the problem is that SHA-3 is overkill for such a simple use case. The earlier sample hash would be considerably longer:

5b6791cfd813470437bbea989ecfbd006571d89eeebe809fdb8e742b67e62a6e1a256eb57df90353065aa31e999724e0

and it would put unnecessary pressure on the database: the indexes would be larger without any real benefit. Some QSAs have suggested using a truncated version of SHA-3 to meet the requirement of using SHA-3 while still maintaining a lightweight index.

However, we reject this approach because it would involve inventing our own cryptographic method. We do not have sufficient expertise in cryptography and mathematics (and obviously neither does the QSA) to determine whether a truncated SHA-3 would be less prone to collisions than SHA-1. What we do know is that creating cryptographic algorithms without a deep understanding is a recipe for problems.

Prove me wrong—how hard could it be?

Show me two PANs with the same BIN (since we seed by BIN), a proper LUHN and the same SHA-1 hash and I'll hire you for our next audit.

Update

I had a great conversation with my friend and QSA @dbergert about this post and he provided excellent pointers to the latest requirements as well as his expert opinion on this issue.

NIST will retire SHA-1 by 2030 [9], and since 2017 it has not been permitted for authentication, which is not our use case here.

He agrees that collisions are not a concern here. However, the valid issue he raises is that simply hashing the PAN makes it vulnerable to brute force attacks. This is not the case here, and in fact, the same concern applies to SHA-3. Due to hardware acceleration, SHA-3 can be even faster than SHA-1.

Therefore, requesting a switch from SHA-1 to SHA-3 is somewhat naïve, as this is not what the PCI standard requires. He pointed me to this PCI FAQ: What is the Council's guidance on the use of SHA-1?. Here are some highlights:

The continued use of SHA-1 is permitted only in the following scenarios:

  1. Within top-level certificates (the Root Certificate Authority), which is the trust anchor for the hierarchy of certificates used. [✓] not the case.
  2. Authentication of initial code on ROM that initiates upon device startup (note: all subsequent code must be authenticated using SHA-2). [✓] certainly not the case.
  3. In conjunction with the generation of HMAC values and surrogate PANs (with salt), for deriving keys using key derivation functions (i.e., KDFs) and random number generation.

We "almost" land in case number 3 as the secure seed is equivalent to HMAC in this case, BUT, Dave has a very valid point:

I'm guessing that the PCI council is moving to HMACs because those are well defined vs the myriad of ways a salted hash could be done, and security around the "salt".

That's a very reasonable concern, and I understand it. Our approach is just one of the many ways this can be implemented. Migrating to HMAC instead makes complete sense, regardless of the use of SHA-1 or SHA-3.

note

The recommendation has an issue. They are confusing 'secret' with 'salt'. A 'salt' is useful for authentication because it introduces a nonce, which generates different hashes for the same input each time the function is called. However, this is useless for our indexing use case, where we need the same PAN to consistently produce a predictable hash. You could use GMAC with a predictable BIN-derived nonce and a well-secured secret, but then you'd be back to square one because a "predictable nonce" is just as problematic as MD5 and SHA-1—terms that few QSAs can tolerate. It's ironic because no nonce (as in HMAC) is acceptable, but a predictable nonce is a no-go. It's nonsensical.

Footnotes

  1. Stevens, Marc, et al. "The first collision for SHA-1." IACR Cryptology ePrint Archive, vol. 2017, p. 190, 2017.

  2. Stevens, Marc, et al. "Chosen-prefix collision attacks on SHA-1 and applications." Journal of Cryptology, vol. 33, no. 4, pp. 2291-2326, 2020, Springer.

  3. Wang, Xiaoyun, Yiqun Lisa Yin, and Hongbo Yu. "Breaking SHA-1 in the real world." Annual International Cryptology Conference, pp. 89-110, 2005, Springer.

  4. Wang, Xiaoyun, and Hongbo Yu. "Finding Collisions in the Full SHA-1." Annual International Cryptology Conference, pp. 17-36, 2005, Springer.

  5. Wang, Xiaoyun, et al. "Collisions for Hash Functions MD5, SHA-0 and SHA-1." IACR Cryptology ePrint Archive, vol. 2004, p. 199, 2004.

  6. The LUHN algorithm is named after IBM scientist Hans Peter Luhn, who created it in 1954.

  7. See https://shattered.io/

  8. https://marc.info/?l=git&m=148787047422954

  9. https://www.nist.gov/news-events/news/2022/12/nist-retires-sha-1-cryptographic-algorithm

MUXPool Strategies

· 4 min read
Barzilai Spinak
The Facilitator

by Barzilai Spinak a.k.a. @barspi a.k.a. The Facilitator

Many jPOS users are familiar with the MUXPool class. In short, it's a MUX that sits in front of multiple muxes (usually QMUX) and will choose among them using some configured strategy. The client code, such as a SelectDestination participant in the transaction manager, will just talk to the MUXPool (i.e., call the request() method), without worrying about the underlying muxes, how many there are, whether they are available, etc.

Imagine you need to send messages to some issuer Bank ABC, and they have given you three connection endpoints (IPs and ports) for load balancing or redundancy. In a typical configuration you would have three ChannelAdaptors and three matching muxes: let's call them abc1, abc2, and abc3. Your business logic will decide that "this transaction should go to Bank ABC", choosing a mux called BankABC.

So, you could configure a MUXPool with a nice symbolic name such as BankABC like this:

<mux class="org.jpos.q2.iso.MUXPool" logger="Q2" name="BankABC">
<muxes>abc1 abc2 abc3</muxes>
<strategy>round-robin</strategy>
</mux>

In this example, it would choose among the "usable" muxes out of the three, using a round-robin strategy. So, if abc2 is not usable at the moment, then the choice would happen between abc1 and abc3 (the concept of "usable" applies mostly to implementations that honor the isConnected() method, such as QMUX, but this is another topic for another time).

New tricks for an old dog

The built-in strategies for MUXPool, which we won't discuss here, are:

  • round-robin
  • round-robin-with-override
  • split-by-divisor
  • and the default, internally called PRIMARY_SECONDARY, which basically goes through the list in order until it finds the first usable mux.

Those strategies have existed for ages (git tells me "9 years ago" for the latest addition of some of them). But for some time now, since around April 2023, there's a new trick: the inner interface MUXPool.StrategyHandler.

This is what it looks like, right from the jPOS source code:

public interface StrategyHandler {
    /**
     * If this method returns null, the {@link MUXPool} will fall back to the configured built-in
     * strategy.
     *
     * @param pool the {@link MUXPool} using this strategy handler
     * @param m the {@link ISOMsg} that we wish to send
     * @param maxWait deadline in milliseconds (epoch value as given by {@code System.currentTimeMillis()})
     * @return an appropriate {@link MUX} for this strategy, or {@code null} if none is found
     */
    MUX getMUX(MUXPool pool, ISOMsg m, long maxWait);
}

If the Javadoc is not clear enough, an instance of StrategyHandler will be called by the MUXPool, passing a reference to itself (pool), the ISOMsg we intend to send, and a timeout for the response.

This is how you could configure your MUXPool with a StrategyHandler:

<mux class="org.jpos.q2.iso.MUXPool" logger="Q2" name="BankABC">
  <muxes>abc1 abc2 abc3</muxes>

  <!-- The default, fall-back strategy -->
  <strategy>round-robin</strategy>

  <strategy-handler class="xxx.yyy.MyPoolStrategy">
    <!-- some config properties here -->
  </strategy-handler>
</mux>

So, you implement MyPoolStrategy, and your getMUX() method can choose among the available muxes using some other strategy. It's convenient to invoke the String[] getMuxNames() method of the given pool, and then use the NameRegistrar to get a reference to the muxes. You could then, for example, query some metrics from the underlying muxes (such as the ones provided by QMUX) to decide which mux is the best one to use.

If your StrategyHandler can't decide, then you just return null and the configured <strategy> will be used as a fall-back.
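As an example, here is a sketch of a handler that picks the usable mux with the fewest pending responses. It assumes the underlying muxes are QMUX instances registered in the NameRegistrar under the usual mux. prefix; adjust to your own setup:

import org.jpos.iso.ISOMsg;
import org.jpos.iso.MUX;
import org.jpos.q2.iso.MUXPool;
import org.jpos.q2.iso.QMUX;
import org.jpos.util.NameRegistrar;

public class MyPoolStrategy implements MUXPool.StrategyHandler {
    @Override
    public MUX getMUX(MUXPool pool, ISOMsg m, long maxWait) {
        MUX best = null;
        int bestPending = Integer.MAX_VALUE;
        for (String name : pool.getMuxNames()) {
            MUX mux = NameRegistrar.getIfExists("mux." + name); // assumes QMUX-style registration
            if (mux instanceof QMUX qmux && qmux.isConnected()) {
                int pending = qmux.getRxPending();              // rough proxy for load
                if (pending < bestPending) {
                    bestPending = pending;
                    best = qmux;
                }
            }
        }
        return best; // null => MUXPool falls back to the configured <strategy>
    }
}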

Some ideas for strategies include:

  • Use the getRxPending() method of QMUX to get an estimate of which mux is more loaded, or slower to respond.
  • Use the getIdleTimeInMillis() of QMUX to find a mux that hasn't been used recently.
  • Take a look at the MTI of the ISOMsg to send reversals to a different destination from authorizations.
  • A weighted round-robin, perhaps using some custom property of your muxes (such as a subclass of QMUX), or configuring the weights in the strategy handler itself (hint: handlers can be Configurable, so they can have their own configuration properties in the XML).

There may be alternate ways to implement similar behavior using other jPOS components, but the MUXPool.StrategyHandler may be a nice trick to keep in mind, and make some logic simpler.

Your imagination is the limit!

Pause Timeout

· 4 min read
Alejandro Revilla
jPOS project founder

In 2007, a long time ago, when hardware was not as fast as it is today, we introduced CONTINUATIONS that had the ability to PAUSE (or park) a given transaction while waiting for a response from a remote server. Think of it as some kind of reactive programming.

If you have, say, a 4 CPU machine and need to process one thousand simultaneous transactions, without continuations you would need to set your TransactionManager sessions to a disproportionately large number (such as 1000), which has no relation to your hardware, making the entire system slower due to the context switch overhead.

With continuations, a small number of sessions (e.g., twice the number of CPUs you have) can handle a large number of transactions, because those waiting for a response are offloaded from the transaction manager until the response arrives.

The typical candidate for continuations is the QueryHost participant, which is implemented like this:

mux.request(m, t, this, ctx);
return PREPARED | READONLY | PAUSE | NO_JOIN;

The PAUSE modifier tells the TM to park that particular transaction until the context is resumed as part of its Pausable interface implementation. For the record, Pausable looks like this:

public interface Pausable {
    void setPausedTransaction(PausedTransaction p);
    ...
    ...
    void resume();
}

and the standard TransactionManager’s Context (org.jpos.transaction.Context) implements it.

When we call the asynchronous mux.request implementation and pass it an ISOResponseListener (QueryHost implements ISOResponseListener) and the current Context as a handback object, the transaction gets resumed and queued back with high priority when a response is received or the MUX times out, allowing it to be picked up by the next available session in the TransactionManager.

Futures were not popular in those days. Otherwise, the void request(ISOMsg m, long timeout, ISOResponseListener r, Object handBack) signature of the MUX’s asynchronous call would have probably been implemented using them.

QueryHost, which uses QMUX, as well as many other participants such as HttpQuery in jPOS-EE, use this facility to handle a large number of simultaneous transactions without requiring a large number of sessions—which would require a platform thread—in their transaction manager and are guaranteed to resume the context so that the transaction can continue its route within the TransactionManager.
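Put together, a QueryHost-style pausable participant follows this general pattern (a simplified sketch, not the actual QueryHost source; the REQUEST/RESPONSE context keys are illustrative):

import java.io.Serializable;
import org.jpos.iso.ISOMsg;
import org.jpos.iso.ISOResponseListener;
import org.jpos.iso.MUX;
import org.jpos.transaction.Context;
import org.jpos.transaction.TransactionConstants;
import org.jpos.transaction.TransactionParticipant;

public class AsyncQuery implements TransactionParticipant, ISOResponseListener, TransactionConstants {
    private MUX mux;              // obtained during configuration, e.g. via QMUX/NameRegistrar
    private long timeout = 30000L;

    @Override
    public int prepare(long id, Serializable context) {
        Context ctx = (Context) context;
        ISOMsg m = ctx.get("REQUEST");
        mux.request(m, timeout, this, ctx);           // asynchronous call, ctx is the handback
        return PREPARED | READONLY | PAUSE | NO_JOIN; // park the transaction until resumed
    }
    @Override
    public void responseReceived(ISOMsg resp, Object handBack) {
        Context ctx = (Context) handBack;
        ctx.put("RESPONSE", resp);
        ctx.resume();                                 // re-queues the transaction with high priority
    }
    @Override
    public void expired(Object handBack) {
        ((Context) handBack).resume();                // resume anyway so the transaction can finish
    }
    @Override
    public void commit(long id, Serializable context) { }
    @Override
    public void abort(long id, Serializable context) { }
}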

This delicate mechanism works very well at production sites processing billions of transactions. However, we can't guarantee that user-implemented participants will always time out and resume, sooner or later, so that the transaction completes and a memory leak is avoided.

And here comes the reason for this post: In situations where participants may pause a transaction and not resume it after a timeout, we have a pause-timeout optional property that you can configure in the TransactionManager, e.g.:

<property name="pause-timeout" value="300000" />

This is a safety net for those situations. If for some reason your participant doesn’t resume, the TransactionManager will auto-resume it for you after a reasonable amount of time to prevent a memory leak. Contexts are lightweight and consume little memory, so those leaks manifest themselves after several days or even weeks depending on your system load, making 5 minutes a reasonable default.

But the pause-timeout feature comes at a price. For every Context that we pause, a TimerTask is created to cancel the transaction. TimerTasks, popular 20 years ago, are quite brittle as they rely on a single platform thread calling every TimerTask Runnable WITHOUT CATCHING unchecked exceptions, which can cause the entire Timer facility to fail and create significant problems for other components such as the TSpace’s garbage collector.

Takeaways

  • The pause-timeout feature can be used if you fear your pausable participants may not resume. It's usually used for debugging purposes after you experience warnings related to high in-transit transactions. A high in-transit count indicates that at least one transaction got stuck without ever finishing. You can see this in the log by noticing that the tail ID gets stuck and doesn't move up.

  • If you decide to use a pause-timeout, remember that this is a safety net that should never be necessary. Setting a short timeout that conflicts with the timeout you use in QueryHost (e.g., 15 or 30 seconds) is not advisable, because it increases the chances of a race condition between the normal resume caused by the QueryHost expiration and the resume caused by the TimerTask runnable trying to do the same. If you use a pause-timeout, make it a long one.

Final Comment

All this goes away in jPOS 3. Continuations make no sense in jPOS 3’s TransactionManager because sessions are handled by virtual threads, making it entirely feasible to have a large number of sessions in the 100k range. All the delicate asynchronous programming we required almost 20 years ago is now magically handled by Project Loom’s virtual threads.

Structured Audit Log

· 13 min read
Alejandro Revilla
jPOS project founder

jPOS Structured Audit Log (SAL)

DRAFT

Unlike most blog posts published here that are final, this one is being updated in parallel with the work we are doing on the structured audit log effort for jPOS. It will be finalized once this notice is removed.

The jPOS XML logs, written a quarter of a century ago, were intended to serve as an audit trail capable of being reprocessed later in the event of a major system failure. Over the years, they proved to be very useful, saving the day during migrations, testing, and even production issues.

Originally, these logs were structured audit logs, similar to what we aim to achieve now. However, as the system grew, the highly flexible Loggeable interface was sometimes improperly implemented, occasionally resulting in invalid XML. While still reasonably intelligible to the human eye, this made it difficult to parse the XML content automatically. Moreover, PCI requirements have become stricter over the years (there was no PCI 25 years ago, it was the Wild West), and data that might be needed to reprocess a transaction is now entirely masked.

The term log, much like byte, is widely overused in the field. With the advent of logging frameworks such as log4j, java.util.logging, logback, and SLF4J, and then log aggregators that ingest loads of unstructured data into tools like Splunk, Datadog, and ElasticSearch/Kibana, there is plenty of confusion. People often conflate a log that outputs debug information line by line (which is undoubtedly useful) with the different problem we aim to solve here.

When I initially wrote the jPOS logger subsystem, large projects such as JBoss were looking for logging alternatives. I even had an email exchange with JBoss' author. He evaluated it but didn’t like it because it was too XMLish, which I found amusing since the whole JBoss configuration back in the day was heavily XML-based. Log4j was an early project hosted, if I recall correctly, at IBM’s alphaWorks, and that was his choice. Had I had that exchange a few weeks earlier, I would have adopted Log4j right away, as it was a promising project that proved to be the right way to go.

However, my concern and the focus of this post is not about the logger itself and its extensibility through SPIs, which are great. This post is about what one ends up putting in the log. It’s about the content and how to structure it in such a way that it becomes useful not only to the naked eye but also to other applications that ingest it.

In jPOS, it’s very common to encounter log events like the following:

<log realm="channel/186.48.107.208:58949" at="2024-04-29T13:06:13.086" lifespan="228ms">
  <receive>
    <isomsg direction="incoming">
      <!-- org.jpos.iso.packager.XMLPackager -->
      <field id="0" value="0800"/>
      <field id="11" value="000001"/>
      <field id="41" value="00000001"/>
      <field id="70" value="301"/>
    </isomsg>
  </receive>
</log>
<log realm="channel/186.48.107.208:58949" at="2024-04-29T13:06:13.618" lifespan="4ms">
  <send>
    <isomsg>
      <!-- org.jpos.iso.packager.XMLPackager -->
      <field id="0" value="0810"/>
      <field id="11" value="000001"/>
      <field id="37" value="743110"/>
      <field id="38" value="319046"/>
      <field id="39" value="0000"/>
      <field id="41" value="00000001"/>
      <field id="70" value="301"/>
    </isomsg>
  </send>
</log>

Now consider the following output:

2024-04-29 13:06:13.086 INFO Channel:23 - <log realm="channel/186.48.107.208:58949" at="2024-04-29T13:06:13.086" lifespan="228ms">
2024-04-29 13:06:13.086 INFO Channel:23 - <receive>
2024-04-29 13:06:13.086 INFO Channel:23 - <isomsg direction="incoming">
2024-04-29 13:06:13.086 INFO Channel:23 - <!-- org.jpos.iso.packager.XMLPackager -->
2024-04-29 13:06:13.086 INFO Channel:23 - <field id="0" value="0800"/>
2024-04-29 13:06:13.086 INFO Channel:23 - <field id="11" value="000001"/>
2024-04-29 13:06:13.086 INFO Channel:23 - <field id="41" value="00000001"/>
2024-04-29 13:06:13.086 INFO Channel:23 - <field id="70" value="301"/>
2024-04-29 13:06:13.086 INFO Channel:23 - </isomsg>
2024-04-29 13:06:13.086 INFO Channel:23 - </receive>
2024-04-29 13:06:13.086 INFO Channel:23 - </log>
2024-04-29 13:06:13.086 INFO Channel:23 - <log realm="channel/186.48.107.208:58949" at="2024-04-29T13:06:13.618" lifespan="4ms">
2024-04-29 13:06:13.086 INFO Channel:23 - <send>
2024-04-29 13:06:13.087 INFO Channel:23 - <isomsg>
2024-04-29 13:06:13.087 INFO Channel:23 - <!-- org.jpos.iso.packager.XMLPackager -->
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="0" value="0810"/>
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="11" value="000001"/>
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="37" value="743110"/>
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="38" value="319046"/>
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="39" value="0000"/>
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="41" value="00000001"/>
2024-04-29 13:06:13.087 INFO Channel:23 - <field id="70" value="301"/>
2024-04-29 13:06:13.087 INFO Channel:23 - </isomsg>
2024-04-29 13:06:13.087 INFO Channel:23 - </send>
2024-04-29 13:06:13.087 INFO Channel:23 - </log>

This is actually a single log event and should go in a single log line. Frameworks support this, but most implementations I’ve seen in the wild, probably due to the fact that the event contains linefeeds, end up generating several dozen lines of logs, each with its own timestamp.

This output format presents several challenges:

  1. Readability: The output is difficult to read and requires additional string manipulation to parse effectively, which complicates debugging and analysis. Yes, you can always use cut to take away the first chunk, but that's an extra step.
  2. Excessive Detail: The granularity of timestamps down to milliseconds (e.g., 086 vs. 087) is unnecessary and consumes resources without adding value to the debug information.
  3. Instance-Based Configuration Limitations: In jPOS, a single Channel class may connect to numerous endpoints as both a client and a server in multi-tenant applications. It's important to manage logs separately per instance, yet typical line loggers tie configuration to the class rather than to instances. You can, of course, use parameterized logger names, but that's not the standard way people tend to log information in their applications, and there's no standard approach for doing this, so different teams implement it in various ways.
  4. Intermingling of Outputs: Line loggers mix outputs from multiple threads, necessitating the use of MDC (Mapped Diagnostic Context) to group together a logic log event. This approach increases code complexity and log verbosity.
  5. Operational Inefficiency: Analyzing logs often involves navigating VPNs and two-factor authentication, complicating access to third-party log aggregator interfaces. This is reasonable for an operations team in charge of an application, but it presents quite a problem for external development teams trying to assist in case of issues.

The jPOS Structured Audit Log intends to improve the existing jPOS logging system by addressing some of its flaws and enhancing the original goals. It supports compact, structured output that is suitable for ingestion by log aggregators, while still retaining the capability for easy, text-based log manipulation for debugging and emergency situations.

AuditLogEvent

To prevent conflicts with the existing system, we have introduced a new package named org.jpos.log, where this new feature is implemented. The main class within this package is AuditLogEvent. The intention is for AuditLogEvent to eventually replace the existing org.jpos.util.LogEvent.

AuditLogEvent is a very simple sealed interface:

public sealed interface AuditLogEvent permits ... { }
tip

We used a sealed interface in order to get compile-time errors if we fail to handle an AuditLogEvent, but there's extensibility provided by ServiceLoaders so that you can create your own.

Q2 Start event

When a jPOS log starts, we usually get the following XML preamble:

<log realm="Q2.system" at="2024-06-06T16:48:44.290350" lifespan="1ms">
  <info>
    Q2 started, deployDir=/opt/local/jpos/deploy, environment=default
  </info>
</log>

The equivalent structured audit log event is:

{
  "ts" : "2024-06-06T19:49:30.577151Z",
  "realm" : "Q2.system",
  "tag" : "info",
  "trace-id" : "dbd2e6b6-693a-4ff5-9c51-5b2b643a8c8d",
  "evts" : [ {
    "t" : "start",
    "ts" : "2024-06-06T19:49:30.576244Z",
    "q2" : "6969be9c-dce9-4919-ad7e-69b5faf1ef33",
    "version" : "3.0.0-SNAPSHOT",
    "deploy" : "/opt/local/jpos/deploy",
    "env" : "default"
  } ]
}

We aim to keep these log messages as short and compression-friendly as possible. Given that JSON is verbose, we want to avoid extra boilerplate, especially for transaction messages that are repeated sometimes hundreds of millions of times daily. Therefore, we prefer using readable abbreviations like ts for timestamp and tx for transmission, etc. Here, we are showing pretty-printed output; however, the actual output will be a compact one-liner. The encoding is assumed to be UTF-8.

Each JSON object needs to be easily unmarshalled by different applications. To distinguish between various types of messages, every SAL message includes a unique type property (abbreviated as t) to facilitate polymorphic deserialization.

In jPOS SAL, there are two types of audit log records:

  1. jPOS native audit log records.
  2. Application-specific audit log extensions (those that you can load via service loaders).

SAL Native audit log records are defined under the new org.jpos.log.evt package and are part of the AuditLogEvent sealed interface.

Application-specific audit log messages are loaded via the ServiceLoader (META-INF/services/org.jpos.log.AuditLogEvent).

The Start AuditLogEvent is defined like this:

public record Start(Instant ts, UUID q2, String version, String appVersion, String deploy, String env) implements AuditLogEvent { }

  • clazz is the logger’s class (i.e. DailyLogListener)
  • q2 is Q2’s instance Id
  • version is Q2’s version
  • appVersion is the application’s version

Q2 Stop event

Instead of the old </logger> closing element, we'll have a stop JSON event.

The Stop AuditLogEvent is defined as:

public record Stop(Instant ts, UUID id, Duration uptime) implements AuditLogEvent { }

The JSON output looks like this:

{
  "ts" : "2024-06-06T19:49:38.050499Z",
  "realm" : "Q2.system",
  "tag" : "info",
  "trace-id" : "446c7ed1-ba61-4f5c-9f13-3edf1699242f",
  "evts" : [ {
    "t" : "stop",
    "ts" : "2024-06-06T19:49:38.050480Z",
    "id" : "6969be9c-dce9-4919-ad7e-69b5faf1ef33",
    "uptime" : 7.827676000
  } ]
}

LogWriters, LogRenderers, AuditLogEvents

jPOS users are familiar with channels, packagers, and ISOMsgs, and there's an analogy between these components.

An AuditLogEvent is analogous to an ISOMsg. It is an object that encapsulates information.

The LogRenderer functions like a packager. It converts the AuditLogEvent into a wire format. Just as you can have multiple packagers to handle different ISO-8583 variants, you can have multiple LogRenderers to convert AuditLogEvents into different formats.

We currently handle four formats:

  • TXT: A new format suitable for injection into regular line loggers.
  • XML: Equivalent to the existing format.
  • JSON: As shown in the examples above.
  • MARKDOWN: Suitable for use during development/tests or for rendering in web-based UIs.

Finally, the LogWriter is like the channel: it's the one in charge of writing the already-rendered AuditLogEvent to the log's PrintStream.

note

Please note that a LogListener can delegate to a regular logger, for example by using the LogbackLogListener. Using LogbackLogListener with a TXT renderer provides very predictable output.

LogWriter

The LogWriter interface, contributed by Alwyn not so long ago, is a great addition. It plugs into a LogListener and is very easy to configure:

<log-listener class="org.jpos.util.SimpleLogListener" enabled="true">
  <writer class="org.jpos.util.JsonLogWriter" enabled="true" />
</log-listener>

If we do not specify a writer, the log listener behaves like the traditional jPOS. It outputs all Loggeable objects using their dump method. We have JsonLogWriter, MarkdownLogWriter, TxtLogWriter, and XmlLogWriter.

tip

The XmlLogWriter's output is slightly different from the standard no-writer Loggeable dumps, as it forces the XML output through Jackson's ObjectMapper, hence producing guaranteed well-formed XML.

LogRenderer

The LogRenderer interface looks like this:

public interface LogRenderer<T> {
    void render (T obj, PrintStream ps, String indent);
    Class<?> clazz();
    Type type();
    ...
    ...
}

The renderers are then loaded via the org.jpos.log.LogRenderer service loader file, which looks like this:

org.jpos.log.render.json.LogEventJsonLogRenderer
org.jpos.log.render.json.LoggeableJsonLogRenderer
org.jpos.log.render.json.AuditLogEventJsonLogRenderer

org.jpos.log.render.xml.LoggeableXmlLogRenderer

org.jpos.log.render.markdown.ProfilerMarkdownRenderer
org.jpos.log.render.markdown.ContextMarkdownRenderer
org.jpos.log.render.markdown.LoggeableMarkdownLogRenderer
org.jpos.log.render.markdown.LogEventMarkdownRenderer
org.jpos.log.render.markdown.StringMarkdownLogRenderer
org.jpos.log.render.markdown.ObjectMarkdownLogRenderer
org.jpos.log.render.markdown.ByteArrayMarkdownLogRenderer
org.jpos.log.render.markdown.ThrowableMarkdownLogRenderer
org.jpos.log.render.markdown.ElementMarkdownLogRenderer
org.jpos.log.render.markdown.TransactionManagerTraceMarkdownLogRenderer
org.jpos.log.render.markdown.TransactionManagerTraceArrayMarkdownLogRenderer
...
...

There's a handy LogRendererRegistry that can get you a renderer for any given object. It properly traverses the class hierarchy and implementation hierarchy to get the most suitable renderer for an object if one is not defined for a given class. Because that reflection-based traversal is expensive, all results are cached and readily available for the next call.

Renderer implementations are very easy to write, and we encourage users to create specific ones for their objects. We aim to provide a large number of renderers for jPOS objects. Here is an example of a ByteArray renderer that produces Markdown output:

public final class ByteArrayMarkdownLogRenderer implements LogRenderer<byte[]> {
    @Override
    public void render(byte[] b, PrintStream ps, String indent) {
        if (b.length > 16) {
            ps.printf ("```%n%s%n```%n", indent(indent, ISOUtil.hexdump(b)));
        } else {
            ps.printf ("`%s`%n", indent(indent, ISOUtil.hexString(b)));
        }
    }
    public Class<?> clazz() {
        return byte[].class;
    }
    public Type type() {
        return Type.MARKDOWN;
    }
}

This is a just-for-fun proof of concept that outputs a jPOS profiler as a Markdown table:

public final class ProfilerMarkdownRenderer implements LogRenderer<Profiler> {
    public ProfilerMarkdownRenderer() {
    }

    @Override
    public void render(Profiler prof, PrintStream ps, String indent) {
        var events = prof.getEvents();
        int width = maxLength(events.keySet());
        final String fmt = STR."| %-\{width}s | %10.10s | %10.10s |%n";
        ps.print (row(fmt, "Checkpoint", "Elapsed", "Total"));
        ps.print(
            row(fmt, "-".repeat(width), "---------:", "-------:")
        );
        StringBuilder graph = new StringBuilder();
        events.forEach((key, v) -> {
            ps.print(
                row(fmt, v.getEventName(), toMillis(v.getDurationInNanos()), toMillis(v.getTotalDurationInNanos()))
            );
            graph.append (" \"%s\" : %s%n".formatted(key, toMillis(v.getDurationInNanos())));
        });
        ps.println();
        ps.println ("```mermaid");
        ps.println ("pie title Profiler");
        ps.println (graph);
        ps.println ("```");
    }

    public Class<?> clazz() {
        return Profiler.class;
    }

    public Type type() {
        return Type.MARKDOWN;
    }

    private String row (String fmt, String c1, String c2, String c3) {
        return fmt.formatted(c1, c2, c3);
    }

    private String toString(Profiler.Entry e, String fmt) {
        return FMT."""
          \{fmt}s |\{toMillis(e.getDurationInNanos())} | \{toMillis(e.getTotalDurationInNanos())} |
          """.formatted(e.getEventName());
    }

    private String toMillis(long nanos) {
        return FMT."%d\{nanos / Profiler.TO_MILLIS}.%03d\{nanos % Profiler.TO_MILLIS % 1000}";
    }

    private int maxLength (Set<String> keys) {
        return keys.stream()
            .mapToInt(String::length)
            .max()
            .orElse(0);
    }
}
note

StringTemplates look nice!

With the previous renderer, the regular profiler output:

    <profiler>
prepare: o.j.t.p.Debug [8.8/8.8]
prepare: o.j.t.p.Debug-1 [0.7/9.6]
prepare: o.j.t.p.Delay [2.4/12.0]
prepare: o.j.t.p.Trace [0.6/12.7]
commit: o.j.t.p.Debug [1.9/14.7]
commit: o.j.t.p.Debug-1 [0.4/15.1]
commit: o.j.t.p.Delay [0.4/15.6]
commit: o.j.t.p.Trace [0.5/16.1]
end [7.6/23.7]
</profiler>

Looks like this:

| Checkpoint               | Elapsed | Total  |
| ------------------------ | ------: | -----: |
| prepare: o.j.t.p.Debug   |   8.083 |  8.083 |
| prepare: o.j.t.p.Debug-1 |   0.500 |  9.583 |
| prepare: o.j.t.p.Delay   |   2.000 | 12.583 |
| prepare: o.j.t.p.Trace   |   0.875 | 12.458 |
| commit: o.j.t.p.Debug    |   1.042 | 14.500 |
| commit: o.j.t.p.Debug-1  |   0.375 | 15.875 |
| commit: o.j.t.p.Delay    |   0.458 | 15.333 |
| commit: o.j.t.p.Trace    |   0.500 | 16.833 |
Why-oh-why

Why, you may ask? You just want good JSON output to feed your aggregator, and we have you covered. But once little pieces like this start to fit perfectly into the system, we are tempted to play with additional options. In a production environment, you want a compact JSON format from the AuditLogEvents. When you're in the development phase—which we are most of the time—having a nice Markdown output ready to be copied and pasted into an issue, email, or discussion is a good thing to have. And if it includes awesome Mermaid graphics, even better!

Recap on why we are doing this

People often use Grok, Fluentd, Splunk regexp-based filters, or just grep/AWK manipulation to make sense of log messages produced by applications. These applications sometimes decide to change the message format from:

Starting

to:

 ____  _             _   _             
/ ___|| |_ __ _ _ __| |_(_)_ __ __ _
\___ \| __/ _` | '__| __| | '_ \ / _` |
___) | || (_| | | | |_| | | | | (_| |
|____/ \__\__,_|_| \__|_|_| |_|\__, |
|___/

Because banners are cool, and with that innocent change they inadvertently break those regexps.

We want jPOS audit logs to be precise, compact, and suitable for storage (e.g., in jPOS' binlog) for audit purposes.

Stay tuned as this is coming soon to jPOS next.

Config Smell

· 5 min read
Alejandro Revilla
jPOS project founder

jPOS configuration is very flexible, but that flexibility can sometimes lead to confusion.

We have two main categories:

  • Build-time static token replacement (jPOS build target).
  • Runtime environments.

Build-time static token replacement

This is described in detail in the Configuration Tutorial, but the basic idea is that at build time, when you call gradle installApp or gradle dist, you have the ability to define a compile-time target.

The default target is called devel and reads an optional property file called devel.properties.

If your devel.properties file looks like this:

ABC=AlphaBravoCharlie

And then you have a QBean configuration like this:

<property name="ABC" value="@ABC@" />

The @ABC@ will be expanded to AlphaBravoCharlie in the final file that will land in the build directory.

If you go to build/install/xxx/deploy/yourxml.xml, you’ll see something like this:

<property name="ABC" value="AlphaBravoCharlie" />

The jPOS build system performs that replacement in XML files in the deploy directory, as well as in configuration files (.cfg, .properties) in the cfg directory.

A typical use case, very useful during development, is defining a database name.

You can have a devel.properties file with tokens like this:

dbhost=localhost
dbname=jposee
dbuser=jpos
dbpass=password
dbport=5432

And then have a db.properties file (in src/dist/cfg/db.properties) that looks like this:

connection.url=jdbc:postgresql://@dbhost@:@dbport@/@dbname@

When you call gradle installApp, you’ll see that the destination db.properties file in the build/install/xxx/cfg directory will look something like this:

connection.url=jdbc:postgresql://localhost:5432/jposee

When you run locally, in development mode, your devel.properties file, which happens to be the default one, is the one providing tokens for replacement.

But you can call gradle -Ptarget=ci installApp. In that case, tokens get read from a file called ci.properties that you can use in your continuous integration environment.

CONFIG SMELL 1

If you have some handy tokens in the devel.properties file to make configuration tweaks for a development versus continuous integration target, or you have a local, test, or stresstest environment, that’s fine. However, if you have a prod target, there might be a configuration smell. It can be acceptable, but beware that you are not leaking sensitive information such as database names, URLs, access tokens, etc.

prod.properties should probably be reviewed by your security team.

Runtime Environments

The runtime environments are explained here, but the basic idea is that you can define a variable either in a YAML file, a Java property, or an OS environment variable and use it as part of the configuration using property expansion. Imagine you have an OS environment variable like this:

export DB_PASSWORD=mysupersecurepassword

Then you can configure a QBean, or config file with something like:

<property name="dbpass" value="${db.password}" />

When you call cfg.get("dbpass") from your QBean or participant implementation, you will get the string "mysupersecurepassword".

There are variations you can use, such as defaults like ${db.password:nopassword}, in which case, if the DB_PASSWORD OS environment variable is not available or not exported, a call to cfg.get("dbpass") would return nopassword.
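On the consuming side, any Configurable component sees the already-resolved value; a minimal sketch:

import org.jpos.core.Configurable;
import org.jpos.core.Configuration;
import org.jpos.core.ConfigurationException;

public class MyParticipant implements Configurable {
    private String dbpass;

    @Override
    public void setConfiguration(Configuration cfg) throws ConfigurationException {
        // At this point ${db.password} has already been de-referenced by SimpleConfiguration
        dbpass = cfg.get("dbpass", "nopassword"); // second argument is a default value
    }
}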

From the jPOS Programmer’s Guide Section 3.3 (Configuration)

SimpleConfiguration recognizes and de-references properties with the format: ${xxx} and searches for a system property, or operating system environment variable under the xxx name. As a fallback mechanism of last resort, the property can be resolved from an environment file, in YAML or properties file syntax, found in the cfg directory (default filename default.yml or default.cfg, which can be overridden by the jpos.env system property).

You can add a default value within the expression itself, using the : (colon) separator. For example ${xxx:def_value}. If the property is not found, the default value will be used.

The format $sys{xxx} de-references just from system properties, $env{xxx} just from the operating system environment, and $cfg{xxx} just from the environment file.

In the rare case where a value with the format ${...} is required, the $verb{${...}} format (verbatim) can be used.

In addition, a property named xx.yy.zz can be overridden by the environment variable XX_YY_ZZ (note that dots are replaced by underscores, and the property name is converted to uppercase).

Things start to get confusing when we hit Config Smell 2:

CONFIG SMELL 2

We often encounter situations where a build-time target configuration looks like this:

dbpass=${db.password}

and then the dbpass property in a given QBean or configuration file looks like this:

<property name="dbpass" value="@dbpass@" />

That totally works. The resulting file in the build directory after a gradle installApp will look like this:

<property name="dbpass" value="${db.password}" />

as we want, and then, at runtime, the ${db.password} value will be de-referenced using the DB_PASSWORD environment variable.

That's fine, everything works, but why oh why!

When you see a build-time target configured like this, new users might get confused about when ${db.password} gets de-referenced: at build time, from the build-time environment variable, or at runtime. It's very confusing, and even an experienced jPOS developer can get caught off guard. Don't do it.

We mentioned CONFIG SMELL 1 and CONFIG SMELL 2, but there's also CONFIG SMELL 0:

CONFIG SMELL 0!

Configuration doesn’t belong in the same repository as the source code. That’s why we have a distnc (distribution, no-config) task in jPOS’ Gradle plugin that only ships the main jar and all dependencies in the lib directory, leaving the cfg and deploy directories to be fetched by other means. We tend to use GitOps for that, even before GitOps was a term; we did this in the old Subversion days. Having a deploy and cfg directory in your development repo is handy for development and testing, but we recommend keeping it separate from production.

jPOS 2.1.8 has been released

· One min read
Alejandro Revilla
jPOS project founder

jPOS 2.1.8 has been released. The new development version is 2.1.9-SNAPSHOT, alongside jPOS 3.0.0 (aka 'next').

See ChangeLog for details.

We will continue to maintain the 2.x.x series for a long while, so you can consider the 2.x.x series an LTS version, but all the fun and new features will happen in 3.x.x.

New Documentation

· One min read
Alejandro Revilla
jPOS project founder

I've recently embarked on an ambitious project to develop a new jPOS documentation, which is currently in its nascent stages. This tutorial is designed to evolve in tandem with ongoing jPOS development, and in time, it will replace existing documentation to become a dynamic, live tutorial and reference.

As it stands, the tutorial is in its early stages, but it already offers valuable insights for newcomers to jPOS, particularly with the setup primer and QBean information. Veterans of jPOS will find the sections on 'Environments' insightful, especially for applications in cloud deployments. This is also a prime opportunity to test and experience the jPOS Gradle Plugin in action.

Alongside the tutorial, we have a companion project on GitHub. It's structured with distinct branches for each part of the tutorial, enabling you to access and execute the examples easily.

Since this is a work in progress and will be evolving alongside jPOS, your feedback is incredibly valuable. We encourage you to share your thoughts, suggestions, and insights, which will be instrumental in shaping this tutorial into a comprehensive and up-to-date resource for all things jPOS.

Here is the link.