
Extending jPOS structured audit logs

Alejandro Revilla (jPOS project founder) · AR Agent (AI assistant) · 6 min read

jPOS 3 introduced structured audit logging as a first-class feature: instead of writing only text lines, a log event can carry typed payloads such as start, stop, deploy, connect, disconnect, txn, and so on.

That structure is what makes tools such as the jPOS Log Viewer possible. The viewer can filter, facet, correlate, and render events because it is not guessing meaning from text. It is reading fields.

Until now, however, those typed audit events were effectively limited to the event classes shipped inside jPOS itself. That was fine for core runtime events, but it was not enough for real applications.

A jPOS-EE module, an application module, or a customer-specific extension may have operational events of its own worth logging in a structured way:

  • an HTTP access event from QRest,
  • an authentication event from a web application,
  • a business workflow transition,
  • a settlement file import,
  • a reconciliation result,
  • a domain-specific warning that deserves first-class fields.

Those should not have to be flattened into strings. They should be allowed to live next to the built-in jPOS audit events.

That is now possible.

The problem

Structured audit log events are serialized polymorphically. A typical payload contains a short type discriminator named t:

{
  "t": "warn",
  "warn": "disk space is low"
}

or:

{
  "t": "txn",
  "name": "authorization",
  "id": 123456
}

The t value is intentionally stable and compact. It lets a reader—human or machine—know what shape the rest of the object has.
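For instance, a consumer can branch on t alone before looking at any other field. A minimal pure-JDK sketch (assuming the JSON object has already been parsed into a Map; no particular JSON library implied):

```java
import java.util.Map;

// Dispatch on the "t" discriminator alone; the event is assumed to be
// already parsed into a Map by whatever JSON library the consumer uses.
public class Dispatch {
    public static String describe(Map<String, Object> event) {
        return switch ((String) event.get("t")) {
            case "warn" -> "warning: " + event.get("warn");
            case "txn"  -> "transaction " + event.get("name") + " id=" + event.get("id");
            default     -> "unknown event type";
        };
    }
}
```

Once t selects the shape, everything else about the object is known; that is the property the registry described below preserves for externally defined types.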

Previously, the list of known subtypes was declared directly on AuditLogEvent using Jackson annotations. That meant adding a new event type required changing jPOS itself.

That does not scale. jPOS-EE and application modules need to define their own events without sending every type back to the jPOS core repository.
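The closed mapping looked roughly like this (an illustrative sketch with local stand-in subtypes, not the exact jPOS source):

```java
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;

// Sketch of the old, closed approach: every subtype had to be enumerated on
// the base interface itself, so adding a new type meant patching jPOS.
// Warn and Txn are local stand-ins, not the real jPOS event classes.
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "t")
@JsonSubTypes({
    @JsonSubTypes.Type(value = Warn.class, name = "warn"),
    @JsonSubTypes.Type(value = Txn.class, name = "txn")
})
interface AuditLogEventSketch { }

record Warn(String warn) implements AuditLogEventSketch { }
record Txn(String name, long id) implements AuditLogEventSketch { }
```

An annotation list like this is a single, central choke point: correct, but closed to extension.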

The new SPI

AuditLogEvent is still the marker interface for typed structured log payloads:

package org.jpos.log;

public interface AuditLogEvent { }

The difference is that the mapping between a stable type id and its Java class now lives in a registry.

External modules contribute mappings by implementing:

package org.jpos.log;

import java.util.Collection;

public interface AuditLogEventProvider {
    Collection<AuditLogEventType> types();
}

Each mapping is represented by:

package org.jpos.log;

public record AuditLogEventType(
    String name,
    Class<? extends AuditLogEvent> clazz
) { }

The registry loads built-in jPOS event types first, then discovers external providers using Java's ServiceLoader:

AuditLogEventRegistry.register(objectMapper);

jPOS' JSON and XML log renderers already call this registry, so modules usually only need to provide the event class and the provider.

Built-in type ids remain unchanged:

warn, start, stop, deploy, undeploy, msg, shutdown,
deploy-activity, throwable, license, sysinfo,
connect, disconnect, listen, session-start, session-end, txn

External providers cannot shadow those names. If a provider tries to register a conflicting type id, startup fails fast instead of silently producing ambiguous logs.
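The merge step can be sketched in a few lines of plain Java, using local stand-ins for the jPOS types (the real registry also discovers providers via ServiceLoader and wires the result into the object mapper, which is omitted here):

```java
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the registry's merge step. EventType and Provider are local
// stand-ins for AuditLogEventType and AuditLogEventProvider.
public class RegistrySketch {
    public record EventType(String name, String className) { }

    public interface Provider {
        Collection<EventType> types();
    }

    public static Map<String, EventType> merge(List<EventType> builtIn, List<Provider> providers) {
        Map<String, EventType> byName = new LinkedHashMap<>();
        builtIn.forEach(t -> byName.put(t.name(), t));       // built-in jPOS types go in first
        for (Provider p : providers)
            for (EventType t : p.types())
                if (byName.putIfAbsent(t.name(), t) != null) // fail fast on any conflict
                    throw new IllegalStateException("duplicate audit log type id: " + t.name());
        return byName;
    }
}
```

Failing at registration time keeps the type-id namespace unambiguous: a conflicting provider stops startup instead of producing logs whose t value could mean two different shapes.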

A small example

Suppose an application wants to log a structured event every time it imports a settlement file.

First, define the event:

package com.acme.settlement;

import org.jpos.log.AuditLogEvent;

public record SettlementImport(
    String file,
    int records,
    int accepted,
    int rejected,
    long durationMs
) implements AuditLogEvent { }

Pick a stable type id. Keep it short, descriptive, and unlikely to collide with another module. A project prefix is a good habit:

acme-settlement-import

Then provide the mapping:

package com.acme.settlement;

import org.jpos.log.AuditLogEventProvider;
import org.jpos.log.AuditLogEventType;

import java.util.Collection;
import java.util.List;

public class SettlementAuditLogEventProvider implements AuditLogEventProvider {
    @Override
    public Collection<AuditLogEventType> types() {
        return List.of(
            new AuditLogEventType(
                "acme-settlement-import",
                SettlementImport.class
            )
        );
    }
}

Finally, register the provider using Java's standard service-provider mechanism. Add this file to the module JAR:

META-INF/services/org.jpos.log.AuditLogEventProvider

with one line:

com.acme.settlement.SettlementAuditLogEventProvider

Now the event can be added to a regular jPOS LogEvent payload:

import org.jpos.util.LogEvent;
import org.jpos.util.Logger;

LogEvent evt = getLog().createInfo();
evt.addMessage(new SettlementImport(
    "settlement-2026-05-06.csv",  // file
    1280,                         // records
    1274,                         // accepted
    6,                            // rejected
    842                           // durationMs
));
Logger.log(evt);

When written through the structured JSON log writer, the payload remains typed:

{
  "ts": "2026-05-06T15:00:00Z",
  "kind": "info",
  "tags": {
    "realm": "settlement.import"
  },
  "payload": [
    {
      "t": "acme-settlement-import",
      "file": "settlement-2026-05-06.csv",
      "records": 1280,
      "accepted": 1274,
      "rejected": 6,
      "durationMs": 842
    }
  ]
}

That is the important part: no fragile parsing, no regular expressions, no guessing that the third token in a line is a status. The event has fields.

Why this matters

The immediate use case is QRest.

Today QRest can be noisy at connection boundaries: accepted connection, closed connection, timeout, and so on. That is useful at times, but it is not the same as an access log.

What we really want from an HTTP server is a single event per request, similar in spirit to what Apache or NGINX access logs provide, but structured from the start:

{
  "t": "http-access",
  "method": "POST",
  "path": "/api/cards",
  "status": 201,
  "bytes": 348,
  "durationMs": 37,
  "remoteAddr": "203.0.113.10",
  "userAgent": "curl/8.7.1"
}

Once QRest emits that as an AuditLogEvent, the same structured log pipeline can ingest it. The Log Viewer can filter by status code, path, method, duration, or remote address. Operators can find slow requests, failed requests, noisy clients, and deployment regressions without scraping text.

This also creates a pattern for other jPOS-EE modules and application modules. They can publish domain-specific operational events while remaining compatible with the same structured log tools.

A few guidelines

When defining your own audit events:

  • Use records where possible. They are compact and serialize naturally.
  • Keep fields flat unless nesting adds real value.
  • Prefer stable names over clever names.
  • Keep type ids short, lowercase, and namespaced when appropriate.
  • Avoid secrets, PANs, tokens, passwords, authorization headers, and raw request bodies.
  • Log identifiers that help correlation, not sensitive payloads.
  • Treat the event shape as a public contract once logs are consumed by tools.

Structured logging is only useful if the structure is boring and predictable.

Where this goes next

The SPI is the foundation. It lets jPOS remain small while allowing the ecosystem around it to grow structured events independently.

The next natural step is to use it in jPOS-EE, starting with QRest access events. After that, other modules can follow: authentication, sysconfig changes, crypto-service operations, scheduler activity, settlement workflows, and application-specific events.

The payoff is visible in the jPOS Log Viewer demo: once logs are structured at the source, operational tools no longer need to reverse-engineer text. They can index, filter, render, and correlate the data directly.

That was always the point of jPOS logging. This SPI simply opens that model to the rest of the application stack.