jPOS/MGL Kubernetes Deployments

· 6 min read
Alejandro Revilla
jPOS project founder
AR Agent
AI assistant

Deploying financial infrastructure should not depend on someone remembering the right kubectl context, pasting the right kubeconfig into the right terminal, or manually reconstructing which Helm values were used last time. The deployment path is part of the control surface. It needs the same auditability, separation, and repeatability as the ledger itself.

The jPOS Control Plane brings Kubernetes deployment into the operator console. It stores target cluster credentials encrypted at rest, registers Helm charts from OCI registries, turns JSON Schema-backed chart values into typed forms, binds everything into reusable release plans, and drives dry-run, preflight, apply, resource inspection, and log streaming from one audited UI.

What the demo shows

The video walks through the full deployment workflow from a fresh jPOS/MGL instance:

  • bootstrap the deployment encryption keyring using an RSA-4096 OpenPGP keypair
  • run an encrypt/decrypt round-trip test to confirm the crypto layer is live
  • export a passphrase-protected keyring backup and verify it without exposing key material
  • register a Kubernetes target cluster using in-cluster ServiceAccount credentials
  • constrain that target with a namespace allowlist
  • register a Helm chart from an OCI registry
  • create a values profile using a schema-driven form instead of raw YAML
  • create a release plan that binds chart, target, namespace, values profile, and Helm release name
  • run a required dry-run before the first real apply
  • execute RBAC and reachability preflight checks
  • apply the release using Helm's atomic upgrade/install path
  • inspect created resources and Helm history
  • stream pod logs through the same encrypted target credentials
  • return to the dashboard and see the system-status banner report deployment health

The demo deploys mgl-iso-sim, a jPOS-based ISO 8583 autoresponder, into an iso-sim namespace. The important part is not the sample workload; it is the deployment contract around it.

Encrypted target credentials

Kubernetes credentials are powerful. If an application stores them, it has to treat them as production secrets.

The jPOS Control Plane starts by bootstrapping an encryption keyring. The private half of the keyring is protected by an unlock passphrase, and the keyring itself is stored durably so it can survive redeployments. Cluster credentials are encrypted at rest under this keyring.

The demo also shows backup and verification. A .kbk backup file captures the keyring and wrapped data keys in a passphrase-protected envelope. Verification decrypts the backup and displays metadata—fingerprint, capture timestamp, matching-keyring status—without exposing key bytes.
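The verification step can be sketched roughly like this. The names, the envelope layout, and the fingerprint scheme below are illustrative assumptions, not the actual .kbk format; the point is only that verification compares digests and timestamps, never raw key bytes.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class BackupMetadata:
    fingerprint: str       # digest of the backed-up key, safe to display
    captured_at: str       # capture timestamp
    matches_current: bool  # does the backup match the live keyring?

def fingerprint(key_bytes: bytes) -> str:
    # The key bytes never leave this function; only a digest does.
    return hashlib.sha256(key_bytes).hexdigest()[:40]

def verify_backup(backup_key: bytes, current_key: bytes) -> BackupMetadata:
    fp = fingerprint(backup_key)
    return BackupMetadata(
        fingerprint=fp,
        captured_at=datetime.now(timezone.utc).isoformat(),
        matches_current=(fp == fingerprint(current_key)),
    )
```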

This matters operationally. Losing the keyring means losing access to encrypted deployment credentials. Backups are not an afterthought; they are part of the deployment lifecycle.

Target clusters with blast-radius limits

A target cluster defines where the control plane is allowed to deploy. In the demo, jPOS/MGL is running inside the same Kubernetes cluster, so it can use in-cluster credentials: the pod's ServiceAccount token is read at operation time and used to construct a kubeconfig on the fly.
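The in-cluster path follows standard Kubernetes conventions: every pod gets a ServiceAccount token and CA certificate mounted at a well-known path, and the API server is reachable at kubernetes.default.svc. A hedged sketch of building an ephemeral kubeconfig from those pieces (function and field names here are assumptions, not the control plane's actual code):

```python
import base64
from pathlib import Path

# Standard in-cluster ServiceAccount mount path.
SA_DIR = Path("/var/run/secrets/kubernetes.io/serviceaccount")

def in_cluster_kubeconfig(api_server: str, token: str,
                          ca_cert_pem: bytes) -> dict:
    """Assemble a kubeconfig dict on the fly; nothing is written to disk."""
    return {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{
            "name": "in-cluster",
            "cluster": {
                "server": api_server,
                "certificate-authority-data":
                    base64.b64encode(ca_cert_pem).decode(),
            },
        }],
        "users": [{"name": "sa", "user": {"token": token}}],
        "contexts": [{"name": "in-cluster",
                      "context": {"cluster": "in-cluster", "user": "sa"}}],
        "current-context": "in-cluster",
    }

def load_in_cluster() -> dict:
    # Read the pod's ServiceAccount credentials at operation time.
    token = (SA_DIR / "token").read_text()
    ca = (SA_DIR / "ca.crt").read_bytes()
    return in_cluster_kubeconfig("https://kubernetes.default.svc", token, ca)
```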

The target also carries a namespace allowlist. That allowlist is the first guardrail. A release plan cannot casually point at an arbitrary namespace; it must stay inside the cluster and namespace boundaries defined for that target.

This is deliberately narrower than "give the UI a kubeconfig and hope operators are careful." The target definition is an authorization boundary.
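The guardrail itself is small. A minimal sketch, with illustrative names:

```python
class NamespaceNotAllowed(Exception):
    pass

def check_namespace(allowlist: set[str], namespace: str) -> None:
    """Refuse any plan that points outside the target's allowlist."""
    if namespace not in allowlist:
        raise NamespaceNotAllowed(
            f"namespace {namespace!r} is not in the target allowlist")
```

The check runs before anything touches the cluster, so a mistyped namespace fails at plan time rather than at apply time.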

Helm charts as catalog entries

The chart catalog stores immutable references to Helm charts in OCI registries. Registering a chart pulls its metadata, records its version, and captures the chart-layer digest.

The demo registers mgl-iso-sim from an in-cluster OCI registry. Once registered, the chart becomes a selectable deployment artifact. Operators do not need to remember a registry URL, chart version, or digest each time they deploy. They choose from the catalog.

Charts that ship a JSON Schema get a typed values form. Instead of editing YAML by hand, operators see fields with appropriate input types, defaults, and validation. Raw YAML remains available as an escape hatch, but the default path is structured.
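A sketch of how a chart's values.schema.json could drive the form. The widget names and field shape are assumptions about the UI, not the actual implementation; the mapping logic is the standard JSON Schema convention of reading properties, required, enum, and default.

```python
# Map JSON Schema types to illustrative form widgets.
WIDGETS = {"string": "text", "integer": "number",
           "number": "number", "boolean": "checkbox"}

def schema_to_fields(schema: dict) -> list[dict]:
    """Turn a values schema's top-level properties into typed form fields."""
    required = set(schema.get("required", []))
    fields = []
    for name, prop in schema.get("properties", {}).items():
        fields.append({
            "name": name,
            # enum values become a select; otherwise map the scalar type
            "widget": "select" if "enum" in prop
                      else WIDGETS.get(prop.get("type"), "text"),
            "default": prop.get("default"),
            "required": name in required,
        })
    return fields
```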

Release plans

A release plan ties everything together:

  • chart
  • target cluster
  • values profile
  • namespace
  • Helm release name
  • plan status

Plans are reusable. Applying a plan creates a release record, but the plan remains as the stable definition of what should be deployed and where.

This is useful for audit and operations. "What did we deploy?" is not hidden in shell history. "Which values were used?" is not a pasted YAML fragment in a ticket. The plan is explicit, stored, and reviewable.
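The shape of a plan record might look like the following. Field names are assumptions for illustration; the values mirror the demo's mgl-iso-sim deployment.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReleasePlan:
    chart: str            # catalog entry, e.g. "mgl-iso-sim"
    target: str           # registered target cluster
    values_profile: str   # stored, schema-validated values
    namespace: str        # must be in the target's allowlist
    release_name: str     # Helm release name
    status: str = "draft"

# The plan is data: explicit, stored, reviewable.
plan = ReleasePlan(chart="mgl-iso-sim", target="in-cluster",
                   values_profile="defaults", namespace="iso-sim",
                   release_name="iso-sim")
```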

Dry-run first, then preflight

The jPOS Control Plane requires a successful dry-run before the first real apply on a release plan. Dry-run renders the chart and asks the Kubernetes API server to validate the manifests without creating resources.

After dry-run succeeds, the preflight panel becomes available. Preflight checks cluster reachability and namespace-scoped RBAC before Helm is allowed to perform the real operation. If the credentials cannot create or update the resources Helm needs, the operator learns that before the apply starts.

This sequence catches the common deployment failures early:

  • malformed or incompatible values
  • invalid rendered manifests
  • missing namespace permissions
  • unreachable cluster
  • insufficient RBAC verbs

The goal is not to make deployment magical. The goal is to make failure visible before it becomes a half-deployed production incident.
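The ordering can be pictured as a small gate attached to each plan. This is a sketch of the sequencing contract described above, not the control plane's actual state machine:

```python
class GateError(Exception):
    pass

class PlanGate:
    """Enforces dry-run -> preflight -> apply for a release plan."""

    def __init__(self):
        self.dry_run_ok = False
        self.preflight_ok = False

    def record_dry_run(self, success: bool) -> None:
        self.dry_run_ok = success

    def record_preflight(self, success: bool) -> None:
        if not self.dry_run_ok:
            raise GateError("preflight requires a successful dry-run")
        self.preflight_ok = success

    def ensure_apply_allowed(self) -> None:
        if not (self.dry_run_ok and self.preflight_ok):
            raise GateError("apply requires dry-run and preflight to pass")
```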

Atomic apply and live visibility

The apply path uses Helm upgrade/install with atomic behavior. If the release cannot come up cleanly, Helm rolls it back instead of leaving a partial state behind.
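The control plane drives Helm programmatically, but the equivalent CLI invocation uses real Helm flags: upgrade --install creates the release if it does not exist, and --atomic rolls back on failure. A sketch of assembling that command (the helper name is an assumption):

```python
def helm_apply_args(release: str, chart_ref: str,
                    namespace: str, values_file: str) -> list[str]:
    """Build the argv for an atomic Helm upgrade/install."""
    return [
        "helm", "upgrade", release, chart_ref,
        "--install",             # create the release if absent
        "--namespace", namespace,
        "--atomic",              # roll back automatically on failure
        "--values", values_file,
    ]
```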

Once the release succeeds, the control plane shows the resources owned by the release and the Helm history behind it. The pod logs panel reads logs through the same encrypted target credentials. Operators can inspect current or previous container instances, choose tail size, and filter client-side while troubleshooting.

That last part is important: deployment does not end when Helm exits. Operators need immediate visibility into what was created and whether it is alive.
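The tail-plus-filter behavior of the logs panel can be sketched in a few lines; tail size bounds what is fetched, and the substring filter runs client-side. Names are illustrative:

```python
from collections import deque
from typing import Iterable

def tail_and_filter(lines: Iterable[str], tail: int,
                    needle: str = "") -> list[str]:
    """Keep the last `tail` log lines, then filter client-side."""
    kept = deque(lines, maxlen=tail)   # bounded tail window
    return [line for line in kept if needle in line]
```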

The design choice

The jPOS Control Plane is not trying to replace Kubernetes, Helm, or GitOps. Those tools remain the substrate. What it adds is an operator-facing control plane around a specific class of deployments: regulated systems where credentials, approvals, repeatability, and audit trail matter.

A terminal can deploy a chart. A CI job can deploy a chart. But neither automatically gives a back-office operator a safe, constrained, auditable workflow with encrypted cluster credentials, schema-driven values, required dry-run, RBAC preflight, atomic apply, and live logs in one place.

That is the point of the deployment plugin: make deployment an application workflow, not a shell ritual.