Client demo strategy

Open Liberty + NGINX Web Tier POC

Recommended demo app

BRAC Digital Onboarding and Loan Eligibility POC

A compact enterprise application that shows Open Liberty as a production-ready MicroProfile runtime behind an NGINX web tier. The app demonstrates canary routing, operational health, JVM metrics, and edge hardening in one business-relevant workflow.

Stable: normal traffic routes to Liberty A
Beta: requests with the header version: beta route to Liberty B
OS: OpenShift runtime
CI: Jenkins + Nexus
CD: OpenShift GitOps

Capability story

Why this app fits the client requirement

Business-relevant workflow

The demo models a digital onboarding or loan eligibility journey instead of a generic hello-world endpoint, keeping the story close to banking operations.

Visible canary behavior

Stable users receive the current scoring response from Instance A. Beta users receive the experimental risk model response from Instance B.

Target architecture

Web tier and Open Liberty runtime

Browser / curl: client demo traffic
NGINX reverse proxy: routing, HSTS, rate limits, error pages
Open Liberty A: stable application path
Open Liberty B: beta canary path

OpenShift target platform

How the demo maps to the client runtime

OpenShift Route

External demo traffic enters through an OpenShift Route and reaches the NGINX reverse proxy service.

NGINX web tier

NGINX remains the L7 control point for header canary routing, hardening headers, rate limiting, and custom error pages.
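A minimal sketch of the header-based canary at the NGINX layer is shown below. The upstream names, service hostnames, and port 9080 are assumptions for illustration, not the final config; NGINX exposes request headers as `$http_<name>`, so `version: beta` arrives as `$http_version`.

```nginx
# Hypothetical canary routing sketch; upstream names and ports are assumptions.
upstream liberty_stable { server liberty-a:9080; }
upstream liberty_beta   { server liberty-b:9080; }

# Map the "version" request header to an upstream group.
map $http_version $app_upstream {
    default liberty_stable;
    "beta"  liberty_beta;
}

server {
    listen 80;

    location /app {
        # proxy_pass with a variable resolves against the named upstream groups.
        proxy_pass http://$app_upstream;
        proxy_set_header Host $host;
    }
}
```

The `map` approach keeps the routing decision in one place, so adding another canary header value later is a one-line change.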

Open Liberty workloads

Liberty A and Liberty B run as separate OpenShift Deployments or deployment variants so stable and beta traffic can be observed independently.

Platform observability

MicroProfile health and metrics are exposed through NGINX, while OpenShift can use probes for runtime lifecycle management.
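Wiring the platform probes to the MicroProfile endpoints can look like the fragment below, placed on each Liberty container in the Deployment. The port and timing values are illustrative assumptions.

```yaml
# Probe sketch for a Liberty container; port and delays are assumptions.
livenessProbe:
  httpGet:
    path: /health/live
    port: 9080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/ready
    port: 9080
  initialDelaySeconds: 10
  periodSeconds: 5
```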

CI/CD and GitOps

Jenkins builds, Nexus stores, OpenShift GitOps deploys

GitHub: source code, Dockerfiles, NGINX config, manifests, and docs.
Jenkins CI: compile, test, package the WAR, build the container image, scan, and tag.
Nexus Repository: stores Maven artifacts and versioned container images for promotion.
OpenShift GitOps: syncs Kustomize or Helm manifests into OpenShift from desired state.

Jenkins should not deploy directly to OpenShift in the final flow. Instead, it publishes artifacts to Nexus and updates the image tag in the GitOps repository's manifests, leaving OpenShift GitOps to reconcile the cluster to the new desired state.
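The handoff can be sketched as two pipeline stages; stage names, registry host, credentials handling, and the gitops repo layout are assumptions for illustration.

```groovy
// Jenkinsfile sketch: publish to Nexus, then bump the GitOps image tag.
// Registry host, repo paths, and tag scheme are illustrative assumptions.
pipeline {
    agent any
    stages {
        stage('Publish to Nexus') {
            steps {
                sh 'mvn deploy'  // push the WAR to the Maven repository
                sh 'podman push nexus.example.com/brac-poc/app:${BUILD_NUMBER}'
            }
        }
        stage('Update GitOps repo') {
            steps {
                // Bump the image tag in the manifests; OpenShift GitOps
                // detects the commit and reconciles the cluster.
                sh '''
                  cd gitops
                  kustomize edit set image app=nexus.example.com/brac-poc/app:${BUILD_NUMBER}
                  git commit -am "promote app:${BUILD_NUMBER}"
                  git push
                '''
            }
        }
    }
}
```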

Client requirement map

How the POC proves each requested capability

| Requirement | Demo implementation | Proof point |
| --- | --- | --- |
| L7 load balancing | Two Open Liberty workloads behind one NGINX reverse proxy on OpenShift. | Requests are served through NGINX, not direct Liberty ports. |
| Header-based canary | NGINX routes version: beta to Instance B. | Response body shows instance: B and beta flags. |
| MicroProfile Health | Liberty exposes /health/live through NGINX. | Wiki and demo show liveness status. |
| MicroProfile Metrics | Liberty exposes /metrics through NGINX. | JVM thread count is highlighted from the MP Metrics output. |
| Hardening | NGINX config includes HSTS, custom error pages, and rate limiting. | A brute-force simulation triggers throttling on the app context. |
| Enterprise delivery | Jenkins builds, Nexus stores artifacts, OpenShift GitOps deploys. | Release promotion is visible through versioned artifacts and GitOps sync. |

Application scope

Small app, clear enterprise signals

/app: landing page showing the active backend instance.
/app/api/applications: mock onboarding application records.
/app/api/eligibility: loan eligibility scoring endpoint.
/app/api/instance: returns instance, version, hostname, and feature flags.
/app/login: protected context for the NGINX rate-limit demonstration.
/health/live: MicroProfile liveness endpoint exposed via NGINX.
/health/ready: MicroProfile readiness endpoint exposed via NGINX.
/metrics: MicroProfile metrics endpoint for the JVM thread count.
/openapi: OpenAPI document for API discovery.
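As a sketch of the /app/api/instance contract, the payload can be assembled as below. The class name and helper are hypothetical; the field names follow the endpoint description above.

```java
// InstanceInfo.java: hypothetical helper producing the /app/api/instance body.
public class InstanceInfo {

    // Build the JSON payload; fields mirror the endpoint contract above.
    public static String payload(String instance, String version,
                                 String hostname, String featureFlag) {
        return String.format(
            "{\"instance\":\"%s\",\"version\":\"%s\",\"hostname\":\"%s\",\"features\":[\"%s\"]}",
            instance, version, hostname, featureFlag);
    }

    public static void main(String[] args) {
        // Instance B advertises the beta risk model so canary hits are visible.
        System.out.println(payload("B", "beta", "liberty-b", "new-risk-model"));
    }
}
```

In the real app this would be served by a JAX-RS resource, with instance and version read from environment variables set per Deployment.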

Traffic steering

Canary behavior the client can see

Default request:
curl http://localhost/app/api/instance

Beta canary request:
curl -H "version: beta" http://localhost/app/api/instance

The beta response should visibly include instance: B, version: beta, and a feature flag such as new-risk-model.

MicroProfile observability

Health and JVM metrics through NGINX

Liveness

Expose /health/live through NGINX and show an aggregate green liveness status in the demo.
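A passing probe returns the standard MicroProfile Health JSON shape; the check name below is illustrative.

```json
{
  "status": "UP",
  "checks": [
    { "name": "onboarding-liveness", "status": "UP" }
  ]
}
```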

JVM thread count

Read MicroProfile metrics from /metrics and highlight the JVM thread count as the required runtime signal.
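Pulling that signal through the NGINX front door can look like the command below. The exact metric name depends on the MicroProfile Metrics version and exposition format, so the grep pattern is an assumption to verify against the runtime's actual output.

```shell
# Fetch Prometheus-format metrics via NGINX and pick out the JVM thread count
# (metric name varies by MP Metrics version; verify against your runtime).
curl -s http://localhost/metrics | grep -i thread_count
```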

NGINX hardening

Controls to demonstrate at the web tier

HSTS response header for HTTPS-only browser behavior.
Custom error pages for upstream failure and protected routes.
Rate limiting on /app/login and high-risk app contexts.
Brute-force simulation that returns throttling responses through NGINX.
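These controls can be sketched in one server block; zone sizes, rates, and error-page paths are illustrative assumptions.

```nginx
# Hardening sketch; rates, zone size, and paths are assumptions.
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

server {
    # HSTS: keep browsers on HTTPS (only meaningful on the TLS listener).
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Custom error pages for upstream failures and throttled clients.
    error_page 502 503 504 /errors/upstream.html;
    error_page 429 /errors/throttled.html;

    location /app/login {
        # Throttle brute-force attempts; excess requests get HTTP 429.
        limit_req zone=login burst=5 nodelay;
        limit_req_status 429;
        proxy_pass http://liberty_stable;
    }
}
```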

Demo flow

Recommended client walkthrough

  1. Open the app normally and show the request is served by Instance A.
  2. Send version: beta and show the request is served by Instance B.
  3. Open /health/live through NGINX and show liveness status.
  4. Open /metrics and highlight the JVM thread count metric.
  5. Trigger repeated requests against /app/login to show rate limiting.
  6. Stop one Liberty instance and show custom NGINX error behavior.
  7. Show Jenkins build output, Nexus artifact versions, and OpenShift GitOps sync state.
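Steps 1 through 6 can be driven from a terminal roughly as follows; the hostname and compose service name are assumptions for a local run.

```shell
curl http://localhost/app/api/instance                      # 1: served by Instance A
curl -H "version: beta" http://localhost/app/api/instance   # 2: served by Instance B
curl http://localhost/health/live                           # 3: liveness status
curl -s http://localhost/metrics | grep -i thread           # 4: JVM thread count
for i in $(seq 1 30); do                                    # 5: expect 429s once throttled
  curl -o /dev/null -s -w "%{http_code}\n" http://localhost/app/login
done
docker compose stop liberty-a                               # 6: trigger custom error page
curl http://localhost/app
```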

Delivery plan

Repository structure for the POC

brac-poc-openliberty-v5/
|-- app/                  Open Liberty MicroProfile application
|-- liberty/              server.xml and runtime config
|-- nginx/                reverse proxy, hardening, error pages
|-- deploy/
|   |-- openshift/        Deployment, Service, Route, ConfigMap, Secret templates
|   `-- gitops/           Kustomize or Helm overlays consumed by OpenShift GitOps
|-- jenkins/              Jenkinsfile and pipeline helper scripts
|-- docs/                 Cloudflare Pages wiki
|-- docker-compose.yml    two Liberty instances plus NGINX
`-- README.md             local runbook
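For the local runbook, the compose file can be sketched as below; service names, environment variables, and the bind-mounted config path are assumptions.

```yaml
# docker-compose.yml sketch: two Liberty instances plus NGINX.
# Image names, env vars, and mounts are illustrative assumptions.
services:
  liberty-a:
    build: ./app
    environment:
      INSTANCE_NAME: "A"
      APP_VERSION: "stable"
  liberty-b:
    build: ./app
    environment:
      INSTANCE_NAME: "B"
      APP_VERSION: "beta"
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - liberty-a
      - liberty-b
```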