Documentation

Everything you need to deploy, operate and scale Axera in your environment.

Installation

Prerequisites

Before deploying Axera, ensure your environment meets the following requirements.

  • Docker Engine 24+ and Docker Compose v2
  • PostgreSQL 16 (included in the default Docker Compose stack)
  • Apache Kafka 3.7+ (included in the default Docker Compose stack)
  • One or more Kubernetes / OpenShift clusters with OVN-Kubernetes CNI
  • Network access from Axera services to target cluster API servers

Docker Compose Deployment

Axera ships as a set of Docker containers orchestrated via Docker Compose.

[Service port diagram: API 5000, Proxy 5001, Monitor 5002, Radar 5003, Frontend 3000, PostgreSQL 5432, Kafka 9092]
# Clone the repositories
git clone <axera-backend-repo>
git clone <axera-frontend-repo>

# Start all services
cd axera-backend
docker compose up -d

# Verify services are running
docker compose ps
  • The stack includes: API (5000), Proxy (5001), AllowDropMonitor (5002), Radar (5003), Frontend (3000)
  • PostgreSQL and Kafka are included as containers in the default stack
  • Kafka UI is available at port 8081 for debugging flow ingestion
  • Scheduled jobs dashboard is available at /hangfire for deployment monitoring
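After bringing the stack up, it helps to block until the API actually answers rather than assuming `docker compose up -d` means ready. A minimal readiness-gate sketch; the retry loop and timeout are our own convention, paired with the API health endpoint on port 5000:

```shell
# Retry an arbitrary probe command until it succeeds or the attempt budget runs out.
wait_for() {
  cmd=$1; tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if sh -c "$cmd" >/dev/null 2>&1; then
      return 0                      # probe succeeded
    fi
    i=$((i + 1)); sleep 1
  done
  return 1                          # gave up after $tries attempts
}

# Example: block until the API answers on its health endpoint.
# wait_for "curl -fsS http://localhost:5000/health" 60 || echo "API not healthy"
```

Useful in CI or provisioning scripts where the next step depends on the API being reachable.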

Configuration

Axera is configured through environment variables in docker-compose.yml and the web UI settings page.

# Key environment variables (docker-compose.yml)
API_PORT=5000
PROXY_PORT=5001
MONITOR_PORT=5002
RADAR_PORT=5003
FRONTEND_PORT=3000

# PostgreSQL
POSTGRES_USER=axera
POSTGRES_PASSWORD=<your-password>

# Kafka
KAFKA_BOOTSTRAP_SERVERS=kafka:9092
  • Cluster connections are managed via Settings > Connections in the web UI
  • Git provider settings (GitHub, BitBucket, GitLab) are configured per user via the UI
  • Ticketing provider settings (Jira, Linear, ServiceNow, Azure DevOps) are configured per user
  • Policy generation mode (Soft / Hard / Custom) is set per user in account settings
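Rather than hard-coding a literal password into docker-compose.yml, POSTGRES_PASSWORD can come from an `.env` file next to the compose file, which Docker Compose loads automatically. A sketch; the generation method is our own choice, while the variable names match the list above:

```shell
# Generate a random database password and write a .env file that
# `docker compose` reads automatically from the working directory.
set -eu
PW=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)
cat > .env <<EOF
POSTGRES_USER=axera
POSTGRES_PASSWORD=$PW
KAFKA_BOOTSTRAP_SERVERS=kafka:9092
EOF
chmod 600 .env    # keep credentials readable only by the owner
```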

Architecture

Overview

Axera runs as a set of containerized services. The diagram below shows the high-level component layout and how data flows between them.

[Architecture diagram: Axera platform application services (Frontend 3000, API Server 5000, Proxy Service 5001, AllowDrop Monitor 5002, Radar 5003), data stores (PostgreSQL 5432, Kafka 9092), and external systems (K8s / OpenShift, Git providers, ITSM providers, security tools)]
  • Frontend — Web UI for topology visualization, policy management, monitoring and analytics
  • API Server — Core backend handling policy lifecycle, user management and third-party integrations
  • Proxy Service — Secure credential injection for all external API calls (clusters, Git, ITSM, security tools)
  • Radar — Consumes network flow data from Kafka, enriches with cluster metadata and serves flow visualization
  • AllowDropMonitor — Streams real-time OVN-Kubernetes ACL allow/drop verdicts from connected clusters
  • PostgreSQL — Persistent storage for policies, accounts, settings and flow data
  • Kafka — Message broker for flow data ingestion (one topic per cluster)

Basic Deployment (Single Server)

The simplest deployment runs all components on a single machine using Docker Compose. It is ideal for evaluation and for small environments of up to roughly five clusters.

[Deployment diagram: single server running Frontend, API Server, Proxy, AllowDrop Monitor, Radar, PostgreSQL, and Kafka via Docker Compose (minimum 4 vCPU, 8 GB RAM, 50 GB disk), connecting out to K8s clusters and integrations]
  • All services, PostgreSQL and Kafka run on one host via docker compose up -d
  • Recommended minimum: 4 vCPU, 8 GB RAM, 50 GB disk
  • Suitable for evaluation, PoC and small production environments
  • Backup: standard pg_dump to local or remote storage
  • Upgrade: docker compose pull && docker compose up -d

Production Deployment (External DB)

For production, move PostgreSQL and optionally Kafka to managed or dedicated instances outside the application server.

[Deployment diagram: app server (Frontend, API Server, Proxy, Radar + AllowDrop Monitor) connecting to a dedicated or managed PostgreSQL 16 server (RDS, Azure) over TCP/5432 and a dedicated or managed Kafka broker over TCP/9092, with outbound connections to K8s clusters and integrations]
  • PostgreSQL runs on a dedicated server, VM or managed service (e.g., AWS RDS, Azure Database for PostgreSQL)
  • Kafka can run on a separate host or use a managed service (e.g., MSK, Confluent Cloud)
  • App services remain stateless — easy to back up, restore and move
  • Connection pooling (e.g., PgBouncer) recommended for PostgreSQL under high load
  • Backup: managed snapshots or pg_dump to object storage on a schedule

High Availability & DR

For organizations requiring high availability or disaster recovery, Axera services can be deployed with redundancy across multiple nodes.

[HA diagram: two active app nodes (API, Proxy, Radar, AllowDrop) behind a load balancer / ingress, frontend served via CDN/HA; PostgreSQL primary with a streaming-replica standby and auto-failover (Patroni or managed); 3-broker Kafka cluster; RPO/RTO depends on replication lag (typically < 5 min)]
  • All Axera services are stateless and can run as active-active behind a load balancer
  • PostgreSQL: use primary + streaming replica with automatic failover (Patroni, managed HA, or cloud-native)
  • Kafka: deploy a 3-broker cluster for fault tolerance; consumer offsets are maintained across restarts
  • Frontend: serve from multiple instances or a CDN for geographic distribution
  • DR strategy: replicate PostgreSQL to a standby region; restore Axera services from container images
  • RPO/RTO depends on PostgreSQL replication lag and container restart time (typically < 5 minutes)

Supported Versions & Requirements

Axera is tested and supported with the following component versions.

Component        Supported Version            Notes
PostgreSQL       16.x (recommended), 15.x     Included in Docker Compose stack
Apache Kafka     3.7+                         Included in Docker Compose stack
Docker Engine    24+                          Docker Compose v2 required
Kubernetes       1.26+                        Any distribution with NetworkPolicy
OpenShift        4.13+                        OVN-Kubernetes required
Browsers         Latest 2 versions            Chrome, Firefox, Edge
Server OS        Linux x86_64                 macOS for development only
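A pre-flight script can verify the Docker requirements before installation. A sketch; the parsing assumes the usual MAJOR.MINOR.PATCH version-string format:

```shell
# Check that a Docker Engine version string satisfies the 24+ requirement.
meets_engine_req() {
  major=$(printf '%s' "$1" | cut -d. -f1)
  [ "${major:-0}" -ge 24 ]
}

# Example pre-flight against the live daemon (requires Docker to be running):
# meets_engine_req "$(docker version --format '{{.Server.Version}}')" \
#   || echo "Docker Engine 24+ required"
# docker compose version >/dev/null 2>&1 || echo "Docker Compose v2 required"
```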

Security & Access Control

Axera provides role-based access control and secure credential management out of the box.

  • Admin — Full system access including user management, cluster connections, and settings (Permissions: All)
  • NetworkOperator — Create, edit, deploy, and manage network policies (Permissions: Policy CRUD + Deploy)
  • Auditor — Read-only access to policies, monitoring, and audit trails (Permissions: Read-only)
  • AccessUser — Basic access to view topology and assigned resources (Permissions: View only)

  • Four built-in roles: Admin, NetworkOperator, Auditor, AccessUser
  • JWT-based authentication with secure refresh tokens
  • Admin role required for user management, cluster connections and system settings
  • All external credentials (cluster tokens, Git tokens, ITSM keys) are stored server-side and never exposed to the browser
  • The Proxy service handles all outbound API calls, injecting credentials at the server level
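When debugging login or session issues, the payload of a JWT access token can be inspected locally without verifying the signature. A sketch; Axera's exact claim names are not documented here, so treat the decoded fields as opaque:

```shell
# Decode the (unverified) payload segment of a JWT: base64url -> base64 -> JSON.
jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')        # base64url to base64
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done  # restore padding
  printf '%s' "$seg" | base64 -d
}

# Example: jwt_payload "$ACCESS_TOKEN"   # prints the JSON claims
```

Never treat a locally decoded token as authoritative; only the server-side signature check does that.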

Day-2 Operations

Scaling

Axera services can be scaled independently based on cluster count and flow volume.

  • All Axera services are stateless — add replicas behind a load balancer as needed
  • AllowDropMonitor scales with the number of connected clusters
  • For high-volume environments, increase Kafka partitions to parallelize flow ingestion
  • Use connection pooling (e.g., PgBouncer) for PostgreSQL under heavy load
  • Frontend can be served from a CDN or multiple instances for geographic distribution
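The scaling levers above can be combined into one helper. A hedged sketch: the replica counts and the per-cluster topic name (`flows-<cluster>`) are illustrative, and `kafka-topics.sh` is assumed to be on the PATH inside the Kafka container:

```shell
# Scale stateless services and widen a cluster's flow topic for more
# consumer parallelism. Service names match the default compose stack.
scale_stack() {
  replicas_api=$1; replicas_radar=$2; cluster=$3; partitions=$4
  docker compose up -d --scale "api=$replicas_api" --scale "radar=$replicas_radar"
  # Note: Kafka partition counts can only be increased, never reduced.
  docker compose exec kafka kafka-topics.sh --bootstrap-server localhost:9092 \
    --alter --topic "flows-$cluster" --partitions "$partitions"
}

# Example: scale_stack 3 2 prod 6
```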

Upgrades

Axera follows semantic versioning. Upgrades are performed via Docker Compose with minimal downtime.

# Pull latest images
docker compose pull

# Restart with new images
docker compose up -d

# Verify all services are healthy
docker compose ps

# Check API health
curl http://localhost:5000/health
  • Database migrations are applied automatically on service startup
  • Review the changelog before upgrading major versions for breaking changes
  • Kafka consumer offsets are maintained — no flow data is lost during upgrades
  • Use docker compose pull before upgrading to pre-fetch images and reduce downtime

Backup & Restore

Axera state is stored in PostgreSQL. Standard database backup procedures apply.

# Backup
pg_dump -h <db-host> -U axera axera_db > axera_backup.sql

# Restore
psql -h <db-host> -U axera axera_db < axera_backup.sql
  • Policy data, user accounts and configuration are in PostgreSQL — back up regularly
  • Flow/radar data has a separate retention lifecycle — back up based on your retention policy
  • For GitOps users: the Git repository serves as an additional policy backup
  • ITSM tickets provide an independent audit trail outside of Axera
  • Automate backups with pg_dump cron jobs, managed snapshots or your preferred backup tool
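The pg_dump command above can be wrapped with timestamping, compression, and rotation for unattended nightly runs. A sketch; the backup directory and the 14-day retention are our own choices:

```shell
# Timestamped, compressed pg_dump with simple age-based rotation.
backup_db() {
  dir=${1:-/var/backups/axera}
  mkdir -p "$dir"
  stamp=$(date +%Y%m%d-%H%M%S)
  pg_dump -h "${DB_HOST:-localhost}" -U axera axera_db \
    | gzip > "$dir/axera-$stamp.sql.gz"
  # Drop dumps older than two weeks; adjust to your retention policy.
  find "$dir" -name 'axera-*.sql.gz' -mtime +14 -delete
}

# Example cron entry (02:30 nightly), assuming the function above lives in a script:
# 30 2 * * * /usr/local/bin/axera-backup.sh
```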

Troubleshooting

Common issues and their resolutions.

# Check all service health
docker compose ps
docker compose logs api --tail=50
docker compose logs radar --tail=50

# Verify Kafka connectivity
docker compose logs kafka --tail=20

# Check network policies on cluster
kubectl get networkpolicy -A | head -20
  • API not starting: check PostgreSQL connectivity and database availability
  • No flow data: verify Kafka is running and topics are created for each cluster
  • AllowDropMonitor not collecting: ensure OVN-Kubernetes is the CNI and cluster is reachable
  • Git push failing: verify Git provider token permissions in Settings > Connections
  • ITSM ticket creation failing: check provider credentials and network access
  • Frontend not loading: verify the API URL environment variable points to the correct backend
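The individual log checks above can be rolled into a single triage pass. A sketch; the database and broker container names (`postgres`, `kafka`) and the `monitor` service name are assumptions, so adjust the list to match your compose file:

```shell
# Print the most recent error-looking lines from every service in the stack.
triage() {
  for svc in api proxy monitor radar frontend postgres kafka; do
    echo "== $svc =="
    docker compose logs "$svc" --tail=100 2>/dev/null \
      | grep -iE 'error|fatal|exception' | tail -5
  done
}

# Example: triage | less
```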

Policy Workflow

Observe signals (1) → Generate policies (2) → Review diffs (3) → Approve via ITSM/Git (4) → Deploy to clusters (5) → Monitor & audit (6)

Signal Sources

Axera ingests data from multiple sources to build an accurate picture of service communication before generating any policy.

[Signal flow diagram: Prisma Cloud Radar (traffic intelligence via API or ZIP upload), ACS / StackRox (vulnerability and compliance signals), Kafka flows (network flow data per cluster topic), and OVN-K ACL logs (real-time allow/drop verdicts) all feed the Axera Policy Engine]
  • Prisma Cloud Radar — Upload radar data as a ZIP file or connect directly via API to pull traffic intelligence from Palo Alto Prisma Cloud
  • ACS / StackRox — Import Red Hat Advanced Cluster Security signals for vulnerability and compliance context alongside network data
  • Kafka Flow Ingestion — Network flow data is ingested via Kafka (one topic per cluster), enriched with pod and deployment labels, and stored for visualization
  • OVN-Kubernetes ACL Logs — Real-time allow/drop verdicts are streamed from OVN-Kubernetes across all connected clusters
  • All signal ingestion is read-only and non-intrusive — Axera never modifies workloads or injects sidecars

Policy Generation

Axera generates least-privilege NetworkPolicy recommendations based on observed traffic. Policies are never auto-enforced.

[Pipeline diagram: signal data (flows + ACLs) enters the policy engine for least-privilege analysis in Soft or Hard mode, producing draft policies with YAML, diffs, risk tags, and "why" explanations]
  • Upload Mode — Upload a Prisma Cloud or ACS data export (ZIP) through the web UI; Axera parses the archive and generates policies from the contained flow data
  • Connected Mode — Pull data directly from Prisma Cloud or ACS APIs using stored credentials (configured in Settings > Connections)
  • Generation Modes — Choose between Soft (conservative, allow-heavy), Hard (strict least-privilege), or Custom (tailored rules with exceptions) via account settings
  • Output — Each generated policy includes: the Kubernetes NetworkPolicy YAML, a human-readable diff showing what changes, risk tags, and an explanation of why the policy was recommended
  • Policies are created in draft state — they must be explicitly reviewed, approved, and deployed through the management workflow
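For orientation, this is roughly the shape of output a strict ("Hard") generation pass produces: a least-privilege NetworkPolicy scoped to one observed flow. The names, labels, and port are illustrative, not actual Axera output:

```shell
# Illustrative draft policy: only the checkout app may reach payments on 8443.
cat > payments-allow-checkout.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-allow-checkout
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: checkout
          podSelector:
            matchLabels:
              app: checkout
      ports:
        - protocol: TCP
          port: 8443
EOF

# Validate against a live cluster without applying:
# kubectl apply -f payments-allow-checkout.yaml --dry-run=server
```

Note that combining namespaceSelector and podSelector in one `from` entry means both must match, which is what makes the rule least-privilege.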

Review & Approval

Before any policy reaches a cluster, it passes through configurable review and approval gates aligned with enterprise change management.

  • Policy Diffs — Every change shows a clear before/after diff with added, removed, and modified rules highlighted
  • ITSM Integration — Create change requests in Jira, ServiceNow, Linear, or Azure DevOps directly from the policy management UI; tickets are linked to the policy record for traceability
  • Git Integration — Push policy files to GitHub, BitBucket, or GitLab with automatic PR creation; the PR includes the full diff and metadata for code review
  • Approval Gates — Policies can require approval from designated roles (Security, Platform, etc.) before deployment proceeds
  • Audit Trail — Every action (create, edit, approve, reject, deploy, rollback) is logged with the user, timestamp, and details for compliance

Deployment & Rollout

Approved policies can be deployed to clusters immediately or scheduled for a maintenance window, always with rollback capability.

  • Direct Deployment — Push policies directly to one or more Kubernetes / OpenShift clusters; credentials are injected server-side and never exposed
  • Scheduled Deployment — Schedule policy deployments via cron expressions for change windows (e.g., every Tuesday at 02:00 UTC)
  • Versioned Rollback — Every deployment creates a versioned snapshot that can be restored instantly with one click
  • Multi-cluster Support — Deploy the same policy set across multiple clusters simultaneously or roll out progressively cluster by cluster
  • Deployment status is tracked in the UI with real-time feedback on success, failure, and partial application
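As a reference for scheduled deployments, the example window in the text maps onto a standard five-field cron expression (minute, hour, day of month, month, day of week):

```shell
# "Every Tuesday at 02:00 UTC" as a cron expression (day-of-week 2 = Tuesday).
tuesday_2am="0 2 * * 2"
nightly="30 1 * * *"      # every night at 01:30, another common change window
echo "$tuesday_2am"       # prints: 0 2 * * 2
```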

Policy Versioning

Axera maintains a full version history of every policy for audit, comparison, and rollback purposes.

  • Current State — The active version of each policy represents what is (or should be) enforced on the cluster
  • Version History — Each time a policy is updated, the previous version is archived with a timestamp and the user who made the change
  • Diff Comparison — Compare any two versions side-by-side to see exactly what changed between them
  • Restore — Roll back to any previous version with a single action; the rollback itself is versioned and logged
  • Export — Export policy history as structured data for external audit systems or compliance reporting

Integrations Guide

Git Providers

Axera supports GitHub, BitBucket, and GitLab for policy-as-code workflows. Configuration is per-user via Settings > Connections.

  • GitHub — PR creation, branch management, policy-as-code
  • BitBucket — Workspace integration, app password auth
  • GitLab — Project-level access, merge request support

# GitHub setup requires:
# 1. Personal Access Token with repo scope
# 2. Organization name
# 3. Repository name
# 4. Target branch (default: main)

# BitBucket setup requires:
# 1. App password with repository:write scope
# 2. Workspace slug
# 3. Repository slug

# GitLab setup requires:
# 1. Personal Access Token with api scope
# 2. Project ID or path
# 3. Target branch
  • When you push policies to Git, Axera creates a new branch, commits the policy YAML files, and opens a Pull Request automatically
  • The PR description includes the policy diff, risk tags, and a link back to the Axera policy record for context
  • Each user configures their own Git credentials — pushes are attributed to the individual user for clear ownership
  • Supported operations: push new policies, update existing policies, and sync policy state between Axera and Git
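Under the hood, the push is the standard branch-commit-PR sequence. A local sketch with illustrative branch and file names; the final step goes through the provider's API, shown here only as a comment:

```shell
# Reproduce the branch + commit half of the Git flow in a scratch repo.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "axera@example.com"
git config user.name  "Axera"

git checkout -q -b axera/policy-update-1234        # one branch per push
mkdir -p policies
printf 'kind: NetworkPolicy\n' > policies/payments-allow-checkout.yaml
git add policies
git commit -q -m "Axera: update payments-allow-checkout"

# The real flow then pushes the branch and opens a PR via the provider API:
# git push -u origin axera/policy-update-1234
```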

ITSM Providers

Axera integrates with four ITSM providers for change management alignment. Tickets are created from the policy management UI.

  • Jira — Issue tracking, custom fields, status transitions
  • ServiceNow — Change requests, approval workflows, CHG records
  • Linear — Labels, assignees, team-based task management
  • Azure DevOps — Work items, Azure Boards, area paths

# Jira setup:
# - Base URL (e.g., https://company.atlassian.net)
# - API token + email
# - Project key

# ServiceNow setup:
# - Instance URL
# - Username + password or OAuth
# - Assignment group

# Linear setup:
# - API key
# - Team ID

# Azure DevOps setup:
# - Organization URL
# - Personal Access Token
# - Project name
  • Jira — Create issues in any Jira project; supports custom fields, priority mapping, and automatic status transitions
  • ServiceNow — Create change requests (CHG) or incidents; integrates with ServiceNow approval workflows and change windows
  • Linear — Create issues with labels and assignees; ideal for platform engineering teams using Linear for task management
  • Azure DevOps — Create work items in Azure Boards; supports custom work item types and area paths
  • All tickets include a reference back to the Axera policy record and the policy diff for reviewers
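For reference, ticket creation against Jira Cloud is a single JSON POST; the sketch below builds a minimal payload. The base URL, project key, and summary are placeholders, and in practice the Proxy service performs the real call with stored credentials:

```shell
# Build the minimal Jira Cloud "create issue" payload (REST API v2).
# Caveat: the summary must not contain unescaped double quotes.
jira_payload() {
  project=$1; summary=$2
  printf '{"fields":{"project":{"key":"%s"},"summary":"%s","issuetype":{"name":"Task"}}}' \
    "$project" "$summary"
}

# The kind of call made on your behalf (placeholders throughout):
# curl -s -u "you@example.com:$JIRA_API_TOKEN" \
#   -H 'Content-Type: application/json' \
#   -X POST "https://company.atlassian.net/rest/api/2/issue" \
#   -d "$(jira_payload NET 'Deploy NetworkPolicy payments-allow-checkout')"
```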

Cluster Connections

Connect Kubernetes and OpenShift clusters to Axera for policy deployment, monitoring, and flow data collection.

[Connection diagram: the Axera Proxy Service authenticates to production, staging, and dev clusters with bearer tokens, covering NetworkPolicy CRUD, pod / namespace listing, flow data collection, and ACL log streaming]
# Each cluster connection requires:
# 1. Cluster name (display name in Axera UI)
# 2. API server URL (e.g., https://api.cluster.example.com:6443)
# 3. Bearer token (service account token with appropriate RBAC)
# 4. CA certificate (optional, for self-signed certificates)

# Minimum RBAC permissions:
# - get/list/watch networkpolicies (all namespaces)
# - create/update/delete networkpolicies (for deployment)
# - get/list pods, namespaces, deployments (for enrichment)
# - get/list/watch pods/log in openshift-ovn-kubernetes (for AllowDropMonitor)
  • Cluster connections are managed via Settings > Connections in the web UI by Admin users
  • The Proxy service injects stored credentials for all cluster API calls — no secrets are exposed to the browser
  • Axera creates a dedicated Kafka topic per cluster for flow data ingestion
  • AllowDropMonitor automatically discovers OVN-Kubernetes components and begins streaming ACL logs upon connection
  • Connection health is monitored and displayed in the Settings page with status indicators
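The RBAC list above translates into a ClusterRole along these lines. The role and service-account names are our own, and the pods/log rule is granted cluster-wide here for simplicity; a Role scoped to the openshift-ovn-kubernetes namespace would be tighter:

```shell
# Minimum-permission ClusterRole matching the list above (names illustrative).
cat > axera-clusterrole.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: axera
rules:
  - apiGroups: ["networking.k8s.io"]
    resources: ["networkpolicies"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
EOF

# kubectl apply -f axera-clusterrole.yaml
# kubectl -n axera-system create token axera --duration=8760h   # K8s 1.24+
```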

Prisma Cloud

Connect to Palo Alto Prisma Cloud to pull radar data for policy generation and traffic intelligence.

  • Prisma Cloud — Radar data, service-to-service maps, traffic intelligence
  • ACS / StackRox — Vulnerability data, compliance, network flows

  • Configuration requires: Prisma Cloud API URL, Access Key ID, and Secret Key
  • The Proxy service handles all Prisma Cloud API calls, injecting credentials server-side
  • Radar data includes service-to-service communication maps, port information, and protocol details
  • Data can be pulled on-demand or uploaded as a ZIP export from the Prisma Cloud console
  • Prisma Cloud integration is optional — Axera can generate policies from Kafka flow data alone

ACS / StackRox

Connect to Red Hat Advanced Cluster Security (formerly StackRox) for security signal ingestion.

  • Configuration requires: ACS Central URL and an API token with read permissions
  • ACS provides vulnerability data, compliance results, and network flow information
  • Security signals from ACS enrich policy recommendations with risk context
  • Combined with Prisma Cloud radar data, ACS signals provide a comprehensive view of cluster security posture
  • ACS integration is optional and works alongside other signal sources

User Interface

Overview (Topology)

The Overview page is the primary entry point, showing an interactive service topology graph powered by AntV G6.

  • Overview (Topology) — Interactive service topology graph with namespace grouping
  • Graph View — Detailed flow table with filtering and drill-down
  • Policy Management — Create, version, review, and deploy policies
  • Monitoring — Live ACL verdicts and enforcement status
  • Analytics — Dashboards, trends, anomaly detection, and reports
  • Settings — Connections, integrations, users, and preferences

  • Interactive Graph — Drag, zoom, and click on nodes to explore service-to-service communication across namespaces
  • Namespace Grouping — Services are visually grouped by namespace with color-coded boundaries
  • Edge Details — Click on edges (connections) to see protocol, port, and traffic volume information
  • Policy Overlay — Toggle a policy overlay to see which connections are covered by NetworkPolicy and which are exposed
  • Cluster Selector — Switch between connected clusters to view topology for each environment
  • The topology graph auto-refreshes as new flow data arrives from Kafka and AllowDropMonitor

Graph View

The Graph page provides detailed traffic flow visualization with filtering and drill-down capabilities.

  • Flow Table — Tabular view of all observed flows with source, destination, port, protocol, and status columns
  • Filtering — Filter by namespace, deployment, pod, port, or time range to focus on specific traffic patterns
  • Real-time Streaming — Live flow data is streamed and updated as new entries arrive
  • Historical View — Query archived flows beyond the 7-day active window for forensic analysis
  • Export — Export filtered flow data for external analysis or reporting

Policy Management

The Policy Management page is the operational hub for viewing, editing, and deploying NetworkPolicy resources.

  • Policy List — Sortable table showing all policies with name, namespace, cluster, status (draft/active/archived), and last modified date
  • Policy Editor — View and edit policy YAML with syntax highlighting; diffs are shown when editing existing policies
  • Bulk Actions — Select multiple policies for batch deployment, export, or ticket creation
  • Version History — Access the full version history of any policy with one click; compare versions side-by-side
  • Deployment Controls — Deploy to cluster immediately, schedule via cron, or push to Git from the policy detail view
  • ITSM Actions — Create tickets in Jira, ServiceNow, Linear, or Azure DevOps directly from the policy context

Policy Monitoring

The Monitoring page displays real-time AllowDropMonitor data and policy enforcement status.

  • Live ACL Stream — Real-time feed of OVN-Kubernetes allow/drop verdicts across all connected clusters
  • Verdict Filtering — Filter by cluster, namespace, verdict type (allow/drop), or time range
  • Pod Logs — Stream live pod logs via the WebSocket Proxy for troubleshooting policy-related issues
  • Alerts — Visual indicators when unexpected drops or drift events are detected
  • Evidence Export — Export monitoring data as evidence for compliance and audit requirements

Analytics

The Analytics page provides dashboards, trend analysis, and exportable reports for security posture management.

  • Dashboard — Overview cards showing policy coverage percentage, exposed paths count, violation trends, and rollout velocity
  • Charts — Interactive ECharts and ApexCharts visualizations for coverage over time, risk trends, and comparison views
  • Anomaly Detection — Automated detection of unusual traffic patterns or sudden changes in policy posture metrics
  • Comparison Mode — Compare policy posture between time periods, clusters, or namespaces to track improvement
  • Report Export — Generate PDF reports with executive summaries or export raw data as Excel spreadsheets
  • Custom Date Ranges — Analyze any time window from real-time to full historical data

Settings

The Settings page manages cluster connections, third-party integrations, user accounts, and system configuration.

  • Connections — Add, edit, and test cluster connections (Kubernetes / OpenShift API server URL + bearer token)
  • Git Providers — Configure GitHub, BitBucket, or GitLab credentials and default repository settings per user
  • ITSM Providers — Configure Jira, ServiceNow, Linear, or Azure DevOps credentials and project mappings per user
  • Security Sources — Configure Prisma Cloud API and ACS API connections for signal ingestion
  • User Management — Admin-only section to create, edit, and deactivate user accounts with role assignment (Admin, NetworkOperator, Auditor, AccessUser)
  • Account Settings — Personal preferences including default policy generation mode (Soft / Hard / Custom)