KlusterAlert — Kubernetes Monitoring & Alerting

Know when your clusters need attention

KlusterAlert monitors your Kubernetes workloads, fires intelligent alerts to Slack, Teams, Google Chat, PagerDuty, email, and more — and gives your team the runbooks to resolve incidents fast.

Free plan available · No credit card required

Built for production Kubernetes

From a single self-hosted cluster to dozens of cloud-managed namespaces — KlusterAlert scales with you.

Multi-Cluster Monitoring

Connect unlimited Kubernetes clusters across any cloud provider. Get a unified view of pod health, node status, and resource utilisation in real time.

Intelligent Alerting

Define alert rules as code. KlusterAlert detects CrashLoopBackOff restarts, OOMKilled containers, pending pods, and resource saturation before your users notice.

Every Channel Your Team Uses

Route alerts to Slack, Microsoft Teams, Google Chat, PagerDuty, email, or any custom webhook. Configurable severity levels, deduplication, and cooldown periods keep noise low.
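As a sketch of how routing and noise controls might look in a rule (the dedupe_window, cooldown, and webhook keys here are illustrative assumptions modeled on the alert-rules.yaml example shown below, not a documented schema):

```yaml
# Illustrative rule: severity routing plus noise controls.
# NOTE: dedupe_window, cooldown, and webhook are assumed key names.
- name: NodeDiskPressure
  severity: warning
  condition: node.disk_used_pct > 90
  window: 5m
  dedupe_window: 30m   # collapse duplicate alerts for the same node
  cooldown: 15m        # minimum gap before the same alert re-notifies
  notify:
    - channel: "#k8s-alerts"
    - webhook: https://hooks.example.com/k8s
```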

Runbook Automation

Attach runbooks to alert rules. When an alert fires, responders see the exact steps to triage and resolve the incident — no context switching.

Historical Metrics

Browse 90 days of cluster metrics. Correlate past incidents with deployments, scaling events, and config changes in a single timeline.

Security Alerts

Surface privilege escalations, exposed secrets, and pods running as root. Security posture checks run on every cluster sync.
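In the same rules-as-code format, a posture check could look like this (a sketch; the pod.runs_as_root condition, channel name, and runbook URL are illustrative assumptions, not a documented schema):

```yaml
# Illustrative security rule: the condition name is an assumption.
- name: RootContainerDetected
  severity: critical
  condition: pod.runs_as_root == true
  window: 1m
  notify:
    - channel: "#security-alerts"
  runbook: https://runbooks.example.com/root-pods
```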

Works with every Kubernetes distribution

Deploy the lightweight KlusterAlert agent with a single Helm chart. It runs inside your cluster, streams events to our ingestion API, and starts alerting within minutes.

Amazon EKS
Google GKE
Azure AKS
DigitalOcean DOKS
Self-hosted k8s
k3s / k3d
Get started
Install via Helm
helm repo add speclayer \
  https://charts.speclayer.net

helm install klusteralert \
  speclayer/klusteralert \
  --namespace klusteralert \
  --create-namespace \
  --set apiKey=$SPECLAYER_API_KEY \
  --set clusterName=production
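The --set flags above can equally live in a values file passed with helm install -f values.yaml; the key names below simply mirror those flags (assuming the chart reads them at the top level):

```yaml
# values.yaml — equivalent of --set apiKey=... --set clusterName=production
# (top-level key placement is an assumption based on the flags above)
apiKey: "<your SpecLayer API key>"
clusterName: production
```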
alert-rules.yaml
- name: CrashLoopDetect
  severity: critical
  condition: pod.restarts > 5
  window: 10m
  notify:
    - channel: "#incidents"
    - pagerduty: platform-oncall
  runbook: https://runbooks.acme.com/crashloop

- name: HighMemoryPressure
  severity: warning
  condition: node.memory_used_pct > 85
  window: 5m
  notify:
    - channel: "#k8s-alerts"
Alert rules as code

Define rules in YAML, version them in Git

Alert rules live in your repository alongside your Kubernetes manifests. Changes are reviewed, audited, and rolled back just like any other code change. No dashboards to click through.

Stop finding out from your users

Connect your first cluster in under 5 minutes. Free plan monitors up to 3 clusters forever.

View pricing