Quickstart
Connect your first cluster, create an alert rule, and receive a Slack notification — in about 5 minutes.
Prerequisites
- A running Kubernetes cluster (any distribution)
- Helm 3 installed and configured to access your cluster
- kubectl access with cluster-admin or equivalent
- A SpecLayer account — get started free
Step 1 — Get your API key
In the SpecLayer app, go to Settings → API Keys and create a key with the klusteralert:write scope.
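The install command in Step 3 reads the key from an environment variable. A minimal sketch — the key value below is a placeholder, not a real key format:

```shell
# Export the key so later commands can reference it as $SPECLAYER_API_KEY.
# The value below is a placeholder: substitute the key you just created.
export SPECLAYER_API_KEY="sl_xxxxxxxxxxxxxxxx"

# Sanity check: fail fast if the variable is empty.
test -n "$SPECLAYER_API_KEY" && echo "API key set"
```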
Step 2 — Add the Helm repo
```shell
helm repo add speclayer https://charts.speclayer.net
helm repo update
```
Step 3 — Install the agent
```shell
helm install klusteralert speclayer/klusteralert \
  --namespace klusteralert \
  --create-namespace \
  --set apiKey=$SPECLAYER_API_KEY \
  --set clusterName=production
```
Set clusterName to a name that identifies this cluster in the SpecLayer UI, e.g. prod-eu-west-1 or staging.

Step 4 — Verify the agent is connected
Check the agent pods are running:
```shell
kubectl get pods -n klusteralert

# Expected output:
# NAME                                  READY   STATUS    RESTARTS   AGE
# klusteralert-agent-6d9f8b7c4d-xkp2j   1/1     Running   0          45s
```
In the SpecLayer app, go to KlusterAlert → Clusters. Your cluster should appear as Connected within 30 seconds.
Step 5 — Create your first alert rule
Create an alert rule file and apply it to your cluster with kubectl, or create the rule in the UI:
```yaml
apiVersion: klusteralert.speclayer.net/v1
kind: AlertRule
metadata:
  name: crash-loop-detect
  namespace: klusteralert
spec:
  description: Alert when a pod enters CrashLoopBackOff
  severity: critical
  condition:
    type: pod.status
    equals: CrashLoopBackOff
  notify:
    - channel: slack
      target: '#incidents'
  runbook: https://runbooks.acme.com/crashloop
```

```shell
kubectl apply -f alert-rules.yaml
```
Step 6 — Connect Slack
In the SpecLayer app, go to KlusterAlert → Notification Channels → Add channel. Select Slack and follow the OAuth flow to authorise the SpecLayer app in your workspace. Once connected, select the channels you want to route alerts to.
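Once Slack is connected, each rule's notify block controls where its alerts land. A sketch of routing one rule to two channels, reusing only the channel and target fields from the Step 5 example — the channel names here are illustrative:

```yaml
notify:
  - channel: slack
    target: '#incidents'        # primary on-call channel
  - channel: slack
    target: '#platform-alerts'  # illustrative second channel
```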
Step 7 — Test the alert
Trigger a test notification from the Notification Channels settings page, or deploy a broken pod to verify end-to-end:
```shell
kubectl run crasher --image=busybox -- /bin/sh -c "exit 1"

# Within ~30 seconds KlusterAlert fires the alert.
# Check your Slack channel for the notification.

kubectl delete pod crasher
```