CloudTadaInsights

Lesson 22: Patroni on Kubernetes

After this lesson, you will be able to:

  • Deploy a Patroni cluster on Kubernetes.
  • Configure StatefulSets and PersistentVolumes.
  • Use the Patroni Kubernetes operator.
  • Implement storage classes and volume management.
  • Monitor and scale Patroni in a K8s environment.

1. Kubernetes Architecture for Patroni

1.1. Components

TEXT
Kubernetes Cluster:
├─ StatefulSet: postgres-cluster
│  ├─ Pod: postgres-0 (Leader)
│  ├─ Pod: postgres-1 (Replica)
│  └─ Pod: postgres-2 (Replica)
├─ Service: postgres-master (ClusterIP)
├─ Service: postgres-replica (ClusterIP)
├─ Service: postgres-config (Headless)
├─ ConfigMap: postgres-config
├─ Secret: postgres-credentials
└─ PersistentVolumeClaims:
   ├─ pgdata-postgres-0
   ├─ pgdata-postgres-1
   └─ pgdata-postgres-2

DCS: Kubernetes API (replaces etcd!)

1.2. Advantages of K8s

  • No separate etcd needed: Uses Kubernetes API for DCS.
  • Built-in scheduling: K8s handles pod placement.
  • Storage management: PVCs auto-provisioned.
  • Service discovery: K8s Services for endpoints.
  • Rolling updates: Native K8s feature.
  • Resource limits: CPU/memory guaranteed.
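Because the Kubernetes API acts as the DCS, cluster state lives as annotations on Kubernetes objects rather than as etcd keys. A hedged sketch of inspecting it (object naming and annotation layout follow Patroni's Kubernetes backend; verify against your version), plus a local parse of an illustrative payload:

```shell
# Against a live cluster, the leader record sits in annotations on an
# object named after the scope (typically "<scope>-leader"):
#   kubectl get endpoints postgres-cluster-leader -n postgres-ha \
#     -o jsonpath='{.metadata.annotations.leader}'
# Locally, parsing a sample annotation payload (shape illustrative):
annotations='leader=postgres-0 ttl=30'
leader=${annotations%% *}      # keep the first key=value pair
leader=${leader#leader=}       # strip the key
echo "$leader"                 # postgres-0
```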

2. Prerequisites

2.1. Kubernetes cluster

BASH
# Using kind (Kubernetes in Docker) for local testing
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Create cluster
kind create cluster --name postgres-ha

# Or use existing K8s cluster (GKE, EKS, AKS)

2.2. kubectl setup

BASH
# Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Verify
kubectl version --client
kubectl cluster-info

2.3. Helm (optional)

BASH
# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Verify
helm version

3. Manual Deployment with StatefulSets

3.1. Create namespace

YAML
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: postgres-ha
BASH
kubectl apply -f namespace.yaml

3.2. ConfigMap

YAML
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: postgres-ha
data:
  patroni.yml: |
    scope: postgres-cluster
    namespace: /service/
    
    kubernetes:
      labels:
        application: patroni
        cluster-name: postgres-cluster
      scope_label: cluster-name
      role_label: role
      use_endpoints: true
      pod_ip: $(POD_IP)
      ports:
        - name: postgresql
          port: 5432
    
    bootstrap:
      dcs:
        ttl: 30
        loop_wait: 10
        retry_timeout: 10
        maximum_lag_on_failover: 1048576
        postgresql:
          use_pg_rewind: true
          parameters:
            max_connections: 100
            shared_buffers: 256MB
            effective_cache_size: 1GB
            maintenance_work_mem: 64MB
            checkpoint_completion_target: 0.9
            wal_buffers: 16MB
            default_statistics_target: 100
            random_page_cost: 1.1
            effective_io_concurrency: 200
            work_mem: 2621kB
            min_wal_size: 1GB
            max_wal_size: 4GB
            max_worker_processes: 4
            max_parallel_workers_per_gather: 2
            max_parallel_workers: 4
            max_parallel_maintenance_workers: 2
      
      initdb:
        - encoding: UTF8
        - data-checksums
      
      pg_hba:
        - host replication replicator 0.0.0.0/0 scram-sha-256
        - host all all 0.0.0.0/0 scram-sha-256
    
    postgresql:
      listen: 0.0.0.0:5432
      connect_address: $(POD_IP):5432
      data_dir: /var/lib/postgresql/data/pgdata
      bin_dir: /usr/lib/postgresql/18/bin
      authentication:
        replication:
          username: replicator
          password: rep_password
        superuser:
          username: postgres
          password: postgres_password
      parameters:
        unix_socket_directories: '/var/run/postgresql'
    
    restapi:
      listen: 0.0.0.0:8008
      connect_address: $(POD_IP):8008
BASH
kubectl apply -f configmap.yaml

3.3. Secret

YAML
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
  namespace: postgres-ha
type: Opaque
stringData:
  postgres-password: postgres_password
  replicator-password: rep_password
BASH
kubectl apply -f secret.yaml
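`stringData` lets you write plain text, but the API server stores it base64-encoded, and reading the Secret back returns the encoded form under `.data`. The round trip locally:

```shell
# Encode the superuser password as the API server would store it
encoded=$(printf '%s' 'postgres_password' | base64)
echo "$encoded"   # cG9zdGdyZXNfcGFzc3dvcmQ=

# Decode it again, as you would when reading the Secret back
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # postgres_password
```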

3.4. StatefulSet

YAML
# statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: postgres-ha
  labels:
    application: patroni
    cluster-name: postgres-cluster
spec:
  serviceName: postgres-config
  replicas: 3
  selector:
    matchLabels:
      application: patroni
      cluster-name: postgres-cluster
  template:
    metadata:
      labels:
        application: patroni
        cluster-name: postgres-cluster
    spec:
      serviceAccountName: postgres
      containers:
        - name: postgres
          # NOTE: the stock postgres image does not ship Patroni; use an
          # image with Patroni installed (e.g. a custom build or Spilo)
          image: postgres:18-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5432
              name: postgresql
              protocol: TCP
            - containerPort: 8008
              name: patroni
              protocol: TCP
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: PATRONI_KUBERNETES_POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            - name: PATRONI_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: PATRONI_KUBERNETES_LABELS
              value: "{application: patroni, cluster-name: postgres-cluster}"
            - name: PATRONI_SCOPE
              value: postgres-cluster
            - name: PATRONI_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: PATRONI_POSTGRESQL_DATA_DIR
              value: /var/lib/postgresql/data/pgdata
            - name: PATRONI_REPLICATION_USERNAME
              value: replicator
            - name: PATRONI_REPLICATION_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: replicator-password
            - name: PATRONI_SUPERUSER_USERNAME
              value: postgres
            - name: PATRONI_SUPERUSER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: postgres-password
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
            - name: config
              mountPath: /etc/patroni
          livenessProbe:
            httpGet:
              path: /liveness
              port: 8008
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /readiness
              port: 8008
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: 2000m
              memory: 2Gi
      volumes:
        - name: config
          configMap:
            name: postgres-config
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard  # Adjust for your K8s cluster
        resources:
          requests:
            storage: 10Gi
BASH
kubectl apply -f statefulset.yaml
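The best-practices section below recommends spreading pods across nodes. A sketch of a `podAntiAffinity` block to add under the StatefulSet's pod template `spec` (labels match the manifest above; `requiredDuringScheduling` will leave pods Pending if you have fewer nodes than replicas, so switch to `preferred` on small clusters):

```yaml
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  application: patroni
                  cluster-name: postgres-cluster
              topologyKey: kubernetes.io/hostname
```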

3.5. Services

YAML
# services.yaml
---
# Headless service for StatefulSet
apiVersion: v1
kind: Service
metadata:
  name: postgres-config
  namespace: postgres-ha
  labels:
    application: patroni
    cluster-name: postgres-cluster
spec:
  clusterIP: None
  ports:
    - port: 5432
      targetPort: 5432
      name: postgresql
    - port: 8008
      targetPort: 8008
      name: patroni
  selector:
    application: patroni
    cluster-name: postgres-cluster

---
# Service for master (read-write)
apiVersion: v1
kind: Service
metadata:
  name: postgres-master
  namespace: postgres-ha
  labels:
    application: patroni
    cluster-name: postgres-cluster
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
      name: postgresql
  selector:
    application: patroni
    cluster-name: postgres-cluster
    role: master

---
# Service for replicas (read-only)
apiVersion: v1
kind: Service
metadata:
  name: postgres-replica
  namespace: postgres-ha
  labels:
    application: patroni
    cluster-name: postgres-cluster
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
      name: postgresql
  selector:
    application: patroni
    cluster-name: postgres-cluster
    role: replica
BASH
kubectl apply -f services.yaml

3.6. RBAC (Service Account)

YAML
# rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: postgres
  namespace: postgres-ha

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: postgres
  namespace: postgres-ha
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
      - get
      - list
      - patch
      - update
      - watch
      - delete
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - patch
      - update
      - create
      - list
      - watch
      - delete
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
      - list
      - patch
      - update
      - watch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: postgres
  namespace: postgres-ha
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: postgres
subjects:
  - kind: ServiceAccount
    name: postgres
    namespace: postgres-ha
BASH
kubectl apply -f rbac.yaml

4. Verify Deployment

4.1. Check pods

BASH
kubectl get pods -n postgres-ha -w

# Output:
# NAME         READY   STATUS    RESTARTS   AGE
# postgres-0   1/1     Running   0          2m
# postgres-1   1/1     Running   0          1m
# postgres-2   1/1     Running   0          30s

4.2. Check StatefulSet

BASH
kubectl get statefulset -n postgres-ha

kubectl describe statefulset postgres -n postgres-ha

4.3. Check services

BASH
kubectl get svc -n postgres-ha

# NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
# postgres-config   ClusterIP   None            <none>        5432/TCP,8008/TCP   3m
# postgres-master   ClusterIP   10.96.100.1     <none>        5432/TCP            3m
# postgres-replica  ClusterIP   10.96.100.2     <none>        5432/TCP            3m

4.4. Check Patroni cluster

BASH
# Exec into pod
kubectl exec -it postgres-0 -n postgres-ha -- bash

# Inside pod
patronictl list

# + Cluster: postgres-cluster --+---------+-----------+----+-----------+
# | Member     | Host        | Role    | State     | TL | Lag in MB |
# +------------+-------------+---------+-----------+----+-----------+
# | postgres-0 | 10.244.0.5  | Leader  | running   |  1 |           |
# | postgres-1 | 10.244.0.6  | Replica | streaming |  1 |         0 |
# | postgres-2 | 10.244.0.7  | Replica | streaming |  1 |         0 |
# +------------+-------------+---------+-----------+----+-----------+

4.5. Test connection

BASH
# From within cluster
kubectl run -it --rm psql-client --image=postgres:18 --restart=Never -n postgres-ha -- \
  psql -h postgres-master -U postgres

# Create test table
CREATE TABLE k8s_test (id serial primary key, data text);
INSERT INTO k8s_test (data) VALUES ('Hello from Kubernetes!');
SELECT * FROM k8s_test;

5. Using Zalando Postgres Operator

5.1. Install operator

BASH
# Clone operator repo
git clone https://github.com/zalando/postgres-operator.git
cd postgres-operator

# Install via kubectl
kubectl apply -k kustomize/operator/

# Or via Helm
helm repo add postgres-operator-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator
helm install postgres-operator postgres-operator-charts/postgres-operator

5.2. Create PostgreSQL cluster

YAML
# postgres-cluster.yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-postgres-cluster
  namespace: postgres-ha
spec:
  teamId: "myteam"
  volume:
    size: 10Gi
    storageClass: standard
  numberOfInstances: 3
  users:
    myapp:
      - superuser
      - createdb
  databases:
    myapp: myapp
  postgresql:
    version: "18"
    parameters:
      shared_buffers: "256MB"
      max_connections: "100"
      log_statement: "all"
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: 2000m
      memory: 2Gi
  patroni:
    initdb:
      encoding: "UTF8"
      locale: "en_US.UTF-8"
      data-checksums: "true"
    pg_hba:
      - hostssl all all 0.0.0.0/0 scram-sha-256
      - host all all 0.0.0.0/0 scram-sha-256
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 33554432
  # Logical backups via the operator's built-in cron job
  enableLogicalBackup: true
  logicalBackupSchedule: "0 2 * * *"
BASH
kubectl apply -f postgres-cluster.yaml

5.3. Check cluster status

BASH
kubectl get postgresql -n postgres-ha

# NAME                   TEAM     VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE   STATUS
# acid-postgres-cluster  myteam   18        3      10Gi     500m          512Mi            2m    Running

kubectl get pods -l cluster-name=acid-postgres-cluster -n postgres-ha

5.4. Connect to cluster

BASH
# Get password
export PGPASSWORD=$(kubectl get secret myapp.acid-postgres-cluster.credentials.postgresql.acid.zalan.do \
  -n postgres-ha -o jsonpath='{.data.password}' | base64 -d)

# Port-forward
kubectl port-forward svc/acid-postgres-cluster 5432:5432 -n postgres-ha &

# Connect
psql -h localhost -U myapp -d myapp

6. Storage Management

6.1. StorageClass for performance

YAML
# storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres-fast
provisioner: ebs.csi.aws.com  # AWS EBS CSI driver (use your cloud's CSI provisioner)
parameters:
  type: gp3  # gp3 has a higher baseline than gp2; requires the CSI driver
  iops: "3000"
  throughput: "125"
  fsType: ext4
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain  # Don't delete PV on PVC deletion
BASH
kubectl apply -f storageclass.yaml

# Update StatefulSet to use new StorageClass
# volumeClaimTemplates.spec.storageClassName: postgres-fast

6.2. Volume expansion

BASH
# Enable volume expansion in StorageClass
# allowVolumeExpansion: true

# Edit PVC
kubectl edit pvc pgdata-postgres-0 -n postgres-ha

# Change: storage: 10Gi → storage: 20Gi

# K8s will automatically expand the volume
kubectl get pvc -n postgres-ha -w
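For scripted resizes, `kubectl patch` avoids the interactive editor. A sketch that builds the patch locally (the final `kubectl` call, commented out, requires a live cluster):

```shell
# Build the PVC resize patch as a JSON string
new_size=20Gi
patch=$(printf '{"spec":{"resources":{"requests":{"storage":"%s"}}}}' "$new_size")
echo "$patch"   # {"spec":{"resources":{"requests":{"storage":"20Gi"}}}}

# Apply against a live cluster:
# kubectl patch pvc pgdata-postgres-0 -n postgres-ha -p "$patch"
```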

6.3. Backup volumes

YAML
# Using VolumeSnapshot (if supported by storage provider)
# snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-0-snapshot
  namespace: postgres-ha
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: pgdata-postgres-0
BASH
kubectl apply -f snapshot.yaml
kubectl get volumesnapshot -n postgres-ha
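To restore, a new PVC can be provisioned from the snapshot via `dataSource` (standard CSI snapshot restore; names here match the snapshot example above, and the requested size must be at least the snapshot's size):

```yaml
# restore-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-restore
  namespace: postgres-ha
spec:
  storageClassName: standard
  dataSource:
    name: postgres-0-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```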

7. Monitoring on Kubernetes

7.1. Prometheus ServiceMonitor

YAML
# servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: postgres
  namespace: postgres-ha
  labels:
    prometheus: kube-prometheus
spec:
  selector:
    matchLabels:
      application: patroni
      cluster-name: postgres-cluster
  endpoints:
    - port: patroni
      path: /metrics
      interval: 30s
BASH
kubectl apply -f servicemonitor.yaml
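Once metrics are scraped, a PrometheusRule can alert on a leaderless cluster. A sketch, assuming the kube-prometheus stack; metric names come from Patroni's /metrics endpoint and differ between versions (`patroni_master` vs `patroni_primary`), so check your version's output first:

```yaml
# prometheusrule.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: postgres-alerts
  namespace: postgres-ha
  labels:
    prometheus: kube-prometheus
spec:
  groups:
    - name: patroni
      rules:
        - alert: PatroniNoLeader
          expr: max(patroni_master) == 0  # metric name varies by Patroni version
          for: 1m
          labels:
            severity: critical
          annotations:
            summary: "Patroni cluster has no leader"
```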

7.2. Grafana dashboard

BASH
# Import a PostgreSQL dashboard from grafana.com
# (e.g. dashboard ID 9628, "PostgreSQL Database")

# Or create custom dashboard
kubectl port-forward svc/grafana 3000:3000 -n monitoring

7.3. Logs with Loki

YAML
# promtail-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-config
  namespace: postgres-ha
data:
  promtail.yaml: |
    server:
      http_listen_port: 9080
      grpc_listen_port: 0
    
    clients:
      - url: http://loki:3100/loki/api/v1/push
    
    scrape_configs:
      - job_name: postgres
        kubernetes_sd_configs:
          - role: pod
            namespaces:
              names:
                - postgres-ha
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_label_application]
            action: keep
            regex: patroni

8. Scaling and Updates

8.1. Scale cluster

BASH
# Scale up
kubectl scale statefulset postgres --replicas=5 -n postgres-ha

# Scale down (careful!)
kubectl scale statefulset postgres --replicas=3 -n postgres-ha
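Scale-downs, node drains, and rolling updates are voluntary disruptions; a PodDisruptionBudget keeps a quorum of pods running through them. A sketch using the labels from section 3:

```yaml
# pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: postgres-pdb
  namespace: postgres-ha
spec:
  minAvailable: 2  # with 3 replicas, at most one pod may be evicted at a time
  selector:
    matchLabels:
      application: patroni
      cluster-name: postgres-cluster
```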

8.2. Rolling update

BASH
# Update PostgreSQL version
kubectl set image statefulset/postgres postgres=postgres:18.1-alpine -n postgres-ha

# Or edit StatefulSet
kubectl edit statefulset postgres -n postgres-ha

# K8s will update pods one by one
kubectl rollout status statefulset/postgres -n postgres-ha

8.3. Manual failover

BASH
# Exec into any pod
kubectl exec -it postgres-0 -n postgres-ha -- bash

# Perform switchover
patronictl switchover postgres-cluster --leader postgres-0 --candidate postgres-1
# (older Patroni versions use --master instead of --leader)

9. Troubleshooting

9.1. Pod stuck in Pending

BASH
kubectl describe pod postgres-0 -n postgres-ha

# Common issues:
# - Insufficient resources (CPU/memory)
# - PVC not bound
# - Node affinity rules not satisfied

9.2. Replication not working

BASH
kubectl logs postgres-1 -n postgres-ha

# Check Patroni status
kubectl exec -it postgres-1 -n postgres-ha -- patronictl list

# Check PostgreSQL logs
kubectl exec -it postgres-1 -n postgres-ha -- tail -f /var/lib/postgresql/data/pgdata/log/postgresql-*.log

9.3. Leader election issues

BASH
# Check Kubernetes Endpoints
kubectl get endpoints -n postgres-ha

# Check RBAC permissions
kubectl auth can-i create endpoints --as=system:serviceaccount:postgres-ha:postgres -n postgres-ha

10. Best Practices

✅ DO

  1. Use StatefulSets: Stable network identity.
  2. Set resource limits: Prevent OOM kills.
  3. Enable PV retention: Don't lose data on deletion.
  4. Use headless service: For StatefulSet discovery.
  5. Monitor with Prometheus: Track health.
  6. Use operators: Simplify management.
  7. Test failover: Regularly validate HA.
  8. Backup to external storage: S3, GCS, etc.
  9. Use anti-affinity: Spread pods across nodes.
  10. Document procedures: For operations team.

❌ DON'T

  1. Don't use Deployments: Use StatefulSets.
  2. Don't skip resource limits: Can crash node.
  3. Don't delete PVCs: Unless sure about data loss.
  4. Don't ignore pod affinity: All pods on same node = bad.
  5. Don't use emptyDir: Data lost on pod restart.
  6. Don't skip backups: K8s is not a backup solution.

11. Lab Exercises

Lab 1: Deploy Patroni with StatefulSets

Tasks:

  1. Create namespace and RBAC.
  2. Deploy ConfigMap and Secret.
  3. Create StatefulSet with 3 replicas.
  4. Deploy Services.
  5. Verify cluster status.

Lab 2: Test failover in Kubernetes

Tasks:

  1. Delete leader pod.
  2. Observe automatic failover.
  3. Verify new leader elected.
  4. Check application connectivity.
  5. Document RTO.

Lab 3: Use Zalando Postgres Operator

Tasks:

  1. Install operator.
  2. Create PostgreSQL cluster CR.
  3. Connect and create database.
  4. Scale cluster up/down.
  5. Test rolling update.

Lab 4: Monitor with Prometheus

Tasks:

  1. Deploy Prometheus Operator.
  2. Create ServiceMonitor.
  3. Query metrics in Prometheus.
  4. Create Grafana dashboard.
  5. Setup alerting rules.

12. Summary

Kubernetes vs Traditional

Aspect              Traditional       Kubernetes
DCS                 etcd cluster      K8s API
Storage             Local disks       PVCs
Service discovery   DNS/HAProxy       K8s Services
Scaling             Manual            kubectl scale
Updates             Manual SSH        Rolling updates
Monitoring          Separate setup    ServiceMonitor

Key Concepts

  • StatefulSet: Ordered pod creation/deletion.
  • PVC: Persistent data storage.
  • Service: Endpoint discovery (master/replica).
  • ConfigMap: Patroni configuration.
  • Secret: Passwords and credentials.
  • RBAC: Kubernetes API access for Patroni.

Next Steps

Lesson 23 will cover Patroni Configuration Management:

  • Dynamic configuration changes
  • DCS-based config storage
  • patronictl edit-config usage
  • Zero-downtime updates
  • Configuration validation
