April 2026 • Database & Automation • 6 min read

Automating Database Backups: From Manual to "Set and Forget"

1. The Strategy: When and How?

Before we automate, we must decide on the frequency. A missing backup strategy is a disaster waiting to happen.

Best Practice Timing

  • The "Golden Hour": Most production databases are backed up between 1:00 AM and 3:00 AM. This is typically when user traffic is lowest, minimizing the performance impact of the pg_dump process on active connections and IOPS.
  • Frequency:
    • Standard Apps: Once every 24 hours (Daily).
    • Critical Apps: Every 4 or 6 hours.
    • High-Transaction Apps: Hourly + Write-Ahead Logging (WAL) archiving.
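
For the high-transaction tier, WAL archiving is configured in postgresql.conf rather than in a backup script. A minimal sketch, using the archive_command pattern from the PostgreSQL documentation; the archive directory path is an assumption to adapt to your volume layout:

Conf
# Continuous WAL archiving (changing archive_mode requires a server restart)
wal_level = replica
archive_mode = on
# %p = path to the WAL segment, %f = its file name; never overwrite an existing archive
archive_command = 'test ! -f /var/lib/postgresql/wal_archive/%f && cp %p /var/lib/postgresql/wal_archive/%f'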

The 3-2-1 Backup Rule

To ensure your data is truly resilient against ransomware or physical hardware failure, mandate the 3-2-1 rule:

  • 3 copies of your data (Live DB, Local Backup, Offsite Backup).
  • 2 different media types (Local Storage, Object Storage/Cloud).
  • 1 copy offsite (A distinct geographic physical location or cloud provider).

2. Refining Your Script for Automation

When a script runs under a headless scheduler like Cron, it executes without an interactive terminal (TTY). The script therefore has to handle its own logging and enforce a retention policy (pruning old backups so they never saturate the disk).

Below is a production-ready shell script that automates PostgreSQL backups via Docker Compose:

Bash
#!/usr/bin/env bash

# Exit on errors, unset variables, and failures anywhere in a pipeline
set -euo pipefail

# --- CONFIGURATION ---
BACKUP_BASE_DIR="$HOME/backups"
LOG_FILE="$BACKUP_BASE_DIR/backup_log.log"
RETENTION_DAYS=7  # Prune backups older than 7 days
TODAY=$(date +"%Y-%m-%d")
BACKUP_DIR="$BACKUP_BASE_DIR/$TODAY"

# Ensure base directory and log file exist safely
mkdir -p "$BACKUP_BASE_DIR"
touch "$LOG_FILE"

# Mirror stdout and stderr to both the console and the log file
exec > >(tee -a "$LOG_FILE") 2>&1

echo "--- Backup Started: $(date) ---"

# Create today's backup directory with owner-only permissions
if [[ ! -d "$BACKUP_DIR" ]]; then
    mkdir -p "$BACKUP_DIR"
    chmod 700 "$BACKUP_DIR"
fi

backup_postgres() {
    local env_name=$1
    local compose_dir=$2
    local service_name="postgres"

    echo "-> Backing up $env_name..."
    cd "$compose_dir"

    # -T disables pseudo-TTY allocation, required when no terminal is attached (e.g., under cron)
    docker compose exec -T "$service_name" sh -c 'pg_dumpall -U "$POSTGRES_USER"' | gzip > "$BACKUP_DIR/${env_name}_${TODAY}.sql.gz"

    echo "   [SUCCESS] ${env_name} saved and compressed."
}

# Run Backups via Absolute Paths
backup_postgres "SRV-DB-NAME" "$HOME/path/to/db-name"

# --- RETENTION POLICY (Pruning) ---
echo "-> Cleaning up historical root aggregates older than $RETENTION_DAYS days..."
find "$BACKUP_BASE_DIR" -type d -mtime +$RETENTION_DAYS -exec rm -rf {} +

echo "--- Backup Completed Successfully: $(date) ---"

3. Scheduling with Cron

The cron daemon is the standard Unix facility for scheduling unattended background jobs (its crontab interface is specified by POSIX).

Step 1: Open the Crontab editor

Bash
crontab -e

Step 2: Declare the Schedule

To run the backup script every night at 2:30 AM, add this line to the file:

Cron
30 2 * * * /bin/bash /home/your_username/scripts/db_backup_all.sh

Cron Syntax Breakdown (30 2 * * *):

  • 30: Minute (0-59)
  • 2: Hour (0-23)
  • *: Day of month (1-31)
  • *: Month (1-12)
  • *: Day of week (0-6, Sunday is 0)
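
One gotcha worth calling out: cron runs jobs with a minimal environment, so `docker` may not be on its default PATH. Declaring PATH at the top of the crontab (and MAILTO, if you want failure output emailed) avoids the classic "works in my shell, fails in cron" surprise. The values below are typical examples; verify with `which docker` on your host:

Cron
# MAILTO address and PATH entries are examples; adjust for your host
MAILTO="ops@example.com"
PATH=/usr/local/bin:/usr/bin:/bin
30 2 * * * /bin/bash /home/your_username/scripts/db_backup_all.sh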

Step 3: Verify the Job

List your active cron jobs to confirm the entry was saved:

Bash
crontab -l
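
To confirm the job actually fired, check the system log. The location varies by distro; on Debian/Ubuntu:

Bash
grep CRON /var/log/syslog

On systemd hosts, `journalctl -u cron` (or `-u crond` on RHEL-family systems) shows the same information.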

4. Production Best Practices for Backups

A. Monitor your Backups (Dead Man's Snitch)

A common failure mode is a cron job that dies silently, due to a full disk, exhausted inodes, or rotated DB credentials, and nobody notices until a restore is needed. Use an external heartbeat service (like Healthchecks.io) that alerts you when an expected ping never arrives. Add a standard curl ping at the end of the bash script:

Bash
curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/your-uuid-here
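
Healthchecks.io also accepts a `/fail` suffix on the same ping URL, which distinguishes "the script failed" from "the script never ran". A sketch of a hypothetical wrapper that reports both outcomes (the UUID is a placeholder):

Bash
#!/usr/bin/env bash
# Wrapper: run the backup, then report the outcome to Healthchecks.io
HC_URL="https://hc-ping.com/your-uuid-here"

if /bin/bash /home/your_username/scripts/db_backup_all.sh; then
    curl -fsS -m 10 --retry 5 -o /dev/null "$HC_URL"
else
    curl -fsS -m 10 --retry 5 -o /dev/null "$HC_URL/fail"
fi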

B. Offsite Sync (The "Cloud" Step)

Keeping backups only on the local machine leaves a single point of failure. Push your compressed `.gz` files to object storage such as S3. The AWS CLI's `sync` command handles this:

Bash
# Sync the local backup tree to AWS S3
aws s3 sync "$BACKUP_BASE_DIR" s3://my-unique-backup-bucket/db-backups/ --delete
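
One caveat: `--delete` mirrors your local pruning into the bucket, so offsite retention equals local retention (7 days here). To let the offsite copy outlive local pruning, drop the flag, or enable bucket versioning so deleted objects remain recoverable. A one-time setup, using the example bucket name above:

Bash
aws s3api put-bucket-versioning \
    --bucket my-unique-backup-bucket \
    --versioning-configuration Status=Enabled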

C. Test Your Restores (Mandatory)

A pristine backup is worthless until you have proven it restores. Dedicate a window once a month to spin up a throwaway Docker container and restore one of your `.sql.gz` files into it, as sketched below. Do not wait for a crisis to discover that your dumps have been silently corrupt for weeks.
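
A minimal sketch of such a drill; the backup file name follows the script above, while the date, port, and image tag are placeholders:

Bash
# Spin up a throwaway PostgreSQL container on a spare port
docker run --rm -d --name restore-test \
    -e POSTGRES_PASSWORD=throwaway -p 5433:5432 postgres:16

sleep 10  # give the server a moment to start accepting connections

# pg_dumpall output is plain SQL: decompress it and feed it to psql
gunzip -c "$HOME/backups/2026-04-01/SRV-DB-NAME_2026-04-01.sql.gz" \
    | docker exec -i restore-test psql -U postgres

# Sanity check: list the restored databases, then tear everything down
docker exec restore-test psql -U postgres -c '\l'
docker stop restore-test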

D. Environmental Variable Security

Because the script dumps through `docker compose exec`, database credentials never need to live in the script or in the host shell. Keep POSTGRES_USER and POSTGRES_PASSWORD in the Compose project's `.env` file; the `pg_dumpall` command reads them from the container's own environment.
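
A sketch of that layout, with placeholder values (app_user, change-me): Compose injects the `.env` values into the container, and the backup script inherits nothing.

Env
# .env (lives next to docker-compose.yml; keep it out of version control)
POSTGRES_USER=app_user
POSTGRES_PASSWORD=change-me

YAML
# docker-compose.yml (excerpt)
services:
  postgres:
    image: postgres:16
    env_file: .env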
