A Docker container that automatically backs up PostgreSQL databases and uploads them to Amazon S3.
This project was inspired by and builds upon the excellent work of Musab520 and their pgbackup-sidecar project. We extend our gratitude for the original implementation that served as the foundation for this enhanced version.
- Automated PostgreSQL database backups using `pg_dump`
- Upload backups to Amazon S3
- Encryption support with GPG or OpenSSL
- Additional compression options (gzip, bzip2, xz) beyond pg_dump's built-in compression
- Configurable backup schedule via cron
- Webhook notifications for backup status
- Robust error handling and logging
- `POSTGRES_HOST` - PostgreSQL server hostname
- `POSTGRES_PORT` - PostgreSQL server port (default: 5432)
- `POSTGRES_DB` - Database name to back up
- `POSTGRES_USER` - PostgreSQL username
- `POSTGRES_PASSWORD` - PostgreSQL password
- `TITLE` - Title for webhook notifications
- `CRON_TIME` - Cron schedule for backups (e.g., `0 2 * * *` for daily at 2 AM)
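For illustration, a hypothetical `.env`-style block with placeholder values (adjust to your environment):

```bash
POSTGRES_HOST=db
POSTGRES_PORT=5432
POSTGRES_DB=myapp
POSTGRES_USER=postgres
POSTGRES_PASSWORD=change-me
TITLE=Nightly myapp backup
CRON_TIME=0 2 * * *
```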
- `ENCRYPTION_KEY` - Encryption passphrase/key (if not set, backups are not encrypted)
- `ENCRYPTION_METHOD` - Encryption method: `gpg` (default) or `openssl`
- `COMPRESSION_TYPE` - Additional compression: `none` (default), `gzip`, `bzip2`, or `xz`

Note: PostgreSQL's `--format=custom` already includes compression. Additional compression is applied on top of this for extra space savings, but may increase backup time.

- `CLEANUP_ENABLED` - Control whether to clean up local backup files after processing: `true` (default) or `false`

Note: When `true`, all local backup files are removed after processing. When `false`, only intermediate files are cleaned up and the final backup file is retained locally.
- `S3_BUCKET` - S3 bucket name for storing backups
- `S3_PREFIX` - S3 key prefix (default: `backups`)
- `S3_OPTIONS` - Additional AWS CLI options (e.g., `--storage-class STANDARD_IA`)
Note: With `CLEANUP_ENABLED` at its default of `true`, local backup files are automatically cleaned up after processing, regardless of S3 configuration. This ensures your system doesn't accumulate backup files over time.
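A hypothetical S3 configuration using the infrequent-access storage class:

```bash
S3_BUCKET=my-backup-bucket
S3_PREFIX=backups
S3_OPTIONS=--storage-class STANDARD_IA
```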
- `WEBHOOK_URL` - URL to send backup status notifications
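The exact notification payload depends on this image's implementation; to inspect what is sent, you can point `WEBHOOK_URL` at a throwaway listener on the Docker host (hostname below is an assumption that holds on Docker Desktop):

```bash
# Example: WEBHOOK_URL=http://host.docker.internal:8080/backup-status
# Print the raw HTTP request of the next delivery
nc -l 8080        # some netcat builds need: nc -l -p 8080
```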
```bash
# Static credentials
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_DEFAULT_REGION=us-east-1

# Or assume an IAM role
AWS_ROLE_ARN=arn:aws:iam::123456789012:role/BackupRole
```
No additional environment variables are needed when running on EC2 with an instance profile.
For a complete working example, see the `docker-compose.yml` file in this repository, which demonstrates all configuration options including database setup, encryption, compression, and optional S3 upload.
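As an illustrative sketch only (image name, service names, and values are placeholders; the repository's `docker-compose.yml` is the authoritative example):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
  backup:
    image: your-registry/pg-backup:latest   # placeholder for this project's image
    depends_on:
      - db
    environment:
      POSTGRES_HOST: db
      POSTGRES_PORT: "5432"
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: change-me
      CRON_TIME: "0 2 * * *"
      COMPRESSION_TYPE: gzip
      ENCRYPTION_KEY: change-me
      S3_BUCKET: my-backup-bucket
      S3_PREFIX: backups
      AWS_ACCESS_KEY_ID: your-access-key
      AWS_SECRET_ACCESS_KEY: your-secret-key
      AWS_DEFAULT_REGION: us-east-1
```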
**GPG (default):**
- Uses the AES256 cipher
- More secure and widely supported
- File extension: `.pgdump.gpg` or `.pgdump.gz.gpg` (if compressed)
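For reference, a roughly equivalent manual encryption step (an assumption; the container's exact flags may differ):

```bash
# Symmetric GPG encryption with AES256; some gpg versions also
# require --pinentry-mode loopback for a command-line passphrase
gpg --batch --yes --passphrase "your-passphrase" --symmetric --cipher-algo AES256 \
    --output backup.pgdump.gpg backup.pgdump
```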
**OpenSSL:**
- Uses AES-256-CBC with salt
- Good compatibility
- File extension: `.pgdump.enc` or `.pgdump.gz.enc` (if compressed)
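The matching manual encryption step (again an assumption, mirroring the decryption command shown later):

```bash
openssl enc -aes-256-cbc -salt -in backup.pgdump -out backup.pgdump.enc -k "your-passphrase"
```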
- `none` (default): Uses only `pg_dump`'s built-in compression
- `gzip`: Fast compression, good balance of speed and size
- `bzip2`: Better compression ratio than gzip, slower
- `xz`: Best compression ratio, slowest
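To gauge these trade-offs on your own data, you can compress a copy of an existing dump with each tool (illustrative; filenames are placeholders):

```bash
# Compare size and time per tool on copies of an existing dump
for tool in gzip bzip2 xz; do
  cp backup.pgdump "sample-$tool.pgdump"
  time "$tool" "sample-$tool.pgdump"
done
ls -lh sample-*.pgdump.*
rm -f sample-*.pgdump.*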
Backups are processed in the following order:

1. PostgreSQL dump (with built-in compression if using custom format)
2. Additional compression (if specified)
3. Encryption (if key provided)
To restore, reverse the steps:

```bash
# Decrypt a GPG-encrypted, gzip-compressed backup
gpg --batch --yes --passphrase "your-passphrase" --decrypt backup.pgdump.gz.gpg | gunzip > backup.pgdump

# Decrypt an OpenSSL-encrypted, gzip-compressed backup
openssl enc -aes-256-cbc -d -in backup.pgdump.gz.enc -k "your-passphrase" | gunzip > backup.pgdump
```
Backups are stored in S3 with the following structure:
```bash
# Unencrypted, no additional compression
s3://your-bucket/backups/hostname/2024-01-15T10:30:00+00:00.pgdump

# With gzip compression
s3://your-bucket/backups/hostname/2024-01-15T10:30:00+00:00.pgdump.gz

# With GPG encryption
s3://your-bucket/backups/hostname/2024-01-15T10:30:00+00:00.pgdump.gpg

# With both gzip compression and GPG encryption
s3://your-bucket/backups/hostname/2024-01-15T10:30:00+00:00.pgdump.gz.gpg
```
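With the AWS CLI configured, you can verify uploads landed where expected:

```bash
aws s3 ls s3://your-bucket/backups/hostname/
```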
Your AWS user or role needs the following S3 permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::your-backup-bucket/*"
    }
  ]
}
```
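One way to attach this as an inline user policy (user, policy, and file names below are placeholders):

```bash
aws iam put-user-policy \
  --user-name backup-user \
  --policy-name pgbackup-s3-write \
  --policy-document file://backup-policy.json
```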
- Use IAM roles instead of access keys when possible
- Follow the principle of least privilege for S3 permissions
- Enable S3 bucket encryption
- Consider using AWS Secrets Manager for sensitive credentials
- Use strong encryption keys for backup encryption (one way to generate one is shown after this list)
- Store encryption keys securely - losing the key means losing access to your backups
- Test backup restoration including decryption process
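For the strong-keys point above, one option is to generate a random passphrase:

```bash
openssl rand -base64 32
```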
- S3 Upload Fails: Check AWS credentials and S3 bucket permissions
- Database Connection Issues: Verify PostgreSQL host, port, and credentials (see the checks after this list)
- Permission Errors: Ensure the container has write access to `/opt/dumps`
- Encryption Failures: Verify encryption key is set correctly and method is supported
- Compression Issues: Check available disk space and compression tool availability
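A few quick checks for the most common failures (hostnames and bucket names are placeholders):

```bash
# Is PostgreSQL reachable with these settings?
pg_isready -h your-postgres-host -p 5432

# Can these credentials write to the bucket?
echo ok > /tmp/s3-test && aws s3 cp /tmp/s3-test s3://your-backup-bucket/backups/s3-test

# Does the container have space and write access for local dumps?
df -h /opt/dumps && touch /opt/dumps/.write-test && rm /opt/dumps/.write-test
```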
To restore an unencrypted, uncompressed backup:

```bash
pg_restore -h localhost -U postgres -d restored_db backup.pgdump
```
```bash
# For gzip
gunzip backup.pgdump.gz
pg_restore -h localhost -U postgres -d restored_db backup.pgdump

# For bzip2
bunzip2 backup.pgdump.bz2
pg_restore -h localhost -U postgres -d restored_db backup.pgdump

# For xz
unxz backup.pgdump.xz
pg_restore -h localhost -U postgres -d restored_db backup.pgdump
```
```bash
# GPG + gzip example
gpg --batch --yes --passphrase "your-passphrase" --decrypt backup.pgdump.gz.gpg | gunzip > backup.pgdump
pg_restore -h localhost -U postgres -d restored_db backup.pgdump

# OpenSSL + bzip2 example
openssl enc -aes-256-cbc -d -in backup.pgdump.bz2.enc -k "your-passphrase" | bunzip2 > backup.pgdump
pg_restore -h localhost -U postgres -d restored_db backup.pgdump
```
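Note that the target database must exist before running `pg_restore` (unless you pass `-C` to have it created):

```bash
createdb -h localhost -U postgres restored_db
```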