simple backup for mysql, postgresql, svn and files to s3 or local filesystem
Simple database and filesystem backups with S3 and Rackspace Cloud Files support (with optional encryption)
We needed a backup solution that would satisfy our requirements, and since we didn't find one, we wrote our own :)
The following functionality was contributed by astrails-safe users:
pg_dump (by Mark Mansour, [email protected])
Thanks to all :)
```shell
sudo gem install astrails-safe --source http://gemcutter.org
```
Please report problems at the Issues tracker
```
Usage: astrails-safe [OPTIONS] CONFIG_FILE
Options:
  -h, --help       This help screen
  -v, --verbose    be verbose, duh!
  -n, --dry-run    just pretend, don't do anything.
  -L, --local      skip remote storage, only do local backups
```
Note: CONFIG_FILE will be created from template if missing
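Once the template has been generated and edited, a dry run is a safe way to verify it before touching any storage (the config path here is just an example):

```shell
astrails-safe --dry-run /etc/astrails-safe.rb
```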
If you want to encrypt your backups you have 2 options:

* use simple password encryption
* use GPG public key encryption
IMPORTANT: some gpg installations automatically set the 'use-agent' option in the default configuration file that is created when you run gpg for the first time. This will cause gpg to fail on the 2nd run if you don't have the agent running. The result is that astrails-safe will work ONCE when you manually test it and then fail on any subsequent run. The solution is to remove 'use-agent' from the config file (usually /root/.gnupg/gpg.conf). To mitigate this problem for the gpg 1.x series, the '--no-use-agent' option is added by default to the autogenerated config file, but for gpg2 it doesn't work. As the man page puts it: "This is dummy option. gpg2 always requires the agent." :(
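The fix itself is a one-line edit. A minimal sketch, run here against a throwaway file instead of the real /root/.gnupg/gpg.conf:

```shell
# Demo of the fix on a throwaway file; the real file is usually /root/.gnupg/gpg.conf
conf=$(mktemp)
printf 'use-agent\n' > "$conf"
# comment the option out rather than deleting it
sed -i 's/^use-agent$/#use-agent/' "$conf"
cat "$conf"
```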
For simple password encryption, just add a password entry in the gpg section. For public key encryption you will need to create a public/secret keypair.
We recommend creating your GPG keys only on your local machine and then transferring your public key to the server that will do the backups.
This way the server will only know how to encrypt the backups, but only you will be able to decrypt them, using the secret key you keep locally. Of course you MUST back up your backup encryption key :) We also recommend printing a hard paper copy of your GPG key, 'just in case'.
The procedure to create and transfer the key is as follows:
run 'gpg --gen-key' on your local machine and follow the onscreen instructions to create the key (you can accept all the defaults).
extract your public key into a file (assuming you used [email protected] as your key email):
```shell
gpg -a --export [email protected] > [email protected]
```
transfer public key to the server
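One common way to copy the exported key file over (the hostname `server` is just a placeholder; any scp/rsync/sftp transfer works):

```shell
scp [email protected] server:
```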
import public key on the remote system:
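The standard gpg import command, assuming the file exported in the previous step:

```shell
gpg --import [email protected]
```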
since we don't keep the secret part of the key on the remote server, gpg has no way to know it's yours and can be trusted. To fix that we can sign it with another trusted key, or just directly modify its trust level in gpg (use level 5):
```
$ gpg --edit-key [email protected]
...
Command> trust
...
  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
...
Command> quit
```
export your secret key for backup (we recommend printing it on paper, burning it to a CD/DVD, and storing it in a safe place):
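A sketch of that export; `--export-secret-keys` is the standard gpg flag, and the output filename here is arbitrary:

```shell
gpg -a --export-secret-keys [email protected] > secret-key.asc
```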
```ruby
safe do
  verbose true

  local :path => "/backup/:kind/:id"

  s3 do
    key "...................."
    secret "........................................"
    bucket "backup.astrails.com"
    path "servers/alpha/:kind/:id"
  end

  cloudfiles do
    user "..........."
    api_key "................................."
    container "safe_backup"
    path ":kind/" # this is default
    service_net false
  end

  sftp do
    host "sftp.astrails.com"
    user "astrails"
    # port 8023
    password "ssh password for sftp"
  end

  gpg do
    command "/usr/local/bin/gpg"
    options "--no-use-agent"
    # symmetric encryption key
    # password "qwe"

    # public GPG key (must be known to GPG, i.e. be on the keyring)
    key "[email protected]"
  end

  keep do
    local 20
    s3 100
    cloudfiles 100
    sftp 100
  end

  mysqldump do
    options "-ceKq --single-transaction --create-options"

    user "root"
    password "............"
    socket "/var/run/mysqld/mysqld.sock"

    database :blog
    database :servershape
    database :astrails_com
    database :secret_project_com do
      skip_tables "foo"
      skip_tables ["bar", "baz"]
    end
  end

  svndump do
    repo :my_repo do
      repo_path "/home/svn/my_repo"
    end
  end

  pgdump do
    # -i => ignore version, -x => do not dump privileges (grant/revoke),
    # -O => skip restoration of object ownership in plain text format
    options "-i -x -O"

    user "username"
    # shouldn't be used, instead setup ident. Current functionality exports
    # a password env to the shell which pg_dump uses - untested!
    password "............"

    database :blog
    database :stateofflux_com
  end

  tar do
    options "-h" # dereference symlinks

    archive "git-repositories", :files => "/home/git/repositories"
    archive "dot-configs", :files => "/home/*/.[^.]*"
    archive "etc", :files => "/etc", :exclude => "/etc/puppet/other"

    archive "blog-astrails-com" do
      files "/var/www/blog.astrails.com/"
      exclude "/var/www/blog.astrails.com/log"
      exclude "/var/www/blog.astrails.com/tmp"
    end

    archive "astrails-com" do
      files "/var/www/astrails.com/"
      exclude ["/var/www/astrails.com/log", "/var/www/astrails.com/tmp"]
    end
  end
end
```
* Create your feature branch (`git checkout -b my-new-feature`)
* Commit your changes (`git commit -am 'Add some feature'`)
* Push to the branch (`git push origin my-new-feature`)
Copyright (c) 2010-2013 Astrails Ltd. See LICENSE.txt for details.