Module amazons3
Store-type extension that writes data to Amazon S3.
This extension requires a new configuration section <amazons3>
and is intended to be run immediately after the standard stage action,
replacing the standard store action. Aside from its own configuration,
it requires the options and staging configuration sections in the
standard Cedar Backup configuration file. Since it is intended to replace
the store action, it does not rely on any store configuration.
The underlying functionality relies on the AWS CLI
interface. Before you use this extension, you need to set up your
Amazon S3 account and configure the AWS CLI connection per Amazon's
documentation. The extension assumes that the backup is being executed
as root, and switches over to the configured backup user to communicate
with AWS. So, make sure you configure AWS CLI as the backup user and not
root.
You can optionally configure Cedar Backup to encrypt data before
sending it to S3. To do that, provide a complete command line using the
${input} and ${output} variables to represent
the original input file and the encrypted output file. This command will
be executed as the backup user.
For instance, you can use something like this with GPG:
/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
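To illustrate how a command line like this can be applied, here is a minimal Python sketch that substitutes the ${input} and ${output} variables and runs the result. The helper names are hypothetical and not part of this module.

```python
# Hypothetical sketch: resolve a configured encrypt command line by
# substituting ${input} and ${output}, then run it as a subprocess.
# These helper names are illustrative and not part of the module.
import shlex
import string
import subprocess

def resolveEncryptCommand(template, inputPath, outputPath):
    """Substitute ${input} and ${output} in the configured command line."""
    resolved = string.Template(template).substitute(input=inputPath, output=outputPath)
    return shlex.split(resolved)

def runEncryptCommand(template, inputPath, outputPath):
    """Run the resolved command, raising CalledProcessError on failure."""
    subprocess.check_call(resolveEncryptCommand(template, inputPath, outputPath))
```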
The GPG mechanism depends on a strong passphrase for security. One
way to generate a strong passphrase is using your system random number
generator, i.e.:
dd if=/dev/urandom count=20 bs=1 | xxd -ps
(See StackExchange for more details about that advice.) If
you decide to use encryption, make sure you save off the passphrase in a
safe place, so you can get at your backup data later if you need to. And
obviously, make sure to set permissions on the passphrase file so it can
only be read by the backup user.
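The same passphrase advice can be followed from Python's standard library. This sketch (with an illustrative helper name) generates a 20-byte random passphrase and writes it to a file readable only by its owner:

```python
# Sketch only: generate a strong random passphrase, equivalent in
# spirit to the dd/xxd pipeline above, and store it with 0600
# permissions so only the backup user can read it.
import os
import secrets

def writePassphraseFile(path, nbytes=20):
    passphrase = secrets.token_hex(nbytes)  # nbytes of randomness, as hex
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(passphrase + "\n")
    return passphrase
```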
This extension was written for and tested on Linux. It will throw an
exception if run on Windows.
Author:
Kenneth J. Pronovici <pronovic@ieee.org>
Classes:

AmazonS3Config
Class representing Amazon S3 configuration.

LocalConfig
Class representing this extension's configuration document.
Variables:

logger = <logging.Logger object>
SU_COMMAND = ['su']
AWS_COMMAND = ['aws']
STORE_INDICATOR = 'cback.amazons3'
__package__ = 'CedarBackup2.extend'

Executes the amazons3 backup action.
- Parameters:
configPath (String representing a path on disk.) - Path to configuration file on disk.
options (Options object.) - Program command-line options.
config (Config object.) - Program configuration.
- Raises:
ValueError - Under many generic error conditions
IOError - If there are I/O problems reading or writing files

_findCorrectDailyDir(options, config, local)
Finds the correct daily staging directory to be written to Amazon
S3.
This is substantially similar to the same function in store.py. The
main difference is that it doesn't rely on store configuration at
all.
- Parameters:
options - Options object.
config - Config object.
local - Local config object.
- Returns:
- Correct staging dir, as a dict mapping directory to date suffix.
- Raises:
IOError - If the staging directory cannot be found.
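A minimal sketch of the lookup described above: map the dated staging subdirectory (e.g. /opt/stage/2005/02/10) to its date suffix, raising IOError when it does not exist. The function name and the date layout are assumptions for illustration, not the module's actual code.

```python
# Illustrative sketch: find today's dated staging directory and
# return it as a dict mapping path to date suffix, raising IOError
# (an alias of OSError in Python 3) when it cannot be found.
import datetime
import os

def findDailyDir(stagingBase, day=None):
    day = day or datetime.date.today()
    suffix = day.strftime("%Y/%m/%d")          # e.g. 2005/02/10
    path = os.path.join(stagingBase, suffix)
    if not os.path.isdir(path):
        raise IOError("Unable to find staging directory %s" % path)
    return {path: suffix}
```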

_applySizeLimits(options, config, local, stagingDirs)
Apply size limits, throwing an exception if any limits are
exceeded.
Size limits are optional. If a limit is set to None, it does not
apply. The full size limit applies if the full option is set or if today
is the start of the week. The incremental size limit applies otherwise.
Limits are applied to the total size of all the relevant staging
directories.
- Parameters:
options - Options object.
config - Config object.
local - Local config object.
stagingDirs - Dictionary mapping directory path to date suffix.
- Raises:
ValueError - Under many generic error conditions
ValueError - If a size limit has been exceeded
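The size-limit check described above can be sketched as follows: total up the on-disk size of the relevant staging directories and raise ValueError when the applicable limit is exceeded. Function names are illustrative, and limits are expressed in bytes with None meaning "no limit".

```python
# Sketch of the size-limit check: sum file sizes under each staging
# directory and compare against an optional byte limit.
import os

def totalDirSize(path):
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def applySizeLimit(stagingDirs, limitBytes):
    if limitBytes is None:
        return  # optional limit not configured
    total = sum(totalDirSize(d) for d in stagingDirs)
    if total > limitBytes:
        raise ValueError("Size limit exceeded: %d > %d bytes" % (total, limitBytes))
```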

_writeToAmazonS3(config, local, stagingDirs)
Writes the indicated staging directories to an Amazon S3 bucket.
Each of the staging directories listed in stagingDirs
will be written to the configured Amazon S3 bucket from local
configuration. The directories will be placed into the image at the root
by date, so staging directory /opt/stage/2005/02/10 will be
placed into the S3 bucket at /2005/02/10. If an encrypt
command is provided, the files will be encrypted first.
- Parameters:
config - Config object.
local - Local config object.
stagingDirs - Dictionary mapping directory path to date suffix.
- Raises:
ValueError - Under many generic error conditions
IOError - If there is a problem writing to Amazon S3
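One plausible shape for the per-directory upload, shown as a sketch: build an AWS CLI command against the dated bucket prefix and wrap it in su so it runs as the configured backup user. Only command construction is shown, and the exact flags the module passes to the AWS CLI are not guaranteed to match.

```python
# Sketch only: construct (but do not execute) a recursive AWS CLI
# upload command, wrapped in su so it runs as the backup user.
def buildUploadCommand(backupUser, stagingDir, s3BucketUrl):
    awsCommand = "aws s3 cp --recursive %s %s" % (stagingDir, s3BucketUrl)
    return ["su", backupUser, "-c", awsCommand]
```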

Writes a store indicator file into staging directories.
- Parameters:
config - Config object.
stagingDirs - Dictionary mapping directory path to date suffix.

Clear any existing backup files for an S3 bucket URL.
- Parameters:
config - Config object.
s3BucketUrl - S3 bucket URL associated with the staging directory

_uploadStagingDir(config, stagingDir, s3BucketUrl)
Upload the contents of a staging directory out to the Amazon S3
cloud.
- Parameters:
config - Config object.
stagingDir - Staging directory to upload
s3BucketUrl - S3 bucket URL associated with the staging directory

_verifyUpload(config, stagingDir, s3BucketUrl)
Verify that a staging directory was properly uploaded to the Amazon S3
cloud.
- Parameters:
config - Config object.
stagingDir - Staging directory to verify
s3BucketUrl - S3 bucket URL associated with the staging directory
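One way such a verification could work, as a hedged sketch: compare the local file count against the object count reported by `aws s3 ls --recursive`. The one-object-per-line output format is an assumption about the CLI, and the helper names are illustrative rather than the module's actual implementation.

```python
# Sketch: verify an upload by comparing local file count against the
# number of objects listed in captured `aws s3 ls --recursive` output.
import os

def countLocalFiles(stagingDir):
    return sum(len(files) for _, _, files in os.walk(stagingDir))

def countRemoteObjects(s3LsOutput):
    # assume each non-empty line describes one remote object
    return len([line for line in s3LsOutput.splitlines() if line.strip()])

def verifyUpload(stagingDir, s3LsOutput):
    local = countLocalFiles(stagingDir)
    remote = countRemoteObjects(s3LsOutput)
    if local != remote:
        raise IOError("Verification failed: %d local files vs %d remote objects" % (local, remote))
```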

_encryptStagingDir(config, local, stagingDir, encryptedDir)
Encrypt a staging directory, creating a new directory in the
process.
- Parameters:
config - Config object.
local - Local config object.
stagingDir - Staging directory to use as source
encryptedDir - Target directory into which encrypted files should be written