Amazon S3 or S3-like storage

Amazon S3 is cloud object storage that offers scalability, high availability and security through a simple but powerful interface. Objects are stored under keys and have attributes and metadata associated with them. Keys can contain / in their names, which creates folder-like structures. All storage is organised into buckets: each bucket is like a separate drive that holds different objects.
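
For example, a minimal boto3 sketch (assuming a hypothetical bucket my-bucket and already-configured credentials) shows how keys containing / behave like folders when listed with a delimiter:

  import boto3

  s3 = boto3.client("s3")
  # List the "subfolders" directly under the backups/ prefix.
  response = s3.list_objects_v2(Bucket="my-bucket", Prefix="backups/", Delimiter="/")
  for common_prefix in response.get("CommonPrefixes", []):
      print(common_prefix["Prefix"])  # e.g. backups/2024-05-01/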

You can use Amazon S3 or any S3-compatible cloud storage provider (like DigitalOcean) to store the backups.

The backupsPath in S3 acts as a prefix for the object keys. It is recommended to set it so you can easily separate the backups from the rest of the files in the bucket. The initial slash / is removed when uploading the files. You don't need to create anything under the backupsPath prefix; it is created automatically on upload.
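
As a rough illustration (the path and file name below are hypothetical), the resulting object key is built like this:

  # Hypothetical example: backupsPath "/backups/db", leading slash stripped.
  backups_path = "/backups/db"
  key = backups_path.lstrip("/") + "/" + "2024-05-01.tar.gz"
  # -> "backups/db/2024-05-01.tar.gz"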

The content type of each file is guessed and set in the object metadata when uploading.
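
Something along these lines happens under the hood; a sketch (not the tool's actual code) using python-magic and boto3, with the bucket, key and path as placeholder values:

  import boto3
  import magic

  s3 = boto3.client("s3")
  path = "/tmp/backup.tar.gz"
  # Guess the MIME type from the file contents.
  mime_type = magic.from_file(path, mime=True)  # e.g. "application/gzip"
  s3.upload_file(
      path,
      "my-bucket",
      "backups/backup.tar.gz",
      ExtraArgs={"ContentType": mime_type},
  )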

To start using it, you must define accessKeyId, accessSecretKey, region and bucket. If using something other than Amazon S3 (like DigitalOcean's Spaces), you must define the endpoint as well. For example, in Spaces, the endpoint is https://${region}.digitaloceanspaces.com, where ${region} is the region you chose when creating the Space.
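
For instance, a Spaces configuration for a hypothetical nyc3 region would include:

  "region": "nyc3",
  "endpoint": "https://nyc3.digitaloceanspaces.com"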


Note: the backupsPath must start with an initial slash /.


To use S3, you must install the following Python packages:

  • boto3
  • python-magic
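
For example, with pip (note that python-magic also requires the libmagic system library to be available):

  pip install boto3 python-magic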

Configuration schema

  "type": "s3",
  "backupsPath": "Path in S3 where the backups will be located",
  "maxBackupsKept": "Indicates how many backups to keep in this storage, or set to null to keep them all",
  "region": "Region of the S3 storage",
  "endpoint": "Endpoint (if not set, uses Amazon S3 endpoint)",
  "accessKeyId": "Access Key ID",
  "accessSecretKey": "Access Secret Key",
  "bucket": "Name of the bucket"