Rclone Amazon Photos


For reasons as yet unknown, rclone has been banned from Amazon Drive. All attempts to get tokens are met with “Client authentication failed: invalid client”, and you'll find rclone has disappeared from the authorized apps in your Amazon account. This only affects Amazon Drive; it doesn't affect any of the other cloud providers that rclone supports.


Rclone Overview

Rclone provides a modern alternative to rsync. It is able to communicate with any S3 compatible cloud storage provider as well as other storage platforms, and can be used to migrate data from one bucket to another, even if those buckets are in different regions.

Requirements

  • You have an account and are logged into console.scaleway.com
  • You have configured your SSH Key
  • You have generated your API Key
  • You have at least two object storage buckets

Installing Rclone

1 . Connect to your server as root via SSH.

2 . Update the APT package cache and the software already installed on the instance:
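The exact commands were not reproduced above; on a Debian-based instance (the common case on Scaleway), this step is typically:

```
apt update && apt upgrade -y
```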

3 . Download and install Rclone with the following sequence of commands:
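The command sequence itself was not reproduced above; the usual approach, which should match this step, is rclone's official install script:

```
# Download and run the official rclone install script (run as root)
curl https://rclone.org/install.sh | bash

# Confirm the installed version
rclone version
```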

Configuring Rclone

1 . Begin rclone configuration with the following command:
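That command is rclone's interactive configuration tool:

```
rclone config
```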

If you do not have any existing remotes, the following output displays:

If you have previously configured rclone you may see a slightly different output. However, that does not affect the following steps.

2 . Type n to make a new remote. You are then prompted to type a name - here we type remote-sw-paris:

The following output displays:

3 . Type s3 and hit enter to confirm this storage type. The following output displays:

4 . Type Scaleway and hit enter to confirm this S3 provider. The following output displays:

5 . Type false and hit enter to be able to enter your credentials in the next step.

The following output displays:

6 . Enter your API Access Key and hit enter.

The following output displays:

7 . Enter your API Secret Key and hit enter.


The following output displays:

8 . Enter your chosen region and hit enter. Here we choose fr-par.

The following output displays:

9 . Enter your chosen endpoint and hit enter. Here we choose s3.fr-par.scw.cloud.

The following output displays:

10 . Enter your chosen ACL and hit enter. Here we choose private (1).

The following output displays:

11 . Enter your chosen storage class and hit enter. Here we choose STANDARD (2).

The following output displays:

12 . Type n and hit enter. A summary of your config displays:

13 . Type y to confirm that this remote config is OK, and hit enter.

The following output displays:

14 . Type q to quit the config, and hit enter.

15 . If you want to be able to transfer data to or from a bucket in a different region to the one you just set up, repeat steps 1-14 to set up a new remote in the required region, entering the required region and endpoint at steps 8 and 9. Similarly, you may wish to set up a new remote for a different object storage provider.
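For reference, a remote configured through these steps ends up as a section in the rclone configuration file (usually ~/.config/rclone/rclone.conf). The values below are placeholders, not real credentials:

```
[remote-sw-paris]
type = s3
provider = Scaleway
env_auth = false
access_key_id = <YOUR-API-ACCESS-KEY>
secret_access_key = <YOUR-API-SECRET-KEY>
region = fr-par
endpoint = s3.fr-par.scw.cloud
acl = private
storage_class = STANDARD
```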

Note: For further information, please refer to the official Rclone S3 Object Storage documentation. Official documentation also exists for other storage backends.

Migrating data

There are two commands that can be used to migrate data from one backend to another.

  • The copy command copies data from source to destination.

For example, the following command copies data from a bucket named my-first-bucket in the remote-sw-paris remote backend that we previously set up, to another bucket named my-second-bucket in the same remote backend. The --progress flag allows us to follow the progress of the transfer:
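With the names used in this tutorial (substitute your own remote and bucket names), the command would be:

```
rclone copy --progress remote-sw-paris:my-first-bucket remote-sw-paris:my-second-bucket
```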

  • The sync command copies data from one backend to another, but also deletes files/objects in the destination that are not present in the source:

For example, the following command copies data from a bucket named my-first-bucket in the remote-sw-paris remote backend that we previously set up, to another bucket named my-third-bucket in a different remote backend that we configured for the nl-ams region and named remote-sw-ams. It also deletes any data present in my-third-bucket that isn’t also present in my-first-bucket:
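With the example names above, the command would be:

```
rclone sync --progress remote-sw-paris:my-first-bucket remote-sw-ams:my-third-bucket
```

Because sync deletes destination objects that are missing from the source, consider a trial run with --dry-run first.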

Note: this migration may incur some costs from the object storage provider you are migrating from, since they may bill egress bandwidth.

There are other commands, such as move, which deletes data from the source backend once it has been transferred.

Transferring data to C14 Cold Storage

When you copy or sync, you can specify the storage class your data is transferred with.

At Scaleway Elements you can choose from two classes:

  • STANDARD: The Standard class for any upload; suitable for on-demand content like streaming or CDN.
  • GLACIER: Archived storage for long-term retention.

If the storage class is not specified, the data will be transferred as STANDARD by default.

To transfer data to the C14 Cold Storage class, add --s3-storage-class GLACIER to your command, as such:
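For example, reusing the example remote and bucket names from the migration section:

```
rclone copy --progress --s3-storage-class GLACIER remote-sw-paris:my-first-bucket remote-sw-paris:my-second-bucket
```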

You can verify the storage class of the transferred data by accessing your bucket on the Scaleway Elements console.

Amazon Drive

Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.

Status

Important: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program is now closed to new entries, so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive.

For the history on why rclone no longer has a set of Amazon Drive API keys, see the forum.

If you happen to know anyone who works at Amazon then please ask them to re-instate rclone into the Amazon Drive developer program - thanks!

Setup

The initial setup for Amazon Drive involves getting a token from Amazon, which you need to do in your browser. rclone config walks you through it.

The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.

Since rclone doesn't currently have its own Amazon Drive credentials, you will either need to have your own client_id and client_secret with Amazon Drive, or use a third party oauth proxy, in which case you will need to enter client_id, client_secret, auth_url and token_url.

Note also that if you are not using Amazon's auth_url and token_url (ie you filled in something for those), then when setting up on a remote machine you can only use the copying-the-config method of configuration:

  • rclone authorize will not work.

Here is an example of how to make a remote called remote. First run:
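That command is the same interactive configuration tool used throughout this article:

```
rclone config
```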

This will guide you through an interactive setup process:

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

List directories in top level of your Amazon Drive

List all the files in your Amazon Drive

To copy a local directory to an Amazon Drive directory called backup
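Assuming the remote was named remote as above, the corresponding commands are (the local path /home/source is just an example):

```
# List directories in the top level of your Amazon Drive
rclone lsd remote:

# List all the files in your Amazon Drive
rclone ls remote:

# Copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup
```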

Modified time and MD5SUMs

Amazon Drive doesn't allow modification times to be changed via the API, so these won't be accurate or used for syncing.

It does store MD5SUMs, so for a more accurate sync you can use the --checksum flag.
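For example (the remote name and local path are this article's examples):

```
rclone sync --checksum /home/source remote:backup
```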

Restricted filename characters

Character   Value   Replacement
NUL         0x00    ␀
/           0x2F    ／

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Deleting files

Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.

Using with non .com Amazon accounts

Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.

Standard Options

Here are the standard options specific to amazon cloud drive (Amazon Drive).

--acd-client-id

OAuth Client Id. Leave blank normally.

  • Config: client_id
  • Env Var: RCLONE_ACD_CLIENT_ID
  • Type: string
  • Default: ""

--acd-client-secret

OAuth Client Secret. Leave blank normally.

  • Config: client_secret
  • Env Var: RCLONE_ACD_CLIENT_SECRET
  • Type: string
  • Default: ""

Advanced Options

Here are the advanced options specific to amazon cloud drive (Amazon Drive).

--acd-token

OAuth Access Token as a JSON blob.

  • Config: token
  • Env Var: RCLONE_ACD_TOKEN
  • Type: string
  • Default: ""

--acd-auth-url

Auth server URL. Leave blank to use the provider defaults.

  • Config: auth_url
  • Env Var: RCLONE_ACD_AUTH_URL
  • Type: string
  • Default: ""

--acd-token-url

Token server URL. Leave blank to use the provider defaults.

  • Config: token_url
  • Env Var: RCLONE_ACD_TOKEN_URL
  • Type: string
  • Default: ""

--acd-checkpoint

Checkpoint for internal polling (debug).

  • Config: checkpoint
  • Env Var: RCLONE_ACD_CHECKPOINT
  • Type: string
  • Default: ""

--acd-upload-wait-per-gb

Additional time per GB to wait after a failed complete upload to see if it appears.

Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear.

The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears.

You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload, but the file will most likely appear correctly eventually.

These values were determined empirically by observing lots of uploads of big files for a range of file sizes.

Upload with the '-v' flag to see more info about what rclone is doing in this situation.

  • Config: upload_wait_per_gb
  • Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
  • Type: Duration
  • Default: 3m0s
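As a sketch (the paths are illustrative, and disabling the wait is not a recommendation), the feature can be turned off while watching rclone's behaviour with:

```
rclone copy -v --acd-upload-wait-per-gb 0 /home/source remote:backup
```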

--acd-templink-threshold

Files >= this size will be downloaded via their tempLink.


Files this size or more will be downloaded via their 'tempLink'. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.

To download files above this threshold, rclone requests a 'tempLink' which downloads the file through a temporary URL directly from the underlying S3 storage.


  • Config: templink_threshold
  • Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
  • Type: SizeSuffix
  • Default: 9G

--acd-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

  • Config: encoding
  • Env Var: RCLONE_ACD_ENCODING
  • Type: MultiEncoder
  • Default: Slash,InvalidUtf8,Dot

Limitations

Note that Amazon Drive is case insensitive, so you can't have a file called 'Hello.doc' and one called 'hello.doc'.

Amazon Drive has rate limiting, so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see the --retries flag), which should hopefully work around this problem.

Amazon Drive has an internal limit on the size of files that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.

At the time of writing (Jan 2016) it is in the area of 50GB per file. This means that larger files are likely to fail.

Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as for any other failure. To avoid this problem, use the --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.
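For example (the paths are illustrative):

```
rclone sync --max-size 50000M /home/source remote:backup
```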


rclone about is not supported by the Amazon Drive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.


See List of backends that do not support rclone about. See rclone about.