This article is a continuation of a series of discussions on the Azure Data Platform.
All storage operations with S3 command-line tools are idempotent and retriable, which makes them robust against provider failures, rate limits, and network connectivity disruptions. While s3cmd and azcopy must download files and folders in a step separate from uploading them, rclone can download and upload simultaneously in a single command invocation by specifying different data connections as command-line arguments. Because it transfers the data in a streaming manner, it is more performant and leaves little or no footprint on the local filesystem.
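The different data connections named on the command line are remotes defined once in the rclone configuration file. A minimal sketch of such a configuration, assuming an S3 remote named aws and an Azure Blob remote named azure (the remote names, account name, and region are illustrative placeholders, not values from this article):

# ~/.config/rclone/rclone.conf
[aws]
type = s3
provider = AWS
env_auth = true
region = us-east-1

[azure]
type = azureblob
account = blobaccount
key = <storage-account-key>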
For example:

user@macbook localFolder % rclone copy --metadata --progress aws:s3-namespace azure:blob-account
Transferred: 215.273 MiB / 4.637 GiB, 5%, 197.226 KiB/s, ETA 6h32m14s

streams directly from one remote to the other, leaving no trace in the local filesystem.
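Once the transfer completes, the two sides can be compared without downloading any data. A minimal sketch using rclone's check command against the same two remotes (the --one-way flag, which restricts the comparison to files present on the source, is a suggested option rather than part of the original example):

user@macbook localFolder % rclone check --one-way aws:s3-namespace azure:blob-account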
The following script can be used to set tags on objects when copying them from an S3 source to a destination, even when the --metadata parameter of the rclone command does not carry them over:
#!/bin/bash
for name in $(rclone lsf con-aws-1:srcFolder --recursive); do
    # Directory entries from rclone lsf end with a slash; skip them.
    if [[ $name != */ ]]; then
        # Read the source object's tag set as JSON.
        var=$(aws s3api get-object-tagging --bucket "srcFolder" --key "$name" --output json --query "TagSet")
        echo "$var"
        count=$(echo "$var" | jq length)
        tags=""
        # Flatten the TagSet into Key=Value pairs joined by '&'.
        for (( i=0; i < count; i++ )); do
            tags+=$(echo "$var" | jq ".[$i].Key")"="$(echo "$var" | jq ".[$i].Value")"&"
        done
        # Strip the quotes jq leaves around keys and values.
        tags=$(echo "$tags" | tr -d '"')
        echo "$tags"
        # e.g. Key2=Value2&Key1=Value1&
        if [[ -n $tags ]]; then
            rclone copy --metadata-set "$tags" "con-aws-1:/srcFolder/$name" con-aws-1:/destFolder
        else
            rclone copy "con-aws-1:/srcFolder/$name" con-aws-1:/destFolder
        fi
    fi
done
#[
#  {
#    "Key": "Key2",
#    "Value": "Value2"
#  },
#  {
#    "Key": "Key1",
#    "Value": "Value1"
#  }
#]
#Key2=Value2&Key1=Value1&
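Note that rclone's --metadata-set writes user metadata on the destination objects rather than S3 object tags. If real tags are required on the destination, the tag set read in the loop can be re-applied per object after the copy. A minimal sketch, reusing the $name and $var variables from the loop above and assuming the copy lands in a bucket named destFolder (both the bucket name and the placement inside the loop are assumptions):

# Re-apply the original TagSet to the copied object; "destFolder" is an assumed bucket name.
aws s3api put-object-tagging --bucket "destFolder" --key "$name" --tagging "{\"TagSet\": $var}"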