If the Latest Version of an Object Is the Same Version as What Will Be Restored, Skip It. #5

Closed · wants to merge 6 commits
README.md (2 additions & 0 deletions)
@@ -88,6 +88,7 @@ usage: s3-pit-restore [-h] -b BUCKET [-B DEST_BUCKET] [-d DEST]
                       [-P DEST_PREFIX] [-p PREFIX] [-t TIMESTAMP]
                       [-f FROM_TIMESTAMP] [-e] [-v] [--dry-run] [--debug]
                       [--test] [--max-workers MAX_WORKERS]
+                      [--avoid-duplicates]
 
 optional arguments:
   -h, --help            show this help message and exit
@@ -111,6 +112,7 @@ optional arguments:
   --test                s3 pit restore testing
   --max-workers MAX_WORKERS
                         max number of concurrent download requests
+  --avoid-duplicates    tries to avoid copying files that are already at the latest version
 ```
 
 ## Docker Usage
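For illustration only (not part of the diff): with the new flag, a bucket-to-bucket restore that skips objects already at the requested version might be invoked as follows. The bucket names and timestamp are placeholders.

```
s3-pit-restore -b my-bucket -B my-restore-bucket -t "06-17-2016 23:59:50 +2" --avoid-duplicates
```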
s3-pit-restore (19 additions & 1 deletion)
@@ -32,6 +32,7 @@ import os, sys, time, signal, argparse, boto3, botocore, \
 from datetime import datetime, timezone
 from dateutil.parser import parse
 from s3transfer.manager import TransferConfig
+from botocore.exceptions import ClientError
 
 args = None
 executor = None
@@ -235,14 +236,30 @@ def handled_by_standard(obj):
     return True
 
 def handled_by_copy(obj):
-    if args.dry_run:
+    if args.avoid_duplicates and not needs_copy(obj):
+        return True
+    if args.dry_run:
         print_obj(obj)
         return True
     future = executor.submit(s3_copy_object, obj)
     global futures
     futures[future] = obj
     return True
 
+def needs_copy(obj):
+    try:
+        destination_object_data = client.head_object(Bucket=args.dest_bucket, Key=obj["Key"])
+    except ClientError as error:
+        if error.response['ResponseMetadata']['HTTPStatusCode'] == 404:
+            return True
+        else:
+            raise error
+    # Won't work for files uploaded with different multipart chunk sizes
+    if args.bucket != args.dest_bucket:
+        return obj["ETag"] != destination_object_data["ETag"]
+    else:
+        return obj["VersionId"] != destination_object_data["VersionId"]
+
 def download_file(obj):
     transfer.download_file(args.bucket, obj["Key"], obj["Key"], extra_args={"VersionId": obj["VersionId"]})
     unixtime = time.mktime(obj["LastModified"].timetuple())
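The comment in needs_copy deserves a gloss. Cross-bucket comparisons fall back to ETags because copies receive fresh version IDs at the destination, and a multipart-upload ETag is not a plain MD5 of the content: it is generally understood to be the MD5 of the concatenated per-part MD5 digests, suffixed with the part count. The same bytes uploaded with different part sizes therefore carry different ETags, and needs_copy would report a spurious mismatch. A standalone sketch, not part of the PR, with invented helper name and sizes:

```
import hashlib

def multipart_etag(data: bytes, part_size: int) -> str:
    # Mimic the multipart ETag construction: MD5 over the concatenated
    # per-part MD5 digests, plus "-<number of parts>". S3 returns ETags
    # wrapped in quotes, so the quotes are included here too.
    parts = [data[i:i + part_size] for i in range(0, len(data), part_size)]
    combined = b"".join(hashlib.md5(p).digest() for p in parts)
    return '"%s-%d"' % (hashlib.md5(combined).hexdigest(), len(parts))

data = b"x" * (20 * 1024 * 1024)               # 20 MiB of identical content
print(multipart_etag(data, 8 * 1024 * 1024))   # 3 parts -> one ETag
print(multipart_etag(data, 16 * 1024 * 1024))  # 2 parts -> a different ETag
```

Both calls hash the same bytes, yet the digest and the part-count suffix differ, which is why the ETag comparison above is only reliable when source and destination objects were uploaded with the same multipart chunk size.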
@@ -393,6 +410,7 @@ if __name__=='__main__':
     parser.add_argument('--debug', help='enable debug output', action='store_true')
     parser.add_argument('--test', help='s3 pit restore testing', action='store_true')
     parser.add_argument('--max-workers', help='max number of concurrent download requests', default=10, type=int)
+    parser.add_argument('--avoid-duplicates', help='tries to avoid copying files that are already at the latest version', action='store_true')
     args = parser.parse_args()
 
     if args.dest_bucket is None and not args.dest:
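One behavioral note that falls out of the ordering in handled_by_copy: the duplicate check runs before the dry-run branch, so combining the two flags previews only the objects that would actually be copied (and still issues head_object requests during the dry run). A hypothetical preview invocation, with placeholder names as before:

```
s3-pit-restore -b my-bucket -B my-restore-bucket -t "06-17-2016 23:59:50 +2" --avoid-duplicates --dry-run
```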