
Feature: S3 compatible api endpoint for Atomic Drive #542

Open
AlexMikhalev opened this issue Nov 24, 2022 · 2 comments

Comments

@AlexMikhalev
Collaborator

Give the user the ability to generate S3-compatible credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY), plus an S3-compatible AWS_ENDPOINT, so that the user can populate Atomic Drive using standard AWS S3-compatible tools like rclone (or one of the numerous UI clients).
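A rough sketch of what the credential-minting side could look like. Nothing here exists in atomic-server today: the struct, the function names, and the use of the `rand` crate are all illustrative assumptions.

```rust
// Hypothetical sketch: minting S3-style credentials for a Drive.
use rand::{distributions::Alphanumeric, Rng};

/// Credential pair a user could plug into rclone or the aws CLI.
struct S3Credentials {
    access_key_id: String,     // exposed to the user as AWS_ACCESS_KEY_ID
    secret_access_key: String, // exposed to the user as AWS_SECRET_ACCESS_KEY
}

fn random_token(len: usize) -> String {
    rand::thread_rng()
        .sample_iter(&Alphanumeric)
        .take(len)
        .map(char::from)
        .collect()
}

/// Mint a credential pair; the server would persist these and verify them
/// when authenticating requests on the S3-compatible endpoint.
fn generate_s3_credentials() -> S3Credentials {
    S3Credentials {
        // AWS access key ids are 20 chars and secrets 40; mimic that shape.
        access_key_id: format!("AK{}", random_token(18)),
        secret_access_key: random_token(40),
    }
}
```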

@joepio
Member

joepio commented Nov 24, 2022

Does this mean letting users access their Atomic Data as if atomic-server were an S3 bucket? Or something different?

And how does this relate to #543?

@AlexMikhalev
Collaborator Author

Yes. For example, if you want to put 100 images into Atomic Server, you can use rclone (or a cloud drive client), or the aws s3 cp command. It's not related to #543: there, Atomic Server acts as a client for an S3-compatible store, whereas in this scenario Atomic Server acts as an S3 server itself, like minio or seaweedfs.

That would not only allow users to upload their assets quickly, but also provide an interesting backup/sync mechanism: if you want to back up your data from one Atomic Server to another, you can use existing S3 tooling, of which there is plenty. A sketch of what that client side could look like is below.
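To make the distinction with #543 concrete, here is a rough sketch of a user (or another Atomic Server) pushing an object to such an endpoint with the Rust AWS S3 SDK. The endpoint URL, bucket name, and credentials are invented for the example, and none of this implies a particular server-side implementation.

```rust
use aws_sdk_s3::config::{Credentials, Region};
use aws_sdk_s3::primitives::ByteStream;
use aws_sdk_s3::Client;

#[tokio::main]
async fn main() -> Result<(), aws_sdk_s3::Error> {
    let config = aws_config::from_env()
        // Point the SDK at the Atomic Server instance instead of AWS.
        .endpoint_url("https://my-atomic-server.example/s3")
        .credentials_provider(Credentials::new(
            "AKEXAMPLEKEYID",            // AWS_ACCESS_KEY_ID issued by the server
            "example-secret-access-key", // AWS_SECRET_ACCESS_KEY issued by the server
            None,
            None,
            "atomic-server",
        ))
        // S3 clients expect a region even if the server ignores it.
        .region(Region::new("us-east-1"))
        .load()
        .await;

    let client = Client::new(&config);

    // Upload a single object; rclone or `aws s3 cp` would do the same in bulk.
    client
        .put_object()
        .bucket("drive")
        .key("images/photo-001.jpg")
        .body(ByteStream::from_static(b"...image bytes..."))
        .send()
        .await?;

    Ok(())
}
```

Depending on how the endpoint is implemented, clients may also need path-style addressing (`force_path_style`), as is common when talking to minio.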
