S3 provides a simple way to upload files to the Amazon S3 service with a progress bar. This is useful for uploading images and files that you want to be accessible to the public. S3 is built on Knox and AWS-SDK; both modules are made available on the server after installing this package.
If you want to keep using the older version of this package (pre 0.9.0), check it out with meteor add lepozepo:s3@=3.0.1
If you want to keep using the version of this package that uses server resources to upload files, check it out with meteor add lepozepo:s3@=4.1.3
S3 now uploads directly from the client to Amazon. Client files will not touch your server.
Star my code on GitHub or Atmosphere if you like it, or shoot me a dollar or two!
- S3.upload path parameter: "" is now root instead of "/".
- Methods:
- _S3upload, _S3_abort_mpu, _S3_multipart_upload, and _S3_multipart_close have been removed
- _S3delete has been renamed to _s3_delete
- Infrastructure:
- Package no longer uses lepozepo:streams
$ meteor add lepozepo:s3
Uploads are now signed with the AWS V4 signature.
Define your Amazon S3 credentials. SERVER SIDE.
S3.config = {
  key: 'amazonKey',
  secret: 'amazonSecret',
  bucket: 'bucketName'
};
Create a file input and progress indicator. CLIENT SIDE.
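A minimal Blaze template sketch for this step, matching the event map and helper below; the percent_uploaded field assumes the documents in S3.collection carry the same keys as the upload Result described further down.
<template name="s3_tester">
  <input type="file" class="file_bag" multiple>
  <button class="upload">Upload</button>

  {{#each files}}
    <p>{{percent_uploaded}}% uploaded</p>
  {{/each}}
</template>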
Create an event handler to upload the files and a helper to watch the uploads' progress. CLIENT SIDE.
Template.s3_tester.events({
  "click button.upload": function(){
    // Grab the FileList from the file input
    var files = $("input.file_bag")[0].files;

    S3.upload({
      files: files,
      path: "subfolder"
    }, function(e, r){
      // e is an Error (if any), r is the upload Result
      console.log(r);
    }, function(progress){
      // progress is the percentage of bytes uploaded
      console.log(progress + "% uploaded");
    });
  }
});

Template.s3_tester.helpers({
  "files": function(){
    // Reactive cursor over the client-only S3.collection
    return S3.collection.find();
  }
});
For all of this to work you need an AWS account. On the AWS website, navigate to S3 and create a bucket in the US Standard region. Open your bucket, then click your account name at the top right and go to Security Credentials. From there, create a new access key under the Access Keys (Access Key ID and Secret Access Key) tab; this is the key and secret you will use in the first step of this package. Go back to your bucket and select the properties OF THE BUCKET, not of a file. Under Static Website Hosting you can enable website hosting: first upload a blank index.html file, then enable it. YOU'RE NOT DONE.
You need to set permissions so that everyone can see what's in there. Under the Permissions tab click Edit CORS Configuration and paste this:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Save it. Now click Edit bucket policy and paste this, REPLACE THE BUCKET NAME WITH YOUR OWN:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOURBUCKETNAMEHERE/*"
    }
  ]
}
Enjoy! This took me a long time to figure out, and I'm sharing it so that nobody else has to go through all that. NOTE: it might take a couple of hours before you can actually start uploading to S3; Amazon takes some time to make things work.
S3.collection is a null Meteor.Collection that exists only on the user's client. After the user leaves the page or refreshes, the collection disappears forever.
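A small sketch of how you might watch it reactively on the client; it assumes each document in S3.collection carries the percent_uploaded and uploader fields listed under the upload Result below.
// CLIENT SIDE: log progress whenever the client-only collection changes.
Tracker.autorun(function(){
  S3.collection.find().forEach(function(doc){
    // percent_uploaded and uploader are assumed to mirror the Result keys documented below
    console.log(doc.uploader + ": " + doc.percent_uploaded + "%");
  });
});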
S3.upload(ops, callback, progressCallback) is the upload function that manages all the dramatic things you need to do for something so essentially simple. A fuller usage sketch follows the parameter list below.
Parameters:
- ops.files [REQUIRED]: Must be a FileList object. You can get one via jQuery: $("input[type='file']")[0].files.
- ops.path [DEFAULT: ""]: Must be in this format ("folder/other_folder"), so never start with "/" and never end with "/". Defaults to the ROOT folder.
- ops.unique_name [DEFAULT: true]: If set to true, the uploaded file name will be set to a uuid without changing the file's extension. If set to false, the uploaded file name will keep the original name of the file.
- ops.encoding [OPTIONAL: "base64"]: If set to "base64", the file will be uploaded as a base64 string. The uploader enforces unique_name when this option is set.
- ops.expiration [DEFAULT: 1800000 (30 mins)]: Defines how much time, in milliseconds, the file has before Amazon denies the upload.
- ops.uploader [DEFAULT: "default"]: Defines the name of the uploader. Useful for forms that use multiple uploaders.
- ops.acl [DEFAULT: "public-read"]: Access Control List. Describes who has access to the file. Can only be one of the following options:
- "private"
- "public-read"
- "public-read-write"
- "authenticated-read"
- "bucket-owner-read"
- "bucket-owner-full-control"
- "log-delivery-write"
- Support for signed GET is still pending, so uploads that require authentication won't be easily reachable.
- ops.bucket [DEFAULT: SERVER SETTINGS]: Overrides the bucket that will be used for the upload.
- ops.region [DEFAULT: SERVER SETTINGS]: Overrides the region that will be used for the upload. Only accepts the following regions:
- "us-west-2"
- "us-west-1"
- "eu-west-1"
- "eu-central-1"
- "ap-southeast-1"
- "ap-southeast-2"
- "ap-northeast-1"
- "sa-east-1"
- callback: A function that is run after the upload is complete, receiving an Error as the first parameter (if there is one) and a Result as the second.
- Result: The object passed as the callback's second argument when there is no error. It has these keys:
- loaded: Integer (bytes)
- total: Integer (bytes)
- percent_uploaded: Integer (out of 100)
- uploader: String (describes which uploader was used to upload the file)
- url: String (S3 hosted URL)
- secure_url: String (S3 hosted URL for https)
- relative_url: String (S3 URL for delete operations, this is what you should save in your DB to control delete)
- progressCallback: A function called every time there is an update on the upload progress. It is called with the percentage of bytes uploaded.
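A fuller usage sketch pulling several of these options together; the "avatars" path, the Images collection, and its field names are illustrative assumptions, not part of the package.
Template.s3_tester.events({
  "click button.upload": function(){
    var files = $("input.file_bag")[0].files;

    S3.upload({
      files: files,
      path: "avatars",        // assumed folder inside the bucket
      unique_name: true,      // uuid file name, original extension kept
      acl: "public-read",     // anyone can GET the uploaded file
      expiration: 600000      // Amazon denies the upload after 10 minutes
    }, function(e, r){
      if (e) return console.error(e);

      // Save relative_url; it is what you need later to control deletes.
      Images.insert({
        url: r.secure_url,
        relative_url: r.relative_url
      });
    }, function(progress){
      console.log(progress + "% uploaded");
    });
  }
});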
This function permanently destroys a file located in your S3 bucket. It still needs more work on security in the form of allow/deny rules. A short example follows the parameter list below.
Parameters:
- path [REQUIRED]: Must be in this format ("/folder/other_folder/file.extension"), so always start with "/" and never end with "/".
- callback: A function that is run after the delete operation is complete, receiving an Error as the first parameter (if there is one) and a Result as the second.
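A short sketch, assuming the public delete API is S3.delete(path, callback) and that relative_url was saved from an earlier upload; Images and imageId are illustrative assumptions.
var image = Images.findOne(imageId);

// relative_url already starts with "/", which is the format the delete path expects.
S3.delete(image.relative_url, function(e, r){
  if (e) return console.error(e);
  Images.remove(imageId);  // forget the file only once the delete succeeds
});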
S3.config is where you define your key, secret, bucket, and other account-wide settings. SERVER SIDE.
Parameters:
- ops.key [REQUIRED]: Your Amazon AWS Key.
- ops.secret [REQUIRED]: Your Amazon AWS Secret.
- ops.bucket [REQUIRED]: Your Amazon AWS S3 bucket.
- ops.denyDelete [DEFAULT: undefined]: If set to true, will block delete calls. This is to enable secure deployment of this package before a more granular permissions system is developed.
- ops.region [DEFAULT: "us-east-1"]: Your Amazon AWS S3 Region. Defaults to US Standard. Can be any of the following:
- "us-west-2"
- "us-west-1"
- "eu-west-1"
- "eu-central-1"
- "ap-southeast-1"
- "ap-southeast-2"
- "ap-northeast-1"
- "sa-east-1"
S3.config = {
  key: 'amazonKey',
  secret: 'amazonSecret',
  bucket: 'bucketName'
};
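A sketch of the same configuration with the optional settings from the list above; the values are illustrative.
// SERVER SIDE: required keys plus the optional region and denyDelete settings.
S3.config = {
  key: 'amazonKey',
  secret: 'amazonSecret',
  bucket: 'bucketName',
  region: 'eu-west-1',  // overrides the default "us-east-1" (US Standard)
  denyDelete: true      // block delete calls until finer-grained permissions exist
};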
S3.knox is the current Knox client, made available on the server.
S3.aws is the current aws-sdk client, made available on the server.
- http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/frames.html
- https://github.com/Differential/meteor-uploader/blob/master/lib/UploaderFile.coffee#L169-L178
- http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html
- http://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
- https://github.com/CulturalMe/meteor-slingshot/blob/master/services/aws-s3.js