The difference between the request time and the current time is too large #2074
-
**Describe the bug**

I'm getting the error "The difference between the request time and the current time is too large" when trying to upload a big file using AWSSDK.S3 version 3.7.9.21. If I use AWS Tools for Windows PowerShell version 3.15.172 it works, but I want to use C# code to upload this file. This is my code:

```csharp
using var s3 = new AmazonS3Client(
    Settings.Default.AWSAccessKey,
    Settings.Default.AWSSecretKey,
    Amazon.RegionEndpoint.GetBySystemName(Settings.Default.AWSRegionEndpoint));

s3.PutObject(new PutObjectRequest
{
    BucketName = Settings.Default.AWSS3BucketName,
    FilePath = backupsFileName
});
```

**Expected Behavior**

Upload a large file using AWSSDK C# code without any problem.

**Current Behavior**

The error "The difference between the request time and the current time is too large" is returned.

**Reproduction Steps**

**Possible Solution**

No response

**Additional Information/Context**

No response

**AWS .NET SDK and/or Package version used**

AWSSDK.S3 3.7.9.21

**Targeted .NET Platform**

.NET Framework 4.8

**Operating System and version**

Windows Server 2016 (AWS EC2 instance)
Replies: 9 comments 3 replies
-
Hi @JaderCM, good afternoon. The article "How to fix S3 (RequestTimeTooSkewed) error" documents the possible reasons behind this issue and ways to fix it. Also, the recommended way to upload large files is to use multipart uploads; refer to "Uploading and copying objects using multipart upload" for more details. The page "Uploading an object using multipart upload" lists .NET examples using both the high-level API (TransferUtility) and the low-level APIs. Just FYI, the AWS Tools for PowerShell uses TransferUtility (and hence multipart upload) behind the scenes. Thanks,
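A minimal sketch of the suggested TransferUtility approach, reusing the same `Settings` values and `backupsFileName` variable from the original snippet (the `PartSize` value here is an illustrative choice, not a required one):

```csharp
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

var s3 = new AmazonS3Client(
    Settings.Default.AWSAccessKey,
    Settings.Default.AWSSecretKey,
    RegionEndpoint.GetBySystemName(Settings.Default.AWSRegionEndpoint));

var transferUtility = new TransferUtility(s3);

// For large files TransferUtility performs a multipart upload, signing
// each part separately, so no single request runs long enough to fall
// outside the service's accepted request-time window.
transferUtility.Upload(new TransferUtilityUploadRequest
{
    BucketName = Settings.Default.AWSS3BucketName,
    FilePath = backupsFileName,
    PartSize = 16 * 1024 * 1024 // illustrative 16 MB parts
});
```

This requires valid AWS credentials and an existing bucket to run, so treat it as a starting point rather than a drop-in replacement.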
-
@ashishdhingra those solutions are still workarounds, in my opinion. Do you have a better solution?
-
@JaderCM As you mentioned that the upload works fine using AWS Tools for PowerShell, the recommended way to upload large files is to use TransferUtility, since it increases throughput and recalculates the signature for each uploaded part.
-
@ashishdhingra do you know if this signature assertion could be refactored into a smarter solution? At the moment, all the data traffic already sent is lost, with only a generic error message, for a common scenario (uploading a large file to S3).
-
Please elaborate. Nothing I'm aware of in the SDK. The error is thrown by the S3 service.
That's the reason it is recommended to use multipart uploads for large files, as per the documentation.
-
@ashishdhingra my suggestion is: could you add some extra info to that error message when the real "now" (AWS UTC time) is equivalent to the current machine's UTC time? Something like: "Try an alternative way to upload your file, such as multipart upload."
-
@JaderCM The error message is not controlled by the SDK; rather, it is returned by the S3 service. And the message is independent of the mechanism used by the caller, whether single-file upload or multipart upload.
-
@ashishdhingra thanks for explaining this important point, that the SDK code is autogenerated from service models. In that case, I understand your point about avoiding specific exception handlers. In the end, this discussion can serve as a future reference for the solution in this case: use TransferUtility.
-
Hello! Reopening this discussion to make it searchable.