azcopy sync ignoring AZCOPY_BUFFER_GB #1215
Comments
Hi @aruvic …
Hi, are you sure this is a documentation issue? In previous issues and release notes this flag was announced as the way to limit memory consumption. Br,
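(For reference, a minimal sketch of how the variable is applied, assuming a Windows cmd session like the one in the report below; the 8 GB value is taken from this thread, and the %source%/%dest% placeholders stand in for real paths and SAS URLs.)

```bat
:: Cap AzCopy's file buffer pool for this session, then run the transfer.
:: Values and placeholders are illustrative, not a verified workaround.
set AZCOPY_BUFFER_GB=8
azcopy.exe sync %source% %dest% --recursive=true

:: On Linux/macOS the equivalent would be:
::   export AZCOPY_BUFFER_GB=8
```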
+1 on this issue. We currently have no way of syncing or copying over 15,000,000 files from one storage account to another (most of them skipped) inside a VM with 8 GB of RAM.
+1 on this issue too. It is impossible for us to sync over 7,300,000 files without using at least 10 GB of RAM.
AZCOPY_BUFFER_GB and other configuration problems aside, azcopy sync memory usage seems to grow very quickly and linearly with the number of files. Is there already an issue filed about this memory (ab)use and lack of scalability? This is surprising and disappointing considering …
Which version of the AzCopy was used?
Note: The version is visible when running AzCopy without any argument
AzCopy 10.6.0
Which platform are you using? (ex: Windows, Mac, Linux)
Windows
What command did you run?
Note: Please remove the SAS to avoid exposing your credentials. If you cannot remember the exact command, please retrieve it from the beginning of the log file.
azcopy.exe sync %source% %dest% --put-md5 --recursive=true --delete-destination=true --log-level=ERROR
What problem was encountered?
Even though AZCOPY_BUFFER_GB=8 is set and the azcopy log reports "Max file buffer RAM 8.000 GB", memory consumption grows to about 20 GB and we have to kill the process.
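As a hedged illustration (not part of the original report) of how the growth is observed and the job stopped, the working set can be checked from a second cmd window with standard Windows tools:

```bat
:: Watch azcopy's memory (working set) while the sync runs.
tasklist /FI "IMAGENAME eq azcopy.exe"

:: Once it climbs far past the configured 8 GB cap, kill the job.
taskkill /IM azcopy.exe /F
```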
How can we reproduce the problem in the simplest way?
Try to sync a 2 TB volume with thousands of folders and millions of files to an Azure Blob container.
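A hedged sketch of a synthetic repro tree; the C:\repro path and the folder/file counts are placeholders, to be scaled toward millions of files to see the growth described above:

```bat
@echo off
rem Build a wide tree of tiny files under C:\repro (hypothetical path).
rem 1000 folders x 1000 files = 1,000,000 files; increase as needed.
for /L %%d in (1,1,1000) do (
    mkdir C:\repro\dir%%d
    for /L %%f in (1,1,1000) do echo x> C:\repro\dir%%d\file%%f.txt
)
rem Then sync the tree and watch memory grow with the file count.
azcopy.exe sync C:\repro %dest% --recursive=true
```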
Have you found a mitigation/solution?
No