
Big file download from SMB via fast networks eats up memory #29328

Closed
mmattel opened this issue Oct 23, 2017 · 32 comments · Fixed by owncloud-archive/documentation#3503

Comments

@mmattel
Contributor

mmattel commented Oct 23, 2017

Steps to reproduce

  1. Have a big file (mine is 6.8 GB) on an SMB share mounted to oC
  2. Have an (important!) fast connection between server and client (LAN)
  3. Download this file via browser

Expected behaviour

Download should pass without problems

Actual behaviour

Download stops after a while (3-4 GB downloaded) and the system becomes unresponsive; after some minutes, the system is accessible again.

This does not occur when

  • downloading small/smaller files via SMB
  • downloading from local, meaning the same file is not on SMB but in the user's oC home (in my environment the datadirectory is on NFS)
  • downloading via a slow connection, e.g. the internet with only a few Mbit/s

Testing

Monitoring the system via htop under different scenarios:
On fast/SMB, memory and swap get eaten up, not from the beginning but after a while, until the system becomes unresponsive; the download breaks, and after some minutes the system becomes responsive again and the memory is recovered. Memory recovery also takes place if the download finishes before memory is exhausted. This is observable.
libsmbclient installed in the latest public version (sudo pecl upgrade smbclient)
Network performance = browser (client) side; the SMB backend delivering the data is fast (55-60 MB/s)

Slow network, SMB: 3.2 Mbit/s (manually stopped after 510 MB = ~26 min, no issues)

Fast network, SMB, 114 Mbit/s (memory starts rising from 345 to 603+ MB)
The memory rise appears between 1.2 and 3.7 GB downloaded, mostly at the higher numbers

Fast network, SMB, 114 Mbit/s (memory continues rising from 603 to 980+ MB)
Memory consumption continues

Pause or stop the download at 980 MB, wait a minute
Memory gets freed up

Fast network, local (which is on an NFS mountpoint, 200 Mbit/s); memory stays at ~380 MB

Server configuration

Operating system: ubuntu 16.04

Web server: nginx 1.13.6 (currently latest)

Database: mysql

PHP version: 7.0

ownCloud version: 10.0.3 production

Updated from an older ownCloud or fresh install: updated

Where did you install ownCloud from: tar

Signing status (ownCloud 9.0 and above):

No errors have been found.

The content of config/config.php:
config_report_20171023.txt

List of activated apps:

Enabled:
  - activity: 2.3.6
  - comments: 0.3.0
  - configreport: 0.1.1
  - dav: 0.3.0
  - federatedfilesharing: 0.3.1
  - federation: 0.1.0
  - files: 1.5.1
  - files_clipboard: 0.6.2
  - files_external: 0.7.1
  - files_external_dropbox: 1.0.0
  - files_external_ftp: 0.2.0
  - files_pdfviewer: 0.8.2
  - files_sharing: 0.10.1
  - files_texteditor: 2.2.1
  - files_trashbin: 0.9.1
  - files_versions: 1.3.0
  - files_videoplayer: 0.9.8
  - firstrunwizard: 1.1
  - gallery: 16.0.2
  - market: 0.2.2
  - music: 0.5.3
  - notifications: 0.3.1
  - provisioning_api: 0.5.0
  - systemtags: 0.3.0
  - templateeditor: 0.1
  - updatenotification: 0.2.1
Disabled:
  - encryption
  - external
  - files_antivirus
  - theme-example
  - user_external

Are you using external storage, if yes which one: local / smb / sftp /

Are you using encryption: no

Are you using an external user-backend, if yes which one: no

Client configuration

Browser: Opera, Chrome, same behaviour

Operating system: W10x64

Logs

Web server error log

nothing related

ownCloud log (data/owncloud.log)

nothing related, because I paused/stopped the download. Otherwise an upstream timeout for php-fpm if I let it crash.

Browser log

nothing related

@PVince81 @DeepDiver1975

@jvillafanez
Member

Is libsmbclient-php installed and used?

@mmattel
Contributor Author

mmattel commented Oct 23, 2017

yes, as written above

dpkg -s libsmbclient

Package: libsmbclient
Status: install ok installed
Priority: optional
Section: libs
Installed-Size: 276
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Architecture: amd64
Multi-Arch: same
Source: samba
Version: 2:4.3.11+dfsg-0ubuntu0.16.04.11

@hodyroff

More RAM solves the issue, or?

@mmattel
Contributor Author

mmattel commented Oct 23, 2017

More memory just delays the time to crash and is not the solution.
Low(er)-memory systems in smaller environments therefore suffer the most from it.
These issues need to be fixed on low-memory systems so we do not run into them on higher-memory systems, where they are much harder to investigate.
I have 1.2 GB of memory and the system fails to download a 6.8 GB file?
Think about big backup or multimedia files.

@ghost

ghost commented Oct 23, 2017

@mmattel
Contributor Author

mmattel commented Oct 23, 2017

dpkg -s php-smbclient

Package: php-smbclient
Status: install ok installed
Priority: optional
Section: php
Installed-Size: 84
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Architecture: amd64
Version: 0.8.0~rc1-2build1
Provides: php-libsmbclient
Depends: ucf, php-common (>= 1:7.0+33~), phpapi-20151012, libc6 (>= 2.14), libsmbclient (>= 2:4.1.1)
Conffiles:
 /etc/php/7.0/mods-available/smbclient.ini 854bee29f153a54b7e7cd275261d8073
Description: PHP wrapper for libsmbclient
 smbclient is a PHP extension that uses Samba's libsmbclient library
 to provide Samba related functions and 'smb' streams to PHP programs.
Original-Maintainer: Debian PHP PECL Maintainers <pkg-php-pecl@lists.alioth.debian.org>
Homepage: http://pecl.php.net/smbclient

@ghost

ghost commented Oct 23, 2017

Installed doesn't mean it's loaded correctly, so another check you could do is to verify this via phpinfo().

@mmattel
Contributor Author

mmattel commented Oct 23, 2017

phpinfo()

Additional .ini files parsed: /20-smbclient.ini (shortened)

libsmbclient
Version 0.9.0
smbclient support: enabled
smbclient extension version: 0.9.0
libsmbclient library version: 4.3.11-Ubuntu

@mmattel
Contributor Author

mmattel commented Oct 23, 2017

@kdslkdsaldsal referring to #29328 (comment):
even though it has no impact on this case, please adapt the documentation to reflect your statement.

@ghost

ghost commented Oct 23, 2017

@mmattel Ok, that looks good and shows that the module is correctly loaded. If you think anything needs to be adapted, feel free to create a new issue in the issue tracker of the documentation. 😃

@jvillafanez
Member

@mmattel could you check running the following script from your nginx server? Adjust the parameters (workgroup, username, password and smbfile) in order to download your file.

<?php
$workgroup = null;
$username = 'tester';
$password = 'password';
$smbfile = 'smb://localhost/testshare/testdir/testfile.txt';
$localfile = '/tmp/localfile.txt';

// Create new state:
$state = smbclient_state_new();

// Initialize the state with workgroup, username and password:
smbclient_state_init($state, $workgroup, $username, $password);

// Open a file for reading:
$file = smbclient_open($state, $smbfile, 'r');

$localfiledesc = fopen($localfile, 'wb');

// Read the file incrementally, dump contents to output:
while (true) {
	$data = smbclient_read($state, $file, 4096);
	if ($data === false || strlen($data) === 0) {
		break;
	}
	fwrite($localfiledesc, $data);
}

fclose($localfiledesc);
// Close the file handle:
smbclient_close($state, $file);

// Free the state:
smbclient_state_free($state);

I haven't seen any possible memory leak in our code, so it might be in libsmbclient-php or lower libraries (libsmbclient), or maybe nginx could be buffering the whole response.

The above script is quite simple and should help to determine where the problem might be:

  • If run standalone and the problem appears, then the memory leak should be either in the libsmbclient or libsmbclient-php libraries.
  • If run through nginx (supposing that the problem isn't in the libsmbclient nor libsmbclient-php libraries) and the problem appears, then nginx might be the one to blame.

You can also check with apache. Last time I checked (with apache) in a VM with 2-3 GB of memory, fetching a file of around 5 GB, I didn't experience any issue. PHP memory limitations were the default (512 MB). This was between 2 different VMs on the same host, although I don't remember if I put a network constraint in place to test another issue.
I'll recheck anyway.

@mmattel
Contributor Author

mmattel commented Oct 24, 2017

(Edited)
I tested as suggested, following results / comments:

  • I used the code above and adapted it a little to see if there are any errors.
    Copying a 6.8 GB file from SMB to a local mount which is on NFS went fine.
    Memory consumption was low and did not change. This means there is no problem with libsmbclient-php or lower libraries when not going through the webserver.
  • It would be great to test the same code idea but with output to the browser.
    I have no coding experience in that and need help.
  • Q: In the test above we read the file in chunks via a while statement until its end.
    Each chunk result is continuously written with fwrite. When writing to the browser, our code uses readfile, which, as I understand it, sends from a source. The question is: where is the data buffered?
    I reference https://zinoui.com/blog/download-large-files-with-php (chunked download), which uses echo and not readfile. That is clear to me, but not how we seem to do it, which looks to me like two separate processes, or?
  • As written in the issue on top, it is a matter of low memory (my system has 1.2 GB) AND SMB (local is fine) AND low/high bandwidth to the browser, where low bandwidth causes no issues...

My php.ini has output_buffering = off and nginx is also configured with fastcgi_request_buffering off

@mmattel
Contributor Author

mmattel commented Oct 24, 2017

It seems that I found a fix (or better a workaround)...

While documenting the nginx parameter fastcgi_request_buffering off, I questioned the word request and took another look at the nginx documentation: http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_buffering

fastcgi_request_buffering
Enables or disables buffering of a client request body

and with a quick search I found:

fastcgi_buffering default on
Enables or disables buffering of responses from the FastCGI server

Testing this parameter in on/off situations, I can tell that having it set to off solves the problem 😄
Totally weird.

I am not happy with the solution because I still think that the root cause is in the oC code. I cannot imagine that nginx would have a memory issue for such a long time without anybody identifying it...

I will file a documentation PR asap.
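
For reference, a minimal sketch of how this workaround could look in an nginx PHP handler block. The socket path and location pattern are illustrative, not taken from this setup:

```nginx
location ~ \.php(?:$|/) {
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;  # illustrative socket path

    # Workaround: stream responses from php-fpm to the client
    # instead of buffering them in nginx.
    fastcgi_buffering off;

    # Already documented: disables buffering of the client request body.
    fastcgi_request_buffering off;
}
```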

@hodyroff

Wow, thanks! So we can close this one?

@mmattel
Contributor Author

mmattel commented Oct 25, 2017

No.
It is necessary to investigate why the combination

  • Low memory (my system has 1.2GB)
  • AND SMB (local is fine)
  • AND high bandwidth to the browser (low is fine)
  • AND big file download (many GB)

causes this issue.

@jvillafanez
Member

I'm checking with apache and I don't see any problem downloading a 4.4 GB file between local VMs (13-15 MB/s). VM memory was limited to 2 GB, and the docker container (with ownCloud) inside that VM was limited to 512 MB. Memory consumption stays stable during the whole transfer; I also checked with output buffering on and off (according to the .htaccess file).

You might want to retry with apache to check if the problem is with nginx.

Note that, as far as I know, nginx isn't officially supported, so I don't think there will be more progress other than verifying that apache works under similar conditions.

@hodyroff

Just to make sure: nginx is supported. Our recommendation and default, however, continues to be Apache, as we don't see a huge advantage, and there are far more performance improvements we can make inside ownCloud than by using a different web server.
It seems that nginx needs additional documentation for the optimal setup, so your efforts are very much appreciated @mmattel

@mmattel
Contributor Author

mmattel commented Oct 25, 2017

You are right. As you have seen, I already filed a fix/workaround for nginx in the documentation repo.
(Please look at it and approve so it gets merged quickly.)
I am just writing a small script to download from SMB via browser, not with readfile but with echo in a while loop, and will let you know the result as soon as I have it.
Such things are always good to investigate, to see the different aspects and impacts and to set the proper measures.

@mmattel
Contributor Author

mmattel commented Oct 25, 2017

Made some more tests.

I created a subpage and added the two scripts below.
I used the same php-fpm configuration settings as in my owncloud domain, and set the parameter fastcgi_buffering once to OFF, respectively commented it out to get the default ON.

Results

  • The download completed in all tests
  • Memory consumption did not change or rise
  • fastcgi_buffering OFF or ON made no difference
  • I needed to add ini_set('max_execution_time', 3600); to avoid timeouts
  • Download performance was only 8-9 MB/s, compared to 13-14 MB/s with readfile
  • CPU load was about the same, 20-30% for PHP, but negligible in nginx (which was high when using readfile)

Personal opinion

  • It is not browser related
  • It is not directly related to php-smbclient
  • It is not directly related to nginx
  • It makes a difference whether readfile or echo in a while loop is used.
  • The issue seems to be a combination of how PHP prepares the data and hands it over to the webserver

I come back to the question: if we on the one hand use fwrite in the while loop until it is finished, and on the other hand use readfile, where does fwrite write to so that readfile can use it? From my understanding these are two processes; I do not know if they must run sequentially or can run in parallel...

index.html

<form>
<input type="button" value="Download file" onclick="window.location.href='./download.php'" />
</form>

download.php
Note: the echo statements just helped to see whether all smbclient related calls passed before I echoed the result. Just comment out the while block or the echoes to check.

<?php

# 3600 seconds = 1 hour
ini_set('max_execution_time', 3600);

echo "Starting" . "<br>";

$workgroup = 'xyz';
$username = 'abc';
$password = 'def';
$smbfile = 'smb://server/share/file';

# Create new state:
$state = smbclient_state_new();
echo "New: " . smbclient_state_errno($state) . "<br>";

# Initialize the state with workgroup, username and password:
smbclient_state_init($state, $workgroup, $username, $password);
echo "Init: " . smbclient_state_errno($state) . "<br>";

$stat = smbclient_stat($state, $smbfile);
echo "State: " . smbclient_state_errno($state) . "<br>";

$filesize = $stat['size'];
echo "Filesize: " . number_format($filesize) . "<br>";
#echo '<pre>'; print_r($stat); echo '</pre>';

// Open a file for reading:
$file = smbclient_open($state, $smbfile, 'r');
echo "Open: " . smbclient_state_errno($state) . "<br>";

header('Content-Description: File Transfer');
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="'.basename($smbfile).'"');
header('Expires: 0');
header('Cache-Control: must-revalidate');
header('Pragma: public');
header("Content-length: $filesize");

echo "Start sending file " . $smbfile . "<br>";
# Read the file incrementally, dump contents to output:
while (true) {
        $data = smbclient_read($state, $file, 4096);
        if ($data === false || strlen($data) === 0) {
                break;
        }
        echo $data;
}
echo "Finished sending file" . "<br>";

# Close the file handle:
smbclient_close($state, $file);
echo "Close: " . smbclient_state_errno($state) ."<br>";

# Free the state:
smbclient_state_free($state);

echo "Download finished";

?>

@jvillafanez
Member

@mmattel you might want to check how the php://output stream is handled because that's the only thing that might be different:

echo "Start sending file " . $localfile . "<br>";
# Read the file incrementally, dump contents to output:
$output = fopen('php://output', 'wb');
while (true) {
        $data = smbclient_read($state, $file, 4096);
        if ($data === false || strlen($data) === 0) {
                break;
        }
        fwrite($output, $data);
}

Hopefully that's close enough to what ownCloud should be doing.

@ghost

ghost commented Oct 25, 2017

Just keep in mind that running PHP scripts via php-cli vs. mod_php (in apache) vs. php-fpm (in nginx) might be completely different things where not even the webserver but the SAPI plays a role.

@mmattel
Contributor Author

mmattel commented Oct 25, 2017

@kdslkdsaldsal
I ran all tests not via the CLI but from the browser! (Forgot to mention...)

@jvillafanez
I ran the test with the adapted code; it passes fine, no memory issues, which is what I expected!
But the difference, looking at your statement, is that oC uses, as far as I can identify, readfile to pass the data to the browser, while these tests use echo or fwrite inside a while block.

@ghost

ghost commented Oct 26, 2017

@mmattel Yes, but nginx is using php-fpm and Apache mod_php.

Both SAPIs might behave completely differently, even though they are both "PHP". This could even be caused by a bug in PHP only showing up in php-fpm under certain circumstances, or similar.

@jvillafanez
Member

Yes, all the tests should be done through nginx because the problem is there.

But the difference, looking at your statement, is that oC uses, as far as I can identify, readfile to pass the data to the browser, while these tests use echo or fwrite inside a while block.

I don't think that's the case, since all file access should be done through the webdav interface, so downloads should be controlled by the webdav module, which uses the stream_copy_to_stream function (or should).
The reason we aren't using that function in the test script is that we need an open stream to read the data from, which is more difficult to get without using ownCloud's code.
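
A minimal sketch of that code path, where a plain local file stands in for the SMB stream (the path is illustrative; getting the real SMB stream would require ownCloud's storage layer):

```php
<?php
// Hypothetical source: a local file standing in for the SMB stream.
$source = fopen('/tmp/testfile.bin', 'rb');
// php://output writes straight to the SAPI output (the HTTP response body).
$output = fopen('php://output', 'wb');

// Copies the stream to the response in chunks; the whole file
// is never held in PHP memory at once.
stream_copy_to_stream($source, $output);

fclose($source);
fclose($output);
```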

You might want to check what happens if you add the following line in the apps/dav/lib/Connector/Sabre/FilesPlugin.php file (with the nginx buffering active):

                        $checksum = $node->getChecksum('sha1');
                        if ($checksum !== null && $checksum !== '') {
                                $response->addHeader('OC-Checksum', $checksum);
                        }
                        $response->addHeader('X-Accel-Buffering', 'no');    // <----- line added

That should disable the buffering for the file downloads.

@mmattel
Contributor Author

mmattel commented Oct 26, 2017

I just ran the test by commenting out fastcgi_buffering (now defaulting to on)
and adding the addHeader('X-Accel-Buffering', 'no') statement as suggested.
I also tried it with: addHeader('X-Accel-Buffering: no');

Memory consumption rises again.

Personal opinion
This is a very good approach, but maybe it was the wrong place, or not the only place, to add it...
Maybe not everyone has the ability to change the nginx config because of hosting...

Thinking about the following situation:
could this also become an issue on federated systems when loading multi-GB files from one federated instance to the other, in environments as described in this issue?

@jvillafanez
Member

I can confirm the problem with a configuration similar to the one proposed by ownCloud before removing the nginx buffering (data from "top"):

 1503 www-data  20   0  564444  44692  33104 R 41.7  2.2   1:08.17 php-fpm7.0   
 1101 www-data  20   0  612764 491868   2940 S 32.3 24.0   0:51.64 nginx

nginx memory consumption keeps increasing (php-fpm7.0 is stable)

Adding the line mentioned in #29328 (comment) seems to do the trick for me, keeping nginx's memory usage low (0.2 compared with the data above; php-fpm7.0 keeps a similar value of 2.2).

I'm using the following configuration (based on what is in the documentation):
for the server configuration:

upstream php-handler {
  server unix:/var/run/php/php7.0-fpm.sock;
}

server {
        listen 80;
        server_name localhost;

        # Path to the root of your installation
        root /opt/owncloud/;

        client_max_body_size 5G; # set max upload size
        fastcgi_buffers 64 4K;

        rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
        rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
        rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;

        index index.php;
        error_page 403 /core/templates/403.php;
        error_page 404 /core/templates/404.php;

        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }

        location ~ ^/(data|config|\.ht|db_structure\.xml|README) {
                deny all;
        }

        location / {
                # The following 2 rules are only needed with webfinger
                rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
                rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;

                rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
                rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;

                rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;

                try_files $uri $uri/ index.php;
        }

        location ~ ^(.+?\.php)(/.*)?$ {
                try_files $1 = 404;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$1;
                fastcgi_param PATH_INFO $2;
                fastcgi_pass php-handler;
        }

        # Optional: set long EXPIRES header on static assets
        location ~* ^.+\.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
                expires 30d;
                # Optional: Don't log access to assets
                access_log off;
        }
}


server {
  listen 443 ssl;
  server_name localhost;

  ssl_certificate /etc/nginx/ssl/server.crt;
  ssl_certificate_key /etc/nginx/ssl/server.key;

  # Path to the root of your installation
  root /opt/owncloud/;
  # set max upload size
  client_max_body_size 10G;
  fastcgi_buffers 64 4K;

  rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
  rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
  rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;

  index index.php;
  error_page 403 /core/templates/403.php;
  error_page 404 /core/templates/404.php;

  location = /robots.txt {
    allow all;
    log_not_found off;
    access_log off;
    }

  location ~ ^/(?:\.htaccess|data|config|db_structure\.xml|README){
    deny all;
    }

  location / {
   # The following 2 rules are only needed with webfinger
   rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
   rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;

   rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
   rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;

   rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;

   try_files $uri $uri/ /index.php;
   }

   location ~ \.php(?:$|/) {
   fastcgi_split_path_info ^(.+\.php)(/.+)$;
   include fastcgi_params;
   fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
   fastcgi_param PATH_INFO $fastcgi_path_info;
   fastcgi_param HTTPS on;
   fastcgi_pass php-handler;
   }

   # Optional: set long EXPIRES header on static assets
   location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
       expires 30d;
       # Optional: Don't log access to assets
         access_log off;
   }

  }

nginx configuration is the default provided by ubuntu (mail section is commented, so it's ignored here):

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
	worker_connections 768;
	# multi_accept on;
}

http {

	##
	# Basic Settings
	##

	sendfile on;
	tcp_nopush on;
	tcp_nodelay on;
	keepalive_timeout 65;
	types_hash_max_size 2048;
	# server_tokens off;

	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	##
	# SSL Settings
	##

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
	ssl_prefer_server_ciphers on;

	##
	# Logging Settings
	##

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	##
	# Gzip Settings
	##

	gzip on;
	gzip_disable "msie6";

	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

	##
	# Virtual Host Configs
	##

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}

@jvillafanez
Member

@mmattel you might want to recheck the custom header solution (#29400). If that doesn't work, you might want to post your nginx configuration so we can check what setup is the problematic one.

@mmattel
Contributor Author

mmattel commented Oct 31, 2017

Retried - works 👍

It seems that in my earlier test I put the code into the wrong line...

@mmattel
Contributor Author

mmattel commented Oct 31, 2017

I have created an nginx ticket: https://trac.nginx.org/nginx/ticket/1408

@mmattel
Contributor Author

mmattel commented Nov 2, 2017

Update from nginx ticket:

Are you using fastcgi_buffers 64 4K; as in the configuration provided in the issue?
If yes, please test with a lower number of buffers (at most 63; the default 8 is fine).
Using 64 fastcgi_buffers as in the configuration provided will end up with 65 buffers
in total being used to read the response, and this makes it possible that ngx_readv_file()
will have to allocate memory which will only be freed when closing the connection, see d1bde5c3c5d2.

And

Yes, buffers were set to fastcgi_buffers 64 4K;
Testing with fastcgi_buffers 63 4K; as you said 63 should be the max, the download went fine
and no memory was eaten up.

Two points:
a.) The check in the referenced changeset is >64 and not >63 ...
b.) Add a documentation note about this limit in
http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_buffers
because many people do not know about this "silent" issue ...

Because our documentation has fastcgi_buffers 64 4K; as the parameter set, I will do a doc PR asap to correct this (reducing it to 8 4K, respectively adding a note not to exceed 63).

@jvillafanez we can also add to the doc the nginx conf parameter fastcgi_ignore_headers X-Accel-Buffering, which would re-enable buffering when set. If the admin cannot set it, we are fine because of your PR to disable buffering via the header...
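
For the documentation note, the directive could be sketched like this (a fragment only; it would go inside the existing PHP handler location block):

```nginx
# Make nginx ignore the X-Accel-Buffering response header sent by the
# application, so response buffering follows the fastcgi_buffering setting.
fastcgi_ignore_headers X-Accel-Buffering;
```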

@jvillafanez
Member

@jvillafanez we can also add to the doc the nginx conf parameter fastcgi_ignore_headers X-Accel-Buffering, which would re-enable buffering when set. If the admin cannot set it, we are fine because of your PR to disable buffering via the header...

Makes sense 👍

@lock

lock bot commented Jul 30, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked as resolved and limited conversation to collaborators Jul 30, 2019