0xStubs

System Administration, Reconfigurable Computing and Other Random Topics

Risks of nginx fastcgi buffering or, how iTunes can mess with your Nextcloud server

As suggested by the title, this post consists of two parts. The first part explains how nginx fastcgi buffering can be exploited relatively easily to fill up a server's hard drive. The second part tells you how I had to learn this the hard way…

Risks of nginx fastcgi buffering

If you use nginx in combination with fastcgi, e.g., to couple it with php-fpm, you should be aware of the different buffering and caching settings that are available. Buffering of fastcgi responses is a neat feature of nginx: it temporarily stores responses from the fastcgi processes so that those processes are immediately ready for new requests, even if sending a response to the visitor of your site takes some time.

The problem discussed here is related to the fastcgi_max_temp_file_size setting. It configures a hard-disk-based buffer for responses that are too large to fit into the in-memory buffer. By default, it is set to 1024m, meaning that responses up to 1024 MB in size are buffered in temporary files on the hard disk. To see how this works and to demonstrate a possible issue, we can set up a little test.
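For reference, the directives involved look roughly like this in an nginx configuration. This is only a sketch: the values shown are the documented defaults, the exact buffer sizes and temp path depend on your platform and build, and the location block is purely illustrative:

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php-fpm.sock;   # illustrative upstream

    fastcgi_buffering on;                      # enabled by default
    fastcgi_buffers 8 4k;                      # in-memory buffers (default: 8 buffers of one memory page)
    fastcgi_max_temp_file_size 1024m;          # per-response on-disk buffer limit (default)
    fastcgi_temp_path /var/lib/nginx/fastcgi;  # where the temporary files are created
}
```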

On our server we create a 100 MB binary test file:

# dd if=/dev/urandom of=testfile bs=1M count=100

Additionally, we create a small PHP script (data.php) that reads this file and sends it out as the response to incoming requests:

<?php
header("Content-type: application/octet-stream");
$f = fopen("testfile", "rb");
fpassthru($f);
fclose($f);
?>

We can now fetch this file, e.g., using the following curl command. We use HTTP/1.0 for the request (curl's -0 flag) to avoid a chunked response. Also, we limit the transfer speed so that the download keeps running for a while:

# curl --limit-rate 1K -0 https://example.org/data.php > /dev/null

On the server, we will now see that, while the transfer is ongoing, the available disk space is reduced by 100 MB. The reason is the temporary file nginx uses to buffer the response:

# lsof|grep nginx|grep deleted
nginx 1268 www-data 69u REG 202,0 104677442 393383 /var/lib/nginx/fastcgi/7/08/0000000087 (deleted)

Now comes the interesting part. If we request the file not just once but dozens of times in parallel, nginx will happily create a temporary file for each of these requests and fill up the hard drive. The limit given by fastcgi_max_temp_file_size only applies per request/response.

# for i in $(seq 20); do curl --limit-rate 1K -0 https://example.org/data.php > /dev/null & done
# lsof|grep nginx|grep deleted
nginx 1268 www-data 54u REG 202,0 104677442 530022 /var/lib/nginx/fastcgi/2/10/0000000102 (deleted)
nginx 1268 www-data 58u REG 202,0 104628338 528121 /var/lib/nginx/fastcgi/6/09/0000000096 (deleted)
nginx 1268 www-data 84u REG 202,0 104669258 528116 /var/lib/nginx/fastcgi/2/09/0000000092 (deleted)
nginx 1268 www-data 85u REG 202,0 104677442 530029 /var/lib/nginx/fastcgi/7/10/0000000107 (deleted)
nginx 1268 www-data 88u REG 202,0 104677442 528122 /var/lib/nginx/fastcgi/7/09/0000000097 (deleted)
nginx 1268 www-data 89u REG 202,0 104677442 528123 /var/lib/nginx/fastcgi/8/09/0000000098 (deleted)
nginx 1268 www-data 90u REG 202,0 104677442 528124 /var/lib/nginx/fastcgi/9/09/0000000099 (deleted)
nginx 1268 www-data 92u REG 202,0 104677442 528125 /var/lib/nginx/fastcgi/0/10/0000000100 (deleted)
nginx 1268 www-data 96u REG 202,0 104677442 393383 /var/lib/nginx/fastcgi/8/08/0000000088 (deleted)
nginx 1268 www-data 99u REG 202,0 104677442 524367 /var/lib/nginx/fastcgi/9/08/0000000089 (deleted)
nginx 1268 www-data 100u REG 202,0 104677442 530023 /var/lib/nginx/fastcgi/3/10/0000000103 (deleted)
nginx 1268 www-data 101u REG 202,0 104677442 393384 /var/lib/nginx/fastcgi/0/09/0000000090 (deleted)
nginx 1268 www-data 102u REG 202,0 104669258 530027 /var/lib/nginx/fastcgi/6/10/0000000106 (deleted)
nginx 1268 www-data 103u REG 202,0 104677442 393385 /var/lib/nginx/fastcgi/1/09/0000000091 (deleted)
nginx 1268 www-data 108u REG 202,0 104669258 393386 /var/lib/nginx/fastcgi/3/09/0000000093 (deleted)
nginx 1268 www-data 109u REG 202,0 104669258 265302 /var/lib/nginx/fastcgi/4/09/0000000094 (deleted)
nginx 1268 www-data 110u REG 202,0 104669258 265303 /var/lib/nginx/fastcgi/5/09/0000000095 (deleted)
nginx 1268 www-data 111u REG 202,0 104677442 528126 /var/lib/nginx/fastcgi/1/10/0000000101 (deleted)
nginx 1268 www-data 112u REG 202,0 104677442 530024 /var/lib/nginx/fastcgi/4/10/0000000104 (deleted)
nginx 1268 www-data 114u REG 202,0 104677442 530026 /var/lib/nginx/fastcgi/5/10/0000000105 (deleted)

Mitigations

There are a couple of possible mitigations for this problem:

  • Set fastcgi_max_temp_file_size to 0. This disables disk buffering of fastcgi responses that do not fit into the in-memory buffer. However, it also means that your fastcgi processes will be tied up serving long-running downloads.
  • Put /var/lib/nginx/fastcgi, or wherever this is on your system (configurable via fastcgi_temp_path), on a separate partition so that at least no valuable partition is filled up.
  • Use some form of rate-limiting to limit the number of concurrent requests for a certain file / path / client.
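The first two mitigations boil down to a few configuration lines. A minimal sketch (the mount point is an example, not a recommendation):

```nginx
# Option 1: do not buffer oversized responses on disk at all.
fastcgi_max_temp_file_size 0;

# Option 2: keep disk buffering, but confine the temporary files to a
# dedicated partition mounted at this (example) path.
fastcgi_temp_path /mnt/nginx-buffers/fastcgi;
```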

At the end of the article, I will show how we mitigate this issue in a specific scenario. If you know of any other possible solutions, I would be happy to hear about them in a comment.

How iTunes can mess with your Nextcloud server

Now to the fun part. Why is the behavior shown above problematic? Who serves large amounts of data via PHP scripts anyway? Well, ownCloud and Nextcloud do, for example. And this is how I experienced the issue first-hand.

I administer a small Nextcloud instance. A user of that instance published a download link to a 346.5 MB epub stored within that instance. A couple of days ago, someone seems to have imported that link into their iTunes library. Afterwards, our server was flooded with HTTP requests for that resource from the following user agent:

itunesstored/1.0 iOS/13.3.1 model/iPad7,11 hwp/t8010 build/17D50 (5; dt:214)

Within a few hours, our server received 695 HTTP HEAD requests and 756 GET requests for the download URL. While the HEAD requests posed no issue, the GET requests triggered the buffering behavior described above and filled up our hard disk. 370 of the GET requests were actually handled by our server (the remainder either failed or were blocked). In total, 14 GB of data was transferred to that person's iPad. Due to the on-disk buffering, the requests caused 264.5 GB of disk I/O. And all of that just to serve a single 346.5 MB file to a single device…

I have no idea what iTunes intends with this behavior. It would make sense if the concurrent downloads requested different parts of the file. However, all requests were for the entire file, even concurrent ones. Four of the 370 served downloads actually finished, i.e., iTunes received the entire file four times and still kept downloading it.

How we dealt with the issue

For our setup this event had three consequences:

  1. nginx now only allows one concurrent download per file and IP. The configuration to achieve this is shown below.
  2. /var/lib/nginx/fastcgi is now on a separate partition so that even if many different files are downloaded in parallel or requests come from different IPs no valuable partition will be filled up.
  3. Requests by itunesstored are generally rejected by our server.
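Consequence 3 can be implemented with a simple user-agent check. A minimal sketch, matching on the agent string from the logs above (whether you prefer an if block or a map-based variable is a matter of taste):

```nginx
# Reject requests whose user agent starts with "itunesstored".
if ($http_user_agent ~ "^itunesstored") {
    return 403;
}
```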

To limit the number of concurrent connections per file and IP, we extended the nginx configuration as follows:

limit_conn_zone $binary_remote_addr$ncfileid zone=ncdl:10m;
server {
    [...]

    location ~ ^(.+?\.php)(/.*)?$ {
        # directives required to properly pass requests to PHP-FPM
        [...]
        location ~ ^(/index\.php)(/s/(?<ncfileid>.+)/download)$ {
            # repeat all command-type directives from surrounding location
            [...]

            limit_conn ncdl 1;
        }
    }
}

If you use a scheme like the one shown here, be aware that not all configuration directives are inherited by the inner location block. Content-handler directives like try_files and fastcgi_pass are not inherited and need to be repeated.
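For illustration, a sketch of what typically needs to be repeated inside the nested location block. The exact set of directives depends on your PHP-FPM setup; the socket path and the try_files guard are examples:

```nginx
location ~ ^(/index\.php)(/s/(?<ncfileid>.+)/download)$ {
    # Not inherited from the outer location block -- must be repeated:
    try_files $uri =404;                      # example guard
    include fastcgi_params;                   # example parameter set
    fastcgi_pass unix:/run/php/php-fpm.sock;  # example PHP-FPM socket

    limit_conn ncdl 1;
}
```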
