Alright, so one of the few hangups I’ve run into when moving from Apache to Nginx was output buffering. We have a few administrative tools we use to perform large operations on our library and data. These scripts normally take 3-5 minutes to run, and they output their progress and which step they are on as they go. The way they do this is that after every step in PHP I issue the same two commands:
[php]
ob_flush(); // Flush anything that might be in PHP's output buffer
flush(); // Send contents so far to the browser
[/php]
Well, with Nginx, it will wait for the entire response from the PHP-FPM instance before sending data to the browser. This is because traditionally the time to generate the response is less than the time to send it to the browser, so Nginx lets the FastCGI instance finish as quickly as possible to free it up for other requests. Even if I called ob_flush() and flush(), Nginx would still wait for the entire response before sending anything to the client’s browser. So, for our staff panel, I had to disable this buffering. It took a lot of scouring the web, but I finally figured it out:
Nginx Configuration
You need to set a few directives. I couldn’t actually figure out how to disable the buffer in Nginx entirely, but I could set it very low on a per-location basis.
So I have the following location configurations:
location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /path/to/public_html/$fastcgi_script_name;
    fastcgi_read_timeout 600;
    fastcgi_buffer_size 1k;
    fastcgi_buffers 128 1k; # up to 1k + 128 * 1k
    fastcgi_max_temp_file_size 0;
    gzip off;
}
The important configurations are:
fastcgi_buffer_size 1k;
fastcgi_buffers 128 1k; # up to 1k + 128 * 1k
fastcgi_max_temp_file_size 0;
gzip off;
So I set fastcgi_buffer_size and fastcgi_buffers to 1k. Then you need to set the max temp file size to 0: by default Nginx will start buffering the response to disk once the memory buffers fill up, and setting it to 0 makes it send the data on to the browser instead. The last piece of the puzzle I missed was turning gzip off, because gzip would buffer the output for compression even once it exceeded 1k.
Now, to get it to work, in my PHP script I echo out 1k worth of text inside an HTML comment to fill the initial buffer, so that everything after it gets sent to the browser as it is produced.
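As a rough illustration, here is a minimal sketch of that padding-plus-flush pattern. The padding size and the placeholder step list are just stand-ins, not the real admin script:
[php]
// Minimal sketch of the padding + flush pattern described above.
// The fake step list and sleep() are stand-ins for the real long-running work.
$steps = ['rebuild index', 'sync library', 'purge cache']; // placeholder steps

echo '<!-- ' . str_repeat(' ', 1024) . " -->\n"; // ~1k of HTML comment padding to fill the initial buffer
ob_flush(); // flush PHP's output buffer
flush();    // push what we have so far out to Nginx / the browser

foreach ($steps as $i => $step) {
    sleep(1); // stand-in for the real work of one step
    echo 'Finished step ' . ($i + 1) . ': ' . $step . "<br>\n";
    ob_flush();
    flush();
}
[/php]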
Now, I don’t recommend these settings for busy production servers. However, only 3 people use this staff panel, so the performance impact is negligible.
Thanks, I had missed the “gzip off;” setting.
Thank you sir! Exactly what I had been looking for
FWIW, you cannot turn off output buffering for FastCGI in Nginx by design. That means this configuration works for the given usage scenario, but as you stated, it definitely isn’t an option for many concurrent users. HTTP proxying is currently the only real option.
A good discussion of this complex topic can be found here:
http://www.ruby-forum.com/topic/197216
An alternative approach would be to pipe the output of the script to a file or socket. The output can then be streamed to the browser without waiting for the PHP script to finish executing.
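To make that concrete, here is a rough sketch of the idea; the log path and the fake work loop are illustrative assumptions, not part of the original scripts:
[php]
// Sketch of the "pipe the output to a file" idea from the comment above.
// The log path and the sleep() loop are placeholders for the real job.

// --- worker.php: the long-running script writes its progress to a file ---
$log = '/tmp/job-progress.log';
file_put_contents($log, '');                      // reset the log for this run
for ($i = 1; $i <= 5; $i++) {
    sleep(1);                                     // stand-in for the real work
    file_put_contents($log, "Finished step $i\n", FILE_APPEND);
}

// --- progress.php: a separate, fast request streams the current log ---
// (poll this endpoint from the browser while worker.php is still running)
header('Content-Type: text/plain');
readfile('/tmp/job-progress.log');
[/php]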
LikeLike
You need more than what you wrote.
Update the Nginx configuration with:
fastcgi_keep_conn on; # < solution
proxy_buffering off;
gzip off;
After reading that, I’m starting to regret switching to Nginx.
I don’t want users waiting 10 seconds or more before they see ANYTHING when the first part of the content could be sent in under a second – that was easy in the old days with CGI, but on a 64-bit machine PHP is way too massive to run as CGI.
These new limitations are starting to drive me away from PHP! I want lower latency, whatever it takes, because right now those long waits are driving users away.
It’s not so painful for users if they can quickly see *something* (as in the first part of the content), and by the time they scroll down there’s more…
But keep them waiting for everything before they see anything, and most of them will never see any of that content, because by the time it is sent, the user is no longer there – they gave up long ago and went somewhere else!
It’s time to migrate my server to Nginx.
Thanks for the inspiration, Justin…