Request #55166 Implement a global limit to the overall number of php processes
Submitted: 2011-07-09 12:07 UTC Modified: 2011-10-08 19:53 UTC
Avg. Score:5.0 ± 0.0
Reproduced:1 of 1 (100.0%)
Same Version:1 (100.0%)
Same OS:1 (100.0%)
From: trollofdarkness at gmail dot com Assigned: fat (profile)
Status: Closed Package: FPM related
PHP Version: 5.3SVN-2011-07-09 (SVN) OS:
Private report: No CVE-ID: None
 [2011-07-09 12:07 UTC] trollofdarkness at gmail dot com
Hi everyone,

I'm posting here a patch that adds the ability for php-fpm to limit the overall number of PHP processes across all pools.

I made it for personal use, as I have a server with only 2GB of RAM and several websites running on it.

The problem is the following, when you have several pools:

* Either you choose a low max_children: you won't run out of memory, but if there is a load peak on one specific website while the others are quiet, that website will be slow, because FastCGI requests will queue up.
* Or you choose a high max_children: you can then absorb peaks on your websites, but if a peak hits several websites at the same time, chances are your server will run out of memory.

With this patch, I introduce a new configuration directive:

max_total_processes = 40

40 would be the total number of PHP processes allowed to run in memory at the same time.
When trying to fork a new process (only in the case of pm = dynamic — or pm = ondemand, for that matter), FPM looks at the current total number of running processes. This is easy to integrate because FPM already maintains a fpm_globals.running_children variable counting the total number of children across all pools. I just had to add a condition, a variable, and the ability to read a new config option. So this is a really, really small patch! But very useful, I think.

The uploaded patch was made against the PHP 5.3 SVN branch. Feel free to adapt it to another version of the sources if you want; you will see it is just a matter of 4-5 changes in the source code.

Hope it will be useful.

-- Troll


max_total_processes-patch-V1-PHP-5.3-SVN-Branch (last revision 2011-07-09 16:09 UTC by trollofdarkness at gmail dot com)

 [2011-07-17 07:10 UTC]
-Assigned To: +Assigned To: fat
 [2011-07-17 07:41 UTC]
Automatic comment from SVN on behalf of fat
Log: - Implemented FR #55166 (Added process.max to control the number of process FPM can fork)
 [2011-07-17 07:43 UTC]
-Status: Assigned +Status: Analyzed
 [2011-07-17 07:43 UTC]
Committed into 5.4. Thank you very much for your help.

I'll wait until 5.3.7 is out before backporting this to the 5.3 branch.
 [2011-07-17 17:22 UTC] trollofdarkness at gmail dot com
Ok :) Thank you for your quick response ;)
 [2011-09-27 10:46 UTC] albertcasademont at gmail dot com
This is an amazing feature guys, thank you very much! Any news on backporting the patch to 5.3?

 [2011-10-08 13:47 UTC]
Automatic comment from SVN on behalf of fat
Log: - Backported FR #55166 from 5.4 branch (Added process.max to control the number of process FPM can fork)
 [2011-10-08 19:53 UTC]
-Status: Analyzed +Status: Closed
 [2011-10-08 19:53 UTC]
This bug has been fixed in SVN.

Snapshots of the sources are packaged every three hours; this change
will be in the next snapshot. You can grab the snapshot at

 For Windows:
Thank you for the report, and for helping us make PHP better.

 [2012-03-03 14:59 UTC] trollofdarkness at gmail dot com
A small note for those who will read this entry: when committed into 5.3.4, 
the directive was renamed from "max_total_process" to "process.max".

I don't know whether it has been added to the documentation yet; I found it by 
digging into the code.
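Given the rename noted above, here is a sketch of how the directive ends up being used. The pool names and limits are illustrative; the placement of process.max in the global section follows the commit log above:

```ini
; php-fpm.conf (directive name per the commit log: process.max)
[global]
process.max = 40          ; cap on children across ALL pools combined

; Two hypothetical pools: each may burst up to its own pm.max_children,
; but FPM stops forking once the global total reaches process.max.
[site1]
pm = dynamic
pm.max_children = 30

[site2]
pm = dynamic
pm.max_children = 30
```

This resolves the dilemma from the original report: each pool's pm.max_children can be set high enough to absorb a peak, while process.max keeps the combined worker count within what the machine's RAM can hold.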
 [2019-12-26 23:08 UTC] tom at tgmedia dot nz
I know this is super old, but what happens when the total number of processes is in use, e.g. 40 in this example; will requests from the web server be queued up? Cheers
 [2019-12-26 23:18 UTC] bugreports at gmail dot com
Google for "backlog": on a proper setup with fast enough scripts you just get a small delay, and it's the same as with iptables connlimit with DROP.
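To expand on that answer: while all workers are busy (or the global cap blocks forking), incoming FastCGI connections wait in the listen socket's kernel accept queue, whose depth FPM exposes per pool via listen.backlog. The pool name, socket path, and value below are illustrative:

```ini
[www]
listen = /run/php/www.sock
; Connections arriving while no worker is free sit in this kernel-side
; queue until a worker accepts them; if the queue itself overflows,
; the web server sees connection failures rather than a slow response.
listen.backlog = 511
```

So with a sensible backlog and scripts that finish quickly, hitting the cap shows up as added latency rather than errors, which is what the comment above describes.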
PHP Copyright © 2001-2024 The PHP Group
All rights reserved.
Last updated: Mon Feb 26 19:01:29 2024 UTC