Request #52569 Implement "ondemand" process-manager (to allow zero children)
Submitted: 2010-08-09 20:54 UTC Modified: 2011-10-09 09:22 UTC
Votes: 30
Avg. Score: 4.8 ± 0.6
Reproduced: 20 of 21 (95.2%)
Same Version: 14 (70.0%)
Same OS: 15 (75.0%)
From: mplomer at gmx dot de Assigned: fat (profile)
Status: Closed Package: FPM related
PHP Version: 5.3.3 OS:
Private report: No CVE-ID: None
 [2010-08-09 20:54 UTC] mplomer at gmx dot de
Description:
------------
We are currently rolling out PHP-FPM in a shared hosting environment. We have many users on one server (about 200) and have to define one pool per customer (each with a different uid).
If PHP-FPM started one child per customer at startup, I think this would kill the server.
So we have to start them on demand. When using PHP via mod_fcgid/suEXEC you can set FcgidMinProcessesPerClass 0, which works fine, but in PHP-FPM this is not allowed.

I tried to remove these checks in fpm_conf.c:
  if (config->pm_min_spare_servers <= 0)
  if (config->pm_start_servers <= 0)

but this does not really work: zero children are created at startup, which is fine, but no child is created on request and the request hangs. I can't currently find the right entry point.


Patches

fpm-ondemand.v11-5.3.patch (last revision 2011-07-14 22:38 UTC by fat@php.net)
fpm-ondemand.v11.patch (last revision 2011-07-14 22:27 UTC by fat@php.net)
fpm-ondemand.v10-5.3.patch (last revision 2011-07-10 17:49 UTC by fat@php.net)
fpm-ondemand.v10.patch (last revision 2011-07-10 17:49 UTC by fat@php.net)
fpm-ondemand.v9-5.3.patch (last revision 2011-07-09 12:30 UTC by fat@php.net)
fpm-ondemand.v9.patch (last revision 2011-07-09 12:30 UTC by fat@php.net)
fpm-ondemand.v8-5.3.patch (last revision 2011-07-09 00:22 UTC by fat@php.net)
fpm-ondemand.v8.patch (last revision 2011-07-09 00:21 UTC by fat@php.net)
fpm-ondemand.v7-5.3.patch (last revision 2011-07-05 23:12 UTC by fat@php.net)
fpm-ondemand.v7.patch (last revision 2011-07-05 23:08 UTC by fat@php.net)
fpm-ondemand-pm-v6 (last revision 2010-09-25 16:27 UTC by mplomer at gmx dot de)
php-fpm-ondemand-pm-v5 (last revision 2010-08-30 08:16 UTC by mplomer at gmx dot de)
fpm-ondemand.v4.patch (last revision 2010-08-27 06:27 UTC by fat@php.net)
fpm-ondemand-pm-v3 (last revision 2010-08-25 22:12 UTC by mplomer at gmx dot de)
fpm-ondemand.v2.patch.txt (last revision 2010-08-23 22:51 UTC by fat@php.net)
fpm-ondemand-pm-php53 (last revision 2010-08-10 18:01 UTC by mplomer at gmx dot de)
fpm-ondemand-pm (last revision 2010-08-09 20:14 UTC by fat@php.net)



History

 [2010-08-09 22:13 UTC] fat@php.net
-Status: Open +Status: Analyzed -Assigned To: +Assigned To: fat
 [2010-08-09 22:13 UTC] fat@php.net
It's been discussed many times. Here is a conversation which explains everything:

http://groups.google.com/group/highload-php-
en/browse_thread/thread/70450f63a727ffd3/6c718c73c9b22aaf

You can try the ** VERY ** experimental attached patch, which may help you.
If you can test it, that would be great.
 [2010-08-09 22:14 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand-pm
Revision:   1281384850
URL:        http://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand-pm&revision=1281384850
 [2010-08-10 19:59 UTC] mplomer at gmx dot de
-Summary: Allow zero pm.start_servers/pm.min_spare_servers +Summary: Implement "ondemand" process-manager (to allow zero children)
 [2010-08-10 19:59 UTC] mplomer at gmx dot de
Thanks. Now it's clear why setting start_servers = 0 does not work :-)

First of all, I updated the attached patch to work with the PHP 5.3 branch (see attachment). The first tests work very well. I'll do some more tests in the next few days.
 [2010-08-24 00:51 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand.v2.patch.txt
Revision:   1282603885
URL:        http://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand.v2.patch.txt&revision=1282603885
 [2010-08-24 00:54 UTC] fat@php.net
I made some adjustments.

I've added two configuration directives:

; The minimum delay (in µs) between two consecutive forks.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 100µs
;pm.min_delay_between_fork = 100

; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
;pm.ondemand_process_timeout = 10s;

Moreover, I've added a check on the listen.backlog directive, which has to be
greater than pm.max_children when the 'ondemand' PM is used. If the backlog
queue is smaller than pm.max_children, libevent can't detect incoming
connections.
 [2010-08-26 00:15 UTC] mplomer at gmx dot de
I did some fine-tuning and cleanups in the fpm-ondemand-pm-v3 patch:

- Set the listen_backlog default to 128 (to be discussed?)
- Removed the listen_backlog adjustment (I think it is enough to leave the default at 128: a greater value is mostly ignored by the system anyway, and the number of requests in the backlog has little to do with max_children. If you do not agree, feel free to restore the old behaviour :-) )
- Renamed ondemand_process_timeout to process_idle_timeout (it's better, I think)
- Fixed "else if (wp->config->pm == PM_STYLE_ONDEMAND)" in fpm_conf.c (it was only "else" before)
- Removed config->pm_(start/min_spare/max_spare)_servers = 0; ... in fpm_conf.c (they should not be used anyway when pm = ondemand)
- Log the libevent version in fpm_event_init_main
- Updated some comments in the sample config
 [2010-08-26 00:27 UTC] fat@php.net
For information, the listen.backlog default value was recently changed from -1 to
128 in trunk: http://svn.php.net/viewvc?
view=revision&revision=302725

This change won't be applied to the 5.3 branch, nor will the ondemand process
manager, as it's a (big?) new feature. It could be discussed.

I like the listen_backlog adjustment. It was maybe not perfect, but setting it to
0 will make the ondemand PM not work.

For the "else if" fix, you have to add an "else {}" to cover all the cases. If
there is a bug somewhere else, it's not advisable to have a case which is never
checked.

It looks great. Can you also provide test results?

thx a LOT for your help and your time making PHP better.
 [2010-08-27 08:27 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand.v4.patch
Revision:   1282890440
URL:        http://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand.v4.patch&revision=1282890440
 [2010-08-27 08:31 UTC] fat@php.net
Here is a new revision:

1- I restored the backlog value check at startup. It's been simplified: if it's
lower than 128, it's set to 128. I also kept the change of the backlog default
value from -1 to 128. As ondemand will certainly end up in trunk, it's not a
violation of the 5.3 code.

2- You were right about the "else if (wp->config->pm == PM_STYLE_ONDEMAND)". I
thought there was an "if (wp->config->pm == PM_STYLE_STATIC)" in front of the
block.

3- I changed the libevent callback on the pool listen_socket to prevent CPU
burn. If max_children is reached, the callback function will be set up again at
the next process maintenance call (every 1s).
 [2010-08-27 08:38 UTC] fat@php.net
Updates to come:

1- there is a bug: after fork, the child process seems to run code reserved for
the parent process:

Aug 27 08:32:30.646905 [WARNING] pid 4335, fpm_stdio_child_said(), line 143: 
[pool www_chroot] child 4450 said into stderr: "Aug 27 08:32:30.628866 [DEBUG] 
pid 4450, fpm_pctl_on_socket_accept(), line 529: [pool www_chroot] got accept 
without idle child available .... I forked, now=22184178.981102"

2- the 1s max delay before resetting fpm_pctl_on_socket_accept() is in theory
enough, but I prefer to set a much smaller specific timer (~1ms) just in case.
Imagine there is a bug, children start to segfault, and they are not restarted:
there would be up to a 1s delay before they are forked again. I know it's the
worst-case scenario.
 [2010-08-30 10:18 UTC] mplomer at gmx dot de
Patch version 5:
- Added the missing fpm_globals.is_child check (proposed by jerome)
- Implemented the "max children reached" status counter.
- Fixed a missing last_idle_child = NULL; in fpm_pctl_perform_idle_server_maintenance, which caused the routine to shut down only one (or a few?) processes per second globally instead of per pool when you have multiple pools. I think this was not the intention; it's a bug.
 [2010-08-30 10:21 UTC] mplomer at gmx dot de
Some test results for the "ondemand" PM:

General
- Pool has to start with 0 children - OK
- Handling and checking of new config options - OK

Concurrent requests
- Children have to be forked immediately on new requests, without delay - OK
- Idle children have to be killed after pm.process_idle_timeout + 0-1s - OK
- When there is more than one idle child, kill only one per second PER POOL - OK

Reaching pm.max_children limit
- No more processes may be created - OK
- Requests have to wait until a child becomes idle and are then handled immediately, without further delay - OK
- When the limit is reached, issue a warning and increase the status counter (and do this only once) - OK:
  Aug 28 13:39:41.537174 [WARNING] pid 27540, fpm_pctl_on_socket_accept(), line 507: [pool www] server reached max_children setting (10), consider raising it
- The warning is re-issued after the child count decreases and hits the limit again - OK

CPU burns
- When reaching the max_children limit, pause the libevent callback and re-enable it in the maintenance routine, to avoid CPU burn - OK

- When children take too long to accept() the request, avoid CPU burn - NOTOK
 -> happens sometimes (in practice only sometimes after forking) - to reproduce, add a usleep(50000) in the child's code after fork(), or use apachebench with ~200 concurrent requests :-)
 -> You get a lot of: "fpm_pctl_on_socket_accept(), line 502: [pool www] fpm_pctl_on_socket_accept() called"
 -> It's not a big problem, because this doesn't take much time (in one rare case it took ~90ms on my machine), but it's not nice, especially when the server is flooded with requests
 -> one idea:
   - do not re-enable the event callback in fpm_pctl_on_socket_accept
   - send an event from the child to the parent process just after accept()
   - re-enable the event callback in the parent process when it receives this event from the child
   - in case of an error it is re-enabled in the maintenance routine after max 1s, which is IMHO not bad for throttling requests in case of error

Stress tests
- Test-machine: Intel Core i7 930 (4 x 2.8 GHz) (VMware with 256 MB RAM)

- Testing with 100 concurrent requests on the same pool against a sleep(10); PHP script with 0 running processes and max_children = 200:
 - took about 4ms per fork on average
 - 25 processes are forked in one block (timeslice?); after this there is a gap of 200-1000ms
  - took about 125ms to fork 25 children
  - took about 2.5s to fork all 100 children and accept the requests
- Testing with 200 concurrent requests
  - hits the RAM limit of the VM, so it's maybe not meaningful
  - took ~10.5s to fork all 200 children and accept the requests
- Testing with 10 concurrent requests on 20 pools (so in fact 200 concurrent requests)
  - took ~11.2s to fork all 200 children and accept the requests
  - all children are killed after process_timeout + 10s (1 process per second per pool is killed) - OK
 [2010-09-04 16:26 UTC] dennisml at conversis dot de
Since this patch makes the master process dynamically fork children on demand, I'm wondering if it would be feasible to introduce the possibility of doing setuid()/setgid() calls after the fork, to run the child process under different user ids.
What I'm thinking of is the mass-hosting case that was previously discussed on the mailing list. Back then this would have been quite a bit of work, but with this patch it should be much easier to accomplish.
 [2010-09-05 20:42 UTC] fat@php.net
@dennisml at conversis dot de

It's complex to do and risky security-wise. I don't want to mess with that.
 [2010-09-13 03:30 UTC] dennisml at conversis dot de
Is v5 of the patch known not to work with fpm in php 5.3.3? When applying the patch I get the following segfault:

Program received signal SIGSEGV, Segmentation fault.
0x00000000005cf319 in fpm_env_conf_wp (wp=<value optimized out>)
    at /home/dennis/php-5.3.3/sapi/fpm/fpm/fpm_env.c:141
141			if (*kv->value == '$') {
 [2010-09-13 06:27 UTC] fat@php.net
You should "make clean" before recompiling with the v5 patch.

The v5 patch does not apply to the 5.3.3 release; it applies to the svn PHP5_3_3 branch.

++ Jerome
 [2010-09-25 18:26 UTC] mplomer at gmx dot de
Released patch v6 - updated the patch to be compatible with the current PHP_5_3 branch (rev 303365).

There are no functional changes relative to v5.

Merged (removed) parts which have already been committed:
- rev 301886: only one process (for all pools) could be killed by the 'dynamic' process manager
- rev 301912: changed listen.backlog in the FPM configuration file to default to 128 instead of -1
- rev 301913: add the libevent version to the startup debug log in FPM
- rev 301925: add 'max children reached' to the FPM status page

Changes:
- Undid a change in config.m4 which IMHO has nothing to do with this patch
- Merged the listen.backlog part of php-fpm.conf.in from trunk (trunk and the 5.3 branch are currently out of sync here!)
- (small code beautification)
 [2010-11-12 01:28 UTC] luca at fantacast dot it
Just a thought on excluding dynamic setuid/setgid/chroot via fastcgi variables because of security concerns.

In the group discussion you pointed out how this opens up the possibility for an attacker to call posix_setuid/posix_setgid in PHP code to get root privileges.

However, this could easily be prevented by using disable_functions to block these and other potentially harmful functions (system, exec, etc.) which could be used to achieve the same goal and are not desirable in a shared hosting environment anyway.

I guess this and other protections could even be enforced automatically by PHP-FPM if dynamic setuid/setgid/chroot via fastcgi variables is requested.

Obviously this wouldn't protect against PHP bugs allowing arbitrary code execution, so it only mitigates the potential risk.
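For illustration, a php.ini fragment along the lines being discussed. The function list here is an assumption for the example and is certainly incomplete, which is exactly the maintenance problem raised in the reply:

```ini
; illustrative only -- not a complete or safe list
disable_functions = posix_setuid,posix_setgid,posix_seteuid,posix_setegid,pcntl_fork,system,exec,shell_exec,passthru,popen,proc_open
```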
 [2010-11-12 01:53 UTC] fat@php.net
> However this could be easily prevented by using disable_functions
> to prevent these and other potentially harmful functions from 
> being called (system, exec, etc) which could be used to achieve
> the same goal and are not desirable in a shared hosting environment anyway.

- it's very complex to do.
- you have no guarantee that nothing will be forgotten (until a security hole is found)
- you have to maintain this security layer over time, adding new functions, ...
- if the sysadmin has to list the functions to disable, they will forget some (by following one of the buggy how-tos which are all over the Internet, btw)


> Obviously this wouldn't protect against PHP bugs
> allowing arbitrary code execution, so it only
> mitigates the potential risk.

I'm sorry, but it's not an option for me. There are security checks at kernel level which must not be gotten around by code running in userland (PHP core).
 [2010-11-12 02:30 UTC] luca at fantacast dot it
Just to be clear, I'm not advocating this solution, just contemplating the implications.

Hand-built disable_functions lists by sysadmins are not realistic, and centralized maintenance in the FPM code (if at all possible) would still require work and be error-prone.

Running as root is very bad security-wise and, in case of a bug, makes almost every other security check useless.
 [2011-06-11 10:22 UTC] denoc at gmx dot de
I tried to patch php5-5.3.5 with the last offered patch, but it did not work.

Does a patch against the current version exist?

Thanks
 [2011-06-11 10:38 UTC] mplomer at gmx dot de
Unfortunately not. As libevent was removed from FPM in PHP 5.3.4, the patch has to be ported to the new simple mini event library. If you are interested in doing the port and are familiar with C, you are welcome to, and I can give you a quick starting point.
 [2011-07-05 19:08 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand.v7.patch
Revision:   1309907302
URL:        https://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand.v7.patch&revision=1309907302
 [2011-07-05 19:12 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand.v7-5.3.patch
Revision:   1309907530
URL:        https://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand.v7-5.3.patch&revision=1309907530
 [2011-07-05 19:15 UTC] fat@php.net
I've uploaded 2 new versions of the patch for the ondemand PM:

1- fpm-ondemand.v7.patch applies to the 5.4 SVN branch and trunk
2- fpm-ondemand.v7-5.3.patch applies to the 5.3 SVN branch

It works, except that the event is triggered more than once when a request
comes in. This makes the ondemand PM fork more than it should.

I'll look into that, but if you have an idea, don't keep it to yourself.

Can you please test it?

thx
++ jerome
 [2011-07-06 10:44 UTC] dbetz at df dot eu
Hi Jerome,

what config options must I set in php-fpm.conf to get this working?
I have tried the following:
pm = ondemand

pm.min_spare_servers = 1
pm.max_children = 2000
pm.process_idle_timeout = 10s
pm.min_delay_between_fork = 100


but no child starts for this pool:
[06-Jul-2011 16:32:31.031068] DEBUG: pid 3417, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool default] currently 0 active children, 0 spare children
[06-Jul-2011 16:32:32.031349] DEBUG: pid 3417, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool default] currently 0 active children, 0 spare children


greets,
daniel
 [2011-07-06 12:12 UTC] fat@php.net
This is normal.

The ondemand PM was made to avoid forking unnecessary children; children are forked when requests arrive.

Here is what I have on my side:

## conf: 
pm = ondemand
pm.process_idle_timeout = 10
pm.min_delay_between_fork = 10000 # this to avoid the known bug
pm.max_children = 5


## log
[06-Jul-2011 18:05:42.236929] NOTICE: pid 2579, fpm_event_loop(), line 267: ready to handle connections
[06-Jul-2011 18:05:43.237287] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 0 spare children

## at start, no children have been forked
[06-Jul-2011 18:05:44.237661] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 0 spare children

## I request a page and a child is forked to serve the page
[06-Jul-2011 18:05:44.902976] DEBUG: pid 2579, fpm_children_make(), line 411: [pool direct] child 2580 started
[06-Jul-2011 18:05:44.902987] DEBUG: pid 2579, fpm_pctl_on_socket_accept(), line 543: [pool direct] got accept without idle child available .... I forked, now=1970813.831429
[06-Jul-2011 18:05:45.238081] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[06-Jul-2011 18:05:46.238388] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[06-Jul-2011 18:05:47.238889] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[06-Jul-2011 18:05:48.239385] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[06-Jul-2011 18:05:49.239671] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[06-Jul-2011 18:05:50.240080] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[06-Jul-2011 18:05:51.240520] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[06-Jul-2011 18:05:52.241133] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[06-Jul-2011 18:05:53.241648] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[06-Jul-2011 18:05:54.242040] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[06-Jul-2011 18:05:55.242414] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children

## 10s (pm.process_idle_timeout) later, the child has been killed.
[06-Jul-2011 18:05:55.243492] DEBUG: pid 2579, fpm_got_signal(), line 76: received SIGCHLD
[06-Jul-2011 18:05:55.243514] DEBUG: pid 2579, fpm_children_bury(), line 254: [pool direct] child 2580 has been killed by the process managment after 10.340552 seconds from start
[06-Jul-2011 18:05:56.242905] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 0 spare children
[06-Jul-2011 18:05:57.243332] DEBUG: pid 2579, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 0 spare children
 [2011-07-07 02:34 UTC] dbetz at df dot eu
Hello,

I know, but when I make a request, no child gets spawned.

My PHP-FPM has multiple pools. Every pool listens on a different socket.
mod_fastcgi 2.4.6 is patched so that it connects to the socket for the domain.

Example:

Hostname: www.domain.com has PHP Version 5.3.6

FPM Config for Pool is:
[domain.com]
listen = /etc/httpd/fastcgi/5.3.6-domain.com
user = u12345
group = nobody

pm = ondemand
pm.process_idle_timeout = 10
pm.min_delay_between_fork = 10000
pm.max_children = 5

When a request for www.domain.com arrives at Apache, Apache looks up the PHP version in LDAP, then mod_fastcgi searches for the socket /etc/httpd/fastcgi/5.3.6-www.domain.com and, if that does not exist, for /etc/httpd/fastcgi/5.3.6-domain.com (stripping the www.). Apache then connects over mod_fastcgi to the correct socket, but no child gets spawned with pm = ondemand.

With dynamic and static everything works fine.

Any suggestions?

Greetings,
Daniel
 [2011-07-08 05:38 UTC] dbetz at df dot eu
If I can help you with debug information, please tell me what you need,
e.g. traces or gdb?

Greetings,
 [2011-07-08 05:43 UTC] fat@php.net
You can strace it to see what happens:

set log_level to debug
set daemonize to no
then run something like:
strace -f -s 1024 -o /tmp/php-fpm.strace.log /path/to/php-fpm
 [2011-07-08 06:00 UTC] dbetz at df dot eu
Hm... I can only see tons of:
20983 poll([{fd=4, events=POLLIN}], 1, 108) = 0 (Timeout)
20983 clock_gettime(CLOCK_MONOTONIC, {4578918, 852647570}) = 0
20983 clock_gettime(CLOCK_MONOTONIC, {4578918, 852702140}) = 0
20983 clock_gettime(CLOCK_MONOTONIC, {4578918, 852754708}) = 0
20983 clock_gettime(CLOCK_MONOTONIC, {4578918, 852807040}) = 0
20983 poll([{fd=4, events=POLLIN}], 1, 130) = 0 (Timeout)
20983 clock_gettime(CLOCK_MONOTONIC, {4578918, 983213866}) = 0
20983 clock_gettime(CLOCK_MONOTONIC, {4578918, 983267442}) = 0
20983 clock_gettime(CLOCK_MONOTONIC, {4578918, 983323753}) = 0
20983 clock_gettime(CLOCK_MONOTONIC, {4578918, 983368483}) = 0

and then thousands of:

20983 munmap(0xae151000, 1040)          = 0
20983 munmap(0xae150000, 1040)          = 0
20983 munmap(0xae14f000, 1040)          = 0
20983 munmap(0xae14e000, 1040)          = 0
20983 munmap(0xae14d000, 1040)          = 0

The socket gets created here:
20983 socket(PF_FILE, SOCK_STREAM, 0)   = 6
20983 setsockopt(6, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
20983 unlink("/etc/httpd/fastcgi/dynamic/5-53LATEST-wordpressmit.imageupgrade2.domainfactory-kunde.de") = -1 ENOENT (No such file or directory)
20983 umask(0111)                       = 027
20983 bind(6, {sa_family=AF_FILE, path="/etc/httpd/fastcgi/dynamic/5-53LATEST-wordpressmit.imageupgrade2.domainfactory-kunde.de"}, 110) = 0
20983 umask(027)                        = 0111
20983 listen(6, 128)                    = 0

When making a request, nothing happens in the strace :-(
 [2011-07-08 19:38 UTC] fat@php.net
On which OS/version are you testing?
 [2011-07-08 20:21 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand.v8.patch
Revision:   1310170902
URL:        https://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand.v8.patch&revision=1310170902
 [2011-07-08 20:22 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand.v8-5.3.patch
Revision:   1310170920
URL:        https://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand.v8-5.3.patch&revision=1310170920
 [2011-07-08 20:24 UTC] fat@php.net
I've submitted a new revision of the patch, which patches fpm_events so that an
FD event won't be triggered again if it was triggered less than 500µs ago.

See if it corrects the previous bug.

Please remember to clean your source tree (make clean) before compiling,
otherwise you may experience segfaults or strange behavior.
 [2011-07-09 08:30 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand.v9.patch
Revision:   1310214612
URL:        https://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand.v9.patch&revision=1310214612
 [2011-07-09 08:30 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand.v9-5.3.patch
Revision:   1310214631
URL:        https://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand.v9-5.3.patch&revision=1310214631
 [2011-07-09 08:33 UTC] fat@php.net
Here is a new release of the patch, which is supposed to correct the
double-fork bug.

I've moved pm.min_delay_between_fork to the event layer; it's now defined in
the global section as events.delay (default 500µs).
 [2011-07-09 11:08 UTC] trollofdarkness at gmail dot com
Hello everyone,

I tried patching the latest 5.3 SVN branch with the patch. All went well (configure, compile, etc. ...). But after restarting php-fpm I get this:

Starting php-fpm [09-Jul-2011 16:57:37] ERROR: [/usr/local/etc/php-fpm.conf:27] unknown entry 'pm.min_delay_between_fork'


Which is pretty strange, because the other options do not show any error:

This works :
[mypool]
pm = ondemand
pm.process_idle_timeout = 10
# plus the other config lines...

But this will show the previous error message:

[mypool]
pm = ondemand
pm.process_idle_timeout = 10
pm.min_delay_between_fork = 10000 # this to avoid the known bug
# plus the other config lines...

Any ideas?

Besides that, if I use pm = ondemand, there is absolutely no process forked: neither at the beginning, nor later when a request comes...
However, pm = dynamic and pm = static still work well... ??
 [2011-07-09 11:47 UTC] trollofdarkness at gmail dot com
I think I have the same problem as dbetz.

I am using sockets (one per website) with both Apache2 and Nginx. It worked with neither.

I tried using ip:port instead of sockets, but it is exactly the same...

I have no idea where it comes from.
 [2011-07-10 11:02 UTC] fat@php.net
pm.min_delay_between_fork has been removed since version 9 of the patch. This
is normal :)

Which OS/version are you using?
 [2011-07-10 12:32 UTC] trollofdarkness at gmail dot com
Hello,

OK, so it is normal.

I think it would be great to make a short summary of what's been introduced into the FPM configuration by this patch, and which behaviour of already-existing config directives has been modified. I read the entire bug report but did not completely understand what's been modified/introduced regarding the config directives.

To answer your question, I am running Debian Lenny 64-bit. PHP has been compiled successfully from the latest SVN sources (5.3 branch). As I said, I used both Apache2 and Nginx, and I tried both the socket and host:port methods; the behaviour is exactly the same with both:

When FPM starts, no process is launched for the given pool (OK, that's right).
When a request comes, the web server handles it and passes it to FPM... but after that, I don't know; I did not see anything shocking in the log file, and absolutely no process is started by FPM. So the request waits until the webserver finally decides that FPM is not responding correctly and displays an error message instead (after at least 1 minute, so it's not about FPM being slow or anything like that).


Thanks in advance.

Troll
 [2011-07-10 12:59 UTC] fat@php.net
To sum up the patch:

1- new configuration directives:
 * in globals: events.delay (default 500µs). It's the minimum delay between two triggers of an FD event. You can disable the delay by setting it to 0. You should change it if the ondemand PM forks more children than it should.

 * in pools: pm = "ondemand" to activate the ondemand PM. In this mode, no processes are forked at startup; they are forked when requests come in.

 * in pools: pm.process_idle_timeout: the time a process may stay idle, without serving any requests, before it's killed.
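Putting that summary together, a minimal illustrative configuration (the pool name and listen address are placeholders; directive spellings follow the v9/v10 patch as described in this thread):

```ini
[global]
; minimum delay between two triggers of an FD event (µs); 0 disables it
events.delay = 500

[www]
listen = 127.0.0.1:9000
pm = ondemand
pm.max_children = 5
; idle time after which a child is killed
pm.process_idle_timeout = 10s
```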

2- Here is the debug log I have when setting log_level to debug, error_log to /dev/stderr and daemonize to no:

[10-Jul-2011 18:58:14.207987] DEBUG: pid 11370, fpm_scoreboard_init_main(), line 40: got clock tick '100'
[10-Jul-2011 18:58:14.208169] DEBUG: pid 11370, fpm_event_init_main(), line 239: 11 fds have been reserved
[10-Jul-2011 18:58:14.208177] NOTICE: pid 11370, fpm_init(), line 77: fpm is running, pid 11370
[10-Jul-2011 18:58:14.208189] DEBUG: pid 11370, fpm_event_loop(), line 266: 5984 bytes have been reserved in SHM
[10-Jul-2011 18:58:14.208194] DEBUG: pid 11370, fpm_event_loop(), line 267: events.delay = 500
[10-Jul-2011 18:58:14.208197] NOTICE: pid 11370, fpm_event_loop(), line 268: ready to handle connections
[10-Jul-2011 18:58:15.208565] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 0 spare children
[10-Jul-2011 18:58:16.208945] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 0 spare children
[10-Jul-2011 18:58:16.778599] DEBUG: pid 11370, fpm_children_make(), line 411: [pool direct] child 11371 started
[10-Jul-2011 18:58:16.778615] DEBUG: pid 11370, fpm_pctl_on_socket_accept(), line 532: [pool direct] got accept without idle child available .... I forked
[10-Jul-2011 18:58:17.209294] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[10-Jul-2011 18:58:18.209631] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[10-Jul-2011 18:58:19.210035] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[10-Jul-2011 18:58:20.210382] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[10-Jul-2011 18:58:21.210751] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[10-Jul-2011 18:58:22.211264] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[10-Jul-2011 18:58:23.211732] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[10-Jul-2011 18:58:24.212057] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[10-Jul-2011 18:58:25.212462] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[10-Jul-2011 18:58:26.212763] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[10-Jul-2011 18:58:27.213155] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 1 spare children
[10-Jul-2011 18:58:27.221542] DEBUG: pid 11370, fpm_got_signal(), line 76: received SIGCHLD
[10-Jul-2011 18:58:27.221575] DEBUG: pid 11370, fpm_children_bury(), line 254: [pool direct] child 11371 has been killed by the process managment after 10.442991 seconds from start
[10-Jul-2011 18:58:28.214082] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 0 spare children
[10-Jul-2011 18:58:29.214542] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 0 spare children
[10-Jul-2011 18:58:30.214972] DEBUG: pid 11370, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool direct] currently 0 active children, 0 spare children
 [2011-07-10 13:32 UTC] mplomer at gmx dot de
@Troll: Does FPM work for you when using the "dynamic" or "static" pm? This seems more like an error in the Apache setup, which is somewhat tricky.
 [2011-07-10 13:49 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand.v10.patch
Revision:   1310320164
URL:        https://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand.v10.patch&revision=1310320164
 [2011-07-10 13:49 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand.v10-5.3.patch
Revision:   1310320180
URL:        https://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand.v10-5.3.patch&revision=1310320180
 [2011-07-10 13:52 UTC] fat@php.net
Here is v10 of the patch. It corrects the bug trollofdarkness and dbetz have 
found.

In fact, the array which stores events had not been sized up, so the event was 
not added. It worked when setting catch_workers_output to yes (but that broke 
this feature).

This should now be OK.

Please test and report.

++ jerome
 [2011-07-10 14:38 UTC] trollofdarkness at gmail dot com
Thanks for your answers.

@mplomer : Yes, as I said in my first post, dynamic & static modes are still working perfectly.

But I did not set catch_something_** = yes, and it seems its default value is "no", so this is probably why it was not working.

I am going to install the v10 patch and I'll let you know.

-- Troll
 [2011-07-10 15:50 UTC] trollofdarkness at gmail dot com
Ok the V10 patch works :) 

But it seems to be a bit fast-forking...

With a test pool behind Apache2, I got 5 processes launched for a single request (a single request; I am sure there was no other request, it is a test vhost).

With a test nginx pool, I got 3 processes, but that one is a production website, so maybe there were 3 requests coming in together; I can't tell.

I will try events.delay = 700 (for instance) and I'll let you know.

If I can't manage to find a way to handle the abundance of forks, I'll post a log.

(By the way, how do you attach a file to your comment? A debug log, for example...)
 [2011-07-10 16:01 UTC] fat@php.net
To post a log, use pastebin or something like that.
 [2011-07-10 16:40 UTC] trollofdarkness at gmail dot com
Ok, so I got it working.

When using a simple curl request, I have to put events.delay = 1200 (minimum) to get only 1 fork/req.

When using a browser, I have to put events.delay = 4000 or 5000 (I can't remember which of the two was working, but at such a value I don't think it changes much); maybe Opera & Firefox (tested with both, same behaviour) open two simultaneous connections to the server, I don't know.

I'll try this patch on all my sites now. They're not overloaded, so these won't be burn tests, but if it can help a bit... :) 

Anyway, thanks for your help.

-- Troll
 [2011-07-10 17:38 UTC] trollofdarkness at gmail dot com
Ok, so I finally found why there were two requests when using a browser.

There was a .js file loaded in the page which was generated by a PHP script.

Since the browser loads the HTML and JS files in parallel, there were two simultaneous requests to PHP.

So the conclusion is events.delay >= 1200 for it to work for me.

In case it helps, here are my server's characteristics: VIA Nano U2250 // Debian Lenny 64-bit // 2GB RAM // 160GB SATA2
 [2011-07-10 18:03 UTC] fat@php.net
glad to hear.

The slower your server is, the higher you should set events.delay.

In fact, 1 or 2 ms (1000 or 2000 as the events.delay value) should be considered 
a maximum, in order not to slow down requests too much.
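For readers following along, the tuning discussed above would look roughly like this in php-fpm.conf. This is only a sketch for the v10-era patch: events.delay is an experimental directive from this thread, its exact section placement here is an assumption, and a later patch revision in this thread removes the directive entirely.

```ini
; Sketch for the v10 ondemand patch (hypothetical placement; directive
; names taken from this thread, not from any released php-fpm).
[global]
; Minimum delay between forks, in microseconds; ~1200 worked for the
; reporter above, and 1000-2000 (1-2 ms) is suggested as a practical max.
events.delay = 1200

[www]
pm = ondemand
pm.max_children = 5
```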
 [2011-07-10 18:20 UTC] trollofdarkness at gmail dot com
Ok, thanks for the information :) 

-- Troll
 [2011-07-11 02:36 UTC] dbetz at df dot eu
Hello all,

now everything works fine for me. Great work!! +1
I hope this gets implemented in the 5.3.7 stable release :-)

I'm testing on Gentoo 1.12.13

Greetings,
Daniel
 [2011-07-11 04:09 UTC] fat@php.net
As 5.3.7 is in the RC release process, only bugfixes are going in until 5.3.7 is 
out.

So ondemand will be added in 5.3.7+ and 5.4, and marked as experimental.
 [2011-07-14 18:27 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand.v11.patch
Revision:   1310682438
URL:        https://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand.v11.patch&revision=1310682438
 [2011-07-14 18:38 UTC] fat@php.net
The following patch has been added/updated:

Patch Name: fpm-ondemand.v11-5.3.patch
Revision:   1310683104
URL:        https://bugs.php.net/patch-display.php?bug=52569&patch=fpm-ondemand.v11-5.3.patch&revision=1310683104
 [2011-07-14 18:47 UTC] fat@php.net
Here is a complete new release of the ondemand patch: version 11.

I've entirely rewritten the event part in order to have access to the following mechanisms:
select (POSIX)
poll (POSIX) <- was the mechanism in use before
epoll (Linux)
kqueue (*BSD)
/dev/poll (Solaris)
port (Solaris)

All these mechanisms support classic level-triggered events, which is not suited to what we need for the ondemand patch.

epoll and kqueue also support edge-triggered events, which suit the ondemand patch's needs very well.

The choice is made automatically by detection and the best available mechanism is used. You can override the detection by setting events.mechanism in [global].

So for now, the ondemand PM will only work if kqueue or epoll is selected; it will only work on Linux and *BSD. But it should work as expected without the 
previous dirty tricks: events.delay or pm.min_delay_between_fork.

So, to test it:
get the latest 5.3 or 5.4 snapshot, apply the patch and:
./vcsclean
./buildconf
./config.nice (or configure ... --enable-fpm)
make

set pm to ondemand and run

/* enjoy */
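A minimal configuration for testing the v11 patch, based on the directives named above, might look like this sketch. events.mechanism and the ondemand directives match the names used in this thread (and the feature as later merged); the pool values are illustrative only.

```ini
[global]
; Optional: override the autodetected event mechanism.
; The ondemand PM requires epoll (Linux) or kqueue (*BSD).
events.mechanism = epoll

[www]
pm = ondemand
; Upper bound on simultaneously running workers for this pool.
pm.max_children = 10
; Idle workers are killed after this timeout (directive of the merged PM).
pm.process_idle_timeout = 10s
```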
 [2011-07-15 08:47 UTC] dbetz at df dot eu
Hi jerome,

the test is successful for me.
Everything works fine with PHP 5.3.7RC4-dev and the ondemand patch v11.

Greets,
Daniel
 [2011-09-14 21:57 UTC] shadow+dphp at la3 dot org
Since there has been no activity on this issue for a while, and I think it would be *very* good to have this ondemand process manager in stable releases as soon as possible, I decided to try the patch myself and report the results.

Also, I am a die-hard Debian user, so I tried the patch (v11-5.3) with the latest available Debian packages in sid (PHP 5.3.8 + a number of other patches).

I found a minor issue at compile time, due to the fact that the "sapi/fpm/fpm/events" folder is not created by the Makefile within the build directory structure (if you specify a build directory separate from the sources). I solved this easily by inserting a new line in the "sapi/fpm/Makefile.frag" file:


	$(builddir)/fpm: 
	        @mkdir -p $(builddir)/fpm
	+	@mkdir -p $(builddir)/fpm/events


After this minor correction everything compiled well. Thus, I proceeded to run some tests, using apache (Apache/2.2.20) + mod_fastcgi (2.4.7~0910052141-1) as the php5-fpm frontend. In this setup, I created a separate virtual host in front of each pool, and used the following apache config in there:

        <IfModule mod_fastcgi.c>
                ScriptAlias /fcgi-bin/ "/var/www/virtual/pool.test.com/fcgi-bin/"
                FastCGIExternalServer /var/www/virtual/pool.test.com/fcgi-bin/php-cgi \
			-socket /var/lib/php5/pools/pool.test.com
                AddHandler php-fastcgi .php
                Action php-fastcgi /fcgi-bin/php-cgi
        </IfModule>

The sockets themselves are owned by the apache's user (www-data), whereas the php interpreters run under a different user for each pool.

Results:

General
- Pool has to start with 0 children - OK
- The "ondemand" pm can be enabled - OK
- All three pm's can be enabled at the same time (for different pools, obviously) - OK

Concurrent requests
- Children have to be forked immediately on new requests without delay - OK
- Idle children have to be killed after pm.process_idle_timeout + 0-1s - OK
- When there is more than one idle child, kill only one per second PER POOL - Almost OK 
  (only does it if pm.process_idle_timeout > 1)

Reaching pm.max_children limit
- No more processes have to be created - OK
- Requests have to wait until one child becomes idle and then get handled immediately without further delay - OK
- When limit is reached, issue a warning and increase status counter (and do this only once) - OK:
  [14-Sep-2011 23:16:07] WARNING: [pool site1.test.com] server reached max_children setting (4), consider raising it
- Warning is re-issued after children count decreases and hit the limit again - OK

CPU burns
- When reaching max_children limit, pause libevent callback and reenable it in the maintenance routine, to avoid CPU burns - OK/DUNNO
  CPU usage does not spike under these conditions, but I do not know anything about libevent.
- When children take too long to accept() the request, avoid CPU burn - OK/DUNNO
  Tried with "ab -c300", no CPU burns observed.

Observations
- "ondemand" spawns new children *way* faster than "dynamic" does
- each "dynamic" process consumes about 0.1% cpu time while idle, whereas idle "ondemand" and "static" pools do not impose any load at all.

Problems
- pm.min_spare_servers default value if unspecified in the pool config is 0, whereas the default pm is "dynamic".
- pm.max_spare_servers default value if unspecified in the pool config is 0.

Final note: Jerome, the ondemand process manager is *amazing*. If it combines well with the global maximum number of processes from #55166 , this is effectively the holy grail of php for virtual hosting companies. Big big congratulations and do not hesitate to ask me for any further info/tests that you need to make this a reality!
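Tying this back to the original shared-hosting report: with the merged feature, each customer pool can idle at zero children. A per-customer pool sketch follows; the pool name, user, and socket path are placeholders, and pm.max_children = 4 simply mirrors the warning in the test log above.

```ini
[customer1]
; Each customer pool runs under its own uid/gid.
user = customer1
group = customer1
listen = /var/lib/php5/pools/customer1.sock
listen.owner = www-data
listen.group = www-data

pm = ondemand
; The limit that triggered the max_children warning in the test above.
pm.max_children = 4
; Idle workers are reaped after ~10s, at most one per second per pool.
pm.process_idle_timeout = 10s
```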
 [2011-09-17 10:01 UTC] uros dot gruber at gmail dot com
I tested it on two servers running FreeBSD, both with an amd64 kernel, and the strange thing is 
that on one server I can't get it built with kqueue support. During ./configure I 
can clearly see kqueue support set to yes, but then when testing the config I get an 
error saying "ondemand process manager can ONLY be used when events.mechanisme is 
either epoll (Linux) or kqueue (*BSD)." Is there anything I might have missed while 
building?

regards
 [2011-10-08 21:03 UTC] fat@php.net
Automatic comment from SVN on behalf of fat
Revision: http://svn.php.net/viewvc/?view=revision&revision=317922
Log: - Implemented FR #52569 (Add the "ondemand" process-manager to allow zero children)
 [2011-10-08 21:06 UTC] fat@php.net
-Status: Analyzed +Status: Closed
 [2011-10-08 21:06 UTC] fat@php.net
This bug has been fixed in SVN.

Snapshots of the sources are packaged every three hours; this change
will be in the next snapshot. You can grab the snapshot at
http://snaps.php.net/.

 For Windows:

http://windows.php.net/snapshots/
 
Thank you for the report, and for helping us make PHP better.

I finally committed this feature to SVN, in the 5.3 and 5.4 branches.

I'd like to thank everyone here !! (I accept payments if you want :p)

Please test it and re-open if you discover anything wrong.

++ fat
 [2011-10-09 07:57 UTC] damien at commerceguys dot com
As already mentioned in [2011-09-14 21:57 UTC] shadow+dphp at la3 dot org:

This seems to be missing a:

PHP_ADD_BUILD_DIR(sapi/fpm/fpm/events)

in the config.m4 file.
 [2011-10-09 09:22 UTC] fat@php.net
My mistake.

It's now fixed.
 [2012-11-09 12:46 UTC] eugene at zhegan dot in
So, no 'ondemand' for Solaris? That's sad.

Can we expect that this may change some day?
Solaris has event ports where Linux has epoll and the *BSDs have kqueue. But yeah, I'm 
not a programmer at all.
 
PHP Copyright © 2001-2024 The PHP Group
All rights reserved.
Last updated: Tue Mar 19 03:01:29 2024 UTC