Request #69890 pm.ondemand does not kill children after reaching max limit
Submitted: 2015-06-20 13:39 UTC Modified: 2021-12-04 18:33 UTC
Votes: 16
Avg. Score: 4.4 ± 0.8
Reproduced: 16 of 16 (100.0%)
Same Version: 4 (25.0%)
Same OS: 5 (31.2%)
From: vadimyer at gmail dot com Assigned: bukka (profile)
Status: Assigned Package: FPM related
PHP Version: 7.0.0alpha1 OS: Debian 8.1 x64
Private report: No CVE-ID: None

 [2015-06-20 13:39 UTC] vadimyer at gmail dot com
Description:
------------
Looks like php7-fpm doesn't kill its children while in ondemand mode.

Test script:
---------------
The test "script" in this case is the www.conf pool configuration:

user = www-data
group = www-data

listen = 127.0.0.1:9007

pm = ondemand
pm.max_children = 100
pm.process_idle_timeout = 10s

Expected result:
----------------
After spawning 100 child processes, the process manager should start killing the ones that are no longer needed, according to the pm.process_idle_timeout setting.

Actual result:
--------------
After spawning 100 child processes, all of them remain alive and are never killed.
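For reference, one way to observe whether idle workers are actually being reaped is to watch the pool's worker count and compare it against pm.process_idle_timeout. A minimal sketch, assuming the pool is named "www" (adjust the name to your setup):

# count the pool's worker processes once per second; the number should
# drop back down roughly 10s after a burst of traffic ends
watch -n 1 'pgrep -cf "php-fpm: pool www"'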

History

 [2015-08-19 05:34 UTC] satanistlav at mail dot ru
It seems to be a Debian 8-only bug.
No such bug on Debian 7 or Ubuntu 12.
 [2016-02-22 19:15 UTC] lofesa at gmail dot com
Same issue. CentOS 7, PHP 7.0.3. Child processes are not killed. The master process does nothing except send messages to systemd like this:

sendmsg(21, {msg_name(21)={sa_family=AF_LOCAL, sun_path="/run/systemd/notify"}, msg_iov(1)=[{"READY=1\nSTATUS=Processes active:"..., 84}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 84
close(21)
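(For reference, syscall traces like the one above can be captured by attaching strace to the FPM master process; a minimal sketch, with the master PID as a placeholder:)

# attach to the running master and log only the calls shown above
strace -f -p <master_pid> -e trace=sendmsg,close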
 [2016-05-03 12:23 UTC] c dot kras at pcc-online dot net
Happens on Ubuntu 16.04 as well, both with the PHP 7.0.4 packages provided by Ubuntu and a self-compiled PHP 7.0.6. With the self-compiled version I did see some children getting killed off after a long while: according to the debug output, roughly 326 seconds after start, even though I had set pm.process_idle_timeout to 4. My only changes to the example pool config were pm.process_idle_timeout and pm = ondemand.

Here's a part of the debug output:

[03-May-2016 14:13:25.104184] DEBUG: pid 6788, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool watergid] currently 5 active children, 0 spare children
[03-May-2016 14:13:26.106198] DEBUG: pid 6788, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool watergid] currently 5 active children, 0 spare children
[03-May-2016 14:13:27.108238] DEBUG: pid 6788, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool watergid] currently 3 active children, 2 spare children
[03-May-2016 14:13:28.109372] DEBUG: pid 6788, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool watergid] currently 3 active children, 2 spare children
[03-May-2016 14:13:29.110266] DEBUG: pid 6788, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool watergid] currently 3 active children, 2 spare children
[03-May-2016 14:13:30.111446] DEBUG: pid 6788, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool watergid] currently 3 active children, 2 spare children
[03-May-2016 14:13:31.112601] DEBUG: pid 6788, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool watergid] currently 3 active children, 2 spare children
[03-May-2016 14:13:31.119000] DEBUG: pid 6788, fpm_got_signal(), line 76: received SIGCHLD
[03-May-2016 14:13:31.119052] DEBUG: pid 6788, fpm_children_bury(), line 254: [pool watergid] child 6790 has been killed by the process management after 326.681484 seconds from start
[03-May-2016 14:13:31.119065] DEBUG: pid 6788, fpm_event_loop(), line 419: event module triggered 1 events
[03-May-2016 14:13:32.114155] DEBUG: pid 6788, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool watergid] currently 3 active children, 1 spare children
[03-May-2016 14:13:32.122412] DEBUG: pid 6788, fpm_got_signal(), line 76: received SIGCHLD
[03-May-2016 14:13:32.122486] DEBUG: pid 6788, fpm_children_bury(), line 254: [pool watergid] child 6791 has been killed by the process management after 327.465150 seconds from start
[03-May-2016 14:13:32.122504] DEBUG: pid 6788, fpm_event_loop(), line 419: event module triggered 1 events
[03-May-2016 14:13:33.115569] DEBUG: pid 6788, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool watergid] currently 3 active children, 0 spare children

This last message kept getting repeated.

So in addition to the original bug report: even if the spawn limit hasn't been reached, the children should still be killed after pm.process_idle_timeout, but they aren't.
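(For reference, DEBUG lines such as the ones above come from running php-fpm with debug logging enabled; a minimal sketch of the relevant global settings, log path assumed:)

; global section of php-fpm.conf
error_log = /var/log/php-fpm-debug.log
log_level = debug

; optionally start the master in the foreground with `php-fpm -F`
; so the log is easy to follow while testing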
 [2016-05-03 12:52 UTC] c dot kras at pcc-online dot net
After some more digging I found that the issue is not with PHP but with Apache 2 and the use of ProxyPass(Match). The configuration had enablereuse=on set. This causes the children to stay active, which explains why the log files kept reporting active children even though nothing was happening.

Example of the ProxyPassMatch line in my Apache 2 configuration:

ProxyPassMatch "^/(.*\.php(/.*)?)$" "unix:/opt/php7.0.6/var/run/watergid_ftp.sock|fcgi://localhost/var/www/www_example_com/www" enablereuse=on

Simply remove enablereuse=on, restart both Apache 2 and PHP-FPM, and the issue is resolved.
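(A minimal sketch of the corrected directive, keeping the same assumed socket path and document root as above; connection reuse is disabled by default for mod_proxy_fcgi, so dropping the parameter, or setting it explicitly to off, has the same effect:)

ProxyPassMatch "^/(.*\.php(/.*)?)$" "unix:/opt/php7.0.6/var/run/watergid_ftp.sock|fcgi://localhost/var/www/www_example_com/www" enablereuse=off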
 [2019-05-02 10:03 UTC] mp at webfactory dot de
For the record, the Apache manual at https://httpd.apache.org/docs/current/mod/mod_proxy_fcgi.html#examples says:

--
Enable connection reuse to a FCGI backend like PHP-FPM

Please keep in mind that PHP-FPM (at the time of writing, February 2018) uses a prefork model, namely each of its worker processes can handle one connection at the time.
By default mod_proxy (configured with enablereuse=on) allows a connection pool of ThreadsPerChild connections to the backend for each httpd process when using a threaded mpm (like worker or event), so the following use cases should be taken into account:

Under HTTP/1.1 load it will likely cause the creation of up to MaxRequestWorkers connections to the FCGI backend.

Under HTTP/2 load, due to how mod_http2 is implemented, there are additional h2 worker threads that may force the creation of other backend connections. The overall count of connections in the pools may raise to more than MaxRequestWorkers.

The maximum number of PHP-FPM worker processes needs to be configured wisely, since there is the chance that they will all end up "busy" handling idle persistent connections, without any room for new ones to be established, and the end user experience will be a pile of HTTP request timeouts.
--

My interpretation of this is that with enablereuse=on, every Apache MPM worker process/thread can hold an open connection to PHP-FPM. In the PHP-FPM status page, those will show up as "Reading headers..." (or similar). To PHP-FPM, these processes appear to be active (waiting), and so they are not terminated.
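(For reference, the per-process state mentioned above can be checked via the FPM status page; a minimal sketch, assuming the web server routes /status to this pool:)

; in the pool configuration (e.g. www.conf)
pm.status_path = /status

# after reloading PHP-FPM, request the full per-process listing:
curl "http://localhost/status?full"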
 [2020-02-27 16:38 UTC] jleon1984 at gmail dot com
Same problem with NGINX + APACHE + PHP-FPM:
OS: Ubuntu 18.04.1 LTS
NGINX: nginx version: nginx/1.14.0 (Ubuntu)
APACHE: Apache/2.4.29 (Ubuntu)
PHP: PHP Version 7.0.32-4
PHP API: 20151012

CONF
----
pm = ondemand
pm.max_children = 101
pm.process_idle_timeout = 10s

FPM-STATUS
----------
process manager:      ondemand
start time:           24/Feb/2020:11:42:09 +0100
start since:          280125
accepted conn:        2243104
listen queue:         0
max listen queue:     0
listen queue len:     0
idle processes:       48
active processes:     1
total processes:      49
max active processes: 101
max children reached: 6
slow requests:        20

PROCESSES (STARTED 4 HOURS AGO)
---------------------------------
************************
pid:                  10289
state:                Idle
start time:           27/Feb/2020:12:35:41 +0100
start since:          17890
requests:             3428
request duration:     2271759
request method:       GET
request URI:          ****************
content length:       0
user:                 -
script:               ****************
last request cpu:     3.96
last request memory:  4194304

************************
pid:                  10290
state:                Idle
start time:           27/Feb/2020:12:35:41 +0100
start since:          17890
requests:             3408
request duration:     111484
request method:       GET
request URI:          ****************
content length:       0
user:                 -
script:               ****************
last request cpu:     8.97
last request memory:  2097152

************************
pid:                  10291
state:                Idle
start time:           27/Feb/2020:12:35:41 +0100
start since:          17890
requests:             3399
request duration:     7003
request method:       GET
request URI:          ****************
content length:       0
user:                 -
script:               ****************
last request cpu:     0.00
last request memory:  2097152
 [2021-12-04 18:33 UTC] bukka@php.net
-Type: Bug +Type: Feature/Change Request -Assigned To: +Assigned To: bukka
 [2021-12-04 18:33 UTC] bukka@php.net
I'm going to change this to a feature request, as it might be useful to have some logic for closing inactive connections. It is at least worth investigating.
 