Request #77959 Scheduling of PHP-FPM processes in "ondemand"
Submitted: 2019-05-02 09:58 UTC Modified: -
Avg. Score:3.0 ± 0.0
Reproduced:1 of 1 (100.0%)
Same Version:0 (0.0%)
Same OS:0 (0.0%)
From: mp at webfactory dot de Assigned:
Status: Open Package: FPM related
PHP Version: 7.2.17 OS: Ubuntu 18.04.2 LTS
Private report: No CVE-ID: None


 [2019-05-02 09:58 UTC] mp at webfactory dot de
I am using PHP-FPM with the "ondemand" process model, and

pm.max_requests = 500
pm.process_idle_timeout = 120
pm.max_children = 50

(If the exact reasoning why I am using this process model or these numbers matters, let me know and I'll amend the bug description.)

From time to time, my server faces load spikes that can lead to a few dozen PHP-FPM processes running simultaneously. The baseline load is about 2 PHP-FPM reqs/s.

What I have observed is that after the spikes, I have a large number of FPM processes that never go away. I would have expected that some time after handling a load spike, the number of running processes should be very low, around what is needed to work the baseline load.

I think the explanation is that PHP-FPM allocates work to worker processes in a round-robin fashion. Thus, every single running process gets something to do before the 120s timeout is reached. This is supported by the fact that on the status page, the number of requests processed is roughly the same for all children.

I still have no clue why pm.max_requests does not seem to help. After some time, all the processes should have served their 500 requests and been terminated. Unless a replacement is started immediately (regardless of load), this should make the number of running (but idle) processes return to "a few".
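To illustrate the suspected interaction, here is a small toy simulation (not FPM code; the numbers are taken from this report's configuration): 50 workers served round-robin at 2 req/s, a 120 s idle timeout, and an immediate respawn whenever a worker hits pm.max_requests. No worker ever idles long enough to be reaped, and the respawns keep the pool at full size.

```python
# Hypothetical simulation of the behaviour described above. All numbers
# are illustrative; this is not a model of the actual FPM event loop.

def simulate(workers=50, req_rate=2.0, idle_timeout=120.0,
             max_requests=500, duration=36000.0):
    interval = 1.0 / req_rate          # time between incoming requests
    last_served = [0.0] * workers      # last dispatch time per worker
    served = [0] * workers             # requests served since (re)spawn
    respawns = 0
    reaped = 0
    t, nxt = 0.0, 0
    while t < duration:
        # round-robin: the next worker in line gets the request
        w = nxt % workers
        if t - last_served[w] > idle_timeout:
            reaped += 1                # would have been reaped while idle
        last_served[w] = t
        served[w] += 1
        if served[w] >= max_requests:  # worker exits, replacement starts now
            served[w] = 0
            respawns += 1
        nxt += 1
        t += interval
    return reaped, respawns

reaped, respawns = simulate()
print(reaped, respawns)
```

Under round-robin each worker sees one request every 50 / 2 = 25 s, well under the 120 s idle timeout, so `reaped` stays at zero for the whole run even though every worker is idle 98% of the time.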

So, my request is:

- Can anybody confirm processes are scheduled round-robin?
- When max_requests is reached, is a replacement process launched immediately, or is it up to the process management model to launch a replacement when it sees fit?
- Would it be possible to change the scheduling model to "use the oldest or youngest idle child" without severely impacting scheduling performance?
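Regarding the last point, a toy comparison of the current round-robin pick against a hypothetical "youngest idle child" (LIFO) pick; this is an idealized model, not a proposal for the actual accept() path:

```python
# Toy model: with LIFO ("youngest idle child"), the same worker absorbs
# the whole baseline load and the remaining workers accumulate enough
# idle time to be reaped by the idle timeout. Purely illustrative.

def longest_idle_gap(workers, req_rate, duration, lifo):
    interval = 1.0 / req_rate
    last = [0.0] * workers
    t, rr = 0.0, 0
    while t < duration:
        w = 0 if lifo else rr % workers   # LIFO: always the freshest worker
        last[w] = t
        rr += 1
        t += interval
    # largest idle gap any worker has accumulated by the end of the run
    return max(t - x for x in last)

rr_gap = longest_idle_gap(50, 2.0, 3600.0, lifo=False)
lifo_gap = longest_idle_gap(50, 2.0, 3600.0, lifo=True)
print(rr_gap, lifo_gap)   # round-robin keeps every gap small; LIFO starves the rest
```

With LIFO, 49 of the 50 workers exceed any reasonable idle timeout and could be reclaimed; with round-robin, none ever do.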


 [2019-05-02 10:15 UTC] mp at webfactory dot de
Regarding replacement processes, the answer lies in fpm_children_bury().

Children that have exited or been terminated are replaced with new ones immediately, unless the process management model is "dynamic" and the child has been killed for being idle.
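A rough paraphrase of that decision rule (a Python sketch of the logic described above, not the actual C source of fpm_children_bury()):

```python
# Sketch of the respawn decision as described in this comment; the real
# implementation lives in sapi/fpm/fpm/fpm_children.c.

def should_respawn(pm_style, killed_for_idle):
    """A dead child is replaced immediately, unless the pool is 'dynamic'
    and the child was deliberately killed for being idle."""
    return not (pm_style == "dynamic" and killed_for_idle)

print(should_respawn("ondemand", False))  # a worker that hit pm.max_requests
```

This is why pm.max_requests does not shrink an "ondemand" pool: the exiting worker is replaced at once, regardless of current load.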
 [2019-05-02 20:00 UTC]
The following pull request has been associated:

Patch Name: FPM: For pm = ondemand, don't respawn processes that reached pm.max_requests
On GitHub:
 [2019-05-03 11:07 UTC] mp at webfactory dot de
seems to confirm that the accept()-based model of PHP-FPM distributes load among workers in a round-robin fashion.

So, pm.process_idle_timeout seems to be a very ineffective way of terminating processes, at least as long as the average time between two requests is smaller than timeout / number_of_workers.
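With the numbers from this report, the arithmetic behind that condition looks like this (illustrative only):

```python
# Under round-robin dispatch, each worker sees one request roughly every
# workers / req_rate seconds. The idle timeout can only ever fire once
# that per-worker gap exceeds pm.process_idle_timeout.

def idle_gap(workers, req_rate):
    return workers / req_rate

gap = idle_gap(50, 2.0)   # 25 s between requests per worker
print(gap, gap > 120.0)   # never reaches the 120 s timeout
```

Equivalently, the timeout stays ineffective as long as the average time between two requests (0.5 s here) is below timeout / number_of_workers (120 / 50 = 2.4 s).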
 [2022-03-07 16:59 UTC] contact at sshilko dot com
Similar issue #77060.
For me, splitting into multiple php-fpm pools for different purposes partially solved the issue. The number of workers became more "responsive" to the traffic.
PHP Copyright © 2001-2024 The PHP Group
All rights reserved.
Last updated: Thu May 30 11:01:31 2024 UTC