Request #71832 Support for persistent connections
Submitted: 2016-03-15 19:39 UTC Modified: 2021-07-07 12:38 UTC
Votes:14
Avg. Score:4.6 ± 0.9
Reproduced:12 of 12 (100.0%)
Same Version:10 (83.3%)
Same OS:-6 (-50.0%)
From: rmoisto at gmail dot com Assigned:
Status: Suspended Package: cURL related
PHP Version: Irrelevant OS:
Private report: No CVE-ID: None
 [2016-03-15 19:39 UTC] rmoisto at gmail dot com
Description:
------------
There should be a way of persisting and caching cURL http(s) connections between requests.

Trying to find a solution, it seems the nature of PHP doesn't currently allow for this in any way, without writing an extension of course. Every time a request comes in I have to create a new cURL handle and therefore a new connection at least once.
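For context on what is and isn't possible today: within a single PHP request, libcurl does reuse connections as long as the same handle is used for multiple transfers; the limitation is that the handle, and its connection pool, die with the request. A minimal sketch (URLs are placeholders):

```php
<?php
// Reusing one cURL handle lets libcurl keep the TCP/TLS connection
// alive between transfers *within the same PHP request*.
$ch = curl_init();
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TCP_KEEPALIVE  => 1, // send TCP keep-alive probes
]);

foreach (['https://example.com/a', 'https://example.com/b'] as $url) {
    curl_setopt($ch, CURLOPT_URL, $url);
    $body = curl_exec($ch);
    // On the second transfer this is typically ~0, because the
    // connection opened for the first transfer is reused.
    $connectTime = curl_getinfo($ch, CURLINFO_CONNECT_TIME);
}
curl_close($ch);
// Once the request ends, the handle and its pooled connections are gone;
// that cross-request persistence is what this feature request asks for.
```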

It's a big performance issue on proxy systems like the one I'm working on. For example, even in my very fast local network, the time spent establishing a connection is around 60 ms.

The issue and solution are much better explained in the description of an extension that only exists for HHVM:
https://github.com/mobfox/hhvm_ext_pcurl


History
 [2018-02-28 12:05 UTC] rmoisto at gmail dot com
As a workaround I'm in the process of building an nginx reverse proxy that can hold persistent connections. The performance gains for EU - US connections are massive. Even with the nginx proxy overhead requests are on average 900 ms faster.
 [2018-02-28 12:38 UTC] spam2 at rhsoft dot net
> For example even in my very fast local network the 
> time spent establishing a connection is around 60 ms

Then your error is somewhere else.
The 26 ms at the bottom are non-keep-alive requests over VPN and WAN:

[harry@srv-rhsoft:~]$ response-times.sh
DOMAIN:                   http://corecms
COUNT:                    250
CMS UNCACHED:             2370 us
CMS CACHED:               978 us
STATIC HTM:               737 us
STATIC PHP:               879 us
FACTOR CMS/STATIC HTM:    3.2
FACTOR CMS/STATIC PHP:    2.7
FACTOR CACHED/STATIC HTM: 1.3
FACTOR CACHED/STATIC PHP: 1.1
FACTOR STATIC PHP/STATIC: 1.2
RUNTIME:                  1.778

Concurrency Level:      1
Time taken for tests:   26.715 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      4074590 bytes
HTML transferred:       3777476 bytes
Requests per second:    37.43 [#/sec] (mean)
Time per request:       26.715 [ms] (mean)
Time per request:       26.715 [ms] (mean, across all concurrent requests)
Transfer rate:          148.95 [Kbytes/sec] received
 [2018-02-28 12:43 UTC] spam2 at rhsoft dot net
And within the LAN it's 1.4 ms (and that's fetching the whole page, not just establishing the connection); with HTTPS it's below 5 ms.

Concurrency Level:      1
Time taken for tests:   1.349 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      4075053 bytes
HTML transferred:       3778000 bytes
Requests per second:    741.49 [#/sec] (mean)
Time per request:       1.349 [ms] (mean)
Time per request:       1.349 [ms] (mean, across all concurrent requests)
Transfer rate:          2950.78 [Kbytes/sec] received

Concurrency Level:      1
Time taken for tests:   4.978 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      4038052 bytes
HTML transferred:       3778000 bytes
Requests per second:    200.88 [#/sec] (mean)
Time per request:       4.978 [ms] (mean)
Time per request:       4.978 [ms] (mean, across all concurrent requests)
Transfer rate:          792.14 [Kbytes/sec] received
 [2018-02-28 13:06 UTC] rmoisto at gmail dot com
I'm not sure where I got that 60 ms from. Currently curl is reporting connection time to another server on the local network as 5 ms. 4 ms of that is name lookup. The total request time is 6 ms.
It would already be a bonus if curl didn't query the name server each time and cached the results for a bit. Keeping a persistent connection would be even better.
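For what it's worth, libcurl's per-handle DNS cache can be tuned, and a hostname can be pinned to an IP to skip the lookup entirely. A hedged sketch (hostname and IP are placeholders, not from this report):

```php
<?php
$ch = curl_init('https://api.example.com/status');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER    => true,
    // libcurl caches resolved names per handle; the default is 60 s.
    CURLOPT_DNS_CACHE_TIMEOUT => 300,
    // Or skip the lookup entirely by pinning host:port to an address
    // (hypothetical IP; use your origin's real one).
    CURLOPT_RESOLVE           => ['api.example.com:443:203.0.113.10'],
]);
$body = curl_exec($ch);
curl_close($ch);
```

Note that, like the connection pool, this cache is per handle, so it too is lost when the request ends.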
 [2018-02-28 13:51 UTC] spam2 at rhsoft dot net
On a proper system the 4 ms should only hit once. Normally you have a local DNS-caching resolver on 127.0.0.1 on servers where it matters, which either does recursion directly or forwards to a fast nameserver as near as possible, but caches results anyway.
 [2018-05-15 08:43 UTC] rmoisto at gmail dot com
In the meantime I've found out that DNS caching already exists and is enabled by default. But still, measured by cURL, request times through a proxy that can hold keep-alive connections are anywhere from 100 ms to 800 ms faster. These requests go all over the world in parallel. That proxy, however, is a pain to maintain, and I'm sure doing this directly in PHP would be even faster.
 [2020-10-13 14:08 UTC] e6990620 at gmail dot com
Just chiming in to insist that in general this feature would be very beneficial for PHP systems that act as proxies or API gateways, as they currently have to set up a new connection to the same origin servers again and again for each incoming request.

In the worst cases each of these connections involves a DNS lookup, TCP setup and TLS negotiation to far-away servers before the first byte can be sent. The savings from being able to reuse a pool of these connections between PHP-FPM requests could be massive. For now the only way I know to achieve this is by ditching PHP-FPM altogether and going with a CLI-based server such as RoadRunner or ReactPHP (but this introduces its own set of problems, such as likely memory leaks that can bring the application down).

The same feature already exists in PDO (PDO::ATTR_PERSISTENT) for practically the same reason.
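For comparison, the PDO analogue looks like this; the connection survives the end of the request and is reused by later requests served by the same worker process (DSN and credentials here are placeholders):

```php
<?php
// PDO::ATTR_PERSISTENT keeps the database connection open across
// requests handled by the same PHP-FPM worker, instead of opening
// and closing it on every request.
$pdo = new PDO(
    'mysql:host=db.example.com;dbname=app',
    'user',
    'secret',
    [PDO::ATTR_PERSISTENT => true]
);
```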
 [2021-07-07 12:38 UTC] cmb@php.net
-Status: Open +Status: Suspended
 [2021-07-07 12:38 UTC] cmb@php.net
> The same feature already exists in PDO (PDO::ATTR_PERSISTENT)
> for practically the same reason.

And it has the known issue that persistent connections are stored
per process/thread.  I would expect the same when caching cURL
handles, and as such this wouldn't be quite as efficient as
desired.  And there are certainly more "details" to be
considered/discussed, so this feature needs the RFC process[1].

[1] <https://wiki.php.net/rfc/howto>
 