Bug #8102 cURL causes file descriptors to overflow
Submitted: 2000-12-04 15:09 UTC Modified: 2001-01-01 15:48 UTC
From: dave at backbeatmedia dot com Assigned:
Status: Closed Package: cURL related
PHP Version: 4.0.3pl1 OS: Linux
Private report: No CVE-ID: None

 
 [2000-12-04 15:09 UTC] dave at backbeatmedia dot com
I am running cURL (curl 7.4.2 (i686-pc-linux-gnu) libcurl 7.4.2) on a Linux machine (Linux linux924.dn.net 2.2.13 #1 SMP Tue Oct 26 11:53:39 EDT 1999 i686 unknown) with PHP-4.0.3pl1. 

I have cURL support compiled into PHP, which is compiled as a module into Apache. Almost every request to Apache requires that PHP go out and get the code from our ad server to place upon the page. I used to use readfile() within PHP for this; it worked fine, but didn't give me the ability to control timeouts. cURL does, but has been causing me major problems with the way that it deals with file descriptors. In a nutshell, it doesn't seem to let anything re-use its old file descriptors (including itself), and forces the system to keep allocating more and more file descriptors until it has hit the max specified in /proc/sys/fs/file-max.
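For context, the kind of switch described above looks roughly like the following (the URL and timeout value here are placeholders, not the actual production script):

<?php
// Old approach: stream the ad server's response straight into the page.
// Works, but there is no way to bound how long the fetch may take.
readfile('http://adserver.example.com/serve.php');

// New approach: the same fetch through the cURL extension, with a timeout.
$ch = curl_init('http://adserver.example.com/serve.php');
curl_setopt($ch, CURLOPT_TIMEOUT, 5);   // give up after 5 seconds
curl_exec($ch);                         // prints the response, like readfile()
curl_close($ch);
?>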


My tests were as follows: 

1) I ran with the "old" readfile()-based script and monitored /proc/sys/fs/file-nr (a short sketch for reading its columns appears after the test descriptions). With this test, the number of active file descriptors (the 2nd number in the output) would remain at a fairly consistent level. It would vary a bit, of course, but only by a few hundred file descriptors up or down. This method never allocated too many file descriptors; it just seemed to re-use the ones that had been rendered inactive by old requests. No problems (except that readfile() doesn't let me define timeouts and such, which is why I'm trying cURL).

2) I ran with the new, cURL-based script and again monitored /proc/sys/fs/file-nr. This time, the number of active file descriptors (again the 2nd number) would start high (as it did with method #1) and then immediately begin plummeting towards 0... fast. As soon as it got to 0 (or close to it), the number of allocated file descriptors (the first number in the output of 'file-nr') would start to climb, while the 2nd number (the descriptors actually in use) would stay at less than 10.
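For anyone wanting to watch the same counters, something along these lines (a rough sketch assuming the 2.2-kernel layout of file-nr: allocated handles, handles in use, maximum) can be run periodically during the test:

<?php
// Read /proc/sys/fs/file-nr and split its three whitespace-separated columns.
$lines = file('/proc/sys/fs/file-nr');
list($allocated, $used, $max) = preg_split('/\s+/', trim($lines[0]));
echo "allocated=$allocated in-use=$used max=$max\n";
?>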

From this test it appears that the cURL module prevents any file descriptor it has used from ever being re-used, and forces the system to open new file descriptors for every single request. Once this hits the maximum, of course, the whole thing falls apart.

Also, I noticed that there were FAR more Apache processes running with method #2 than with #1. After about an hour of method #2, Apache would have forked the maximum number of processes allowed by the MaxClients directive in httpd.conf.

It seems as though there's something fishy about the way the cURL extension interacts with PHP running as an Apache module, all but rendering it useless. I'm hoping I just have something configured incorrectly, because I would love to take advantage of cURL.

History

 [2000-12-04 15:40 UTC] sterling@php.net
Try the latest CVS and let me know if this still happens.
 [2001-01-01 15:48 UTC] sterling@php.net
Reported fixed by the user.
 