[2001-03-31 09:56 UTC] jakub at icewarp dot com
When using the fgets function for large data transmission, the transmitted data is never freed from memory. When using fgets with files it works fine. After a few read cycles the PHP CGI module starts to utilize 100% CPU, stops processing and never returns from the fgets function. This happens only with the Windows PHP CGI; we tested the Apache module and that worked fine. The previous version of PHP did the same. By large data I mean 8 MB and above.
The code snippet: test.php:

<!--
This is a PHP POP3 socket test that demonstrates a Windows PHP CGI bug
which consumes the data transferred through the socket and never releases
it. Each round of the for loop below gets slower and slower and never
reaches the end.

To test it, edit the POP3 server variables below. Make sure there is only
one 500K message in the POP3 mailbox, then run the test. Currently PHP
gives up in the 11th cycle. On Windows 9x PHP stops running. On Windows
NT/2000 PHP goes crazy, consumes 100% CPU and never finishes.

Please contact me: Jakub Klos jakub@icewarp.com
-->
<HTML>
<HEAD>
<TITLE>Test</TITLE>
</HEAD>
<BODY>
PHP POP3 Socket Test<BR><BR>
<?
$server = "localhost";
$port   = 110;
$user   = "user";
$pass   = "pass";
$round  = 20;

// connect to the POP3 server and read the greeting
$socket = fsockopen($server, $port);
$line = fgets($socket, 1024);

// USER and PASS commands
fputs($socket, "USER $user\r\n");
$line = fgets($socket, 1024);
fputs($socket, "PASS $pass\r\n");
$line = fgets($socket, 1024);

$length = 0;
for ($i = 1; $i < $round + 1; $i++) {
    // RETR command to receive the message
    fputs($socket, "RETR 1\r\n");
    $line = fgets($socket, 1024);
    if ($line[0] == "-")
        break;
    while (($line = fgets($socket, 1024)) <> ".\r\n") {
        if (!($line))
            break;
        $length += strlen($line);
    }
    echo "Round $i/$round, Transferred $length<BR>";
    flush();
}
fclose($socket);
echo "Total bytes received: $length";
?>
</BODY>
</HTML>

Ok, the problem is that the current implementation of the PHP socket reading functions keeps all the data read from a socket in memory. Under Linux this is not a problem even with relatively large chunks of data (I tested with ~20 MB and there was no significant delay), but with awkward operating systems like MS Windows it causes significant delays. I do not see any reason to keep the data read from a socket for as long as the socket exists (except if someone is planning to implement fseek on a socket ;) ). Therefore the whole bunch of functions should be rewritten. A quick hack that works around this bug is to move the memory after each fgets read, thus never allocating more than one block. This is the patch:

--- /php-4.0.5/ext/standard/fsock.c	Thu Apr 26 17:07:58 2001
+++ fsock.c	Thu May 31 11:54:53 2001
@@ -644,7 +645,9 @@
 	if(amount > 0) {
 		memcpy(buf, READPTR(sock), amount);
-		sock->readpos += amount;
+/*		sock->readpos += amount; */
+		memmove(sock->readbuf, sock->readbuf + amount, sock->readbuflen - amount);
+		sock->writepos -= amount;
 	}
 	buf[amount] = '\0';
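For reference, below is a minimal, self-contained sketch of the same "compact the read buffer after every read" idea the patch applies. It is only loosely modeled on php_sockbuf: the field set is trimmed and the helper name consume_and_compact is made up for illustration, so treat it as the technique, not the actual fsock.c code.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for PHP 4's buffered socket structure; the real
 * php_sockbuf in ext/standard/fsock.c has more fields than this. */
struct sockbuf {
    unsigned char *readbuf;    /* bytes received from the socket        */
    size_t         readbuflen; /* allocated size of readbuf             */
    size_t         writepos;   /* end of valid (not yet consumed) data  */
};

/* The idea behind the quick hack: after handing `amount` bytes to the
 * caller, shift the remaining bytes to the front of the buffer instead of
 * only advancing a read pointer, so already-consumed data does not pile up
 * for the lifetime of the socket.  (The posted patch moves
 * readbuflen - amount bytes; moving writepos - amount, the valid
 * remainder, is sufficient.) */
static void consume_and_compact(struct sockbuf *s, unsigned char *out, size_t amount)
{
    memcpy(out, s->readbuf, amount);
    memmove(s->readbuf, s->readbuf + amount, s->writepos - amount);
    s->writepos -= amount;
}

int main(void)
{
    unsigned char out[4];
    struct sockbuf s;

    s.readbuflen = 16;
    s.readbuf = malloc(s.readbuflen);
    memset(s.readbuf, 'x', s.readbuflen);
    s.writepos = s.readbuflen;            /* pretend 16 bytes arrived */

    consume_and_compact(&s, out, sizeof out);
    consume_and_compact(&s, out, sizeof out);

    /* Only the unconsumed bytes remain buffered (8 here); with the original
     * code all 16 would still be held until the socket is closed. */
    printf("bytes still buffered: %zu\n", s.writepos);

    free(s.readbuf);
    return 0;
}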