Bug #13437 PHP cores on exit; memory deallocation problem?
Submitted: 2001-09-25 13:55 UTC Modified: 2002-08-05 01:00 UTC
Avg. Score:4.7 ± 0.7
Reproduced:9 of 9 (100.0%)
Same Version:3 (33.3%)
Same OS:6 (66.7%)
From: dshadow at zort dot net Assigned:
Status: No Feedback Package: Reproducible crash
PHP Version: 4.1.1 OS: Linux & Solaris
Private report: No CVE-ID: None

 [2001-09-25 13:55 UTC] dshadow at zort dot net
When I unserialize a ~7MB object that contains several levels of nested objects and arrays, PHP displays abnormal behavior. (Script #1)

First, when script execution is complete, PHP uses 100% of the CPU until it has consumed the limit set by set_time_limit(). At this point, it segfaults with the following backtrace (Backtrace #1).

When compiled into Apache, this causes the memory footprint of each child process to skyrocket; the memory is not freed until the child exits. Over time, this has resulted in Apache using 70MB * 10 children = 700MB of RAM.

Additionally: I have experienced random crashes when PHP (4.0.4pl1) exits on Solaris. Since I cannot reproduce this consistently, I can't provide a sample script that always exhibits the problem, but the script that crashes does use MySQL and does NOT use unserialize() at all. I am including this problem in this report because both crash in the same function while PHP is doing the same thing (shutting down).

Backtrace #1 - Linux / php4-200109251035

./configure  --with-mysql=/usr/local/mysql --enable-track-vars --with-xml --with-imap=/usr --with-zlib-dir=/usr --with-ttf=/usr --enable-bcmath --with-kerberos=/usr/kerberos --with-openssl=/usr

Program received signal SIGSEGV, Segmentation fault.
0x80ee455 in _efree (ptr=0xa585b54) at zend_alloc.c:240
240             REMOVE_POINTER_FROM_LIST(p);
(gdb) bt
#0  0x80ee455 in _efree (ptr=0xa585b54) at zend_alloc.c:240
#1  0x80ee7eb in shutdown_memory_manager (silent=1, clean_cache=1)
    at zend_alloc.c:469
#2  0x806affe in php_module_shutdown () at main.c:1008
#3  0x8069ba9 in main (argc=2, argv=0xbffffbf4) at cgi_main.c:787

Backtrace #2: Solaris / php 4.0.4pl1

./configure  --with-mysql=/apps/mysql --enable-track-vars --with-xml --enable-bcmath

#0  0x89074 in _efree (ptr=0x14d1c0) at zend_alloc.c:232
232             REMOVE_POINTER_FROM_LIST(p);
(gdb) bt
#0  0x89074 in _efree (ptr=0x14d1c0) at zend_alloc.c:232
#1  0x9ad48 in zend_hash_destroy (ht=0x158008) at zend_hash.c:569
#2  0x962f8 in _zval_dtor (zvalue=0x14a328) at zend_variables.c:69
#3  0x8e9f8 in _zval_ptr_dtor (zval_ptr=0x14acf4) at zend_execute_API.c:261
#4  0x9acdc in zend_hash_destroy (ht=0x11fdf4) at zend_hash.c:564
#5  0x8e824 in shutdown_executor () at zend_execute_API.c:165
#6  0x96ffc in zend_deactivate () at zend.c:525
#7  0x24c38 in php_request_shutdown (dummy=0x0) at main.c:688
#8  0x23a78 in main (argc=3, argv=0xeffffd34) at cgi_main.c:771

Script #1:
#!/usr/local/bin/php -q
<?php
$fn = '/path/to/very-large-serialized.file';
$fd = fopen($fn, 'r');
$str = fread($fd, filesize($fn));
fclose($fd);
$us = unserialize($str);
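For anyone trying to reproduce this without the original data file, here is a hypothetical companion script that writes a multi-megabyte serialized nested structure for Script #1 to read. The class name, tree depth, and payload sizes are my own guesses, not taken from the report; they are only tuned to land in the multi-megabyte range described above.

```php
<?php
// Build a tree of nested objects and arrays, then serialize it to disk.
// Depth/width below are assumptions chosen to produce a payload of a few MB.
class Node {
    var $children = array();
    var $payload;
}

function make_tree($depth, $width) {
    $n = new Node();
    $n->payload = str_repeat('x', 64);
    if ($depth > 0) {
        for ($i = 0; $i < $width; $i++) {
            $n->children[] = make_tree($depth - 1, $width);
        }
    }
    return $n;
}

$tree = make_tree(5, 8);    // roughly 37k nodes in total
$str  = serialize($tree);
$fd   = fopen('very-large-serialized.file', 'w');
fwrite($fd, $str);
fclose($fd);
echo "wrote " . strlen($str) . " bytes\n";
```

Pointing Script #1's $fn at the file this writes should give a payload in the same ballpark as the one described in the report.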


 [2001-10-10 22:17 UTC]
Confirmed on Redhat 7.2

No crash, but print_r($us) at the end of this script displays no output.
 [2001-12-03 15:58 UTC] dshadow at zort dot net
This problem is still happening on 4.1RC5, though it seems to be doing a little better than before. However, it's also ignoring the time value I pass to set_time_limit(), but only when it runs out of time during clean-up.

#0  0x8107f05 in _efree (ptr=0xa62065c) at zend_alloc.c:240
240             REMOVE_POINTER_FROM_LIST(p);
(gdb) bt
#0  0x8107f05 in _efree (ptr=0xa62065c) at zend_alloc.c:240
#1  0x810829b in shutdown_memory_manager (silent=1, clean_cache=1) at zend_alloc.c:469
#2  0x807169e in php_module_shutdown () at main.c:1007
#3  0x8070239 in main (argc=3, argv=0xbffffc44) at cgi_main.c:788

 [2001-12-03 17:46 UTC]
I've seen this as well. The time limit you set is removed when the script finishes, but before the memory is cleaned up. What I did to avoid the problem was to change the code in php_request_shutdown() in main/main.c. The end of the function looks like:

zend_try { 
        shutdown_memory_manager(CG(unclean_shutdown), 0);
} zend_end_try();

zend_try { 
        zend_unset_timeout();
} zend_end_try();

I switched the order of the timeout reset and the memory shutdown, and then it worked
for me. It still takes a long time, but you don't get a
timeout. I'm not sure this is the correct fix, but maybe
you want to test and confirm that it helps?

 [2001-12-04 10:54 UTC] dshadow at zort dot net
This helps with my problem of Apache children being left in unusable states with large memory allocations. However, it is still entirely unreasonable that memory that takes only a few seconds to allocate should need several minutes to be disposed of. I just watched PHP take *six* minutes to dispose of 70MB of memory it had allocated, and on prior occasions when no one was watching, I've found it still running *hours* later, cleaning up after itself.
 [2001-12-10 22:32 UTC]
Could you try calling apache_child_terminate() at the end 
of your script?

 [2001-12-11 09:51 UTC] dshadow at zort dot net
I tried using apache_child_terminate() as suggested, however, this doesn't help any. (It might be beneficial if this function and its required configuration option were documented somewhere.)
 [2002-02-04 02:20 UTC]
The version of PHP that this bug was reported in is too old. Please
try to reproduce this bug in the latest version of PHP (available

If you are still able to reproduce the bug with one of the latest
versions of PHP, please change the PHP version on this bug report
to the version you tested and change the status back to "Open".
 [2002-02-04 16:06 UTC] dshadow at zort dot net
This problem still happens with PHP 4.1.1.

The edit submission page does not permit me to reopen bugs; I can't change that status. Can someone else please reopen this? Thanks.
 [2002-02-04 16:21 UTC]
On user request status => open
 [2002-07-04 16:18 UTC]
Thank you for taking the time to report a problem with PHP.
Unfortunately you are not using a current version of PHP -- 
the problem might already be fixed. Please download a new
PHP version from

If you are able to reproduce the bug with one of the latest
versions of PHP, please change the PHP version on this bug report
to the version you tested and change the status back to "Open".
Again, thank you for your continued support of PHP.

 [2002-08-05 01:00 UTC] php-bugs at lists dot php dot net
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
 [2002-11-27 11:04 UTC] kennon at retirequickly dot com
This same behavior is happening to me using PHP version 4.1.2.

I have a script that has several very large arrays (~200MB each), and after the script is finished, the php process begins to max out the processor, and can take hours to finish.

Sometimes the script will stop with a "Maximum execution time of 30 seconds in file Unknown" error, even though I have the max time set to 1 hour.

I even tried to manually unset() the large variables, and while the unset() calls appear to be successful, the same behavior is still present.

This feels like exactly the same bug, but I can provide a backtrace if necessary.

I'm running Redhat 7.3 on a dual-processor Intel system.
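For what it's worth, the unset() behavior described above can be checked in isolation with a diagnostic sketch like the one below. This is my own snippet, not kennon's script; note that on PHP 4, memory_get_usage() is only available when PHP was compiled with --enable-memory-limit.

```php
<?php
// Measure engine-reported memory before building a large array,
// after building it, and again after unset()-ing it.
$baseline = memory_get_usage();

$big = array();
for ($i = 0; $i < 100000; $i++) {
    $big[] = str_repeat('x', 32);
}
$allocated = memory_get_usage();

unset($big);
$freed = memory_get_usage();

printf("baseline: %d, allocated: %d, after unset: %d\n",
       $baseline, $allocated, $freed);
// If unset() releases the memory, the last number should drop back
// toward the baseline; the pathology reported here shows up at
// request shutdown instead, after the script proper has finished.
```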

 [2003-07-02 02:19 UTC] morgan at mindviz dot com
I am currently doing some work with PHP for a server 
application. At times I will read in roughly 1-2MB of 
data and unserialize it; at that point the memory usage 
shown in top spikes up to 10MB and sits there 
indefinitely. I have rerun the program without the 
unserialize() call, still loading the data into a 
string, and there is no jump to 10MB.

Is there some memory leak issue in the unserialize 
routine?
This is pretty critical for me; every time I call 
unserialize() on the large data set, memory usage 
jumps a bit more beyond the 10MB and keeps doing that 
until my entire system is exhausted of memory. At this 
point I am considering writing my own unserialize 
routine to see if it helps.

I have tried this with versions 4.2.3, 4.3.1, and 4.3.2 
and get the same results.

Any help would be great. Thanks.
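The measurement described above can be approximated with a self-contained sketch like the following. The array shape and string sizes are my assumptions, chosen only to land in the 1-2MB range mentioned in the comment.

```php
<?php
// Build ~1-2MB of serialized data in memory, then watch what a
// single unserialize() call costs according to memory_get_usage().
$data = array();
for ($i = 0; $i < 20000; $i++) {
    $data['k' . $i] = array(str_repeat('x', 48), $i);
}
$str = serialize($data);
unset($data);

$before = memory_get_usage();
$us = unserialize($str);
$after = memory_get_usage();

printf("serialized: %d bytes, unserialize cost: %d bytes\n",
       strlen($str), $after - $before);
// On an affected build, repeating the unserialize() call in a loop
// should show the reported cost climbing instead of staying flat.
```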
 [2005-01-06 18:45 UTC] tomi at vacilando dot org
Same deal... this bug seems to have returned in PHP 5.0.3!! I have a serialized 3D array with 30000 elements that is something like 3.1 MB long. unserialize() of this takes several minutes, which is absolutely unacceptable. In PHP 4.3.3 the same takes a split second...