[2021-02-24 16:38 UTC] terrafrost@php.net
Description:
------------
decbin(floatval(0xFFFFFFFF)) produces an error on 32-bit PHP 8 installs, but decbin(floatval(0x7FFFFFFF)) doesn't. On 32-bit installs, 0xFFFFFFFF is outside the range of a signed 32-bit integer, so it can't be represented as anything other than a float, but still... I wonder if allowing floats might not be a bad idea? This is, in particular, an issue on Raspberry Pis. If it's intended behavior, that's cool too; it just seems like it could be an oversight.

Test script:
---------------
<?php
echo decbin(floatval(0xFFFFFFFF)) . "\n"; // errors out on PHP 8; works fine on PHP 7.4
echo decbin(floatval(0x7FFFFFFF)) . "\n"; // works fine on PHP 8 and PHP 7.4

Expected result:
----------------
11111111111111111111111111111111

Actual result:
--------------
Fatal error: Uncaught TypeError: decbin(): Argument #1 ($num) must be of type int, float given
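For what it's worth, the requested behavior can be modeled in user land by truncating the float and checking the 32-bit unsigned range. A minimal Python sketch of that idea (Python used only for illustration; `decbin32` is a hypothetical helper, not part of PHP):

```python
def decbin32(num: float) -> str:
    """Model of a decbin() that tolerates floats holding 32-bit unsigned
    values, as a 32-bit PHP build produces for the literal 0xFFFFFFFF."""
    n = int(num)  # drop any fractional part, like an integer cast
    if not 0 <= n <= 0xFFFFFFFF:
        raise ValueError("value outside the 32-bit unsigned range")
    return format(n, 'b')  # binary digits, no leading zeros

print(decbin32(float(0xFFFFFFFF)))  # 32 ones, the expected result above
print(decbin32(float(0x7FFFFFFF)))  # 31 ones
```

Since every integer up to 0xFFFFFFFF fits exactly in a 64-bit double, the `int(num)` truncation loses nothing for the values in question.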
Well, let me put it another way. Since bindec() gives float(4294967295) for str_repeat('1', 32) on 32-bit PHP installs, it seems to me that decbin() ought to accept float(4294967295).

A more pointed example: var_dump(bindec(decbin(-1))) gives float(4294967295). It doesn't give int(-1); it gives float(4294967295).

Personally, I don't see it as an issue if decbin(-1) and decbin(0xFFFFFFFF) give the same result (with -1 being encoded in two's complement and the latter being encoded as an unsigned int). Sure, this means the output of bindec() is ambiguous, but ambiguous is better than inconsistent, which is the situation we're currently in, wherein the only way to get decbin() to return str_repeat('1', 32) is by giving it a completely different value than what bindec() would return.

Also, even on 32-bit systems, PHP floats are 64-bit IEEE 754 doubles, which means every whole number between 0 and 2**53 can be represented without loss of precision.

Just my two cents...
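To make the round-trip argument concrete, here is a Python sketch modeling the 32-bit decbin()/bindec() pair described above (the function names mirror PHP's, but this is a model of the observed behavior, not PHP's implementation):

```python
MASK = 0xFFFFFFFF  # 32-bit word, as on a 32-bit PHP build

def decbin(n: int) -> str:
    # Negative ints are reinterpreted as unsigned via two's complement.
    return format(n & MASK, 'b')

def bindec(s: str) -> int:
    # Always parses as unsigned, which is why the round trip is lossy:
    # there is no way to know the input came from a negative number.
    return int(s, 2)

assert decbin(-1) == '1' * 32            # -1 encodes as 32 ones...
assert bindec(decbin(-1)) == 4294967295  # ...but comes back as 0xFFFFFFFF, not -1

# IEEE 754 doubles hold every integer up to 2**53 exactly, so
# float(4294967295) is exact; precision first runs out past 2**53.
assert int(float(4294967295)) == 4294967295
assert float(2**53 + 1) == float(2**53)
```

The last two asserts are the precision point from the comment: 4294967295 is far below 2**53, so nothing is lost by representing it as a float.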