We got a bug report in our HTTP client library (https://github.com/sabre-io/http/pull/119) which suggests changing our call to stream_copy_to_stream() to copy in chunks of 4 MiB, because that is nearly 2x faster.
The reason is that when you copy in chunks of at most 4 MiB, PHP internally uses mmap.
Wouldn't it make more sense for stream_copy_to_stream() to do the chunking internally (at the php-src level) to speed up this use case?
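For reference, the library-side workaround looks roughly like this (a hypothetical copy_in_chunks() helper, sketched from the description above, not code taken from the PR):

```php
<?php
// Sketch of the chunked copy the report describes: copy $source to $dest
// in 4 MiB pieces so each stream_copy_to_stream() call stays under the
// size cap for PHP's internal mmap fast path.
function copy_in_chunks($source, $dest, int $chunkSize = 4 * 1024 * 1024): int
{
    $total = 0;
    while (!feof($source)) {
        // stream_copy_to_stream() reads from the source's current
        // position, so repeated bounded calls walk through the stream.
        $copied = stream_copy_to_stream($source, $dest, $chunkSize);
        if ($copied === false || $copied === 0) {
            break;
        }
        $total += $copied;
    }
    return $total;
}
```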
Please see the very detailed bug report at https://github.com/sabre-io/http/pull/119.
Is there any script that can be used to benchmark this?
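A minimal benchmark sketch (file sizes and paths are illustrative, not from the report) that times one big stream_copy_to_stream() call against 4 MiB chunked copies of the same file:

```php
<?php
// Rough benchmark sketch: copy a temp file to /dev/null once with a
// single stream_copy_to_stream() call and once in 4 MiB chunks, timing both.

function bench(?int $chunk, string $path): float
{
    $in  = fopen($path, 'rb');
    $out = fopen('/dev/null', 'wb'); // POSIX only; use 'NUL' on Windows
    $start = microtime(true);
    if ($chunk === null) {
        stream_copy_to_stream($in, $out);   // one unbounded copy
    } else {
        while (!feof($in)) {                // bounded copies, $chunk at a time
            if (stream_copy_to_stream($in, $out, $chunk) === 0) {
                break;
            }
        }
    }
    $elapsed = microtime(true) - $start;
    fclose($in);
    fclose($out);
    return $elapsed;
}

// Build a ~64 MiB test file in 4 MiB pieces to keep memory use flat.
$path  = tempnam(sys_get_temp_dir(), 'bench');
$fh    = fopen($path, 'wb');
$piece = str_repeat("\0", 4 * 1024 * 1024);
for ($i = 0; $i < 16; $i++) {
    fwrite($fh, $piece);
}
fclose($fh);

printf("one call:     %.3fs\n", bench(null, $path));
printf("4 MiB chunks: %.3fs\n", bench(4 * 1024 * 1024, $path));
unlink($path);
```

For a meaningful comparison, run it several times and with a file larger than the page cache effects you want to measure.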
After staring at this code for a bit, I think we should just drop the size limitation entirely, for a couple of reasons:
First, the limitation already doesn't trigger if you copy the whole file (i.e. use copy() or stream_copy_to_stream() without specifying a length). This happens because the length is 0 at the time of the check and is only calculated later, based on the file size. This means that we're already completely blowing past the length limit in what is likely the most common case, and it doesn't seem like anyone has complained about that.
Second, the premise of the code comment ("to avoid runaway swapping") seems incorrect to me. Because this performs a file-backed non-private mmap, no swap backing is needed for the mapping. Concerns over "memory usage" are also misplaced, as this is a virtual mapping.
Automatic comment on behalf of firstname.lastname@example.org
Log: Fix bug #77930: Remove mmap limit
Is it using sendfile() under the hood?