path: root/wrapper.c
author     Steffen Prohaska <prohaska@zib.de>    2013-08-20 08:43:54 +0200
committer  Junio C Hamano <gitster@pobox.com>    2013-08-20 11:10:59 -0700
commit     0b6806b9e45c659d25b87fb5713c920a3081eac8 (patch)
tree       cede5427618100e650e6dd2c95394529f187766a /wrapper.c
parent     4d06473928ee574910accbde05c19ef2263abdf6 (diff)
download   git-0b6806b9e45c659d25b87fb5713c920a3081eac8.tar.gz
xread, xwrite: limit size of IO to 8MB
Checking out 2GB or more through an external filter (see test) fails on Mac OS X 10.8.4 (12E55) for a 64-bit executable with:

    error: read from external filter cat failed
    error: cannot feed the input to external filter cat
    error: cat died of signal 13
    error: external filter cat failed 141
    error: external filter cat failed

The reason is that read() immediately returns with EINVAL when asked to read more than 2GB. According to POSIX [1], if the value of nbyte passed to read() is greater than SSIZE_MAX, the result is implementation-defined. The write function has the same restriction [2]. Since OS X still supports running 32-bit executables, the 32-bit limit (SSIZE_MAX = INT_MAX = 2GB - 1) also seems to be imposed on 64-bit executables under certain conditions. For write, the problem has been addressed earlier [6c642a].

Address the problem for read() and write() differently, by unconditionally limiting the size of IO chunks on all platforms in xread() and xwrite(). Large chunks only cause problems, such as long latencies when killing the process, even if OS X were not buggy. Doing IO in reasonably sized smaller chunks should have no negative impact on performance.

The compat wrapper clipped_write() introduced earlier [6c642a] is not needed anymore. It will be reverted in a separate commit.

The new test catches read and write problems.

Note that 'git add' exits with 0 even if it prints filtering errors to stderr. The test, therefore, checks stderr. 'git add' should probably be changed (sometime, in another commit) to exit with nonzero if filtering fails. The test could then be changed to use test_must_fail.

Thanks to the following people for suggestions and testing:

    Johannes Sixt <j6t@kdbg.org>
    John Keeping <john@keeping.me.uk>
    Jonathan Nieder <jrnieder@gmail.com>
    Kyle J. McKay <mackyle@gmail.com>
    Linus Torvalds <torvalds@linux-foundation.org>
    Torsten Bögershausen <tboegi@web.de>

[1] http://pubs.opengroup.org/onlinepubs/009695399/functions/read.html
[2] http://pubs.opengroup.org/onlinepubs/009695399/functions/write.html
[6c642a] commit 6c642a878688adf46b226903858b53e2d31ac5c3, "compat/clipped-write.c: large write(2) fails on Mac OS X/XNU"

Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
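The cap is transparent to callers because xread() and xwrite() have never guaranteed to transfer the full length in one call; callers that need the whole buffer already loop until done. The sketch below illustrates that caller-side contract. read_all() is a hypothetical helper written only for this illustration (git's own counterpart is read_in_full() in wrapper.c), and it assumes xread() is declared as in the diff further down.

#include <unistd.h>

extern ssize_t xread(int fd, void *buf, size_t len);   /* from wrapper.c */

/*
 * Keep calling xread() until "len" bytes have arrived, EOF is hit,
 * or a real error occurs.  Each xread() call now moves at most
 * MAX_IO_SIZE (8MB), so the 2GB EINVAL failure on OS X can no longer
 * trigger, yet the caller still receives the full buffer.
 */
static ssize_t read_all(int fd, char *buf, size_t len)
{
        size_t total = 0;

        while (total < len) {
                ssize_t nr = xread(fd, buf + total, len - total);
                if (nr < 0)
                        return -1;      /* hard error, errno is set */
                if (nr == 0)
                        break;          /* EOF before "len" bytes */
                total += nr;
        }
        return total;
}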
Diffstat (limited to 'wrapper.c')
-rw-r--r--    wrapper.c    12
1 file changed, 12 insertions(+), 0 deletions(-)
diff --git a/wrapper.c b/wrapper.c
index 6a015de5f0..f92b147598 100644
--- a/wrapper.c
+++ b/wrapper.c
@@ -131,6 +131,14 @@ void *xcalloc(size_t nmemb, size_t size)
}
/*
+ * Limit size of IO chunks, because huge chunks only cause pain. OS X
+ * 64-bit is buggy, returning EINVAL if len >= INT_MAX; and even in
+ * the absence of bugs, large chunks can result in bad latencies when
+ * you decide to kill the process.
+ */
+#define MAX_IO_SIZE (8*1024*1024)
+
+/*
* xread() is the same as read(), but it automatically restarts read()
* operations with a recoverable error (EAGAIN and EINTR). xread()
* DOES NOT GUARANTEE that "len" bytes are read even if the data is available.
@@ -138,6 +146,8 @@ void *xcalloc(size_t nmemb, size_t size)
ssize_t xread(int fd, void *buf, size_t len)
{
ssize_t nr;
+ if (len > MAX_IO_SIZE)
+ len = MAX_IO_SIZE;
while (1) {
nr = read(fd, buf, len);
if ((nr < 0) && (errno == EAGAIN || errno == EINTR))
@@ -154,6 +164,8 @@ ssize_t xread(int fd, void *buf, size_t len)
ssize_t xwrite(int fd, const void *buf, size_t len)
{
ssize_t nr;
+ if (len > MAX_IO_SIZE)
+ len = MAX_IO_SIZE;
while (1) {
nr = write(fd, buf, len);
if ((nr < 0) && (errno == EAGAIN || errno == EINTR))
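The write side of the patch relies on the same contract: code that must push out an entire buffer loops over xwrite() and advances by however much was accepted, so capping each call at MAX_IO_SIZE changes nothing for correctness. A rough companion sketch under the same assumptions; write_all() is a hypothetical name (git's own counterpart is write_in_full() in wrapper.c).

#include <errno.h>
#include <unistd.h>

extern ssize_t xwrite(int fd, const void *buf, size_t len);   /* from wrapper.c */

/*
 * Loop until all "len" bytes are written or a real error occurs.
 * A return of 0 from xwrite() means nothing was accepted; report it
 * as an error instead of spinning forever.
 */
static ssize_t write_all(int fd, const char *buf, size_t len)
{
        size_t total = 0;

        while (total < len) {
                ssize_t nr = xwrite(fd, buf + total, len - total);
                if (nr < 0)
                        return -1;      /* hard error, errno is set */
                if (nr == 0) {
                        errno = ENOSPC;
                        return -1;
                }
                total += nr;
        }
        return total;
}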