From 9d7c5cc38e58fb0923e88901f87174a511b61552 Mon Sep 17 00:00:00 2001
From: Adhemerval Zanella
Date: Wed, 31 Mar 2021 13:53:34 -0300
Subject: linux: Normalize and return timeout on select (BZ #27651)

The commit 2433d39b697, which added time64 support to select, changed
the function to use __NR_pselect6 (or __NR_pselect6_time64) on all
architectures.  However, on architectures where the symbol was
implemented with __NR_select, the kernel normalizes the passed timeout
instead of returning EINVAL.  For instance, the input timeval
{ 0, 5000000 } is interpreted as { 5, 0 }.

As indicated by BZ #27651, this semantic seems to be expected, and
changing it results in some performance issues (most likely the
program does not check the return code and keeps issuing select with
an unnormalized tv_usec argument).

To avoid a semantic that differs depending on which syscall the
architecture uses, select now always normalizes the timeout input.
This is a slight change for some ABIs (for instance aarch64).

Checked on x86_64-linux-gnu and i686-linux-gnu.
---
 sunrpc/svcauth_des.c | 1 -
 1 file changed, 1 deletion(-)

(limited to 'sunrpc')

diff --git a/sunrpc/svcauth_des.c b/sunrpc/svcauth_des.c
index 7607abc818..25a85c9097 100644
--- a/sunrpc/svcauth_des.c
+++ b/sunrpc/svcauth_des.c
@@ -58,7 +58,6 @@

 #define debug(msg)	/*printf("svcauth_des: %s\n", msg) */

-#define USEC_PER_SEC ((uint32_t) 1000000L)
 #define BEFORE(t1, t2) timercmp(t1, t2, <)

 /*
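
For reference, the normalization described in the commit message can be
illustrated with a small sketch.  The helper below is hypothetical (it is
not the code added by this commit); it only shows the __NR_select semantic
of folding an out-of-range tv_usec into tv_sec, e.g. turning
{ 0, 5000000 } into { 5, 0 }:

    #include <sys/time.h>

    /* Hypothetical helper, for illustration only: fold an out-of-range
       tv_usec into tv_sec the way the kernel does for __NR_select, so
       that { 0, 5000000 } becomes { 5, 0 }.  */
    static void
    normalize_timeval (struct timeval *tv)
    {
      if (tv->tv_usec >= 1000000)
        {
          tv->tv_sec += tv->tv_usec / 1000000;
          tv->tv_usec %= 1000000;
        }
    }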