path: root/snappy_unittest.cc
author    snappy.mirrorbot@gmail.com <snappy.mirrorbot@gmail.com@03e5f5b5-db94-4691-08a0-1a8bf15f6143>  2013-01-04 11:54:20 +0000
committer snappy.mirrorbot@gmail.com <snappy.mirrorbot@gmail.com@03e5f5b5-db94-4691-08a0-1a8bf15f6143>  2013-01-04 11:54:20 +0000
commit    f1d5be35642968c2e5b1eb729d619ea38b915abc (patch)
tree      da73bb9379b43b2588c46437aa29154c265dc7ac /snappy_unittest.cc
parent    c3e036dbe6a0da2da8c59fcc1d1a86fc3c7ab3ec (diff)
download  snappy-f1d5be35642968c2e5b1eb729d619ea38b915abc.tar.gz
Change a few ORs to additions where they don't matter. This helps the
compiler use the LEA instruction more efficiently, since e.g. a + (b << 2)
can be encoded as one instruction. Even more importantly, it can
constant-fold the COPY_* enums together with the shifted negative
constants, which also saves some instructions. (We don't need it for
LITERAL, since it happens to be 0.)

I am unsure why the compiler couldn't do this itself, but the theory is
that it cannot prove that len-1 and len-4 cannot underflow/wrap, and thus
can't do the optimization safely.

The gains are small but measurable; 0.5-1.0% over the BM_Z* benchmarks
(measured on Westmere, Sandy Bridge and Istanbul).

R=sanjay

git-svn-id: http://snappy.googlecode.com/svn/trunk@69 03e5f5b5-db94-4691-08a0-1a8bf15f6143
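To illustrate the idea, here is a minimal C++ sketch of the OR-to-addition
rewrite the message describes. The tag values and the (len - 4) << 2
expression are assumptions modeled on snappy's copy-tag byte layout, not a
copy of the actual diff (which touches snappy.cc, not this file):

    #include <cstdint>

    // Hypothetical tag values in the spirit of snappy's format; the shifted
    // length occupies the bits above the 2-bit tag, so OR and + give the
    // same result here.
    enum { LITERAL = 0, COPY_1_BYTE_OFFSET = 1 };

    // Before: the OR blocks constant folding, because the compiler cannot
    // prove that (len - 4) does not wrap and spill into the tag bits.
    static inline uint8_t TagOr(uint32_t len) {
      return static_cast<uint8_t>(COPY_1_BYTE_OFFSET | ((len - 4) << 2));
    }

    // After: with addition the whole expression folds to (len << 2) - 15,
    // and base + (index << 2) + constant maps onto a single LEA.
    static inline uint8_t TagAdd(uint32_t len) {
      return static_cast<uint8_t>(COPY_1_BYTE_OFFSET + ((len - 4) << 2));
    }

Since (len - 4) << 2 always has its low two bits clear and the tag fits in
those two bits, OR and addition are interchangeable; only the addition form
lets the compiler fold the enum into the shifted negative constant.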
Diffstat (limited to 'snappy_unittest.cc')
0 files changed, 0 insertions, 0 deletions