commit     a518b6680ea80a4325731028545a701c1d71fc02 (patch)
author     Jussi Kivilinna <jussi.kivilinna@iki.fi>  2018-01-06 22:19:56 +0200
committer  Jussi Kivilinna <jussi.kivilinna@iki.fi>  2018-01-09 19:18:15 +0200
tree       8175316a4c8b4f08b9c78a2ff636bc7c7ce34c71  /cipher/rijndael.c
parent     135250e3060e79be698d4f36a819aa8a880789f8 (diff)
Move AMD64 MS to SysV calling convention conversion to assembly side
* cipher/Makefile.am: Add 'asm-common-amd64.h'.
* cipher/asm-common-amd64.h: New.
* cipher/blowfish-amd64.S: Add ENTER_SYSV_FUNC_* and EXIT_SYSV_FUNC for
each global function from 'asm-common-amd64.h'.
* cipher/cast5-amd64.S: Ditto.
* cipher/des-amd64.S: Ditto.
* cipher/rijndael-amd64.S: Ditto.
* cipher/twofish-amd64.S: Ditto.
* cipher/arcfour-amd64.S: Ditto.
* cipher/blowfish.c [HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS]
(call_sysv_fn): Remove.
* cipher/cast5.c [HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS]
(call_sysv_fn): Remove.
* cipher/twofish.c [HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS]
(call_sysv_fn, call_sysv_fn5, call_sysv_fn6): Remove.
* cipher/rijndael.c (do_encrypt, do_decrypt)
[HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS]: Remove assembly block for
calling SysV ABI function.
* cipher/arcfour.c [USE_AMD64_ASM] (encrypt_stream): Ditto.
--
The old approach was to convert the MS ABI to the SysV ABI calling
convention for AMD64 assembly functions at the caller side. This patch
moves the calling-convention conversion to the assembly (callee) side.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Diffstat (limited to 'cipher/rijndael.c')
-rw-r--r--  cipher/rijndael.c | 38
1 file changed, 0 insertions(+), 38 deletions(-)
diff --git a/cipher/rijndael.c b/cipher/rijndael.c
index 548bfa09..df1363f2 100644
--- a/cipher/rijndael.c
+++ b/cipher/rijndael.c
@@ -740,27 +740,8 @@ do_encrypt (const RIJNDAEL_context *ctx,
             unsigned char *bx, const unsigned char *ax)
 {
 #ifdef USE_AMD64_ASM
-# ifdef HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS
   return _gcry_aes_amd64_encrypt_block(ctx->keyschenc, bx, ax, ctx->rounds,
                                        encT);
-# else
-  /* Call SystemV ABI function without storing non-volatile XMM registers,
-   * as target function does not use vector instruction sets. */
-  const void *key = ctx->keyschenc;
-  uintptr_t rounds = ctx->rounds;
-  uintptr_t ret;
-  asm volatile ("movq %[encT], %%r8\n\t"
-                "callq *%[ret]\n\t"
-                : [ret] "=a" (ret),
-                  "+D" (key),
-                  "+S" (bx),
-                  "+d" (ax),
-                  "+c" (rounds)
-                : "0" (_gcry_aes_amd64_encrypt_block),
-                  [encT] "r" (encT)
-                : "cc", "memory", "r8", "r9", "r10", "r11");
-  return ret;
-# endif /* HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS */
 #elif defined(USE_ARM_ASM)
   return _gcry_aes_arm_encrypt_block(ctx->keyschenc, bx, ax, ctx->rounds,
                                      encT);
 #else
@@ -1123,27 +1104,8 @@ do_decrypt (const RIJNDAEL_context *ctx,
             unsigned char *bx, const unsigned char *ax)
 {
 #ifdef USE_AMD64_ASM
-# ifdef HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS
   return _gcry_aes_amd64_decrypt_block(ctx->keyschdec, bx, ax, ctx->rounds,
                                        &dec_tables);
-# else
-  /* Call SystemV ABI function without storing non-volatile XMM registers,
-   * as target function does not use vector instruction sets. */
-  const void *key = ctx->keyschdec;
-  uintptr_t rounds = ctx->rounds;
-  uintptr_t ret;
-  asm volatile ("movq %[dectabs], %%r8\n\t"
-                "callq *%[ret]\n\t"
-                : [ret] "=a" (ret),
-                  "+D" (key),
-                  "+S" (bx),
-                  "+d" (ax),
-                  "+c" (rounds)
-                : "0" (_gcry_aes_amd64_decrypt_block),
-                  [dectabs] "r" (&dec_tables)
-                : "cc", "memory", "r8", "r9", "r10", "r11");
-  return ret;
-# endif /* HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS */
 #elif defined(USE_ARM_ASM)
   return _gcry_aes_arm_decrypt_block(ctx->keyschdec, bx, ax, ctx->rounds,
                                      &dec_tables);