# b6461792 | 20-Mar-2024 | Richard Levitte

Copyright year updates

Reviewed-by: Neil Horman <nhorman@openssl.org>
Release: yes
(cherry picked from commit 0ce7d1f355c1240653e320a3f6f8109c1f05f8c0)
Reviewed-by: Hugo Landau <hlandau@openssl.org>
Reviewed-by: Tomas Mraz <tomas@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/24034)

# 1d1ca79f | 08-Jan-2024 | fangming.fang

Preserve callee-saved registers in aarch64 AES-CTR code

The AES-CTR assembly code uses registers v8-v15. These are callee-saved
registers, so they must be saved before use and restored afterwards.

Change-Id: If9192d1f0f3cea7295f4b0d72ace88e6e8067493
Reviewed-by: Shane Lontis <shane.lontis@oracle.com>
Reviewed-by: Tomas Mraz <tomas@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/23233)

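Under AAPCS64 only the low 64 bits of v8-v15 (that is, d8-d15) must
survive a call, so spilling the d-registers is enough. A minimal
perlasm-style sketch of the save/restore pattern; the stack layout is
illustrative, not the actual patch:

    #!/usr/bin/perl
    # Hypothetical sketch: spill d8-d15 around a body that clobbers v8-v15.
    my $code = <<~___;
            stp     d8,d9,[sp,#-64]!        // reserve frame, save d8/d9
            stp     d10,d11,[sp,#16]
            stp     d12,d13,[sp,#32]
            stp     d14,d15,[sp,#48]
            // ... CTR body using v8-v15 ...
            ldp     d14,d15,[sp,#48]
            ldp     d12,d13,[sp,#32]
            ldp     d10,d11,[sp,#16]
            ldp     d8,d9,[sp],#64          // restore and pop the frame
            ret
    ___
    print $code;
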
# cc82b09c | 17-Oct-2023 | fisher.yu

Optimize AES-CTR for ARM Neoverse V1 and V2

Unroll AES-CTR loops to a maximum of 12 blocks for ARM Neoverse V1 and
V2, to fully utilize their AES pipeline resources.

Improvement on ARM Neoverse V1:

Package Size (Bytes)  16     32     64     128    256    1024
Improvement (%)       3.93   -0.45  11.30  4.31   12.48  37.66

Package Size (Bytes)  1500   8192   16384  61440  65536
Improvement (%)       37.16  38.90  39.89  40.55  40.41

Change-Id: Ifb8fad9af22476259b9ba75132bc3d8010a7fdbd
Reviewed-by: Tom Cosgrove <tom.cosgrove@arm.com>
Reviewed-by: Tomas Mraz <tomas@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/22733)

# da1c088f | 07-Sep-2023 | Matt Caswell

Copyright year updates

Reviewed-by: Richard Levitte <levitte@openssl.org>
Release: yes

# 4df13d10 | 21-Apr-2023 | Liu-ErMeng

Fix an AES-XTS bug on aarch64 big-endian environments

Signed-off-by: Liu-ErMeng <liuermeng2@huawei.com>
Reviewed-by: Tom Cosgrove <tom.cosgrove@arm.com>
Reviewed-by: Tomas Mraz <tomas@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/20797)

# 72dfe465 | 17-Apr-2023 | Tomas Mraz

aesv8-armx.pl: Avoid buffer overread in AES-XTS decryption

Original author: Nevine Ebeid (Amazon)
Fixes: CVE-2023-1255

The buffer overread happens on decrypts of 4 mod 5 sizes. Unless the
memory just after the buffer is unmapped, this is harmless.

Reviewed-by: Paul Dale <pauli@openssl.org>
Reviewed-by: Tom Cosgrove <tom.cosgrove@arm.com>
(Merged from https://github.com/openssl/openssl/pull/20759)

# 65523758 | 12-Jun-2022 | Bernd Edlinger

Fix reported performance degradation on aarch64

This restores the implementation prior to commit 2621751
("aes/asm/aesv8-armx.pl: avoid 32-bit lane assignment in CTR mode") for
64-bit targets only, since the new code is reportedly 2-17% slower and
the silicon errata affect only 32-bit targets. The new algorithm is
used only for 32-bit targets.

Fixes #18445

Reviewed-by: Paul Dale <pauli@openssl.org>
Reviewed-by: Tomas Mraz <tomas@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/18581)

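The dual-arch perlasm scripts already receive a flavour argument, so
keeping both code paths comes down to a gate like the following. A
minimal sketch, assuming the script is invoked as "script.pl <flavour>
<output>"; none of this is the actual diff:

    #!/usr/bin/perl
    # Hypothetical sketch: emit the faster original sequence for 64-bit
    # flavours and the errata-safe sequence only for 32-bit ones.
    my $flavour = shift // "linux64";

    my $code;
    if ($flavour =~ /64/) {
        # 64-bit targets are unaffected by the errata; keep the fast path.
        $code = "\t// original pre-2621751 counter handling\n";
    } else {
        # 32-bit Cortex-A57/A72 need full-width counter register updates.
        $code = "\t// errata-safe counter handling\n";
    }
    print $code;
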
# fecb3aae | 03-May-2022 | Matt Caswell

Update copyright year

Reviewed-by: Tomas Mraz <tomas@openssl.org>
Release: yes

# 9c140a33 | 28-Mar-2022 | Sebastian Pop

Disable 5x interleave on buffers shorter than 512 bytes: 3% speedup on Graviton2

d6e4287c9726691e800bff221be71edd894a3c6a introduced 5x interleaving as
an optimization for ThunderX2, and that leads to some performance
degradation when encoding short buffers. We found this degradation by
measuring the performance of nginx on Ubuntu 20.04, which comes with
OpenSSL 1.1.1f, and Ubuntu 22.04 with OpenSSL 3.0.1.

This patch limits the 5x interleave to buffers larger than 512 bytes.
On Graviton2 we see the following performance with this patch:

$ openssl speed -evp aes-128-gcm -bytes 128

AES-128-GCM    64 bytes     79 bytes    80 bytes     128 bytes    256 bytes    511 bytes    512 bytes    1024 bytes
master         1062564.71k  775113.11k  1069959.33k  1411716.28k  1653114.86k  1585981.16k  1973683.03k  2203214.08k
master+patch   1062729.28k  771915.11k  1103883.42k  1458665.43k  1708701.20k  1647060.84k  1975571.80k  2204038.42k
diff           0%           0%          3%           3%           3%           4%           0%           0%
revert d6e428  1055290.03k  773448.92k  1117411.97k  1441478.57k  1695698.52k  1634598.04k  1981851.65k  2196680.36k

CLA: trivial
Reviewed-by: Tomas Mraz <tomas@openssl.org>
Reviewed-by: Paul Dale <pauli@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/17984)

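The gate itself is just a length check at the top of the routine. A
minimal perlasm-style sketch; the label name and the assumption that
the byte count lives in x2 are illustrative:

    #!/usr/bin/perl
    # Hypothetical sketch: fall back to the non-interleaved path for
    # short inputs.
    my $code = <<~___;
            cmp     x2,#512                 // input shorter than 512 bytes?
            b.lo    .Lshort_input           // then skip the 5x-interleaved path
    ___
    print $code;
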
# 40c24d74 | 29-Dec-2021 | David Benjamin

Don't use __ARMEL__/__ARMEB__ in aarch64 assembly

GCC's __ARMEL__ and __ARMEB__ defines denote little- and big-endian
arm, respectively. They are not defined on aarch64, which instead uses
__AARCH64EL__ and __AARCH64EB__. However, OpenSSL's assembly originally
used the 32-bit defines on both platforms and even defined __ARMEL__
and __ARMEB__ in arm_arch.h. This is less portable and can even
interfere with other headers, which use __ARMEL__ to detect
little-endian arm.

Over time, the aarch64 assembly has switched to the correct defines,
such as in 32bbb62ea634239e7cb91d6450ba23517082bab6. This commit
finishes the job: poly1305-armv8.pl needed a fix, and the dual-arch
armx.pl files get one more transform to convert from 32-bit to 64-bit
defines.

(There is an even more official endianness detector, __ARM_BIG_ENDIAN
in the Arm C Language Extensions. But I've stuck with the GCC ones here
as that would be a larger change.)

Reviewed-by: Matt Caswell <matt@openssl.org>
Reviewed-by: Tomas Mraz <tomas@openssl.org>
Reviewed-by: Paul Dale <pauli@openssl.org>
Reviewed-by: Bernd Edlinger <bernd.edlinger@hotmail.de>
(Merged from https://github.com/openssl/openssl/pull/17373)

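A sketch of what such a dual-arch transform can look like: a filter
over the generated lines that rewrites the 32-bit guards when the
64-bit flavour is selected. Hypothetical, not the actual diff:

    #!/usr/bin/perl
    # Hypothetical sketch of the 32-bit -> 64-bit define transform.
    my $flavour = shift // "linux64";
    while (my $line = <STDIN>) {
        if ($flavour =~ /64/) {
            $line =~ s/__ARMEB__/__AARCH64EB__/g;
            $line =~ s/__ARMEL__/__AARCH64EL__/g;
        }
        print $line;
    }
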
# e304aa87 | 02-Jan-2022 | Dimitris Apostolou

Fix typos

Reviewed-by: Tim Hudson <tjh@openssl.org>
Reviewed-by: Tomas Mraz <tomas@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/17392)

# 19e277dd | 28-Aug-2021 | Russ Butler

aarch64: support BTI and pointer authentication in assembly

This change adds optional support for
- Armv8.3-A Pointer Authentication (PAuth) and
- Armv8.5-A Branch Target Identification (BTI)
features to the perl scripts.

Both features can be enabled with additional compiler flags. Unless any
of these are enabled explicitly, there is no code change at all.

The extensions are briefly described below. Please read the appropriate
chapters of the Arm Architecture Reference Manual for the complete
specification.

Scope
-----
This change only affects generated assembly code.

Armv8.3-A Pointer Authentication
--------------------------------
The Pointer Authentication extension supports authenticating the
contents of registers before they are used for indirect branching or
loads. PAuth provides a probabilistic method to detect corruption of
register values.

PAuth signing instructions generate a Pointer Authentication Code (PAC)
based on the value of a register, a seed and a key. The generated PAC
is inserted into the original value in the register. A PAuth
authentication instruction recomputes the PAC, and if it matches the
PAC in the register, restores its original value. In case of a
mismatch, an architecturally unmapped address is generated instead.

With PAuth, mitigation against ROP (Return-oriented Programming)
attacks can be implemented. This is achieved by signing the contents of
the link register (LR) before it is pushed to the stack. Once LR is
popped, it is authenticated. This way a stack corruption which
overwrites the LR on the stack is detectable.

The PAuth extension adds several new instructions, some of which are
not recognized by older hardware. To support a single codebase for both
pre-Armv8.3-A targets and newer ones, only NOP-space instructions are
added by this patch. These instructions are treated as NOPs on hardware
which does not support Armv8.3-A.

Furthermore, this patch only considers cases where LR is saved to the
stack and then restored before branching to its content. There are
cases in the code where LR is pushed to the stack but not used later.
We do not address these cases, as they are not affected by PAuth.

There are two keys available to sign an instruction address: A and B.
PACIASP and PACIBSP differ only in the key used: A and B, respectively.
The keys are typically managed by the operating system.

To enable generating code for PAuth, compile with
-mbranch-protection=<mode>:
- standard or pac-ret: add PACIASP and AUTIASP; also enables BTI
  (read below)
- pac-ret+b-key: add PACIBSP and AUTIBSP

Armv8.5-A Branch Target Identification
--------------------------------------
Branch Target Identification features some new instructions which
protect the execution of instructions on guarded pages which are not
intended branch targets.

If Armv8.5-A is supported by the hardware, execution of an instruction
changes the value of the PSTATE.BTYPE field. If an indirect branch
lands on a guarded page, the target instruction must be one of the
BTI <jc> flavors, or in case of a direct call or jump it can be any
other instruction. If the target instruction is not compatible with the
value of PSTATE.BTYPE, a Branch Target Exception is generated.

In short, indirect jumps are compatible with BTI <j> and <jc>, while
indirect calls are compatible with BTI <c> and <jc>. Please refer to
the specification for the details.

Armv8.3-A PACIASP and PACIBSP are implicit branch target
identification instructions, equivalent to BTI c or BTI jc depending on
system register configuration.

BTI is used to mitigate JOP (Jump-oriented Programming) attacks by
limiting the set of instructions which can be jumped to.

BTI requires active linker support to mark the pages with BTI-enabled
code as guarded. For ELF64 files, BTI compatibility is recorded in the
.note.gnu.property section. For a shared object or static binary it is
required that all linked units support BTI. This means that even a
single assembly file without the required note section turns off BTI
for the whole binary or shared object.

The new BTI instructions are treated as NOPs on hardware which does not
support Armv8.5-A or on pages which are not guarded.

To insert these new and optional instructions, compile with
-mbranch-protection=standard (also enables PAuth) or +bti.

When targeting a guarded page from a non-guarded page, weaker
compatibility restrictions apply to maintain compatibility between
legacy and new code. For detailed rules please refer to the Arm ARM.

Compiler support
----------------
Compiler support requires understanding '-mbranch-protection=<mode>'
and emitting the appropriate feature macros (__ARM_FEATURE_BTI_DEFAULT
and __ARM_FEATURE_PAC_DEFAULT). The current state is the following:

| Compiler | -mbranch-protection | Feature macros    |
+----------+---------------------+-------------------+
| clang    | 9.0.0               | 11.0.0            |
| gcc      | 9                   | expected in 10.1+ |

Available Platforms
-------------------
Arm Fast Model and QEMU support both extensions.
https://developer.arm.com/tools-and-software/simulation-models/fast-models
https://www.qemu.org/

Implementation Notes
--------------------
This change adds BTI landing pads even to assembly functions which are
likely to be directly called only. In these cases, landing pads might
be superfluous depending on what code the linker generates. Code size
and performance impact for these cases would be negligible.

Interaction with C code
-----------------------
Pointer Authentication is a per-frame protection, while Branch Target
Identification can be turned on and off only for all code pages of a
whole shared object or static binary. Because of these properties, if
C/C++ code is compiled without any of the above features but assembly
files support any of them unconditionally, there is no incompatibility
between the two.

Useful Links
------------
To fully understand the details of both PAuth and BTI it is advised to
read the related chapters of the Arm Architecture Reference Manual
(Arm ARM): https://developer.arm.com/documentation/ddi0487/latest/

Additional materials:

"Providing protection for complex software"
https://developer.arm.com/architectures/learn-the-architecture/providing-protection-for-complex-software

Arm Compiler Reference Guide Version 6.14: -mbranch-protection
https://developer.arm.com/documentation/101754/0614/armclang-Reference/armclang-Command-line-Options/-mbranch-protection?lang=en

Arm C Language Extensions (ACLE)
https://developer.arm.com/docs/101028/latest

Additional Notes
----------------
This patch is a copy of the work done by Tamas Petz in boringssl. It
contains the changes from the following commits:

aarch64: support BTI and pointer authentication in assembly
  Change-Id: I4335f92e2ccc8e209c7d68a0a79f1acdf3aeb791
  URL: https://boringssl-review.googlesource.com/c/boringssl/+/42084
aarch64: Improve conditional compilation
  Change-Id: I14902a64e5f403c2b6a117bc9f5fb1a4f4611ebf
  URL: https://boringssl-review.googlesource.com/c/boringssl/+/43524
aarch64: Fix name of gnu property note section
  Change-Id: I6c432d1c852129e9c273f6469a8b60e3983671ec
  URL: https://boringssl-review.googlesource.com/c/boringssl/+/44024

Change-Id: I2d95ebc5e4aeb5610d3b226f9754ee80cf74a9af
Reviewed-by: Paul Dale <pauli@openssl.org>
Reviewed-by: Tomas Mraz <tomas@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/16674)

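A minimal perlasm-style sketch of the pattern the commit describes: a
hypothetical function that signs LR on entry and authenticates it on
return. The function name is illustrative; on pre-Armv8.3-A hardware
both instructions execute as NOPs, and PACIASP also acts as an implicit
BTI landing pad:

    #!/usr/bin/perl
    # Hypothetical sketch, not the actual patch.
    my $code = <<~___;
    .globl  demo_fn
    .type   demo_fn,%function
    .align  4
    demo_fn:
            paciasp                         // sign LR with key A (NOP pre-8.3)
            stp     x29,x30,[sp,#-16]!      // LR is spilled with its PAC
            mov     x29,sp
            // ... body, may call other functions ...
            ldp     x29,x30,[sp],#16
            autiasp                         // authenticate LR (NOP pre-8.3)
            ret
    .size   demo_fn,.-demo_fn
    ___
    print $code;
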
Revision tags: openssl-3.0.0-alpha17, openssl-3.0.0-alpha16, openssl-3.0.0-alpha15, openssl-3.0.0-alpha14, OpenSSL_1_1_1k, openssl-3.0.0-alpha13, openssl-3.0.0-alpha12, OpenSSL_1_1_1j, openssl-3.0.0-alpha11, openssl-3.0.0-alpha10, OpenSSL_1_1_1i, openssl-3.0.0-alpha9

# 26217510 | 24-Nov-2020 | Ard Biesheuvel

aes/asm/aesv8-armx.pl: avoid 32-bit lane assignment in CTR mode

ARM Cortex-A57 and Cortex-A72 cores running in 32-bit mode are affected
by silicon errata #1742098 [0] and #1655431 [1], respectively, where
the second instruction of an AES instruction pair may execute twice if
an interrupt is taken right after the first instruction consumes an
input register of which a single 32-bit lane has been updated the last
time it was modified.

This is not as rare an occurrence as it may seem: in counter mode, only
the least significant 32-bit word is incremented in the absence of a
carry, which makes our counter mode implementation susceptible to these
errata.

So let's shuffle the counter assignments around a bit so that the most
recent updates when the AES instruction pair executes are 128-bit wide.

[0] ARM-EPM-049219 v23 Cortex-A57 MPCore Software Developers Errata Notice
[1] ARM-EPM-012079 v11.0 Cortex-A72 MPCore Software Developers Errata Notice

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
Reviewed-by: Paul Dale <paul.dale@oracle.com>
Reviewed-by: Tomas Mraz <tmraz@fedoraproject.org>
(Merged from https://github.com/openssl/openssl/pull/13504)

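To make the constraint concrete, here is a hypothetical perlasm-style
sketch; register numbers and the surrounding code are invented. The
unsafe form leaves a single-lane write as the counter register's most
recent update, while the safe form stages the increment in a scratch
register so the final update is a full 128-bit move:

    #!/usr/bin/perl
    # Hypothetical sketch, not the actual diff.
    my $code = <<~___;
    // unsafe on affected 32-bit cores: the last write to v6 before the
    // AES pair is a single 32-bit lane
    //      add     w10,w10,#1
    //      mov     v6.s[3],w10
    //      aese    v6.16b,v0.16b           // pair may replay after an IRQ
    //      aesmc   v6.16b,v6.16b

    // safe: stage the lane insert in a scratch register, then make the
    // final update of v6 a full 128-bit move
            add     w10,w10,#1
            mov     v16.16b,v6.16b
            mov     v16.s[3],w10
            mov     v6.16b,v16.16b          // most recent write is 128-bit wide
            aese    v6.16b,v0.16b
            aesmc   v6.16b,v6.16b
    ___
    print $code;
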
Revision tags: openssl-3.0.0-alpha8, openssl-3.0.0-alpha7, OpenSSL_1_1_1h, openssl-3.0.0-alpha6, openssl-3.0.0-alpha5, openssl-3.0.0-alpha4, openssl-3.0.0-alpha3, openssl-3.0.0-alpha2, openssl-3.0.0-alpha1, OpenSSL_1_1_1g, OpenSSL_1_1_1f, OpenSSL_1_1_1e

# 9ce8e0d1 | 13-Mar-2020 | XiaokangQian

Optimize AES-XTS mode in OpenSSL for aarch64

AES-XTS mode can be optimized by interleaving the cipher operation on
several blocks and by loop unrolling. Interleaving needs an ideal
unrolling factor; here we adopt the same factor as aes-cbc, described
as follows: if the number of blocks is greater than 5, process 5 blocks
per iteration, decreasing the remaining block count by 5 each loop; if
fewer than 5 blocks are left, treat them as tail blocks. The detailed
implementation is adjusted a little to squeeze code space.

This way, for small sizes such as 16 bytes the performance is similar
to before, but for big sizes such as 16k bytes the performance improves
a lot, even reaching a 2x uplift; for some arches such as A57 the
improvement even exceeds 2x. We collected performance data on different
micro-archs such as thunderx2, ampere-emag, a72, a75, a57, a53 and N1,
all of which reach a 0.5-2x uplift. The following tables list the
encryption performance data on aarch64, taking a72, a75, a57, a53 and
N1 as examples. Values are in cycles per byte:

A72:                   Before optimization  After optimization  Improve
evp-aes-128-xts@16     8.899913518          5.949087263         49.60%
evp-aes-128-xts@64     4.525512668          3.389141845         33.53%
evp-aes-128-xts@256    3.502906908          1.633573479         114.43%
evp-aes-128-xts@1024   3.174210419          1.155952639         174.60%
evp-aes-128-xts@8192   3.053019303          1.028134888         196.95%
evp-aes-128-xts@16384  3.025292462          1.02021169          196.54%
evp-aes-256-xts@16     9.971105023          6.754233758         47.63%
evp-aes-256-xts@64     4.931479093          3.786527393         30.24%
evp-aes-256-xts@256    3.746788153          1.943975947         92.74%
evp-aes-256-xts@1024   3.401743802          1.477394648         130.25%
evp-aes-256-xts@8192   3.278769327          1.32950421          146.62%
evp-aes-256-xts@16384  3.27093296           1.325276257         146.81%

A75:                   Before optimization  After optimization  Improve
evp-aes-128-xts@16     8.397965173          5.126839098         63.80%
evp-aes-128-xts@64     4.176860631          2.59817764          60.76%
evp-aes-128-xts@256    3.069126585          1.284561028         138.92%
evp-aes-128-xts@1024   2.805962699          0.932754655         200.83%
evp-aes-128-xts@8192   2.725820131          0.829820397         228.48%
evp-aes-128-xts@16384  2.71521905           0.823251591         229.82%
evp-aes-256-xts@16     11.24790935          7.383914448         52.33%
evp-aes-256-xts@64     5.294128847          3.048641998         73.66%
evp-aes-256-xts@256    3.861649617          1.570359905         145.91%
evp-aes-256-xts@1024   3.537646797          1.200493533         194.68%
evp-aes-256-xts@8192   3.435353012          1.085345319         216.52%
evp-aes-256-xts@16384  3.437952563          1.097963822         213.12%

A57:                   Before optimization  After optimization  Improve
evp-aes-128-xts@16     10.57455446          7.165438012         47.58%
evp-aes-128-xts@64     5.418185447          3.721241202         45.60%
evp-aes-128-xts@256    3.855184592          1.747145379         120.66%
evp-aes-128-xts@1024   3.477199757          1.253049735         177.50%
evp-aes-128-xts@8192   3.36768104           1.091943159         208.41%
evp-aes-128-xts@16384  3.360373443          1.088942789         208.59%
evp-aes-256-xts@16     12.54559459          8.745489036         43.45%
evp-aes-256-xts@64     6.542808937          4.326387568         51.23%
evp-aes-256-xts@256    4.62668822           2.119908754         118.25%
evp-aes-256-xts@1024   4.161716505          1.557335554         167.23%
evp-aes-256-xts@8192   4.032462227          1.377749511         192.68%
evp-aes-256-xts@16384  4.023293877          1.371558933         193.34%

A53:                   Before optimization  After optimization  Improve
evp-aes-128-xts@16     18.07842135          13.96980808         29.40%
evp-aes-128-xts@64     7.933818397          6.07159276          30.70%
evp-aes-128-xts@256    5.264604704          2.611155744         101.60%
evp-aes-128-xts@1024   4.606660117          1.722713454         167.40%
evp-aes-128-xts@8192   4.405160115          1.454379201         202.90%
evp-aes-128-xts@16384  4.401592028          1.442279392         205.20%
evp-aes-256-xts@16     20.07084054          16.00803726         25.40%
evp-aes-256-xts@64     9.192647294          6.883876732         33.50%
evp-aes-256-xts@256    6.336143161          3.108140452         103.90%
evp-aes-256-xts@1024   5.62502952           2.097960651         168.10%
evp-aes-256-xts@8192   5.412085608          1.807294191         199.50%
evp-aes-256-xts@16384  5.403062591          1.790135764         201.80%

N1:                    Before optimization  After optimization  Improve
evp-aes-128-xts@16     6.48147613           4.209415473         53.98%
evp-aes-128-xts@64     2.847744115          1.950757468         45.98%
evp-aes-128-xts@256    2.085711968          1.061903238         96.41%
evp-aes-128-xts@1024   1.842014669          0.798486302         130.69%
evp-aes-128-xts@8192   1.760449052          0.713853939         146.61%
evp-aes-128-xts@16384  1.760763546          0.707702009         148.80%
evp-aes-256-xts@16     7.264142817          5.265970454         37.94%
evp-aes-256-xts@64     3.251356212          2.41176323          34.81%
evp-aes-256-xts@256    2.380488469          1.342095742         77.37%
evp-aes-256-xts@1024   2.08853022           1.041718215         100.49%
evp-aes-256-xts@8192   2.027432668          0.944571334         114.64%
evp-aes-256-xts@16384  2.00740782           0.941991415         113.10%

Also add more XTS test cases to cover the cipher stealing mode and
cases with different numbers of blocks.

CustomizedGitHooks: yes
Change-Id: I93ee31b2575e1413764e27b599af62994deb4c96
Reviewed-by: Paul Dale <paul.dale@oracle.com>
Reviewed-by: Tomas Mraz <tmraz@fedoraproject.org>
(Merged from https://github.com/openssl/openssl/pull/11399)

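The unrolling policy above is easy to state in code. A minimal sketch
of the block scheduling only; hypothetical, since the real assembly
inlines the tweak updates and the cipher rounds:

    #!/usr/bin/perl
    # Hypothetical sketch of the 5x interleave scheduling.
    my $blocks = shift // 13;
    while ($blocks > 5) {               # main loop: 5 interleaved blocks
        print "main loop: 5 interleaved blocks\n";
        $blocks -= 5;
    }
    print "tail: $blocks block(s)\n" if $blocks;
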
# 33388b44 | 23-Apr-2020 | Matt Caswell

Update copyright year

Reviewed-by: Richard Levitte <levitte@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/11616)

# a21314db | 17-Feb-2020 | David Benjamin

Also check for errors in x86_64-xlate.pl

In https://github.com/openssl/openssl/pull/10883, I'd meant to exclude
the perlasm drivers since they aren't opening pipes and do not
particularly need it, but I only noticed x86_64-xlate.pl, so
arm-xlate.pl and ppc-xlate.pl got the change. That seems to have been
fine, so be consistent and also apply the change to x86_64-xlate.pl.
Checking for errors is generally a good idea.

Reviewed-by: Richard Levitte <levitte@openssl.org>
Reviewed-by: David Benjamin <davidben@google.com>
(Merged from https://github.com/openssl/openssl/pull/10930)

# bc8b648f | 03-Jan-2020 | simplelins

Fix a bug for aarch64 big-endian

FIXES #10692 #10638: a bug on aarch64 big-endian with the 'st1' and
'ld1' instructions in AES-GCM mode.

CLA: trivial
Reviewed-by: Richard Levitte <levitte@openssl.org>
Reviewed-by: Tim Hudson <tjh@openssl.org>
Reviewed-by: Paul Dale <paul.dale@oracle.com>
(Merged from https://github.com/openssl/openssl/pull/10751)

# 32be631c | 17-Jan-2020 | David Benjamin

Do not silently truncate files on perlasm errors

If one of the perlasm xlate drivers crashes, OpenSSL's build will
currently swallow the error and silently truncate the output to however
far the driver got. This will hopefully fail to build, but it is better
to check such things. Handle this by checking for errors when closing
STDOUT (which is a pipe to the xlate driver).

Reviewed-by: Richard Levitte <levitte@openssl.org>
Reviewed-by: Tim Hudson <tjh@openssl.org>
Reviewed-by: Tomas Mraz <tmraz@fedoraproject.org>
(Merged from https://github.com/openssl/openssl/pull/10883)

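The check is one line at the end of a generator script. A minimal
sketch of the pattern; the emitted code and message text are
illustrative:

    #!/usr/bin/perl
    # A generator whose STDOUT may be a pipe to an xlate driver.
    my $code = "\tret\n";
    print $code;
    # If the driver on the other end of the pipe died, the failure
    # surfaces here instead of leaving a truncated .S file behind.
    close STDOUT or die "error closing STDOUT: $!";
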
Revision tags: OpenSSL_1_0_2u

# 2ff16afc | 07-Nov-2019 | XiaokangQian

Optimize AES-ECB mode in OpenSSL for both aarch64 and aarch32

AES-ECB mode can be optimized by interleaving the cipher operation on
several blocks and by loop unrolling. Interleaving needs an ideal
unrolling factor; here we adopt the same factor as aes-cbc, described
as follows: if the number of blocks is greater than 5, process 5 blocks
per iteration, decreasing the remaining block count by 5 each loop; if
between 3 and 5 blocks are left, process 3 blocks per iteration,
decreasing the block count by 3; if fewer than 3 blocks are left, treat
them as tail blocks. The detailed implementation is adjusted a little
to squeeze code space.

This way, for small sizes such as 16 bytes the performance is similar
to before, but for big sizes such as 16k bytes the performance improves
a lot, even reaching 100%; for some arches such as A57 the improvement
even exceeds 100%. The following tables list the encryption performance
data on aarch64, taking a72 and a57 as examples. Values are in cycles
per byte:

A72:                   Before optimization  After optimization  Improve
evp-aes-128-ecb@16     17.26538237          16.82663866         2.61%
evp-aes-128-ecb@64     5.50528499           5.222637557         5.41%
evp-aes-128-ecb@256    2.632700213          1.908442892         37.95%
evp-aes-128-ecb@1024   1.876102047          1.078018868         74.03%
evp-aes-128-ecb@8192   1.6550392            0.853982929         93.80%
evp-aes-128-ecb@16384  1.636871283          0.847623957         93.11%
evp-aes-192-ecb@16     17.73104961          17.09692468         3.71%
evp-aes-192-ecb@64     5.78984398           5.418545192         6.85%
evp-aes-192-ecb@256    2.872005308          2.081815274         37.96%
evp-aes-192-ecb@1024   2.083226672          1.25095642          66.53%
evp-aes-192-ecb@8192   1.831992057          0.995916251         83.95%
evp-aes-192-ecb@16384  1.821590009          0.993820525         83.29%
evp-aes-256-ecb@16     18.0606306           17.96963317         0.51%
evp-aes-256-ecb@64     6.19651997           5.762465812         7.53%
evp-aes-256-ecb@256    3.176991394          2.24642538          41.42%
evp-aes-256-ecb@1024   2.385991919          1.396018192         70.91%
evp-aes-256-ecb@8192   2.147862636          1.142222597         88.04%
evp-aes-256-ecb@16384  2.131361787          1.135944617         87.63%

A57:                   Before optimization  After optimization  Improve
evp-aes-128-ecb@16     18.61045121          18.36456218         1.34%
evp-aes-128-ecb@64     6.438628994          5.467959461         17.75%
evp-aes-128-ecb@256    2.957452881          1.97238604          49.94%
evp-aes-128-ecb@1024   2.117096219          1.099665054         92.52%
evp-aes-128-ecb@8192   1.868385973          0.837440804         123.11%
evp-aes-128-ecb@16384  1.853078526          0.822420027         125.32%
evp-aes-192-ecb@16     19.07021756          18.50018552         3.08%
evp-aes-192-ecb@64     6.672351486          5.696088921         17.14%
evp-aes-192-ecb@256    3.260427769          2.131449916         52.97%
evp-aes-192-ecb@1024   2.410522832          1.250529718         92.76%
evp-aes-192-ecb@8192   2.17921605           0.973225504         123.92%
evp-aes-192-ecb@16384  2.162250997          0.95919871          125.42%
evp-aes-256-ecb@16     19.3008384           19.12743654         0.91%
evp-aes-256-ecb@64     6.992950658          5.92149541          18.09%
evp-aes-256-ecb@256    3.576361743          2.287619504         56.34%
evp-aes-256-ecb@1024   2.726671027          1.381267599         97.40%
evp-aes-256-ecb@8192   2.493583657          1.110959913         124.45%
evp-aes-256-ecb@16384  2.473916816          1.099967073         124.91%

Change-Id: Iccd23d972e0d52d22dc093f4c208f69c9d5a0ca7
Reviewed-by: Shane Lontis <shane.lontis@oracle.com>
Reviewed-by: Richard Levitte <levitte@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/10518)

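Compared with the XTS variant sketched earlier, this policy adds a
3-block step. A hypothetical sketch of just the dispatch, with the
boundary cases interpreted literally from the description above:

    #!/usr/bin/perl
    # Hypothetical sketch of the 5/3/tail scheduling for ECB.
    my $blocks = shift // 9;
    while ($blocks > 5) { print "5-block iteration\n"; $blocks -= 5; }
    if ($blocks > 3)    { print "3-block iteration\n"; $blocks -= 3; }
    print "tail: $blocks block(s)\n" if $blocks;
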
# 1aa89a7a | 12-Sep-2019 | Richard Levitte

Unify all assembler file generators

They now generally conform to the following argument sequence:

    script.pl "$(PERLASM_SCHEME)" [ C preprocessor arguments ... ] \
              $(PROCESSOR) <output file>

However, in the spirit of being able to use these scripts manually,
they also allow for no argument, or for only the flavour, or for only
the output file. This is done by only using the last argument as output
file if it's a file (it has an extension), and only using the first
argument as flavour if it isn't a file (it doesn't have an extension).

While we're at it, we make all $xlate calls the same, i.e. the $output
argument is always quoted, and we always die on error when trying to
start $xlate.

There's a perl lesson in this, regarding operator priority...

This will always succeed, even when it fails:

    open FOO, "something" || die "ERR: $!";

The reason is that '||' has higher priority than list operators (a
function is essentially a list operator and gobbles up everything
following it that isn't lower priority), and since a non-empty string
is always true, that ends up being exactly the same as:

    open FOO, "something";

This, however, will fail if "something" can't be opened:

    open FOO, "something" or die "ERR: $!";

The reason is that 'or' has lower priority than list operators, i.e.
it's performed after the 'open' call.

Reviewed-by: Matt Caswell <matt@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/9884)

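The two parses, spelled out with explicit parentheses; a runnable
illustration of the precedence lesson above (the file name is
illustrative):

    #!/usr/bin/perl
    # '||' binds tighter than the list operator 'open', so the die
    # attaches to the string argument; a non-empty string is true, so
    # the die can never run and the failed open goes unnoticed:
    open(FH1, ("no-such-file" || die "never reached"));
    # 'or' binds looser than 'open', so it tests what open() returned:
    open(FH2, "no-such-file") or die "ERR: $!";   # dies here, as intended
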
Revision tags: OpenSSL_1_0_2t, OpenSSL_1_1_0l, OpenSSL_1_1_1d, OpenSSL_1_1_1c, OpenSSL_1_1_0k, OpenSSL_1_0_2s

# d6e4287c | 17-Apr-2019 | Andy Polyakov

aes/asm/aesv8-armx.pl: ~20% improvement on ThunderX2

Reviewed-by: Tim Hudson <tjh@openssl.org>
Reviewed-by: Richard Levitte <levitte@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/8776)

# 6465321e | 17-Apr-2019 | Andy Polyakov

ARM64 assembly pack: add ThunderX2 results

Reviewed-by: Tim Hudson <tjh@openssl.org>
Reviewed-by: Richard Levitte <levitte@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/8776)

Revision tags: OpenSSL_1_0_2r, OpenSSL_1_1_1b

# 3405db97 | 15-Feb-2019 | Andy Polyakov

ARM assembly pack: make it Windows-friendly

"Windows friendliness" means a) flipping .thumb and .text directives,
b) always generating Thumb-2 code when asked(*), and c) Windows-specific
references to external OPENSSL_armcap_P.

(*) So far *some* modules were compiled as .code 32 even if Thumb-2 was
targeted. It works at the hardware level because the processor can
alternate between the modes with no overhead. But the builtin assembler
of clang --target=arm-windows just refuses to compile .code 32...

Reviewed-by: Paul Dale <paul.dale@oracle.com>
Reviewed-by: Richard Levitte <levitte@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/8252)

# 9a18aae5 | 11-Feb-2019 | Andy Polyakov

AArch64 assembly pack: authenticate return addresses

ARMv8.3 adds a pointer authentication extension, which in this case
makes it possible to ensure that, when offloaded to the stack, the
return address is the same at return as at entry to the subroutine. The
new instructions are NOPs on processors that don't implement the
extension, so the verification is backward compatible.

Reviewed-by: Kurt Roeckx <kurt@roeckx.be>
Reviewed-by: Richard Levitte <levitte@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/8205)

# c918d8e2 | 06-Dec-2018 | Richard Levitte

Following the license change, modify the boilerplates in crypto/aes/

Reviewed-by: Matt Caswell <matt@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/7771)