simd/arm/neon-compat.h.in


Log

DRC  2a2970af  2021-07-09T15:35:56

Neon/AArch32: Work around Clang T32 miscompilation

Referring to the C standard (http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf, J.2 Undefined behavior), the behavior is undefined if "conversion between two pointer types produces a result that is incorrectly aligned." Thus, the behavior of this code

  *((uint32_t *)buffer) = BUILTIN_BSWAP32(put_buffer);

in the AArch32 version of the FLUSH() macro is undefined unless 'buffer' is 32-bit-aligned.

Referring to https://bugs.llvm.org/show_bug.cgi?id=50785, certain versions of Clang, when generating Thumb (T32) instructions, miscompile that code into an assembly instruction (stm) that requires the destination to be 32-bit-aligned. Since such alignment cannot be guaranteed within the Huffman encoder, this reportedly led to crashes (SIGBUS: illegal alignment) with AArch32/Thumb builds of libjpeg-turbo running on Android devices, although thus far I have been unable to reproduce those crashes with a plain Linux/Arm system.

The miscompilation is visible with the Compiler Explorer (https://godbolt.org/z/rv1ccx1Pb), but it goes away when the return statement is removed from the function. Thus, Clang's behavior in this regard is somewhat variable, which may explain why the crashes are only reproducible on certain platforms.

The suggested workaround is to use memcpy(), but whereas Clang and recent GCC releases are smart enough to compile a 4-byte memcpy() call into a str instruction, GCC < 6 is not. Referring to https://godbolt.org/z/ae7Wje3P6, the only way to consistently produce the desired str instruction across all supported compilers is to use inline assembly.

Visual C++ presumably does not miscompile the code in question, since no issues have been reported with it, but since the code relies on undefined behavior, prudence dictates that e4ec23d7ae051c1c73947f889818900362fdc52d should be reverted for Visual C++, which this commit does. The performance impact of e4ec23d7ae051c1c73947f889818900362fdc52d for Visual C++/Arm builds is unknown (I have no ability to test such builds), but regardless, this commit reverts the Visual C++/Arm performance to that of libjpeg-turbo 2.1 beta1.

Closes #529
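To make the trade-off concrete, here is a minimal sketch of the approaches discussed above, assuming a GCC-compatible compiler. store_bswap32() is a hypothetical helper for illustration, not the actual FLUSH() macro from libjpeg-turbo.

  #include <stdint.h>
  #include <string.h>

  /* Hypothetical helper (not the actual libjpeg-turbo code), assuming a
     GCC-compatible compiler: store a byte-swapped 32-bit value through a
     possibly unaligned pointer. */
  static inline void store_bswap32(uint8_t *buffer, uint32_t put_buffer)
  {
    uint32_t value = __builtin_bswap32(put_buffer);

  #if defined(__arm__)
    /* Inline assembly: 'str' tolerates unaligned addresses on Armv7, and the
       compiler cannot recombine it into an alignment-sensitive instruction
       such as 'stm'. */
    __asm__("str %1, [%0]" : : "r"(buffer), "r"(value) : "memory");
  #else
    /* Portable, well-defined alternative: Clang and GCC >= 6 compile a
       4-byte memcpy() into a single store instruction, but older GCC does
       not. */
    memcpy(buffer, &value, sizeof(value));
  #endif

    /* The form removed by this commit invokes undefined behavior whenever
       'buffer' is not 32-bit-aligned:
         *((uint32_t *)buffer) = BUILTIN_BSWAP32(put_buffer);  */
  }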
Jonathan Wright  e4ec23d7  2021-02-10T16:45:50

Neon: Use byte-swap builtins instead of inline asm

Define compiler-independent byte-swap macros and use them instead of executing 'rev' via inline assembly code with GCC-compatible compilers or a slow shift-store sequence with Visual C++.

* This produces identical assembly code with:
  - 64-bit GCC 8.4.0 (Linux)
  - 64-bit GCC 9.3.0 (Linux)
  - 64-bit Clang 10.0.0 (Linux)
  - 64-bit Clang 10.0.0 (MinGW)
  - 64-bit Clang 12.0.0 (Xcode 12.2, macOS)
  - 64-bit Clang 12.0.0 (Xcode 12.2, iOS)

* This produces different assembly code with:
  - 64-bit GCC 4.9.1 (Linux)
  - 32-bit GCC 4.8.2 (Linux)
  - 32-bit GCC 8.4.0 (Linux)
  - 32-bit GCC 9.3.0 (Linux)
    Since the intrinsics implementation of Huffman encoding is not used by default with these compilers, this is not a concern.
  - 32-bit Clang 10.0.0 (Linux)
    Verified performance neutrality.

Closes #507
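A minimal sketch of byte-swap macros along these lines; the exact definitions in neon-compat.h.in may differ, and the Visual C++ branch in particular is an assumption.

  #if defined(_MSC_VER) && !defined(__clang__)
  /* Assumed Visual C++ mapping; these intrinsics are declared in <stdlib.h>. */
  #include <stdlib.h>
  #define BUILTIN_BSWAP32(x)  _byteswap_ulong(x)
  #define BUILTIN_BSWAP64(x)  _byteswap_uint64(x)
  #else
  /* GCC-compatible compilers lower these builtins to a single byte-reverse
     ('rev') instruction on Arm rather than a shift-store sequence. */
  #define BUILTIN_BSWAP32(x)  __builtin_bswap32(x)
  #define BUILTIN_BSWAP64(x)  __builtin_bswap64(x)
  #endif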
Richard Townsend  74e6ea45  2021-01-05T20:23:11

Neon: Fix Huffman enc. error w/Visual Studio+Clang

The GNU builtin function __builtin_clzl() accepts an unsigned long argument, which is 8 bytes wide on LP64 systems (most Un*x systems, including Mac) but 4 bytes wide on LLP64 systems (Windows.) This caused the Neon intrinsics implementation of Huffman encoding to produce mathematically incorrect results when compiled using Visual Studio with Clang. This commit changes all invocations of __builtin_clzl() in the Neon SIMD extensions to __builtin_clzll(), which accepts an unsigned long long argument that is guaranteed to be 8 bytes wide on all systems.

Fixes #480
Closes #490
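A minimal illustration of the width issue (not the actual libjpeg-turbo code, which wraps the builtins in its own compatibility macros):

  #include <stdint.h>

  /* On LP64 systems 'unsigned long' is 64 bits wide, so __builtin_clzl()
     happens to work, but on LLP64 systems (Windows) it is only 32 bits wide
     and the upper half of the argument is silently discarded.
     __builtin_clzll() takes an 'unsigned long long', which is at least
     64 bits everywhere.  (Both builtins are undefined for a zero argument.) */
  static inline int clz64(uint64_t x)
  {
    return __builtin_clzll(x);  /* correct on both LP64 and LLP64 */
  }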
Jonathan Wright  eb14189c  2020-11-17T12:48:49

Fix Neon SIMD build issues with Visual Studio

- Use the _M_ARM and _M_ARM64 macros provided by Visual Studio for compile-time detection of Arm builds, since __arm__ and __aarch64__ are only present in GNU-compatible compilers.

- Neon/intrinsics: Use the _CountLeadingZeros() and _CountLeadingZeros64() intrinsics provided by Visual Studio, since __builtin_clz() and __builtin_clzl() are only present in GNU-compatible compilers.

- Neon/intrinsics: Since Visual Studio does not support static vector initialization, replace static initialization of Neon vectors with the appropriate intrinsics. Compared to the static initialization approach, this produces identical assembly code with both GCC and Clang.

- Neon/intrinsics: Since Visual Studio does not support inline assembly code, provide alternative code paths for Visual Studio whenever inline assembly is used.

- Build: Set FLOATTEST appropriately for AArch64 Visual Studio builds (Visual Studio does not emit fused multiply-add [FMA] instructions by default for such builds.)

- Neon/intrinsics: Move temporary buffer allocation outside of nested loops. Since Visual Studio configures Arm builds with a relatively small amount of stack memory, attempting to allocate those buffers within the inner loops caused a stack overflow.

Closes #461
Closes #475
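A sketch of the kind of compatibility shims this commit describes; the macro names here are illustrative, not necessarily those used in neon-compat.h.in.

  /* Compile-time Arm detection: Visual Studio defines _M_ARM / _M_ARM64;
     GNU-compatible compilers define __arm__ / __aarch64__. */
  #if defined(__aarch64__) || defined(_M_ARM64)
  #define IS_AARCH64  1
  #elif defined(__arm__) || defined(_M_ARM)
  #define IS_AARCH32  1
  #endif

  /* Count-leading-zeros: Visual Studio provides _CountLeadingZeros() and
     _CountLeadingZeros64(); GNU-compatible compilers provide
     __builtin_clz() and __builtin_clzll(). */
  #if defined(_MSC_VER) && !defined(__clang__)
  #include <intrin.h>
  #define BUILTIN_CLZ(x)    _CountLeadingZeros(x)
  #define BUILTIN_CLZLL(x)  _CountLeadingZeros64(x)
  #else
  #define BUILTIN_CLZ(x)    __builtin_clz(x)
  #define BUILTIN_CLZLL(x)  __builtin_clzll(x)
  #endif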
DRC  33859880  2020-11-13T12:12:47

Neon: Auto-detect compiler intrinsics completeness

This allows the Neon intrinsics code to be built successfully (albeit likely with reduced run-time performance) with Xcode 5.0-6.2 (iOS/AArch64) and Android NDK < r19 (AArch32). Note that Xcode 5.0-6.2 will not build the Armv8 GAS code without gas-preprocessor.pl, and no version of Xcode will build the Armv7 GAS code without gas-preprocessor.pl, so we always use the full Neon intrinsics implementation by default with macOS and iOS builds.

Auto-detecting the completeness of the compiler's set of Neon intrinsics also allows us to more intelligently set the default value of NEON_INTRINSICS, based on the values of HAVE_VLD1*. This is a reasonable, albeit imperfect, proxy for whether a compiler has a full and optimal set of Neon intrinsics.

Specific notes:

- 64-bit RGB-to-YCbCr color conversion: does not use any of the intrinsics in question; regresses with GCC
- 64-bit accurate integer forward DCT: uses vld1_s16_x3(); regresses with GCC
- 64-bit Huffman encoding: uses vld1q_u8_x4(); regresses with GCC
- 64-bit YCbCr-to-RGB color conversion: does not use any of the intrinsics in question; regresses with GCC
- 64-bit accurate integer inverse DCT: uses vld1_s16_x3(); regresses with GCC
- 64-bit 4x4 inverse DCT: uses vld1_s16_x3(). I did not test this algorithm in isolation, so it may in fact regress with GCC, but the regression may be hidden by the speedup from the new SIMD-accelerated upsampling algorithms.
- 32-bit RGB-to-YCbCr color conversion: uses vld1_u16_x2(); regresses with GCC
- 32-bit accurate integer forward DCT: uses vld1_s16_x3(); regression irrelevant because there was no previous implementation
- 32-bit accurate integer inverse DCT: uses vld1_s16_x3(); regresses with GCC
- 32-bit fast integer inverse DCT: does not use any of the intrinsics in question; regresses with GCC
- 32-bit 4x4 inverse DCT: uses vld1_s16_x3(). I did not test this algorithm in isolation, so it may in fact regress with GCC, but the regression may be hidden by the speedup from the new SIMD-accelerated upsampling algorithms.

Presumably, when GCC includes a full and optimal set of Neon intrinsics, the HAVE_VLD1* tests will pass, and the full Neon intrinsics implementation will be enabled automatically.
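As an example of how the HAVE_VLD1* detection can be consumed, here is a sketch of a fallback for one of the intrinsics in question; the HAVE_VLD1_S16_X3 macro name and the helper are assumptions for illustration, not the verbatim libjpeg-turbo code.

  #include <arm_neon.h>

  /* If the compiler lacks the vld1_s16_x3() multi-vector load intrinsic,
     emulate it with three ordinary vld1_s16() loads. */
  static inline int16x4x3_t load_s16_x3(const int16_t *p)
  {
  #ifdef HAVE_VLD1_S16_X3
    return vld1_s16_x3(p);      /* single structure-load intrinsic */
  #else
    int16x4x3_t v;
    v.val[0] = vld1_s16(p);     /* same result via three separate loads */
    v.val[1] = vld1_s16(p + 4);
    v.val[2] = vld1_s16(p + 8);
    return v;
  #endif
  }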