|
59337a67
|
2022-07-06T12:11:50
|
|
PowerPC: Detect AltiVec support on OS X
libjpeg-turbo's AltiVec SIMD extensions previously assumed that AltiVec
instructions were available on all Power Macs that supported OS X 10.4
"Tiger" (the earliest version of OS X that libjpeg-turbo has ever
supported), but Tiger can actually run on PowerPC G3 processors, which
lack AltiVec instructions. This commit enables run-time detection of
AltiVec instructions on OS X/PowerPC systems if AltiVec instructions are
not force-enabled at compile time (using -maltivec). This allows the
same build of libjpeg-turbo to support G3, G4, and G5 Power Macs.
Closes #609
|
|
fac83814
|
2022-05-14T14:34:43
|
|
Build: Don't enable Loongson MMI with MIPS R6+
MIPS R6 removed some instructions, so Loongson MMI cannot be built with
MIPS R6+ toolchains.
Closes #598
|
|
9abeff46
|
2022-03-09T11:48:30
|
|
Remove extraneous #include directives
jinclude.h already includes stdio.h, stdlib.h, and string.h.
|
|
c5f269eb
|
2021-09-03T11:52:40
|
|
Neon/AArch64: Explicitly unroll quant loop w/Clang
The loop in jsimd_quantize_neon() is only executed twice and should be
unrolled for AArch64 targets. GCC does that by default, but Clang 11
and later versions available at the time of this writing do not. This
patch adds an unroll pragma when targeting AArch64 with Clang.  We do
not use the unroll pragma for AArch32 targets, because it causes the
Clang-generated assembly code to exhaust the available Neon registers
(32 x 64-bit) and spill to the stack. (DRC: Referring to the discussion
in #570, this is likely due to compiler confusion that results in poor
register allocation. It is possible to eliminate the spillage and
reduce the instruction count by loading the data on a just-in-time
basis, thus explicitly interleaving compute and I/O, but the performance
implications of that are currently unknown.)
The effects of unrolling the quantization loop are:
1) elimination of the loop control flow overhead and
2) enabling the use of LDP/STP instructions that work from a single
base pointer, instead of using double the number of LDR/STR
instructions, each requiring an address calculation.
Closes #570
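A minimal sketch of the pragma pattern described above (illustrative
only; this is not the actual jsimd_quantize_neon() code, and DCTSIZE2 is
the usual libjpeg 8x8 block size):
  #define DCTSIZE2  64
  void quantize_sketch(short *coefs, const short *divisors)
  {
    /* Request full unrolling only when targeting AArch64 with Clang.  GCC
       already unrolls short fixed-trip-count loops like this by default,
       and the pragma is omitted for AArch32 to avoid register spills. */
  #if defined(__clang__) && defined(__aarch64__)
  #pragma unroll
  #endif
    for (int i = 0; i < DCTSIZE2; i += DCTSIZE2 / 2) {
      /* ...quantize half of the 8x8 block per iteration with Neon
         intrinsics... */
      (void)coefs;  (void)divisors;
    }
  }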
|
|
98bc3eeb
|
2022-02-24T23:09:58
|
|
Neon/AArch64: Fix/suppress UBSan warnings
- Suppress a UBSan warning regarding storing a 64-bit value to a
non-64-bit-aligned address. That behavior is technically undefined
per the C spec but is supported in the context of the AArch64
architecture and compilers.
- Explicitly promote block_diff[i] to unsigned int prior to left
shifting it, in order to avoid a UBSan warning. This warning also
described behavior that is technically undefined per the C spec but is
supported in the context of the AArch64 architecture and compilers.
Changing the type cast order eliminated the warning without changing
the generated assembly code.
Closes #582
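A hedged illustration of the cast-order point in the second bullet (the
names below are placeholders, not the actual jchuff-neon.c code):
  #include <stdint.h>
  /* Left-shifting a negative value (block_diff[i] promotes to a signed int)
     is undefined per the C spec, so promote to unsigned int first.  Both
     forms generate the same assembly code; only the UBSan warning differs. */
  uint16_t shift_flagged_by_ubsan(int16_t block_diff_i, unsigned int nbits)
  {
    return (uint16_t)(block_diff_i << nbits);
  }
  uint16_t shift_without_warning(int16_t block_diff_i, unsigned int nbits)
  {
    return (uint16_t)((unsigned int)block_diff_i << nbits);
  }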
|
|
147548c0
|
2021-09-06T11:31:37
|
|
Neon/AArch64: Accelerate Huffman encoding
- Make better use of 128-bit vector registers, thus reducing the number
of Neon instructions required to construct the AC coefficient bitmap.
- Refactor the Neon computations of 'nbits' and 'diff' to use shorter
and higher-throughput instruction sequences.
DRC's notes:
This commit partially integrates #570. Arm reported a 1-4% speedup on
Cortex-A55 and Neoverse-N1 cores when using recent compilers but little
or no speedup with Clang 10. I observed no speedup with Clang 10 on my
Cortex-A53 and Cortex-A72 cores. Thus, referring to #582, the primary
purpose of this commit is to fix UBSan warnings regarding the shift
operations previously located at Line 253:
https://github.com/libjpeg-turbo/libjpeg-turbo/blob/d640a457305164417a60f30c6457d316f0b44a9d/simd/arm/aarch64/jchuff-neon.c#L253
|
|
607b668f
|
2022-02-10T11:33:49
|
|
MSVC: Eliminate C4996 warnings in API libs
The primary purpose of this is to encourage adoption of libjpeg-turbo in
downstream Windows projects that forbid the use of "deprecated"
functions. libjpeg-turbo's usage of those functions was not actually
unsafe, because:
- libjpeg-turbo always checks the return value of fopen() and ensures
that a NULL filename can never be passed to it.
- libjpeg-turbo always checks the return value of getenv() and never
passes a NULL argument to it.
- The sprintf() calls in format_message() (jerror.c) could never
overflow the destination string buffer or leave it unterminated as
long as the buffer was at least JMSG_LENGTH_MAX bytes in length, as
instructed. (Regardless, this commit replaces those calls with
snprintf() calls.)
- libjpeg-turbo never uses sscanf() to read strings or multi-byte
character arrays.
- Because of b7d6e84d6a9283dc2bc50ef9fcaadc0cdeb25c9f, wrjpgcom
explicitly checks the bounds of the source and destination strings
before calling strcat() and strcpy().
- libjpeg-turbo always ensures that the destination string is
terminated when using strncpy().
(548490fe5e2aa31cb00f6602d5a478b068b99682 made this explicit.)
Regarding thread safety:
Technically speaking, getenv() is not thread-safe, because the returned
pointer may be invalidated if another thread sets the same environment
variable between the time that the first thread calls getenv() and the
time that that thread uses the return value. In practice, however, this
could only occur with libjpeg-turbo if:
(1) A multithreaded calling application used the deprecated and
undocumented TJFLAG_FORCEMMX/TJFLAG_FORCESSE/TJFLAG_FORCESSE2 flags in
the TurboJPEG API or set one of the corresponding environment variables
(which are only intended for testing purposes.) Since the TurboJPEG API
library only ever passed string constants to putenv(), the only inherent
risk (i.e. the only risk introduced by the library and not the calling
application) was that the SIMD extensions may have read an incorrect
value from one of the aforementioned environment variables.
or
(2) A multithreaded calling application modified the value of the
JPEGMEM environment variable in one thread while another thread was
reading the value of that environment variable (in the body of
jpeg_create_compress() or jpeg_create_decompress().) Given that the
libjpeg API provides a thread-safe way for applications to modify the
default memory limit without using the JPEGMEM environment variable,
direct modification of that environment variable by calling applications
is not supported.
Microsoft's implementation of getenv_s() does not claim to be
thread-safe either, so this commit uses getenv_s() solely to mollify
Visual Studio. New inline functions and macros (GETENV_S() and
PUTENV_S()) wrap getenv_s()/_putenv_s() when building for Visual Studio
and getenv()/setenv() otherwise, but GETENV_S()/PUTENV_S() provide no
advantages over getenv()/setenv() other than parameter validation. They
are implemented solely for convenience.
Technically speaking, strerror() is not thread-safe, because the
returned pointer may be invalidated if another thread changes the locale
and/or calls strerror() between the time that the first thread calls
strerror() and the time that that thread uses the return value. In
practice, however, this could only occur with libjpeg-turbo if a
multithreaded calling application encountered a file I/O error in
tjLoadImage() or tjSaveImage(). Since both of those functions
immediately copy the string returned from strerror() into a thread-local
buffer, the risk is minimal, and the worst case would involve an
incorrect error string being reported to the calling application.
Regardless, this commit uses strerror_s() in the TurboJPEG API library
when building for Visual Studio. Note that strerror_r() could have been
used on Un*x systems, but it would have been necessary to handle both
the POSIX and GNU implementations of that function and perform
widespread compatibility testing. Such is left as an exercise for
another day.
Fixes #568
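A rough sketch of the wrapper approach described above (the signatures
and bodies here are assumptions based on this commit message, not the
actual definitions):
  #include <stdlib.h>
  #include <string.h>
  #ifdef _MSC_VER
  /* Visual Studio: route through the _s variants to silence C4996. */
  static int GETENV_S(char *value, size_t size, const char *name)
  {
    size_t count;
    return (int)getenv_s(&count, value, size, name);
  }
  #define PUTENV_S(name, value)  _putenv_s(name, value)
  #else
  /* Everywhere else: plain getenv()/setenv() with minimal validation. */
  static int GETENV_S(char *value, size_t size, const char *name)
  {
    const char *env = getenv(name);
    if (!env) { if (size) value[0] = '\0'; return 0; }
    if (strlen(env) + 1 > size) return -1;
    memcpy(value, env, strlen(env) + 1);
    return 0;
  }
  #define PUTENV_S(name, value)  setenv(name, value, 1)
  #endif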
|
|
6d2d6d3b
|
2022-02-11T09:34:01
|
|
"YASM" = "Yasm"
The assembler name was initially spelled "YASM", but it has been "Yasm"
for the entirety of libjpeg-turbo's existence.
|
|
e1588a2a
|
2022-02-10T22:23:26
|
|
Build: Fix Neon capability detection w/ MSVC
(broken by 57ba02a408a9a55ccff25aae8b164632a3a4f177)
Refer to #547
|
|
a01857cf
|
2021-12-01T17:08:29
|
|
Build: Disallow NEON_INTRINSICS=0 if GAS is broken
If NEON_INTRINSICS=0, then run the GAS sanity check from libjpeg-turbo
2.0.x and force-enable NEON_INTRINSICS if the test fails. This fixes
the AArch32 build when using Clang 6.0 on Linux or the Clang toolchain
in the Android NDK r15*-r16*. It also prevents users from manually
disabling NEON_INTRINSICS if doing so would break the build (such as
with Xcode 5.)
|
|
57ba02a4
|
2021-10-01T16:28:54
|
|
Build: Improve Neon capability detection
- Use check_c_source_compiles() rather than check_symbol_exists() to
detect the presence of vld1_s16_x3(), vld1_u16_x2(), and
vld1q_u8_x4(). check_symbol_exists() is unreliable for detecting
intrinsics, and in practice, it did not detect the presence of the
aforementioned intrinsics in versions of GCC that support them.
- Set DEFAULT_NEON_INTRINSICS=0 for GCC < 12, even if the aforementioned
intrinsics are available. The AArch64 back end in GCC 10 and 11
supports the necessary intrinsics, but the GAS implementation is still
faster when using those compilers.
Fixes #547
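The sort of probe that check_c_source_compiles() builds is sketched
below (an assumed test program, not the project's actual CMake check; it
compiles only if the compiler provides the three intrinsics):
  #include <stdint.h>
  #include <arm_neon.h>
  int main(void)
  {
    int16_t s16[12] = { 0 };
    uint16_t u16[8] = { 0 };
    uint8_t u8[64] = { 0 };
    int16x4x3_t a = vld1_s16_x3(s16);
    uint16x4x2_t b = vld1_u16_x2(u16);
    uint8x16x4_t c = vld1q_u8_x4(u8);
    return vget_lane_s16(a.val[0], 0) + vget_lane_u16(b.val[0], 0) +
           vgetq_lane_u8(c.val[0], 0);
  }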
|
|
d401d625
|
2021-10-27T03:39:09
|
|
PowerPC: Detect AltiVec support on FreeBSD
Recent FreeBSD/PowerPC compilers, such as Clang 11.0.x on FreeBSD 13, do
the equivalent of passing -maltivec to the compiler by default, so
run-time AltiVec detection is unnecessary. However, it becomes
necessary when using other compilers or when passing -mno-altivec to the
compiler.
Closes #552
|
|
a9c41fbc
|
2021-10-03T12:43:15
|
|
Build: Don't enable Neon SIMD exts with Armv6-
When building for 32-bit Arm platforms, test whether basic Neon
intrinsics will compile with the specified compiler and C flags. This
prevents the build system from enabling the Neon SIMD extensions when
targeting Armv6 and other legacy architectures that do not support Neon
instructions.
Regression introduced by bbd8089297862efb6c39a22b5623f04567ff6443.
(Checking whether gas-preprocessor.pl was needed for 32-bit Arm builds
had the effect of checking whether Neon instructions were supported.)
Fixes #553
|
|
129f0cb7
|
2021-08-25T12:07:58
|
|
Neon/AArch64: Don't put GAS functions in .rodata
Regression introduced by 240ba417aa4b3174850d05ea0d22dbe5f80553c1
Closes #546
|
|
2849d86a
|
2021-08-06T13:41:15
|
|
SSE2/64-bit: Fix trans. segfault w/ malformed JPEG
Attempting to losslessly transform certain malformed JPEG images can
cause the nbits table index in the Huffman encoder to exceed 32768, so
we need to pad the SSE2 implementation of that table to 65536 entries as
we do with the C implementation.
Regression introduced by 087c29e07f7533ec82fd7eb1dafc84c29e7870ec
Fixes #543
|
|
b201838d
|
2021-07-10T16:07:05
|
|
Neon: Silence -Wimplicit-fallthrough warnings
Refer to https://bugs.chromium.org/p/chromium/issues/detail?id=995993
Closes #534
|
|
a1bfc058
|
2021-07-12T13:52:38
|
|
Neon/AArch32: Mark inline asm output as read/write
'buffer' is both passed into the inline assembly code and modified by
it. See https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html, 6.47.2.3.
With GCC 4, this commit does not change the generated assembly code at
all.
With GCC 8, this commit fixes an assembly error:
/tmp/{foo}.s: Assembler messages:
/tmp/{foo}.s:775: Error: registers may not be the same --
`str r9,[r9],#4'
I'm not sure why that error went unnoticed, since I definitely
benchmarked the previous commit with GCC 8. Anyhow, this commit changes
the generated assembly code slightly but does not alter performance.
With Clang 10, this commit changes the generated assembly code slightly
but does not alter performance.
Refer to #529
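A hedged illustration of the constraint in question (placeholder
operands; refer to the GCC Extended Asm documentation linked above for
the actual semantics):
  /* 'buffer' is read (as the store address) and written (post-incremented),
     so it must be listed with the "+" read/write constraint.  Listing it as
     a plain "=r" output lets the compiler assume its initial value is dead,
     which is how the same register ended up as both source and destination
     ("str r9, [r9], #4") with GCC 8. */
  unsigned char *store_word_sketch(unsigned char *buffer, unsigned int word)
  {
  #if defined(__GNUC__) && defined(__arm__)
    __asm__("str %1, [%0], #4"
            : "+r" (buffer)            /* read/write pointer operand */
            : "r" (word)
            : "memory");
  #else
    /* Portable fallback (assumption, not project code). */
    buffer[0] = (unsigned char)word;
    buffer[1] = (unsigned char)(word >> 8);
    buffer[2] = (unsigned char)(word >> 16);
    buffer[3] = (unsigned char)(word >> 24);
    buffer += 4;
  #endif
    return buffer;
  }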
|
|
2a2970af
|
2021-07-09T15:35:56
|
|
Neon/AArch32: Work around Clang T32 miscompilation
Referring to the C standard
(http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf,
J.2 Undefined behavior), the behavior of the compiler is undefined if
"conversion between two pointer types produces a result that is
incorrectly aligned." Thus, the behavior of this code
*((uint32_t *)buffer) = BUILTIN_BSWAP32(put_buffer);
in the AArch32 version of the FLUSH() macro is undefined unless 'buffer'
is 32-bit-aligned. Referring to
https://bugs.llvm.org/show_bug.cgi?id=50785, certain versions of Clang,
when generating Thumb (T32) instructions, miscompile that code into an
assembly instruction (stm) that requires the destination to be
32-bit-aligned. Since such alignment cannot be guaranteed within the
Huffman encoder, this reportedly led to crashes (SIGBUS: illegal
alignment) with AArch32/Thumb builds of libjpeg-turbo running on Android
devices, although thus far I have been unable to reproduce those crashes
with a plain Linux/Arm system.
The miscompilation is visible with the Compiler Explorer:
https://godbolt.org/z/rv1ccx1Pb
However, it goes away when removing the return statement from the
function. Thus, it seems that Clang's behavior in this regard is
somewhat variable, which may explain why the crashes are only
reproducible on certain platforms.
The suggested workaround is to use memcpy(), but whereas Clang and
recent GCC releases are smart enough to compile a 4-byte memcpy() call
into a str instruction, GCC < 6 is not. Referring to
https://godbolt.org/z/ae7Wje3P6, the only way to consistently produce
the desired str instruction across all supported compilers is to use
inline assembly. Visual C++ presumably does not miscompile the code in
question, since no issues have been reported with it, but since the code
relies on undefined compiler behavior, prudence dictates that
e4ec23d7ae051c1c73947f889818900362fdc52d should be reverted for Visual
C++, which this commit does. The performance impact of
e4ec23d7ae051c1c73947f889818900362fdc52d for Visual C++/Arm builds is
unknown (I have no ability to test such builds), but regardless, this
commit reverts the Visual C++/Arm performance to that of libjpeg-turbo
2.1 beta1.
Closes #529
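For comparison, the memcpy()-based workaround mentioned above looks
roughly like this (a sketch; the commit keeps inline assembly instead
because GCC < 6 does not reduce the memcpy() call to a single str
instruction):
  #include <stdint.h>
  #include <string.h>
  /* memcpy() carries no alignment requirement, so this store is well defined
     even when 'buffer' is not 32-bit-aligned.  Clang and recent GCC releases
     compile it to a single str instruction; GCC < 6 does not.
     (__builtin_bswap32() is the GNU byte-swap builtin.) */
  static void flush_word_memcpy(uint8_t *buffer, uint32_t put_buffer)
  {
    uint32_t swapped = __builtin_bswap32(put_buffer);
    memcpy(buffer, &swapped, sizeof(swapped));
  }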
|
|
0081c2de
|
2021-07-07T10:12:46
|
|
Neon/AArch32: Fix build if 'soft' float ABI used
Arm compilers have three floating point ABI options:
'soft' compiles floating point operations as function calls into a
software floating point library, which emulates floating point
operations using integer operations. Floating point function arguments
are passed using integer registers.
'softfp' also compiles floating point operations as function calls into
a floating point library and passes floating point function arguments
using integer registers, but the floating point library functions can
use FPU instructions if the CPU supports them.
'hard' compiles floating point operations into inline FPU instructions,
similarly to x86 and other architectures, and passes floating point
function arguments using FPU registers.
Not all AArch32 CPUs have FPUs or support Neon instructions, so on Linux
and Android platforms, the AArch32 SIMD dispatcher in libjpeg-turbo only
enables the Neon SIMD extensions at run time if /proc/cpuinfo indicates
that the CPU supports Neon instructions or if Neon instructions are
explicitly enabled (e.g. by passing -mfpu=neon to the compiler.) In
order to support all AArch32 CPUs using the same code base, i.e. to
support run-time FPU and Neon auto-detection, it is necessary to compile
the scalar C source code using -mfloat-abi=soft. However, the 'soft'
floating point ABI cannot be used when compiling Neon intrinsics, so the
intrinsics implementation of the Neon SIMD extensions must be compiled
using -mfloat-abi=softfp if the scalar C source code is compiled using
-mfloat-abi=soft.
This commit modifies the build system so that it detects whether
-mfloat-abi=softfp must be explicitly added to the compiler flags when
building the intrinsics implementation of the Neon SIMD extensions.
This will be necessary if the build is using the 'soft' floating
point ABI along with run-time auto-detection of Neon instructions.
Fixes #523
|
|
e4ec23d7
|
2021-02-10T16:45:50
|
|
Neon: Use byte-swap builtins instead of inline asm
Define compiler-independent byte-swap macros and use them instead of
executing 'rev' via inline assembly code with GCC-compatible compilers
or a slow shift-store sequence with Visual C++.
* This produces identical assembly code with:
- 64-bit GCC 8.4.0 (Linux)
- 64-bit GCC 9.3.0 (Linux)
- 64-bit Clang 10.0.0 (Linux)
- 64-bit Clang 10.0.0 (MinGW)
- 64-bit Clang 12.0.0 (Xcode 12.2, macOS)
- 64-bit Clang 12.0.0 (Xcode 12.2, iOS)
* This produces different assembly code with:
- 64-bit GCC 4.9.1 (Linux)
- 32-bit GCC 4.8.2 (Linux)
- 32-bit GCC 8.4.0 (Linux)
- 32-bit GCC 9.3.0 (Linux)
Since the intrinsics implementation of Huffman encoding is not used
by default with these compilers, this is not a concern.
- 32-bit Clang 10.0.0 (Linux)
Verified performance neutrality
Closes #507
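The compiler-independent macros described above might look roughly like
this (an assumed sketch, not the actual definitions in the Neon SIMD
headers):
  #if defined(_MSC_VER) && !defined(__clang__)
  #include <stdlib.h>
  #define BUILTIN_BSWAP32(x)  _byteswap_ulong(x)    /* Visual C++ */
  #define BUILTIN_BSWAP64(x)  _byteswap_uint64(x)
  #else
  #define BUILTIN_BSWAP32(x)  __builtin_bswap32(x)  /* GCC-compatible */
  #define BUILTIN_BSWAP64(x)  __builtin_bswap64(x)
  #endif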
|
|
e795afc3
|
2021-03-25T22:36:15
|
|
SSE2: Fix prog Huff enc err if Sl%32==0 && Al!=0
(regression introduced by 16bd984557fa2c490be0b9665e2ea0d4274528a8)
This implements the same fix for
jsimd_encode_mcu_AC_refine_prepare_sse2() that
a81a8c137b3f1c65082aa61f236aa88af61b3ad4 implemented for
jsimd_encode_mcu_AC_first_prepare_sse2().
Based on:
https://github.com/MegaByte/libjpeg-turbo/commit/1a59587397150c9ef9dffc5813cb3891db4bc0c8
https://github.com/MegaByte/libjpeg-turbo/commit/eb176a91d87a470bf8c987be786668aa944dd1dd
Fixes #509
Closes #510
|
|
2c01200c
|
2021-03-15T19:56:53
|
|
Build: Fix incorrect regexes w/ if(...MATCHES...)
"arm*" as a regex means 'ar' followed by zero or more 'm' characters,
which matches 'parisc' and 'sparc64' as well.
|
|
74e6ea45
|
2021-01-05T20:23:11
|
|
Neon: Fix Huffman enc. error w/Visual Studio+Clang
The GNU builtin function __builtin_clzl() accepts an unsigned long
argument, which is 8 bytes wide on LP64 systems (most Un*x systems,
including Mac) but 4 bytes wide on LLP64 systems (Windows.) This caused
the Neon intrinsics implementation of Huffman encoding to produce
mathematically incorrect results when compiled using Visual Studio with
Clang.
This commit changes all invocations of __builtin_clzl() in the Neon SIMD
extensions to __builtin_clzll(), which accepts an unsigned long long
argument that is guaranteed to be 8 bytes wide on all systems.
Fixes #480
Closes #490
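A small illustration of the data-model pitfall (hypothetical helpers,
not project code):
  /* unsigned long is 8 bytes on LP64 systems (Linux, macOS) but 4 bytes on
     LLP64 systems (Windows), so __builtin_clzl() silently truncates a 64-bit
     argument on Windows.  __builtin_clzll() always operates on an 8-byte
     unsigned long long. */
  int clz64_portable(unsigned long long value)
  {
    return __builtin_clzll(value);  /* 63 for value == 1 on every platform */
  }
  int clz64_broken_on_llp64(unsigned long long value)
  {
    return __builtin_clzl(value);   /* argument truncated to 32 bits on Windows */
  }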
|
|
eb14189c
|
2020-11-17T12:48:49
|
|
Fix Neon SIMD build issues with Visual Studio
- Use the _M_ARM and _M_ARM64 macros provided by Visual Studio for
compile-time detection of Arm builds, since __arm__ and __aarch64__
are only present in GNU-compatible compilers.
- Neon/intrinsics: Use the _CountLeadingZeros() and
_CountLeadingZeros64() intrinsics provided by Visual Studio, since
__builtin_clz() and __builtin_clzl() are only present in
GNU-compatible compilers.
- Neon/intrinsics: Since Visual Studio does not support static vector
initialization, replace static initialization of Neon vectors with the
appropriate intrinsics. Compared to the static initialization
approach, this produces identical assembly code with both GCC and
Clang.
- Neon/intrinsics: Since Visual Studio does not support inline assembly
code, provide alternative code paths for Visual Studio whenever inline
assembly is used.
- Build: Set FLOATTEST appropriately for AArch64 Visual Studio builds
(Visual Studio does not emit fused multiply-add [FMA] instructions by
default for such builds.)
- Neon/intrinsics: Move temporary buffer allocation outside of nested
loops. Since Visual Studio configures Arm builds with a relatively
small amount of stack memory, attempting to allocate those buffers
within the inner loops caused a stack overflow.
Closes #461
Closes #475
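A hedged sketch of the static-initialization point in the list above
(illustrative constants, not the project's actual tables):
  #include <stdint.h>
  #include <arm_neon.h>
  /* GNU-compatible compilers accept brace initialization of Neon vector
     types, but Visual Studio does not, so the constants live in an ordinary
     array and are loaded with an intrinsic.  GCC and Clang generate the same
     code for either form. */
  static const int16_t consts[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
  int16x8_t load_consts(void)
  {
    return vld1q_s16(consts);
  }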
|
|
33859880
|
2020-11-13T12:12:47
|
|
Neon: Auto-detect compiler intrinsics completeness
This allows the Neon intrinsics code to be built successfully (albeit
likely with reduced run-time performance) with Xcode 5.0-6.2
(iOS/AArch64) and Android NDK < r19 (AArch32). Note that Xcode 5.0-6.2
will not build the Armv8 GAS code without gas-preprocessor.pl, and no
version of Xcode will build the Armv7 GAS code without
gas-preprocessor.pl, so we always use the full Neon intrinsics
implementation by default with macOS and iOS builds.
Auto-detecting the completeness of the compiler's set of Neon intrinsics
also allows us to more intelligently set the default value of
NEON_INTRINSICS, based on the values of HAVE_VLD1*. This is a
reasonable, albeit imperfect, proxy for whether a compiler has a full
and optimal set of Neon intrinsics. Specific notes:
- 64-bit RGB-to-YCbCr color conversion
does not use any of the intrinsics in question, regresses with GCC
- 64-bit accurate integer forward DCT
uses vld1_s16_x3(), regresses with GCC
- 64-bit Huffman encoding
uses vld1q_u8_x4(), regresses with GCC
- 64-bit YCbCr-to-RGB color conversion
does not use any of the intrinsics in question, regresses with GCC
- 64-bit accurate integer inverse DCT
uses vld1_s16_x3(), regresses with GCC
- 64-bit 4x4 inverse DCT
uses vld1_s16_x3(). I did not test this algorithm in isolation, so
it may in fact regress with GCC, but the regression may be hidden by
the speedup from the new SIMD-accelerated upsampling algorithms.
- 32-bit RGB-to-YCbCr color conversion:
uses vld1_u16_x2(), regresses with GCC
- 32-bit accurate integer forward DCT
uses vld1_s16_x3(), regression irrelevant because there was no
previous implementation
- 32-bit accurate integer inverse DCT
uses vld1_s16_x3(), regresses with GCC
- 32-bit fast integer inverse DCT
does not use any of the intrinsics in question, regresses with GCC
- 32-bit 4x4 inverse DCT
uses vld1_s16_x3(). I did not test this algorithm in isolation, so
it may in fact regress with GCC, but the regression may be hidden by
the speedup from the new SIMD-accelerated upsampling algorithms.
Presumably when GCC includes a full and optimal set of Neon intrinsics,
the HAVE_VLD1* tests will pass, and the full Neon intrinsics
implementation will be enabled automatically.
|
|
bbd80892
|
2020-11-10T17:54:14
|
|
Neon: Finalize intrinsics implementation
- Remove gas-preprocessor.pl. None of the compilers that can build the
new intrinsics implementation require gas-preprocessor.pl (tested
with Xcode and with Clang 3.9+ for Linux.)
- Document that Xcode 6.3.x or later is now required for iOS builds
(older versions of Xcode do not have a full set of Neon intrinsics.)
- Add a change log entry.
- Do not enable the ASM CMake language unless NEON_INTRINSICS is false.
- Add a Clang/Arm64 test to .travis.yml in order to test the new
intrinsics implementation.
Closes #455
|
|
141f26ff
|
2018-09-18T18:28:31
|
|
Neon: Intrinsics impl. of 2x2 and 4x4 scaled IDCTs
The previous AArch32 and AArch64 GAS implementations have been removed,
since the intrinsics implementations provide the same or better
performance.
|
|
4574f01f
|
2018-06-28T16:17:36
|
|
Neon: Intrinsics impl. of h2v1 & h2v2 plain upsamp
There was no previous GAS implementation.
NOTE: This doesn't produce much of a speedup when using -O3, because -O3
already enables Neon autovectorization, which works well for the scalar
C implementation of plain upsampling. However, the Neon SIMD
implementation benefits builds that use other optimization levels.
|
|
ba52a3de
|
2018-07-19T18:46:24
|
|
Neon: Intrinsics impl of h2v1 & h2v2 merged upsamp
There was no previous GAS implementation.
This commit also reverts 40557b23015d2f8b576420231b8dd1f39f2ceed8 and
7723d7f7d0aa40349d5bdd1fbe4f8631fd5a2b57.
7723d7f7d0aa40349d5bdd1fbe4f8631fd5a2b57 was only necessary because
there was no Neon implementation of merged upsampling/color conversion,
and 40557b23015d2f8b576420231b8dd1f39f2ceed8 was only necessary because
of 7723d7f7d0aa40349d5bdd1fbe4f8631fd5a2b57.
|
|
240ba417
|
2020-01-07T16:40:32
|
|
Neon: Intrinsics impl. of prog. Huffman encoding
The previous AArch64 GAS implementation has been removed, since the
intrinsics implementation provides the same or better performance.
There was no previous AArch32 GAS implementation.
|
|
ed581cd9
|
2019-06-12T18:16:53
|
|
Neon: Intrinsics impl. of accurate int inverse DCT
The previous AArch32 and AArch64 GAS implementations are retained by
default when using GCC, in order to avoid a performance regression. The
intrinsics implementation can be forced on or off using the new
NEON_INTRINSICS CMake variable.
|
|
2c6b68e2
|
2018-09-25T18:20:25
|
|
Neon: Intrinsics impl. of fast integer inverse DCT
The previous AArch32 GAS implementation is retained by default when
using GCC, in order to avoid a performance regression. The intrinsics
implementation can be forced on or off using the new NEON_INTRINSICS
CMake variable. The previous AArch64 GAS implementation has been
removed, since the intrinsics implementation provides the same or better
performance.
|
|
2acfb93c
|
2019-05-08T15:43:26
|
|
Neon: Intrinsics impl. of h1v2 fancy upsampling
There was no previous GAS implementation.
|
|
97530777
|
2018-06-15T11:13:52
|
|
Neon: Intrinsics impl. of h2v1 & h2v2 fancy upsamp
The previous AArch32 GAS implementation of h2v1 fancy upsampling has
been removed, since the intrinsics implementation provides the same or
better performance. There was no previous GAS implementation of h2v2
fancy upsampling, and there was no previous AArch64 GAS implementation
of h2v1 fancy upsampling.
|
|
5dbd3932
|
2018-08-01T16:52:31
|
|
Neon: Intrinsics implementation of YCbCr->RGB565
The previous AArch64 GAS implementation is retained by default when
using GCC, in order to avoid a performance regression. The intrinsics
implementation can be forced on or off using the new NEON_INTRINSICS
CMake variable. The previous AArch32 GAS implementation has been
removed, since the intrinsics implementation provides the same or better
performance.
|
|
0f35cd68
|
2018-07-16T10:25:14
|
|
Neon: Intrinsics implementation of YCbCr->RGB
The previous AArch64 GAS implementation is retained by default when
using GCC, in order to avoid a performance regression. The intrinsics
implementation can be forced on or off using the new NEON_INTRINSICS
CMake variable. The previous AArch32 GAS implementation has been
removed, since the intrinsics implementation provides the same or better
performance.
|
|
f3c3f01d
|
2018-09-24T04:35:20
|
|
Neon: Intrinsics impl. of Huffman encoding
The previous AArch64 GAS implementation is retained by default when
using GCC, in order to avoid a performance regression. The intrinsics
implementation can be forced on or off using the new NEON_INTRINSICS
CMake variable. The previous AArch32 GAS implementation has been
removed, since the intrinsics implementation provides the same or better
performance.
|
|
d0004de5
|
2018-08-22T13:38:37
|
|
Neon: Intrinsics impl. of accurate int forward DCT
The previous AArch64 GAS implementation is retained by default when
using GCC, in order to avoid a performance regression. The intrinsics
implementation can be forced on or off using the new NEON_INTRINSICS
CMake variable. There was no previous AArch32 GAS implementation.
|
|
3d84668d
|
2018-08-23T14:22:23
|
|
Neon: Intrinsics impl. of fast integer forward DCT
The previous AArch32 and AArch64 GAS implementations have been removed,
since the intrinsics implementation provides the same or better
performance.
|
|
951d3677
|
2018-08-24T18:04:21
|
|
Neon: Intrinsics impl. of int sample conv./quant.
The previous AArch32 and AArch64 GAS implementations have been removed,
since the intrinsics implementation provides the same or better
performance.
|
|
366168aa
|
2018-08-06T15:14:34
|
|
Neon: Intrinsics impl. of h2v1 & h2v2 downsampling
The previous AArch64 GAS implementation has been removed, since the
intrinsics implementation provides the same or better performance.
There was no previous AArch32 GAS implementation.
|
|
f73b1dbc
|
2018-08-09T15:08:21
|
|
Neon: Intrinsics implementation of RGB->Grayscale
There was no previous GAS implementation.
|
|
4f2216b4
|
2019-11-26T18:14:33
|
|
Neon: Intrinsics implementation of RGB->YCbCr
The previous AArch32 and AArch64 GAS implementations are retained by
default when using GCC, in order to avoid a performance regression. The
intrinsics implementation can be forced on or off using a new
NEON_INTRINSICS CMake variable.
|
|
7c1a1789
|
2020-11-05T16:04:55
|
|
Merge branch 'master' into dev
|
|
6e632af9
|
2020-11-04T10:13:06
|
|
Demote "fast" [I]DCT algorithms to legacy status
- Refer to the "slow" [I]DCT algorithms as "accurate" instead, since
they are not slow under libjpeg-turbo.
- Adjust documentation claims to reflect the fact that the "slow" and
"fast" algorithms produce about the same performance on AVX2-equipped
CPUs (because of the dual-lane nature of AVX2, it was not possible to
accelerate the "fast" algorithm beyond what was achievable with SSE2.)
Also adjust the claims to reflect the fact that the "fast" algorithm
tends to be ~5-15% faster than the "slow" algorithm on
non-AVX2-equipped CPUs, regardless of the use of the libjpeg-turbo
SIMD extensions.
- Indicate the legacy status of the "fast" and float algorithms in the
documentation and cjpeg/djpeg usage info.
- Remove obsolete paragraph in the djpeg man page that suggested that
the float algorithm could be faster than the "fast" algorithm on some
CPUs.
|
|
cd342acf
|
2020-10-27T16:42:14
|
|
Merge branch 'master' into dev
|
|
d27b935a
|
2020-10-27T15:04:39
|
|
Consistify formatting to simplify checkstyle
The checkstyle script was hastily developed prior to libjpeg-turbo 2.0
beta1, so it has a lot of exceptions and is thus prone to false
negatives. This commit eliminates some of those exceptions.
|
|
59352195
|
2020-10-19T21:17:46
|
|
Merge branch 'master' into dev
|
|
1ed312ea
|
2020-10-15T17:47:31
|
|
"ARM"="Arm", "NEON"="Neon"
Refer to:
https://www.arm.com/company/policies/trademarks/arm-trademark-list/arm-trademark
https://www.arm.com/company/policies/trademarks/arm-trademark-list/neon-trademark
NOTE: These changes are only applied to change log entries for 2.0.x and
later, since the change log is a historical record and Arm's new
trademark policy did not go into effect until late 2017.
|
|
ae08115d
|
2020-10-15T10:25:46
|
|
Merge branch 'master' into dev
|
|
b5a14727
|
2020-10-15T10:22:51
|
|
Build: Fix permissions
|
|
6ab61fa1
|
2020-09-13T17:02:09
|
|
Merge branch 'master' into dev
|
|
6ee5d5f5
|
2020-07-28T18:06:20
|
|
ARMv8 NEON: Support Windows builds w/AArch64 MinGW
Based on:
https://github.com/mstorsjo/libjpeg-turbo/commit/c5ef6659285a7d5bc74c679aa87ad187186cf7e1
Closes #438
|
|
00d48d7e
|
2020-02-17T18:14:10
|
|
Merge branch 'master' into dev
|
|
035262a1
|
2020-02-17T18:03:10
|
|
MIPS DSPr2: Work around various 'make test' errors
Referring to #408, this commit #ifdefs DSPr2 SIMD functions that only
work on little endian processors, and it completely excludes
jsimd_h2v1_downsample_dspr2() and jsimd_h2v2_downsample_dspr2(). The
latter two functions fail with the TJBench tiling regression tests, most
likely because the implementation of the functions predates those tests.
|
|
ed7cab47
|
2020-02-17T16:35:00
|
|
MIPS DSPr2: Fix compiler warning with -mdspr2
If -mdspr2 is passed to the compiler, __mips_dsp will be defined, and
__mips_dsp_rev will be >= 2, so parse_proc_cpuinfo() will not be used.
|
|
42d679b9
|
2020-02-17T15:19:32
|
|
MIPS SIMD: Always honor JSIMD_FORCE* env vars
Previously, these environment variables were not honored unless a 74K
CPU was detected, but this detection doesn't work properly with QEMU's
user mode emulation. With all other CPU types, libjpeg-turbo honors
JSIMD_FORCE* regardless of CPU detection.
|
|
b34c85ea
|
2019-12-31T01:20:12
|
|
Merge branch 'master' into dev
|
|
166e3421
|
2019-12-31T01:10:30
|
|
simd/arm64/jsimd_neon.S: Fix checkstyle issue
|
|
c4675d62
|
2019-12-31T00:42:53
|
|
Merge branch 'master' into dev
|
|
b542e4c8
|
2019-12-20T13:18:23
|
|
ARMv8 SIMD: Support execute-only memory (XOM)
Move constants out of the .text section in simd/arm64/jsimd_neon.S and
into a .rodata section. This ensures that the ARMv8 NEON SIMD
extensions are compatible with memory layouts that are marked
execute-only (and thus unreadable.)
Based on:
https://github.com/ivanloz/libjpeg-turbo/commit/88f3ca7664fadfb5e106efecb7845753aaf330b7
Closes #318
|
|
81b8c0ee
|
2019-12-17T14:18:35
|
|
Loongson MMI: Merge with MIPS64/add auto-detection
Modern Loongson processors are MIPS64-compatible, and MMI instructions
are now supported in the mainline of GCC. Thus, this commit adds
compile-time and run-time auto-detection of MMI instructions and moves
the MMI SIMD extensions for libjpeg-turbo from simd/loongson/ to
simd/mips64/. That will allow MMI and MSA instructions to co-exist
in the same build once #377 has been integrated.
Based on:
https://github.com/FlyGoat/libjpeg-turbo/commit/82953ddd61549428f58066f7eff0d60ce7429865
Closes #383
|
|
e821464f
|
2018-04-03T12:47:54
|
|
ARM64 NEON SIMD impl. of prog. Huffman encoding
This commit adds ARM64 NEON optimizations for the
encode_mcu_AC_first() and encode_mcu_AC_refine() functions used in
progressive Huffman encoding.
Compression speedups for the typical set of five libjpeg-turbo test
images (https://libjpeg-turbo.org/About/Performance):
Cortex-A53: 23.8-39.2% (avg. 32.2%)
Cortex-A72: 26.8-41.1% (avg. 33.5%)
Apple A7: 29.7-45.9% (avg. 39.6%)
Closes #229
|
|
9c6f79e9
|
2019-11-13T14:36:37
|
|
Fix formatting issues detected by checkstyle
|
|
f60b6dd3
|
2019-11-12T17:42:39
|
|
Remove vestigial jpeg_nbits_table.inc
Not needed since 087c29e07f7533ec82fd7eb1dafc84c29e7870ec
|
|
713c451f
|
2019-11-08T14:53:55
|
|
Enable SSE2 progressive Huffman encoder for x32
Referring to #289, I'm not sure how I arrived at the conclusion that
the SSE2 progressive Huffman encoder doesn't provide any speedup for
x32. Upon re-testing, I discovered it to be about 50% faster than the
C encoder.
This commit also re-purposes one of the CI tests (specifically, the
jpeg-7 API/ABI test) so that it tests x32 as well.
|
|
cbf0fcc8
|
2019-11-05T11:56:06
|
|
i386 SSE2 Huffman: Fix pointer arithmetic issue
Splitting the pointer arithmetic in GET_SYM() into separate add and
sub instructions was an attempt to work around an error ("invalid operand
type") that occurred when assembling the file with NASM. However, this
created a link error on macOS ("ld: illegal text-relocation to
'_jconst_huff_encode_one_block' in
simd/CMakeFiles/simd.dir/i386/jchuff-sse2.asm.o from
'_jsimd_huff_encode_one_block_sse2' in
simd/CMakeFiles/simd.dir/i386/jchuff-sse2.asm.o for architecture i386")
and also changed the alignment of the code in ways that might have
affected the previous benchmark results (which took a great deal of time
to obtain.) Ultimately, the path of least resistance is just to
require NASM 2.13 or later.
|
|
bbedb4b5
|
2019-11-05T15:43:21
|
|
Merge branch 'master' into dev
|
|
cf54623b
|
2019-11-05T12:21:25
|
|
Mac: Support hiding SIMD fct symbols w/ NASM 2.14+
(NASM 2.14+ now supports the private_extern section directive, which was
previously only available with YASM.)
|
|
087c29e0
|
2018-10-22T10:05:18
|
|
Optimize Huffman encoding
This commit improves the C and SSE2 Huffman encoding implementations in
the following ways:
- Avoid using xmm8-xmm15 in the x86-64 SSE2 implementation. There is no
actual need to use those registers, and avoiding them produces a
cleaner WIN64 function entry/exit, as well as shorter code, since REX
prefixes can be avoided (this is helpful on certain CPUs, such as
Intel Atom, for which instruction fetch and decoding can be a
bottleneck.)
- Optimize register usage so that fewer REX prefixes and
register-register moves are needed.
- Use the bit counter to store the number of free bits in the bit buffer
rather than the number of bits in the bit buffer. This changes the
method for inserting a code into the bit buffer to:
(put_buffer |= code << (free_bits -= code_size));
As a result:
* Only one bit counter needs to stay in a register (we just keep it in
cl.)
* The bit buffer contents are already properly aligned to be written
out (after a byte swap.)
* Adjusting the free bits counter and checking if the bit buffer is
full can be combined into a single operation.
* We can wait to flush the bit buffer until the buffer is actually
full and not just in danger of becoming full. Thus, eight bytes can
be flushed at a time.
- Speed is quite sensitive to the alignment of branch target labels, so
insert some padding and remove branches from the flush code.
(Flushing this way isn't actually faster when compared to using
branches, but the branchless code doesn't need extra alignment and is
thus smaller.)
- Speculatively write out the bit buffer as a single 8-byte write,
falling back to a byte-by-byte write only if there are any 0xFF bytes
in the bit buffer that need to be encoded as 0xFF 0x00.
- Use MMX registers for the 32-bit implementation (so the bit buffer can
be 64 bits wide.)
- Slightly reduce overall function code size.
- Eliminate or combine a few SSE instructions.
- Make some minor improvements to instruction scheduling.
- Adjust flush_bits() in jchuff.c to handle cases in which the bit
buffer has less than 7 free bits (apparently that couldn't happen
before.)
Based on:
https://github.com/1camper/libjpeg-turbo/commit/947a09defa2ec848322b1bae050d1b57b316a32a
https://github.com/1camper/libjpeg-turbo/commit/262ebb6b816fd8a49ff4d7185f6c5153dddde02f
https://github.com/1camper/libjpeg-turbo/commit/6e9a091221bb244c8ba232a942650e94254ffcf0
See change log for performance claims.
Closes #292
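The free-bits insertion scheme described above can be sketched in C as
follows (a simplified model with illustrative names; it omits the
0xFF 0x00 stuffing and the speculative-write fallback):
  #include <stdint.h>
  #include <string.h>
  typedef struct {
    uint64_t put_buffer;  /* bit accumulator, filled from the MSB downward */
    int free_bits;        /* unused (low) bits; starts at 64 with buffer 0 */
    uint8_t *out;         /* output pointer */
  } bit_writer;
  static void emit_code(bit_writer *b, uint64_t code, int code_size)
  {
    if (b->free_bits < code_size) {
      /* Top up the buffer with the high bits of the code, flush all eight
         bytes at once (the contents are already left-aligned), then start a
         new buffer with the remaining low bits of the code. */
      uint64_t full = b->put_buffer | (code >> (code_size - b->free_bits));
      uint64_t swapped = __builtin_bswap64(full);  /* GNU builtin; sketch only */
      memcpy(b->out, &swapped, 8);
      b->out += 8;
      b->free_bits += 64 - code_size;
      b->put_buffer = code << b->free_bits;
    } else {
      /* The single-operation insertion from the commit message. */
      b->put_buffer |= code << (b->free_bits -= code_size);
    }
  }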
|
|
d92ae5df
|
2019-11-04T18:50:45
|
|
Merge branch 'master' into dev
|
|
6902cdb1
|
2019-10-28T14:28:29
|
|
Build: Don't require ASM_NASM if !REQUIRE_SIMD
The build system is supposed to fall back to a non-SIMD build if
WITH_SIMD==1 but REQUIRE_SIMD==0.
Based on:
https://github.com/emlix/libjpeg-turbo/commit/972df912d08b755db3ca60d93b058a42e97a79eb
Closes #384
|
|
95f4d6ef
|
2019-10-24T02:13:23
|
|
Merge branch 'master' into dev
|
|
3a32d199
|
2019-10-17T19:59:01
|
|
x86 SIMD: Consistify capitalization of NASM types
byte, word, dword, qword, oword, and yword are all assembler keywords,
so it makes sense to use lowercase for these so as not to mistake them
for macros or constants.
|
|
9a51a87a
|
2019-10-17T11:21:32
|
|
x86 SIMD: Remove obsolete [TAB8] comments
With apologies to Richard Hendricks, our assembly code no longer uses
tabs.
|
|
8ef53b10
|
2019-08-14T22:08:59
|
|
Merge branch 'master' into dev
|
|
a81a8c13
|
2019-08-14T13:17:11
|
|
SSE2 SIMD: Fix prog Huffman enc. error if Sl%16==0
(regression introduced by 5b177b3cab5cfb661256c1e74df160158ec6c34e)
The SSE2 implementation of progressive Huffman encoding performed
extraneous iterations when the scan length was a multiple of 16.
Based on:
https://github.com/rouault/libjpeg-turbo/commit/bb7f1ef98305da915e581b59bd0ec2ef7bdb8468
Fixes #335
Closes #367
|
|
7fbfe29c
|
2019-07-18T15:18:27
|
|
Merge branch 'master' into dev
|
|
f37b7c1f
|
2019-07-02T11:28:26
|
|
Build: Fix build/install with Xcode IDE
Closes #355
|
|
f36d5315
|
2019-04-23T14:54:23
|
|
Merge branch 'master' into dev
|
|
aa9db616
|
2019-04-15T17:55:47
|
|
x86 SIMD: Check for CPUID leaf 07H before using
According to Intel's manual [1], "If a value entered for CPUID.EAX is
higher than the maximum input value for basic or extended function for
that processor then the data for the highest basic information leaf is
returned."
Right now, libjpeg-turbo doesn't first check that leaf 07H is supported
before attempting to use it, so the ostensible AVX2 bit (Bit 05) of the
CPUID result might actually be Bit 05 from a lower leaf. That bit might
be set, even if the CPU doesn't support AVX2.
This commit modifies the x86 and x86-64 SIMD feature detection code so
that it first checks whether CPUID leaf 07H is supported before
attempting to use it to check for AVX2 instruction support.
DRC:
This commit should fix
https://bugzilla.mozilla.org/show_bug.cgi?id=1520760
However, I have not personally been able to reproduce that issue,
despite using a Nehalem (pre-AVX2) CPU on which the maximum CPUID leaf
has been limited via a BIOS setting.
Closes #348
[1]
"IntelĀ® 64 and IA-32 Architectures Software Developer's Manual, Volume 2 (2A, 2B, 2C & 2D): Instruction Set Reference, A-Z", https://software.intel.com/sites/default/files/managed/a4/60/325383-sdm-vol-2abcd.pdf, page 3-192.
|
|
afbe48c2
|
2019-02-27T13:05:58
|
|
MMI: Support 32-bit Loongson architectures
|
|
98ff5507
|
2019-02-27T13:17:35
|
|
MMI: Fix bug in jsimd_h2v1_merged_upsample_mmi()
... that occurred when ((image width) & 1) != 0.
|
|
3ca6dba9
|
2019-02-17T09:33:57
|
|
Merge branch 'master' into dev
|
|
b46af82c
|
2019-02-12T17:35:10
|
|
ARMv7 NEON: #ifdef unused funcs/vars w/ -mfpu=neon
When simd/arm/jsimd.c is compiled with __ARM_NEON__ defined (which will
be the case if -mfpu=neon is passed to the compiler), the
parse_proc_cpuinfo() and check_feature() functions and the bufsize
variable are unused and thus need to be #ifdef'ed out in order to avoid
compiler warnings. Note that the bufsize variable was already #ifdef'ed
out on Linux but not on Android due to lack of parentheses (&& takes
precedence over ||.)
Closes #331
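The precedence point can be illustrated with a hedged sketch (the exact
condition in simd/arm/jsimd.c may differ; the macro names below are just
stand-ins):
  /* Without parentheses, && binds tighter than ||, so this parses as
     (LINUX && !__ARM_NEON__) || ANDROID, and the variable survives on
     Android even when Neon support is baked in at compile time: */
  #if defined(LINUX) && !defined(__ARM_NEON__) || defined(ANDROID)
  char bufsize_kept_on_android[1];
  #endif
  /* With parentheses, the Neon exclusion applies to both platforms: */
  #if (defined(LINUX) || defined(ANDROID)) && !defined(__ARM_NEON__)
  char bufsize_correctly_excluded[1];
  #endif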
|
|
bdec9958
|
2019-02-01T01:16:13
|
|
MMI: Fix unaligned decomp. perf. for 32-bit PFs
(Oversight from db84125fcb05bbc48430b53d45b89392a8f3609e)
|
|
fa905fbf
|
2019-02-01T00:42:09
|
|
MMI: Use unaligned stores w/ merged upsampling
... when necessary. This was an oversight from
2f9e7c84d1a95c1bae8bb8d38fbc2adaeecf4d41
|
|
9aada25c
|
2019-02-01T01:01:38
|
|
Merge branch 'master' into dev
|
|
e2442e07
|
2019-02-01T00:56:02
|
|
MMI: Fix unaligned comp. perf. for 32-bit PFs also
(Oversight from 1c2d3cfaaf7324d9091ba3cc4e900f60a16fe1aa)
|
|
73fd6041
|
2019-02-01T00:24:09
|
|
MMI: Fix formatting issue detected by checkstyle
|
|
2f9e7c84
|
2019-01-31T21:24:13
|
|
Loongson MMI h2v1 and h2v2 merged upsampling
Based on:
https://github.com/zhanglixia-hf/libjpeg-turbo/commit/e8f5cee5aa83e2d1b0af3a7bd0aaa2391c930fb2
|
|
3c7199ff
|
2019-01-31T16:22:31
|
|
Loongson MMI h2v1 fancy upsampling
Based on:
https://github.com/zhanglixia-hf/libjpeg-turbo/commit/e8f5cee5aa83e2d1b0af3a7bd0aaa2391c930fb2
|
|
73b98acd
|
2019-01-31T13:54:46
|
|
Loongson MMI RGB-to-Grayscale conversion
Based on:
https://github.com/zhanglixia-hf/libjpeg-turbo/commit/e8f5cee5aa83e2d1b0af3a7bd0aaa2391c930fb2
|
|
bb0d1702
|
2019-01-30T22:41:57
|
|
Improve readability of Loongson MMI code
We have more than eight registers to work with, as well as three-operand
intrinsics, so there's no need for the implementation to be such a
literal port of the MMX code.
|
|
db84125f
|
2019-01-30T14:12:06
|
|
MMI: Use aligned store instructions when possible
This improves decompression performance by 2-5%.
|
|
ae4221f9
|
2019-01-29T19:51:08
|
|
Loongson MMI fast forward/inverse DCT
Based on:
https://github.com/zhanglixia-hf/libjpeg-turbo/commit/32a9ca222d7e4c8e01bb75cd137f204a0587b383
|
|
674343ab
|
2019-01-31T15:30:25
|
|
Merge branch 'master' into dev
|
|
1c2d3cfa
|
2019-01-30T12:43:45
|
|
MMI: Fix comp. perf. issue w/ unaligned image rows
Using ldc1 with a non-64-bit-aligned memory location causes as much as a
10x slow-down in overall compression performance.
|
|
01e30323
|
2019-01-23T14:58:24
|
|
Eliminate support for compilers w/o unsigned char
libjpeg-turbo has never really supported such compilers, since (AFAIK)
they are non-existent on any modern computing platform and thus
impossible for us to test. (Also, the TurboJPEG API would break without
unsigned chars.)
Furthermore, the unified CMake-based build system introduced in 2.0
always defines HAVE_UNSIGNED_CHAR, so retaining other code paths is
pointless. Eliminating support for compilers without unsigned char
eliminates the need for the GETJSAMPLE() macro, which improves the
readability of many parts of the code as well as improving the
performance of writing Targa and Windows BMP files.
Fixes #317
|
|
2cc4f93c
|
2018-11-12T14:40:19
|
|
Merge branch 'master' into dev
|