|
6d48aaac
|
2025-10-22T10:30:53
|
|
TJ: Handle lossless/CS params w/ YUV enc/compress
- If TJPARAM_LOSSLESS was set, then tj3EncodeYUV*8() called
jpeg_enable_lossless() (via setCompDefaults()), which caused the
underlying libjpeg API to silently disable subsampling and color
conversion. This led to three issues:
1. Attempting to encode RGB pixels produced incorrect YCbCr or
grayscale components, since color conversion did not occur. The
same issue occurred if TJPARAM_COLORSPACE was explicitly set to
TJCS_RGB.
2. Attempting to encode RGB pixels into a grayscale plane caused
tj3EncodeYUVPlanes8() to overflow the caller's destination pointer
array if the array was not big enough to accommodate three
pointers. If called from tj3EncodeYUV8(), tj3EncodeYUVPlanes8()
did not overflow the caller's destination pointer array, but a
segfault occurred when it attempted to copy to the Cb and Cr
pointers, which were NULL. The same issue occurred if
TJPARAM_COLORSPACE was explicitly set to anything other than
TJCS_GRAY.
3. Attempting to encode RGB pixels into subsampled YUV planes caused
tj3EncodeYUV*8() to overflow the caller's buffer(s) if the
buffer(s) were not big enough to accommodate 4:4:4 (non-subsampled)
YUV planes. That would have been the case if the caller allocated
its buffer(s) based on the return value of tj3YUVBufSize() or
tj3YUVPlaneSize(). The same issue occurred if TJPARAM_SUBSAMP was
explicitly set to TJSAMP_444.
tj3EncodeYUV*8() now ignores TJPARAM_LOSSLESS and TJPARAM_COLORSPACE.
- If TJPARAM_LOSSLESS was set, then attempting to compress a grayscale
plane into a JPEG image caused tj3CompressFromYUVPlanes8() to overflow
the caller's source pointer array if the array was not big enough to
accommodate three pointers. If called from tj3CompressFromYUV8(),
tj3CompressFromYUVPlanes8() did not overflow the caller's source
pointer array, but a segfault occurred when it attempted to copy from
the Cb and Cr pointers, which were NULL. This was similar to Issue 2
above. The same issue occurred if TJPARAM_COLORSPACE was explicitly
set to anything other than TJCS_GRAY.
tj3CompressFromYUV*8() now throws an error if TJPARAM_LOSSLESS is set,
and it now ignores TJPARAM_COLORSPACE.
These issues did not pose a security risk, since security exploits
involve supported workflows that function normally except when supplied
with malformed input data. It is documented that colorspace conversion,
chrominance subsampling, and compression from planar YUV images are
unavailable when TJPARAM_LOSSLESS is set. When TJPARAM_LOSSLESS is set,
the library effectively sets TJPARAM_SUBSAMP to TJSAMP_444 and
TJPARAM_COLORSPACE to TJCS_RGB, TJCS_GRAY, or TJCS_CMYK, depending on
the pixel format of the source image. That behavior is strongly implied
by the documentation of TJPARAM_LOSSLESS, although the documentation
isn't specific about whether TJPARAM_LOSSLESS applies to
tj3EncodeYUV*8(). In any case, setting TJPARAM_LOSSLESS before calling
tj3CompressFromYUV*8() was never a supported or functional workflow, and
setting TJPARAM_LOSSLESS before calling tj3EncodeYUV*8() was never a
functional workflow. Thus, there should be no applications "in the
wild" that use either workflow. Such applications would crash every
time they attempted to encode to or compress from a YUV image. In other
words, setting TJPARAM_LOSSLESS or TJPARAM_COLORSPACE required the
caller to understand the ramifications of the loss of color conversion
and/or subsampling, and failing to do so was essentially API abuse
(albeit subtle API abuse, hence the desire to make the behavior more
intuitive.)
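For illustration, a minimal sketch of the Issue 3 workflow (dimensions
and buffer contents are placeholders; error handling omitted):
  #include <stdlib.h>
  #include <turbojpeg.h>

  int width = 640, height = 480;
  unsigned char *rgbBuf = calloc((size_t)width * height, 3);
  tjhandle handle = tj3Init(TJINIT_COMPRESS);
  tj3Set(handle, TJPARAM_SUBSAMP, TJSAMP_420);
  tj3Set(handle, TJPARAM_LOSSLESS, 1);    /* triggered the bug */
  /* The caller sizes its buffer for 4:2:0 planes, per the API docs: */
  unsigned char *yuvBuf =
    malloc(tj3YUVBufSize(width, 1, height, TJSAMP_420));
  /* ...but, prior to this commit, the encoder silently emitted 4:4:4
     planes, overrunning yuvBuf: */
  tj3EncodeYUV8(handle, rgbBuf, width, 0, height, TJPF_RGB, yuvBuf, 1);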
This commit also removes no-op code introduced by
6da05150efa9b7a190d05bf82295127c778d15ab. Since setCompDefaults()
returns after calling jpeg_enable_lossless(), modifying the subsampling
level locally had no effect. The libjpeg API already silently disables
subsampling in jinit_c_master_control() if lossless compression is
enabled, so it was not necessary for setCompDefaults() to handle that.
Fixes #839
|
|
1f3614f1
|
2025-10-08T10:42:18
|
|
TJ: Guard against reused JPEG dst buf w/0 buf size
The libjpeg in-memory destination manager has always re-allocated the
JPEG destination buffer if the specified buffer pointer is NULL or the
specified buffer size is 0. TurboJPEG's destination manager inherited
that behavior. Because of fe80ec22752cce224c55d7b429d46503634ef034,
TurboJPEG's destination manager tries to reuse the most recent
destination buffer if the same buffer pointer is specified. (The
purpose of that is to enable repeated invocations of tj*Compress*() or
tj*Transform() to automatically grow the destination buffer, as needed,
with no intervention from the calling program.) However, because of the
inherited code, TurboJPEG's destination manager also reallocated the
destination buffer if the specified buffer size was 0. Thus, passing a
previously-used JPEG destination buffer pointer to tj*Compress*() or
tj*Transform() while specifying a destination buffer size of 0 confused
the destination manager. It reallocated the destination buffer to 4096
bytes but reported the old destination buffer size to the libjpeg API.
This caused a buffer overrun if the old destination buffer size was
larger than 4096 bytes.
The documentation for tj*Compress*() is contradictory on this matter.
It states that the JPEG destination buffer size must be specified if the
destination buffer pointer is non-NULL. However, it also states that,
if the destination buffer is reused, the specified destination buffer
size is ignored. The documentation for tj*Transform() does not specify
the function's behavior if the destination buffer is reused. Thus, the
behavior of the API is at best undefined if a calling application
attempts to reuse a destination buffer while specifying a destination
buffer size of 0. If that ever worked, it only worked in libjpeg-turbo
1.3.x and prior.
This issue was exposed only through API abuse, and calling applications
that abused the API in that manner would not have worked for the last 11
years. Thus, the issue did not represent a security threat. This
commit merely hardens the API against such abuse, by modifying
TurboJPEG's destination manager so that it refuses to re-allocate the
JPEG destination buffer if the buffer pointer is reused and the
specified buffer size is 0. That is consistent with the most permissive
interpretation of the TurboJPEG API documentation. (The API already
ignored the destination buffer size if the destination buffer pointer
was reused and the specified buffer size was non-zero. It makes sense
for it to do likewise if the specified buffer size is 0.) This commit
also modifies TJUnitTest so that it verifies whether the API is hardened
against the aforementioned abuse.
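A sketch of the access pattern this commit guards against (srcBuf,
width, and height assumed; error handling omitted):
  tjhandle handle = tj3Init(TJINIT_COMPRESS);
  tj3Set(handle, TJPARAM_SUBSAMP, TJSAMP_420);
  unsigned char *jpegBuf = NULL;   /* NULL = let TurboJPEG allocate */
  size_t jpegSize = 0;
  tj3Compress8(handle, srcBuf, width, 0, height, TJPF_RGB,
               &jpegBuf, &jpegSize);  /* allocates jpegBuf, sets
                                         jpegSize */
  jpegSize = 0;   /* WRONG: this is the abuse described above; correct
                     reuse leaves jpegSize at the value set by the
                     previous call */
  tj3Compress8(handle, srcBuf, width, 0, height, TJPF_RGB,
               &jpegBuf, &jpegSize);  /* now reuses jpegBuf at its
                                         remembered size instead of
                                         confusing the dest. manager */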
|
|
8af737ed
|
2025-09-23T08:26:20
|
|
ChangeLog.md: List CVE ID for 2a9e3bd7 and c30b1e7
|
|
98c45838
|
2025-08-21T11:22:51
|
|
Fix issues with Windows Arm64EC builds
Arm64EC basically wraps native Arm64 functions with an emulated
Windows/x64 ABI, which can improve performance for Windows/x64
applications running under the x64 emulator on Windows/Arm. When
building for Arm64EC, the compiler defines _M_X64 and _M_ARM64EC but not
_M_ARM64.
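A hypothetical C-side illustration of the macro behavior described
above:
  #if defined(_M_ARM64)
    /* native Arm64 build: Arm64 ABI */
  #elif defined(_M_ARM64EC)
    /* Arm64EC build: Arm64 code with an emulated x64 ABI; _M_X64 is
       also defined, so check _M_ARM64EC before _M_X64 */
  #elif defined(_M_X64)
    /* true Windows/x64 build */
  #endif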
|
|
f158143e
|
2025-07-28T20:45:02
|
|
jpeg_skip_scanlines: Fix UAF w/merged upsamp/quant
jpeg_skip_scanlines() (more specifically, read_and_discard_scanlines())
should check whether merged upsampling is disabled before attempting
to dereference cinfo->cconvert, and it should check whether color
quantization is enabled before attempting to dereference
cinfo->cquantize. Otherwise, executing one of the following sequences
with the same libjpeg API instance and any 4:2:0 or 4:2:2 JPEG image
will cause a use-after-free issue:
- Disable merged upsampling (default)
- jpeg_start_decompress()
- jpeg_finish_decompress()
(frees but doesn't zero cinfo->cconvert)
- Enable merged upsampling
- jpeg_start_decompress()
(doesn't re-allocate cinfo->cconvert, because
j*init_color_deconverter() isn't called)
- jpeg_skip_scanlines()
- Enable 1-pass color quantization
- jpeg_start_decompress()
- jpeg_finish_decompress()
(frees but doesn't zero cinfo->cquantize)
- Disable 1-pass color quantization
- jpeg_start_decompress()
(doesn't re-allocate cinfo->cquantize, because j*init_*_quantizer()
isn't called)
- jpeg_skip_scanlines()
These sequences are very unlikely to occur in a real-world application.
In practice, this issue does not even cause a segfault or other
user-visible errant behavior, so it is only detectable with ASan. That
is because the memory region is small enough that it doesn't get
reclaimed by either the application or the O/S between the point at
which it is freed and the point at which it is used (even though a
subsequent malloc() call requests exactly the same amount of memory.)
Thus, this is an undefined behavior issue, but it is unlikely to be
exploitable.
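A condensed sketch of the second sequence above, reusing one libjpeg
instance on the same 4:2:0 JPEG data (instance setup, error manager,
and buffer sizing omitted; rowBuf is assumed large enough for a
scanline):
  JSAMPROW row = rowBuf;

  jpeg_mem_src(&cinfo, jpegData, jpegDataSize);
  jpeg_read_header(&cinfo, TRUE);
  cinfo.quantize_colors = TRUE;       /* enable 1-pass color quant */
  cinfo.two_pass_quantize = FALSE;
  jpeg_start_decompress(&cinfo);
  while (cinfo.output_scanline < cinfo.output_height)
    jpeg_read_scanlines(&cinfo, &row, 1);
  jpeg_finish_decompress(&cinfo);     /* frees, but doesn't zero,
                                         cinfo->cquantize */

  jpeg_mem_src(&cinfo, jpegData, jpegDataSize);
  jpeg_read_header(&cinfo, TRUE);
  cinfo.quantize_colors = FALSE;      /* quantizer is not re-created */
  jpeg_start_decompress(&cinfo);
  jpeg_skip_scanlines(&cinfo, 8);     /* dereferenced the stale
                                         cinfo->cquantize pointer */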
|
|
51cee036
|
2025-06-13T14:52:09
|
|
Build: Use wrappers rather than CMake object libs
Some downstream projects need to adapt the libjpeg-turbo source code to
non-CMake build systems, and the use of CMake object libraries made that
difficult. Referring to #754, the use of CMake object libraries also
caused the libjpeg-turbo libraries to contain duplicate object names,
which caused problems with certain development tools. This commit
modifies the build system so that it uses wrappers, rather than CMake
object libraries, to compile source files for multiple data precisions.
For convenience, the wrappers are included in the source tree, but they
can be re-generated by building the "wrappers" target.
In addition to facilitating downstream integration, using wrappers
improves code readability, since multiple data precisions are now
handled at the source code level instead of at the build system level.
Since this will be pushed to a bug-fix release, the goal was to avoid
changing any existing source code. A future major release of
libjpeg-turbo may restructure the libjpeg API source code so that only
the functions that need to be compiled for multiple data precisions are
wrapped. (That is how the TurboJPEG API source code is structured.)
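The wrappers amount to one tiny source file per data precision; a
representative sketch (file name and contents are illustrative of the
technique, not copied from the tree):
  /* jdcolor-12.c -- compile jdcolor.c for 12-bit data precision */
  #define BITS_IN_JSAMPLE 12
  #include "jdcolor.c"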
Closes #817
|
|
c889b1da
|
2025-06-12T09:40:20
|
|
TJBench: Require additional argument with -copy
(oversight from e4c67aff50420d7bacff503ceb4556c896128413)
|
|
e0e18dea
|
2024-12-18T12:50:38
|
|
Ensure methods called by global funcs are init'd
If a hypothetical calling application does something really stupid and
changes cinfo->data_precision after calling jpeg_start_*compress(), then
the precision-specific methods called by jpeg_write_scanlines(),
jpeg_write_raw_data(), jpeg_finish_compress(), jpeg_read_scanlines(),
jpeg_read_raw_data(), or jpeg_start_output() may not be initialized.
Ensure that the first precision-specific method (which will always be
cinfo->main->process_data*(), cinfo->coef->compress_data*(), or
cinfo->coef->decompress_data()) called by any global function that may
be called after jpeg_start_*compress() is initialized and non-NULL.
This increases the likelihood (but does not guarantee) that a
hypothetical stupid calling application will fail gracefully rather than
segfault if it changes cinfo->data_precision after calling
jpeg_start_*compress(). A hypothetical stupid calling application can
still bork itself by changing cinfo->data_precision after initializing
the source manager but before calling jpeg_start_compress(), or after
initializing the destination manager but before calling
jpeg_start_decompress().
|
|
c3446d64
|
2024-12-12T11:01:53
|
|
Bump version to 3.1.0
|
|
6da05150
|
2024-10-24T13:25:10
|
|
Allow disabling prog/opt/lossless if prev. enabled
- Due to an oversight, a113506d175d03ae0e40965c3d3d21a5d561e119
(libjpeg-turbo 1.4 beta1) effectively made the call to
std_huff_tables() in jpeg_set_defaults() a no-op if the Huffman tables
were previously defined, which made it impossible to disable Huffman
table optimization or progressive mode if they were previously enabled
in the same API instance. std_huff_tables() retains its previous
behavior for decompression instances, but it now force-enables the
standard (baseline) Huffman tables for compression instances.
- Due to another oversight, there was no way to disable lossless mode
if it was previously enabled in a particular API instance.
jpeg_set_defaults() now accomplishes this, which makes
TJ*PARAM_LOSSLESS behave as intended/documented.
- Due to yet another oversight, setCompDefaults() in the TurboJPEG API
library permanently modified the value of TJ*PARAM_SUBSAMP when
generating a lossless JPEG image, which affected subsequent lossy
compression operations. This issue was hidden by the issue above and
thus does not need to be publicly documented.
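A sketch of the now-functional sequence from the second item above
(compressor setup and I/O omitted):
  jpeg_set_defaults(&cinfo);
  jpeg_enable_lossless(&cinfo, 1, 0);   /* predictor 1, Pt = 0 */
  /* ...jpeg_start_compress() / write scanlines /
     jpeg_finish_compress()... */
  jpeg_set_defaults(&cinfo);            /* now also disables lossless */
  /* ...subsequent images from this instance are lossy again... */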
Fixes #792
|
|
bfe77d31
|
2024-09-23T15:30:45
|
|
ChangeLog: Document accidental fix from 9983840e
Closes #789
|
|
ad9c02c6
|
2024-09-23T13:22:09
|
|
tj3Set(): Allow TJPARAM_LOSSLESSPT vals from 0..15
The target data precision isn't necessarily known at the time that the
calling program sets TJPARAM_LOSSLESSPT, so tj3Set() needs to allow all
possible values (from 0 to 15.) jpeg_enable_lossless(), which is called
within the body of tj3Compress*(), will throw an error if the point
transform value is greater than {data precision} - 1.
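For example (values illustrative):
  tjhandle handle = tj3Init(TJINIT_COMPRESS);
  tj3Set(handle, TJPARAM_LOSSLESS, 1);
  /* Accepted by tj3Set() even though 12 would be out of range for
     8-bit data precision; jpeg_enable_lossless() reports the error
     later if tj3Compress8() is used: */
  tj3Set(handle, TJPARAM_LOSSLESSPT, 12);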
|
|
9d76821f
|
2024-09-14T16:00:53
|
|
3.1 beta1
|
|
9b01f5a0
|
2024-09-14T11:56:14
|
|
TJ: Add func/method for computing xformed buf size
|
|
a2728582
|
2024-09-03T07:54:17
|
|
TurboJPEG: ICC profile support
|
|
c519d7b6
|
2024-09-05T11:10:44
|
|
Don't ignore JPEG buf size with TJPARAM_NOREALLOC
Since the introduction of TJFLAG_NOREALLOC in libjpeg-turbo 1.2.x, the
TurboJPEG C API documentation has (confusingly) stated that:
- if the JPEG buffer pointer points to a pre-allocated buffer, then the
JPEG buffer size must be specified, and
- the JPEG buffer size should be specified if the JPEG buffer is
pre-allocated to an arbitrary size.
The documentation never explicitly stated that the JPEG buffer size
should be specified if the JPEG buffer is pre-allocated to a worst-case
size, but since focus does not imply exclusion, it also never explicitly
stated the reverse. Furthermore, the documentation never stated that
this was contingent upon TJPARAM_NOREALLOC/TJFLAG_NOREALLOC. However,
effectively the compression and lossless transformation functions
ignored the JPEG buffer size(s) passed to them, and assumed that the
JPEG buffer(s) had been allocated to a worst-case size, if
TJPARAM_NOREALLOC/TJFLAG_NOREALLOC was set. This behavior was an
accidental and undocumented throwback to libjpeg-turbo 1.1.x, in which
the tjCompress() function provided no way to specify the JPEG buffer
size. It was always a bad idea for applications to rely upon that
behavior (although our own TJBench application unfortunately did.)
However, if such applications exist in the wild, the new behavior would
constitute a breaking change, so it has been introduced only into
libjpeg-turbo 3.1.x and only into TurboJPEG 3 API functions. The
previous behavior has been retained when calling functions from the
TurboJPEG 2.1.x API and prior versions.
Did I mention that APIs are hard?
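A sketch of the TurboJPEG 3 behavior (srcBuf, width, and height
assumed; error handling omitted):
  tjhandle handle = tj3Init(TJINIT_COMPRESS);
  tj3Set(handle, TJPARAM_SUBSAMP, TJSAMP_420);
  tj3Set(handle, TJPARAM_NOREALLOC, 1);
  size_t jpegSize = tj3JPEGBufSize(width, height, TJSAMP_420);
  unsigned char *jpegBuf = tj3Alloc(jpegSize);
  /* jpegSize is no longer ignored; a too-small value now causes a
     graceful failure rather than an assumed worst-case allocation: */
  tj3Compress8(handle, srcBuf, width, 0, height, TJPF_RGB,
               &jpegBuf, &jpegSize);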
|
|
e4c67aff
|
2024-09-04T12:06:42
|
|
TJBench: More argument consistification
-copynone --> -copy none
Add '-copy all', even though it's the default.
-rgb, -bgr, -rgbx, -bgrx, -xbgr, -xrgb, -gray, -cmyk -->
-pixelformat {rgb|bgr|rgbx|bgrx|xbgr|xrgb|gray|cmyk}
(This is mainly so -gray won't interfere with -grayscale.)
Fix an ArrayIndexOutOfBoundsException that occurred when passing -dct
to the Java version without specifying the DCT algorithm (oversight from
24fbf64d31a0758c63bcc27cf5d92fc5611717d0.)
|
|
d43ed7a1
|
2024-09-04T08:38:13
|
|
Merge branch 'main' into dev
|
|
e7e9344d
|
2024-09-04T06:58:48
|
|
TJ: Honor TJ*OPT_COPYNONE for individual xforms
jcopy_markers_execute() has historically ignored its option argument,
which is OK for jpegtran, but tj*Transform() needs to be able to save a
set of markers from the source image and write a subset of those markers
to each destination image. Without that ability, the function
effectively behaved as if TJ*OPT_COPYNONE was not specified unless all
transforms specified it.
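A sketch of the per-transform behavior this enables (handle from
tj3Init(TJINIT_TRANSFORM); jpegBuf and jpegSize assumed; <string.h>
needed for memset()):
  tjtransform xforms[2];
  memset(xforms, 0, sizeof(xforms));
  xforms[0].op = TJXOP_NONE;             /* copies the saved markers */
  xforms[1].op = TJXOP_NONE;
  xforms[1].options = TJXOPT_COPYNONE;   /* now honored individually */
  unsigned char *dstBufs[2] = { NULL, NULL };
  size_t dstSizes[2] = { 0, 0 };
  tj3Transform(handle, jpegBuf, jpegSize, 2, dstBufs, dstSizes, xforms);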
|
|
37851a32
|
2024-09-01T15:07:27
|
|
TurboJPEG: Add restart markers when transforming
|
|
f464728a
|
2024-09-02T08:00:08
|
|
ChangeLog.md: Minor wordsmithing
|
|
fad61007
|
2024-08-20T18:52:53
|
|
Replace TJExample with IJG workalike programs
|
|
3e303e72
|
2024-08-31T18:38:05
|
|
TJBench: Allow British spellings in arguments
|
|
9b119896
|
2024-08-31T17:46:37
|
|
Move test scripts into test/
|
|
64567381
|
2024-08-31T17:31:02
|
|
Merge branch 'main' into dev
|
|
eb753630
|
2024-08-31T16:50:08
|
|
Update URLs
- Eliminate unnecessary "www."
- Use HTTPS.
- Update Java, MSYS, tdm-gcc, and NSIS URLs.
- Update URL and title of Agner Fog's assembly language optimization
manual.
- Remove extraneous information about MASM and Borland Turbo Assembler
and outdated NASM URLs from the x86 assembly headers, and mention
Yasm.
|
|
8d76e4e5
|
2024-08-31T15:33:55
|
|
Doc: "EXIF" = "Exif"
|
|
9983840e
|
2024-08-31T12:41:13
|
|
TJ/xform: Check crop region against dest. image
Lossless cropping is performed after other lossless transform
operations, so the cropping region must be specified relative to the
destination image dimensions and level of chrominance subsampling, not
the source image dimensions and level of chrominance subsampling.
More specifically, if the lossless transform operation swaps the X and Y
axes, or if the image is converted to grayscale, then that changes the
cropping region requirements.
|
|
8456d2b9
|
2024-08-30T10:50:13
|
|
Doc: "MCU block" = "iMCU" or "MCU"
The JPEG-1 spec never uses the term "MCU block". That term is rarely
used in other literature to describe the equivalent of an MCU in an
interleaved JPEG image, but the libjpeg documentation uses "iMCU" to
describe the same thing. "iMCU" is a better term, since the equivalent
of an interleaved MCU can contain multiple DCT blocks (or samples in
lossless mode) that are only grouped together if the image is
interleaved.
In the case of restart markers, "MCU block" was used in the libjpeg
documentation instead of "MCU", but "MCU" is more accurate and less
confusing. (The restart interval is literally in MCUs, where one MCU
is one data unit in a non-interleaved JPEG image and multiple data units
in a multi-component interleaved JPEG image.)
In the case of 9b704f96b2dccc54363ad7a2fe8e378fc1a2893b, the issue was
actually with progressive JPEG images exactly two DCT blocks wide, not
two MCU blocks wide.
This commit also defines "MCU" and "MCU row" in the description of the
various restart marker options/parameters. Although an MCU row is
technically always a row of samples in lossless mode, "sample row" was
confusing, since it is used in other places to describe a row of samples
for a single component (whereas an MCU row in a typical lossless JPEG
image consists of a row of interleaved samples for all components.)
|
|
6a9565ce
|
2024-08-26T16:45:41
|
|
Merge branch 'main' into dev
|
|
4851cbe4
|
2024-08-26T12:14:20
|
|
djpeg/jpeg_crop_scanline(): Disallow crop vals < 0
Because the crop spec was parsed using unsigned 32-bit integers,
negative numbers were interpreted as values ~= UINT_MAX (4,294,967,295).
This had the following ramifications:
- If the cropping region width was negative and the adjusted width + the
adjusted left boundary was greater than 0, then the 32-bit unsigned
integer bounds checks in djpeg and jpeg_crop_scanline() overflowed and
failed to detect the out-of-bounds width, jpeg_crop_scanline() set
cinfo->output_width to a value ~= UINT_MAX, and a buffer overrun and
subsequent segfault occurred in the upsampling or color conversion
routine. The segfault occurred in the body of
jpeg_skip_scanlines() --> read_and_discard_scanlines() if the cropping
region upper boundary was greater than 0 and the JPEG image used
chrominance subsampling and in the body of jpeg_read_scanlines()
otherwise.
- If the cropping region width was negative and the adjusted width + the
adjusted left boundary was 0, then a zero-width output image was
generated.
- If the cropping region left boundary was negative, then an output
image with bogus data was generated.
This commit modifies djpeg and jpeg_crop_scanline() so that the
aforementioned bounds checks use 64-bit unsigned integers, thus guarding
against overflow. It similarly modifies jpeg_skip_scanlines(). In the
case of jpeg_skip_scanlines(), the issue was not reproducible with
djpeg, but passing a negative number of lines to jpeg_skip_scanlines()
caused a similar overflow if the number of lines +
cinfo->output_scanline was greater than 0. That caused
jpeg_skip_scanlines() to read past the end of the JPEG image, throw a
warning ("Corrupt JPEG data: premature end of data segment"), and fail
to return unless warnings were treated as fatal. Also, djpeg now parses
the crop spec using signed integers and checks for negative values.
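The shape of the fix (illustrative; xoffset and width are JDIMENSIONs,
i.e. 32-bit unsigned integers):
  /* 32-bit check: if width is huge (e.g. from a negative crop spec),
     xoffset + width wraps around, and the check passes: */
  if (xoffset + width > cinfo->output_width)
    return;   /* reject crop region */
  /* Widening to 64 bits before the addition makes the wrap
     impossible: */
  if ((unsigned long long)xoffset + (unsigned long long)width >
      (unsigned long long)cinfo->output_width)
    return;   /* reject crop region */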
|
|
de4bbac5
|
2024-08-23T12:48:01
|
|
TJCompressor.compress(): Fix lossls buf size calc
|
|
79b8d65f
|
2024-08-22T13:50:32
|
|
Java: Add official packed-pixel image I/O methods
|
|
e2932b68
|
2024-08-22T14:00:37
|
|
ChangeLog.md: Formatting tweak
(oversight from d6ce7df3520a8ce83b44e51b7b557323cf0a184e)
|
|
24fbf64d
|
2024-08-21T11:24:44
|
|
TJBench: Consistify args with djpeg
-fastdct --> -dct fast
-fastupsample --> -nosmooth
|
|
d6ce7df3
|
2024-08-20T15:22:07
|
|
TJBench: Consistify args with cjpeg/djpeg/jpegtran
-hflip --> -flip horizontal
-limitscans --> -maxscans N
-rot90 --> -rotate 90
-rot180 --> -rotate 180
-rot270 --> -rotate 270
-stoponwarning --> -strict
-vflip --> -flip vertical
|
|
07378442
|
2024-08-19T14:13:03
|
|
Java: Remove deprecated constants and methods
|
|
26d978b6
|
2024-08-16T11:58:02
|
|
Merge branch 'main' into dev
|
|
0c23b0ad
|
2024-08-14T09:21:54
|
|
Various doc tweaks
- "Optimized baseline entropy coding" = "Huffman table optimization"
"Optimized baseline entropy coding" was meant to emphasize that the
feature is only useful when generating baseline (single-scan lossy
8-bit-per-sample Huffman-coded) JPEG images, because it is
automatically enabled when generating Huffman-coded progressive
(multi-scan), 12-bit-per-sample, and lossless JPEG images. However,
Huffman table optimization isn't actually an integral part of those
non-baseline modes. You can forego Huffman table optimization with
12-bit data precision if you supply your own Huffman tables. The spec
doesn't require it with progressive or lossless mode, either, although
our implementation does. Furthermore, "baseline" describes more than
just the type of entropy coding used. It was incorrect to say that
optimized "baseline" entropy coding is automatically enabled for
Huffman-coded progressive, 12-bit-per-sample, and lossless JPEG
images, since those are clearly not baseline images.
- "Progressive entropy coding" = "Progressive JPEG"
"Progressive" describes more than just the type of entropy coding
used. (In fact, both Huffman-coded and arithmetic-coded images can be
progressive.)
- Mention that TJPARAM_OPTIMIZE/TJ.PARAM_OPTIMIZE can be used with
lossless transformation as well.
- General wordsmithing
- Formatting tweaks
|
|
6ec8e41f
|
2024-06-13T11:52:13
|
|
Handle lossless JPEG images w/2-15 bits per sample
Closes #768
Closes #769
|
|
3290711d
|
2024-06-22T17:45:31
|
|
cjpeg: Only support 8-bit precision w/ GIF input
Creating 12-bit-per-sample JPEG images from GIF input images was a
useful testing feature when the data precision was a compile-time
setting. However, now that the data precision is a runtime setting,
it doesn't make sense for cjpeg to allow data precisions other than
8-bit with GIF input images. GIF images are limited to 256 colors from
a palette of 8-bit-per-component RGB values, so they cannot take
advantage of the additional gamut afforded by higher data precisions.
|
|
ed79114a
|
2024-06-18T13:06:30
|
|
TJBench: Test end-to-end grayscale comp./decomp.
Because the TurboJPEG API originated in VirtualGL and TurboVNC as a
means of compressing from/decompressing to extended RGB framebuffers,
its earliest incarnations did not handle grayscale packed-pixel images.
Thus, TJBench has always converted the input image (even if it is
grayscale) to an extended RGB source buffer prior to compression, and it
has always decompressed JPEG images (even if they are grayscale) into an
extended RGB destination buffer. That allows TJBench to benchmark the
RGB-to-grayscale and grayscale-to-RGB color conversion paths used by
VirtualGL and TurboVNC when grayscale subsampling (AKA the grayscale
JPEG colorspace) is selected. However, more recent versions of the
TurboJPEG API handle grayscale packed-pixel images, so it is beneficial
to allow TJBench to benchmark the end-to-end grayscale compression and
decompression paths. This commit accomplishes that by adding a new
command-line option (-gray) that causes TJBench to use a grayscale
source buffer (which only works if the input image is PGM or grayscale
BMP), to decompress JPEG images (even if they are full-color) into a
grayscale destination buffer, and to save output images in PGM or
grayscale BMP format.
|
|
55bcad88
|
2024-06-24T22:16:07
|
|
Merge branch 'main' into dev
|
|
51d021bf
|
2024-06-24T12:17:22
|
|
TurboJPEG: Fix 12-bit-per-sample arith-coded compr
(Regression introduced by 7bb958b732e6b4f261595e2d1527d46964fe3aed)
Because of 7bb958b732e6b4f261595e2d1527d46964fe3aed, the TurboJPEG
compression and encoding functions no longer transfer the value of
TJPARAM_OPTIMIZE into cinfo->optimize_coding unless the data precision
is 8. The intent of that was to prevent using_std_huff_tables() from
being called more than once when reusing the same compressor object to
generate multiple 12-bit-per-sample JPEG images. However, because
cinfo->optimize_coding is always set to TRUE by jpeg_set_defaults() if
the data precision is 12, calling applications that use 12-bit data
precision had to unset cinfo->optimize_coding if they set
cinfo->arith_code after calling jpeg_set_defaults(). Because of
7bb958b732e6b4f261595e2d1527d46964fe3aed, the TurboJPEG API stopped
doing that except with 8-bit data precision. Thus, attempting to
generate a 12-bit-per-sample arithmetic-coded lossy JPEG image using
the TurboJPEG API failed with "Requested features are incompatible."
Since the compressor will always fail if cinfo->arith_code and
cinfo->optimize_coding are both set, and since cinfo->optimize_coding
has no relevance for arithmetic coding, the most robust and user-proof
solution is for jinit_c_master_control() to set cinfo->optimize_coding
to FALSE if cinfo->arith_code is TRUE.
This commit also:
- modifies TJBench so that it no longer reports that it is using
optimized baseline entropy coding in modes where that setting
is irrelevant,
- amends the cjpeg documentation to clarify that -optimize is implied
when specifying -progressive or '-precision 12' without -arithmetic,
and
- prevents jpeg_set_defaults() from uselessly checking the value of
cinfo->arith_code immediately after it has been set to FALSE.
|
|
bb3ab531
|
2024-06-19T17:27:01
|
|
cjpeg: Don't enable lossless until precision known
jpeg_enable_lossless() checks the point transform value against the data
precision, so we need to defer calling jpeg_enable_lossless() until
after all command-line options have been parsed.
|
|
94c64ead
|
2024-06-17T20:27:57
|
|
Various doc tweaks
- "bits per component" = "bits per sample"
Describing the data precision of a JPEG image using "bits per
component" is technically correct, but "bits per sample" is the
terminology that the JPEG-1 spec uses. Also, "bits per component" is
more commonly used to describe the precision of packed-pixel formats
(as opposed to "bits per pixel") rather than planar formats, in which
all components are grouped together.
- Unmention legacy display technologies. Colormapped and monochrome
displays aren't a thing anymore, and even when they were still a
thing, it was possible to display full-color images to them. In 1991,
when JPEG decompression time was measured in minutes per megapixel, it
made sense to keep a decompressed copy of JPEG images on disk, in a
format that could be displayed without further color conversion (since
color conversion was slow and memory-intensive.) In 2024, JPEG
decompression time is measured in milliseconds per megapixel, and
color conversion is even faster. Thus, JPEG images can be
decompressed, displayed, and color-converted (if necessary) "on the
fly" at speeds too fast for human vision to perceive. (In fact, your
TV performs much more complicated decompression algorithms at least 60
times per second.)
- Document that color quantization (and associated features), GIF
input/output, Targa input/output, and OS/2 BMP input/output are legacy
features. Legacy status doesn't necessarily mean that the features
are deprecated. Rather, it is meant to discourage users from using
features that may be of little or no benefit on modern machines (such
as low-quality modes that had significant performance advantages in
the early 1990s but no longer do) and that are maintained on a
break/fix basis only.
- General wordsmithing, grammar/punctuation policing, and formatting
tweaks
- Clarify which data precisions each cjpeg input format and each djpeg
output format supports.
- cjpeg.1: Remove unnecessary and impolitic statement about the -targa
switch.
- Adjust or remove performance claims to reflect the fact that:
* On modern machines, the djpeg "-fast" switch has a negligible effect
on performance.
* There is a measurable difference between the performance of Floyd-
Steinberg dithering and no dithering, but it is not likely
perceptible to most users.
* There is a measurable difference between the performance of 1-pass
and 2-pass color quantization, but it is not likely perceptible to
most users.
* There is a measurable difference between the performance of
full-color and grayscale output when decompressing a full-color JPEG
image, but it is not likely perceptible to most users.
* IDCT scaling does not necessarily improve performance. (It
generally does if the scaling factor is <= 1/2 and generally doesn't
if the scaling factor is > 1/2, at least on my machine. The
performance claim made in jpeg-6b was probably invalidated when we
merged the additional scaling factors from jpeg-7.)
- Clarify which djpeg switches/output formats cannot be used when
decompressing lossless JPEG images.
- Remove djpeg hints, since those involve quality vs. speed tradeoffs
that are no longer relevant for modern machines.
- Remove documentation regarding using color quantization with 16-bit
data precision. (Color quantization requires lossy mode.)
- Java: Fix typos in TJDecompressor.decompress12() and
TJDecompressor.decompress16() documentation.
- jpegtran.1: Fix truncated paragraph
In a man page, a single quote at the start of a line is interpreted as
a macro.
Closes #775
- libjpeg.txt:
* Mention J16SAMPLE data type (oversight.)
* Remove statement about extending jdcolor.c. (libjpeg-turbo is not
quite as DIY as libjpeg once was.)
* Remove paragraph about tweaking the various typedefs in jmorecfg.h.
It is no longer relevant for modern machines.
* Remove caveat regarding systems with ints less than 16 bits wide.
(ANSI/ISO C requires an int to be at least 16 bits wide, and
libjpeg-turbo has never supported non-ANSI compilers.)
- usage.txt:
* Add copyright header.
* Document cjpeg -icc, -memdst, -report, -strict, and -version
switches.
* Document djpeg -icc, -maxscans, -memsrc, -report, -skip, -crop,
-strict, and -version switches.
* Document jpegtran -icc, -maxscans, -report, -strict, and -version
switches.
|
|
095e62b6
|
2024-05-29T10:16:05
|
|
Merge branch 'main' into dev
|
|
3c17063e
|
2024-05-27T11:41:35
|
|
Guard against dupe SOF w/ incorrect source manager
Referring to https://bugzilla.mozilla.org/show_bug.cgi?id=1898606,
attempting to decompress a specially-crafted malformed JPEG image
(specifically an image with a complete 12-bit Start Of Frame segment
followed by an incomplete 8-bit Start Of Frame segment) using the
default marker processor, buffered-image mode, and input prefetching
triggered the following sequence of events:
- When the 12-bit SOF segment was encountered (in the body of
jpeg_read_header()), the marker processor's read_markers() method
called the get_sof() function, which processed the 12-bit SOF segment
and set cinfo->data_precision to 12.
- If the application subsequently called jpeg_consume_input() in a loop
to prefetch input data, and it didn't stop calling
jpeg_consume_input() when the function returned JPEG_REACHED_SOS, then
the 8-bit SOF segment was encountered in the body of
jpeg_consume_input(). As a result, the marker processor's
read_markers() method called get_sof(), which started to process the
8-bit SOF segment and set cinfo->data_precision to 8.
- Since the 8-bit SOF segment was incomplete, the end of the JPEG data
stream was encountered when get_sof() attempted to read the image
height, width, and number of components.
- If the fill_input_buffer() method in the application's custom source
manager incorrectly returned FALSE in response to a prematurely-
terminated JPEG data stream, then get_sof() returned FALSE while
attempting to read the image height, width, and number of components
(before the duplicate SOF check was reached.) That caused the default
marker processor's read_markers() method, and subsequently
jpeg_consume_input(), to return JPEG_SUSPENDED.
- If the application failed to respond to the JPEG_SUSPENDED return
value and subsequently attempted to call jpeg_read_scanlines(),
then the data precision check in jpeg_read_scanlines() succeeded
(because cinfo->data_precision was now 8.) However, because
cinfo->data_precision had been 12 during the previous call to
jpeg_start_decompress(), only the 12-bit version of the main
controller was initialized, and the cinfo->main->process_data() method
was undefined. Thus, a segfault occurred when jpeg_read_scanlines()
attempted to invoke that method.
Scenarios in which the issue was thwarted:
1. The default source managers handle a prematurely-terminated JPEG data
stream by inserting a fake EOI marker into the data stream. Thus, when
using one of those source managers, the INPUT_2BYTES() and INPUT_BYTE()
macros (which get_sof() invokes to read the image height, width, and
number of components) succeeded, albeit with bogus data, since the fake
EOI marker was read into those fields. The duplicate SOF check in
get_sof() then failed, generating a fatal libjpeg error.
2. When using a custom source manager that correctly returns TRUE in
response to a prematurely-terminated JPEG data stream, the
aforementioned INPUT_2BYTES() and INPUT_BYTE() macros also succeeded
(albeit with bogus data read from the previous bytes of the data
stream), and the duplicate SOF check failed.
3. If the application did not prefetch input data, or if it stopped
invoking jpeg_consume_input() when the function returned
JPEG_REACHED_SOS, then the duplicate SOF segment was not read prior to
the first call to jpeg_read_scanlines(). Thus, the data precision check
in jpeg_read_scanlines() failed. If the application instead called
jpeg12_read_scanlines() (that is, if it properly supported multiple data
precisions), then the duplicate SOF segment was not read until the body
of jpeg_finish_decompress(). At that point, its only negative effect
was to cause jpeg_finish_decompress() to return FALSE before the
duplicate SOF check was reached.
In other words, this issue depended not only upon an incorrectly-written
source manager but also upon a very specific sequence of API calls. It
also depended upon the multi-precision feature introduced in
libjpeg-turbo 3.0.x. When using an 8-bit-per-sample build of
libjpeg-turbo 2.1.x, jpeg_read_header() failed with "Unsupported JPEG
data precision 12" after the 12-bit SOF segment was processed. When
using a 12-bit-per-sample build of libjpeg-turbo 2.1.x, the behavior
was the same as if the application called jpeg12_read_scanlines() in
Scenario 3 above.
This commit simply moves the duplicate SOF check to the top of
get_sof() so the check will fail before the marker processor attempts to
read the duplicate SOF. It should be noted that this issue isn't a
libjpeg-turbo bug per se, because it occurs only when the calling
application does something it shouldn't. It is, rather, an issue of API
hardening/caller-proofing.
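For reference, a minimal sketch of the source-manager convention that
Scenarios 1 and 2 rely upon (modeled on the default source managers;
WARNMS() comes from jerror.h):
  static boolean
  fill_input_buffer(j_decompress_ptr cinfo)
  {
    static const JOCTET fakeEOI[2] = { 0xFF, JPEG_EOI };
    struct jpeg_source_mgr *src = cinfo->src;

    /* ...attempt to obtain more data; if the stream has truly
       ended: */
    WARNMS(cinfo, JWRN_JPEG_EOF);   /* warn, don't suspend */
    src->next_input_byte = fakeEOI;
    src->bytes_in_buffer = 2;
    return TRUE;                    /* never return FALSE for a hard
                                       end-of-data condition */
  }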
|
|
b698a04c
|
2024-05-16T17:34:49
|
|
Merge branch 'main' into dev
|
|
bc491b16
|
2024-05-16T17:32:02
|
|
ChangeLog.md: Document previous commit
|
|
e0d660f1
|
2024-05-08T11:42:39
|
|
Merge branch 'main' into dev
|
|
6a522fcd
|
2024-05-02T14:07:45
|
|
jpegtran -drop: Ensure all quant tables defined
It is possible to craft a malformed JPEG image in which all of the
scans contain fewer components than the number of components specified
in the Start Of Frame (SOF) segment. Attempting to use such an image as
either an input image or a drop image with 'jpegtran -drop' caused a
NULL dereference and subsequent segfault in transupp.c:adjust_quant(),
so this commit adds appropriate checks to guard against that.
Since the issue involved an interface that is only exposed on the
jpegtran command line, it did not represent a security risk.
'jpegtran -drop' could not ever be used successfully with images such as
the ones described above. This commit simply makes jpegtran fail
gracefully rather than crash.
Fixes #758
|
|
7e45654c
|
2024-03-04T18:10:16
|
|
Merge branch 'main' into dev
|
|
7bb958b7
|
2024-03-04T12:12:03
|
|
12-bit: Don't gen opt Huff tbls if tbls supplied
(regression introduced by e8b40f3c2ba187ba95c13c3e8ce21c8534256df7)
The documented behavior of the libjpeg API is to compute optimal Huffman
tables when generating 12-bit lossy Huffman-coded JPEG images, unless
the calling application supplies its own Huffman tables. However,
e8b40f3c2ba187ba95c13c3e8ce21c8534256df7 and
96bc40c1b36775afdbad4ae05a6b3f48e2eebeb9 modified
jinit_c_master_control() so that it always set cinfo->optimize_coding to
TRUE when generating 12-bit lossy Huffman-coded JPEG images, which
prevented calling applications from supplying custom Huffman tables for
such images.
This commit modifies jinit_c_master_control() so that it only overrides
cinfo->optimize_coding when generating 12-bit lossy Huffman-coded JPEG
images if all Huffman table slots are empty or all slots contain default
Huffman tables. Determining whether the latter is true requires using
memcmp() to compare the allocated Huffman tables with the default
Huffman tables, because:
- The documented behavior of jpeg_set_defaults() is to initialize any
empty Huffman table slot with the default Huffman table corresponding
to that slot, regardless of the data precision. There is also no
requirement that the data precision be specified prior to calling
jpeg_set_defaults(). Thus, there is no reliable way to prevent
jpeg_set_defaults() from initializing empty Huffman table slots with
default Huffman tables, which are useless for 12-bit data precision.
- There is no requirement that custom Huffman tables be defined prior to
calling jpeg_set_defaults(). A calling application could call
jpeg_set_defaults() and modify the values in the default Huffman
tables rather than allocating new tables. Thus, there is no reliable
way to detect whether the allocated Huffman tables contain default
values without comparing the tables with the default Huffman tables.
Fortunately, comparing the allocated Huffman tables with the default
Huffman tables is the last stop on the logic train, so it won't happen
unless cinfo->data_precision == 12, cinfo->arith_code == FALSE,
cinfo->optimize_coding == FALSE, and one or more Huffman tables are
allocated. (If the compressor object is reused, this ensures that the
full comparison will be performed at most once.) Custom Huffman tables
will be flagged as non-default when the first non-default value is
encountered, and the worst case (comparing 400 bytes) is very fast on
modern CPUs anyhow.
Fixes #751
|
|
3202feb0
|
2024-02-29T16:10:20
|
|
x86-64 SIMD: Support CET if C compiler enables it
- Detect at configure time, via the __CET__ C preprocessor macro,
whether the C compiler will include either indirect branch tracking
(IBT) or shadow stack support, and define a NASM macro (__CET__) if
so.
- Modify the x86-64 SIMD code so that it includes appropriate endbr64
instructions (to support IBT) and an appropriate .note.gnu.property
section (to support both IBT and shadow stack) when __CET__ is
defined.
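A sketch of the probe (the __CET__ bit values are those documented for
GCC's -fcf-protection option):
  #ifdef __CET__
  #if __CET__ & 1
  /* IBT: emit endbr64 at every indirect branch target */
  #endif
  #if __CET__ & 2
  /* shadow stack: mark support in .note.gnu.property */
  #endif
  #endif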
Closes #350
|
|
98b6ed78
|
2024-01-26T09:56:14
|
|
Merge branch 'main' into dev
|
|
17df25f9
|
2024-01-25T13:52:58
|
|
Build/Win: Eliminate MSVC run-time DLL dependency
(regression introduced by 1644bdb7d2fac66cd0ce25adef7754e008b5bc1e)
Setting a maximum version in cmake_minimum_required() effectively sets
the behavior to NEW for all policies introduced in all CMake versions up
to and including that maximum version. The NEW behavior for CMP0091,
introduced in CMake 3.15, uses CMake variables to specify the MSVC
runtime library against which to link, rather than placing the relevant
flags in CMAKE_C_FLAGS*. Thus, replacing /MD with /MT in CMAKE_C_FLAGS*
no longer has any effect when using CMake 3.15+.
|
|
e69dd40c
|
2024-01-23T13:26:41
|
|
Reorganize source to make things easier to find
- Move all libjpeg documentation, except for README.ijg, into the doc/
subdirectory.
- Move the TurboJPEG C API documentation from doc/html/ into
doc/turbojpeg/.
- Move all C source code and headers into a src/ subdirectory.
- Move turbojpeg-jni.c into the java/ subdirectory.
Referring to #226, there is no ideal solution to this problem. A
semantically ideal solution would have involved placing all source code,
including the SIMD and Java source code, under src/ (or perhaps placing
C library source code under lib/ and C test program source code under
test/), all header files under include/, and all documentation under
doc/. However:
- To me it makes more sense to have separate top-level directories for
each language, since the SIMD extensions and the Java API are
technically optional features. src/ now contains only the code that
is relevant to the core C API libraries and associated programs.
- I didn't want to bury the java/ and simd/ directories or add a level
of depth to them, since both directories already contain source code
that is 3-4 levels deep.
- I would prefer not to separate the header files from the C source
code, because:
1. It would be disruptive. libjpeg and libjpeg-turbo have
historically placed C source code and headers in the same
directory, and people who are familiar with both projects (self
included) are used to looking for the headers in the same directory
as the C source code.
2. In terms of how the headers are used internally in libjpeg-turbo,
the distinction between public and private headers is a bit fuzzy.
- It didn't make sense to separate the test source code from the library
source code, since there is not a clear distinction in some cases.
(For instance, the IJG image I/O functions are used by cjpeg and djpeg
as well as by the TurboJPEG API.)
This solution is minimally disruptive, since it keeps all C source code
and headers together and keeps java/ and simd/ as top-level directories.
It is a bit awkward, because java/ and simd/ technically contain source
code, even though they are not under src/. However, other solutions
would have been more awkward for different reasons.
Closes #226
|
|
335ed793
|
2024-01-19T12:58:13
|
|
Assume 3-comp lossls JPEG w/o Adobe marker is RGB
libjpeg-turbo always includes Adobe APP14 markers in the lossless JPEG
images that it generates, but some compressors (e.g. accusoft PICTools
Medical) do not.
Fixes #743
|
|
3eee0dd7
|
2023-11-29T10:03:49
|
|
ChangeLog.md: "since" = "relative to"
|
|
55d342c7
|
2023-11-16T15:36:47
|
|
TurboJPEG: Expose/extend hidden "max pixels" param
TJPARAM_MAXPIXELS was previously hidden and used only for fuzz testing,
but it is potentially useful for calling applications as well,
particularly if they want to guard against excessive memory consumption
by the tj3LoadImage*() functions. The parameter has also been extended
to decompression and lossless transformation functions/methods, mainly
as a convenience. (It was already possible for calling applications to
impose their own JPEG image size limits by reading the JPEG header prior
to decompressing or transforming the image.)
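For example (file name and limit are illustrative):
  tjhandle handle = tj3Init(TJINIT_COMPRESS);
  tj3Set(handle, TJPARAM_MAXPIXELS, 16777216);   /* ~16 megapixels */
  int width, height, pixelFormat = TJPF_UNKNOWN;
  unsigned char *pixels =
    tj3LoadImage8(handle, "input.ppm", &width, 1, &height,
                  &pixelFormat);
  /* pixels == NULL, with an error queued, if the image is larger */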
|
|
df9dbff8
|
2023-11-11T12:25:03
|
|
TurboJPEG: New param to limit virt array mem usage
This corresponds to max_memory_to_use in the jpeg_memory_mgr struct in
the libjpeg API, except that the TurboJPEG parameter is specified in
megabytes. Because this is 2023 and computers with less than 1 MB of
memory are not a thing (at least not within the scope of libjpeg-turbo
support), it isn't useful to allow a limit less than 1 MB to be
specified. Furthermore, because TurboJPEG parameters are signed
integers, if we allowed the memory limit to be specified in bytes, then
it would be impossible to specify a limit larger than 2 GB on 64-bit
machines. Because max_memory_to_use is a long signed integer,
effectively we can specify a limit of up to 2 petabytes on 64-bit
machines if the TurboJPEG parameter is specified in megabytes. (2 PB
should be enough for anybody, right?)
This commit also bumps the TurboJPEG API version to 3.0.1. Since the
TurboJPEG API version no longer tracks the libjpeg-turbo version, it
makes sense to increment the API revision number when adding constants,
to increment the minor version number when adding functions, and to
increment the major version number for a complete overhaul.
This commit also removes the vestigial TJ_NUMPARAM macro, which was
never defined because it proved unnecessary.
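For example, limiting a decompression instance to 256 MB, alongside
the libjpeg equivalent:
  tjhandle handle = tj3Init(TJINIT_DECOMPRESS);
  tj3Set(handle, TJPARAM_MAXMEMORY, 256);          /* megabytes */
  /* libjpeg equivalent (bytes): */
  cinfo.mem->max_memory_to_use = 256L * 1024L * 1024L;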
Partially implements #735
|
|
78eaf0d4
|
2023-11-07T13:44:40
|
|
tj3*YUV8(): Fix int overflow w/ huge row alignment
If the align parameter was set to an unreasonably large value, such as
0x2000000, strides[0] * ph0 and strides[1] * ph1 could have overflowed
the int datatype and wrapped around when computing (src|dst)Planes[1]
and (src|dst)Planes[2] (respectively.) This would have caused
(src|dst)Planes[1] and (src|dst)Planes[2] to point to lower addresses in
the YUV buffer than expected, so the worst case would have been a
visually incorrect output image, not a buffer overrun or other
exploitable issue.
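The shape of the overflow and of a widened computation (names follow
the description above; the exact fix may differ):
  /* strides[] and the plane heights are ints, so with a huge align
     value the product wraps before the pointer arithmetic: */
  dstPlanes[1] = dstPlanes[0] + strides[0] * ph0;
  /* Performing the multiplication in a wider type avoids the wrap: */
  dstPlanes[1] = dstPlanes[0] + (size_t)strides[0] * ph0;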
|
|
da48edfc
|
2023-10-09T14:13:55
|
|
jchuff.c: Fix uninit read w/ AArch64, WITH_SIMD=0
Because of bf01ed2fbc02c15e86f414ff4946b66b4e5a00f1, the simd field in
huff_entropy_encoder (and, by extension, the simd field in
savable_state) is only initialized if WITH_SIMD is defined. Due to an
oversight, the simd field in savable_state was queried in flush_bits()
regardless of whether WITH_SIMD was defined. In most cases, both
branches of the query have identical code, and the optimizer removes the
branch. However, because the legacy Neon GAS Huffman encoder uses the
older bit buffer logic from libjpeg-turbo 2.0.x and prior (refer to
087c29e07f7533ec82fd7eb1dafc84c29e7870ec), the branches do not have
identical code when building for AArch64 with NEON_INTRINSICS undefined
(which will be the case if WITH_SIMD is undefined.) Thus, if
libjpeg-turbo was built for AArch64 with the SIMD extensions disabled
at build time, it was possible for the Neon GAS branch in flush_bits()
to be taken, which would have set put_bits to a value that is incorrect
for the C Huffman encoder. Referring to #728, a user reported that this
issue sometimes caused libjpeg-turbo to generate bogus JPEG images if it
was built for AArch64 without SIMD extensions and subsequently used
through the Qt framework. (It should be noted, however, that disabling
the SIMD extensions in AArch64 builds of libjpeg-turbo is inadvisable
for performance reasons.)
I was unable to reproduce the issue on Linux/AArch64 using libjpeg-turbo
alone, despite testing various versions of GCC and Clang and various
optimization levels. However, the issue is reproducible using MSan with
-O0, so this commit also modifies the GitHub Actions workflow so that
compiler optimization is disabled in the linux-msan job. That should
prevent the issue or similar issues from re-emerging.
Fixes #728
|
|
c0412b56
|
2023-09-14T17:19:36
|
|
ChangeLog.md: List CVE ID fixed by ccaba5d7
Closes #724
|
|
9b704f96
|
2023-08-15T11:03:57
|
|
Fix block smoothing w/vert.-subsampled prog. JPEGs
The 5x5 interblock smoothing implementation, introduced in libjpeg-turbo
2.1, improperly extended the logic from the traditional 3x3 smoothing
implementation. Both implementations point prev_block_row and
next_block_row to the current block row when processing, respectively,
the first and the last block row in the image:
  if (block_row > 0 || cinfo->output_iMCU_row > 0)
    prev_block_row =
      buffer[block_row - 1] + cinfo->master->first_MCU_col[ci];
  else
    prev_block_row = buffer_ptr;
  if (block_row < block_rows - 1 ||
      cinfo->output_iMCU_row < last_iMCU_row)
    next_block_row =
      buffer[block_row + 1] + cinfo->master->first_MCU_col[ci];
  else
    next_block_row = buffer_ptr;
6d91e950c871103a11bac2f10c63bf998796c719 naively extended that logic to
accommodate a 5x5 smoothing window:
  if (block_row > 1 || cinfo->output_iMCU_row > 1)
    prev_prev_block_row =
      buffer[block_row - 2] + cinfo->master->first_MCU_col[ci];
  else
    prev_prev_block_row = prev_block_row;
  if (block_row < block_rows - 2 ||
      cinfo->output_iMCU_row + 1 < last_iMCU_row)
    next_next_block_row =
      buffer[block_row + 2] + cinfo->master->first_MCU_col[ci];
  else
    next_next_block_row = next_block_row;
However, this new logic was only correct if block_rows == 1, so the
values of prev_prev_block_row and next_next_block_row were incorrect
when processing, respectively, the second and second to last iMCU rows
in a vertically-subsampled progressive JPEG image.
The intent was to:
- point prev_block_row to the current block row when processing the
first block row in the image,
- point prev_prev_block_row to prev_block_row when processing the first
two block rows in the image,
- point next_block_row to the current block row when processing the
last block row in the image, and
- point next_next_block_row to next_block_row when processing the last
two block rows in the image.
This commit modifies decompress_smooth_data() so that it computes the
current block row's position relative to the whole image and sets
the block row pointers based on that value.
This commit also restores a line of code that was accidentally deleted
by 6d91e950c871103a11bac2f10c63bf998796c719:
  access_rows += compptr->v_samp_factor; /* prior iMCU row too */
access_rows is merely a sanity check that tells the access_virt_barray()
method to generate an error if accessing the specified number of rows
would cause a buffer overrun. Essentially, it is a belt-and-suspenders
measure to ensure that j*init_d_coef_controller() allocated enough rows
for the full-image virtual array. Thus, excluding that line of code did
not cause an observable issue.
This commit also documents dbae59281fdc6b3a6304a40134e8576d50d662c0 in
the change log.
Fixes #721
|
|
7b844bfd
|
2023-07-28T11:46:10
|
|
x86-64 SIMD: Use std stack frame/prologue/epilogue
This allows debuggers and profilers to reliably capture backtraces from
within the x86-64 SIMD functions.
In places where rbp was previously used to access temporary variables
(after stack alignment), we now use r15 and save/restore it accordingly.
The total amount of work is approximately the same, because the previous
code pushed the pre-alignment stack pointer to the aligned stack. The
new prologue and epilogue actually have fewer instructions.
Also note that the {un}collect_args macros now use rbp instead of rax to
access arguments passed on the stack, so we save a few instructions
there as well.
Based on:
https://github.com/alk/libjpeg-turbo/commit/debcc7c3b436467aea8d02c66a514c5099d0ad37
Closes #707
Closes #708
|
|
e0c53aa3
|
2023-06-30T17:58:45
|
|
jchuff.c: Test for out-of-range coefficients
Restore two coefficient range checks from libjpeg to the C baseline
Huffman encoder. This fixes an issue
(https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=60253) whereby
the encoder could read from uninitialized memory when attempting to
transform a specially-crafted malformed arithmetic-coded JPEG source
image into a baseline Huffman-coded JPEG destination image with default
Huffman tables. More specifically, the out-of-range coefficients caused
r to equal 256, which overflowed the actbl->ehufsi[] array. Because the
overflow was contained within the huff_entropy_encoder structure, this
issue was not exploitable (nor was it observable at all on x86 or Arm
CPUs unless JSIMD_NOHUFFENC=1 or JSIMD_FORCENONE=1 was set in the
environment or unless libjpeg-turbo was built with WITH_SIMD=0.)
The fix is performance-neutral (+/- 1-2%) for x86-64 code and causes a
0-4% (avg. 1-2%, +/- 1-2%) compression regression for i386 code on Intel
CPUs when the C baseline Huffman encoder is used (JSIMD_NOHUFFENC=1).
The fix is performance-neutral (+/- 1-2%) on Intel CPUs when all of the
libjpeg-turbo SIMD extensions are disabled (JSIMD_FORCENONE=1). The fix
causes a 0-2% (avg. <1%, +/- 1%) compression regression for PowerPC
code.
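The restored checks, paraphrased from the C baseline Huffman encoder
(MAX_COEF_BITS is 10 for 8-bit data precision; nbits is the computed
magnitude category of the value being encoded):
  /* DC path: the DC difference must be representable */
  if (nbits > MAX_COEF_BITS + 1)
    ERREXIT(cinfo, JERR_BAD_DCT_COEF);
  /* AC path: likewise for each nonzero AC coefficient */
  if (nbits > MAX_COEF_BITS)
    ERREXIT(cinfo, JERR_BAD_DCT_COEF);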
|
|
240a5a5c
|
2023-06-29T17:45:07
|
|
ChangeLog.md: Fix typo
|
|
71738892
|
2023-06-29T16:31:37
|
|
djpeg: Fix -map option with 12-bit data precision
|
|
bf9f319c
|
2023-06-29T16:07:42
|
|
Disallow color quantization with lossless decomp
Color quantization is a legacy feature that serves little or no purpose
with lossless JPEG images. 9f756bc67a84d4566bf74a0c2432aa55da404021
eliminated interaction issues between the lossless decompressor and the
color quantizers related to out-of-range 12-bit samples, but referring
to #701, other interaction issues apparently still exist. Such issues
are likely, given the fact that the color quantizers were not designed
with lossless decompression in mind.
This commit reverts 9f756bc67a84d4566bf74a0c2432aa55da404021, since the
issues it fixed are no longer relevant because of this commit and
2192560d74e6e6cf99dd05928885573be00a8208.
Fixes #672
Fixes #673
Fixes #674
Fixes #676
Fixes #677
Fixes #678
Fixes #679
Fixes #681
Fixes #683
Fixes #701
|
|
c8d52f1c
|
2023-06-26T11:53:03
|
|
tj3Transform: Calc dst buf size from xformed dims
When used with TJPARAM_NOREALLOC and with TJXOP_TRANSPOSE,
TJXOP_TRANSVERSE, TJXOP_ROT90, or TJXOP_ROT270, tj3Transform()
incorrectly based the destination buffer size for a transform on the
source image dimensions rather than the transformed image dimensions.
This was apparently a long-standing bug that had existed in the
tj*Transform() function since its inception. As initially implemented
in the evolving libjpeg-turbo v1.2 code base, tjTransform() required
dstSizes[i] to be set regardless of whether TJFLAG_NOREALLOC (the
predecessor to TJPARAM_NOREALLOC) was set.
ff78e37595c8462f64fd100f928aa1d08539527e, which was introduced later in
the evolving libjpeg-turbo v1.2 code base, removed that requirement and
planted the seed for the bug. However, the bug was not activated until
9b49f0e4c77c727648c6d3a4915eefdf5436de4a was introduced still later in
the evolving libjpeg-turbo v1.2 code base, adding a subsampling type
argument to the (new at the time) tjBufSize() function and thus making
the width and height arguments no longer commutative.
The bug opened up the possibility that a JPEG source image could cause
tj3Transform() to overflow the destination buffer for a transform if all
of the following were true:
- The JPEG source image used 4:2:2, 4:4:0, 4:1:1, or 4:4:1 subsampling.
(These are the only subsampling types for which the width and height
arguments to tj3JPEGBufSize() are not commutative.)
- The width and height of the JPEG source image were such that
tj3JPEGBufSize(height, width, subsamplingType) returned a smaller
value than tj3JPEGBufSize(width, height, subsamplingType).
- The JPEG source image contained enough metadata that the size of the
transformed image was larger than
tj3JPEGBufSize(height, width, subsamplingType).
- TJPARAM_NOREALLOC was set.
- TJXOP_TRANSPOSE, TJXOP_TRANSVERSE, TJXOP_ROT90, or TJXOP_ROT270 was
used.
- TJXOPT_COPYNONE was not set.
- TJXOPT_CROP was not set.
- The calling program allocated
tj3JPEGBufSize(height, width, subsamplingType) bytes for the
destination buffer, as the API documentation instructs.
The API documentation cautions that JPEG source images containing a
large amount of extraneous metadata (EXIF, IPTC, ICC, etc.) cannot
reliably be transformed if TJPARAM_NOREALLOC is set and TJXOPT_COPYNONE
is not set. Irrespective of the bug, there are still cases in which a
JPEG source image with a large amount of metadata can, when transformed,
exceed the worst-case transformed JPEG image size. For instance, if you
try to losslessly crop a JPEG image with 3 kB of EXIF data to 16x16
pixels, then you are guaranteed to exceed the worst-case 16x16 JPEG
image size unless you discard the EXIF data.
Even without the bug, tj3Transform() will still fail with "Buffer passed
to JPEG library is too small" when attempting to transform JPEG source
images that meet the aforementioned criteria. The bug is that the
function segfaults rather than failing gracefully, but the chances of
that occurring in a real-world application are very slim. Any
real-world application developers who attempted to transform arbitrary
JPEG source images with TJPARAM_NOREALLOC set would very quickly realize
that they cannot reliably do that without also setting TJXOPT_COPYNONE.
Thus, I posit that the actual risk posed by this bug is low.
Applications such as web browsers that are the most exposed to security
risks from arbitrary JPEG source images do not use the TurboJPEG
lossless transform feature. (None of those applications even use the
TurboJPEG API, to the best of my knowledge, and the public libjpeg API
has no equivalent transform function.) Our only command-line interface
to the tj3Transform() function, TJBench, was not exposed to the bug
because it had a compatible bug whereby it allocated the JPEG
destination buffer to the same size that tj3Transform() erroneously
expected. The TurboJPEG Java API was also not exposed to the bug
because of a similar compatible bug in the
Java_org_libjpegturbo_turbojpeg_TJTransformer_transform() JNI function.
(This commit fixes both compatible bugs.)
In short, best practices for tj3Transform() are to use TJPARAM_NOREALLOC
only with JPEG source images that are known to be free of metadata (such
as images generated by tj3Compress*()) or to use TJXOPT_COPYNONE along
with TJPARAM_NOREALLOC. Still, however, the function shouldn't segfault
as long as the calling program allocates the suggested amount of space
for the JPEG destination buffer.
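For illustration, worst-case destination buffer sizing under
TJPARAM_NOREALLOC might look like this (a sketch with hypothetical
variable names; note that transposition also swaps the subsampling
type, e.g. 4:2:2 becomes 4:4:0 and 4:1:1 becomes 4:4:1):

    /* Size the destination buffer for transform i from the transformed
       image dimensions, not the source image dimensions. */
    int dstWidth = srcWidth, dstHeight = srcHeight, dstSubsamp = srcSubsamp;
    if (xforms[i].op == TJXOP_TRANSPOSE || xforms[i].op == TJXOP_TRANSVERSE ||
        xforms[i].op == TJXOP_ROT90 || xforms[i].op == TJXOP_ROT270) {
      dstWidth = srcHeight;  dstHeight = srcWidth;
      if (srcSubsamp == TJSAMP_422) dstSubsamp = TJSAMP_440;
      else if (srcSubsamp == TJSAMP_440) dstSubsamp = TJSAMP_422;
      else if (srcSubsamp == TJSAMP_411) dstSubsamp = TJSAMP_441;
      else if (srcSubsamp == TJSAMP_441) dstSubsamp = TJSAMP_411;
    }
    dstBufs[i] = (unsigned char *)
      tj3Alloc(tj3JPEGBufSize(dstWidth, dstHeight, dstSubsamp));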
Usability notes:
tj3Transform() could hypothetically require dstSizes[i] to be set
regardless of the value of TJPARAM_NOREALLOC, but there are usability
pitfalls either way. The main pitfall I sought to avoid with
ff78e37595c8462f64fd100f928aa1d08539527e was a calling program failing
to set dstSizes[i] at all, thus leaving its value undefined. It could
be argued that requiring dstSizes[i] to be set in all cases is more
consistent, but it could also be argued that not requiring it to be set
when TJPARAM_NOREALLOC is set is more user-proof. tj3Transform() could
also hypothetically set TJXOPT_COPYNONE automatically when
TJPARAM_NOREALLOC is set, but that could lead to user confusion.
Ultimately, I would like to address these issues in TurboJPEG v4 by
using managed buffer objects, but that would be an extensive overhaul.
|
|
36aaeebb
|
2023-05-30T17:46:58
|
|
ChangeLog.md: List CVE ID fixed by 9f756bc6
|
|
3a536273
|
2023-04-06T18:33:41
|
|
jpeg_crop_scanline: Fix calc w/sclg + 2x4,4x2 samp
When computing the downsampled width for a particular component,
jpeg_crop_scanline() needs to take into account the fact that the
libjpeg code uses a combination of IDCT scaling and upsampling to
implement 4x2 and 2x4 upsampling with certain decompression scaling
factors. Failing to account for that led to incomplete upsampling of
4x2- or 2x4-subsampled components, which caused the color converter to
read from uninitialized memory. With 12-bit data precision, this caused
a buffer overrun or underrun and subsequent segfault if the
uninitialized memory contained a value that was outside of the valid
sample range (because the color converter uses the value as an array
index.)
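For reference, the affected code path is exercised by a sequence like
the following (a usage sketch, not the fix itself):

    /* Partial decompression of a 4x2- or 2x4-subsampled image with a
       scaling factor that engages IDCT scaling */
    jpeg_read_header(&cinfo, TRUE);
    cinfo.scale_num = 1;  cinfo.scale_denom = 2;
    jpeg_start_decompress(&cinfo);
    JDIMENSION xoffset = 16, width = 64;   /* hypothetical region */
    jpeg_crop_scanline(&cinfo, &xoffset, &width);
    /* jpeg_crop_scanline() must compute each component's downsampled
       width correctly, accounting for the combined IDCT-scaling +
       upsampling path. */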
Fixes #669
|
|
62590d42
|
2023-04-04T19:06:20
|
|
Decomp: Don't enable 2-pass color quant w/ RGB565
The 2-pass color quantization algorithm assumes 3-sample pixels. RGB565
is the only 3-component colorspace that doesn't have 3-sample pixels, so
we need to treat it as a special case when determining whether to enable
2-pass color quantization. Otherwise, attempting to initialize 2-pass
color quantization with an RGB565 output buffer could cause
prescan_quantize() to read from uninitialized memory and subsequently
underflow/overflow the histogram array.
djpeg is supposed to fail gracefully if both -rgb565 and -colors are
specified, because none of its destination managers (image writers)
support color quantization with RGB565. However, prescan_quantize() was
called before that could occur. It is possible but very unlikely that
these issues could have been reproduced in applications other than
djpeg. The issues involve the use of two features (12-bit precision and
RGB565) that are incompatible, and they also involve the use of two
rarely used legacy features (RGB565 and color quantization) that don't
make much sense when combined.
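The special case amounts to something like the following in the
decompression master selection logic (a sketch; the field names are
from the public libjpeg API):

    /* The 2-pass quantizer requires true 3-sample pixels.  RGB565
       reports 3 color components but packs each pixel into 2 bytes. */
    if (cinfo->out_color_components != 3 ||
        cinfo->out_color_space == JCS_RGB565)
      cinfo->enable_2pass_quant = FALSE;  /* fall back to 1-pass */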
Fixes #668
Fixes #671
Fixes #680
|
|
9f756bc6
|
2023-04-04T13:53:21
|
|
Lossless decomp: Range-limit 12-bit samples
12-bit is the only data precision for which the range of the sample data
type exceeds the valid sample range, so it is possible to craft a 12-bit
lossless JPEG image that contains out-of-range 12-bit samples.
Attempting to decompress such an image using color quantization or merged
upsampling (NOTE: libjpeg-turbo cannot generate YCbCr or subsampled
lossless JPEG images, but it can decompress them) caused segfaults or
buffer overruns when those algorithms attempted to use the out-of-range
sample values as array indices. This commit modifies the lossless
decompressor so that it range-limits the output of the scaler when using
12-bit samples.
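Conceptually, the change clamps each scaled sample into the valid
12-bit range before it is handed downstream (a sketch; the actual code
operates on the scaler's output rows):

    /* With 12-bit precision, MAXJSAMPLE == 4095, but J12SAMPLE is a
       16-bit short, so a crafted lossless image can produce
       out-of-range values. */
    static inline J12SAMPLE range_limit_12(int value)
    {
      if (value < 0) return 0;
      if (value > 4095) return 4095;
      return (J12SAMPLE)value;
    }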
Fixes #670
Fixes #672
Fixes #673
Fixes #674
Fixes #675
Fixes #676
Fixes #677
Fixes #678
Fixes #679
Fixes #681
Fixes #683
|
|
fc881ebb
|
2023-03-09T20:55:43
|
|
TurboJPEG: Implement 4:4:1 chrominance subsampling
This allows losslessly transposed or rotated 4:1:1 JPEG images to be
losslessly cropped, partially decompressed, or decompressed to planar
YUV images.
Because tj3Transform() allows multiple lossless transformations to be
chained together, all subsampling options need to have a corresponding
transposed subsampling option. (This is why 4:4:0 was originally
implemented as well.) Otherwise, the documentation would be technically
incorrect. It says that images with unknown subsampling types cannot be
losslessly cropped, partially decompressed, or decompressed to planar
YUV images, but it doesn't say anything about images with known
subsampling types whose subsampling type becomes unknown if the image is
rotated or transposed. This is one of those situations in which it is
easier to implement a feature that works around the problem than to
document the problem.
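To illustrate, a hypothetical helper that maps a subsampling type to
its equivalent after a transpose-class transform (4:4:4, 4:2:0, and
grayscale are symmetric; the others swap):

    static int transposedSubsamp(int subsamp)
    {
      switch (subsamp) {
      case TJSAMP_422:  return TJSAMP_440;
      case TJSAMP_440:  return TJSAMP_422;
      case TJSAMP_411:  return TJSAMP_441;  /* new in this commit */
      case TJSAMP_441:  return TJSAMP_411;
      default:          return subsamp;
      }
    }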
Closes #659
|
|
0827eaff
|
2023-03-09T21:04:40
|
|
ChangeLog.md: Add literal vers # to 3.0 beta2 hdr
(per our convention)
|
|
6c610333
|
2023-02-08T09:23:51
|
|
ChangeLog.md: Document 4e028ecd
+ bump version to 3.0 beta2
|
|
fd8c4da0
|
2023-01-27T14:05:07
|
|
Bump revision to 2.1.90 to prepare for beta
+ acknowledge upcoming 2.1.5 release
|
|
db9f297f
|
2023-01-27T07:10:49
|
|
ChangeLog.md: Document TurboJPEG 3 API overhaul
|
|
7ab6222c
|
2023-01-20T14:09:25
|
|
Merge branch 'main' into dev
|
|
98a64558
|
2023-01-20T13:14:11
|
|
TJBench: Set TJ*OPT_PROGRESSIVE with -progressive
The documented behavior of the -progressive option is to use progressive
entropy coding in JPEG images generated by compression and transform
operations. However, setting TJFLAG_PROGRESSIVE was insufficient to
accomplish that, because TJBench doesn't enable lossless transformation
if xformOpt == 0.
|
|
b99e7590
|
2023-01-20T10:50:21
|
|
TJBench/Java: Fix parsing of quality ranges
|
|
c7c02d92
|
2023-01-17T18:31:31
|
|
Merge branch 'main' into dev
|
|
08cbc233
|
2023-01-17T11:04:38
|
|
12-bit: Set alpha channel to 4095 rather than 255
|
|
d4589f4f
|
2023-01-14T18:07:53
|
|
Merge branch 'main' into dev
|
|
94a2b953
|
2023-01-11T15:01:35
|
|
tjDecompressToYUV2: Use scaled dims for plane calc
The documented behavior of the function is to use decompression scaling
to generate the largest possible image that will fit within the desired
image dimensions. Thus, if the desired image dimensions are larger than
the scaled image dimensions, then tjDecompressToYUV2() should use the
scaled image dimensions when computing the plane pointers and strides to
pass to tjDecompressToYUVPlanes().
Note that this bug was not previously detected, because tjunittest and
tjbench always passed the scaled image dimensions to
tjDecompressToYUV2().
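The documented behavior corresponds to a computation like this (a
sketch using the public scaling-factor API; variable names are
hypothetical, and the sketch assumes the returned array is ordered from
largest to smallest factor, as in libjpeg-turbo's implementation):

    int i, n, scaledWidth = jpegWidth, scaledHeight = jpegHeight;
    tjscalingfactor *sf = tjGetScalingFactors(&n);
    for (i = 0; i < n; i++) {
      scaledWidth = TJSCALED(jpegWidth, sf[i]);
      scaledHeight = TJSCALED(jpegHeight, sf[i]);
      if (scaledWidth <= desiredWidth && scaledHeight <= desiredHeight)
        break;  /* largest scaled size that fits the desired dims */
    }
    /* scaledWidth/scaledHeight, not desiredWidth/desiredHeight, should
       drive the plane pointer and stride computations. */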
|
|
9a146f0f
|
2023-01-06T10:29:10
|
|
TurboJPEG: Numerous documentation improvements
- Wordsmithing, formatting, and grammar tweaks
- Various clarifications and corrections, including specifying whether
a particular buffer or image is used as a source or destination
- Accommodate/mention features that were introduced since the API
documentation was created.
- For clarity, use "packed-pixel" to describe uncompressed
source/destination images that are not planar YUV.
- Use "row" rather than "line" to refer to a single horizontal group of
pixels or component values, for consistency with the libjpeg API
documentation. (libjpeg also uses "scanline", which is a more archaic
term.)
- Use "alignment" rather than "padding" to refer to the number of bytes
by which a row's width is evenly divisible. This makes the
documentation of the YUV functions and tjLoadImage() consistent.
("Padding" typically refers to the number of bytes added to each row,
which is not the same thing.)
- Remove all references to "the underlying codec." Although the
TurboJPEG API originated as a cross-platform wrapper for the Intel
Integrated Performance Primitives, Sun mediaLib, QuickTime, and
libjpeg, none of those TurboJPEG implementations has been maintained
since 2009. Nothing would prevent someone from implementing the
TurboJPEG API without libjpeg-turbo, but such an implementation would
not necessarily have an "underlying codec." (It could be fully
self-contained.)
- Use "destination image" rather than "output image", for consistency,
or describe the type of image that will be output.
- Avoid the term "image buffer" and instead use "byte buffer" to
refer to buffers that will hold JPEG images, or describe the type of
image that will be contained in the buffer. (The Java documentation
doesn't use "byte buffer", because the buffer arrays literally have
"byte" in front of them, and since Java doesn't have pointers, it is
not possible for mere mortals to store any other type of data in those
arrays.)
- C: Use "unified" to describe YUV images stored in a single buffer, for
consistency with the Java documentation.
- Use "planar YUV" rather than "YUV planar". Is is our convention to
describe images using {component layout} {colorspace/pixel format}
{image function}, e.g. "packed-pixel RGB source image" or "planar YUV
destination image."
- C: Document the TurboJPEG API version in which a particular function
or macro was introduced, and reorder the backward compatibility
function stubs in turbojpeg.h alphabetically by API version.
- C: Use Markdown rather than HTML tags, where possible, in the Doxygen
comments.
|
|
d2608583
|
2023-01-05T10:51:12
|
|
TurboJPEG: Ensure 'pad' arg is a power of 2
Because the PAD() macro can only handle powers of 2, this is a necessary
restriction (and a documented one, except in the case of
tjCompressFromYUV()-- oops.) Failing to check the 'pad' argument
caused tjBufSizeYUV2() to return bogus results if 'pad' was less than 1
or otherwise not a power of 2. tjEncodeYUV3() and tjDecodeYUV()
effectively treated a 'pad' value of 0 as unpadded, but that was subtle
and undocumented behavior. tjCompressFromYUV() did not check whether
'pad' was a power of 2, so the strides passed to
tjCompressFromYUVPlanes() would have been incorrect if 'pad' was not a
power of 2. That would not have caused tjCompressFromYUV() to overrun
the source buffer, as long as the calling application allocated the
buffer based on the return value of tjBufSizeYUV2() (which computes the
strides in the same manner as tjCompressFromYUV().) However, if the
calling application attempted to initialize the source buffer using
correctly computed strides, then it could have overrun its own
buffer in certain cases or produced incorrect JPEG images in others.
Realistically, there is no reason why an application would want to pass
a non-power-of-2 'pad' value to a TurboJPEG API function, so this commit
is about user-proofing the API rather than fixing any known issue.
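For reference, the restriction exists because the internal PAD() macro
rounds up using bit masking, which only works for powers of 2 (the
IS_POW2() helper is hypothetical):

    #define PAD(v, p)   (((v) + (p) - 1) & ~((p) - 1))
    /* Hypothetical validity check for the 'pad' argument: */
    #define IS_POW2(p)  ((p) >= 1 && ((p) & ((p) - 1)) == 0)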
|
|
2241434e
|
2022-12-15T12:20:50
|
|
16-bit lossless JPEG support
|
|
80352340
|
2022-12-07T14:11:37
|
|
Merge branch 'main' into dev
|
|
dc4a93fa
|
2022-12-07T13:37:16
|
|
jpegtran: Fix FPE w/ -drop & -trim on corrupt JPEG
requant_comp() in transupp.c, a function that supports the jpegtran
-drop option, borrows code from the C quantization function in order to
re-quantize the coefficients from the dropped image. However, the
function does not guard against the possibility that a corrupt source
image could inject quantization table values equal to 0, thus causing a
divide-by-zero error. Since this error affected only jpegtran and not
any of the libraries (the tjTransform() function in the TurboJPEG API
does not expose the image drop feature), it did not represent a security
risk. In fact, this commit does not change the output of jpegtran when
attempting to transform the aforementioned corrupt source image. It
merely eliminates the floating point exception. Like most issues of
this type, however, eliminating the error prevents it from hiding
legitimate security issues that may later be introduced.
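The guard amounts to something like the following (a sketch with
hypothetical variable names; the exact handling in requant_comp() may
differ):

    /* A corrupt source image can inject a quantization table value of
       0, so avoid dividing by it.  Valid images always have qval >= 1,
       so their output is unaffected. */
    int qval = qtblptr->quantval[k];
    if (qval == 0) qval = 1;
    temp = coef_value / qval;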
Fixes #635
Fixes #636
|
|
5da86f74
|
2022-12-07T09:45:33
|
|
ChangeLog.md: List CVE ID fixed by 9120a247
|
|
7bb5cb56
|
2022-12-07T09:39:03
|
|
ChangeLog.md: List CVE ID fixed by f35fd27e
|
|
e7a248eb
|
2022-11-29T01:08:27
|
|
Merge branch 'main' into dev
|
|
45cd2ded
|
2022-11-28T21:02:42
|
|
12-bit: Prevent RGB-to-YCC table overrun/underrun
cjpeg relies on the various file I/O modules to range-limit the input
samples, but no range limiting is performed by the
jpeg_write_scanlines() function itself. With 8-bit samples, that isn't
a problem, because sample values > MAXJSAMPLE will overflow the data
type and wrap around to 0. With 12-bit samples, however, it is possible
to pass sample values < 0 or > 4095 to jpeg_write_scanlines(), which
would cause the RGB-to-YCbCr color converter to underflow or overflow
the RGB-to-YCbCr conversion tables. That issue has existed in libjpeg
all along. This commit mitigates the issue by masking off all but the
lowest 12 bits of each 12-bit input sample prior to using the input
sample value to index the RGB-to-YCbCr conversion tables.
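Conceptually, the mitigation looks like this (a sketch; the actual
per-precision types and table names in jccolor.c differ):

    /* Mask each 12-bit input sample to the low 12 bits so it cannot
       index outside the RGB-to-YCbCr conversion tables (valid range
       0..4095). */
    int r = inptr[RGB_RED]   & 0xFFF;
    int g = inptr[RGB_GREEN] & 0xFFF;
    int b = inptr[RGB_BLUE]  & 0xFFF;
    outptr[col] = (J12SAMPLE)((ctab[r + R_Y_OFF] + ctab[g + G_Y_OFF] +
                               ctab[b + B_Y_OFF]) >> SCALEBITS);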
Fixes #633
|
|
98ff1fd1
|
2022-11-21T20:57:39
|
|
TurboJPEG: Add lossless JPEG detection capability
Add a new TurboJPEG C API function (tjDecompressHeader4()) and Java API
method (TJDecompressor.getFlags()) that return the bitwise OR of any
flags that are relevant to the JPEG image being decompressed (currently
TJFLAG_PROGRESSIVE, TJFLAG_ARITHMETIC, TJFLAG_LOSSLESS, and their Java
equivalents.) This allows a calling program to determine whether the
image being decompressed is a lossless JPEG image, which means that the
decompression scaling feature will not be available and that a
full-sized destination buffer should be allocated.
More specifically, this fixes a buffer overrun in TJBench, TJExample,
and the decompress* fuzz targets that occurred when attempting (in vain)
to decompress a lossless JPEG image with decompression scaling enabled.
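A usage sketch (this assumes tjDecompressHeader4() extends the
tjDecompressHeader3() signature with a final flags pointer; consult
turbojpeg.h for the actual prototype):

    int width, height, subsamp, colorspace, flags = 0;
    if (tjDecompressHeader4(handle, jpegBuf, jpegSize, &width, &height,
                            &subsamp, &colorspace, &flags) < 0)
      return -1;  /* handle error */
    if (flags & TJFLAG_LOSSLESS) {
      /* Decompression scaling is unavailable; allocate a full-sized
         destination buffer. */
    }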
|
|
25ccad99
|
2022-11-16T15:57:25
|
|
TurboJPEG: 8-bit lossless JPEG support
|
|
6002720c
|
2022-11-15T23:10:35
|
|
TurboJPEG: Opt. enable arithmetic entropy coding
|