|
0c51ecf2
|
2017-10-07T00:10:06
|
|
travis: add custom apt sources
Move back to Travis's VM infrastructure for efficiency.
(cherry picked from commit 9dc21efdbf275dec18b9c34b472f8df9f8e8c169)
|
|
93434828
|
2017-10-31T14:43:28
|
|
travis: let's try a 5GB ramdisk
(cherry picked from commit 71ba464435bb430b02d94c653cd518c11f7289ff)
|
|
4eecbdd0
|
2017-10-31T10:40:24
|
|
travis: put clar's sandbox in a ramdisk on macOS
The macOS tests are by far the slowest right now. This attempts to remedy the
situation somewhat by asking clar to put its test data on a ramdisk.
(cherry picked from commit 37bb15122e30bb13aabc213079da53b5cdac2678)
|
|
736356a6
|
2017-11-06T12:47:40
|
|
examples: network: fix Win32 linking errors due to getline
The getline(3) function call is not part of ISO C and, most importantly,
it is not implemented on Microsoft Windows platforms. As our networking
example code makes use of getline, this breaks builds on MSVC and MinGW.
As this code wasn't built prior to the previous commit, this was never
noticed.
Fix the error by instead implementing a `readline` function, which
simply reads the password from stdin until it reads a newline
character.
(cherry picked from commit bf15dbf6cf19146082c1245e9db4016d773dbe7e)
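A minimal, self-contained sketch of such a helper, assuming plain ISO C stdio (the buffer-growth details here are illustrative, not the exact example code):
```
#include <stdio.h>
#include <stdlib.h>

/* Read one line from stdin into a heap-allocated buffer, stopping at '\n'.
 * Returns the line without the trailing newline, or NULL on EOF/error.
 * The caller owns and must free the returned buffer. */
static char *readline(void)
{
	size_t len = 0, alloc = 64;
	char *line = malloc(alloc);
	int c;

	if (line == NULL)
		return NULL;

	while ((c = getchar()) != EOF && c != '\n') {
		if (len + 1 >= alloc) {
			char *tmp = realloc(line, alloc *= 2);
			if (tmp == NULL) {
				free(line);
				return NULL;
			}
			line = tmp;
		}
		line[len++] = (char)c;
	}

	if (c == EOF && len == 0) {
		free(line);
		return NULL;
	}

	line[len] = '\0';
	return line;
}
```
Because it relies only on `getchar`, `malloc` and `realloc`, this style of helper builds on MSVC and MinGW as well.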
|
|
1c85bcd8
|
2017-11-06T11:16:02
|
|
appveyor: build examples
By default, CMake will not build our examples directory. As we do not
instruct either the MinGW or MSVC builds on AppVeyor to enable building
these examples, we cannot verify that those examples at least build on
Windows systems.
Fix that by passing `-DBUILD_EXAMPLES=ON` to AppVeyor's CMake
invocation.
(cherry picked from commit 0b98a66baae83056401a0a5fef5dc5cd2ed3468b)
|
|
fad7f7a2
|
2017-07-24T13:10:43
|
|
travis: use trusty
(cherry picked from commit 4da38193c568ca3842bc1130c82e7a9f955f23aa)
|
|
dc413239
|
2017-07-24T17:53:32
|
|
travis: only install custom libcurl on trusty
(cherry picked from commit c582fa4eb6bee7880f04080aa80357cca406e448)
|
|
7d1c72a4
|
2017-07-24T16:48:04
|
|
travis: only kill our own sshd
(cherry picked from commit 697583ea3aceb1379c576515ffa713ba29c50437)
|
|
16957a7f
|
2017-07-23T03:41:52
|
|
travis: build with patched libcurl
Ubuntu trusty has a bug in curl when using NTLM credentials in a proxy,
dereferencing a null pointer and causing segmentation faults. Use a
custom-patched version of libcurl that avoids this issue.
(cherry picked from commit f031e20b516209f19a56ef934e12fea6adec097a)
|
|
5491d0e1
|
2017-04-21T07:58:46
|
|
travis: upgrade container to Ubuntu 14.04
Ubuntu 12.04 (Precise Pangolin) reaches end of life on April 28th, 2017.
As such, we should update our build infrastructure to use the next
available LTS release, which is Ubuntu 14.04 LTS (Trusty Tahr). Note
that Trusty is still considered beta quality on Travis. But considering
we are able to correctly build and test libgit2, this seems to be a
non-issue for us.
Switch over our default distribution to Trusty. As Precise still has
extended support for paying customers, add an additional job which
compiles libgit2 on the old release.
(cherry picked from commit 7c8d460f8410cf7a110eb10e9c4bafdede6a49c6)
|
|
76a7d5f1
|
2017-04-26T13:04:23
|
|
travis: cibuild: set up our own sshd server
Some of our tests require running against an SSH server.
Currently, we simply run against the SSH server provided and started by
Travis itself. As our Linux tests run in a sudo-less environment, we
have no control over its configuration and startup/shutdown procedure.
While this has been no problem until now, it will become a problem as
soon as we migrate over to newer Precise images, as the SSH server does
not have any host keys set up. Luckily, we can simply set up our own
unprivileged SSH server. This has the benefit of us being able to
modify its configuration even in a sudo-less environment.
This commit sets up the unprivileged SSH server on port 2222.
(cherry picked from commit 06619904a2ae2ffd5d8e34ab11d5eb484e9d5762)
|
|
b988f544
|
2017-04-26T13:16:18
|
|
tests: online::clone: use URL of test server
All our tests running against a local SSH server usually read the
server's URL from environment variables. But the online::clone::ssh_cert
test fails to do so and instead always connects to
"ssh://localhost/foo". This assumption breaks whenever the SSH server is
not running on the standard port, e.g. when it is running as a user.
Fix the issue by using the URL provided by the environment.
(cherry picked from commit c2c95ad0a210be4811c247be51664bfe8b2e830a)
|
|
2bd9b6b6
|
2018-10-05T19:32:32
|
|
Merge pull request #4835 from pks-t/pks/v0.26.7
Security release v0.26.7
|
|
9102156c
|
2018-09-06T13:14:40
|
|
version: raise to v0.26.7
|
|
b1d39682
|
2018-09-06T13:14:19
|
|
CHANGELOG: update for v0.26.7
|
|
b93e82d4
|
2018-10-05T11:47:39
|
|
submodule: ignore path and url attributes if they look like options
These can be used to inject options in an implementation which performs a
recursive clone by executing an external command via crafted url and path
attributes such that it triggers a local executable to be run.
The library itself is not vulnerable, as we do not rely on external executables,
but a user of the library might be relying on them, so we add this protection.
This matches this aspect of git's fix for CVE-2018-17456.
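The heart of this protection is a trivial check on the configured value; a hypothetical sketch (the helper name is illustrative, not libgit2's actual internal API):
```
#include <stdbool.h>
#include <stddef.h>

/* A submodule "url" or "path" value starting with a dash could be picked
 * up as a command-line option by tooling that shells out to an external
 * command (e.g. during a recursive clone), so treat it as unsafe and
 * ignore the submodule entry. */
static bool looks_like_command_line_option(const char *value)
{
	return value != NULL && value[0] == '-';
}
```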
|
|
7e8d9789
|
2018-10-05T11:42:00
|
|
submodule: add failing test for option-injection protection in url and path
|
|
74937431
|
2018-10-05T10:56:02
|
|
config_file: properly ignore includes without "path" value
In case a configuration includes a key "include.path=" without any
value, the generated configuration entry will have its value set to
`NULL`. This is unexpected by the logic handling includes, and as soon
as we try to calculate the included path we will unconditionally
dereference that `NULL` pointer and thus segfault.
Fix the issue by returning early in both `parse_include` and
`parse_conditional_include` in case where the `file` argument is `NULL`.
Add a test to avoid future regression.
The issue has been found by the oss-fuzz project, issue 10810.
(cherry picked from commit d06d4220eec035466d1a837972a40546b8904330)
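In essence the fix is an early return on a missing value before anything tries to resolve the path; a simplified sketch with hypothetical names:
```
#include <stddef.h>

/* Hypothetical include handler: an "include.path" key without a value
 * yields a NULL `path`, which must be ignored rather than dereferenced. */
static int handle_include(const char *including_file, const char *path)
{
	if (path == NULL)
		return 0; /* "include.path=" with no value: nothing to include */

	/* ... resolve `path` relative to `including_file` and parse it ... */
	(void)including_file;
	return 0;
}
```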
|
|
232fc469
|
2018-10-05T10:55:29
|
|
tests: always unlink created config files
While our tests in config::include create a plethora of configuration
files, most of them do not get removed at the end of each test. This can
cause weird interactions with tests that are being run at a later stage
if these later tests try to create files or directories with the same
name as any of the created configuration files.
Fix the issue by unlinking all created files at the end of these tests.
(cherry picked from commit bf662f7cf8daff2357923446cf9d22f5d4b4a66b)
|
|
21a2318b
|
2018-10-03T16:17:21
|
|
smart_pkt: do not accept callers passing in no line length
Right now, we simply ignore the `linelen` parameter of
`git_pkt_parse_line` in case the caller passed in zero. But in fact, we
never want to assume anything about the provided buffer length and
always want the caller to pass in the available number of bytes.
And in fact, checking all the callers, one can see that the function is
never called in cases where the buffer length is zero, and thus we are
safe to remove this check.
(cherry picked from commit 1bc5b05c614c7b10de021fa392943e8e6bd12c77)
|
|
5836d8b6
|
2018-08-09T11:16:15
|
|
smart_pkt: return parsed length via out-parameter
The `parse_len` function currently directly returns the parsed length of
a packet line or an error code in case there was an error. Instead,
convert this to our usual style of using the return value as error code
only and returning the actual value via an out-parameter. Thus, we can
now convert the output parameter to an unsigned type, as the size of a
packet cannot ever be negative.
While at it, we also move the check whether the input buffer is long
enough into `parse_len` itself. We don't really want to pass around
potentially non-NUL-terminated buffers to functions without also passing
along the length, as this is dangerous in the unlikely case where other
callers for that function get added. Note, though, that we need to make
sure not to mess with `GIT_EBUFS` error codes, as these do not indicate
an error to the caller but rather that more data needs to be fetched.
(cherry picked from commit c05790a8a8dd4351e61fc06c0a06c6a6fb6134dc)
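The resulting calling convention can be pictured with a generic sketch (not libgit2's exact `parse_len`; in the real code a too-short buffer is reported as `GIT_EBUFS` so the caller knows to fetch more data):
```
#include <stddef.h>

/* Parse the four-digit hex length prefix of a pkt-line. The return value
 * is an error code only; the parsed length is reported via the
 * out-parameter, which can therefore be an unsigned type. */
static int parse_len(size_t *out, const char *line, size_t linelen)
{
	size_t len = 0;
	int i;

	if (linelen < 4)
		return -1; /* not enough bytes for the length prefix */

	for (i = 0; i < 4; i++) {
		char c = line[i];
		int digit;

		if (c >= '0' && c <= '9')
			digit = c - '0';
		else if (c >= 'a' && c <= 'f')
			digit = c - 'a' + 10;
		else if (c >= 'A' && c <= 'F')
			digit = c - 'A' + 10;
		else
			return -1; /* not a hex digit */

		len = (len << 4) | (size_t)digit;
	}

	*out = len;
	return 0;
}
```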
|
|
3bbda7d7
|
2018-08-09T11:13:59
|
|
smart_pkt: reorder and rename parameters of `git_pkt_parse_line`
The parameters of the `git_pkt_parse_line` function are quite confusing.
First, there is no real indicator what the `out` parameter is actually
all about, and it's not really clear what the `bufflen` parameter refers
to. Reorder and rename the parameters to make this more obvious.
(cherry picked from commit 0b3dfbf425d689101663341beb94237614f1b5c2)
|
|
a8356af8
|
2018-08-09T11:04:42
|
|
smart_pkt: fix buffer overflow when parsing "unpack" packets
When checking whether an "unpack" packet returned the "ok" status or
not, we use a call to `git__prefixcmp`. In case where the passed line
isn't properly NUL terminated, though, this may overrun the line buffer.
Fix this by using `git__prefixncmp` instead.
(cherry picked from commit 5fabaca801e1f5e7a1054be612e8fabec7cd6a7f)
|
|
02e4b27f
|
2018-08-09T11:03:37
|
|
smart_pkt: fix "ng" parser accepting non-space character
When parsing "ng" packets, we blindly assume that the character
immediately following the "ng" prefix is a space and skip it. As the
calling function doesn't make sure that this is the case, we can thus
end up blindly accepting an invalid packet line.
Fix the issue by using `git__prefixncmp`, checking whether the line
starts with "ng ".
(cherry picked from commit b5ba7af2d30c958b090dcf135749d9afe89ec703)
|
|
8cd0a897
|
2018-08-09T11:01:00
|
|
smart_pkt: fix buffer overflow when parsing "ok" packets
There are two different buffer overflows present when parsing "ok"
packets. First, we never verify whether the line already ends after
"ok", but directly go ahead and also try to skip the expected space
after "ok". Second, we then go ahead and use `strchr` to scan for the
terminating newline character. But in case where the line isn't
terminated correctly, this can overflow the line buffer.
Fix the issues by using `git__prefixncmp` to check for the "ok " prefix
and only checking for a trailing '\n' instead of using `memchr`. This
also fixes the issue of us always requiring a trailing '\n'.
Reported by oss-fuzz, issue 9749:
Crash Type: Heap-buffer-overflow READ {*}
Crash Address: 0x6310000389c0
Crash State:
ok_pkt
git_pkt_parse_line
git_smart__store_refs
Sanitizer: address (ASAN)
(cherry picked from commit a9f1ca09178af0640963e069a2142d5ced53f0b4)
|
|
82c3fc33
|
2018-08-09T10:38:10
|
|
smart_pkt: fix buffer overflow when parsing "ACK" packets
We are being quite lenient when parsing "ACK" packets. First, we didn't
correctly verify that we're not overrunning the provided buffer length,
which we fix here by using `git__prefixncmp` instead of
`git__prefixcmp`. Second, we do not verify that the actual contents make
any sense at all, as we simply ignore errors when parsing the ACK's OID
and any unknown status strings. This may result in a parsed packet
structure with invalid contents, which is being silently passed to the
caller. This is being fixed by performing proper input validation and
checking of return codes.
(cherry picked from commit bc349045b1be8fb3af2b02d8554483869e54d5b8)
|
|
3fd6ce0d
|
2018-08-09T10:57:06
|
|
smart_pkt: adjust style of "ref" packet parsing function
While the function parsing ref packets doesn't have any immediately
obvious buffer overflows, its style is different from all the other
parsing functions. Instead of checking buffer length while we go, it
does a check up-front. This causes the code to seem a lot more magical
than it really is due to some magic constants. Refactor the function to
instead make use of the style of the other packet parsers and verify buffer
lengths as we go.
(cherry picked from commit 5edcf5d190f3b379740b223ff6a649d08fa49581)
|
|
e14dab2f
|
2018-08-09T10:46:58
|
|
smart_pkt: check whether error packets are prefixed with "ERR "
In the `git_pkt_parse_line` function, we determine what kind of packet
a given packet line contains by simply checking for the prefix of that
line. Except for "ERR" packets, we always only check for the immediate
identifier without the trailing space (e.g. we check for an "ACK"
prefix, not for "ACK "). But for "ERR" packets, we do in fact include
the trailing space in our check. This is not really much of a problem at
all, but it is inconsistent with all the other packet types and thus
causes confusion when the `err_pkt` function just immediately skips the
space without checking whether it overflows the line buffer.
Adjust the check in `git_pkt_parse_line` to not include the trailing
space and instead move it into `err_pkt` for consistency.
(cherry picked from commit 786426ea6ec2a76ffe2515dc5182705fb3d44603)
|
|
cfb9802b
|
2018-08-09T10:46:26
|
|
smart_pkt: explicitly avoid integer overflows when parsing packets
When parsing data, progress or error packets, we need to copy the
contents of the rest of the current packet line into the flex-array of
the parsed packet. To keep track of this array's length, we then assign
the remaining length of the packet line to the structure. We do have a
mismatch of types here, as the structure's `len` field is a signed
integer, while the length that we are assigning has type `size_t`.
On nearly all platforms, this shouldn't pose any problems at all. The
line length can at most be 16^4, as the line's length is being encoded
by exactly four hex digits. But on platforms with 16-bit integers,
this assignment could cause an overflow. While such platforms will
probably only exist in the embedded ecosystem, we still want to avoid
this potential overflow. Thus, we now simply change the structure's
`len` member to be of type `size_t` to avoid any integer promotion.
(cherry picked from commit 40fd84cca68db24f325e460a40dabe805e7a5d35)
|
|
a7e87dd5
|
2018-08-09T10:36:44
|
|
smart_pkt: honor line length when determining packet type
When we parse the packet type of an incoming packet line, we do not
verify that we don't overflow the provided line buffer. Fix this by
using `git__prefixncmp` instead and passing in `len`. As we have
previously already verified that `len <= linelen`, we thus won't ever
overflow the provided buffer length.
(cherry picked from commit 4a5804c983317100eed509537edc32d69c8d7aa2)
|
|
5d108c9a
|
2018-10-03T15:39:40
|
|
tests: verify parsing logic for smart packets
The commits following this commit are about to introduce quite a lot of
refactoring and tightening of the smart packet parser. Unfortunately, we
do not yet have any tests despite our online tests that verify that our
parser does not regress upon changes. This is doubly unfortunate as our
online tests aren't executed by default.
Add new tests that exercise the smart parsing logic directly by
executing `git_pkt_parse_line`.
(cherry picked from commit 365d2720c1a5fc89f03fd85265c8b45195c7e4a8)
|
|
a8db6c92
|
2017-11-30T15:40:13
|
|
util: introduce `git__prefixncmp` and consolidate implementations
Introduce `git__prefixncmp`, which will search up to the first `n`
characters of a string to see if it is prefixed by another string.
This is useful for examining if a non-null terminated character
array is prefixed by a particular substring.
Consolidate the various implementations of `git__prefixcmp` around a
single core implementation and add some test cases to validate its
behavior.
(cherry picked from commit 86219f40689c85ec4418575223f4376beffa45af)
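A minimal sketch of what such a bounded prefix comparison looks like (the consolidated libgit2 implementation differs in detail, but the semantics are the same: compare at most `n` bytes of `str` against `prefix`):
```
#include <stddef.h>

/* Return 0 if the first `n` bytes of `str` start with `prefix`,
 * non-zero otherwise. Never reads more than `n` bytes of `str`. */
static int prefixncmp(const char *str, size_t n, const char *prefix)
{
	size_t i;

	for (i = 0; i < n && *prefix; i++, prefix++) {
		if (str[i] != *prefix)
			return (unsigned char)str[i] - (unsigned char)*prefix;
	}

	/* If the prefix did not end within the first `n` bytes, it's no match. */
	return *prefix ? -1 : 0;
}
```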
|
|
5f557780
|
2018-06-24T19:47:08
|
|
Verify ref_pkt's are long enough
If the remote sends a too-short packet, we'll allow `len` to go
negative and eventually issue a malloc for <= 0 bytes on
```
pkt->head.name = git__malloc(alloclen);
```
(cherry picked from commit 437ee5a70711ac2e027877d71ee4ae17e5ec3d6c)
|
|
9561ec83
|
2017-08-22T16:29:07
|
|
smart: typedef git_pkt_type and clarify recv_pkt return type
(cherry picked from commit 08961c9d0d6927bfcc725bd64c9a87dbcca0c52c)
|
|
e91024e1
|
2018-06-28T05:27:36
|
|
Small style tweak, and set an error
(cherry picked from commit 895a668e19dc596e7b12ea27724ceb7b68556106)
|
|
c83c59b8
|
2018-06-26T02:32:50
|
|
Remove GIT_PKT_PACK entirely
(cherry picked from commit 90cf86070046fcffd5306915b57786da054d8964)
|
|
8ced4627
|
2018-08-11T13:06:14
|
|
Fix 'invalid packet line' for ng packets containing errors
(cherry picked from commit 50dd7fea5ad1bf6c013b72ad0aa803a9c84cdede)
|
|
ffc20564
|
2018-09-05T11:49:13
|
|
Prevent heap-buffer-overflow
When running repack while doing repo writes, `packfile_load__cb()` can see some temporary files in the directory that are bigger than usual, which makes `memcmp` overflow on the `p->pack_name` string. ASAN detected this. This change just uses `strncmp`, which should not have any performance impact and is safe for comparing strings of different sizes.
```
ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61200001a3f3 at pc 0x7f4a9e1976ec bp 0x7ffc1f80e100 sp 0x7ffc1f80d8b0
READ of size 89 at 0x61200001a3f3 thread T0
SCARINESS: 26 (multi-byte-read-heap-buffer-overflow)
#0 0x7f4a9e1976eb in __interceptor_memcmp.part.78 (/build/cfgr-admin#link-tree/libtools_build_sanitizers_asan-ubsan-py.so+0xcf6eb)
#1 0x7f4a518c5431 in packfile_load__cb /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb_pack.c:213
#2 0x7f4a518d9582 in git_path_direach /build/libgit2/0.27.0/src/libgit2-0.27.0/src/path.c:1134
#3 0x7f4a518c58ad in pack_backend__refresh /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb_pack.c:347
#4 0x7f4a518c1b12 in git_odb_refresh /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb.c:1511
#5 0x7f4a518bff5f in git_odb__freshen /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb.c:752
#6 0x7f4a518c17d4 in git_odb_stream_finalize_write /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb.c:1415
#7 0x7f4a51b9d015 in Repository_write /build/pygit2/0.27.0/src/pygit2-0.27.0/src/repository.c:509
```
(cherry picked from commit d22cd1f4a4c10ff47b04c57560e6765d77e5a8fd)
|
|
c4db1715
|
2018-09-03T10:49:46
|
|
config_parse: refactor error handling when parsing multiline variables
The current error handling for the multiline variable parser is a bit
fragile, as each error condition has its own code to clear memory.
Instead, unify error handling as far as possible to avoid this
repetitive code. While at it, make use of `GITERR_CHECK_ALLOC` to
correctly handle OOM situations and verify that the buffer we print into
does not run out of memory either.
(cherry picked from commit bc63e1ef521ab5900dc0b0dcd578b8bf18627fb1)
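The usual C idiom for this kind of consolidation is a single cleanup label that every failure path jumps to; a generic sketch, not the actual parser code:
```
#include <stdlib.h>
#include <string.h>

/* Join two line fragments into one freshly allocated value, using a single
 * error path instead of repeating cleanup at every failure site. */
static int join_lines(char **out, const char *first, const char *second)
{
	size_t len = strlen(first) + strlen(second) + 1;
	char *value = NULL;
	int error = -1;

	if ((value = malloc(len)) == NULL)
		goto out;

	strcpy(value, first);
	strcat(value, second);

	*out = value;
	return 0;

out:
	free(value);
	return error;
}
```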
|
|
c18c913c
|
2018-09-01T03:50:26
|
|
config: Fix a leak parsing multi-line config entries
(cherry picked from commit 38b852558eb518f96c313cdcd9ce5a7af6ded194)
|
|
da70156e
|
2018-08-25T17:04:39
|
|
config: convert unbounded recursion into a loop
(cherry picked from commit a03113e80332fba6c77f43b21d398caad50b4b89)
|
|
e98d0a37
|
2018-08-06T10:49:54
|
|
Merge pull request #4757 from pks-t/pks/v0.26.6
Release v0.26.6
|
|
81532654
|
2018-08-03T11:24:31
|
|
version: bump to v0.26.6
|
|
495bc486
|
2018-08-03T11:24:14
|
|
CHANGELOG.md: document security release v0.26.6
|
|
50705a2a
|
2018-07-19T13:00:42
|
|
smart_pkt: fix potential OOB-read when processing ng packet
OSS-fuzz has reported a potential out-of-bounds read when processing a
"ng" smart packet:
==1==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6310000249c0 at pc 0x000000493a92 bp 0x7ffddc882cd0 sp 0x7ffddc882480
READ of size 65529 at 0x6310000249c0 thread T0
SCARINESS: 26 (multi-byte-read-heap-buffer-overflow)
#0 0x493a91 in __interceptor_strchr.part.35 /src/llvm/projects/compiler-rt/lib/sanitizer_common/sanitizer_common_interceptors.inc:673
#1 0x813960 in ng_pkt libgit2/src/transports/smart_pkt.c:320:14
#2 0x810f79 in git_pkt_parse_line libgit2/src/transports/smart_pkt.c:478:9
#3 0x82c3c9 in git_smart__store_refs libgit2/src/transports/smart_protocol.c:47:12
#4 0x6373a2 in git_smart__connect libgit2/src/transports/smart.c:251:15
#5 0x57688f in git_remote_connect libgit2/src/remote.c:708:15
#6 0x52e59b in LLVMFuzzerTestOneInput /src/download_refs_fuzzer.cc:145:9
#7 0x52ef3f in ExecuteFilesOnyByOne(int, char**) /src/libfuzzer/afl/afl_driver.cpp:301:5
#8 0x52f4ee in main /src/libfuzzer/afl/afl_driver.cpp:339:12
#9 0x7f6c910db82f in __libc_start_main /build/glibc-Cl5G7W/glibc-2.23/csu/libc-start.c:291
#10 0x41d518 in _start
When parsing an "ng" packet, we keep track of both the current position
as well as the remaining length of the packet itself. But instead of
taking care not to exceed the length, we pass the current pointer's
position to `strchr`, which will search for a certain character until
hitting NUL. It is thus possible to create a crafted packet which
doesn't contain a NUL byte to trigger an out-of-bounds read.
Fix the issue by instead using `memchr`, passing the remaining length as
restriction. Furthermore, verify that we actually have enough bytes left
to produce a match at all.
OSS-Fuzz-Issue: 9406
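The shape of the fix, in isolation (a simplified sketch, not the exact smart_pkt code): scan only within the remaining packet length and reject the packet if no terminator is found.
```
#include <stddef.h>
#include <string.h>

/* Sketch of parsing "ng <refname> ..." from a buffer that is not
 * guaranteed to be NUL-terminated: memchr() is bounded by `len`, whereas
 * strchr() would keep scanning until it happened to hit a NUL byte. */
static int parse_ng_refname(const char *line, size_t len,
                            const char **ref, size_t *ref_len)
{
	const char *space;

	if (len < 3 || memcmp(line, "ng ", 3) != 0)
		return -1;

	line += 3;
	len -= 3;

	space = memchr(line, ' ', len);
	if (space == NULL)
		return -1; /* no terminator within the packet: reject it */

	*ref = line;
	*ref_len = (size_t)(space - line);
	return 0;
}
```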
|
|
7e12ca68
|
2018-08-06T08:57:20
|
|
travis: force usage of Xcode 8.3 image
Travis has upgraded the default Xcode images from 8.3 to 9.4 on 31st
July 2018, including an upgrade to macOS 10.13. Unfortunately, this
breaks our CI builds on our maintenance branches. As we do not want to
include major changes to fix the integration right now, we force use of
the old Xcode 8.3 images.
|
|
a3e53c16
|
2018-07-09T14:11:45
|
|
Merge pull request #4718 from pks-t/pks/v0.26.5
Release v0.26.5
|
|
440a3636
|
2018-07-05T14:33:53
|
|
version: bump to v0.26.5
|
|
188fef5e
|
2018-07-05T14:20:57
|
|
CHANGELOG: add release notes for v0.26.5
|
|
47ea1f58
|
2018-07-05T13:30:46
|
|
delta: fix overflow when computing limit
When checking whether a delta base offset and length fit into the base
we have in memory already, we can trigger an overflow which breaks the
check. This would subsequently result in us reading memory from out of
bounds of the base.
The issue is easily fixed by checking for overflow when adding `off` and
`len`, thus guaranteeing that we are never indexing beyond `base_len`.
This corresponds to the git patch 8960844a7 (check patch_delta bounds
more carefully, 2006-04-07), which adds these overflow checks.
Reported-by: Riccardo Schirone <rschiron@redhat.com>
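The overflow-safe form of such a bounds check, as a standalone sketch (names are illustrative):
```
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Check that the delta base range [off, off + len) lies within a base of
 * size `base_len`. Computing `off + len` directly can wrap around, so we
 * test for overflow before comparing the sum. */
static bool base_range_is_valid(size_t off, size_t len, size_t base_len)
{
	if (len > SIZE_MAX - off)
		return false; /* off + len would overflow */
	return off + len <= base_len;
}
```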
|
|
25d4a8c9
|
2018-06-29T09:11:02
|
|
delta: fix out-of-bounds read of delta
When computing the offset and length of the delta base, we repeatedly
increment the `delta` pointer without checking whether we have advanced
past its end already, which can thus result in an out-of-bounds read.
Fix this by repeatedly checking whether we have reached the end. Add a
test which would cause Valgrind to produce an error.
Reported-by: Riccardo Schirone <rschiron@redhat.com>
Test-provided-by: Riccardo Schirone <rschiron@redhat.com>
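The fix boils down to checking the cursor against the end of the buffer before every read; a simplified sketch of a bounds-checked variable-length size parser in the spirit of the delta header format (not libgit2's exact code):
```
#include <stddef.h>

/* Decode a little-endian base-128 number (7 data bits per byte, high bit
 * set on continuation bytes), refusing to read past `end`. Returns the
 * advanced cursor, or NULL if the buffer ends mid-number. */
static const unsigned char *parse_size(const unsigned char *p,
                                       const unsigned char *end,
                                       size_t *out)
{
	size_t value = 0;
	unsigned int shift = 0;
	unsigned char byte;

	do {
		if (p >= end)
			return NULL; /* would read out of bounds: bail out */
		if (shift >= sizeof(size_t) * 8)
			return NULL; /* value would not fit: reject */
		byte = *p++;
		value |= (size_t)(byte & 0x7f) << shift;
		shift += 7;
	} while (byte & 0x80);

	*out = value;
	return p;
}
```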
|
|
8ab4f363
|
2018-06-29T07:45:18
|
|
delta: fix sign-extension of big left-shift
Our delta code was originally adapted from JGit, which itself adapted it
from git itself. Due to this heritage, we inherited a bug from git.git
in how we compute the delta offset, which was fixed upstream in
48fb7deb5 (Fix big left-shifts of unsigned char, 2009-06-17). As
explained by Linus:
Shifting 'unsigned char' or 'unsigned short' left can result in sign
extension errors, since the C integer promotion rules means that the
unsigned char/short will get implicitly promoted to a signed 'int' due to
the shift (or due to other operations).
This normally doesn't matter, but if you shift things up sufficiently, it
will now set the sign bit in 'int', and a subsequent cast to a bigger type
(eg 'long' or 'unsigned long') will now sign-extend the value despite the
original expression being unsigned.
One example of this would be something like
unsigned long size;
unsigned char c;
size += c << 24;
where despite all the variables being unsigned, 'c << 24' ends up being a
signed entity, and will get sign-extended when then doing the addition in
an 'unsigned long' type.
Since git uses 'unsigned char' pointers extensively, we actually have this
bug in a couple of places.
In our delta code, we inherited such a bogus shift when computing the
offset at which the delta base is to be found. Due to the sign extension
we can end up with an offset where all the bits are set. This can allow
an arbitrary memory read, as the addition in `base_len < off + len` can
now overflow if `off` has all its bits set.
Fix the issue by casting the result of `*delta++ << 24UL` to an unsigned
integer again. Add a test with a crafted delta that would actually
succeed with an out-of-bounds read in case where the cast wouldn't
exist.
Reported-by: Riccardo Schirone <rschiron@redhat.com>
Test-provided-by: Riccardo Schirone <rschiron@redhat.com>
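A self-contained illustration of the promotion problem and the cast that avoids it (variable names are illustrative; the actual fix applies the cast inside the delta offset computation):
```
#include <stdio.h>

int main(void)
{
	unsigned char c = 0x90;
	unsigned long bad, good;

	/* `c << 24` promotes c to (signed) int; with the high bit set the
	 * result is negative on typical 32-bit-int platforms and gets
	 * sign-extended when widened to unsigned long. */
	bad = c << 24;

	/* Shifting an explicitly unsigned operand avoids the promotion to a
	 * signed type and thus the sign extension. */
	good = (unsigned long)c << 24;

	printf("bad  = %lx\n", bad);   /* ffffffff90000000 on LP64 systems */
	printf("good = %lx\n", good);  /* 90000000 */
	return 0;
}
```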
|
|
ca55adaa
|
2018-06-04T12:18:40
|
|
Merge pull request #4666 from pks-t/pks/v0.26.4
Release v0.26.4
|
|
9fcd4772
|
2018-05-29T14:05:33
|
|
version: bump library version to 0.26.4
|
|
ef5265e2
|
2018-05-29T14:05:10
|
|
CHANGELOG: update for v0.26.4
|
|
644c973f
|
2018-05-22T15:21:08
|
|
path: accept the name length as a parameter
We may take in names from the middle of a string, so we want the caller to let
us know how long the path component we should be checking is.
|
|
2143a0de
|
2018-05-22T14:16:45
|
|
checkout: add a failing test for refusing a symlinked .gitmodules
We want to reject these as they cause compatibility issues and can lead to git
writing to files outside of the repository.
|
|
3adb0dc3
|
2018-05-22T13:58:24
|
|
path: expose dotgit detection functions per filesystem
These will be used by the checkout code to detect them for the particular
filesystem they're on.
|
|
dc5591b4
|
2018-05-18T15:16:53
|
|
path: hide the dotgit file functions
These can't go into the public API yet as we don't want to introduce API or ABI
changes in a security release.
|
|
f98d140b
|
2018-05-16T15:56:04
|
|
path: add functions to detect .gitconfig and .gitattributes
|
|
4656e9c4
|
2018-05-16T15:42:08
|
|
path: add a function to detect an .gitmodules file
Given a path component it knows what to pass to the filesystem-specific
functions so we're protected even from trees which try to use the 8.3 naming
rules to get around us matching on the filename exactly.
The logic and test strings come from the equivalent git change.
|
|
9893f56b
|
2018-05-16T14:47:04
|
|
path: provide a generic function for checking dotgit files on NTFS
It checks against the 8.3 shortname variants, including the one which includes
the checksum as part of its name.
|
|
f43ade0e
|
2018-05-16T11:56:04
|
|
path: provide a generic dotgit checking function for HFS
This lets us check for other kinds of reserved files.
|
|
4a1753c2
|
2018-05-14T16:03:15
|
|
submodule: also validate Windows-separated paths for validity
Otherwise we would also admit `..\..\foo\bar` as a valid path and fail to
protect Windows users.
Ideally we would check for both separators without the need for the copied
string, but this'll get us over the RCE.
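The essence of the check is to treat '\\' as a separator as well when looking for ".." components; a simplified sketch (per the note above, the real code works on a copied string rather than checking both separators in place):
```
#include <stdbool.h>

/* Returns true if `path` contains a ".." component, treating both '/'
 * and '\\' as separators so that "..\\..\\foo\\bar" is caught too. */
static bool has_dotdot_component(const char *path)
{
	const char *p;

	for (p = path; *p; p++) {
		if (p[0] == '.' && p[1] == '.' &&
		    (p[2] == '\0' || p[2] == '/' || p[2] == '\\') &&
		    (p == path || p[-1] == '/' || p[-1] == '\\'))
			return true;
	}
	return false;
}
```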
|
|
f77e40a1
|
2018-04-30T13:47:15
|
|
submodule: ignore submodules which include path traversal in their name
If we decide that the "name" of the submodule (i.e. its path inside
`.git/modules/`) is trying to escape that directory or otherwise trick us, we
ignore the configuration for that submodule.
This leaves us with a half-configured submodule when looking it up by path, but
it's the same result as if the configuration really were missing.
The name check is potentially more strict than it needs to be, but it lets us
re-use the check we're doing for the checkout. The function that encapsulates
this logic is ready to be exported but we don't want to do that in a security
release so it remains internal for now.
|
|
2fc15ae8
|
2018-04-30T13:03:44
|
|
submodule: add a failing test for a submodule escaping .git/modules
We should pretend such submodules do not exist as it can lead to RCE.
|
|
8af6bce2
|
2018-03-20T07:47:27
|
|
online tests: update auth for bitbucket test
Update the settings to use a specific read-only token for accessing our
test repositories in Bitbucket.
|
|
999200cc
|
2018-03-19T09:20:35
|
|
online::clone: skip creds fallback test
At present, we have three online tests against bitbucket: one which
specifies the credentials in the payload, one which specifies the
correct credentials in the URL and a final one that specifies the
incorrect credentials in the URL. Bitbucket has begun responding to the
latter test with a 403, which causes us to fail.
Break these three tests into separate tests so that we can skip the
latter until this is resolved on Bitbucket's end or until we can change
the test to a different provider.
|
|
ea55c77c
|
2018-05-24T21:58:40
|
|
path: hand-code the zero-width joiner as UTF-8
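For reference, U+200D (ZERO WIDTH JOINER) encodes to the UTF-8 byte sequence 0xE2 0x80 0x8D, so hand-coding it amounts to something like:
```
/* U+200D ZERO WIDTH JOINER, spelled out as raw UTF-8 bytes so the source
 * file itself stays plain ASCII. */
static const char zero_width_joiner[] = "\xe2\x80\x8d";
```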
|
|
95a0ab89
|
2018-05-24T20:28:36
|
|
submodule: plug leaks from the escape detection
|
|
d1aaa5e2
|
2018-05-24T19:05:59
|
|
submodule: replace index with strchr which exists on Windows
|
|
f78b6907
|
2018-05-24T19:00:13
|
|
submodule: the repository for _name_is_valid should not be const
We might modify caches due to us trying to load the configuration to figure out
what kinds of filesystem protections we should have.
|
|
98e8b11c
|
2018-05-23T08:40:17
|
|
path: check for a symlinked .gitmodules in fs-agnostic code
We still compare case-insensitively to protect more thoroughly as we don't know
what specifics we'll see on the system and it's the behaviour from git.
|
|
1c1e32b7
|
2018-05-22T20:37:23
|
|
checkout: change symlinked .gitmodules file test to expect failure
When dealing with `core.protectNTFS` and `core.protectHFS` we do check
against `.gitmodules` but we still have a failing test as the non-filesystem
codepath does not check for it.
|
|
5b855194
|
2018-05-22T16:13:47
|
|
path: reject .gitmodules as a symlink
Any part of the library which asks the question can pass in the mode to have it
checked against `.gitmodules` being a symlink.
This is particularly relevant for adding entries to the index from the worktree
and for checking out files.
|
|
17df18ae
|
2018-05-22T15:48:38
|
|
index: stat before creating the entry
This is so we have it available for the path validity checking. In a later
commit we will start rejecting `.gitmodules` files as symlinks.
|
|
b55bb43d
|
2018-03-12T13:50:02
|
|
Merge pull request #4475 from pks-t/pks/v0.26.1-backports
v0.26.3 backports
|
|
bb00842f
|
2018-03-10T17:57:40
|
|
Bump version to v0.26.3
|
|
7c8ddef0
|
2018-03-10T17:57:18
|
|
CHANGELOG.md: update for v0.26.3
|
|
32cc5edc
|
2017-07-07T17:10:57
|
|
tests: status: additional test for negative ignores with pattern
This test is by Carlos Martín Nieto.
|
|
5c15cd94
|
2017-07-07T13:27:27
|
|
ignore: keep negative rules containing wildcards
Ignore rules allow for reverting a previously ignored rule by prefixing
it with an exclamation mark. As such, a negative rule can only override
previously ignored files. While computing all ignore patterns, we try to
use this fact to optimize away some negative rules which do not override
any previous patterns, as they won't change the outcome anyway.
In some cases, though, this optimization causes us to get the actual
ignores wrong for some files. This may happen whenever the pattern
contains a wildcard, as we are unable to reason about whether a pattern
overrides a previous pattern in a sane way. This happens for example in
the case where a gitignore file contains "*.c" and "!src/*.c", where we
wouldn't un-ignore files inside of the "src/" subdirectory.
In this case, the first solution coming to mind may be to just strip the
"src/" prefix and simply compare the basenames. While that would work
here, it would stop working as soon as the basename pattern itself is
different, like for example with "*x.c" and "!src/*.c". As such, we
settle for the easier fix of just not optimizing away rules that contain
a wildcard.
|
|
8d86cdd4
|
2017-07-07T12:27:43
|
|
ignore: return early to avoid useless indentation
|
|
b35c3098
|
2018-02-28T12:06:59
|
|
curl: explicitly initialize and cleanup global curl state
Our curl-based streams make use of the easy curl interface. This
interface automatically initializes and de-initializes the global curl
state by calling out to `curl_global_init` and `curl_global_cleanup`.
Thus, all global state will be repeatedly re-initialized when creating
multiple curl streams in succession. Besides being inefficient, this is
also not thread-safe, as `curl_global_init` is itself not thread-safe.
Thus a multi-threaded program handling multiple curl streams at the
same time is inherently racy.
Fix the issue by globally initializing and cleaning up curl's state.
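The libcurl calls involved are `curl_global_init` and `curl_global_cleanup`; a minimal sketch of doing this once at library scope rather than per stream:
```
#include <curl/curl.h>

/* Initialize libcurl's global state exactly once, before any threads are
 * spawned, and tear it down once at shutdown. curl_global_init() is not
 * thread-safe, so it must not be triggered concurrently by individual
 * streams. */
int global_curl_init(void)
{
	return curl_global_init(CURL_GLOBAL_ALL) == CURLE_OK ? 0 : -1;
}

void global_curl_shutdown(void)
{
	curl_global_cleanup();
}
```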
|
|
9e98f49d
|
2018-02-28T12:21:08
|
|
tree: initialize the id we use for testing submodule insertions
Instead of leaving it uninitialized and relying on luck for it to be non-zero,
let's give it a dummy hash so we make valgrind happy (in this case the hash
comes from `sha1sum </dev/null`).
|
|
c24b15c3
|
2018-02-28T12:20:23
|
|
win32: strncmp -> git__strncmp
The win32 C library is compiled cdecl, however when configured with
`STDCALL=ON`, our functions (and function pointers) will use the stdcall
calling convention. You cannot set a `__stdcall` function pointer to a
`__cdecl` function, so it's easier to just use our `git__strncmp`
instead of sorting that mess out.
|
|
9ab8d153
|
2018-02-25T15:46:51
|
|
winhttp: enable TLS 1.2 on Windows 7 and earlier
Versions of Windows prior to Windows 8 do not enable TLS 1.2 by default,
though support may exist. Try to enable TLS 1.2 support explicitly on
connections.
This request may fail if the operating system does not have TLS 1.2
support - the initial release of Vista lacks TLS 1.2 support (though
it is available as a software update) and XP completely lacks TLS 1.2
support. If this request does fail, the HTTP context is still valid,
and still maintains the original protocol support. So we ignore the
failure from this operation.
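In WinHTTP terms this is a `WinHttpSetOption` call with `WINHTTP_OPTION_SECURE_PROTOCOLS`; a hedged sketch (the exact protocol flags libgit2 requests may differ, and the TLS 1.1/1.2 constants may need to be defined by hand on older toolchains, as the following commits note):
```
#include <windows.h>
#include <winhttp.h>

/* Ask WinHTTP for TLS 1.2 (and TLS 1.1) explicitly. On systems without
 * TLS 1.2 support the call fails, but the session handle stays usable
 * with its default protocols, so the failure is ignored. */
static void try_enable_tls12(HINTERNET session)
{
	DWORD protocols =
		WINHTTP_FLAG_SECURE_PROTOCOL_TLS1 |
		WINHTTP_FLAG_SECURE_PROTOCOL_TLS1_1 |
		WINHTTP_FLAG_SECURE_PROTOCOL_TLS1_2;

	if (!WinHttpSetOption(session, WINHTTP_OPTION_SECURE_PROTOCOLS,
	                      &protocols, sizeof(protocols))) {
		/* Best effort: keep the session's default protocol support. */
	}
}
```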
|
|
aa0127c0
|
2018-02-27T11:24:30
|
|
winhttp: include constants for TLS 1.1/1.2 support
For platforms that do not define `WINHTTP_FLAG_SECURE_PROTOCOL_TLS1_1`
and/or `WINHTTP_FLAG_SECURE_PROTOCOL_TLS1_2`.
|
|
9bdc00b1
|
2018-02-27T10:32:29
|
|
mingw: update TLS option flags
Include the constants for `WINHTTP_FLAG_SECURE_PROTOCOL_TLS1_1` and
`WINHTTP_FLAG_SECURE_PROTOCOL_TLS1_2` so that they can be used by mingw.
This updates both the `deps/winhttp` framework (for classic mingw) and
adds the defines for mingw64, which does not use that framework.
|
|
1b853c48
|
2018-02-19T22:10:44
|
|
checkout test: further ensure workdir perms are updated
When both the index _and_ the working directory have changed
permissions on a file - but only the permissions,
such that the contents of the file are identical - ensure that
`git_checkout` updates the permissions to match the checkout target.
|
|
73615900
|
2018-02-19T22:09:27
|
|
checkout test: ensure workdir perms are updated
When the working directory has changed permissions on a file - but only
the permissions, such that the contents of the file are identical -
ensure that `git_checkout` updates the permissions to match the checkout
target.
|
|
3983fc1d
|
2018-02-18T16:10:33
|
|
checkout: take mode into account when comparing index to baseline
When checking out a file, we determine whether the baseline (what we
expect to be in the working directory) actually matches the contents
of the working directory. This is safe behavior to prevent us from
overwriting changes in the working directory.
We look at the index to optimize this test: if we know that the index
matches the working directory, then we can simply look at the index
data compared to the baseline.
We have historically compared the baseline to the index entry by oid.
However, we must also compare the mode of the two items to ensure that
they are identical. Otherwise, we will refuse to update the working
directory for a mode change.
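Conceptually the baseline comparison gains a mode check next to the oid check; a sketch using libgit2's public types (the surrounding checkout internals are simplified away):
```
#include <git2.h>
#include <stdbool.h>
#include <stdint.h>

/* An index entry only matches the checkout baseline if both the blob id
 * and the file mode agree; comparing the id alone makes a pure mode
 * change (e.g. 0644 -> 0755) look like there is nothing to update. */
static bool entry_matches_baseline(const git_index_entry *entry,
                                   const git_oid *baseline_id,
                                   uint32_t baseline_mode)
{
	return git_oid_equal(&entry->id, baseline_id) &&
	       entry->mode == baseline_mode;
}
```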
|
|
e74e05ed
|
2018-02-20T10:38:27
|
|
diff_tform: fix rename detection with rewrite/delete pair
A rewritten file can either be classified as a modification of its
contents or of a delete of the complete file followed by an addition of
the new content. This distinction becomes important when we want to
detect renames for rewrites. Given a scenario where a file "a" has been
deleted and another file "b" has been renamed to "a", this should be
detected as a deletion of "a" followed by a rename of "b" -> "a". Thus,
splitting of the original rewrite into a delete/add pair is important
here.
This splitting is represented by a flag we can set at the current delta.
While the flag is already being set in case we want to break rewrites,
we do not do so in case where the `GIT_DIFF_FIND_RENAMES_FROM_REWRITES`
flag is set. This can trigger an assert when we try to match the source
and target deltas.
Fix the issue by setting the `GIT_DIFF_FLAG__TO_SPLIT` flag at the delta
when it is a rename target and `GIT_DIFF_FIND_RENAMES_FROM_REWRITES` is
set.
|
|
e229e90d
|
2018-02-20T10:03:48
|
|
tests: add rename-rewrite scenarios to "renames" repository
Add two more scenarios to the "renames" repository. The first scenario
has a major rewrite of a file and a delete of another file, the second
scenario has a deletion of a file and rename of another file to the
deleted file. Both scenarios will be used in the following commit.
|
|
be205dfa
|
2018-02-20T09:54:58
|
|
tests: diff::rename: use defines for commit OIDs
While we frequently reuse commit OIDs throughout the file, we do not
have any constants to refer to these commits. Make this a bit easier to
read by giving the commit OIDs somewhat descriptive names of what kind
of commit they refer to.
|
|
b3c0d43c
|
2018-01-22T14:44:31
|
|
merge: virtual commit should be last argument to merge-base
Our virtual commit must be the last argument to merge-base: since our
algorithm pushes _both_ parents of the virtual commit, it needs to be
the last argument, since merge-base:
> Given three commits A, B and C, git merge-base A B C will compute the
> merge base between A and a hypothetical commit M
We want to calculate the merge base between the actual commit ("two")
and the virtual commit ("one") - since one actually pushes its parents
to the merge-base calculation, we need to calculate the merge base of
"two" and the parents of one.
|
|
3619e0f0
|
2018-01-22T23:56:22
|
|
Add failing test case for virtual commit merge base issue
|
|
dc51d774
|
2018-01-21T16:50:40
|
|
merge::trees::recursive: test for virtual base building
Virtual base building: ensure that the virtual base is created and
revwalked in the same way as git.
|
|
b2b37077
|
2018-01-21T18:05:45
|
|
merge: reverse merge bases for recursive merge
When the commits being merged have multiple merge bases, reverse the
order when creating the virtual merge base. This is for compatibility
with git's merge-recursive algorithm, and ensures that we build
identical trees.
Git does this to try to use older merge bases first. Per 8918b0c:
> It seems to be the only sane way to do it: when a two-head merge is
> done, and the merge-base and one of the two branches agree, the
> merge assumes that the other branch has something new.
>
> If we start creating virtual commits from newer merge-bases, and go
> back to older merge-bases, and then merge with newer commits again,
> chances are that a patch is lost, _because_ the merge-base and the
> head agree on it. Unlikely, yes, but it happened to me.
|
|
457a81bb
|
2018-01-21T18:01:20
|
|
oidarray: introduce git_oidarray__reverse
Provide a simple function to reverse an oidarray.
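Reversing such an array is an in-place swap over half the elements; a generic sketch (the real helper operates on libgit2's `git_oidarray`):
```
#include <stddef.h>
#include <string.h>

/* In-place reversal of an array of fixed-size ids (20-byte SHA-1s here). */
static void id_array_reverse(unsigned char (*ids)[20], size_t count)
{
	unsigned char tmp[20];
	size_t i;

	for (i = 0; i < count / 2; i++) {
		memcpy(tmp, ids[i], sizeof(tmp));
		memcpy(ids[i], ids[count - 1 - i], sizeof(tmp));
		memcpy(ids[count - 1 - i], tmp, sizeof(tmp));
	}
}
```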
|
|
08ab5902
|
2018-01-21T16:41:49
|
|
Introduce additional criss-cross merge branches
|