|
70df0721
|
2018-04-20T23:11:20
|
|
valgrind: silence libssh2 leaking something from gcrypt
==2957== 912 bytes in 19 blocks are still reachable in loss record 323 of 369
==2957== at 0x4C2AB80: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2957== by 0x675B120: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x675BDF8: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x675FE0D: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x6761DC4: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x676477E: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x675B071: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x675B544: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x675914B: gcry_control (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x5D30EC9: libssh2_init (in /usr/lib/x86_64-linux-gnu/libssh2.so.1.0.1)
==2957== by 0x66BCCD: git_transport_ssh_global_init (ssh.c:910)
==2957== by 0x616443: init_common (global.c:65)
(cherry picked from commit dd75885ab45a590ff20404a3a0f20a1148cd4f64)
|
|
16a2fc53
|
2018-04-20T23:11:19
|
|
valgrind: skip buf::oom test
(cherry picked from commit 573c408921e02f61501b2982fc10af77a8412631)
|
|
243ee6c6
|
2018-04-20T23:11:16
|
|
travis: split valgrind check in its own script
(cherry picked from commit 74b0a4320726cb557bcf73f47ba25ee10c430066)
|
|
cd14fca1
|
2018-03-29T13:35:27
|
|
appveyor: disable DHE to avoid spurious failures
Our CI builds have intermittent failures in our online tests, e.g. with
the message "A provided buffer was too small". This is not a programming
error in libgit2 but rather an error in the SChannel component of
Windows. Under certain circumstances involving Diffie-Hellman key
exchange, SChannel is unable to correctly handle input from the server.
This bug has already been fixed in recent patches for Windows 10 and
Windows Server 2016, but they are not yet available for AppVeyor.
Manually paper over that issue by disabling all ciphersuites using DHE
via the registry. While this disables more ciphers than necessary, we
really don't care for that at all but just want to avoid build failures
due to that bug.
See [1], [2] or [3] for additional information.
1: https://github.com/aws/aws-sdk-cpp/issues/671
2: https://github.com/dotnet/corefx/issues/7812
3: https://support.microsoft.com/en-us/help/2992611/ms14-066-vulnerability-in-schannel-could-allow-remote-code-execution-n
(cherry picked from commit 723e1e976d4a038d89940ecbcfb7ff685d204859)
|
|
1a62e303
|
2017-09-15T11:32:46
|
|
appveyor: add jobs to also build on Visual Studio 2015
In order to cover a wider range of build environments, add two more jobs
which build and test libgit2 on Visual Studio 14 2015.
(cherry picked from commit 03a95bc5f6418ffd0ebb7f904281935e856a1800)
|
|
e42f8f73
|
2018-04-20T23:11:14
|
|
travis: split testing from building
(cherry picked from commit 2f4e7cb0e8c21cc2d673946eddf9278c2863427b)
|
|
3dd462fd
|
2017-09-15T10:01:36
|
|
appveyor: explicitly specify build images
AppVeyor currently provides three standard build worker images with
VS2013, VS2015 and VS2017. Right now, we are using the default image
implicitly, which is the VS2015 one. We want to be more explicit about this, so that we
can easily switch build images based on the job. So starting from this
commit, we explicitly set the `APPVEYOR_BUILD_WORKER_IMAGE` variable per
job, which enables us to choose different images.
To be able to test a wider range of build configurations, this commit
also switches the jobs for VS2010 over to use the older, VS2013-based
images. As the next commit will introduce two new jobs for building with
VS2015, we have then covered both build environments.
Also, let us be a bit more explicit regarding the CMake generator.
Instead of only saying "Visual Studio 10", use the more descriptive
value "Visual Studio 10 2010" to at least avoid some confusion
surrounding the versioning scheme of Visual Studio.
(cherry picked from commit e1076dbfd84218af7870a8f527c37695918b5cde)
|
|
727183c7
|
2018-04-20T23:11:17
|
|
valgrind: silence curl_global_init leaks
==18109== 664 bytes in 1 blocks are still reachable in loss record 279 of 339
==18109== at 0x4C2AB80: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==18109== by 0x675B120: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==18109== by 0x675C13C: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==18109== by 0x675C296: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==18109== by 0x679BD14: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==18109== by 0x679CC64: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==18109== by 0x6A64946: ??? (in /usr/lib/x86_64-linux-gnu/libgnutls.so.26.22.6)
==18109== by 0x6A116E8: ??? (in /usr/lib/x86_64-linux-gnu/libgnutls.so.26.22.6)
==18109== by 0x6A01114: gnutls_global_init (in /usr/lib/x86_64-linux-gnu/libgnutls.so.26.22.6)
==18109== by 0x52A6C78: ??? (in /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.3.0)
==18109== by 0x5285ADC: curl_global_init (in /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.3.0)
==18109== by 0x663524: git_curl_stream_global_init (curl.c:44)
(cherry picked from commit c0c9e9eeee5b4577eb930f56b8ddaf788f809067)
|
|
e5278f70
|
2017-11-11T15:38:27
|
|
clar: verify command line arguments before executing
When executing `libgit2_clar -smerge -invalid_option`, it will first execute
the merge test suite and afterwards output help because of the invalid option.
With this change, it verifies all options before executing. If there are any
invalid options, it will output help and exit without actually executing
the test suites.
(cherry picked from commit 3275863134122892e2f8a8aa4ad0ce1c123a48ec)
|
|
306ffba3
|
2018-04-03T12:31:35
|
|
appveyor: fix typo in registry key to disable DHE
Commit 723e1e976 (appveyor: disable DHE to avoid spurious failures,
2018-03-29) added a workaround to fix spurious test failures due to a
bug in Windows' SChannel implementation. The workaround only worked by
accident, though, as the registry key was in fact mistyped. Fix the
typo.
(cherry picked from commit 3a72b0e2569c03ed0bd7ca63572eaf6384a2c81f)
|
|
637412cc
|
2017-11-20T13:26:33
|
|
tests: create new test target for all SSH-based tests
Some tests shall be run against our own SSH server we spin up in Travis.
As those need to be run separate from our previous tests which run
against git-daemon, we have to do this in a separate step. Instead of
bundling all that knowledge in the CI script, move it into the test
build instructions by creating a new test target.
(cherry picked from commit 5874e151d7b10de84fc1ca168339fdc622292219)
|
|
2362ce6c
|
2017-06-07T13:06:53
|
|
tests: online::clone: inline creds-test with nonexistent URL
Right now, we test our credential callback code twice, once via SSH on
localhost and once via a non-existent GitHub repository. While the first
URL makes sense to be configurable, it does not make sense to hard-code
the non-existing repository, which requires us to call tests multiple
times. Instead, we can just inline the URL into another set of tests.
(cherry picked from commit 54a1bf057a1123cf55ac3447c79761c817382f47)
|
|
a1a495f2
|
2017-06-07T12:48:48
|
|
tests: online::clone: construct credential-URL from environment
We support two types of passing credentials to the proxy, either via the
URL or explicitly by specifying user and password. We test these types
by modifying the proxy URL and executing the tests twice, which is
in fact unnecessary and requires us to maintain the list of environment
variables and test executions across multiple CI infrastructures.
To fix the situation, we can just always pass the host, port, user and
password to the tests. The tests can then assemble the complete URL
either with or without included credentials, allowing us to test both
cases in-process.
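A minimal sketch of assembling both URL variants in-process; the helper name and URL shape are illustrative, not the actual test code, and the values are assumed to come from the environment:
```c
#include <stdio.h>

/* Build one proxy URL with embedded credentials and one without, from
 * the same host/port/user/password values (e.g. read via getenv()). */
static void make_proxy_urls(char *with_creds, char *plain, size_t n,
                            const char *host, const char *port,
                            const char *user, const char *pass)
{
	snprintf(with_creds, n, "http://%s:%s@%s:%s/", user, pass, host, port);
	snprintf(plain, n, "http://%s:%s/", host, port);
}
```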
(cherry picked from commit fea6092079d5c09b499e472efead2f7aa81ce8a1)
|
|
89641431
|
2017-06-07T11:06:01
|
|
tests: perf: build but exclude performance tests by default
Our performance tests (or to be more concrete, our single performance
test) are not built by default, as they are always #ifdef'd out. While
it is true that we don't want to run performance tests by default, not
compiling them at all may cause code rot and is thus an unfavorable
approach to handle this.
We can easily improve this situation: this commit removes the #ifdef,
causing the code to always be compiled. Furthermore, we add `-xperf` to
the default command line parameters of `generate.py`, thus causing the
tests to be excluded by default.
Due to this approach, we are now able to execute the performance tests
by passing `-sperf` to `libgit2_clar`. Unfortunately, we cannot execute
the performance tests on Travis or AppVeyor as they rely on history
being available for the libgit2 repository. As both only do a shallow
clone, though, that history is not available.
(cherry picked from commit 543ec149b86a68e12dd141a6141e82850dabbf21)
|
|
d2bbea82
|
2017-06-07T10:59:31
|
|
tests: iterator_helpers: assert number of iterator items
When the function `expect_iterator_items` surpasses the number of
expected items, we simply break the loop. This causes us to trigger an
assert later on which has message attached, which is annoying when
trying to locate the root error cause. Instead, directly assert that the
current count is still smaller or equal to the expected count inside of
the loop.
(cherry picked from commit 9aba76364fcb4755930856a7bafc5294ed3ee944)
|
|
293c5ef2
|
2017-06-07T10:59:03
|
|
tests: status::worktree: indicate skipped tests on Win32
Some function bodies of tests which are not applicable to the Win32
platform are completely #ifdef'd out instead of calling `cl_skip()`.
This leaves us with no indication that these tests are not being
executed at all and may thus cause decreased scrutiny when investigating
skipped tests. Improve the situation by calling `cl_skip()` instead of
just doing nothing.
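An illustrative shape of such a test after the change (the test name is hypothetical; `cl_skip()` and `GIT_WIN32` are the existing clar/libgit2 identifiers):
```c
/* On Win32 the skip now shows up in the test output instead of the
 * body silently compiling away. */
void test_status_worktree__example(void)
{
#ifdef GIT_WIN32
	cl_skip();
#else
	/* assertions that are not applicable on Win32 */
#endif
}
```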
(cherry picked from commit 72c28ab011759dce113c2a0c7c36ebcd56bd6ddf)
|
|
98378a3f
|
2017-06-07T11:00:26
|
|
tests: iterator::workdir: fix reference count in stale test
The test `iterator::workdir::filesystem_gunk` is usually not executed,
as it is guarded by the environment variable "GITTEST_INVASIVE_SPEED"
due to its effects on speed. As such, it has become stale and does not
account for new references which have meanwhile been added to the
testrepo, causing it to fail. Fix this by raising the number of expected
references to 15.
(cherry picked from commit b8c14499f9940feaab08a23651a2ef24d27b17b7)
|
|
8ba43299
|
2017-06-07T11:01:28
|
|
travis: build sources with tracing enabled
Our tracing architecture is not built by default, causing the Travis CI
to not execute some code and skip several tests. As AppVeyor has already
enabled the tracing architecture when building the code, we should do
the same for Travis CI to have this code being tested on macOS and
Linux.
Add "-DENABLE_TRACE=ON" to our release-build options of Travis.
(cherry picked from commit 8999f6acc78810680f282db4257e842971b80cb4)
|
|
13a6b203
|
2017-09-06T08:04:19
|
|
travis: drop support for Ubuntu Precise
Ubuntu Precise is end of life since April 2017. At that point in time,
Precise was still the main distro which Travis CI built upon, with
the Trusty-based images still being in a beta state. But since June
21st, Trusty has officially moved out of beta and is now the default
image for all new builds. Right now, we build on both old and new images
to ensure we support both.
Unfortunately, this leaves us with the highest minimum version for CMake
being 2.8.7, as Precise has no greater version in its repositories. And
because of this limitation, we cannot actually use object libraries in
our build instructions. But considering Precise is end of life and
Trusty is now the new default for Travis, we can and should drop support
for this old and unmaintained distribution. And so we do.
(cherry picked from commit c17c3f8a07377d76432fb2e4369b9805387ac099)
|
|
76ecd892
|
2018-01-10T15:13:23
|
|
travis: we use bintray's own key for signing
The VM on Travis apparently will still proceed, but it's good practice.
(cherry picked from commit 6e748130e4f910b6f8c03a3f6f2e11c856d19ba7)
|
|
6be03667
|
2018-01-10T12:33:56
|
|
travis: fetch trusty dependencies from bintray
The trusty dependencies are now hosted on Bintray.
(cherry picked from commit da9898aba0fe26ea683822e99853bfb2b02ac744)
|
|
0c51ecf2
|
2017-10-07T00:10:06
|
|
travis: add custom apt sources
Move back to Travis's VM infrastructure for efficiency.
(cherry picked from commit 9dc21efdbf275dec18b9c34b472f8df9f8e8c169)
|
|
93434828
|
2017-10-31T14:43:28
|
|
travis: let's try a 5GB ramdisk
(cherry picked from commit 71ba464435bb430b02d94c653cd518c11f7289ff)
|
|
4eecbdd0
|
2017-10-31T10:40:24
|
|
travis: put clar's sandbox in a ramdisk on macOS
The macOS tests are by far the slowest right now. This attempts to remedy the
situation somewhat by asking clar to put its test data on a ramdisk.
(cherry picked from commit 37bb15122e30bb13aabc213079da53b5cdac2678)
|
|
736356a6
|
2017-11-06T12:47:40
|
|
examples: network: fix Win32 linking errors due to getline
The getline(3) function call is not part of ISO C and, most importantly,
it is not implemented on Microsoft Windows platforms. As our networking
example code makes use of getline, this breaks builds on MSVC and MinGW.
As this code wasn't built prior to the previous commit, this was never
noticed.
Fix the error by instead implementing a `readline` function, which
simply reads the password from stdin until it reads a newline
character.
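A rough sketch of such a replacement helper, assuming it only needs to read a single line from stdin (the function in the actual example code may differ):
```c
#include <stdio.h>
#include <string.h>

/* Read one line from stdin and strip the trailing newline; returns 0
 * on success, -1 on EOF or error. */
static int readline_stdin(char *buf, size_t bufsize)
{
	if (fgets(buf, (int)bufsize, stdin) == NULL)
		return -1;

	buf[strcspn(buf, "\r\n")] = '\0';
	return 0;
}
```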
(cherry picked from commit bf15dbf6cf19146082c1245e9db4016d773dbe7e)
|
|
1c85bcd8
|
2017-11-06T11:16:02
|
|
appveyor: build examples
By default, CMake will not build our examples directory. As we do not
instruct either the MinGW or MSVC builds on AppVeyor to enable building
these examples, we cannot verify that those examples at least build on
Windows systems.
Fix that by passing `-DBUILD_EXAMPLES=ON` to AppVeyor's CMake
invocation.
(cherry picked from commit 0b98a66baae83056401a0a5fef5dc5cd2ed3468b)
|
|
dc413239
|
2017-07-24T17:53:32
|
|
travis: only install custom libcurl on trusty
(cherry picked from commit c582fa4eb6bee7880f04080aa80357cca406e448)
|
|
7d1c72a4
|
2017-07-24T16:48:04
|
|
travis: only kill our own sshd
(cherry picked from commit 697583ea3aceb1379c576515ffa713ba29c50437)
|
|
fad7f7a2
|
2017-07-24T13:10:43
|
|
travis: use trusty
(cherry picked from commit 4da38193c568ca3842bc1130c82e7a9f955f23aa)
|
|
16957a7f
|
2017-07-23T03:41:52
|
|
travis: build with patched libcurl
Ubuntu trusty has a bug in curl when using NTLM credentials in a proxy,
dereferencing a null pointer and causing segmentation faults. Use a
custom-patched version of libcurl that avoids this issue.
(cherry picked from commit f031e20b516209f19a56ef934e12fea6adec097a)
|
|
76a7d5f1
|
2017-04-26T13:04:23
|
|
travis: cibuild: set up our own sshd server
Some of our tests need to run against an SSH server.
Currently, we simply run against the SSH server provided and started by
Travis itself. As our Linux tests run in a sudo-less environment, we
have no control over its configuration and startup/shutdown procedure.
While this has been no problem until now, it will become a problem as
soon as we migrate over to newer Precise images, as the SSH server does
not have any host keys set up. Luckily, we can simply set up our own
unprivileged SSH server. This has the benefit of us being able to
modify its configuration even in a sudo-less environment.
This commit sets up the unprivileged SSH server on port 2222.
(cherry picked from commit 06619904a2ae2ffd5d8e34ab11d5eb484e9d5762)
|
|
b988f544
|
2017-04-26T13:16:18
|
|
tests: online::clone: use URL of test server
All our tests running against a local SSH server usually read the
server's URL from environment variables. But the online::clone::ssh_cert
test fails to do so and instead always connects to
"ssh://localhost/foo". This assumption breaks whenever the SSH server is
not running on the standard port, e.g. when it is running as a user.
Fix the issue by using the URL provided by the environment.
(cherry picked from commit c2c95ad0a210be4811c247be51664bfe8b2e830a)
|
|
5491d0e1
|
2017-04-21T07:58:46
|
|
travis: upgrade container to Ubuntu 14.04
Ubuntu 12.04 (Precise Pangolin) reaches end of life on April 28th, 2017.
As such, we should update our build infrastructure to use the next
available LTS release, which is Ubuntu 14.04 LTS (Trusty Tahr). Note
that Trusty is still considered beta quality on Travis. But considering
we are able to correctly build and test libgit2, this seems to be a
non-issue for us.
Switch over our default distribution to Trusty. As Precise still has
extended support for paying customers, add an additional job which
compiles libgit2 on the old release.
(cherry picked from commit 7c8d460f8410cf7a110eb10e9c4bafdede6a49c6)
|
|
2bd9b6b6
|
2018-10-05T19:32:32
|
|
Merge pull request #4835 from pks-t/pks/v0.26.7
Security release v0.26.7
|
|
9102156c
|
2018-09-06T13:14:40
|
|
version: raise to v0.26.7
|
|
b1d39682
|
2018-09-06T13:14:19
|
|
CHANGELOG: update for v0.26.7
|
|
b93e82d4
|
2018-10-05T11:47:39
|
|
submodule: ignore path and url attributes if they look like options
These can be used to inject options in an implementation which performs a
recursive clone by executing an external command via crafted url and path
attributes such that it triggers a local executable to be run.
The library itself is not vulnerable, as we do not rely on external
executables, but a user of the library might be relying on them, so we add
this protection.
This matches this aspect of git's fix for CVE-2018-17456.
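The kind of check this describes can be sketched as follows (the helper name is hypothetical, not the actual libgit2 function):
```c
#include <stddef.h>

/* A submodule "url" or "path" value starting with a dash could be
 * interpreted as a command line option by tools that shell out, so it
 * is treated as if the attribute were not set at all. */
static int looks_like_command_line_option(const char *value)
{
	return value != NULL && value[0] == '-';
}
```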
|
|
7e8d9789
|
2018-10-05T11:42:00
|
|
submodule: add failing test for option-injection protection in url and path
|
|
74937431
|
2018-10-05T10:56:02
|
|
config_file: properly ignore includes without "path" value
In case a configuration includes a key "include.path=" without any
value, the generated configuration entry will have its value set to
`NULL`. This is unexpected by the logic handling includes, and as soon
as we try to calculate the included path we will unconditionally
dereference that `NULL` pointer and thus segfault.
Fix the issue by returning early in both `parse_include` and
`parse_conditional_include` in case where the `file` argument is `NULL`.
Add a test to avoid future regression.
The issue has been found by the oss-fuzz project, issue 10810.
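A minimal sketch of the early return, with illustrative names and a trimmed parameter list rather than the exact libgit2 signatures:
```c
static int parse_include(/* parser state omitted, */ const char *file)
{
	/* "include.path=" without a value yields a NULL file: nothing to
	 * include, so return before trying to build the included path. */
	if (file == NULL || *file == '\0')
		return 0;

	/* ... resolve the path relative to the including file and parse it ... */
	return 0;
}
```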
(cherry picked from commit d06d4220eec035466d1a837972a40546b8904330)
|
|
232fc469
|
2018-10-05T10:55:29
|
|
tests: always unlink created config files
While our tests in config::include create a plethora of configuration
files, most of them do not get removed at the end of each test. This can
cause weird interactions with tests that are being run at a later stage
if these later tests try to create files or directories with the same
name as any of the created configuration files.
Fix the issue by unlinking all created files at the end of these tests.
(cherry picked from commit bf662f7cf8daff2357923446cf9d22f5d4b4a66b)
|
|
21a2318b
|
2018-10-03T16:17:21
|
|
smart_pkt: do not accept callers passing in no line length
Right now, we simply ignore the `linelen` parameter of
`git_pkt_parse_line` in case the caller passed in zero. But in fact, we
never want to assume anything about the provided buffer length and
always want the caller to pass in the available number of bytes.
And in fact, checking all the callers, one can see that the function is
never called in cases where the buffer length is zero, and thus we
are safe to remove this check.
(cherry picked from commit 1bc5b05c614c7b10de021fa392943e8e6bd12c77)
|
|
5836d8b6
|
2018-08-09T11:16:15
|
|
smart_pkt: return parsed length via out-parameter
The `parse_len` function currently directly returns the parsed length of
a packet line or an error code in case there was an error. Instead,
convert this to our usual style of using the return value as error code
only and returning the actual value via an out-parameter. Thus, we can
now convert the output parameter to an unsigned type, as the size of a
packet cannot ever be negative.
While at it, we also move the check whether the input buffer is long
enough into `parse_len` itself. We don't really want to pass around
potentially non-NUL-terminated buffers to functions without also passing
along the length, as this is dangerous in the unlikely case where other
callers for that function get added. Note, though, that we need to make
sure not to mess with `GIT_EBUFS` error codes, as these do not indicate
an error but tell the caller to fetch more data.
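The resulting calling convention might look roughly like this; a sketch under those assumptions, not the exact smart_pkt.c code:
```c
#include <ctype.h>
#include <stddef.h>
#include <stdlib.h>
#include <git2.h>   /* for GIT_EBUFS */

/* Return an error code; the parsed length goes into *out. A too-short
 * buffer is reported as GIT_EBUFS so the caller fetches more data. */
static int parse_len(size_t *out, const char *line, size_t linelen)
{
	char num[5] = { 0 };
	size_t i;

	if (linelen < 4)
		return GIT_EBUFS;

	for (i = 0; i < 4; i++) {
		if (!isxdigit((unsigned char)line[i]))
			return -1;
		num[i] = line[i];
	}

	*out = (size_t)strtol(num, NULL, 16);
	return 0;
}
```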
(cherry picked from commit c05790a8a8dd4351e61fc06c0a06c6a6fb6134dc)
|
|
3bbda7d7
|
2018-08-09T11:13:59
|
|
smart_pkt: reorder and rename parameters of `git_pkt_parse_line`
The parameters of the `git_pkt_parse_line` function are quite confusing.
First, there is no real indicator what the `out` parameter is actually
all about, and it's not really clear what the `bufflen` parameter refers
to. Reorder and rename the parameters to make this more obvious.
(cherry picked from commit 0b3dfbf425d689101663341beb94237614f1b5c2)
|
|
a8356af8
|
2018-08-09T11:04:42
|
|
smart_pkt: fix buffer overflow when parsing "unpack" packets
When checking whether an "unpack" packet returned the "ok" status or
not, we use a call to `git__prefixcmp`. In case where the passed line
isn't properly NUL terminated, though, this may overrun the line buffer.
Fix this by using `git__prefixncmp` instead.
(cherry picked from commit 5fabaca801e1f5e7a1054be612e8fabec7cd6a7f)
|
|
02e4b27f
|
2018-08-09T11:03:37
|
|
smart_pkt: fix "ng" parser accepting non-space character
When parsing "ng" packets, we blindly assume that the character
immediately following the "ng" prefix is a space and skip it. As the
calling function doesn't make sure that this is the case, we can thus
end up blindly accepting an invalid packet line.
Fix the issue by using `git__prefixncmp`, checking whether the line
starts with "ng ".
(cherry picked from commit b5ba7af2d30c958b090dcf135749d9afe89ec703)
|
|
8cd0a897
|
2018-08-09T11:01:00
|
|
smart_pkt: fix buffer overflow when parsing "ok" packets
There are two different buffer overflows present when parsing "ok"
packets. First, we never verify whether the line already ends after
"ok", but directly go ahead and also try to skip the expected space
after "ok". Second, we then go ahead and use `strchr` to scan for the
terminating newline character. But in case where the line isn't
terminated correctly, this can overflow the line buffer.
Fix the issues by using `git__prefixncmp` to check for the "ok " prefix
and only checking for a trailing '\n' instead of using `memchr`. This
also fixes the issue of us always requiring a trailing '\n'.
Reported by oss-fuzz, issue 9749:
Crash Type: Heap-buffer-overflow READ {*}
Crash Address: 0x6310000389c0
Crash State:
ok_pkt
git_pkt_parse_line
git_smart__store_refs
Sanitizer: address (ASAN)
(cherry picked from commit a9f1ca09178af0640963e069a2142d5ced53f0b4)
|
|
82c3fc33
|
2018-08-09T10:38:10
|
|
smart_pkt: fix buffer overflow when parsing "ACK" packets
We are being quite lenient when parsing "ACK" packets. First, we didn't
correctly verify that we're not overrunning the provided buffer length,
which we fix here by using `git__prefixncmp` instead of
`git__prefixcmp`. Second, we do not verify that the actual contents make
any sense at all, as we simply ignore errors when parsing the ACK's OID
and any unknown status strings. This may result in a parsed packet
structure with invalid contents, which is being silently passed to the
caller. This is being fixed by performing proper input validation and
checking of return codes.
(cherry picked from commit bc349045b1be8fb3af2b02d8554483869e54d5b8)
|
|
3fd6ce0d
|
2018-08-09T10:57:06
|
|
smart_pkt: adjust style of "ref" packet parsing function
While the function parsing ref packets doesn't have any immediately
obvious buffer overflows, its style is different from all the other
parsing functions. Instead of checking buffer length while we go, it
does a check up-front. This causes the code to seem a lot more magical
than it really is due to some magic constants. Refactor the function to
instead make use of the style of the other packet parsers and verify buffer
lengths as we go.
(cherry picked from commit 5edcf5d190f3b379740b223ff6a649d08fa49581)
|
|
e14dab2f
|
2018-08-09T10:46:58
|
|
smart_pkt: check whether error packets are prefixed with "ERR "
In the `git_pkt_parse_line` function, we determine what kind of packet
a given packet line contains by simply checking for the prefix of that
line. Except for "ERR" packets, we always only check for the immediate
identifier without the trailing space (e.g. we check for an "ACK"
prefix, not for "ACK "). But for "ERR" packets, we do in fact include
the trailing space in our check. This is not really much of a problem at
all, but it is inconsistent with all the other packet types and thus
causes confusion when the `err_pkt` function just immediately skips the
space without checking whether it overflows the line buffer.
Adjust the check in `git_pkt_parse_line` to not include the trailing
space and instead move it into `err_pkt` for consistency.
(cherry picked from commit 786426ea6ec2a76ffe2515dc5182705fb3d44603)
|
|
cfb9802b
|
2018-08-09T10:46:26
|
|
smart_pkt: explicitly avoid integer overflows when parsing packets
When parsing data, progress or error packets, we need to copy the
contents of the rest of the current packet line into the flex-array of
the parsed packet. To keep track of this array's length, we then assign
the remaining length of the packet line to the structure. We do have a
mismatch of types here, as the structure's `len` field is a signed
integer, while the length that we are assigning has type `size_t`.
On nearly all platforms, this shouldn't pose any problems at all. The
line length can at most be 16^4, as the line's length is being encoded
by exactly four hex digits. But on a platforms with 16 bit integers,
this assignment could cause an overflow. While such platforms will
probably only exist in the embedded ecosystem, we still want to avoid
this potential overflow. Thus, we now simply change the structure's
`len` member to be of type `size_t` to avoid any integer promotion.
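Sketched in terms of such a packet structure; the field layout here is illustrative, not the exact libgit2 definition:
```c
#include <stddef.h>

/* On a platform with 16-bit int, a line length near 0xffff would not
 * fit the old signed field; size_t always holds the four-hex-digit
 * maximum. */
typedef struct {
	int type;      /* packet kind */
	size_t len;    /* was: int len */
	char data[1];  /* payload; flex array trimmed for the sketch */
} data_pkt_sketch;
```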
(cherry picked from commit 40fd84cca68db24f325e460a40dabe805e7a5d35)
|
|
a7e87dd5
|
2018-08-09T10:36:44
|
|
smart_pkt: honor line length when determining packet type
When we parse the packet type of an incoming packet line, we do not
verify that we don't overflow the provided line buffer. Fix this by
using `git__prefixncmp` instead and passing in `len`. As we have
previously already verified that `len <= linelen`, we thus won't ever
overflow the provided buffer length.
(cherry picked from commit 4a5804c983317100eed509537edc32d69c8d7aa2)
|
|
5d108c9a
|
2018-10-03T15:39:40
|
|
tests: verify parsing logic for smart packets
The commits following this commit are about to introduce quite a lot of
refactoring and tightening of the smart packet parser. Unfortunately, we
do not yet have any tests despite our online tests that verify that our
parser does not regress upon changes. This is doubly unfortunate as our
online tests aren't executed by default.
Add new tests that exercise the smart parsing logic directly by
executing `git_pkt_parse_line`.
(cherry picked from commit 365d2720c1a5fc89f03fd85265c8b45195c7e4a8)
|
|
a8db6c92
|
2017-11-30T15:40:13
|
|
util: introduce `git__prefixncmp` and consolidate implementations
Introduce `git__prefixncmp`, which will search up to the first `n`
characters of a string to see if it is prefixed by another string.
This is useful for examining if a non-null terminated character
array is prefixed by a particular substring.
Consolidate the various implementations of `git__prefixcmp` around a
single core implementation and add some test cases to validate its
behavior.
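A minimal, self-contained sketch of what a length-bounded prefix compare looks like (the actual libgit2 implementation may differ in detail):
```c
#include <stddef.h>

/* Compare `prefix` against at most the first `str_n` bytes of `str`;
 * returns 0 when the prefix fully matches within that window. */
static int prefixncmp_sketch(const char *str, size_t str_n, const char *prefix)
{
	size_t i;

	for (i = 0; prefix[i] != '\0'; i++) {
		if (i >= str_n)
			return -1; /* str ran out before the prefix did */
		if (str[i] != prefix[i])
			return (unsigned char)str[i] - (unsigned char)prefix[i];
	}

	return 0;
}
```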
(cherry picked from commit 86219f40689c85ec4418575223f4376beffa45af)
|
|
5f557780
|
2018-06-24T19:47:08
|
|
Verify ref_pkt's are long enough
If the remote sends a too-short packet, we'll allow `len` to go
negative and eventually issue a malloc for <= 0 bytes on
```
pkt->head.name = git__malloc(alloclen);
```
(cherry picked from commit 437ee5a70711ac2e027877d71ee4ae17e5ec3d6c)
|
|
9561ec83
|
2017-08-22T16:29:07
|
|
smart: typedef git_pkt_type and clarify recv_pkt return type
(cherry picked from commit 08961c9d0d6927bfcc725bd64c9a87dbcca0c52c)
|
|
e91024e1
|
2018-06-28T05:27:36
|
|
Small style tweak, and set an error
(cherry picked from commit 895a668e19dc596e7b12ea27724ceb7b68556106)
|
|
c83c59b8
|
2018-06-26T02:32:50
|
|
Remove GIT_PKT_PACK entirely
(cherry picked from commit 90cf86070046fcffd5306915b57786da054d8964)
|
|
8ced4627
|
2018-08-11T13:06:14
|
|
Fix 'invalid packet line' for ng packets containing errors
(cherry picked from commit 50dd7fea5ad1bf6c013b72ad0aa803a9c84cdede)
|
|
ffc20564
|
2018-09-05T11:49:13
|
|
Prevent heap-buffer-overflow
When running repack while doing repo writes, `packfile_load__cb()` can see some temporary files in the directory that are bigger than usual, which makes `memcmp` overflow on the `p->pack_name` string. ASAN detected this. This change just uses `strncmp`, which should not have any performance impact and is safe for comparing strings of different sizes.
```
ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61200001a3f3 at pc 0x7f4a9e1976ec bp 0x7ffc1f80e100 sp 0x7ffc1f80d8b0
READ of size 89 at 0x61200001a3f3 thread T0
SCARINESS: 26 (multi-byte-read-heap-buffer-overflow)
#0 0x7f4a9e1976eb in __interceptor_memcmp.part.78 (/build/cfgr-admin#link-tree/libtools_build_sanitizers_asan-ubsan-py.so+0xcf6eb)
#1 0x7f4a518c5431 in packfile_load__cb /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb_pack.c:213
#2 0x7f4a518d9582 in git_path_direach /build/libgit2/0.27.0/src/libgit2-0.27.0/src/path.c:1134
#3 0x7f4a518c58ad in pack_backend__refresh /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb_pack.c:347
#4 0x7f4a518c1b12 in git_odb_refresh /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb.c:1511
#5 0x7f4a518bff5f in git_odb__freshen /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb.c:752
#6 0x7f4a518c17d4 in git_odb_stream_finalize_write /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb.c:1415
#7 0x7f4a51b9d015 in Repository_write /build/pygit2/0.27.0/src/pygit2-0.27.0/src/repository.c:509
```
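A trimmed-down sketch of the bounded comparison described above; the helper and parameter names are illustrative:
```c
#include <string.h>

/* strncmp() never reads past `n` bytes of either string, so comparing
 * the known pack name against a longer temporary file name can no
 * longer run off the end of the shorter allocation. */
static int same_pack_prefix(const char *pack_name, const char *found_path, size_t n)
{
	return strncmp(pack_name, found_path, n) == 0;
}
```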
(cherry picked from commit d22cd1f4a4c10ff47b04c57560e6765d77e5a8fd)
|
|
c4db1715
|
2018-09-03T10:49:46
|
|
config_parse: refactor error handling when parsing multiline variables
The current error handling for the multiline variable parser is a bit
fragile, as each error condition has its own code to clear memory.
Instead, unify error handling as far as possible to avoid this
repetitive code. While at it, make use of `GITERR_CHECK_ALLOC` to
correctly handle OOM situations and verify that the buffer we print into
does not run out of memory either.
(cherry picked from commit bc63e1ef521ab5900dc0b0dcd578b8bf18627fb1)
|
|
c18c913c
|
2018-09-01T03:50:26
|
|
config: Fix a leak parsing multi-line config entries
(cherry picked from commit 38b852558eb518f96c313cdcd9ce5a7af6ded194)
|
|
da70156e
|
2018-08-25T17:04:39
|
|
config: convert unbounded recursion into a loop
(cherry picked from commit a03113e80332fba6c77f43b21d398caad50b4b89)
|
|
e98d0a37
|
2018-08-06T10:49:54
|
|
Merge pull request #4757 from pks-t/pks/v0.26.6
Release v0.26.6
|
|
81532654
|
2018-08-03T11:24:31
|
|
version: bump to v0.26.6
|
|
495bc486
|
2018-08-03T11:24:14
|
|
CHANGELOG.md: document security release v0.26.6
|
|
50705a2a
|
2018-07-19T13:00:42
|
|
smart_pkt: fix potential OOB-read when processing ng packet
OSS-fuzz has reported a potential out-of-bounds read when processing a
"ng" smart packet:
==1==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6310000249c0 at pc 0x000000493a92 bp 0x7ffddc882cd0 sp 0x7ffddc882480
READ of size 65529 at 0x6310000249c0 thread T0
SCARINESS: 26 (multi-byte-read-heap-buffer-overflow)
#0 0x493a91 in __interceptor_strchr.part.35 /src/llvm/projects/compiler-rt/lib/sanitizer_common/sanitizer_common_interceptors.inc:673
#1 0x813960 in ng_pkt libgit2/src/transports/smart_pkt.c:320:14
#2 0x810f79 in git_pkt_parse_line libgit2/src/transports/smart_pkt.c:478:9
#3 0x82c3c9 in git_smart__store_refs libgit2/src/transports/smart_protocol.c:47:12
#4 0x6373a2 in git_smart__connect libgit2/src/transports/smart.c:251:15
#5 0x57688f in git_remote_connect libgit2/src/remote.c:708:15
#6 0x52e59b in LLVMFuzzerTestOneInput /src/download_refs_fuzzer.cc:145:9
#7 0x52ef3f in ExecuteFilesOnyByOne(int, char**) /src/libfuzzer/afl/afl_driver.cpp:301:5
#8 0x52f4ee in main /src/libfuzzer/afl/afl_driver.cpp:339:12
#9 0x7f6c910db82f in __libc_start_main /build/glibc-Cl5G7W/glibc-2.23/csu/libc-start.c:291
#10 0x41d518 in _start
When parsing an "ng" packet, we keep track of both the current position
as well as the remaining length of the packet itself. But instead of
taking care not to exceed the length, we pass the current pointer's
position to `strchr`, which will search for a certain character until
hitting NUL. It is thus possible to create a crafted packet which
doesn't contain a NUL byte to trigger an out-of-bounds read.
Fix the issue by instead using `memchr`, passing the remaining length as
restriction. Furthermore, verify that we actually have enough bytes left
to produce a match at all.
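A small self-contained sketch of the bounded scan; the real parser tracks its position and remaining length slightly differently:
```c
#include <string.h>

/* Bounded counterpart of the old strchr() call: never looks past the
 * remaining `len` bytes, even if the packet lacks a NUL terminator. */
static const char *find_space(const char *line, size_t len)
{
	return len ? memchr(line, ' ', len) : NULL;
}
```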
OSS-Fuzz-Issue: 9406
|
|
7e12ca68
|
2018-08-06T08:57:20
|
|
travis: force usage of Xcode 8.3 image
Travis has upgraded the default Xcode images from 8.3 to 9.4 on 31st
July 2018, including an upgrade to macOS 10.13. Unfortunately, this
breaks our CI builds on our maintenance branches. As we do not want to
include major changes to fix the integration right now, we force use of
the old Xcode 8.3 images.
|
|
a3e53c16
|
2018-07-09T14:11:45
|
|
Merge pull request #4718 from pks-t/pks/v0.26.5
Release v0.26.5
|
|
440a3636
|
2018-07-05T14:33:53
|
|
version: bump to v0.26.5
|
|
188fef5e
|
2018-07-05T14:20:57
|
|
CHANGELOG: add release notes for v0.26.5
|
|
47ea1f58
|
2018-07-05T13:30:46
|
|
delta: fix overflow when computing limit
When checking whether a delta base offset and length fit into the base
we have in memory already, we can trigger an overflow which breaks the
check. This would subsequently result in us reading memory from out of
bounds of the base.
The issue is easily fixed by checking for overflow when adding `off` and
`len`, thus guaranteeing that we are never indexing beyond `base_len`.
This corresponds to the git patch 8960844a7 (check patch_delta bounds
more carefully, 2006-04-07), which adds these overflow checks.
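In code, the guarded check could look roughly like this (names are illustrative, not the actual delta.c code):
```c
#include <stddef.h>
#include <stdint.h>

/* Reject the delta if `off + len` would wrap around or point past the
 * end of the base buffer. */
static int base_range_ok(size_t off, size_t len, size_t base_len)
{
	return len <= SIZE_MAX - off && off + len <= base_len;
}
```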
Reported-by: Riccardo Schirone <rschiron@redhat.com>
|
|
25d4a8c9
|
2018-06-29T09:11:02
|
|
delta: fix out-of-bounds read of delta
When computing the offset and length of the delta base, we repeatedly
increment the `delta` pointer without checking whether we have advanced
past its end already, which can thus result in an out-of-bounds read.
Fix this by repeatedly checking whether we have reached the end. Add a
test which would cause Valgrind to produce an error.
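A sketch of the per-byte bounds check the commit describes; the function and parameter names are illustrative:
```c
/* Consume one byte of the delta, failing instead of reading past the
 * end of the buffer when the delta is truncated. */
static int read_delta_byte(const unsigned char **delta,
                           const unsigned char *delta_end,
                           unsigned char *out)
{
	if (*delta >= delta_end)
		return -1;

	*out = *(*delta)++;
	return 0;
}
```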
Reported-by: Riccardo Schirone <rschiron@redhat.com>
Test-provided-by: Riccardo Schirone <rschiron@redhat.com>
|
|
8ab4f363
|
2018-06-29T07:45:18
|
|
delta: fix sign-extension of big left-shift
Our delta code was originally adapted from JGit, which itself adapted it
from git itself. Due to this heritage, we inherited a bug from git.git
in how we compute the delta offset, which was fixed upstream in
48fb7deb5 (Fix big left-shifts of unsigned char, 2009-06-17). As
explained by Linus:
Shifting 'unsigned char' or 'unsigned short' left can result in sign
extension errors, since the C integer promotion rules means that the
unsigned char/short will get implicitly promoted to a signed 'int' due to
the shift (or due to other operations).
This normally doesn't matter, but if you shift things up sufficiently, it
will now set the sign bit in 'int', and a subsequent cast to a bigger type
(eg 'long' or 'unsigned long') will now sign-extend the value despite the
original expression being unsigned.
One example of this would be something like
unsigned long size;
unsigned char c;
size += c << 24;
where despite all the variables being unsigned, 'c << 24' ends up being a
signed entity, and will get sign-extended when then doing the addition in
an 'unsigned long' type.
Since git uses 'unsigned char' pointers extensively, we actually have this
bug in a couple of places.
In our delta code, we inherited such a bogus shift when computing the
offset at which the delta base is to be found. Due to the sign extension
we can end up with an offset where all the bits are set. This can allow
an arbitrary memory read, as the addition in `base_len < off + len` can
now overflow if `off` has all its bits set.
Fix the issue by casting the result of `*delta++ << 24UL` to an unsigned
integer again. Add a test with a crafted delta that would actually
succeed with an out-of-bounds read in case where the cast wouldn't
exist.
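A self-contained illustration of the unsigned-shift pattern the fix applies (not the actual delta-parsing code):
```c
/* Assemble a 32-bit big-endian value from unsigned char input without
 * the sign extension described above: the byte is widened to an
 * unsigned type before it can reach bit 31. */
static unsigned long read_be32(const unsigned char *p)
{
	return ((unsigned long)p[0] << 24) |
	       ((unsigned long)p[1] << 16) |
	       ((unsigned long)p[2] << 8)  |
	        (unsigned long)p[3];
}
```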
Reported-by: Riccardo Schirone <rschiron@redhat.com>
Test-provided-by: Riccardo Schirone <rschiron@redhat.com>
|
|
ca55adaa
|
2018-06-04T12:18:40
|
|
Merge pull request #4666 from pks-t/pks/v0.26.4
Release v0.26.4
|
|
9fcd4772
|
2018-05-29T14:05:33
|
|
version: bump library version to 0.26.4
|
|
ef5265e2
|
2018-05-29T14:05:10
|
|
CHANGELOG: update for v0.26.4
|
|
ea55c77c
|
2018-05-24T21:58:40
|
|
path: hand-code the zero-width joiner as UTF-8
|
|
95a0ab89
|
2018-05-24T20:28:36
|
|
submodule: plug leaks from the escape detection
|
|
d1aaa5e2
|
2018-05-24T19:05:59
|
|
submodule: replace index with strchr which exists on Windows
|
|
f78b6907
|
2018-05-24T19:00:13
|
|
submodule: the repository for _name_is_valid should not be const
We might modify caches due to us trying to load the configuration to figure out
what kinds of filesystem protections we should have.
|
|
98e8b11c
|
2018-05-23T08:40:17
|
|
path: check for a symlinked .gitmodules in fs-agnostic code
We still compare case-insensitively to protect more thoroughly as we don't know
what specifics we'll see on the system, and it matches git's behaviour.
|
|
1c1e32b7
|
2018-05-22T20:37:23
|
|
checkout: change symlinked .gitmodules file test to expect failure
When dealing with `core.protectNTFS` and `core.protectHFS` we do check
against `.gitmodules` but we still have a failing test as the non-filesystem
codepath does not check for it.
|
|
5b855194
|
2018-05-22T16:13:47
|
|
path: reject .gitmodules as a symlink
Any part of the library which asks the question can pass in the mode to have it
checked against `.gitmodules` being a symlink.
This is particularly relevant for adding entries to the index from the worktree
and for checking out files.
|
|
17df18ae
|
2018-05-22T15:48:38
|
|
index: stat before creating the entry
This is so we have it available for the path validity checking. In a later
commit we will start rejecting `.gitmodules` files as symlinks.
|
|
644c973f
|
2018-05-22T15:21:08
|
|
path: accept the name length as a parameter
We may take in names from the middle of a string so we want the caller to let us
know how long the path component is that we should be checking.
|
|
2143a0de
|
2018-05-22T14:16:45
|
|
checkout: add a failing test for refusing a symlinked .gitmodules
We want to reject these as they cause compatibility issues and can lead to git
writing to files outside of the repository.
|
|
3adb0dc3
|
2018-05-22T13:58:24
|
|
path: expose dotgit detection functions per filesystem
These will be used by the checkout code to detect them for the particular
filesystem they're on.
|
|
dc5591b4
|
2018-05-18T15:16:53
|
|
path: hide the dotgit file functions
These can't go into the public API yet as we don't want to introduce API or ABI
changes in a security release.
|
|
f98d140b
|
2018-05-16T15:56:04
|
|
path: add functions to detect .gitconfig and .gitattributes
|
|
4656e9c4
|
2018-05-16T15:42:08
|
|
path: add a function to detect a .gitmodules file
Given a path component it knows what to pass to the filesystem-specific
functions so we're protected even from trees which try to use the 8.3 naming
rules to get around us matching on the filename exactly.
The logic and test strings come from the equivalent git change.
|
|
9893f56b
|
2018-05-16T14:47:04
|
|
path: provide a generic function for checking dotgit files on NTFS
It checks against the 8.3 shortname variants, including the one which includes
the checksum as part of its name.
|
|
f43ade0e
|
2018-05-16T11:56:04
|
|
path: provide a generic dotgit checking function for HFS
This lets us check for other kinds of reserved files.
|
|
4a1753c2
|
2018-05-14T16:03:15
|
|
submodule: also validate Windows-separated paths for validity
Otherwise we would also admit `..\..\foo\bar` as a valid path and fail to
protect Windows users.
Ideally we would check for both separators without the need for the copied
string, but this'll get us over the RCE.
|
|
f77e40a1
|
2018-04-30T13:47:15
|
|
submodule: ignore submodules which include path traversal in their name
If we decide that the "name" of the submodule (i.e. its path inside
`.git/modules/`) is trying to escape that directory or otherwise trick us, we
ignore the configuration for that submodule.
This leaves us with a half-configured submodule when looking it up by path, but
it's the same result as if the configuration really were missing.
The name check is potentially more strict than it needs to be, but it lets us
re-use the check we're doing for the checkout. The function that encapsulates
this logic is ready to be exported but we don't want to do that in a security
release so it remains internal for now.
|
|
2fc15ae8
|
2018-04-30T13:03:44
|
|
submodule: add a failing test for a submodule escaping .git/modules
We should pretend such submodules do not exist, as they can lead to RCE.
|
|
8af6bce2
|
2018-03-20T07:47:27
|
|
online tests: update auth for bitbucket test
Update the settings to use a specific read-only token for accessing our
test repositories in Bitbucket.
|
|
999200cc
|
2018-03-19T09:20:35
|
|
online::clone: skip creds fallback test
At present, we have three online tests against bitbucket: one which
specifies the credentials in the payload, one which specifies the
correct credentials in the URL and a final one that specifies the
incorrect credentials in the URL. Bitbucket has begun responding to the
latter test with a 403, which causes us to fail.
Break these three tests into separate tests so that we can skip the
latter until this is resolved on Bitbucket's end or until we can change
the test to a different provider.
|
|
b55bb43d
|
2018-03-12T13:50:02
|
|
Merge pull request #4475 from pks-t/pks/v0.26.1-backports
v0.26.3 backports
|
|
bb00842f
|
2018-03-10T17:57:40
|
|
Bump version to v0.26.3
|
|
7c8ddef0
|
2018-03-10T17:57:18
|
|
CHANGELOG.md: update for v0.26.3
|