|
8b68cb23
|
2018-08-26T15:25:15
|
|
clar: don't use a variable named `time`
(cherry picked from commit dbebcb04b42047df0d52ad3515077a134c5b7da7)
|
|
4c48aeb5
|
2018-07-27T23:00:09
|
|
Barebones JUnit XML output
(cherry picked from commit 59f1e477f772c73c76bc654a0853fdcf491a32a7)
|
|
698b0928
|
2018-07-26T23:02:20
|
|
Isolate test reports
This makes it possible to keep track of every test status (even
successful ones), and their errors, if any.
(cherry picked from commit bf9fc126709af948c2a324ceb1b2696046c91cfe)
|
|
57f86c22
|
2018-08-26T15:11:21
|
|
clar: refactor explicitly run test behavior
Previously, supplying `-s` to explicitly enable some test(s) would run
the tests immediately from the argument parser. This forces us to set
up the entire clar environment (for example: sandboxing) before argument
parsing takes place.
Refactor the behavior of `-s` to add the explicitly chosen tests to a
list that is executed later. This untangles the argument parsing from
the setup lifecycle, allowing us to use the arguments to perform the
setup.
(cherry picked from commit 90753a96515f85e2d0e79a16d3a06ba5b363c68e)
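A minimal sketch of the deferred execution this describes, using hypothetical structure and function names rather than clar's actual internals: the argument parser only records the requested filters, and the runner walks the list once sandboxing and the rest of the setup have completed.
```
#include <stddef.h>

#define MAX_EXPLICIT_TESTS 64

struct explicit_tests {
    const char *filters[MAX_EXPLICIT_TESTS];
    size_t count;
};

/* called from the argument parser: remember the filter instead of running it */
static int record_explicit_test(struct explicit_tests *tests, const char *filter)
{
    if (tests->count >= MAX_EXPLICIT_TESTS)
        return -1;
    tests->filters[tests->count++] = filter;
    return 0;
}

/* called by the runner only after the environment has been set up */
static void run_explicit_tests(const struct explicit_tests *tests)
{
    size_t i;
    for (i = 0; i < tests->count; i++) {
        /* run_suite_matching(tests->filters[i]); -- hypothetical runner hook */
        (void)tests->filters[i];
    }
}
```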
|
|
564ab8ae
|
2018-07-26T23:02:34
|
|
Documentation
(cherry picked from commit 3a9b96311d6f0ff364c6417cf3aab7c9745b18d4)
|
|
af405e42
|
2018-09-03T19:27:30
|
|
README: remove travis
(cherry picked from commit 76cfeb20fc75f02eee8e1b672889039be282666f)
|
|
fc9e051d
|
2018-08-30T21:53:58
|
|
ci: remove travis
(cherry picked from commit 6fc946e87025f22315c481509b6658726725b7a4)
|
|
b3ea4a51
|
2018-10-12T12:08:00
|
|
Update .vsts-ci.yml
(cherry picked from commit 7238a1e8c7e6b48439ce553c99b83915cb33b394)
|
|
c7f91f39
|
2018-08-06T12:00:21
|
|
odb: fix use of wrong printf formatters
The `git_odb_stream` members `declared_size` and `received_bytes` are
both of type `git_off_t`, which we usually define to be a 64 bit
signed integer. Thus, passing these members to "PRIdZ" formatters is not
correct, as they are not guaranteed to accept big enough numbers.
Instead, use the "PRId64" formatter, which is able to represent 64 bit
signed integers.
(cherry picked from commit 0fcd05631a1f59e156e613448262800c155e79d0)
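A small standalone illustration of the formatter mismatch, assuming (as the message states) that `git_off_t` is a 64-bit signed integer; the typedef here is a stand-in, not libgit2's definition.
```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* stand-in for libgit2's typedef */
typedef int64_t git_off_t;

int main(void)
{
    git_off_t declared_size = INT64_C(5) * 1024 * 1024 * 1024; /* 5 GiB */

    /* a size_t-sized formatter ("PRIdZ") may only be 32 bits wide;
     * PRId64 always matches the 64-bit argument */
    printf("declared size: %" PRId64 " bytes\n", declared_size);
    return 0;
}
```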
|
|
25392688
|
2018-08-02T20:43:21
|
|
ci: run VSTS builds on master and maint branches
(cherry picked from commit cd7883145f76a24db47dfd911cc8b0b387813c7c)
|
|
fe56cd6c
|
2018-08-31T14:07:59
|
|
Update .vsts-nightly.yml
(cherry picked from commit 40c3a974656a3a9bb0b63e0bb0eb770bb1648303)
|
|
75b0142d
|
2018-08-14T21:26:14
|
|
ci: Correct the status code check so Coverity doesn't force-fail Travis
Otherwise you get something like
Emitted 525 C/C++ compilation units (100%) successfully
525 C/C++ compilation units (100%) are ready for analysis
The cov-build utility completed successfully.
Build successfully submitted.
Received error code 200 from Coverity
travis_time:end:14cf6373:start=1534254309066933889,finish=1534254728190974302,duration=419124040413
The command "if [ -n "$COVERITY" ]; then ../ci/coverity.sh; fi" exited with 1.
travis_time:start:01ed61d4
$ if [ -z "$COVERITY" ]; then ../ci/build.sh && ../ci/test.sh; fi
travis_time:end:01ed61d4:start=1534254728197560961,finish=1534254728202711214,duration=5150253
The command "if [ -z "$COVERITY" ]; then ../ci/build.sh && ../ci/test.sh; fi" exited with 0.
Done. Your build exited with 1.
(cherry picked from commit 351ca66126b08530d96556eb4521b601c69125e3)
|
|
c94dc053
|
2018-08-09T09:39:39
|
|
readme: remove appveyor build badge
(cherry picked from commit 658b8e8a59341a7042a839d0417723d494d7b4cb)
|
|
bc8a33c4
|
2018-08-06T16:33:15
|
|
ci: remove appveyor
(cherry picked from commit 3ce31df3ff34b494a67f7d18dced9930c69883bd)
|
|
75ca6092
|
2018-08-02T14:57:54
|
|
ci: add VSTS build badge to README
(cherry picked from commit a1ae41b80b56cd49ecec049b7d2509f17596e116)
|
|
b609b5cd
|
2018-08-06T07:13:56
|
|
travis: do not execute Coverity analysis for all cron jobs
The new Travis cron job gets executed daily, but our current
configuration will cause each job to execute our Coverity script instead
of the default build and testing scripts. This cannot work, as Coverity
is heavily rate-limiting its API, so our cron builds are doomed to
always fail. What we want to do instead is execute our normal builds,
but add an additional Coverity job.
This can easily be done by adding another Coverity-specific job with a
conditional "type = cron", which sets the "COVERITY" environment
variable. Instead of checking the build type, we then simply check
whether "COVERITY" is set or not.
(cherry picked from commit 0a6c13a239ef5e1427d8317b36c202ca9a580754)
|
|
5e1d64ff
|
2018-08-06T09:12:48
|
|
ci: enable compilation with "-Werror"
During the conversion of our CI scripts in bf418f09c (ci: refactor unix
ci build/test scripts, 2018-07-14), we accidentally dropped the
"-DENABLE_WERROR=ON" switch in our cmake invocation. Re-add it to help
us catch compiler warnings early.
(cherry picked from commit 900846571905cf7a9530d2680c627fde6044db92)
|
|
c4ec76fa
|
2018-08-02T14:47:03
|
|
ci: set PKG_CONFIG_PATH on travis
Homebrew's formula for openssl is "keg-only", which means it is not
installed into /usr/local. On macOS builds, we need to set
PKG_CONFIG_PATH to include its pkgconfig directory.
(cherry picked from commit abf5336304ad7df85bbca2289a61b7799029fa1b)
|
|
5953f789
|
2018-07-29T17:26:44
|
|
ci: run coverity from a nightly VSTS build
(cherry picked from commit d076db11a84b278e260139269c25fe692930f238)
|
|
1f7bb777
|
2018-07-28T22:29:53
|
|
ci: run coverity from travis's cron
Instead of trying to run coverity builds during the regular PR process,
run them during a regularly scheduled cron process. These only need to
run nightly, so it makes sense to bring them out of the PR process.
(cherry picked from commit 6b92368c859d0bf0dcdb15ca8bee520e0f4e84f2)
|
|
d752ab97
|
2018-07-27T16:40:44
|
|
ci: remove unused old ci scripts
(cherry picked from commit 24d175621b7ca6a218c7150ac47ea296f0766fa4)
|
|
6f8cc9b5
|
2018-07-27T12:31:32
|
|
ci: move travis to the new scripts
(cherry picked from commit 24b8dd8275adb13acc68281c200623f636690666)
|
|
ac46b959
|
2018-07-26T15:14:37
|
|
ci: move appveyor to new scripts
(cherry picked from commit 465f8b5163cdee708a6ee81a7c210b2a8baedde4)
|
|
612d50b5
|
2018-07-26T15:06:01
|
|
ci: use a single setup script for mingw
(cherry picked from commit f7bb4ff80bfa5e5173232685b13f143b572f36de)
|
|
99a0a733
|
2018-10-12T12:07:48
|
|
ci: use docker containers from libgit2 account
(cherry picked from commit 6fb63c9285b79bc2c6b67845273abdc7eaacaa1c)
|
|
6fd065f6
|
2018-10-12T12:07:30
|
|
ci: perform clang builds on Linux
(cherry picked from commit dc6e80e2ce7c4d1017ce41a67a0df50b29b36cc4)
|
|
3676834a
|
2018-07-25T01:04:55
|
|
ci: dissociate test from leaks process
The leaks process is not good about handling children. Ensure that its
child is `nohup`ed so that the grandparent shell won't wait for it to
exit.
(cherry picked from commit 6eb97b6ba93019741e7cf6147f0fab05dd3f831d)
|
|
8a54e39c
|
2018-07-21T10:49:23
|
|
ci: some additional debugging
(cherry picked from commit 230eeda8e464a4675e82007d0c505617a6c243ed)
|
|
bc2fec60
|
2018-07-20T19:47:40
|
|
ci: enable leak checking on osx
(cherry picked from commit b00672b9e404adb771601408d4b02711085d6f90)
|
|
d68c293a
|
2018-07-20T18:09:38
|
|
ci: msvc leak-checking
(cherry picked from commit afecd15cf6de53b8a0d28061fd9ffaeac358b91f)
|
|
f2087fc7
|
2018-07-20T14:14:16
|
|
buf tests: allocate a smaller size for the oom
On Linux (where we run valgrind), allocate a smaller but still insanely
large buffer. This will cause malloc to fail but will not cause valgrind
to report a likely error about a negative-sized malloc.
Keep the original buffer size on non-Linux platforms: this is
well-tested on them and changing it may be problematic. On macOS, for
example, using the new size causes `malloc` to print a warning to
stderr.
(cherry picked from commit 219512e7989340d9efae8480fb79c08b91724014)
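A sketch of the idea, with purely illustrative constants (the actual test sizes may differ): request an allocation that is guaranteed to fail while staying below the point where the size would look negative when reinterpreted as a signed value.
```
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
#ifdef __linux__
    /* smaller, yet still far beyond what any allocator can satisfy; as a
     * signed value this stays positive, so valgrind has nothing to flag */
    size_t oom_size = SIZE_MAX / 4;
#else
    /* keep the original, well-tested size on other platforms */
    size_t oom_size = SIZE_MAX - 8;
#endif
    void *p = malloc(oom_size);

    printf("allocating %zu bytes %s\n", oom_size,
           p ? "unexpectedly succeeded" : "failed as expected");
    free(p);
    return 0;
}
```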
|
|
542a403f
|
2018-07-20T17:20:15
|
|
ci: xcode leaks leak-checking
(cherry picked from commit 7f12c12394ce3f5b76a32a312461e95fe9e78ce7)
|
|
7c686457
|
2018-10-12T12:07:09
|
|
ci: valgrind leak-checking
(cherry picked from commit 6d6700d23860d21e8e5043e5c7689a6ed4d8fc70)
|
|
01d00cd9
|
2018-07-14T12:42:50
|
|
ci: introduce vsts builds
(cherry picked from commit 67f5304f552a287dd46951b8ef96695f080c5ff2)
|
|
4e1218b4
|
2018-07-14T13:03:16
|
|
ci: scripts to setup mingw build environment
(cherry picked from commit 9e588060d93da064ca288db021def3d81fa13790)
|
|
95c728de
|
2018-07-14T12:35:02
|
|
ci: set up a macos host
Script to set up dependencies on a macOS build system.
(cherry picked from commit 8734240417a02930593e3a76b56ce6b51441723c)
|
|
064b933d
|
2018-07-14T12:34:05
|
|
ci: setup a linux host
Sets up a linux host to prepare for a build.
(cherry picked from commit 5bb2087b7c60da5c2ce50b9eefeebfbe255c9a0d)
|
|
6dceeb74
|
2018-07-14T12:25:32
|
|
ci: improved flexibility for citest.sh
Refactor citest.sh to enable local testing by developers.
(cherry picked from commit 451b001725e4a97f0a9f1ff1d87a2bf5666850a3)
|
|
ab55feee
|
2018-07-14T12:24:40
|
|
ci: refactor unix ci build/test scripts
(cherry picked from commit bf418f09ce20f9e70c416288798bd7054a5e28d0)
|
|
4609548d
|
2018-07-14T12:22:47
|
|
ci: move tests into citest.ps1
Add citest.ps1 PowerShell script to run the tests.
(cherry picked from commit e2cc5b6d9739591703cfb7f04efa84425ed63332)
|
|
b6faab9d
|
2018-07-14T12:22:16
|
|
ci: Windows PowerShell build script
(cherry picked from commit 3b6281fac165bd910abe7e961e5e65168723a187)
|
|
373bf31f
|
2018-07-04T10:56:56
|
|
tests: simplify cmake test configuration
Simplify the names for the tests, removing the unnecessary
"libgit2-clar" prefix. Make "all" the new default test run, and include
the online tests by default (since HTTPS should always be enabled).
For the CI tests, create an offline-only test, then the various online
tests.
(cherry picked from commit ce798b256b071f57bfd62664626c10339b3e36f7)
|
|
f675c45a
|
2018-04-20T23:11:30
|
|
travis: enable -Werror in the script instead of using the matrix
(cherry picked from commit 61eaaadf7f23a88a5bac67d44099d9d3fabf51fe)
|
|
c84b7a5b
|
2018-04-20T23:11:28
|
|
scripts: remove extraneous semicolons
(cherry picked from commit 149790b96eda8a1e48408decf92ba327479c2c33)
|
|
2f3240ff
|
2018-04-20T23:11:27
|
|
scripts: use leaks on macOS
(cherry picked from commit 4c969618f6ec6caa8facd199c3a6de0e6b06396f)
|
|
9da74c2a
|
2018-04-20T23:11:25
|
|
valgrind: bump num-callers to 50 for fuller stack traces
(cherry picked from commit 0fb8c1d09ca55751aec5f42bae9a3bc19da3248d)
|
|
1ec85a55
|
2018-04-20T23:11:23
|
|
travis: let cmake perform the build & install step
The goal is to let cmake manage the parallelism.
(cherry picked from commit 1f4ada2a428c8d4af3cc0f12086700cda6e19e3a)
|
|
c409e73d
|
2018-04-20T23:11:22
|
|
valgrind: silence invalid free in libc atexit handler
==17851== Invalid free() / delete / delete[] / realloc()
==17851== at 0x4C2BDEC: free (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==17851== by 0x60BBE2B: __libc_freeres (in /lib/x86_64-linux-gnu/libc-2.19.so)
==17851== by 0x4A256BC: _vgnU_freeres (in /usr/lib/valgrind/vgpreload_core-amd64-linux.so)
==17851== by 0x5F8F16A: __run_exit_handlers (exit.c:97)
==17851== by 0x5F8F1F4: exit (exit.c:104)
==17851== by 0x5F74F4B: (below main) (libc-start.c:321)
==17851== Address 0x63153c0 is 0 bytes inside data symbol "noai6ai_cached"
(cherry picked from commit 234443e38be92ce14cff8574050f4714485a0102)
|
|
159d7b6d
|
2018-04-20T23:11:20
|
|
valgrind: silence libssh2 leaking something from gcrypt
==2957== 912 bytes in 19 blocks are still reachable in loss record 323 of 369
==2957== at 0x4C2AB80: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2957== by 0x675B120: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x675BDF8: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x675FE0D: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x6761DC4: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x676477E: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x675B071: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x675B544: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x675914B: gcry_control (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==2957== by 0x5D30EC9: libssh2_init (in /usr/lib/x86_64-linux-gnu/libssh2.so.1.0.1)
==2957== by 0x66BCCD: git_transport_ssh_global_init (ssh.c:910)
==2957== by 0x616443: init_common (global.c:65)
(cherry picked from commit dd75885ab45a590ff20404a3a0f20a1148cd4f64)
|
|
6bec4b8b
|
2018-04-20T23:11:19
|
|
valgrind: skip buf::oom test
(cherry picked from commit 573c408921e02f61501b2982fc10af77a8412631)
|
|
eed5a31d
|
2018-04-20T23:11:17
|
|
valgrind: silence curl_global_init leaks
==18109== 664 bytes in 1 blocks are still reachable in loss record 279 of 339
==18109== at 0x4C2AB80: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==18109== by 0x675B120: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==18109== by 0x675C13C: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==18109== by 0x675C296: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==18109== by 0x679BD14: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==18109== by 0x679CC64: ??? (in /lib/x86_64-linux-gnu/libgcrypt.so.11.8.2)
==18109== by 0x6A64946: ??? (in /usr/lib/x86_64-linux-gnu/libgnutls.so.26.22.6)
==18109== by 0x6A116E8: ??? (in /usr/lib/x86_64-linux-gnu/libgnutls.so.26.22.6)
==18109== by 0x6A01114: gnutls_global_init (in /usr/lib/x86_64-linux-gnu/libgnutls.so.26.22.6)
==18109== by 0x52A6C78: ??? (in /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.3.0)
==18109== by 0x5285ADC: curl_global_init (in /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.3.0)
==18109== by 0x663524: git_curl_stream_global_init (curl.c:44)
(cherry picked from commit c0c9e9eeee5b4577eb930f56b8ddaf788f809067)
|
|
c87426d7
|
2018-04-20T23:11:16
|
|
travis: split valgrind check in its own script
(cherry picked from commit 74b0a4320726cb557bcf73f47ba25ee10c430066)
|
|
b2d7f596
|
2018-04-20T23:11:14
|
|
travis: split testing from building
(cherry picked from commit 2f4e7cb0e8c21cc2d673946eddf9278c2863427b)
|
|
8e0b1729
|
2018-10-05T19:32:10
|
|
Merge pull request #4834 from pks-t/pks/v0.27.5
Security release v0.27.5
|
|
c590b41f
|
2018-09-06T13:14:40
|
|
version: raise to v0.27.5
|
|
2f158e5b
|
2018-09-06T13:14:19
|
|
CHANGELOG: update for v0.27.5
|
|
a221f58e
|
2018-10-05T11:47:39
|
|
submodule: ignore path and url attributes if they look like options
These can be used to inject options into an implementation which performs a
recursive clone by executing an external command, via crafted url and path
attributes that trigger a local executable to be run.
The library itself is not vulnerable, as we do not rely on external executables,
but a user of the library might be relying on them, so we add this protection.
This matches this aspect of git's fix for CVE-2018-17456.
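A hedged sketch of the kind of check described here; the helper names are hypothetical, not libgit2's actual API.
```
#include <stdbool.h>
#include <stddef.h>

/* treat a configured value that starts with a dash as an attempted option
 * injection against an external command (e.g. a recursive-clone helper) */
static bool looks_like_command_line_option(const char *value)
{
    return value != NULL && value[0] == '-';
}

/* drop such values entirely so they can never reach an external command */
static const char *sanitize_submodule_value(const char *value)
{
    return looks_like_command_line_option(value) ? NULL : value;
}
```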
|
|
34597d10
|
2018-10-05T11:42:00
|
|
submodule: add failing test for option-injection protection in url and path
|
|
614c266d
|
2018-10-05T10:56:02
|
|
config_file: properly ignore includes without "path" value
In case a configuration includes a key "include.path=" without any
value, the generated configuration entry will have its value set to
`NULL`. This is unexpected by the logic handling includes, and as soon
as we try to calculate the included path we will unconditionally
dereference that `NULL` pointer and thus segfault.
Fix the issue by returning early in both `parse_include` and
`parse_conditional_include` in case where the `file` argument is `NULL`.
Add a test to avoid future regression.
The issue has been found by the oss-fuzz project, issue 10810.
(cherry picked from commit d06d4220eec035466d1a837972a40546b8904330)
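A minimal sketch of the early-return fix, with a hypothetical signature rather than the real `parse_include`.
```
#include <stddef.h>

/* bail out early when the "include.path" entry carries no value instead of
 * dereferencing a NULL pointer while resolving the included path */
static int parse_include(const char *included_path)
{
    if (included_path == NULL)
        return 0; /* nothing to include; ignore the empty entry */

    /* ... resolve the path relative to the including file and load it ... */
    return 0;
}
```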
|
|
aa220b0f
|
2018-10-05T10:55:29
|
|
tests: always unlink created config files
While our tests in config::include create a plethora of configuration
files, most of which do not get removed at the end of each test. This can
cause weird interactions with tests that are being run at a later stage
if these later tests try to create files or directories with the same
name as any of the created configuration files.
Fix the issue by unlinking all created files at the end of these tests.
(cherry picked from commit bf662f7cf8daff2357923446cf9d22f5d4b4a66b)
|
|
356f60f4
|
2018-08-09T11:04:42
|
|
smart_pkt: fix buffer overflow when parsing "unpack" packets
When checking whether an "unpack" packet returned the "ok" status or
not, we use a call to `git__prefixcmp`. In case the passed line
isn't properly NUL terminated, though, this may overrun the line buffer.
Fix this by using `git__prefixncmp` instead.
(cherry picked from commit 5fabaca801e1f5e7a1054be612e8fabec7cd6a7f)
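The bounded prefix comparison at the heart of this and the following packet fixes can be sketched as below; this is a hypothetical re-implementation of the idea behind `git__prefixncmp`, not libgit2's actual code.
```
#include <stddef.h>

/* compare `str` against `prefix`, never reading more than `len` bytes of
 * `str`, even when it is not NUL terminated */
static int prefixncmp(const char *str, size_t len, const char *prefix)
{
    size_t i;

    for (i = 0; prefix[i] != '\0'; i++) {
        if (i >= len)
            return -1;  /* input ended before the prefix did */
        if (str[i] != prefix[i])
            return (unsigned char)str[i] - (unsigned char)prefix[i];
    }
    return 0;           /* prefix fully matched within bounds */
}
```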
|
|
b5b7c303
|
2018-08-09T11:03:37
|
|
smart_pkt: fix "ng" parser accepting non-space character
When parsing "ng" packets, we blindly assume that the character
immediately following the "ng" prefix is a space and skip it. As the
calling function doesn't make sure that this is the case, we can thus
end up blindly accepting an invalid packet line.
Fix the issue by using `git__prefixncmp`, checking whether the line
starts with "ng ".
(cherry picked from commit b5ba7af2d30c958b090dcf135749d9afe89ec703)
|
|
319f0c03
|
2018-08-09T11:01:00
|
|
smart_pkt: fix buffer overflow when parsing "ok" packets
There are two different buffer overflows present when parsing "ok"
packets. First, we never verify whether the line already ends after
"ok", but directly go ahead and also try to skip the expected space
after "ok". Second, we then go ahead and use `strchr` to scan for the
terminating newline character. But if the line isn't terminated
correctly, this can overflow the line buffer.
Fix the issues by using `git__prefixncmp` to check for the "ok " prefix
and by only checking for a trailing '\n' instead of scanning with `strchr`. This
also fixes the issue of us always requiring a trailing '\n'.
Reported by oss-fuzz, issue 9749:
Crash Type: Heap-buffer-overflow READ {*}
Crash Address: 0x6310000389c0
Crash State:
ok_pkt
git_pkt_parse_line
git_smart__store_refs
Sanitizer: address (ASAN)
(cherry picked from commit a9f1ca09178af0640963e069a2142d5ced53f0b4)
|
|
0599c267
|
2018-08-09T10:38:10
|
|
smart_pkt: fix buffer overflow when parsing "ACK" packets
We are being quite lenient when parsing "ACK" packets. First, we didn't
correctly verify that we're not overrunning the provided buffer length,
which we fix here by using `git__prefixncmp` instead of
`git__prefixcmp`. Second, we do not verify that the actual contents make
any sense at all, as we simply ignore errors when parsing the ACK's OID
and any unknown status strings. This may result in a parsed packet
structure with invalid contents, which is being silently passed to the
caller. This is being fixed by performing proper input validation and
checking of return codes.
(cherry picked from commit bc349045b1be8fb3af2b02d8554483869e54d5b8)
|
|
0fe87761
|
2018-08-09T10:57:06
|
|
smart_pkt: adjust style of "ref" packet parsing function
While the function parsing ref packets doesn't have any immediately
obvious buffer overflows, its style is different from all the other
parsing functions. Instead of checking buffer length while we go, it
does a check up-front. This causes the code to seem a lot more magical
than it really is due to some magic constants. Refactor the function to
instead make use of the style of other packet parser and verify buffer
lengths as we go.
(cherry picked from commit 5edcf5d190f3b379740b223ff6a649d08fa49581)
|
|
f5c3442b
|
2018-10-03T16:17:21
|
|
smart_pkt: do not accept callers passing in no line length
Right now, we simply ignore the `linelen` parameter of
`git_pkt_parse_line` in case the caller passed in zero. But in fact, we
never want to assume anything about the provided buffer length and
always want the caller to pass in the available number of bytes.
And in fact, checking all the callers, one can see that the function is
never called with a buffer length of zero, and thus we
are safe to remove this check.
(cherry picked from commit 1bc5b05c614c7b10de021fa392943e8e6bd12c77)
|
|
f7c3f6cc
|
2018-08-09T11:16:15
|
|
smart_pkt: return parsed length via out-parameter
The `parse_len` function currently directly returns the parsed length of
a packet line or an error code in case there was an error. Instead,
convert this to our usual style of using the return value as error code
only and returning the actual value via an out-parameter. Thus, we can
now convert the output parameter to an unsigned type, as the size of a
packet cannot ever be negative.
While at it, we also move the check whether the input buffer is long
enough into `parse_len` itself. We don't really want to pass around
potentially non-NUL-terminated buffers to functions without also passing
along the length, as this is dangerous in the unlikely case where other
callers for that function get added. Note, though, that we must not mess
with `GIT_EBUFS` error codes, as these do not indicate an error to the
caller but rather that it needs to fetch more data.
(cherry picked from commit c05790a8a8dd4351e61fc06c0a06c6a6fb6134dc)
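A sketch of the out-parameter convention described above, assuming a four-hex-digit length prefix; the error constants and signature are illustrative, not the real `parse_len`.
```
#include <stddef.h>

#define SKETCH_OK     0
#define SKETCH_ERROR -1

/* return only an error code; the parsed, necessarily non-negative length
 * goes to the unsigned out-parameter */
static int parse_len(size_t *out, const char *line, size_t linelen)
{
    size_t value = 0, i;

    if (linelen < 4)
        return SKETCH_ERROR;  /* not enough bytes for the four hex digits */

    for (i = 0; i < 4; i++) {
        char c = line[i];
        int digit;

        if (c >= '0' && c <= '9')
            digit = c - '0';
        else if (c >= 'a' && c <= 'f')
            digit = c - 'a' + 10;
        else if (c >= 'A' && c <= 'F')
            digit = c - 'A' + 10;
        else
            return SKETCH_ERROR;  /* invalid hex digit */

        value = (value << 4) | (size_t)digit;
    }

    *out = value;
    return SKETCH_OK;
}
```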
|
|
7e3cd611
|
2018-08-09T11:13:59
|
|
smart_pkt: reorder and rename parameters of `git_pkt_parse_line`
The parameters of the `git_pkt_parse_line` function are quite confusing.
First, there is no real indicator what the `out` parameter is actually
all about, and it's not really clear what the `bufflen` parameter refers
to. Reorder and rename the parameters to make this more obvious.
(cherry picked from commit 0b3dfbf425d689101663341beb94237614f1b5c2)
|
|
97156614
|
2018-08-09T10:46:58
|
|
smart_pkt: check whether error packets are prefixed with "ERR "
In the `git_pkt_parse_line` function, we determine what kind of packet
a given packet line contains by simply checking for the prefix of that
line. Except for "ERR" packets, we always only check for the immediate
identifier without the trailing space (e.g. we check for an "ACK"
prefix, not for "ACK "). But for "ERR" packets, we do in fact include
the trailing space in our check. This is not really much of a problem at
all, but it is inconsistent with all the other packet types and thus
causes confusion when the `err_pkt` function just immediately skips the
space without checking whether it overflows the line buffer.
Adjust the check in `git_pkt_parse_line` to not include the trailing
space and instead move it into `err_pkt` for consistency.
(cherry picked from commit 786426ea6ec2a76ffe2515dc5182705fb3d44603)
|
|
5c0d1100
|
2018-08-09T10:46:26
|
|
smart_pkt: explicitly avoid integer overflows when parsing packets
When parsing data, progress or error packets, we need to copy the
contents of the rest of the current packet line into the flex-array of
the parsed packet. To keep track of this array's length, we then assign
the remaining length of the packet line to the structure. We do have a
mismatch of types here, as the structure's `len` field is a signed
integer, while the length that we are assigning has type `size_t`.
On nearly all platforms, this shouldn't pose any problems at all. The
line length can at most be 16^4, as the line's length is being encoded
by exactly four hex digits. But on a platforms with 16 bit integers,
this assignment could cause an overflow. While such platforms will
probably only exist in the embedded ecosystem, we still want to avoid
this potential overflow. Thus, we now simply change the structure's
`len` member to be of type `size_t` to avoid any integer promotion.
(cherry picked from commit 40fd84cca68db24f325e460a40dabe805e7a5d35)
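An illustrative sketch of the structure change (the `len` field name follows the commit message, the rest of the layout is made up).
```
#include <stddef.h>

struct data_pkt_sketch {
    int type;
    size_t len;   /* was a signed int; a pkt-line can carry up to 0xffff bytes */
    char data[];  /* flex-array holding the remaining line contents */
};

static void set_packet_length(struct data_pkt_sketch *pkt, size_t remaining)
{
    /* size_t to size_t: no narrowing, even where int is only 16 bits wide */
    pkt->len = remaining;
}
```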
|
|
20e58aac
|
2018-08-09T10:36:44
|
|
smart_pkt: honor line length when determining packet type
When we parse the packet type of an incoming packet line, we do not
verify that we don't overflow the provided line buffer. Fix this by
using `git__prefixncmp` instead and passing in `len`. As we have
previously already verified that `len <= linelen`, we thus won't ever
overflow the provided buffer length.
(cherry picked from commit 4a5804c983317100eed509537edc32d69c8d7aa2)
|
|
bd069448
|
2018-10-03T15:39:40
|
|
tests: verify parsing logic for smart packets
The commits following this commit are about to introduce quite a lot of
refactoring and tightening of the smart packet parser. Unfortunately, we
do not yet have any tests despite our online tests that verify that our
parser does not regress upon changes. This is doubly unfortunate as our
online tests aren't executed by default.
Add new tests that exercise the smart parsing logic directly by
executing `git_pkt_parse_line`.
(cherry picked from commit 365d2720c1a5fc89f03fd85265c8b45195c7e4a8)
|
|
003cbc3f
|
2018-06-24T19:47:08
|
|
Verify ref_pkt's are long enough
If the remote sends a too-short packet, we'll allow `len` to go
negative and eventually issue a malloc for <= 0 bytes on
```
pkt->head.name = git__malloc(alloclen);
```
(cherry picked from commit 437ee5a70711ac2e027877d71ee4ae17e5ec3d6c)
|
|
4385aef3
|
2017-08-22T16:29:07
|
|
smart: typedef git_pkt_type and clarify recv_pkt return type
(cherry picked from commit 08961c9d0d6927bfcc725bd64c9a87dbcca0c52c)
|
|
21ffc57d
|
2018-06-28T05:27:36
|
|
Small style tweak, and set an error
(cherry picked from commit 895a668e19dc596e7b12ea27724ceb7b68556106)
|
|
be98c9e9
|
2018-06-26T02:32:50
|
|
Remove GIT_PKT_PACK entirely
(cherry picked from commit 90cf86070046fcffd5306915b57786da054d8964)
|
|
bf4342f7
|
2018-08-11T13:06:14
|
|
Fix 'invalid packet line' for ng packets containing errors
(cherry picked from commit 50dd7fea5ad1bf6c013b72ad0aa803a9c84cdede)
|
|
15e92284
|
2018-09-05T11:49:13
|
|
Prevent heap-buffer-overflow
When running repack while doing repo writes, `packfile_load__cb()` can see some temporary files in the directory that are bigger than usual, which makes `memcmp` overflow on the `p->pack_name` string. ASAN detected this. This change just uses `strncmp`, which should not have any performance impact and is safe for comparing strings of different sizes.
```
ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61200001a3f3 at pc 0x7f4a9e1976ec bp 0x7ffc1f80e100 sp 0x7ffc1f80d8b0
READ of size 89 at 0x61200001a3f3 thread T0
SCARINESS: 26 (multi-byte-read-heap-buffer-overflow)
#0 0x7f4a9e1976eb in __interceptor_memcmp.part.78 (/build/cfgr-admin#link-tree/libtools_build_sanitizers_asan-ubsan-py.so+0xcf6eb)
#1 0x7f4a518c5431 in packfile_load__cb /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb_pack.c:213
#2 0x7f4a518d9582 in git_path_direach /build/libgit2/0.27.0/src/libgit2-0.27.0/src/path.c:1134
#3 0x7f4a518c58ad in pack_backend__refresh /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb_pack.c:347
#4 0x7f4a518c1b12 in git_odb_refresh /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb.c:1511
#5 0x7f4a518bff5f in git_odb__freshen /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb.c:752
#6 0x7f4a518c17d4 in git_odb_stream_finalize_write /build/libgit2/0.27.0/src/libgit2-0.27.0/src/odb.c:1415
#7 0x7f4a51b9d015 in Repository_write /build/pygit2/0.27.0/src/pygit2-0.27.0/src/repository.c:509
```
(cherry picked from commit d22cd1f4a4c10ff47b04c57560e6765d77e5a8fd)
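A standalone illustration of the failure mode and the fix, with made-up file names; it is not the actual `packfile_load__cb` code.
```
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *pack_name = "pack-1234.pack";      /* stand-in for p->pack_name */
    const char *dir_entry = "pack-1234.pack_tmp";  /* longer temporary file */
    size_t n = strlen(dir_entry);

    /* memcmp(pack_name, dir_entry, n) would read n bytes from pack_name,
     * i.e. past its terminator, because pack_name is the shorter string;
     * strncmp stops at the NUL terminator instead */
    if (strncmp(pack_name, dir_entry, n) != 0)
        printf("names differ, no out-of-bounds read\n");
    return 0;
}
```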
|
|
39706ded
|
2018-09-03T10:49:46
|
|
config_parse: refactor error handling when parsing multiline variables
The current error handling for the multiline variable parser is a bit
fragile, as each error condition has its own code to clear memory.
Instead, unify error handling as far as possible to avoid this
repetitive code. While at it, make use of `GITERR_CHECK_ALLOC` to
correctly handle OOM situations and verify that the buffer we print into
does not run out of memory either.
(cherry picked from commit bc63e1ef521ab5900dc0b0dcd578b8bf18627fb1)
|
|
68823395
|
2018-09-01T03:50:26
|
|
config: Fix a leak parsing multi-line config entries
(cherry picked from commit 38b852558eb518f96c313cdcd9ce5a7af6ded194)
|
|
24c7b23d
|
2018-08-25T17:04:39
|
|
config: convert unbounded recursion into a loop
(cherry picked from commit a03113e80332fba6c77f43b21d398caad50b4b89)
|
|
8b89f362
|
2018-08-06T10:49:49
|
|
Merge pull request #4756 from pks-t/pks/v0.27.4
Release v0.27.4
|
|
c5dd0ea1
|
2018-08-03T11:24:31
|
|
version: bump to v0.27.4
|
|
be0edb43
|
2018-08-03T11:24:14
|
|
CHANGELOG.md: document security release v0.27.4
|
|
1f9a8510
|
2018-07-19T13:00:42
|
|
smart_pkt: fix potential OOB-read when processing ng packet
OSS-fuzz has reported a potential out-of-bounds read when processing a
"ng" smart packet:
==1==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6310000249c0 at pc 0x000000493a92 bp 0x7ffddc882cd0 sp 0x7ffddc882480
READ of size 65529 at 0x6310000249c0 thread T0
SCARINESS: 26 (multi-byte-read-heap-buffer-overflow)
#0 0x493a91 in __interceptor_strchr.part.35 /src/llvm/projects/compiler-rt/lib/sanitizer_common/sanitizer_common_interceptors.inc:673
#1 0x813960 in ng_pkt libgit2/src/transports/smart_pkt.c:320:14
#2 0x810f79 in git_pkt_parse_line libgit2/src/transports/smart_pkt.c:478:9
#3 0x82c3c9 in git_smart__store_refs libgit2/src/transports/smart_protocol.c:47:12
#4 0x6373a2 in git_smart__connect libgit2/src/transports/smart.c:251:15
#5 0x57688f in git_remote_connect libgit2/src/remote.c:708:15
#6 0x52e59b in LLVMFuzzerTestOneInput /src/download_refs_fuzzer.cc:145:9
#7 0x52ef3f in ExecuteFilesOnyByOne(int, char**) /src/libfuzzer/afl/afl_driver.cpp:301:5
#8 0x52f4ee in main /src/libfuzzer/afl/afl_driver.cpp:339:12
#9 0x7f6c910db82f in __libc_start_main /build/glibc-Cl5G7W/glibc-2.23/csu/libc-start.c:291
#10 0x41d518 in _start
When parsing an "ng" packet, we keep track of both the current position
as well as the remaining length of the packet itself. But instead of
taking care not to exceed the length, we pass the current pointer's
position to `strchr`, which will search for a certain character until
hitting NUL. It is thus possible to create a crafted packet which
doesn't contain a NUL byte to trigger an out-of-bounds read.
Fix the issue by instead using `memchr`, passing the remaining length as
restriction. Furthermore, verify that we actually have enough bytes left
to produce a match at all.
OSS-Fuzz-Issue: 9406
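A hedged sketch of the bounded scan the fix describes; the helper is hypothetical, not the actual `ng_pkt` code.
```
#include <stddef.h>
#include <string.h>

/* search for `wanted` only within the bytes known to belong to the current
 * packet line; unlike strchr, this cannot run past the buffer when the line
 * lacks a NUL terminator */
static const char *find_in_line(const char *pos, size_t remaining, char wanted)
{
    if (remaining == 0)
        return NULL;  /* nothing left to scan, so no match is possible */
    return memchr(pos, wanted, remaining);
}
```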
|
|
faa7650b
|
2018-08-06T08:46:49
|
|
travis: force usage of Xcode 8.3 image
Travis has upgraded the default Xcode images from 8.3 to 9.4 on 31st
July 2018, including an upgrade to macOS 10.13. Unfortunately, this
breaks our CI builds on our maintenance branches. As we do not want to
include major changes to fix the integration right now, we force use of
the old Xcode 8.3 images.
|
|
504bd54a
|
2018-07-09T13:26:21
|
|
Merge pull request #4717 from pks-t/pks/v0.27.3
Release v0.27.3
|
|
8fbd7563
|
2018-07-05T14:34:24
|
|
version: bump to v0.27.3
|
|
36f07807
|
2018-07-05T14:20:57
|
|
CHANGELOG: add release notes for v0.27.3
|
|
c1577110
|
2018-07-05T13:30:46
|
|
delta: fix overflow when computing limit
When checking whether a delta base offset and length fit into the base
we have in memory already, we can trigger an overflow which breaks the
check. This would subsequently result in us reading memory from out of
bounds of the base.
The issue is easily fixed by checking for overflow when adding `off` and
`len`, thus guaranteeing that we are never indexing beyond `base_len`.
This corresponds to the git patch 8960844a7 (check patch_delta bounds
more carefully, 2006-04-07), which adds these overflow checks.
Reported-by: Riccardo Schirone <rschiron@redhat.com>
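A minimal sketch of an overflow-safe bounds check in the spirit of this fix (illustrative only, not the actual delta code).
```
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* verify that off + len cannot wrap around before comparing it against the
 * length of the base we hold in memory */
static bool base_range_is_valid(size_t base_len, size_t off, size_t len)
{
    if (len > SIZE_MAX - off)
        return false;           /* off + len would overflow */
    return off + len <= base_len;
}
```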
|
|
9844d38b
|
2018-06-29T09:11:02
|
|
delta: fix out-of-bounds read of delta
When computing the offset and length of the delta base, we repeatedly
increment the `delta` pointer without checking whether we have advanced
past its end already, which can thus result in an out-of-bounds read.
Fix this by repeatedly checking whether we have reached the end. Add a
test which would cause Valgrind to produce an error.
Reported-by: Riccardo Schirone <rschiron@redhat.com>
Test-provided-by: Riccardo Schirone <rschiron@redhat.com>
|
|
3f461902
|
2018-06-29T07:45:18
|
|
delta: fix sign-extension of big left-shift
Our delta code was originally adapted from JGit, which itself adapted it
from git itself. Due to this heritage, we inherited a bug from git.git
in how we compute the delta offset, which was fixed upstream in
48fb7deb5 (Fix big left-shifts of unsigned char, 2009-06-17). As
explained by Linus:
Shifting 'unsigned char' or 'unsigned short' left can result in sign
extension errors, since the C integer promotion rules means that the
unsigned char/short will get implicitly promoted to a signed 'int' due to
the shift (or due to other operations).
This normally doesn't matter, but if you shift things up sufficiently, it
will now set the sign bit in 'int', and a subsequent cast to a bigger type
(eg 'long' or 'unsigned long') will now sign-extend the value despite the
original expression being unsigned.
One example of this would be something like
unsigned long size;
unsigned char c;
size += c << 24;
where despite all the variables being unsigned, 'c << 24' ends up being a
signed entity, and will get sign-extended when then doing the addition in
an 'unsigned long' type.
Since git uses 'unsigned char' pointers extensively, we actually have this
bug in a couple of places.
In our delta code, we inherited such a bogus shift when computing the
offset at which the delta base is to be found. Due to the sign extension
we can end up with an offset where all the bits are set. This can allow
an arbitrary memory read, as the addition in `base_len < off + len` can
now overflow if `off` has all its bits set.
Fix the issue by casting the result of `*delta++ << 24UL` to an unsigned
integer again. Add a test with a crafted delta that would actually
succeed with an out-of-bounds read in case where the cast wouldn't
exist.
Reported-by: Riccardo Schirone <rschiron@redhat.com>
Test-provided-by: Riccardo Schirone <rschiron@redhat.com>
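A standalone illustration of the promotion pitfall quoted above and the unsigned cast that avoids it (not the delta parser itself).
```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    unsigned char c = 0x80;
    uint64_t size = 0;

    /* the naive `size += c << 24;` promotes c to a signed int before the
     * shift, so a byte with its high bit set turns negative and would
     * sign-extend into the upper half of the 64-bit accumulator; widening
     * (or casting back to unsigned, as the fix does) avoids that */
    size += (uint64_t)c << 24;

    printf("size: 0x%016" PRIx64 "\n", size); /* 0x0000000080000000 */
    return 0;
}
```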
|
|
8d36dc62
|
2018-06-10T18:06:38
|
|
Merge pull request #4632 from pks-t/pks/v0.27.1
Bugfix release v0.27.2
|
|
853ef86a
|
2018-05-30T08:15:30
|
|
version: bump soversion to v0.27.2
|
|
0818adec
|
2018-04-20T11:29:27
|
|
CHANGELOG.md: update for release v0.27.2
|
|
35865117
|
2018-06-06T09:23:01
|
|
tests: submodule: do not rely on config iteration order
The test submodule::lookup::duplicated_path, which tries to verify that
we detect submodules with duplicated paths, currently relies on the
gitmodules file of "submod2_target". While this file has two gitmodules
with the same path, one of these gitmodules has an empty name and thus
does not pass `git_submodule_name_is_valid`. Because of this, the test
is in fact dependent on the iteration order in which we process the
submodules. In fact the "valid" submodule comes first, the "invalid"
submodule will cause the desired error. In fact the "invalid" submodule
comes first, it will be skipped due to its name being invalid, and we
will not see the desired error. While this works on the master branch
just right due to the refactoring of our config code, where iteration
order is now deterministic, this breaks on all older maintenance
branches.
Fix the issue by simply using `cl_git_rewritefile` to rewrite the
gitmodules file. This greatly simplifies the test and also makes the
intentions of it much clearer.
|
|
7392799d
|
2018-05-30T08:35:06
|
|
submodule: detect duplicated submodule paths
When loading submodule names, we build a map of submodule paths and
their respective names. While looping over the configuration keys,
though, we do not check whether a submodule path was seen already. This
leads to a memory leak in case we have multiple submodules with the same
path, as we just overwrite the old value in the map in that case.
Fix the error by verifying that the path to be added is not yet part of
the string map. Git does not allow having multiple submodules for a
path anyway, so we now do the same and detect this duplication,
reporting it to the user.
|
|
f2e5c092
|
2018-04-27T15:31:43
|
|
cmake: remove now-useless LIBGIT2_LIBDIRS handling
With the recent change of always resolving pkg-config libraries to their
full path, we do not have to manage the LIBGIT2_LIBDIRS variable
anymore. The only other remaining user of LIBGIT2_LIBDIRS is winhttp,
which is a CMake-style library target and can thus be resolved by CMake
automatically.
Remove the variable to simplify our build system a bit.
|
|
0c8ff50f
|
2018-04-27T10:38:49
|
|
cmake: resolve libraries found by pkg-config
Libraries found by CMake modules are usually handled with their full
path. This makes linking against those libraries a lot more robust when
it comes to libraries in non-standard locations, as otherwise we might
mix up libraries from different locations when link directories are
given.
One exception is libraries found by PKG_CHECK_MODULES. Instead of
returning libraries with their complete path, it will return the
variable names as well as a set of link directories. In cases where
multiple sets of the same library are installed in different locations,
this can lead the compiler to link against the wrong libraries in the
end, when link directories of other dependencies are added.
To fix this shortcoming, we need to manually resolve library paths
returned by CMake against their respective library directories. This is
an easy task to do with `FIND_LIBRARY`.
|
|
b2f3ff56
|
2018-04-19T01:08:18
|
|
worktree: fix calloc of the wrong object type
|