tests


Log

Author Commit Date Message
Patrick Steinhardt 2362ce6c 2017-06-07T13:06:53 tests: online::clone: inline creds-test with nonexistent URL Right now, we test our credential callback code twice, once via SSH on localhost and once via a non-existent GitHub repository. While the first URL makes sense to be configurable, it does not make sense to hard-code the non-existing repository, which requires us to call tests multiple times. Instead, we can just inline the URL into another set of tests. (cherry picked from commit 54a1bf057a1123cf55ac3447c79761c817382f47)
Patrick Steinhardt a1a495f2 2017-06-07T12:48:48 tests: online::clone: construct credential-URL from environment We support two types of passing credentials to the proxy, either via the URL or explicitly by specifying user and password. We test these types by modifying the proxy URL and executing the tests twice, which is in fact unnecessary and requires us to maintain the list of environment variables and test executions across multiple CI infrastructures. To fix the situation, we can just always pass the host, port, user and password to the tests. The tests can then assemble the complete URL either with or without included credentials, allowing us to test both cases in-process. (cherry picked from commit fea6092079d5c09b499e472efead2f7aa81ce8a1)
Patrick Steinhardt 89641431 2017-06-07T11:06:01 tests: perf: build but exclude performance tests by default Our performance tests (or to be more concrete, our single performance test) are not built by default, as they are always #ifdef'd out. While it is true that we don't want to run performance tests by default, not compiling them at all may cause code rot and is thus an unfavorable approach to handle this. We can easily improve this situation: this commit removes the #ifdef, causing the code to always be compiled. Furthermore, we add `-xperf` to the default command line parameters of `generate.py`, thus causing the tests to be excluded by default. Due to this approach, we are now able to execute the performance tests by passing `-sperf` to `libgit2_clar`. Unfortunately, we cannot execute the performance tests on Travis or AppVeyor as they rely on history being available for the libgit2 repository. As both only perform shallow clones, though, that history is not available there. (cherry picked from commit 543ec149b86a68e12dd141a6141e82850dabbf21)
Patrick Steinhardt 98378a3f 2017-06-07T11:00:26 tests: iterator::workdir: fix reference count in stale test The test `iterator::workdir::filesystem_gunk` is usually not executed, as it is guarded by the environment variable "GITTEST_INVASIVE_SPEED" due to its effects on speed. As such, it has become stale and does not account for new references which have meanwhile been added to the testrepo, causing it to fail. Fix this by raising the number of expected references to 15. (cherry picked from commit b8c14499f9940feaab08a23651a2ef24d27b17b7)
Patrick Steinhardt d2bbea82 2017-06-07T10:59:31 tests: iterator_helpers: assert number of iterator items When the function `expect_iterator_items` surpasses the number of expected items, we simply break the loop. This causes us to trigger an assert later on which has no message attached, which is annoying when trying to locate the root cause of the error. Instead, directly assert that the current count is still smaller than or equal to the expected count inside of the loop. (cherry picked from commit 9aba76364fcb4755930856a7bafc5294ed3ee944)
Patrick Steinhardt 293c5ef2 2017-06-07T10:59:03 tests: status::worktree: indicate skipped tests on Win32 Some function bodies of tests which are not applicable to the Win32 platform are completely #ifdef'd out instead of calling `cl_skip()`. This leaves us with no indication that these tests are not being executed at all and may thus cause decreased scrutiny when investigating skipped tests. Improve the situation by calling `cl_skip()` instead of just doing nothing. (cherry picked from commit 72c28ab011759dce113c2a0c7c36ebcd56bd6ddf)
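For illustration, the convention described above looks roughly like this in a clar test (the test name is hypothetical):

    void test_status_worktree__symlink_perms(void)
    {
    #ifdef GIT_WIN32
        cl_skip(); /* not applicable on Win32: report it as skipped */
    #else
        /* ... actual test body ... */
    #endif
    }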
Patrick Steinhardt b988f544 2017-04-26T13:16:18 tests: online::clone: use URL of test server All our tests running against a local SSH server usually read the server's URL from environment variables. But online::clone::ssh_cert test fails to do so and instead always connects to "ssh://localhost/foo". This assumption breaks whenever the SSH server is not running on the standard port, e.g. when it is running as a user. Fix the issue by using the URL provided by the environment. (cherry picked from commit c2c95ad0a210be4811c247be51664bfe8b2e830a)
Carlos Martín Nieto 7e8d9789 2018-10-05T11:42:00 submodule: add failing test for option-injection protection in url and path
Patrick Steinhardt 74937431 2018-10-05T10:56:02 config_file: properly ignore includes without "path" value In case a configuration includes a key "include.path=" without any value, the generated configuration entry will have its value set to `NULL`. This is unexpected by the logic handling includes, and as soon as we try to calculate the included path we will unconditionally dereference that `NULL` pointer and thus segfault. Fix the issue by returning early in both `parse_include` and `parse_conditional_include` in case where the `file` argument is `NULL`. Add a test to avoid future regression. The issue has been found by the oss-fuzz project, issue 10810. (cherry picked from commit d06d4220eec035466d1a837972a40546b8904330)
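For context, a configuration file along these lines (a minimal illustrative reproduction, not the exact fuzzer input) yields an include entry whose value is `NULL` and previously crashed the include handling:

    [include]
        path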
Patrick Steinhardt 232fc469 2018-10-05T10:55:29 tests: always unlink created config files While our tests in config::include create a plethora of configuration files, most of them do not get removed at the end of each test. This can cause weird interactions with tests that are being run at a later stage if these later tests try to create files or directories with the same name as any of the created configuration files. Fix the issue by unlinking all created files at the end of these tests. (cherry picked from commit bf662f7cf8daff2357923446cf9d22f5d4b4a66b)
Patrick Steinhardt 3bbda7d7 2018-08-09T11:13:59 smart_pkt: reorder and rename parameters of `git_pkt_parse_line` The parameters of the `git_pkt_parse_line` function are quite confusing. First, there is no real indicator what the `out` parameter is actually all about, and it's not really clear what the `bufflen` parameter refers to. Reorder and rename the parameters to make this more obvious. (cherry picked from commit 0b3dfbf425d689101663341beb94237614f1b5c2)
Patrick Steinhardt 8cd0a897 2018-08-09T11:01:00 smart_pkt: fix buffer overflow when parsing "ok" packets There are two different buffer overflows present when parsing "ok" packets. First, we never verify whether the line already ends after "ok", but directly go ahead and also try to skip the expected space after "ok". Second, we then go ahead and use `strchr` to scan for the terminating newline character. But in cases where the line isn't terminated correctly, this can overflow the line buffer. Fix the issues by using `git__prefixncmp` to check for the "ok " prefix and only checking for a trailing '\n' instead of using `strchr`. This also fixes the issue of us always requiring a trailing '\n'. Reported by oss-fuzz, issue 9749: Crash Type: Heap-buffer-overflow READ {*} Crash Address: 0x6310000389c0 Crash State: ok_pkt git_pkt_parse_line git_smart__store_refs Sanitizer: address (ASAN) (cherry picked from commit a9f1ca09178af0640963e069a2142d5ced53f0b4)
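A minimal sketch of the bounded parsing pattern described here, written against plain C rather than the actual libgit2 helpers:

    #include <stddef.h>
    #include <string.h>

    /* Parse an "ok <refname>\n" line of `len` bytes that is not necessarily
     * NUL-terminated; never read past `line + len`. */
    static int parse_ok_line(const char *line, size_t len)
    {
        if (len < 4 || strncmp(line, "ok ", 3) != 0)
            return -1; /* too short or wrong prefix */
        if (line[len - 1] != '\n')
            return -1; /* line is not terminated within the buffer */
        /* the refname spans line + 3 .. line + len - 2 */
        return 0;
    }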
Patrick Steinhardt 82c3fc33 2018-08-09T10:38:10 smart_pkt: fix buffer overflow when parsing "ACK" packets We are being quite lenient when parsing "ACK" packets. First, we didn't correctly verify that we're not overrunning the provided buffer length, which we fix here by using `git__prefixncmp` instead of `git__prefixcmp`. Second, we do not verify that the actual contents make any sense at all, as we simply ignore errors when parsing the ACKs OID and any unknown status strings. This may result in a parsed packet structure with invalid contents, which is being silently passed to the caller. This is being fixed by performing proper input validation and checking of return codes. (cherry picked from commit bc349045b1be8fb3af2b02d8554483869e54d5b8)
Patrick Steinhardt 5d108c9a 2018-10-03T15:39:40 tests: verify parsing logic for smart packets The commits following this commit are about to introduce quite a lot of refactoring and tightening of the smart packet parser. Unfortunately, we do not yet have any tests besides our online tests that would verify that our parser does not regress upon changes. This is doubly unfortunate as our online tests aren't executed by default. Add new tests that exercise the smart parsing logic directly by executing `git_pkt_parse_line`. (cherry picked from commit 365d2720c1a5fc89f03fd85265c8b45195c7e4a8)
Edward Thomson a8db6c92 2017-11-30T15:40:13 util: introduce `git__prefixncmp` and consolidate implementations Introduce `git__prefixncmp`, which will search up to the first `n` characters of a string to see if it is prefixed by another string. This is useful for examining whether a non-null-terminated character array is prefixed by a particular substring. Consolidate the various implementations of `git__prefixcmp` around a single core implementation and add some test cases to validate its behavior. (cherry picked from commit 86219f40689c85ec4418575223f4376beffa45af)
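A sketch of what such a length-bounded prefix comparison looks like (illustrative only; the real `git__prefixncmp` may differ in signature and details):

    #include <stddef.h>

    /* Returns 0 if the first `str_n` bytes of `str` start with `prefix`,
     * without requiring `str` to be NUL-terminated. */
    static int prefixncmp(const char *str, size_t str_n, const char *prefix)
    {
        size_t i;

        for (i = 0; prefix[i] != '\0'; i++) {
            unsigned char s = (i < str_n) ? (unsigned char)str[i] : 0;

            if (s != (unsigned char)prefix[i])
                return (int)s - (int)(unsigned char)prefix[i];
        }

        return 0;
    }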
Patrick Steinhardt 25d4a8c9 2018-06-29T09:11:02 delta: fix out-of-bounds read of delta When computing the offset and length of the delta base, we repeatedly increment the `delta` pointer without checking whether we have advanced past its end already, which can thus result in an out-of-bounds read. Fix this by repeatedly checking whether we have reached the end. Add a test which would cause Valgrind to produce an error. Reported-by: Riccardo Schirone <rschiron@redhat.com> Test-provided-by: Riccardo Schirone <rschiron@redhat.com>
Patrick Steinhardt 8ab4f363 2018-06-29T07:45:18 delta: fix sign-extension of big left-shift Our delta code was originally adapted from JGit, which itself adapted it from git itself. Due to this heritage, we inherited a bug from git.git in how we compute the delta offset, which was fixed upstream in 48fb7deb5 (Fix big left-shifts of unsigned char, 2009-06-17). As explained by Linus: Shifting 'unsigned char' or 'unsigned short' left can result in sign extension errors, since the C integer promotion rules means that the unsigned char/short will get implicitly promoted to a signed 'int' due to the shift (or due to other operations). This normally doesn't matter, but if you shift things up sufficiently, it will now set the sign bit in 'int', and a subsequent cast to a bigger type (eg 'long' or 'unsigned long') will now sign-extend the value despite the original expression being unsigned. One example of this would be something like unsigned long size; unsigned char c; size += c << 24; where despite all the variables being unsigned, 'c << 24' ends up being a signed entity, and will get sign-extended when then doing the addition in an 'unsigned long' type. Since git uses 'unsigned char' pointers extensively, we actually have this bug in a couple of places. In our delta code, we inherited such a bogus shift when computing the offset at which the delta base is to be found. Due to the sign extension we can end up with an offset where all the bits are set. This can allow an arbitrary memory read, as the addition in `base_len < off + len` can now overflow if `off` has all its bits set. Fix the issue by casting the result of `*delta++ << 24UL` to an unsigned integer again. Add a test with a crafted delta that would actually succeed with an out-of-bounds read in case where the cast wouldn't exist. Reported-by: Riccardo Schirone <rschiron@redhat.com> Test-provided-by: Riccardo Schirone <rschiron@redhat.com>
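A standalone demonstration of the promotion issue (not the libgit2 code itself); on an LP64 system the first offset ends up with all upper bits set while the second one does not:

    #include <stdio.h>

    int main(void)
    {
        const unsigned char delta[] = { 0x80 };
        size_t off_bad, off_good;

        /* the shift happens in (signed) int due to integer promotion, so the
         * negative intermediate value is sign-extended when widened to size_t */
        off_bad = delta[0] << 24;

        /* forcing the operand to an unsigned type avoids the sign extension */
        off_good = (size_t)delta[0] << 24;

        printf("%zx vs %zx\n", off_bad, off_good);
        return 0;
    }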
Carlos Martín Nieto ea55c77c 2018-05-24T21:58:40 path: hand-code the zero-width joiner as UTF-8
Carlos Martín Nieto 95a0ab89 2018-05-24T20:28:36 submodule: plug leaks from the escape detection
Carlos Martín Nieto 1c1e32b7 2018-05-22T20:37:23 checkout: change symlinked .gitmodules file test to expect failure When dealing with `core.protectNTFS` and `core.protectHFS` we do check against `.gitmodules`, but we still have a failing test as the non-filesystem codepath does not check for it.
Carlos Martín Nieto 5b855194 2018-05-22T16:13:47 path: reject .gitmodules as a symlink Any part of the library which asks the question can pass in the mode to have it checked against `.gitmodules` being a symlink. This is particularly relevant for adding entries to the index from the worktree and for checking out files.
Carlos Martín Nieto 644c973f 2018-05-22T15:21:08 path: accept the name length as a parameter We may take in names from the middle of a string so we want the caller to let us know how long the path component is that we should be checking.
Carlos Martín Nieto 2143a0de 2018-05-22T14:16:45 checkout: add a failing test for refusing a symlinked .gitmodules We want to reject these as they cause compatibility issues and can lead to git writing to files outside of the repository.
Carlos Martín Nieto dc5591b4 2018-05-18T15:16:53 path: hide the dotgit file functions These can't go into the public API yet as we don't want to introduce API or ABI changes in a security release.
Carlos Martín Nieto 4656e9c4 2018-05-16T15:42:08 path: add a function to detect a .gitmodules file Given a path component, it knows what to pass to the filesystem-specific functions so we're protected even from trees which try to use the 8.3 naming rules to get around us matching on the filename exactly. The logic and test strings come from the equivalent git change.
Carlos Martín Nieto 4a1753c2 2018-05-14T16:03:15 submodule: also validate Windows-separated paths for validity Otherwise we would also admit `..\..\foo\bar` as a valid path and fail to protect Windows users. Ideally we would check for both separators without the need for the copied string, but this'll get us over the RCE.
Carlos Martín Nieto f77e40a1 2018-04-30T13:47:15 submodule: ignore submodules which include path traversal in their name If we decide that the "name" of the submodule (i.e. its path inside `.git/modules/`) is trying to escape that directory or otherwise trick us, we ignore the configuration for that submodule. This leaves us with a half-configured submodule when looking it up by path, but it's the same result as if the configuration really were missing. The name check is potentially more strict than it needs to be, but it lets us re-use the check we're doing for the checkout. The function that encapsulates this logic is ready to be exported but we don't want to do that in a security release so it remains internal for now.
Carlos Martín Nieto 2fc15ae8 2018-04-30T13:03:44 submodule: add a failing test for a submodule escaping .git/modules We should pretend such submodules do not exist as this can lead to RCE.
Edward Thomson 8af6bce2 2018-03-20T07:47:27 online tests: update auth for bitbucket test Update the settings to use a specific read-only token for accessing our test repositories in Bitbucket.
Edward Thomson 999200cc 2018-03-19T09:20:35 online::clone: skip creds fallback test At present, we have three online tests against bitbucket: one which specifies the credentials in the payload, one which specifies the correct credentials in the URL and a final one that specifies the incorrect credentials in the URL. Bitbucket has begun responding to the latter test with a 403, which causes us to fail. Break these three tests into separate tests so that we can skip the latter until this is resolved on Bitbucket's end or until we can change the test to a different provider.
Patrick Steinhardt cda18f9b 2017-10-06T11:24:11 refs: do not use peeled OID if peeling to a tag If a reference stored in a packed-refs file does not directly point to a commit, tree or blob, the packed-refs file will also include a fully-peeled OID pointing to the first underlying object of that type. If we try to peel a reference to an object, we will use that peeled OID to speed up resolving the object. As a reference for an annotated tag does not directly point to a commit, tree or blob but instead to the tag object, the packed-refs file will have an accommodating fully-peeled OID pointing to the object referenced by that tag. When we use the fully-peeled OID pointing to the referenced object when peeling, we obviously cannot peel that to the tag anymore. Fix this issue by not using the fully-peeled OID whenever we want to peel to a tag. Note that this does not include the case where we want to resolve to _any_ object type. Existing code may make use of the fact that we resolve those to commit objects instead of tag objects, even though that behaviour is inconsistent between packed and loose references. Furthermore, some tests of ours make the assumption that we in fact resolve those references to a commit.
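For context, a packed-refs entry for an annotated tag looks roughly like this (OIDs are made up): the ref line points at the tag object itself, while the `^` line carries the fully-peeled OID of the object the tag ultimately refers to.

    # pack-refs with: peeled fully-peeled
    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa refs/tags/v1.0
    ^bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb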
Patrick Steinhardt 4296a36b 2017-07-10T09:36:19 ignore: honor case insensitivity for negative ignores When computing negative ignores, we throw away any rule which does not undo a previous rule, as an optimization. But on case-insensitive file systems, we need to keep in mind that a negative ignore can also undo a previous rule with different case, which we did not yet honor while determining whether a rule undoes a previous one. So in the following example, we fail to unignore the "/Case" directory: /case !/Case Make both code paths that check whether a plain- or wildcard-based rule undoes a previous rule aware of case insensitivity. This fixes the described issue.
Patrick Steinhardt 32cc5edc 2017-07-07T17:10:57 tests: status: additional test for negative ignores with pattern This test is by Carlos Martín Nieto.
Patrick Steinhardt 5c15cd94 2017-07-07T13:27:27 ignore: keep negative rules containing wildcards Ignore rules allow for reverting a previously ignored rule by prefixing it with an exclamation mark. As such, a negative rule can only override previously ignored files. While computing all ignore patterns, we try to use this fact to optimize away some negative rules which do not override any previous patterns, as they won't change the outcome anyway. In some cases, though, this optimization causes us to get the actual ignores wrong for some files. This may happen whenever the pattern contains a wildcard, as we are unable to reason about whether a pattern overrides a previous pattern in a sane way. This happens for example in the case where a gitignore file contains "*.c" and "!src/*.c", where we wouldn't un-ignore files inside of the "src/" subdirectory. In this case, the first solution coming to mind may be to just strip the "src/" prefix and simply compare the basenames. While that would work here, it would stop working as soon as the basename pattern itself is different, like for example with "*x.c" and "!src/*.c". As such, we settle for the easier fix of just not optimizing away rules that contain a wildcard.
Carlos Martín Nieto 9e98f49d 2018-02-28T12:21:08 tree: initialize the id we use for testing submodule insertions Instead of leaving it uninitialized and relying on luck for it to be non-zero, let's give it a dummy hash so we make valgrind happy (in this case the hash comes from `sha1sum </dev/null`).
Edward Thomson 1b853c48 2018-02-19T22:10:44 checkout test: further ensure workdir perms are updated When both the index _and_ the working directory have changed permissions on a file - but only the permissions, such that the contents of the file are identical - ensure that `git_checkout` updates the permissions to match the checkout target.
Edward Thomson 73615900 2018-02-19T22:09:27 checkout test: ensure workdir perms are updated When the working directory has changed permissions on a file - but only the permissions, such that the contents of the file are identical - ensure that `git_checkout` updates the permissions to match the checkout target.
Patrick Steinhardt e74e05ed 2018-02-20T10:38:27 diff_tform: fix rename detection with rewrite/delete pair A rewritten file can either be classified as a modification of its contents or as a delete of the complete file followed by an addition of the new content. This distinction becomes important when we want to detect renames for rewrites. Given a scenario where a file "a" has been deleted and another file "b" has been renamed to "a", this should be detected as a deletion of "a" followed by a rename of "b" -> "a". Thus, splitting of the original rewrite into a delete/add pair is important here. This splitting is represented by a flag we can set at the current delta. While the flag is already being set in case we want to break rewrites, we do not do so in case where the `GIT_DIFF_FIND_RENAMES_FROM_REWRITES` flag is set. This can trigger an assert when we try to match the source and target deltas. Fix the issue by setting the `GIT_DIFF_FLAG__TO_SPLIT` flag at the delta when it is a rename target and `GIT_DIFF_FIND_RENAMES_FROM_REWRITES` is set.
Patrick Steinhardt e229e90d 2018-02-20T10:03:48 tests: add rename-rewrite scenarios to "renames" repository Add two more scenarios to the "renames" repository. The first scenario has a major rewrite of a file and a delete of another file, the second scenario has a deletion of a file and rename of another file to the deleted file. Both scenarios will be used in the following commit.
Patrick Steinhardt be205dfa 2018-02-20T09:54:58 tests: diff::rename: use defines for commit OIDs While we frequently reuse commit OIDs throughout the file, we do not have any constants to refer to these commits. Make this a bit easier to read by giving the commit OIDs somewhat descriptive names of what kind of commit they refer to.
Tyrie Vella b3c0d43c 2018-01-22T14:44:31 merge: virtual commit should be last argument to merge-base Our virtual commit must be the last argument to merge-base: since our algorithm pushes _both_ parents of the virtual commit, it needs to be the last argument, since merge-base: > Given three commits A, B and C, git merge-base A B C will compute the > merge base between A and a hypothetical commit M We want to calculate the merge base between the actual commit ("two") and the virtual commit ("one") - since one actually pushes its parents to the merge-base calculation, we need to calculate the merge base of "two" and the parents of one.
Edward Thomson 3619e0f0 2018-01-22T23:56:22 Add failing test case for virtual commit merge base issue
Edward Thomson dc51d774 2018-01-21T16:50:40 merge::trees::recursive: test for virtual base building Virtual base building: ensure that the virtual base is created and revwalked in the same way as git.
Edward Thomson b2b37077 2018-01-21T18:05:45 merge: reverse merge bases for recursive merge When the commits being merged have multiple merge bases, reverse the order when creating the virtual merge base. This is for compatibility with git's merge-recursive algorithm, and ensures that we build identical trees. Git does this to try to use older merge bases first. Per 8918b0c: > It seems to be the only sane way to do it: when a two-head merge is > done, and the merge-base and one of the two branches agree, the > merge assumes that the other branch has something new. > > If we start creating virtual commits from newer merge-bases, and go > back to older merge-bases, and then merge with newer commits again, > chances are that a patch is lost, _because_ the merge-base and the > head agree on it. Unlikely, yes, but it happened to me.
Edward Thomson 08ab5902 2018-01-21T16:41:49 Introduce additional criss-cross merge branches
lhchavez e83efde4 2017-12-23T14:59:07 Fix unpack double free If an element has been cached, but then the call to packfile_unpack_compressed() fails, its data is freed right away even though the element is not removed from the cache, so the cache later frees the same data again. This change sets obj->data to NULL to avoid the double-free. It also stops trying to resolve deltas after two consecutive failed rounds of resolution, and adds a test for this.
Patrick Steinhardt a521f5b1 2017-12-15T10:47:01 diff_file: properly refcount blobs when initializing file contents When initializing a `git_diff_file_content` from a source whose data is derived from a blob, we simply assign the blob's pointer to the resulting struct without incrementing its refcount. Thus, the structure can only be used as long as the blob is kept alive by the caller. Fix the issue by using `git_blob_dup` instead of a direct assignment. This function will increment the refcount of the blob without allocating new memory, so it does exactly what we want. As `git_diff_file_content__unload` already frees the blob when `GIT_DIFF_FLAG__FREE_BLOB` is set, we don't need to add new code handling the free but only have to set that flag correctly.
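A short usage sketch of the refcounting behaviour relied on here (the helper is hypothetical, not code from the commit):

    #include <git2.h>

    /* git_blob_dup() bumps the blob's refcount instead of copying its data,
     * so the duplicate stays valid even after the caller frees `blob`. */
    static int hold_blob(git_blob **out, git_blob *blob)
    {
        git_blob *copy = NULL;

        if (git_blob_dup(&copy, blob) < 0)
            return -1;

        *out = copy; /* release later with git_blob_free(copy) */
        return 0;
    }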
lhchavez a3cd5e94 2017-12-06T03:03:18 libFuzzer: Fix missing trailer crash This change fixes an invalid memory access when the trailer is missing / corrupt. Found using libFuzzer.
lhchavez 5cc3971a 2017-12-06T03:22:58 libFuzzer: Fix a git_packfile_stream leak This change ensures that the git_packfile_stream object in git_indexer_append() does not leak when the stream has errors. Found using libFuzzer.
David Turner 68842cbb 2017-10-29T12:28:43 Ignore trailing whitespace in .gitignore files (as git itself does)
Edward Thomson e66bc08c 2016-06-15T01:59:56 checkout: test force checkout when mode changes Test that we can successfully force checkout a target when the file contents are identical, but the mode has changed.
Etienne Samson e7c24ea2 2017-07-20T21:00:15 tests: fix the rebase-submodule test
Etienne Samson 54d4e5de 2017-06-21T14:57:30 Remove invalid submodule Fixes #4274
Ariel Davis cc9b0b6c 2017-06-16T21:05:58 tests: try to init with empty template path
Edward Thomson 8296da5f 2017-06-14T10:49:28 Merge pull request #4267 from mohseenrm/master adding GIT_FILTER_VERSION to GIT_FILTER_INIT as part of convention
Mohseen Mukaddam a78441bc 2017-06-13T11:05:40 Adding git_filter_init for initializing `git_filter` struct + unit test
Patrick Steinhardt a180e7d9 2017-06-13T11:10:19 tests: odb: add more low-level backend tests Introduce a new test suite "odb::backend::simple", which utilizes the fake backend to exercise the ODB abstraction layer. While such tests already exist for the case where multiple backends are put together, no direct testing for functionality with a single backend exists yet.
Patrick Steinhardt b2e53f36 2017-06-13T11:39:36 tests: odb: implement `exists_prefix` for the fake backend The fake backend currently implements all reading functions except for the `exists_prefix` one. Implement it to enable further testing of the ODB layer.
Patrick Steinhardt 983e627d 2017-06-13T11:38:59 tests: odb: use correct OID length The `search_object` function takes the OID length as one of its parameters, where its maximum length is `GIT_OID_HEXSZ`. The `exists` function of the fake backend used `GIT_OID_RAWSZ` though, leading to only the first half of the OID being used when finding the correct object.
Patrick Steinhardt c4cbb3b1 2017-06-13T11:38:14 tests: odb: have the fake backend detect ambiguous prefixes In order to be able to test the ODB prefix functions, we need to be able to detect ambiguous prefixes in case multiple objects with the same prefix exist in the fake ODB. Extend `search_object` to detect ambiguous queries and have callers return its error code instead of always returning `GIT_ENOTFOUND`.
Patrick Steinhardt 95170294 2017-06-13T11:08:28 tests: core: test initialization of `git_proxy_options` Initialization of the `git_proxy_options` structure is never tested anywhere. Include it in our usual initialization test in "core::structinit::compare".
Patrick Steinhardt bee423cc 2017-06-13T10:29:23 tests: network: add missing include for `git_repository_new` A newly added test uses the `git_repository_new` function without the corresponding header file being included. While this works due to the compiler deducing the correct function signature, we should obviously just include the function's declaration file.
Edward Thomson 2ca088bd 2017-06-12T22:47:54 Merge pull request #4265 from pks-t/pks/read-prefix-tests Read prefix tests
Edward Thomson fe9a5dd3 2017-06-12T12:00:14 remote: ensure we can create an anon remote on inmemory repo Given a wholly in-memory repository, ensure that we can create an anonymous remote and perform actions on it.
Patrick Steinhardt f148258a 2017-06-12T16:19:45 tests: odb: add tests with multiple backends Previous to pulling out and extending the fake backend, it was quite cumbersome to write tests for very specific scenarios regarding backends. But as we have made it more generic, it has become much easier to do so. As such, this commit adds multiple tests for scenarios with multiple backends for the ODB. The changes also include a test for a very targeted scenario. When one backend found a matching object via `read_prefix`, but the last backend returns `GIT_ENOTFOUND` and when object hash verification is turned off, we fail to reset the error code to `GIT_OK`. This causes us to segfault later on, when doing a double-free on the returned object.
Patrick Steinhardt 6e010bb1 2017-06-12T15:43:56 tests: odb: allow passing fake objects to the fake backend Right now, the fake backend is quite restrained in the way how it works: we pass it an OID which it is to return later as well as an error code we want it to return. While this is sufficient for existing tests, we can make the fake backend a little bit more generic in order to allow us testing for additional scenarios. To do so, we change the backend to not accept an error code and OID which it is to return for queries, but instead a simple array of OIDs with their respective blob contents. On each query, the fake backend simply iterates through this array and returns the first matching object.
Patrick Steinhardt 369cb45f 2017-06-12T15:21:58 tests: do not reuse OID from backend In order to make the fake backend more useful, we want to enable it holding multiple object references. To do so, we need to decouple it from the single fake OID it currently holds, which we simply move up into the calling tests.
Patrick Steinhardt 2add34d0 2017-06-12T14:53:46 tests: odb: move fake backend into its own file The fake backend used by the test suite `odb::backend::nonrefreshing` is useful to have some low-level tests for the ODB layer. As such, we move the implementation into its own `backend_helpers` module.
Edward Thomson 6f960b55 2017-06-11T10:37:46 Merge pull request #4088 from chescock/packfile-name-using-complete-hash Ensure packfiles with different contents have different names
Edward Thomson d2c4f764 2017-06-11T09:54:04 Merge pull request #4260 from libgit2/ethomson/forced_checkout_2 Update to forced checkout and untracked files
Patrick Steinhardt 0ef405b3 2017-02-15T14:05:10 checkout: do not delete directories with untracked entries If the `GIT_CHECKOUT_FORCE` flag is given to any of the `git_checkout` invocations, we remove files which were previously staged. But while doing so, we unfortunately also remove unstaged files in a directory which contains at least one staged file, resulting in potential data loss. This commit adds two tests to verify behavior.
Patrick Steinhardt 6c23704d 2017-06-08T21:40:18 settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION` Initially, the setting has been solely used to enable the use of `fsync()` when creating objects. Since then, the use has been extended to also cover references and index files. As the option is not yet part of any release, we can still correct this by renaming the option to something more sensible, indicating not only correlation to objects. This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also move the variable from the object to repository source code.
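After the rename, opting in looks like this (sketch; error handling omitted):

    #include <git2.h>

    int main(void)
    {
        git_libgit2_init();

        /* fsync() object, reference and index writes inside the .git directory */
        git_libgit2_opts(GIT_OPT_ENABLE_FSYNC_GITDIR, 1);

        /* ... normal libgit2 usage ... */

        git_libgit2_shutdown();
        return 0;
    }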
Patrick Steinhardt 92d3ea4e 2017-05-19T13:04:32 tests: index::version: improve write test for index v4 The current write test does not trigger some edge-cases in the index version 4 path compression code. Rewrite the test to start off with an empty standard repository, creating index entries with interesting paths itself. This allows for more fine-grained control over checked paths. Furthermore, we now also verify that entry paths are actually reconstructed correctly.
Patrick Steinhardt 8fe33538 2017-05-19T12:45:48 tests: index::version: verify we write compressed index entries While we do have a test which checks whether a written index of version 4 has the correct version set, we do not check whether this actually enables path compression for index entries. This commit adds a new test by adding a number of index entries with equal path prefixes to the index and subsequently flushing that to disk. With suffix compression enabled by index version 4, only the last few bytes of these paths will actually have to be written to the index, saving a lot of disk space. For the test, differences are about an order of magnitude, allowing us to easily verify without taking a deeper look at actual on-disk contents.
Patrick Steinhardt 82368b1b 2017-05-12T10:04:42 tests: index::version: add test to read index version v4 While we have a simple test to determine whether we can write an index of version 4, we never verified that we are able to read this kind of index (and in fact, we were not able to do so). Add a new repository which has an index of version 4. This repository is then read from a new test.
Patrick Steinhardt fea0c81e 2017-05-12T09:09:07 tests: index::version: move up cleanup function The init and cleanup functions for test suites are usually prepended to our actual tests. The index::version test suite does not adhere to this style. Fix this.
Patrick Steinhardt 8a5e7aae 2017-05-22T12:53:44 varint: fix computation for remaining buffer space When encoding varints to a buffer, we want to remain sure that the required buffer space does not exceed what is actually available. Our current check does not do the right thing, though, in that it does not honor that our `pos` variable counts the position down instead of up. As such, we will require too much memory for small varints and not enough memory for big varints. Fix the issue by correctly calculating the required size as `(sizeof(varint) - pos)`. Add a test which failed before.
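A sketch of the encoder shape this refers to, modeled on git's offset-coded varint and filling the scratch buffer from the end, so the space actually required is `sizeof(varint) - pos`:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Returns the number of bytes written, or -1 if `bufsize` is too small. */
    static int encode_varint(uintmax_t value, unsigned char *buf, size_t bufsize)
    {
        unsigned char varint[16];
        size_t pos = sizeof(varint) - 1;

        varint[pos] = value & 127;
        while (value >>= 7)
            varint[--pos] = 128 | (--value & 127);

        if (bufsize < sizeof(varint) - pos)
            return -1; /* the corrected space check from this commit */

        memcpy(buf, varint + pos, sizeof(varint) - pos);
        return (int)(sizeof(varint) - pos);
    }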
Edward Thomson dd0aa811 2017-06-04T22:46:07 Merge branch 'pr/4228'
Edward Thomson 82e929a8 2017-06-04T19:35:39 Merge pull request #4239 from roblg/toplevel-dir-ignore-fix Fix issue with directory glob ignore in subdirectories
Edward Thomson 04de614b 2017-06-04T19:03:07 Merge pull request #4243 from pks-t/pks/submodule-workdir Submodule working directory
Carlos Martín Nieto a1023a43 2017-05-20T17:18:07 Merge pull request #4179 from libgit2/ethomson/expand_tilde Introduce home directory expansion function for config files, attribute files
Carlos Martín Nieto e694e4e9 2017-05-20T14:17:36 Merge pull request #4174 from libgit2/ethomson/set_head_to_tag git_repository_set_head: use tag name in reflog
Carlos Martín Nieto 119bdd86 2017-05-20T14:13:27 Merge pull request #4231 from wabain/open-revrange revparse: support open-ended ranges
Chris Hescock c0e54155 2017-01-11T10:39:59 indexer: name pack files after trailer hash Upstream git.git has changed how packfiles are named. Previously, they were using a hash of the contained objects' OIDs, which has since been changed to use the hash of the complete packfile instead. See 1190a1acf (pack-objects: name pack files after trailer hash, 2013-12-05) in the git.git repository for more information on this change. This commit changes our logic to match the behavior of core git.
Patrick Steinhardt 2696c5c3 2017-05-19T09:21:17 repository: make check if repo is a worktree more strict To determine if a repository is a worktree or not, we currently check for the existence of a "gitdir" file inside of the repository's gitdir. While this is sufficient for non-broken repositories, we have at least one case of a subtly broken repository where there exists a gitdir file inside of a gitmodule. This will cause us to misidentify the submodule as a worktree. While this is not really a fault of ours, we can do better here by observing that a repository can only ever be a worktree iff its common directory and dotgit directory are different. This allows us to make our check whether a repo is a worktree or not more strict by doing a simple string comparison of these two directories. This will also allow us to do the right thing in the above case of a broken repository, as for submodules these directories will be the same. At the same time, this allows us to skip the `stat` check for the "gitdir" file for most repositories.
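The stricter check boils down to a string comparison along these lines (simplified sketch, not the exact libgit2 code):

    #include <stdbool.h>
    #include <string.h>

    /* A repository can only be a worktree if its common directory and its
     * .git directory differ; for submodules the two paths are identical. */
    static bool looks_like_worktree(const char *gitdir, const char *commondir)
    {
        return strcmp(gitdir, commondir) != 0;
    }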
Robert Gay c3b8e8b3 2017-05-14T10:28:05 Fix issue with directory glob ignore in subdirectories
Patrick Steinhardt e526fbc7 2017-05-17T09:23:06 tests: add test suite for opening submodules
Patrick Steinhardt 7d7f6d33 2017-05-03T13:52:55 tests: clone::local: compile UNC functions for Windows only
Patrick Steinhardt 98a5f081 2017-05-03T13:53:13 tests: threads::basic: remove unused function `exit_abruptly`
William Bain 8b107dc5 2017-05-03T11:20:57 revparse: support open-ended ranges Support '..' and '...' ranges where one side is not specified. The unspecified side defaults to HEAD. Closes #4223
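A usage sketch of the new behaviour (the branch name is illustrative): the missing side of the range defaults to HEAD, so "..master" parses like "HEAD..master".

    #include <git2.h>

    static int parse_open_range(git_repository *repo)
    {
        git_revspec spec;
        int error;

        if ((error = git_revparse(&spec, repo, "..master")) < 0)
            return error;

        /* spec.from resolves to HEAD, spec.to to master */
        git_object_free(spec.from);
        git_object_free(spec.to);
        return 0;
    }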
Patrick Steinhardt 883eeb5f 2017-05-02T12:35:59 worktree: switch over worktree pruning to an opts structure The current signature of `git_worktree_prune` accepts a flags field to alter its behavior. This is not as flexible as we'd like it to be when we want to enable passing additional options in the future. As the function has not been part of any release yet, we are still free to alter its current signature. This commit does so by using our usual pattern of an options structure, which is easily extendable without breaking the API.
Patrick Steinhardt 8264a30f 2017-05-02T10:11:28 worktree: support creating locked worktrees When creating a new worktree, we do have a potential race with us creating the worktree and another process trying to delete the same worktree as it is being created. As such, the upstream git project has introduced a flag `git worktree add --locked`, which will cause the newly created worktree to be locked immediately after its creation. This mitigates the race condition. We want to be able to mirror the same behavior. As such, a new flag `locked` is added to the options structure of `git_worktree_add` which allows the user to enable this behavior.
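A sketch of how this is meant to be used, assuming the new options field is spelled `lock` (the message only calls the flag "locked") and with an illustrative worktree name and path:

    #include <git2.h>

    static int add_locked_worktree(git_repository *repo)
    {
        git_worktree *wt = NULL;
        git_worktree_add_options opts = GIT_WORKTREE_ADD_OPTIONS_INIT;
        int error;

        opts.lock = 1; /* create the worktree already locked (assumed field name) */

        error = git_worktree_add(&wt, repo, "hotfix", "../hotfix", &opts);
        git_worktree_free(wt);
        return error;
    }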
Patrick Steinhardt ffd264d9 2017-05-03T14:51:23 tests: repo: fix repo discovery tests on overlayfs Debian and Ubuntu often use schroot to build their DEB packages in a controlled environment. Depending on how schroot is configured, our tests regarding repository discovery break due to not being able to find the repositories anymore. It turns out that these errors occur when the schroot is configured to use an overlayfs on the directory structures. The reason for this failure is that we usually refrain from discovering repositories across devices. But unfortunately, overlayfs does not have consistent device identifiers for all its files but will instead use the device number of the filesystem the file stems from. So whenever we cross boundaries between the upper and lower layer of the overlay, we will fail to properly detect the repository and bail out. This commit fixes the issue by enabling cross-device discovery in our tests. While it would be preferable to have this turned off, it probably won't do much harm anyway as we set up our tests in a temporary location outside of the parent repository.
Patrick Steinhardt a7aa73a5 2017-05-02T10:02:36 worktree: introduce git_worktree_add options The `git_worktree_add` function currently accepts only a path and name for the new work tree. As we may want to expand these parameters in future versions without adding additional parameters to the function for every option, this commit introduces our typical pattern of an options struct. Right now, this structure is still empty, which will change with the next commit.
Edward Thomson 1dc89aab 2017-05-01T21:34:21 object validation: free some memleaks
Edward Thomson 13c1bf07 2017-05-01T16:17:48 Merge pull request #4197 from pks-t/pks/verify-object-hashes Verify object hashes
Edward Thomson 5700ee9c 2017-05-01T16:10:50 Merge pull request #4216 from pks-t/pks/debian-test-failures Debian HTTPS feature test failure
Patrick Steinhardt 35079f50 2017-04-21T07:31:56 odb: add option to turn off hash verification Verifying hashsums of objects we are reading from the ODB may be costly as we have to perform an additional hashsum calculation on the object. Especially when reading large objects, the penalty can be as high as 35%, as can be seen when executing the equivalent of `git cat-file` with and without verification enabled. To mitigate for this, we add a global option for libgit2 which enables the developer to turn off the verification, e.g. when he can be reasonably sure that the objects on disk won't be corrupted.
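Assuming the option introduced here is the one exposed as `GIT_OPT_ENABLE_STRICT_HASH_VERIFICATION` (the message does not name it), turning verification off would look like:

    #include <git2.h>

    int main(void)
    {
        git_libgit2_init();

        /* trade corruption detection for faster reads of large objects */
        git_libgit2_opts(GIT_OPT_ENABLE_STRICT_HASH_VERIFICATION, 0);

        /* ... */

        git_libgit2_shutdown();
        return 0;
    }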
Patrick Steinhardt 28a0741f 2017-04-10T09:30:08 odb: verify object hashes The upstream git.git project verifies objects when looking them up from disk. This avoids scenarios where objects have somehow become corrupt on disk, e.g. due to hardware failures or bit flips. While our mantra is usually to follow upstream behavior, we do not do so in this case, as we never check hashes of objects we have just read from disk. To fix this, we create a new error class `GIT_EMISMATCH` which denotes that we have looked up an object with a hashsum mismatch. `odb_read_1` will then, after having read the object from its backend, hash the object and compare the resulting hash to the expected hash. If hashes do not match, it will return an error. This obviously introduces another computation of checksums and could potentially impact performance. Note though that we usually perform I/O operations directly before doing this computation, and as such the actual overhead should be drowned out by I/O. Running our test suite seems to confirm this guess. On a Linux system with best-of-five timings, we had 21.592s with the check enabled and 21.590s with the check disabled. Note though that our test suite mostly contains very small blobs only. It is expected that repositories with bigger blobs may notice an increased hit by this check. In addition to a new test, we also had to change the odb::backend::nonrefreshing test suite, which now triggers a hashsum mismatch when looking up the commit "deadbeef...". This is expected, as the fake backend allocated inside of the test will return an empty object for the OID "deadbeef...", which will obviously not hash back to "deadbeef..." again. We can simply adjust the hash to equal the hash of the empty object here to fix this test.
Patrick Steinhardt d59dabe5 2017-04-10T09:00:51 tests: object: test looking up corrupted objects We currently have no tests which check whether we fail reading corrupted objects. Add one which modifies contents of an object stored on disk and then tries to read the object.