|
2de44f18
|
2018-09-08T18:54:21
|
|
clar: iterate errors in report_all / report_errors
Instead of trying to have a clever iterator pattern that increments the
error number, just iterate over the errors in the `report_errors` and
`report_all` functions, as it's easier to reason about in this fashion.
(cherry picked from commit d17e67d08d6e73dbf0daeae5049f92a38c2d8bb6)
|
|
423cadbe
|
2018-08-27T01:06:37
|
|
ci: use more compatible strftime formats
Windows lacks %F and %T formats for strftime. Expand them to the
year/month/day and hour/minute/second formats, respectively.
(cherry picked from commit e595eeb5ab88142b97798ed65e651de6560515e9)
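As an illustration, a minimal sketch of the expansion (only the format
strings come from this commit; the surrounding program is made up):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        char buf[32];
        time_t now = time(NULL);

        /* "%F %T" is unavailable in Windows' strftime; spell it out */
        strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", localtime(&now));
        puts(buf);
        return 0;
    }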
|
|
61f5cb54
|
2018-09-04T14:00:49
|
|
clar: remove globals; error-check fprintf/fclose
Remove the global summary filename and file pointer; pass them in to the
summary functions as needed. Error check the results of buffered I/O
calls.
(cherry picked from commit b67a93ff81e2fbfcf9ebb52dd15db9aa4e9ca708)
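A sketch of what such error checking looks like (the function name and
the closing tag are illustrative, not clar's actual code):

    #include <stdio.h>

    /* flush and close the summary, propagating buffered I/O failures */
    static int summary_close(FILE *summary)
    {
        if (fprintf(summary, "</testsuites>\n") < 0)
            return -1;
        if (fclose(summary) == EOF)
            return -1;
        return 0;
    }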
|
|
ad3a0f0c
|
2018-08-26T15:11:21
|
|
clar: refactor explicitly run test behavior
Previously, supplying `-s` to explicitly enable some test(s) would run
the tests immediately from the argument parser. This forced us to set
up the entire clar environment (for example: sandboxing) before argument
parsing took place.
Refactor the behavior of `-s` to add the explicitly chosen tests to a
list that is executed later. This untangles the argument parsing from
the setup lifecycle, allowing us to use the arguments to perform the
setup.
(cherry picked from commit 90753a96515f85e2d0e79a16d3a06ba5b363c68e)
|
|
d95d2201
|
2018-08-26T15:31:14
|
|
clar: accept a value for the summary filename
Accept an (optional) value for the summary filename. Continues to
default to summary.xml.
(cherry picked from commit baa5c20d0815441cac2d2135d2b0190cb543e637)
|
|
5e8ac164
|
2018-08-26T15:25:15
|
|
clar: don't use a variable named `time`
(cherry picked from commit dbebcb04b42047df0d52ad3515077a134c5b7da7)
|
|
3c12187d
|
2018-07-27T23:00:09
|
|
Barebones JUnit XML output
(cherry picked from commit 59f1e477f772c73c76bc654a0853fdcf491a32a7)
|
|
a503a9ea
|
2018-07-26T23:02:34
|
|
Documentation
(cherry picked from commit 3a9b96311d6f0ff364c6417cf3aab7c9745b18d4)
|
|
d8fd1c72
|
2018-07-26T23:02:20
|
|
Isolate test reports
This makes it possible to keep track of every test's status (even
successful ones) and its errors, if any.
(cherry picked from commit bf9fc126709af948c2a324ceb1b2696046c91cfe)
|
|
69e86433
|
2018-07-20T14:14:16
|
|
buf tests: allocate a smaller size for the oom
On Linux (where we run valgrind) allocate a smaller buffer, but still an
insanely large size. This will cause malloc to fail but will not cause
valgrind to report a likely error with a negative-sized malloc.
Keep the original buffer size on non-Linux platforms: this is
well-tested on them and changing it may be problematic. On macOS, for
example, using the new size causes `malloc` to print a warning to
stderr.
(cherry picked from commit 219512e7989340d9efae8480fb79c08b91724014)
|
|
e5278f70
|
2017-11-11T15:38:27
|
|
clar: verify command line arguments before executing
When executing `libgit2_clar -smerge -invalid_option`, it will first execute
the merge test suite and afterwards output help because of the invalid option.
With this change, it verifies all options before executing. If there are any
invalid options, it will output help and exit without actually executing
the test suites.
(cherry picked from commit 3275863134122892e2f8a8aa4ad0ce1c123a48ec)
|
|
2362ce6c
|
2017-06-07T13:06:53
|
|
tests: online::clone: inline creds-test with nonexistent URL
Right now, we test our credential callback code twice, once via SSH on
localhost and once via a non-existent GitHub repository. While the first
URL makes sense to be configurable, it does not make sense to hard-code
the non-existent repository, which requires us to run the tests multiple
times. Instead, we can just inline the URL into another set of tests.
(cherry picked from commit 54a1bf057a1123cf55ac3447c79761c817382f47)
|
|
a1a495f2
|
2017-06-07T12:48:48
|
|
tests: online::clone: construct credential-URL from environment
We support two ways of passing credentials to the proxy: either via the
URL or explicitly by specifying user and password. We test these types
by modifying the proxy URL and executing the tests twice, which is
in fact unnecessary and requires us to maintain the list of environment
variables and test executions across multiple CI infrastructures.
To fix the situation, we can just always pass the host, port, user and
password to the tests. The tests can then assemble the complete URL
either with or without included credentials, allowing us to test both
cases in-process.
(cherry picked from commit fea6092079d5c09b499e472efead2f7aa81ce8a1)
|
|
89641431
|
2017-06-07T11:06:01
|
|
tests: perf: build but exclude performance tests by default
Our performance tests (or to be more concrete, our single performance
test) are not built by default, as they are always #ifdef'd out. While
it is true that we don't want to run performance tests by default, not
compiling them at all may cause code rot and is thus an unfavorable
approach to handling this.
We can easily improve this situation: this commit removes the #ifdef,
causing the code to always be compiled. Furthermore, we add `-xperf` to
the default command line parameters of `generate.py`, thus causing the
tests to be excluded by default.
Due to this approach, we are now able to execute the performance tests
by passing `-sperf` to `libgit2_clar`. Unfortunately, we cannot execute
the performance tests on Travis or AppVeyor as they rely on history
being available for the libgit2 repository. As both do only a shallow
clone, though, that history is not available.
(cherry picked from commit 543ec149b86a68e12dd141a6141e82850dabbf21)
|
|
98378a3f
|
2017-06-07T11:00:26
|
|
tests: iterator::workdir: fix reference count in stale test
The test `iterator::workdir::filesystem_gunk` is usually not executed,
as it is guarded by the environment variable "GITTEST_INVASIVE_SPEED"
due to its effects on speed. As such, it has become stale and does not
account for new references which have meanwhile been added to the
testrepo, causing it to fail. Fix this by raising the number of expected
references to 15.
(cherry picked from commit b8c14499f9940feaab08a23651a2ef24d27b17b7)
|
|
d2bbea82
|
2017-06-07T10:59:31
|
|
tests: iterator_helpers: assert number of iterator items
When the function `expect_iterator_items` surpasses the number of
expected items, we simply break the loop. This causes us to trigger an
assert later on which has no message attached, which is annoying when
trying to locate the actual root cause of the error. Instead, directly
assert inside the loop that the current count is still less than or
equal to the expected count.
(cherry picked from commit 9aba76364fcb4755930856a7bafc5294ed3ee944)
|
|
293c5ef2
|
2017-06-07T10:59:03
|
|
tests: status::worktree: indicate skipped tests on Win32
Some function bodies of tests which are not applicable to the Win32
platform are completely #ifdef'd out instead of calling `cl_skip()`.
This leaves us with no indication that these tests are not being
executed at all and may thus cause decreased scrutiny when investigating
skipped tests. Improve the situation by calling `cl_skip()` instead of
just doing nothing.
(cherry picked from commit 72c28ab011759dce113c2a0c7c36ebcd56bd6ddf)
|
|
b988f544
|
2017-04-26T13:16:18
|
|
tests: online::clone: use URL of test server
All our tests running against a local SSH server usually read the
server's URL from environment variables. But the online::clone::ssh_cert
test fails to do so and instead always connects to
"ssh://localhost/foo". This assumption breaks whenever the SSH server is
not running on the standard port, e.g. when it is running as a user.
Fix the issue by using the URL provided by the environment.
(cherry picked from commit c2c95ad0a210be4811c247be51664bfe8b2e830a)
|
|
7e8d9789
|
2018-10-05T11:42:00
|
|
submodule: add failing test for option-injection protection in url and path
|
|
74937431
|
2018-10-05T10:56:02
|
|
config_file: properly ignore includes without "path" value
In case a configuration includes a key "include.path=" without any
value, the generated configuration entry will have its value set to
`NULL`. This is unexpected by the logic handling includes, and as soon
as we try to calculate the included path we will unconditionally
dereference that `NULL` pointer and thus segfault.
Fix the issue by returning early in both `parse_include` and
`parse_conditional_include` in case the `file` argument is `NULL`.
Add a test to avoid future regression.
The issue has been found by the oss-fuzz project, issue 10810.
(cherry picked from commit d06d4220eec035466d1a837972a40546b8904330)
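The shape of the fix, sketched (the real parsers take more parameters
than shown; the `config_parser` type below is illustrative):

    /* bail out before touching `file` when the key had no value */
    static int parse_include(config_parser *reader, const char *file)
    {
        if (file == NULL)
            return 0; /* "include.path=" without a value: nothing to do */

        /* ... resolve the included path relative to `reader` and parse it ... */
        return 0;
    }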
|
|
232fc469
|
2018-10-05T10:55:29
|
|
tests: always unlink created config files
While our tests in config::include create a plethora of configuration
files, most of them do not get removed at the end of each test. This can
cause weird interactions with tests that are being run at a later stage
if these later tests try to create files or directories with the same
name as any of the created configuration files.
Fix the issue by unlinking all created files at the end of these tests.
(cherry picked from commit bf662f7cf8daff2357923446cf9d22f5d4b4a66b)
|
|
3bbda7d7
|
2018-08-09T11:13:59
|
|
smart_pkt: reorder and rename parameters of `git_pkt_parse_line`
The parameters of the `git_pkt_parse_line` function are quite confusing.
First, there is no real indication of what the `out` parameter is
actually about, and it's not really clear what the `bufflen` parameter refers
to. Reorder and rename the parameters to make this more obvious.
(cherry picked from commit 0b3dfbf425d689101663341beb94237614f1b5c2)
|
|
8cd0a897
|
2018-08-09T11:01:00
|
|
smart_pkt: fix buffer overflow when parsing "ok" packets
There are two different buffer overflows present when parsing "ok"
packets. First, we never verify whether the line already ends after
"ok", but directly go ahead and also try to skip the expected space
after "ok". Second, we then go ahead and use `strchr` to scan for the
terminating newline character. But in cases where the line isn't
terminated correctly, this scan can overflow the line buffer.
Fix the issues by using `git__prefixncmp` to check for the "ok " prefix
and by only checking for a trailing '\n' instead of using `strchr`. This
also fixes the issue of us always requiring a trailing '\n'.
Reported by oss-fuzz, issue 9749:
Crash Type: Heap-buffer-overflow READ
Crash Address: 0x6310000389c0
Crash State:
ok_pkt
git_pkt_parse_line
git_smart__store_refs
Sanitizer: address (ASAN)
(cherry picked from commit a9f1ca09178af0640963e069a2142d5ced53f0b4)
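A sketch of the hardened parse (signature simplified; `git__prefixncmp`
is the bounded comparison used by the fix):

    /* parse "ok <ref>\n" without reading past `len` bytes */
    static int ok_pkt(const char *line, size_t len)
    {
        if (git__prefixncmp(line, len, "ok ") != 0)
            return -1;              /* bounded check: no overrun after "ok" */
        line += 3;
        len -= 3;

        if (len && line[len - 1] == '\n')
            len--;                  /* accept, but don't require, a newline */

        /* ... copy out the ref name of length `len` ... */
        return 0;
    }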
|
|
82c3fc33
|
2018-08-09T10:38:10
|
|
smart_pkt: fix buffer overflow when parsing "ACK" packets
We are being quite lenient when parsing "ACK" packets. First, we didn't
correctly verify that we're not overrunning the provided buffer length,
which we fix here by using `git__prefixncmp` instead of
`git__prefixcmp`. Second, we do not verify that the actual contents make
any sense at all, as we simply ignore errors when parsing the ACK's OID
and any unknown status strings. This may result in a parsed packet
structure with invalid contents, which is being silently passed to the
caller. This is being fixed by performing proper input validation and
checking of return codes.
(cherry picked from commit bc349045b1be8fb3af2b02d8554483869e54d5b8)
|
|
5d108c9a
|
2018-10-03T15:39:40
|
|
tests: verify parsing logic for smart packets
The commits following this commit are about to introduce quite a lot of
refactoring and tightening of the smart packet parser. Unfortunately,
apart from our online tests, we do not yet have any tests that verify
that our parser does not regress upon changes. This is doubly
unfortunate, as our online tests aren't executed by default.
Add new tests that exercise the smart parsing logic directly by
executing `git_pkt_parse_line`.
(cherry picked from commit 365d2720c1a5fc89f03fd85265c8b45195c7e4a8)
|
|
a8db6c92
|
2017-11-30T15:40:13
|
|
util: introduce `git__prefixncmp` and consolidate implementations
Introduce `git__prefixncmp`, which will search up to the first `n`
characters of a string to see if it is prefixed by another string.
This is useful for examining if a non-null terminated character
array is prefixed by a particular substring.
Consolidate the various implementations of `git__prefixcmp` around a
single core implementation and add some test cases to validate its
behavior.
(cherry picked from commit 86219f40689c85ec4418575223f4376beffa45af)
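A sketch of the consolidated core: compare `prefix` against at most
`str_n` bytes of `str`, optionally case-insensitively:

    #include <ctype.h>
    #include <stddef.h>

    static int prefixcmp(const char *str, size_t str_n,
                         const char *prefix, int icase)
    {
        int s, p;

        while (str_n--) {
            s = (unsigned char)*str++;
            p = (unsigned char)*prefix++;

            if (icase) {
                s = tolower(s);
                p = tolower(p);
            }

            if (!p)
                return 0;        /* prefix exhausted: it matched */
            if (s != p)
                return s - p;    /* also catches `str` ending early */
        }

        return *prefix ? -1 : 0; /* window ended before the prefix did */
    }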
|
|
25d4a8c9
|
2018-06-29T09:11:02
|
|
delta: fix out-of-bounds read of delta
When computing the offset and length of the delta base, we repeatedly
increment the `delta` pointer without checking whether we have advanced
past its end already, which can thus result in an out-of-bounds read.
Fix this by repeatedly checking whether we have reached the end. Add a
test which would cause Valgrind to produce an error.
Reported-by: Riccardo Schirone <rschiron@redhat.com>
Test-provided-by: Riccardo Schirone <rschiron@redhat.com>
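The pattern of the fix, sketched (`delta`/`delta_end` delimit the delta
buffer and `cmd` is the copy opcode; the names are illustrative):

    /* check before every byte read while decoding the base offset */
    if (cmd & 0x01) { if (delta == delta_end) goto fail; off  =  *delta++; }
    if (cmd & 0x02) { if (delta == delta_end) goto fail; off |= (unsigned)*delta++ << 8; }
    if (cmd & 0x04) { if (delta == delta_end) goto fail; off |= (unsigned)*delta++ << 16; }
    if (cmd & 0x08) { if (delta == delta_end) goto fail; off |= (unsigned)*delta++ << 24; }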
|
|
8ab4f363
|
2018-06-29T07:45:18
|
|
delta: fix sign-extension of big left-shift
Our delta code was originally adapted from JGit, which itself adapted it
from git itself. Due to this heritage, we inherited a bug from git.git
in how we compute the delta offset, which was fixed upstream in
48fb7deb5 (Fix big left-shifts of unsigned char, 2009-06-17). As
explained by Linus:
Shifting 'unsigned char' or 'unsigned short' left can result in sign
extension errors, since the C integer promotion rules means that the
unsigned char/short will get implicitly promoted to a signed 'int' due to
the shift (or due to other operations).
This normally doesn't matter, but if you shift things up sufficiently, it
will now set the sign bit in 'int', and a subsequent cast to a bigger type
(eg 'long' or 'unsigned long') will now sign-extend the value despite the
original expression being unsigned.
One example of this would be something like
unsigned long size;
unsigned char c;
size += c << 24;
where despite all the variables being unsigned, 'c << 24' ends up being a
signed entity, and will get sign-extended when then doing the addition in
an 'unsigned long' type.
Since git uses 'unsigned char' pointers extensively, we actually have this
bug in a couple of places.
In our delta code, we inherited such a bogus shift when computing the
offset at which the delta base is to be found. Due to the sign extension
we can end up with an offset where all the bits are set. This can allow
an arbitrary memory read, as the addition in `base_len < off + len` can
now overflow if `off` has all its bits set.
Fix the issue by casting the result of `*delta++ << 24UL` to an unsigned
integer again. Add a test with a crafted delta that would actually
succeed with an out-of-bounds read in case where the cast wouldn't
exist.
Reported-by: Riccardo Schirone <rschiron@redhat.com>
Test-provided-by: Riccardo Schirone <rschiron@redhat.com>
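Condensed into a standalone example (LP64 output shown in the comments):

    #include <stdio.h>

    int main(void)
    {
        unsigned long size;
        unsigned char c = 0x80;

        /* 'c << 24' is promoted to a signed int with its sign bit set */
        size = c << 24;
        printf("%lx\n", size);             /* ffffffff80000000 */

        /* the fix: force the shifted result back to unsigned first */
        size = (unsigned)(c << 24);
        printf("%lx\n", size);             /* 80000000 */
        return 0;
    }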
|
|
ea55c77c
|
2018-05-24T21:58:40
|
|
path: hand-code the zero-width joiner as UTF-8
|
|
95a0ab89
|
2018-05-24T20:28:36
|
|
submodule: plug leaks from the escape detection
|
|
1c1e32b7
|
2018-05-22T20:37:23
|
|
checkout: change symlinked .gitmodules file test to expect failure
When dealing with `core.protectNTFS` and `core.protectHFS` we do check
against `.gitmodules` but we still have a failing test as the non-filesystem
codepath does not check for it.
|
|
5b855194
|
2018-05-22T16:13:47
|
|
path: reject .gitmodules as a symlink
Any part of the library which asks the question can pass in the mode to have it
checked against `.gitmodules` being a symlink.
This is particularly relevant for adding entries to the index from the worktree
and for checking out files.
|
|
644c973f
|
2018-05-22T15:21:08
|
|
path: accept the name length as a parameter
We may take in names from the middle of a string so we want the caller to let us
know how long the path component is that we should be checking.
|
|
2143a0de
|
2018-05-22T14:16:45
|
|
checkout: add a failing test for refusing a symlinked .gitmodules
We want to reject these as they cause compatibility issues and can lead to git
writing to files outside of the repository.
|
|
dc5591b4
|
2018-05-18T15:16:53
|
|
path: hide the dotgit file functions
These can't go into the public API yet as we don't want to introduce API or ABI
changes in a security release.
|
|
4656e9c4
|
2018-05-16T15:42:08
|
|
path: add a function to detect a .gitmodules file
Given a path component, it knows what to pass to the filesystem-specific
functions so we're protected even from trees which try to use the 8.3 naming
rules to get around us matching on the filename exactly.
The logic and test strings come from the equivalent git change.
|
|
4a1753c2
|
2018-05-14T16:03:15
|
|
submodule: also validate Windows-separated paths for validity
Otherwise we would also admit `..\..\foo\bar` as a valid path and fail to
protect Windows users.
Ideally we would check for both separators without the need for the copied
string, but this'll get us over the RCE.
|
|
f77e40a1
|
2018-04-30T13:47:15
|
|
submodule: ignore submodules which include path traversal in their name
If we decide that the "name" of the submodule (i.e. its path inside
`.git/modules/`) is trying to escape that directory or otherwise trick us, we
ignore the configuration for that submodule.
This leaves us with a half-configured submodule when looking it up by path, but
it's the same result as if the configuration really were missing.
The name check is potentially more strict than it needs to be, but it lets us
re-use the check we're doing for the checkout. The function that encapsulates
this logic is ready to be exported but we don't want to do that in a security
release so it remains internal for now.
|
|
2fc15ae8
|
2018-04-30T13:03:44
|
|
submodule: add a failing test for a submodule escaping .git/modules
We should pretend such submodules do not exist, as it can lead to RCE.
|
|
8af6bce2
|
2018-03-20T07:47:27
|
|
online tests: update auth for bitbucket test
Update the settings to use a specific read-only token for accessing our
test repositories in Bitbucket.
|
|
999200cc
|
2018-03-19T09:20:35
|
|
online::clone: skip creds fallback test
At present, we have three online tests against bitbucket: one which
specifies the credentials in the payload, one which specifies the
correct credentials in the URL and a final one that specifies the
incorrect credentials in the URL. Bitbucket has begun responding to the
latter test with a 403, which causes us to fail.
Break these three tests into separate tests so that we can skip the
last one until this is resolved on Bitbucket's end or until we can change
the test to a different provider.
|
|
9e98f49d
|
2018-02-28T12:21:08
|
|
tree: initialize the id we use for testing submodule insertions
Instead of leaving it uninitialized and relying on luck for it to be non-zero,
let's give it a dummy hash so we make valgrind happy (in this case the hash
comes from `sha1sum </dev/null`).
|
|
1b853c48
|
2018-02-19T22:10:44
|
|
checkout test: further ensure workdir perms are updated
When both the index _and_ the working directory have changed
permissions on a file - but only the permissions,
such that the contents of the file are identical - ensure that
`git_checkout` updates the permissions to match the checkout target.
|
|
73615900
|
2018-02-19T22:09:27
|
|
checkout test: ensure workdir perms are updated
When the working directory has changed permissions on a file - but only
the permissions, such that the contents of the file are identical -
ensure that `git_checkout` updates the permissions to match the checkout
target.
|
|
e74e05ed
|
2018-02-20T10:38:27
|
|
diff_tform: fix rename detection with rewrite/delete pair
A rewritten file can be classified either as a modification of its
contents or as a deletion of the complete file followed by an addition of
the new content. This distinction becomes important when we want to
detect renames for rewrites. Given a scenario where a file "a" has been
deleted and another file "b" has been renamed to "a", this should be
detected as a deletion of "a" followed by a rename of "a" -> "b". Thus,
splitting of the original rewrite into a delete/add pair is important
here.
This splitting is represented by a flag we can set at the current delta.
While the flag is already being set in case we want to break rewrites,
we do not do so in the case where the `GIT_DIFF_FIND_RENAMES_FROM_REWRITES`
flag is set. This can trigger an assert when we try to match the source
and target deltas.
Fix the issue by setting the `GIT_DIFF_FLAG__TO_SPLIT` flag at the delta
when it is a rename target and `GIT_DIFF_FIND_RENAMES_FROM_REWRITES` is
set.
|
|
e229e90d
|
2018-02-20T10:03:48
|
|
tests: add rename-rewrite scenarios to "renames" repository
Add two more scenarios to the "renames" repository. The first scenario
has a major rewrite of a file and a delete of another file, the second
scenario has a deletion of a file and rename of another file to the
deleted file. Both scenarios will be used in the following commit.
|
|
be205dfa
|
2018-02-20T09:54:58
|
|
tests: diff::rename: use defines for commit OIDs
While we frequently reuse commit OIDs throughout the file, we do not
have any constants to refer to these commits. Make this a bit easier to
read by giving the commit OIDs somewhat descriptive names of what kind
of commit they refer to.
|
|
b3c0d43c
|
2018-01-22T14:44:31
|
|
merge: virtual commit should be last argument to merge-base
Our virtual commit must be the last argument to merge-base: because our
algorithm pushes _both_ parents of the virtual commit, it needs to come
last, since merge-base:
> Given three commits A, B and C, git merge-base A B C will compute the
> merge base between A and a hypothetical commit M
We want to calculate the merge base between the actual commit ("two")
and the virtual commit ("one") - since one actually pushes its parents
to the merge-base calculation, we need to calculate the merge base of
"two" and the parents of one.
|
|
3619e0f0
|
2018-01-22T23:56:22
|
|
Add failing test case for virtual commit merge base issue
|
|
dc51d774
|
2018-01-21T16:50:40
|
|
merge::trees::recursive: test for virtual base building
Virtual base building: ensure that the virtual base is created and
revwalked in the same way as git.
|
|
b2b37077
|
2018-01-21T18:05:45
|
|
merge: reverse merge bases for recursive merge
When the commits being merged have multiple merge bases, reverse the
order when creating the virtual merge base. This is for compatibility
with git's merge-recursive algorithm, and ensures that we build
identical trees.
Git does this to try to use older merge bases first. Per 8918b0c:
> It seems to be the only sane way to do it: when a two-head merge is
> done, and the merge-base and one of the two branches agree, the
> merge assumes that the other branch has something new.
>
> If we start creating virtual commits from newer merge-bases, and go
> back to older merge-bases, and then merge with newer commits again,
> chances are that a patch is lost, _because_ the merge-base and the
> head agree on it. Unlikely, yes, but it happened to me.
|
|
08ab5902
|
2018-01-21T16:41:49
|
|
Introduce additional criss-cross merge branches
|
|
e83efde4
|
2017-12-23T14:59:07
|
|
Fix unpack double free
If an element has been cached, but the subsequent call to
packfile_unpack_compressed() fails, its data is freed immediately; the
element, however, is not removed from the cache, which later frees the
data again.
This change sets obj->data to NULL to avoid the double-free. It also
stops trying to resolve deltas after two continuous failed rounds of
resolution, and adds a test for this.
|
|
a521f5b1
|
2017-12-15T10:47:01
|
|
diff_file: properly refcount blobs when initializing file contents
When initializing a `git_diff_file_content` from a source whose data is
derived from a blob, we simply assign the blob's pointer to the
resulting struct without incrementing its refcount. Thus, the structure
can only be used as long as the blob is kept alive by the caller.
Fix the issue by using `git_blob_dup` instead of a direct assignment.
This function will increment the refcount of the blob without allocating
new memory, so it does exactly what we want. As
`git_diff_file_content__unload` already frees the blob when
`GIT_DIFF_FLAG__FREE_BLOB` is set, we don't need to add new code
handling the free but only have to set that flag correctly.
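A sketch of the change (`fc` and `blob` are illustrative; `git_blob_dup`
is the real API used):

    /* take our own reference instead of borrowing the caller's blob */
    git_blob *dup;

    if (git_blob_dup(&dup, (git_blob *)blob) < 0)
        return -1;

    fc->blob   = dup;
    fc->flags |= GIT_DIFF_FLAG__FREE_BLOB; /* __unload frees our reference */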
|
|
a3cd5e94
|
2017-12-06T03:03:18
|
|
libFuzzer: Fix missing trailer crash
This change fixes an invalid memory access when the trailer is missing /
corrupt.
Found using libFuzzer.
|
|
5cc3971a
|
2017-12-06T03:22:58
|
|
libFuzzer: Fix a git_packfile_stream leak
This change ensures that the git_packfile_stream object in
git_indexer_append() does not leak when the stream has errors.
Found using libFuzzer.
|
|
68842cbb
|
2017-10-29T12:28:43
|
|
Ignore trailing whitespace in .gitignore files (as git itself does)
|
|
e66bc08c
|
2016-06-15T01:59:56
|
|
checkout: test force checkout when mode changes
Test that we can successfully force checkout a target when the file
contents are identical, but the mode has changed.
|
|
cda18f9b
|
2017-10-06T11:24:11
|
|
refs: do not use peeled OID if peeling to a tag
If a reference stored in a packed-refs file does not directly point to a
commit, tree or blob, the packed-refs file will also include a
fully-peeled OID pointing to the first underlying object of that type.
If we try to peel a reference to an object, we will use that peeled OID
to speed up resolving the object.
As a reference for an annotated tag does not directly point to a commit,
tree or blob but instead to the tag object, the packed-refs file will
have an accompanying fully-peeled OID pointing to the object referenced
by that tag. When we use the fully-peeled OID pointing to the referenced
object when peeling, we obviously cannot peel that to the tag anymore.
Fix this issue by not using the fully-peeled OID whenever we want to
peel to a tag. Note that this does not include the case where we want to
resolve to _any_ object type. Existing code may make use of the fact
that we resolve those to commit objects instead of tag objects, even
though that behaviour is inconsistent between packed and loose
references. Furthermore, some tests of ours make the assumption that we
in fact resolve those references to a commit.
|
|
4296a36b
|
2017-07-10T09:36:19
|
|
ignore: honor case insensitivity for negative ignores
When computing negative ignores, we throw away, as an optimization, any
rule which does not undo a previous rule. But on case-insensitive file systems,
we need to keep in mind that a negative ignore can also undo a previous
rule with different case, which we did not yet honor while determining
whether a rule undoes a previous one. So in the following example, we
fail to unignore the "/Case" directory:
/case
!/Case
Make both paths that check whether a plain or wildcard-based rule undoes
a previous rule aware of case insensitivity. This fixes the described
issue.
|
|
32cc5edc
|
2017-07-07T17:10:57
|
|
tests: status: additional test for negative ignores with pattern
This test is by Carlos Martín Nieto.
|
|
5c15cd94
|
2017-07-07T13:27:27
|
|
ignore: keep negative rules containing wildcards
Ignore rules allow for reverting a previously ignored rule by prefixing
it with an exclamation mark. As such, a negative rule can only override
previously ignored files. While computing all ignore patterns, we try to
use this fact to optimize away some negative rules which do not override
any previous patterns, as they won't change the outcome anyway.
In some cases, though, this optimization causes us to get the actual
ignores wrong for some files. This may happen whenever the pattern
contains a wildcard, as we are unable to reason about whether a pattern
overrides a previous pattern in a sane way. This happens for example in
the case where a gitignore file contains "*.c" and "!src/*.c", where we
wouldn't un-ignore files inside of the "src/" subdirectory.
In this case, the first solution coming to mind may be to just strip the
"src/" prefix and simply compare the basenames. While that would work
here, it would stop working as soon as the basename pattern itself is
different, like for example with "*x.c" and "!src/*.c". As such, we
settle for the easier fix of just not optimizing away rules that contain
a wildcard.
|
|
e7c24ea2
|
2017-07-20T21:00:15
|
|
tests: fix the rebase-submodule test
|
|
54d4e5de
|
2017-06-21T14:57:30
|
|
Remove invalid submodule
Fixes #4274
|
|
cc9b0b6c
|
2017-06-16T21:05:58
|
|
tests: try to init with empty template path
|
|
8296da5f
|
2017-06-14T10:49:28
|
|
Merge pull request #4267 from mohseenrm/master
adding GIT_FILTER_VERSION to GIT_FILTER_INIT as part of convention
|
|
a78441bc
|
2017-06-13T11:05:40
|
|
Adding git_filter_init for initializing `git_filter` struct + unit test
|
|
a180e7d9
|
2017-06-13T11:10:19
|
|
tests: odb: add more low-level backend tests
Introduce a new test suite "odb::backend::simple", which utilizes the
fake backend to exercise the ODB abstraction layer. While such tests
already exist for the case where multiple backends are put together, no
direct testing of functionality with a single backend exists yet.
|
|
b2e53f36
|
2017-06-13T11:39:36
|
|
tests: odb: implement `exists_prefix` for the fake backend
The fake backend currently implements all reading functions except for
the `exists_prefix` one. Implement it to enable further testing of the
ODB layer.
|
|
983e627d
|
2017-06-13T11:38:59
|
|
tests: odb: use correct OID length
The `search_object` function takes the OID length as one of its
parameters, where its maximum length is `GIT_OID_HEXSZ`. The `exists`
function of the fake backend used `GIT_OID_RAWSZ` though, leading to
only the first half of the OID being used when finding the correct
object.
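For reference, the two constants from libgit2's headers:

    /* GIT_OID_RAWSZ = 20: size of a SHA-1 in raw bytes
     * GIT_OID_HEXSZ = 40: size of a SHA-1 in hex characters
     * Passing GIT_OID_RAWSZ as a hex length therefore truncates the
     * query to the first 20 hex digits, i.e. half the OID. */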
|
|
c4cbb3b1
|
2017-06-13T11:38:14
|
|
tests: odb: have the fake backend detect ambiguous prefixes
In order to be able to test the ODB prefix functions, we need to be able
to detect ambiguous prefixes in case multiple objects with the same
prefix exist in the fake ODB. Extend `search_object` to detect ambiguous
queries and have callers return its error code instead of always
returning `GIT_ENOTFOUND`.
|
|
95170294
|
2017-06-13T11:08:28
|
|
tests: core: test initialization of `git_proxy_options`
Initialization of the `git_proxy_options` structure is never tested
anywhere. Include it in our usual initialization test in
"core::structinit::compare".
|
|
bee423cc
|
2017-06-13T10:29:23
|
|
tests: network: add missing include for `git_repository_new`
A newly added test uses the `git_repository_new` function without the
corresponding header file being included. While this happens to work
because the compiler falls back to an implicit declaration, we should
obviously just include the file that declares the function.
|
|
2ca088bd
|
2017-06-12T22:47:54
|
|
Merge pull request #4265 from pks-t/pks/read-prefix-tests
Read prefix tests
|
|
fe9a5dd3
|
2017-06-12T12:00:14
|
|
remote: ensure we can create an anon remote on inmemory repo
Given a wholly in-memory repository, ensure that we can create an
anonymous remote and perform actions on it.
|
|
f148258a
|
2017-06-12T16:19:45
|
|
tests: odb: add tests with multiple backends
Prior to pulling out and extending the fake backend, it was quite
cumbersome to write tests for very specific scenarios regarding
backends. But as we have made it more generic, it has become much easier
to do so. As such, this commit adds multiple tests for scenarios with
multiple backends for the ODB.
The changes also include a test for a very targeted scenario. When one
backend finds a matching object via `read_prefix` but the last backend
returns `GIT_ENOTFOUND`, and object hash verification is turned off, we
fail to reset the error code to `GIT_OK`. This causes us to segfault
later on, when doing a double-free on the returned object.
|
|
6e010bb1
|
2017-06-12T15:43:56
|
|
tests: odb: allow passing fake objects to the fake backend
Right now, the fake backend is quite constrained in the way it works:
we pass it an OID which it is to return later, as well as an error
code we want it to return. While this is sufficient for existing tests,
we can make the fake backend a little bit more generic in order to allow
us testing for additional scenarios.
To do so, we change the backend to not accept an error code and OID
which it is to return for queries, but instead a simple array of OIDs
with their respective blob contents. On each query, the fake backend
simply iterates through this array and returns the first matching
object.
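A sketch of what the helper's input might look like (names illustrative):

    #include <git2/oid.h>

    /* one entry per fake object the backend can serve */
    typedef struct {
        git_oid oid;
        const char *content; /* blob contents returned on a matching read */
    } fake_object;

    /* reads iterate the array and return the first entry whose OID
     * matches the query; no match yields GIT_ENOTFOUND */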
|
|
369cb45f
|
2017-06-12T15:21:58
|
|
tests: do not reuse OID from backend
In order to make the fake backend more useful, we want to enable it to
hold multiple object references. To do so, we need to decouple it
from the single fake OID it currently holds, which we simply move up
into the calling tests.
|
|
2add34d0
|
2017-06-12T14:53:46
|
|
tests: odb: move fake backend into its own file
The fake backend used by the test suite `odb::backend::nonrefreshing` is
useful for writing low-level tests against the ODB layer. As such, we move
the implementation into its own `backend_helpers` module.
|
|
6f960b55
|
2017-06-11T10:37:46
|
|
Merge pull request #4088 from chescock/packfile-name-using-complete-hash
Ensure packfiles with different contents have different names
|
|
d2c4f764
|
2017-06-11T09:54:04
|
|
Merge pull request #4260 from libgit2/ethomson/forced_checkout_2
Update to forced checkout and untracked files
|
|
0ef405b3
|
2017-02-15T14:05:10
|
|
checkout: do not delete directories with untracked entries
If the `GIT_CHECKOUT_FORCE` flag is given to any of the `git_checkout`
invocations, we remove files which were previously staged. But while
doing so, we unfortunately also remove unstaged files in a directory
which contains at least one staged file, resulting in potential data
loss.
This commit adds two tests to verify behavior.
|
|
6c23704d
|
2017-06-08T21:40:18
|
|
settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION`
Initially, the setting has been solely used to enable the use of
`fsync()` when creating objects. Since then, the use has been extended
to also cover references and index files. As the option is not yet part
of any release, we can still correct this by renaming the option to
something more sensible that does not suggest it applies only to objects.
This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also
move the variable from the object to repository source code.
|
|
92d3ea4e
|
2017-05-19T13:04:32
|
|
tests: index::version: improve write test for index v4
The current write test does not trigger some edge-cases in the index
version 4 path compression code. Rewrite the test to start off the an
empty standard repository, creating index entries with interesting paths
itself. This allows for more fine-grained control over checked paths.
Furthermore, we now also verify that entry paths are actually
reconstructed correctly.
|
|
8fe33538
|
2017-05-19T12:45:48
|
|
tests: index::version: verify we write compressed index entries
While we do have a test which checks whether a written index of version
4 has the correct version set, we do not check whether this actually
enables path compression for index entries. This commit adds a new test
by adding a number of index entries with equal path prefixes to the
index and subsequently flushing that to disk. With prefix compression
enabled by index version 4, only the last few bytes of each of these
paths actually have to be written to the index, saving a lot of disk space.
For the test, differences are about an order of magnitude, allowing us
to easily verify without taking a deeper look at actual on-disk
contents.
|
|
82368b1b
|
2017-05-12T10:04:42
|
|
tests: index::version: add test to read index version v4
While we have a simple test to determine whether we can write an index
of version 4, we never verified that we are able to read this kind of
index (and in fact, we were not able to do so). Add a new repository
which has an index of version 4. This repository is then read from a new
test.
|
|
fea0c81e
|
2017-05-12T09:09:07
|
|
tests: index::version: move up cleanup function
The init and cleanup functions for test suites are usually prepended to
our actual tests. The index::version test suite does not adhere to this
style. Fix this.
|
|
8a5e7aae
|
2017-05-22T12:53:44
|
|
varint: fix computation for remaining buffer space
When encoding varints to a buffer, we want to be sure that the
required buffer space does not exceed what is actually available. Our
current check does not do the right thing, though, in that it does not
honor that our `pos` variable counts the position down instead of up. As
such, we will require too much memory for small varints and not enough
memory for big varints.
Fix the issue by correctly calculating the required size as
`(sizeof(varint) - pos)`. Add a test which failed before.
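A sketch of the corrected check in context (modeled on git's varint
encoding; error handling simplified):

    #include <stddef.h>
    #include <string.h>

    static int encode_varint(unsigned char *buf, size_t bufsize,
                             unsigned long value)
    {
        unsigned char varint[16];
        size_t pos = sizeof(varint) - 1; /* counts down, not up */

        varint[pos] = value & 127;
        while (value >>= 7)
            varint[--pos] = 128 | (--value & 127);

        /* required space is the number of bytes used, not `pos` itself */
        if (bufsize < sizeof(varint) - pos)
            return -1;

        memcpy(buf, varint + pos, sizeof(varint) - pos);
        return (int)(sizeof(varint) - pos);
    }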
|
|
dd0aa811
|
2017-06-04T22:46:07
|
|
Merge branch 'pr/4228'
|
|
82e929a8
|
2017-06-04T19:35:39
|
|
Merge pull request #4239 from roblg/toplevel-dir-ignore-fix
Fix issue with directory glob ignore in subdirectories
|
|
04de614b
|
2017-06-04T19:03:07
|
|
Merge pull request #4243 from pks-t/pks/submodule-workdir
Submodule working directory
|
|
a1023a43
|
2017-05-20T17:18:07
|
|
Merge pull request #4179 from libgit2/ethomson/expand_tilde
Introduce home directory expansion function for config files, attribute files
|
|
e694e4e9
|
2017-05-20T14:17:36
|
|
Merge pull request #4174 from libgit2/ethomson/set_head_to_tag
git_repository_set_head: use tag name in reflog
|
|
119bdd86
|
2017-05-20T14:13:27
|
|
Merge pull request #4231 from wabain/open-revrange
revparse: support open-ended ranges
|
|
c0e54155
|
2017-01-11T10:39:59
|
|
indexer: name pack files after trailer hash
Upstream git.git has changed the way packfiles are named.
Previously, they used a hash of the contained objects' OIDs; this was
then changed to use the hash of the complete packfile instead.
See 1190a1acf (pack-objects: name pack files after trailer hash,
2013-12-05) in the git.git repository for more information on this
change.
This commit changes our logic to match the behavior of core git.
|
|
2696c5c3
|
2017-05-19T09:21:17
|
|
repository: make check if repo is a worktree more strict
To determine if a repository is a worktree or not, we currently check
for the existence of a "gitdir" file inside of the repository's gitdir.
While this is sufficient for non-broken repositories, we have at least
one case of a subtly broken repository where there exists a gitdir file
inside of a gitmodule. This will cause us to misidentify the submodule
as a worktree.
While this is not really a fault of ours, we can do better here by
observing that a repository can only ever be a worktree iff its common
directory and dotgit directory are different. This allows us to make our
check whether a repo is a worktree or not more strict by doing a simple
string comparison of these two directories. This will also allow us to
do the right thing in the above case of a broken repository, as for
submodules these directories will be the same. At the same time, this
allows us to skip the `stat` check for the "gitdir" file for most
repositories.
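The essence of the stricter check, sketched (assumes both paths are
normalized the same way):

    #include <string.h>

    /* a repository is a worktree iff its gitdir differs from its common
     * directory; for submodules the two are identical */
    static int repo_is_worktree(const char *gitdir, const char *commondir)
    {
        return strcmp(gitdir, commondir) != 0;
    }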
|
|
c3b8e8b3
|
2017-05-14T10:28:05
|
|
Fix issue with directory glob ignore in subdirectories
|
|
e526fbc7
|
2017-05-17T09:23:06
|
|
tests: add test suite for opening submodules
|
|
98a5f081
|
2017-05-03T13:53:13
|
|
tests: threads::basic: remove unused function `exit_abruptly`
|
|
7d7f6d33
|
2017-05-03T13:52:55
|
|
tests: clone::local: compile UNC functions for Windows only
|