|
dc5591b4
|
2018-05-18T15:16:53
|
|
path: hide the dotgit file functions
These can't go into the public API yet as we don't want to introduce API or ABI
changes in a security release.
|
|
2fc15ae8
|
2018-04-30T13:03:44
|
|
submodule: add a failing test for a submodule escaping .git/modules
We should pretend such submodules do not exist, as admitting them can lead to RCE.
|
|
8af6bce2
|
2018-03-20T07:47:27
|
|
online tests: update auth for bitbucket test
Update the settings to use a specific read-only token for accessing our
test repositories in Bitbucket.
|
|
999200cc
|
2018-03-19T09:20:35
|
|
online::clone: skip creds fallback test
At present, we have three online tests against bitbucket: one which
specifies the credentials in the payload, one which specifies the
correct credentials in the URL and a final one that specifies the
incorrect credentials in the URL. Bitbucket has begun responding to the
latter test with a 403, which causes us to fail.
Break these three tests into separate tests so that we can skip the
latter until this is resolved on Bitbucket's end or until we can change
the test to a different provider.
|
|
4656e9c4
|
2018-05-16T15:42:08
|
|
path: add a function to detect a .gitmodules file
Given a path component, it knows what to pass to the filesystem-specific
functions, so we are protected even from trees which try to use the 8.3 naming
rules to get around an exact match on the filename.
The logic and test strings come from the equivalent git change.
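A minimal, hypothetical sketch of the kind of check this adds. The helper name `is_dotgit_modules` is illustrative, not the actual libgit2 symbol, and the real checks from git.git also cover HFS+ ignorable code points and further short-name variants; this only shows the exact-name and 8.3 short-name cases.

    #include <ctype.h>
    #include <stdio.h>

    static int equals_icase(const char *a, const char *b)
    {
        for (; *a && *b; a++, b++)
            if (tolower((unsigned char)*a) != tolower((unsigned char)*b))
                return 0;
        return *a == *b;
    }

    /* hypothetical helper name, not the actual libgit2 symbol */
    static int is_dotgit_modules(const char *component)
    {
        return equals_icase(component, ".gitmodules") ||
               equals_icase(component, "gitmod~1");   /* 8.3 short-name form */
    }

    int main(void)
    {
        printf("%d %d %d\n",
               is_dotgit_modules(".GITMODULES"),
               is_dotgit_modules("GITMOD~1"),
               is_dotgit_modules("gitmodules.txt"));
        return 0;
    }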
|
|
4a1753c2
|
2018-05-14T16:03:15
|
|
submodule: also validate Windows-separated paths
Otherwise we would also admit `..\..\foo\bar` as a valid path and fail to
protect Windows users.
Ideally we would check for both separators without needing the copied string,
but this is enough to address the RCE.
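A self-contained sketch of the idea, assuming a simple helper that treats both '/' and '\' as separators so `..\..\foo\bar` is rejected like `../../foo/bar`. Names are illustrative, not the libgit2 internals.

    #include <stdio.h>
    #include <string.h>

    static int has_dotdot_component(const char *path)
    {
        const char *p = path;

        while (*p) {
            size_t len = strcspn(p, "/\\");      /* component length up to either separator */
            if (len == 2 && p[0] == '.' && p[1] == '.')
                return 1;
            p += len;
            if (*p)
                p++;                             /* skip the separator */
        }
        return 0;
    }

    int main(void)
    {
        printf("%d %d\n",
               has_dotdot_component("..\\..\\foo\\bar"),   /* 1: rejected */
               has_dotdot_component("sub/module"));        /* 0: fine     */
        return 0;
    }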
|
|
f77e40a1
|
2018-04-30T13:47:15
|
|
submodule: ignore submodules which include path traversal in their name
If we decide that the "name" of the submodule (i.e. its path inside
`.git/modules/`) is trying to escape that directory or otherwise trick us, we
ignore the configuration for that submodule.
This leaves us with a half-configured submodule when looking it up by path, but
it's the same result as if the configuration really were missing.
The name check is potentially more strict than it needs to be, but it lets us
re-use the check we're doing for the checkout. The function that encapsulates
this logic is ready to be exported but we don't want to do that in a security
release so it remains internal for now.
|
|
e83efde4
|
2017-12-23T14:59:07
|
|
Fix unpack double free
If an element has been cached but the subsequent call to
packfile_unpack_compressed() fails, its data is freed immediately, yet the
element is not removed from the cache, so the cache frees the data a second
time.
This change sets obj->data to NULL to avoid the double-free. It also
stops trying to resolve deltas after two continuous failed rounds of
resolution, and adds a test for this.
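A toy, self-contained illustration of the ownership pattern behind the fix (not the libgit2 code): once the cache owns the buffer, the local pointer is cleared so a later error path cannot free it again.

    #include <stdlib.h>

    struct raw_obj { void *data; };
    struct cache   { void *owned; };

    static void cache_put(struct cache *c, struct raw_obj *o)
    {
        c->owned = o->data;
        o->data = NULL;            /* the fix: drop local ownership */
    }

    static void cache_free(struct cache *c) { free(c->owned); }

    int main(void)
    {
        struct cache c = { 0 };
        struct raw_obj o = { malloc(16) };

        cache_put(&c, &o);
        free(o.data);              /* harmless free(NULL), not a double free */
        cache_free(&c);
        return 0;
    }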
|
|
a521f5b1
|
2017-12-15T10:47:01
|
|
diff_file: properly refcount blobs when initializing file contents
When initializing a `git_diff_file_content` from a source whose data is
derived from a blob, we simply assign the blob's pointer to the
resulting struct without incrementing its refcount. Thus, the structure
can only be used as long as the blob is kept alive by the caller.
Fix the issue by using `git_blob_dup` instead of a direct assignment.
This function will increment the refcount of the blob without allocating
new memory, so it does exactly what we want. As
`git_diff_file_content__unload` already frees the blob when
`GIT_DIFF_FLAG__FREE_BLOB` is set, we don't need to add new code
handling the free but only have to set that flag correctly.
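A small sketch of the pattern using the public API; `hold_blob` is a hypothetical wrapper, but `git_blob_dup` is the real call that bumps the refcount without copying data. The `GIT_DIFF_FLAG__FREE_BLOB` flag mentioned above is internal to libgit2.

    #include <git2.h>

    /* Instead of copying the raw pointer, take a reference of our own; the
     * blob is later released with git_blob_free() when the file content is
     * unloaded. */
    static int hold_blob(git_blob **out, git_blob *src)
    {
        return git_blob_dup(out, src);   /* increments refcount, no data copy */
    }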
|
|
1b853c48
|
2018-02-19T22:10:44
|
|
checkout test: further ensure workdir perms are updated
When both the index _and_ the working directory have changed
permissions on a file - but only the permissions,
such that the contents of the file are identical - ensure that
`git_checkout` updates the permissions to match the checkout target.
|
|
a3cd5e94
|
2017-12-06T03:03:18
|
|
libFuzzer: Fix missing trailer crash
This change fixes an invalid memory access when the trailer is missing /
corrupt.
Found using libFuzzer.
|
|
73615900
|
2018-02-19T22:09:27
|
|
checkout test: ensure workdir perms are updated
When the working directory has changed permissions on a file - but only
the permissions, such that the contents of the file are identical -
ensure that `git_checkout` updates the permissions to match the checkout
target.
|
|
5cc3971a
|
2017-12-06T03:22:58
|
|
libFuzzer: Fix a git_packfile_stream leak
This change ensures that the git_packfile_stream object in
git_indexer_append() does not leak when the stream has errors.
Found using libFuzzer.
|
|
e74e05ed
|
2018-02-20T10:38:27
|
|
diff_tform: fix rename detection with rewrite/delete pair
A rewritten file can be classified either as a modification of its
contents or as a deletion of the complete file followed by an addition of
the new content. This distinction becomes important when we want to
detect renames for rewrites. Given a scenario where a file "a" has been
deleted and another file "b" has been renamed to "a", this should be
detected as a deletion of "a" followed by a rename of "a" -> "b". Thus,
splitting of the original rewrite into a delete/add pair is important
here.
This splitting is represented by a flag we can set at the current delta.
While the flag is already being set in case we want to break rewrites,
we do not do so in case where the `GIT_DIFF_FIND_RENAMES_FROM_REWRITES`
flag is set. This can trigger an assert when we try to match the source
and target deltas.
Fix the issue by setting the `GIT_DIFF_FLAG__TO_SPLIT` flag at the delta
when it is a rename target and `GIT_DIFF_FIND_RENAMES_FROM_REWRITES` is
set.
|
|
68842cbb
|
2017-10-29T12:28:43
|
|
Ignore trailing whitespace in .gitignore files (as git itself does)
|
|
e229e90d
|
2018-02-20T10:03:48
|
|
tests: add rename-rewrite scenarios to "renames" repository
Add two more scenarios to the "renames" repository. The first scenario
has a major rewrite of a file and a delete of another file, the second
scenario has a deletion of a file and rename of another file to the
deleted file. Both scenarios will be used in the following commit.
|
|
e66bc08c
|
2016-06-15T01:59:56
|
|
checkout: test force checkout when mode changes
Test that we can successfully force checkout a target when the file
contents are identical, but the mode has changed.
|
|
be205dfa
|
2018-02-20T09:54:58
|
|
tests: diff::rename: use defines for commit OIDs
While we frequently reuse commit OIDs throughout the file, we do not
have any constants to refer to these commits. Make this a bit easier to
read by giving the commit OIDs somewhat descriptive names of what kind
of commit they refer to.
|
|
b3c0d43c
|
2018-01-22T14:44:31
|
|
merge: virtual commit should be last argument to merge-base
Our virtual commit must be the last argument to merge-base: since our
algorithm pushes _both_ parents of the virtual commit, it needs to go
last, because per merge-base:
> Given three commits A, B and C, git merge-base A B C will compute the
> merge base between A and a hypothetical commit M
We want to calculate the merge base between the actual commit ("two")
and the virtual commit ("one") - since one actually pushes its parents
to the merge-base calculation, we need to calculate the merge base of
"two" and the parents of one.
|
|
cda18f9b
|
2017-10-06T11:24:11
|
|
refs: do not use peeled OID if peeling to a tag
If a reference stored in a packed-refs file does not directly point to a
commit, tree or blob, the packed-refs file will also include a
fully-peeled OID pointing to the first underlying object of that type.
If we try to peel a reference to an object, we will use that peeled OID
to speed up resolving the object.
As a reference for an annotated tag does not directly point to a commit,
tree or blob but instead to the tag object, the packed-refs file will
have an accompanying fully-peeled OID pointing to the object referenced
by that tag. When we use the fully-peeled OID pointing to the referenced
object when peeling, we obviously cannot peel that to the tag anymore.
Fix this issue by not using the fully-peeled OID whenever we want to
peel to a tag. Note that this does not include the case where we want to
resolve to _any_ object type. Existing code may make use of the fact
that we resolve those to commit objects instead of tag objects, even
though that behaviour is inconsistent between packed and loose
references. Furthermore, some tests of ours make the assumption that we
in fact resolve those references to a commit.
|
|
3619e0f0
|
2018-01-22T23:56:22
|
|
Add failing test case for virtual commit merge base issue
|
|
dc51d774
|
2018-01-21T16:50:40
|
|
merge::trees::recursive: test for virtual base building
Virtual base building: ensure that the virtual base is created and
revwalked in the same way as git.
|
|
b2b37077
|
2018-01-21T18:05:45
|
|
merge: reverse merge bases for recursive merge
When the commits being merged have multiple merge bases, reverse the
order when creating the virtual merge base. This is for compatibility
with git's merge-recursive algorithm, and ensures that we build
identical trees.
Git does this to try to use older merge bases first. Per 8918b0c:
> It seems to be the only sane way to do it: when a two-head merge is
> done, and the merge-base and one of the two branches agree, the
> merge assumes that the other branch has something new.
>
> If we start creating virtual commits from newer merge-bases, and go
> back to older merge-bases, and then merge with newer commits again,
> chances are that a patch is lost, _because_ the merge-base and the
> head agree on it. Unlikely, yes, but it happened to me.
|
|
9e98f49d
|
2018-02-28T12:21:08
|
|
tree: initialize the id we use for testing submodule insertions
Instead of leaving it uninitialized and relying on luck for it to be non-zero,
let's give it a dummy hash to keep valgrind happy (in this case the hash
comes from `sha1sum </dev/null`).
|
|
4296a36b
|
2017-07-10T09:36:19
|
|
ignore: honor case insensitivity for negative ignores
When computing negative ignores, as an optimization we throw away any rule
which does not undo a previous rule. But on case-insensitive file systems,
we need to keep in mind that a negative ignore can also undo a previous
rule with different case, which we did not yet honor while determining
whether a rule undoes a previous one. So in the following example, we
fail to unignore the "/Case" directory:
/case
!/Case
Make both code paths that check whether a plain- or wildcard-based rule undoes
a previous rule aware of case insensitivity. This fixes the described
issue.
|
|
08ab5902
|
2018-01-21T16:41:49
|
|
Introduce additional criss-cross merge branches
|
|
32cc5edc
|
2017-07-07T17:10:57
|
|
tests: status: additional test for negative ignores with pattern
This test is by Carlos Martín Nieto.
|
|
5c15cd94
|
2017-07-07T13:27:27
|
|
ignore: keep negative rules containing wildcards
Ignore rules allow for reverting a previously ignored rule by prefixing
it with an exclamation mark. As such, a negative rule can only override
previously ignored files. While computing all ignore patterns, we try to
use this fact to optimize away some negative rules which do not override
any previous patterns, as they won't change the outcome anyway.
In some cases, though, this optimization causes us to get the actual
ignores wrong for some files. This may happen whenever the pattern
contains a wildcard, as we are unable to reason about whether a pattern
overrides a previous pattern in a sane way. This happens for example in
the case where a gitignore file contains "*.c" and "!src/*.c", where we
wouldn't un-ignore files inside of the "src/" subdirectory.
In this case, the first solution coming to mind may be to just strip the
"src/" prefix and simply compare the basenames. While that would work
here, it would stop working as soon as the basename pattern itself is
different, like for example with "*x.c" and "!src/*.c". As such, we
settle for the easier fix of just not optimizing away rules that contain
a wildcard.
|
|
e7c24ea2
|
2017-07-20T21:00:15
|
|
tests: fix the rebase-submodule test
|
|
54d4e5de
|
2017-06-21T14:57:30
|
|
Remove invalid submodule
Fixes #4274
|
|
cc9b0b6c
|
2017-06-16T21:05:58
|
|
tests: try to init with empty template path
|
|
8296da5f
|
2017-06-14T10:49:28
|
|
Merge pull request #4267 from mohseenrm/master
adding GIT_FILTER_VERSION to GIT_FILTER_INIT as part of convention
|
|
a78441bc
|
2017-06-13T11:05:40
|
|
Adding git_filter_init for initializing `git_filter` struct + unit test
|
|
a180e7d9
|
2017-06-13T11:10:19
|
|
tests: odb: add more low-level backend tests
Introduce a new test suite "odb::backend::simple", which utilizes the
fake backend to exercise the ODB abstraction layer. While such tests
already exist for the case where multiple backends are put together, no
direct testing for functionality with a single backend exists yet.
|
|
b2e53f36
|
2017-06-13T11:39:36
|
|
tests: odb: implement `exists_prefix` for the fake backend
The fake backend currently implements all reading functions except for
the `exists_prefix` one. Implement it to enable further testing of the
ODB layer.
|
|
983e627d
|
2017-06-13T11:38:59
|
|
tests: odb: use correct OID length
The `search_object` function takes the OID length as one of its
parameters, where its maximum length is `GIT_OID_HEXSZ`. The `exists`
function of the fake backend used `GIT_OID_RAWSZ` though, leading to
only the first half of the OID being used when finding the correct
object.
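For reference, the two constants differ by a factor of two; a tiny program makes the distinction concrete (assuming the classic SHA-1 values of 20 raw bytes and 40 hex characters).

    #include <git2.h>
    #include <stdio.h>

    int main(void)
    {
        /* passing the raw size where a hex length is expected effectively
         * truncates the prefix to half the OID */
        printf("raw: %d bytes, hex: %d chars\n", GIT_OID_RAWSZ, GIT_OID_HEXSZ);
        return 0;
    }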
|
|
c4cbb3b1
|
2017-06-13T11:38:14
|
|
tests: odb: have the fake backend detect ambiguous prefixes
In order to be able to test the ODB prefix functions, we need to be able
to detect ambiguous prefixes in case multiple objects with the same
prefix exist in the fake ODB. Extend `search_object` to detect ambiguous
queries and have callers return its error code instead of always
returning `GIT_ENOTFOUND`.
|
|
95170294
|
2017-06-13T11:08:28
|
|
tests: core: test initialization of `git_proxy_options`
Initialization of the `git_proxy_options` structure is never tested
anywhere. Include it in our usual initialization test in
"core::structinit::compare".
|
|
bee423cc
|
2017-06-13T10:29:23
|
|
tests: network: add missing include for `git_repository_new`
A newly added test uses the `git_repository_new` function without the
corresponding header file being included. While this works due to the
compiler deducing the correct function signature, we should obviously
just include the function's declaration file.
|
|
2ca088bd
|
2017-06-12T22:47:54
|
|
Merge pull request #4265 from pks-t/pks/read-prefix-tests
Read prefix tests
|
|
fe9a5dd3
|
2017-06-12T12:00:14
|
|
remote: ensure we can create an anon remote on inmemory repo
Given a wholly in-memory repository, ensure that we can create an
anonymous remote and perform actions on it.
|
|
f148258a
|
2017-06-12T16:19:45
|
|
tests: odb: add tests with multiple backends
Prior to pulling out and extending the fake backend, it was quite
cumbersome to write tests for very specific scenarios regarding
backends. But as we have made it more generic, it has become much easier
to do so. As such, this commit adds multiple tests for scenarios with
multiple backends for the ODB.
The changes also include a test for a very targeted scenario. When one
backend finds a matching object via `read_prefix` but the last backend
returns `GIT_ENOTFOUND`, and object hash verification is turned off,
we fail to reset the error code to `GIT_OK`. This causes us to segfault
later on, when doing a double-free on the returned object.
|
|
6e010bb1
|
2017-06-12T15:43:56
|
|
tests: odb: allow passing fake objects to the fake backend
Right now, the fake backend is quite restricted in the way it
works: we pass it an OID which it is to return later as well as an error
code we want it to return. While this is sufficient for existing tests,
we can make the fake backend a little bit more generic in order to allow
us to test additional scenarios.
To do so, we change the backend to not accept an error code and OID
which it is to return for queries, but instead a simple array of OIDs
with their respective blob contents. On each query, the fake backend
simply iterates through this array and returns the first matching
object.
|
|
369cb45f
|
2017-06-12T15:21:58
|
|
tests: do not reuse OID from backend
In order to make the fake backend more useful, we want to enable it
holding multiple object references. To do so, we need to decouple it
from the single fake OID it currently holds, which we simply move up
into the calling tests.
|
|
2add34d0
|
2017-06-12T14:53:46
|
|
tests: odb: move fake backend into its own file
The fake backend used by the test suite `odb::backend::nonrefreshing` is
useful to have some low-level tests for the ODB layer. As such, we move
the implementation into its own `backend_helpers` module.
|
|
6f960b55
|
2017-06-11T10:37:46
|
|
Merge pull request #4088 from chescock/packfile-name-using-complete-hash
Ensure packfiles with different contents have different names
|
|
d2c4f764
|
2017-06-11T09:54:04
|
|
Merge pull request #4260 from libgit2/ethomson/forced_checkout_2
Update to forced checkout and untracked files
|
|
0ef405b3
|
2017-02-15T14:05:10
|
|
checkout: do not delete directories with untracked entries
If the `GIT_CHECKOUT_FORCE` flag is given to any of the `git_checkout`
invocations, we remove files which were previously staged. But while
doing so, we unfortunately also remove unstaged files in a directory
which contains at least one staged file, resulting in potential data
loss.
This commit adds two tests to verify behavior.
|
|
6c23704d
|
2017-06-08T21:40:18
|
|
settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION`
Initially, the setting was solely used to enable the use of
`fsync()` when creating objects. Since then, its use has been extended
to also cover references and index files. As the option is not yet part
of any release, we can still correct this by renaming it to
something more sensible that does not suggest it applies only to objects.
This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also
move the variable from the object to repository source code.
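A usage sketch, assuming a libgit2 build where the renamed option is available through the global options API.

    #include <git2.h>

    int main(void)
    {
        git_libgit2_init();

        /* fsync writes to references, the index and objects in the gitdir */
        git_libgit2_opts(GIT_OPT_ENABLE_FSYNC_GITDIR, 1);

        /* ... normal repository work ... */

        git_libgit2_shutdown();
        return 0;
    }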
|
|
92d3ea4e
|
2017-05-19T13:04:32
|
|
tests: index::version: improve write test for index v4
The current write test does not trigger some edge-cases in the index
version 4 path compression code. Rewrite the test to start off with an
empty standard repository, creating index entries with interesting paths
itself. This allows for more fine-grained control over checked paths.
Furthermore, we now also verify that entry paths are actually
reconstructed correctly.
|
|
8fe33538
|
2017-05-19T12:45:48
|
|
tests: index::version: verify we write compressed index entries
While we do have a test which checks whether a written index of version
4 has the correct version set, we do not check whether this actually
enables path compression for index entries. This commit adds a new test
by adding a number of index entries with equal path prefixes to the
index and subsequently flushing that to disk. With suffix compression
enabled by index version 4, only the last few bytes of these paths will
actually have to be written to the index, saving a lot of disk space.
For the test, differences are about an order of magnitude, allowing us
to easily verify without taking a deeper look at actual on-disk
contents.
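A conceptual, standalone sketch of what version-4 path compression stores per entry (not the libgit2 code): the number of bytes to strip from the previous entry's path plus the new suffix, so long shared prefixes are written only once.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *prev = "some/deeply/nested/dir/a.c";
        const char *next = "some/deeply/nested/dir/b.c";

        size_t common = 0;
        while (prev[common] && prev[common] == next[common])
            common++;

        /* an entry on disk is roughly (bytes to strip from prev, suffix) */
        printf("strip %zu, append \"%s\"\n", strlen(prev) - common, next + common);
        return 0;
    }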
|
|
82368b1b
|
2017-05-12T10:04:42
|
|
tests: index::version: add test to read index version v4
While we have a simple test to determine whether we can write an index
of version 4, we never verified that we are able to read this kind of
index (and in fact, we were not able to do so). Add a new repository
which has an index of version 4. This repository is then read from a new
test.
|
|
fea0c81e
|
2017-05-12T09:09:07
|
|
tests: index::version: move up cleanup function
The init and cleanup functions for test suites are usually prepended to
our actual tests. The index::version test suite does not adhere to this
style. Fix this.
|
|
8a5e7aae
|
2017-05-22T12:53:44
|
|
varint: fix computation for remaining buffer space
When encoding varints to a buffer, we want to remain sure that the
required buffer space does not exceed what is actually available. Our
current check does not do the right thing, though, in that it does not
honor that our `pos` variable counts the position down instead of up. As
such, we will require too much memory for small varints and not enough
memory for big varints.
Fix the issue by correctly calculating the required size as
`(sizeof(varint) - pos)`. Add a test which failed before.
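A self-contained sketch of the encoder shape described above (not the exact libgit2 function): `pos` counts down from the end of a scratch buffer, so the space actually needed is `sizeof(scratch) - pos`, and that is what must be compared against the caller's buffer size.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static int encode_varint(unsigned char *buf, size_t bufsize, uintmax_t value)
    {
        unsigned char scratch[16];
        size_t pos = sizeof(scratch) - 1;

        scratch[pos] = value & 127;
        while (value >>= 7)
            scratch[--pos] = 128 | (--value & 127);

        if (bufsize < sizeof(scratch) - pos)   /* the corrected space check */
            return -1;

        memcpy(buf, scratch + pos, sizeof(scratch) - pos);
        return (int)(sizeof(scratch) - pos);
    }

    int main(void)
    {
        unsigned char out[16];
        printf("bytes: %d\n", encode_varint(out, sizeof(out), 300));
        return 0;
    }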
|
|
dd0aa811
|
2017-06-04T22:46:07
|
|
Merge branch 'pr/4228'
|
|
82e929a8
|
2017-06-04T19:35:39
|
|
Merge pull request #4239 from roblg/toplevel-dir-ignore-fix
Fix issue with directory glob ignore in subdirectories
|
|
04de614b
|
2017-06-04T19:03:07
|
|
Merge pull request #4243 from pks-t/pks/submodule-workdir
Submodule working directory
|
|
a1023a43
|
2017-05-20T17:18:07
|
|
Merge pull request #4179 from libgit2/ethomson/expand_tilde
Introduce home directory expansion function for config files, attribute files
|
|
e694e4e9
|
2017-05-20T14:17:36
|
|
Merge pull request #4174 from libgit2/ethomson/set_head_to_tag
git_repository_set_head: use tag name in reflog
|
|
119bdd86
|
2017-05-20T14:13:27
|
|
Merge pull request #4231 from wabain/open-revrange
revparse: support open-ended ranges
|
|
c0e54155
|
2017-01-11T10:39:59
|
|
indexer: name pack files after trailer hash
Upstream git.git has changed the way how packfiles are named.
Previously, they were using a hash of the contained object's OIDs, which
has then been changed to use the hash of the complete packfile instead.
See 1190a1acf (pack-objects: name pack files after trailer hash,
2013-12-05) in the git.git repository for more information on this
change.
This commit changes our logic to match the behavior of core git.
|
|
2696c5c3
|
2017-05-19T09:21:17
|
|
repository: make check if repo is a worktree more strict
To determine if a repository is a worktree or not, we currently check
for the existence of a "gitdir" file inside of the repository's gitdir.
While this is sufficient for non-broken repositories, we have at least
one case of a subtly broken repository where there exists a gitdir file
inside of a gitmodule. This will cause us to misidentify the submodule
as a worktree.
While this is not really a fault of ours, we can do better here by
observing that a repository can only ever be a worktree iff its common
directory and dotgit directory are different. This allows us to make our
check whether a repo is a worktree or not more strict by doing a simple
string comparison of these two directories. This will also allow us to
do the right thing in the above case of a broken repository, as for
submodules these directories will be the same. At the same time, this
allows us to skip the `stat` check for the "gitdir" file for most
repositories.
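A simplified sketch of the stricter check (field and function names are illustrative, not the libgit2 internals): a repository is a worktree iff its git directory and common directory differ, which a plain string comparison can decide without the `stat` on the "gitdir" file.

    #include <string.h>

    struct repo_paths { const char *gitdir; const char *commondir; };

    int repo_is_worktree(const struct repo_paths *r)
    {
        /* for submodules and normal repositories the two are identical */
        return strcmp(r->gitdir, r->commondir) != 0;
    }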
|
|
c3b8e8b3
|
2017-05-14T10:28:05
|
|
Fix issue with directory glob ignore in subdirectories
|
|
e526fbc7
|
2017-05-17T09:23:06
|
|
tests: add test suite for opening submodules
|
|
98a5f081
|
2017-05-03T13:53:13
|
|
tests: threads::basic: remove unused function `exit_abruptly`
|
|
7d7f6d33
|
2017-05-03T13:52:55
|
|
tests: clone::local: compile UNC functions for Windows only
|
|
8b107dc5
|
2017-05-03T11:20:57
|
|
revparse: support open-ended ranges
Support '..' and '...' ranges where one side is not specified.
The unspecified side defaults to HEAD.
Closes #4223
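A usage sketch of the new behavior (the branch name `feature` is hypothetical and error handling is elided): an omitted side of the range defaults to HEAD.

    #include <git2.h>

    static void show_range(git_repository *repo)
    {
        git_revspec spec;

        /* equivalent to "HEAD..feature" */
        git_revparse(&spec, repo, "..feature");

        git_object_free(spec.from);
        git_object_free(spec.to);
    }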
|
|
883eeb5f
|
2017-05-02T12:35:59
|
|
worktree: switch over worktree pruning to an opts structure
The current signature of `git_worktree_prune` accepts a flags field to
alter its behavior. This is not as flexible as we'd like it to be when
we want to enable passing additional options in the future. As the
function has not been part of any release yet, we are still free to
alter its current signature. This commit does so by using our usual
pattern of an options structure, which is easily extendable without
breaking the API.
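A sketch of calling the reworked function, assuming the post-change signature that takes an options struct instead of a raw flags argument.

    #include <git2.h>

    static int prune_worktree(git_worktree *wt)
    {
        git_worktree_prune_options opts = GIT_WORKTREE_PRUNE_OPTIONS_INIT;

        opts.flags = GIT_WORKTREE_PRUNE_VALID;   /* prune even if the worktree looks valid */
        return git_worktree_prune(wt, &opts);
    }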
|
|
8264a30f
|
2017-05-02T10:11:28
|
|
worktree: support creating locked worktrees
When creating a new worktree, we do have a potential race with us
creating the worktree and another process trying to delete the same
worktree as it is being created. As such, the upstream git project has
introduced a flag `git worktree add --locked`, which will cause the
newly created worktree to be locked immediately after its creation. This
mitigates the race condition.
We want to be able to mirror the same behavior. As such, a new flag
`locked` is added to the options structure of `git_worktree_add` which
allows the user to enable this behavior.
|
|
ffd264d9
|
2017-05-03T14:51:23
|
|
tests: repo: fix repo discovery tests on overlayfs
Debian and Ubuntu often use schroot to build their DEB packages in a
controlled environment. Depending on how schroot is configured, our
tests regarding repository discovery break due to not being able to find
the repositories anymore. It turns out that these errors occur when the
schroot is configured to use an overlayfs on the directory structures.
The reason for this failure is that we usually refrain from discovering
repositories across devices. But unfortunately, overlayfs does not have
consistent device identifiers for all its files but will instead use the
device number of the filesystem the file stems from. So whenever we
cross boundaries between the upper and lower layer of the overlay, we
will fail to properly detect the repository and bail out.
This commit fixes the issue by enabling cross-device discovery in our
tests. While it would be preferable to have this turned off, it probably
won't do much harm anyway as we set up our tests in a temporary location
outside of the parent repository.
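A sketch of what cross-device discovery looks like through the public API's `across_fs` parameter (the wrapper name is illustrative); the tests effectively opt into this so overlayfs device-number changes no longer stop the search.

    #include <git2.h>

    static int find_repo(git_buf *out, const char *start)
    {
        /* across_fs = 1: do not stop at filesystem boundaries */
        return git_repository_discover(out, start, 1, NULL);
    }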
|
|
a7aa73a5
|
2017-05-02T10:02:36
|
|
worktree: introduce git_worktree_add options
The `git_worktree_add` function currently accepts only a path and name
for the new work tree. As we may want to expand these parameters in
future versions without adding additional parameters to the function for
every option, this commit introduces our typical pattern of an options
struct. Right now, this structure is still empty, which will change with
the next commit.
|
|
1dc89aab
|
2017-05-01T21:34:21
|
|
object validation: fix some memory leaks
|
|
13c1bf07
|
2017-05-01T16:17:48
|
|
Merge pull request #4197 from pks-t/pks/verify-object-hashes
Verify object hashes
|
|
5700ee9c
|
2017-05-01T16:10:50
|
|
Merge pull request #4216 from pks-t/pks/debian-test-failures
Debian HTTPS feature test failure
|
|
35079f50
|
2017-04-21T07:31:56
|
|
odb: add option to turn off hash verification
Verifying hashsums of objects we are reading from the ODB may be costly
as we have to perform an additional hashsum calculation on the object.
Especially when reading large objects, the penalty can be as high as
35%, as can be seen when executing the equivalent of `git cat-file` with
and without verification enabled. To mitigate this, we add a global
option for libgit2 which enables the developer to turn off the
verification, e.g. when they can be reasonably sure that the objects on
disk won't be corrupted.
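A usage sketch, assuming the option ended up in the public API under the name `GIT_OPT_ENABLE_STRICT_HASH_VERIFICATION` (adjust for the libgit2 version at hand).

    #include <git2.h>

    int main(void)
    {
        git_libgit2_init();

        /* 0 disables the extra hashsum check on ODB reads */
        git_libgit2_opts(GIT_OPT_ENABLE_STRICT_HASH_VERIFICATION, 0);

        /* ... reads now skip re-hashing the object ... */

        git_libgit2_shutdown();
        return 0;
    }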
|
|
28a0741f
|
2017-04-10T09:30:08
|
|
odb: verify object hashes
The upstream git.git project verifies objects when looking them up from
disk. This avoids scenarios where objects have somehow become corrupt on
disk, e.g. due to hardware failures or bit flips. While our mantra is
usually to follow upstream behavior, we do not do so in this case, as we
never check hashes of objects we have just read from disk.
To fix this, we create a new error class `GIT_EMISMATCH` which denotes
that we have looked up an object with a hashsum mismatch. `odb_read_1`
will then, after having read the object from its backend, hash the
object and compare the resulting hash to the expected hash. If hashes do
not match, it will return an error.
This obviously introduces another computation of checksums and could
potentially impact performance. Note though that we usually perform I/O
operations directly before doing this computation, and as such the
actual overhead should be drowned out by I/O. Running our test suite
seems to confirm this guess. On a Linux system with best-of-five
timings, we had 21.592s with the check enabled and 21.590s with the
check disabled. Note though that our test suite mostly contains very
small blobs only. It is expected that repositories with bigger blobs may
notice an increased hit by this check.
In addition to a new test, we also had to change the
odb::backend::nonrefreshing test suite, which now triggers a hashsum
mismatch when looking up the commit "deadbeef...". This is expected, as
the fake backend allocated inside of the test will return an empty
object for the OID "deadbeef...", which will obviously not hash back to
"deadbeef..." again. We can simply adjust the hash to equal the hash of
the empty object here to fix this test.
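A simplified, standalone sketch of the check (not the libgit2 internals): re-hash the object after reading it from a backend and compare against the OID it was looked up by; a mismatch is reported as an error in place of the new `GIT_EMISMATCH` code.

    #include <git2.h>

    static int check_object(const git_oid *expected, const void *data,
                            size_t len, git_object_t type)
    {
        git_oid actual;

        if (git_odb_hash(&actual, data, len, type) < 0)
            return -1;

        /* -1 stands in for the new GIT_EMISMATCH error class */
        return git_oid_equal(expected, &actual) ? 0 : -1;
    }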
|
|
d59dabe5
|
2017-04-10T09:00:51
|
|
tests: object: test looking up corrupted objects
We currently have no tests which check whether we fail reading corrupted
objects. Add one which modifies contents of an object stored on disk and
then tries to read the object.
|
|
86c03552
|
2017-04-10T09:27:04
|
|
tests: object: create sandbox
The object::lookup tests do use the "testrepo.git" repository in a
read-only way, so we do not set up the repository as a sandbox but
simply open it. But in a future commit, we will want to test looking up
objects which are corrupted in some way, which requires us to modify the
on-disk data. Doing this in a repository without creating the sandbox
will modify contents of our libgit2 repository, though.
Create the repository in a sandbox to avoid this.
|
|
e29e8029
|
2017-04-10T10:31:22
|
|
tests: odb: make hash of fake backend configurable
In the odb::backend::nonrefreshing test suite, we set up a fake backend
so that we are able to determine if backend functions are called
correctly. During the setup, we also parse an OID which is later on used
to read out the pseudo-object. While this procedure works right now, it
will create problems later when we implement hash verification for
looked up objects. The current OID ("deadbeef") will not match the hash
of contents we give back to the ODB layer and thus cannot be verified.
Make the hash configurable so that we can simply switch the returned OID
for single tests.
|
|
7df580fa
|
2017-04-28T11:58:49
|
|
Merge pull request #4191 from pks-t/pks/wt-ref-renames
Branch renames with worktrees
|
|
2a7086fa
|
2017-04-25T13:23:04
|
|
tests: config: verify functionality with read-only backends
|
|
417319cc
|
2017-04-25T10:14:37
|
|
tests: core::features: only check for HTTPS if it is supported
|
|
a4de1ae3
|
2017-04-25T10:14:19
|
|
cmake: define GIT_HTTPS when HTTPS is supported
|
|
13c275ab
|
2017-04-21T07:49:08
|
|
tests: threads::diff: fix warning for unused variable
The threads::diff test suite has a static variable `_retries`, which is
used on Windows platforms only. As it is unused on other systems, the
compiler throws a warning there. Fix the warning by wrapping the
declaration in an ifdef.
|
|
8d89e409
|
2017-04-17T17:19:03
|
|
Merge pull request #4192 from libgit2/ethomson/win32_posix
Refactor some of the win32 POSIX emulation
|
|
78b8f039
|
2017-02-15T14:00:38
|
|
tests: fix indentation in checkout::head::with_index_only_tree
|
|
cf07db2f
|
2017-04-07T16:05:10
|
|
filter: only close filter if it's been initialized correctly
In the function `git_filter_list_stream_data`, we initialize, write and
subsequently close the stream which should receive content processed by
the filter. While we skip writing to the stream if its initialization
failed, we still try to close it unconditionally -- even if the
initialization failed, where the stream might not be set at all, leading
us to segfault.
The semantics in this code are not really clear. The function handling the
same logic for files instead of data seems to do the right thing here in
only closing the stream when initialization succeeded. When stepping
back a bit, this is only reasonable: if a stream cannot be initialized,
the caller would not expect it to be closed again. So actually, both
callers of `stream_list_init` fail to do so. The data streaming function
will always close the stream and the file streaming function will not
close the stream if writing to it has failed.
The fix is thus two-fold:
- callers of `stream_list_init` now close the stream iff it has been
initialized
- `stream_list_init` now closes the most recently initialized stream if
the current stream in the chain failed to initialize
Add a test which segfaulted previous to these changes.
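A toy illustration of the ownership rule adopted here (names are illustrative, not the libgit2 code): the caller closes the stream only if its initialization succeeded.

    #include <stdio.h>

    struct stream { int initialized; };

    static int stream_init(struct stream *s)   { s->initialized = 1; return 0; }
    static void stream_close(struct stream *s) { s->initialized = 0; }

    static int run_filters(struct stream *s)
    {
        int error = stream_init(s);

        /* ... write filtered content to the stream only if init succeeded ... */

        if (!error)              /* the fix: close the stream iff it was initialized */
            stream_close(s);

        return error;
    }

    int main(void)
    {
        struct stream s = { 0 };
        printf("%d\n", run_filters(&s));
        return 0;
    }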
|
|
2a485dab
|
2017-04-04T18:55:57
|
|
refs: update worktree HEADs when renaming branches
Whenever we rename a branch, we update the repository's symbolic HEAD
reference if it currently points to the branch that is to be renamed.
But with the introduction of worktrees, we also have to iterate over all
HEADs of linked worktrees to adjust them. Do so.
|
|
60297256
|
2017-04-04T16:12:27
|
|
tests: worktree::refs: convert spaces to tabs
|
|
48f09c6c
|
2017-04-05T11:59:03
|
|
win32: only set `git_win32__retries` where it exists
|
|
89d403cc
|
2017-04-05T09:50:12
|
|
win32: enable `p_utimes` for readonly files
Instead of failing to set the timestamp of a read-only file (like any
object file), set it writable temporarily to update the timestamp.
|
|
7ece9065
|
2017-04-03T23:07:16
|
|
win32: make posix emulation retries configurable
POSIX emulation retries should be configurable so that tests can disable
them. In particular, heavily threaded tests may end up trying to
open locked files and need retries, which will slow continuous
integration tests significantly.
|
|
e86d02f9
|
2017-04-03T00:10:47
|
|
git_repository_set_head: use remote name in reflog
When `git_repository_set_head` is provided a remote reference, update
the reflog with the remote name, like we do with a branch. This helps
consumers match the semantics of `git checkout remote`.
|
|
ed812ee7
|
2017-03-23T12:03:29
|
|
config::include: sanitize homedir
Sanitize the home directory to ensure that we do not accidentally locate
a file called `~/.nonexistentfile`.
|
|
047fe29c
|
2016-06-20T13:05:48
|
|
add failing test to include a missing config file relative to home dir
|
|
6ad091dc
|
2017-03-23T09:33:09
|
|
Merge pull request #4176 from libgit2/ethomson/3872
inet_pton: don't assume addr families don't exist
|
|
f623cf89
|
2017-03-22T20:32:55
|
|
Merge pull request #4163 from pks-t/pks/submodules-with-worktrees
Worktree fixes
|
|
6fd6c678
|
2017-03-22T20:29:22
|
|
Merge pull request #4030 from libgit2/ethomson/fsync
fsync all the things
|
|
983979fa
|
2017-03-22T19:52:38
|
|
inet_pton: don't assume addr families don't exist
Address family 5 might exist on some crazy system like Haiku.
Use `INT_MAX-1` as an unsupported address family.
|
|
ea3bb5c0
|
2017-03-21T18:12:02
|
|
git_repository_set_head: use tag name in reflog
When `git_repository_set_head` is provided a tag reference, update the
reflog with the tag name, like we do with a branch. This helps
consumers match the semantics of `git checkout tag`.
|