|
d8d2f21e
|
2017-09-06T07:52:12
|
|
cmake: unify version check for target include directories
There are two locations where we check whether CMake supports
`TARGET_INCLUDE_DIRECTORIES`. While the first one uses `VERSION_LESS
2.8.12`, the second one uses `VERSION_GREATER 2.8.11`. The two are
obviously equivalent, but it is easier to grep for the CMake versions
required by specific features if both sites use the same conditional
mentioning the actual target version. So this commit refactors these
conditions to make them identical.
|
|
1d9dd882
|
2017-09-05T15:06:29
|
|
cmake: distinguish libgit2 objects and sources
Distinguish variables keeping track of our internal libgit2 sources and
the final objects which shall be linked into the library. This will ease
the transition to use object libraries for our bundled dependencies
instead of linking them in.
|
|
583e4141
|
2017-08-30T14:35:57
|
|
tests: deterministically generate test suite definitions
The script "generate.py" is used to parse all test source files for unit
tests. These are then written into a "clar.suite" file, which can be
included by the main test executable to make available all test suites
and unit tests.
Our current algorithm simply collects all test suites inside of a dict,
iterates through its items and dumps them in a special format into the
file. As the order is not guaranteed to be deterministic for Python
dictionaries, this may result in arbitrarily ordered C structs. This
obviously defeats the purpose of reproducible builds, where the same
input should always result in the exact same output.
Fix this issue by sorting the test suites by name prior to dumping
them as structs. This enables reproducible builds for the libgit2_clar
file.
|
|
3c216453
|
2017-08-25T21:06:46
|
|
Merge pull request #4296 from pks-t/pks/pattern-based-gitignore
Fix negative ignore rules with patterns
|
|
477b3e04
|
2017-07-10T12:25:43
|
|
submodule: refuse lookup in bare repositories
While it is technically possible to look up submodules inside of a
bare repository by reading the submodule configuration of a specific
commit, we do not offer this functionality right now. As such, calling
both `git_submodule_lookup` and `git_submodule_foreach` should error out
early when these functions encounter a bare repository. While
`git_submodule_lookup` already does return an error due to not being
able to parse the configuration, `git_submodule_foreach` simply returns
success and never invokes the callback function.
Fix the issue by having both functions check whether the repository is
bare and returning an error in that case.
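As an illustration only (not the actual patch, and the error message text is
an assumption), the guard described above might look roughly like this in
both entry points:

    int git_submodule_lookup(git_submodule **out, git_repository *repo, const char *name)
    {
        /* refuse to operate on bare repositories, as described above */
        if (git_repository_is_bare(repo)) {
            giterr_set(GITERR_SUBMODULE, "cannot look up submodules without a working tree");
            return -1;
        }
        /* ... existing lookup logic ... */
    }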
|
|
a889c05f
|
2017-07-10T11:55:33
|
|
tests: submodule: add explicit cleanup function in lookup tests
|
|
64d1e0b3
|
2017-07-10T11:52:08
|
|
tests: submodule: fix declaration of test
The testcase "submodule::lookup::cached" was declared with a single
underscore separating the test suide and test name, only. As the clar
parser only catches tests with two underscores, it was never executed.
Add in the second underscore to actually have it detected and executed.
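For illustration, clar derives the suite and test names from the function
name, so the declaration has to use the first form below; the second,
single-underscore form is the broken variant described above:

    /* detected: suite "submodule::lookup", test "cached" */
    void test_submodule_lookup__cached(void);

    /* not detected by the clar parser: only one separating underscore */
    void test_submodule_lookup_cached(void);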
|
|
2d9ff8f5
|
2017-07-10T09:36:19
|
|
ignore: honor case insensitivity for negative ignores
When computing negative ignores, we throw away, as an optimization, any
rule which does not undo a previous rule. But on case-insensitive file systems,
we need to keep in mind that a negative ignore can also undo a previous
rule with different case, which we did not yet honor while determining
whether a rule undoes a previous one. So in the following example, we
fail to unignore the "/Case" directory:
/case
!/Case
Make both code paths that check whether a plain or wildcard-based rule
undoes a previous rule aware of case insensitivity. This fixes the described
issue.
|
|
38b44c3b
|
2017-07-07T17:10:57
|
|
tests: status: additional test for negative ignores with pattern
This test is by Carlos Martín Nieto.
|
|
b8922fc8
|
2017-07-07T13:27:27
|
|
ignore: keep negative rules containing wildcards
Ignore rules allow for reverting a previously ignored rule by prefixing
it with an exclamation mark. As such, a negative rule can only override
previously ignored files. While computing all ignore patterns, we try to
use this fact to optimize away some negative rules which do not override
any previous patterns, as they won't change the outcome anyway.
In some cases, though, this optimization causes us to get the actual
ignores wrong for some files. This may happen whenever the pattern
contains a wildcard, as we are unable to reason about whether a pattern
overrides a previous pattern in a sane way. This happens for example in
the case where a gitignore file contains "*.c" and "!src/*.c", where we
wouldn't un-ignore files inside of the "src/" subdirectory.
In this case, the first solution coming to mind may be to just strip the
"src/" prefix and simply compare the basenames. While that would work
here, it would stop working as soon as the basename pattern itself is
different, as for example with "*x.c" and "!src/*.c". As such, we
settle for the easier fix of just not optimizing away rules that contain
a wildcard.
|
|
8e31cc25
|
2017-06-28T12:51:14
|
|
cmake: keep track of libraries and includes via lists
Later on, we will move detection of required libraries, library
directories as well as include directories into a separate
CMakeLists.txt file inside of the source directory. Obviously, we want
to avoid duplication here regarding these parameters.
To prepare for the split, put the parameters into three variables
LIBGIT2_LIBS, LIBGIT2_LIBDIRS and LIBGIT2_INCLUDES, tracking the
required libraries, link directories and include directories.
These variables can later be exported into the parent scope from inside
of the source build instructions, making them readily available for the
other subdirectories.
|
|
a390a846
|
2017-07-01T13:06:00
|
|
cmake: move defines into "features.h" header
In a future commit, we will split out the build instructions for our
library directory and move them into a subdirectory. One of the benefits
is fixing scoping issues, where e.g. defines no longer leak to build
targets to which they do not belong. But unfortunately, this also
pose the problem of how to propagate some defines which are required by
both the library and the test suite.
One way would be to create another variable keeping track of all added
defines and declare it inside of the parent scope. While this is the
most obvious and simplest way of going ahead, it is kind of unfortunate.
The main reason to not use this is that these defines become implicit
dependencies between the build targets. By simply observing a define
inside of the CMakeLists.txt file, one cannot reason whether this define
is only required by the current target or whether it is required by
different targets, as well.
Another approach would be to use an internal header file keeping track
of all defines shared between targets. While configuring the library, we
will set various variables and let CMake configure the file, adding or
removing defines based on what has been configured. Like this, one can
easily keep track of the current environment by simply inspecting the
header file. Furthermore, these dependencies become clear inside
the CMakeLists.txt, as instead of simply adding a define, we now call
e.g. `SET(GIT_THREADSAFE 1)`.
Having this header file though requires us to make sure it is always
included before any "#ifdef"-preprocessor checks are executed. As we
have already refactored code to always include the "common.h" header
file before any statement inside of a file, this becomes easy: just make
sure "common.h" includes the new "features.h" header file first.
|
|
35087f0e
|
2017-06-28T15:42:54
|
|
cmake: create separate CMakeLists.txt for tests
Our CMakeLists.txt is very unwieldy in its current size, spanning more
than 700 lines of code. Furthermore, it has several issues regarding
scoping, where for example some defines, includes, etc. from our test
suite are also applied to our normal library code.
To fix this, we can separate out build instructions for our tests and
move them into their own CMakeLists.txt in the "tests" directory. This
reduces the complexity of the root CMakeLists.txt file and fixes the issues
regarding leaking build context from tests into the library.
|
|
3267115f
|
2017-06-28T15:41:15
|
|
cmake: create own precompiled headers for tests
As soon as we split up our CMakeLists.txt build instructions, we will be
unable to simply link against the git2 library's precompiled header from
other targets. To avoid this future breakage, create a new precompiled
header for our test suite. Besides being compatible with the split, this
enables us to also include additional files like the clar headers, which
may help speed up compilation of the test suite.
|
|
fb585d01
|
2017-07-31T00:58:58
|
|
Merge branch '4233'
|
|
c0558c62
|
2017-07-28T09:01:41
|
|
tests: rebase::submodule: verify initialization method calls
Some return codes for functions which may fail are not being checked in
`test_rebase_submodule__initialize`. This may lead us to not notice
errors when initializing the environment and would possibly result in
either memory corruption or segfaults as soon as any of the
initialization steps fails. Fix this by wrapping these function calls
into `cl_git_pass`.
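Illustratively (the submodule name and repository variable below are
placeholders, not the concrete calls in the test), wrapping a setup call in
`cl_git_pass` turns a silently ignored failure into an immediate test abort:

    /* before: a failure here goes unnoticed */
    git_submodule_lookup(&sm, g_repo, "my-submodule");

    /* after: the test aborts right away if setup does not succeed */
    cl_git_pass(git_submodule_lookup(&sm, g_repo, "my-submodule"));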
|
|
4f4bc573
|
2017-07-27T23:05:53
|
|
Merge pull request #4275 from tiennou/fix-rebase-submodule-test
tests: rewrite rebase-submodule .gitmodule file
|
|
35cb7b84
|
2017-07-20T21:00:15
|
|
tests: fix the rebase-submodule test
|
|
45994fdc
|
2017-06-21T14:57:30
|
|
Remove invalid submodule
Fixes #4274
|
|
e0568621
|
2017-07-19T13:55:55
|
|
Merge pull request #4250 from pks-t/pks/config-file-iteration
Configuration file fixes with includes
|
|
a94a5402
|
2017-07-19T13:28:32
|
|
Merge pull request #4272 from pks-t/pks/patch-id
Patch ID calculation
|
|
1b329089
|
2017-05-31T22:27:19
|
|
config_file: refuse modifying included variables
Modifying variables pulled in by an included file currently succeeds,
but it doesn't actually do what one would expect, as refreshing the
configuration will cause the values to reappear. As we are currently not
really able to support this use case, we will instead just return an
error for deleting and setting variables which were included via an
include.
|
|
28c2cc3d
|
2017-05-31T16:41:44
|
|
config_file: move reader into `config_read` only
Right now, we have multiple call sites which initialize a `reader`
structure. As the structure is only actually used inside of
`config_read`, we can simply move the reader into the
`config_read` function and pass the configuration file into
`config_read` instead, which improves readability.
|
|
83bcd3a1
|
2017-05-31T22:45:25
|
|
config_file: refresh all files if includes were modified
Currently, we only re-parse the top-level configuration file when it has
changed itself. This can cause problems when an include is changed, as
we were not updating all values correctly.
Instead of conditionally reparsing only refreshed files, the logic
becomes much clearer and easier to follow if we always re-parse the
top-level configuration file when either the file itself or one of its
included configuration files has changed on disk. This commit implements
this logic.
Note that this might impact performance in some cases, as we need to
re-read all configuration files whenever any of the included files
changed. It could increase performance to just re-parse include files
which have actually changed, but this would compromise maintainability
of the code without much gain. The only case where we would gain anything
is when we actually use includes and when only these includes are
updated, which is probably too unusual a scenario to be worth optimizing.
|
|
6f7aab0c
|
2017-06-06T09:45:11
|
|
tests: config::include: use init and cleanup functions
|
|
1f7af277
|
2017-07-05T11:52:47
|
|
tests: config: fix missing declaration causing error
On systems where we pull in our distributed version of the regex
library, all tests in config::readonly fail. This error is actually
quite interesting: the test suite is unable to find the declaration of
`git_path_exists` and assumes it has a signature of `int
git_path_exists(const char *)`. But actually, it has a `bool` return
value. Due to this confusion, some wrong conversion is done by the
compiler and the `cl_assert(!git_path_exists("file"))` checks
erroneously fail, even when the function does in fact return the correct
value.
The error is actually introduced by 56893bb9a (cmake: consistently use
TARGET_INCLUDE_DIRECTORIES if available, 2017-06-28), unfortunately
introduced by myself. Due to the delayed addition of include
directories, we will now find the "config.h" header inside of the
"deps/regex" directory instead of inside the "src/" directory, where it
should be. As such, we are missing definitions for the
`git_config_file__ondisk` and `git_path_exists` symbols.
The correct fix here would be to fix the order in which include search
directories are added. But due to the current restructuring of
CMakeBuild.txt, I'm refraining from doing so and delay the proper fix a
bit. Instead, we paper over the issue by explicitly including "path.h"
to fix its prototype. This ignores the issue that
`git_config_file__ondisk` is undeclared, as its signature is correctly
identified by the compiler.
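For reference, this is the kind of mismatch at play: without a prototype in
scope, C assumes an implicit `int`-returning declaration, while the real
prototype in "path.h" returns `bool`:

    /* what the compiler assumes without any declaration in scope */
    int git_path_exists(const char *);

    /* the actual prototype provided by "path.h" */
    extern bool git_path_exists(const char *path);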
|
|
89a34828
|
2017-06-16T13:34:43
|
|
diff: implement function to calculate patch ID
The upstream git project provides the ability to calculate a so-called
patch ID. Quoting from git-patch-id(1):
A "patch ID" is nothing but a sum of SHA-1 of the file diffs
associated with a patch, with whitespace and line numbers ignored."
Patch IDs can be used to identify two patches which are probably the
same thing, e.g. when a patch has been cherry-picked to another branch.
This commit implements a new function `git_diff_patchid`, which gets a
patch and derives an OID from the diff. Note the different terminology
here: a patch in libgit2 is the set of differences in a single file, while a
diff can contain multiple patches for different files. The implementation
matches the upstream implementation and should derive the same OID for
the same diff. In fact, some code has been directly derived from the
upstream implementation.
The upstream implementation has two different modes to calculate patch
IDs: a stable and an unstable mode. The old way of calculating
the patch IDs was unstable in the sense that a different ordering of the
diffs would lead to different results. This oversight was fixed in git
1.9, but as git tries hard to never break existing workflows, the old
and unstable way is still default. The newer and stable way does not
care for ordering of the diff hunks, and in fact it is the mode that
should probably be used today. So right now, we only implement the
stable way of generating the patch ID.
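A minimal usage sketch of the new function (error handling elided; the
`diff` is assumed to have been obtained beforehand, e.g. via
`git_diff_tree_to_tree`):

    git_oid patchid;
    char hex[GIT_OID_HEXSZ + 1];

    /* derive the stable patch ID for the whole diff */
    if (git_diff_patchid(&patchid, diff, NULL) == 0)
        git_oid_tostr(hex, sizeof(hex), &patchid);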
|
|
0a513a94
|
2017-05-10T11:46:00
|
|
generate.py: disallow generating test suites for multiple paths
Our generate.py script is used to extract and write test suite
declarations into the clar.suite file. As is, the script accepts
multiple directories on the command line and will generate this file for
each of these directories.
The generate.py script will always write the clar.suite file into the
directory which is about to be scanned. This actually breaks
out-of-tree builds with libgit2, as the file will be generated in the
source tree instead of in the build tree. This is especially noticeable in
the case where the source tree is mounted read-only, rendering us unable
to build unit tests.
Due to us accepting multiple paths which are to be scanned, it is not
trivial to fix, though. The first solution that comes to mind would be
to re-create the directory hierarchy at a given output path or use
unique names for the clar.suite files, but this is rather cumbersome and
magical. The second and cleaner solution would be to fold all
directories into a single clar.suite file, but this would probably break
some use-cases.
Seeing that we do not ever pass multiple directories to generate.py, we
will now simply retire support for this. This allows us to later on
introduce an additional option to specify the path where the clar.suite
file will be generated at, defaulting to "clar.suite" inside of the
scanned directory.
|
|
b6ed67c2
|
2017-05-10T12:54:14
|
|
tests: refs::crashes: create sandbox for creating symref
The test `refs::crashes::double_free` operates on our in-source
"testrepo.git" repository without creating a copy first. As the test
will try to create a new symbolic reference, this will fail when we want
to do a pure out-of-tree build with a read-only source tree.
Fix the issue by creating a sandbox first.
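For illustration, the clar helpers make this a one-line change in the
suite's initialize and cleanup functions (`g_repo` being the usual per-suite
repository variable):

    /* copy the fixture into a temporary sandbox instead of opening it in-place */
    g_repo = cl_git_sandbox_init("testrepo.git");

    /* and in the cleanup function */
    cl_git_sandbox_cleanup();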
|
|
6ee7d37a
|
2017-05-10T12:51:06
|
|
tests: index::tests: create sandboxed repo for locking
The test `index::tests::can_lock_index` operates on the "testrepo.git"
repository located inside of our source tree. While this is okay for
tests which do read-only operations on these resources, this specific
test tries to lock the index by creating a lock. This will obviously
fail on out-of-tree builds with read-only source trees.
Fix the issue by creating a sandbox first.
|
|
8d22bcea
|
2017-05-10T12:21:53
|
|
generate.py: generate clar cache in binary directory
The source directory should usually not be touched when using
out-of-tree builds. But besides the previously fixed "clar.suite" file, we
are also writing the ".clarcache" into the project's source tree,
breaking the builds. Fix this by also honoring the output directory for
the ".clarcache" file.
|
|
9e240bd2
|
2017-05-10T12:02:03
|
|
generate.py: enable overriding path of generated clar.suite
The generate.py script will currently always write the generated
clar.suite file into the scanned directory, breaking out-of-tree builds
with read-only source directories. Fix this issue by adding another
option to allow overriding the output path of the generated file.
|
|
62c44c49
|
2017-06-21T12:25:26
|
|
Merge pull request #4211 from pks-t/pks/trusty
travis: upgrade container to Ubuntu 14.04
|
|
c2c95ad0
|
2017-04-26T13:16:18
|
|
tests: online::clone: use URL of test server
All our tests running against a local SSH server usually read the
server's URL from environment variables. But online::clone::ssh_cert
test fails to do so and instead always connects to
"ssh://localhost/foo". This assumption breaks whenever the SSH server is
not running on the standard port, e.g. when it is running as a user.
Fix the issue by using the URL provided by the environment.
|
|
8e912e79
|
2017-06-16T21:05:58
|
|
tests: try to init with empty template path
|
|
8296da5f
|
2017-06-14T10:49:28
|
|
Merge pull request #4267 from mohseenrm/master
adding GIT_FILTER_VERSION to GIT_FILTER_INIT as part of convention
|
|
a78441bc
|
2017-06-13T11:05:40
|
|
Adding git_filter_init for initializing `git_filter` struct + unit test
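A short usage sketch of the new initializer, mirroring the other `*_init`
helpers in the library:

    git_filter filter;

    /* initialize the struct to the current GIT_FILTER_VERSION defaults */
    git_filter_init(&filter, GIT_FILTER_VERSION);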
|
|
a180e7d9
|
2017-06-13T11:10:19
|
|
tests: odb: add more low-level backend tests
Introduce a new test suite "odb::backend::simple", which utilizes the
fake backend to exercise the ODB abstraction layer. While such tests
already exist for the case where multiple backends are put together, no
direct testing of functionality with a single backend exists yet.
|
|
b2e53f36
|
2017-06-13T11:39:36
|
|
tests: odb: implement `exists_prefix` for the fake backend
The fake backend currently implements all reading functions except for
the `exists_prefix` one. Implement it to enable further testing of the
ODB layer.
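For reference, the backend callback being implemented has roughly this shape
in `git_odb_backend`; the fake backend provides its own matching function:

    /* resolve a short OID of `len` hex characters to a full OID, if unambiguous */
    int (*exists_prefix)(
        git_oid *out, git_odb_backend *backend,
        const git_oid *short_id, size_t len);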
|
|
983e627d
|
2017-06-13T11:38:59
|
|
tests: odb: use correct OID length
The `search_object` function takes the OID length as one of its
parameters, where its maximum length is `GIT_OID_HEXSZ`. The `exists`
function of the fake backend used `GIT_OID_RAWSZ` though, leading to
only the first half of the OID being used when finding the correct
object.
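For reference, the two constants differ by a factor of two, which is why
using the wrong one truncated the prefix length:

    #define GIT_OID_RAWSZ 20                  /* size of a raw (binary) OID */
    #define GIT_OID_HEXSZ (GIT_OID_RAWSZ * 2) /* length of its hex representation */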
|
|
c4cbb3b1
|
2017-06-13T11:38:14
|
|
tests: odb: have the fake backend detect ambiguous prefixes
In order to be able to test the ODB prefix functions, we need to be able
to detect ambiguous prefixes in case multiple objects with the same
prefix exist in the fake ODB. Extend `search_object` to detect ambiguous
queries and have callers return its error code instead of always
returning `GIT_ENOTFOUND`.
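A rough sketch of the extended lookup logic, assuming `search_object` counts
matching entries (names are illustrative, not the exact test code):

    if (matches > 1)
        return GIT_EAMBIGUOUS;   /* several objects share the queried prefix */
    if (matches == 0)
        return GIT_ENOTFOUND;
    return GIT_OK;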
|
|
95170294
|
2017-06-13T11:08:28
|
|
tests: core: test initialization of `git_proxy_options`
Initialization of the `git_proxy_options` structure is never tested
anywhere. Include it in our usual initialization test in
"core::structinit::compare".
|
|
bee423cc
|
2017-06-13T10:29:23
|
|
tests: network: add missing include for `git_repository_new`
A newly added test uses the `git_repository_new` function without the
corresponding header file being included. While this works due to the
compiler deducing the correct function signature, we should obviously
just include the function's declaration file.
|
|
2ca088bd
|
2017-06-12T22:47:54
|
|
Merge pull request #4265 from pks-t/pks/read-prefix-tests
Read prefix tests
|
|
fe9a5dd3
|
2017-06-12T12:00:14
|
|
remote: ensure we can create an anon remote on inmemory repo
Given a wholly in-memory repository, ensure that we can create an
anonymous remote and perform actions on it.
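A minimal sketch of the scenario being exercised (error handling elided, the
URL is illustrative):

    git_repository *repo;
    git_remote *remote;

    /* a wholly in-memory repository without a gitdir or working tree */
    git_repository_new(&repo);

    /* creating an anonymous remote on it should succeed */
    git_remote_create_anonymous(&remote, repo, "https://example.com/repo.git");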
|
|
f148258a
|
2017-06-12T16:19:45
|
|
tests: odb: add tests with multiple backends
Before pulling out and extending the fake backend, it was quite
cumbersome to write tests for very specific scenarios regarding
backends. But as we have made it more generic, it has become much easier
to do so. As such, this commit adds multiple tests for scenarios with
multiple backends for the ODB.
The changes also include a test for a very targeted scenario. When one
backend found a matching object via `read_prefix`, but the last backend
returns `GIT_ENOTFOUND` and when object hash verification is turned off,
we fail to reset the error code to `GIT_OK`. This causes us to segfault
later on, when doing a double-free on the returned object.
|
|
6e010bb1
|
2017-06-12T15:43:56
|
|
tests: odb: allow passing fake objects to the fake backend
Right now, the fake backend is quite restricted in the way it
works: we pass it an OID which it is to return later as well as an error
code we want it to return. While this is sufficient for existing tests,
we can make the fake backend a little bit more generic in order to allow
us testing for additional scenarios.
To do so, we change the backend to not accept an error code and OID
which it is to return for queries, but instead a simple array of OIDs
with their respective blob contents. On each query, the fake backend
simply iterates through this array and returns the first matching
object.
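Conceptually (field and type names here are illustrative, not necessarily
the test helper's exact definition), the backend is now seeded with an array
like this and scans it on each query:

    typedef struct {
        const char *oid;      /* hex OID the backend should answer for */
        const char *content;  /* blob contents returned for that OID */
    } fake_object;

    static const fake_object objects[] = {
        { "1234567890123456789012345678901234567890", "first object" },
        { NULL, NULL }
    };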
|
|
369cb45f
|
2017-06-12T15:21:58
|
|
tests: do not reuse OID from backend
In order to make the fake backend more useful, we want to enable it
holding multiple object references. To do so, we need to decouple it
from the single fake OID it currently holds, which we simply move up
into the calling tests.
|
|
2add34d0
|
2017-06-12T14:53:46
|
|
tests: odb: move fake backend into its own file
The fake backend used by the test suite `odb::backend::nonrefreshing` is
useful to have some low-level tests for the ODB layer. As such, we move
the implementation into its own `backend_helpers` module.
|
|
6f960b55
|
2017-06-11T10:37:46
|
|
Merge pull request #4088 from chescock/packfile-name-using-complete-hash
Ensure packfiles with different contents have different names
|
|
d2c4f764
|
2017-06-11T09:54:04
|
|
Merge pull request #4260 from libgit2/ethomson/forced_checkout_2
Update to forced checkout and untracked files
|
|
0ef405b3
|
2017-02-15T14:05:10
|
|
checkout: do not delete directories with untracked entries
If the `GIT_CHECKOUT_FORCE` flag is given to any of the `git_checkout`
invocations, we remove files which were previously staged. But while
doing so, we unfortunately also remove unstaged files in a directory
which contains at least one staged file, resulting in potential data
loss.
This commit adds two tests to verify behavior.
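For context, the invocation in question looks roughly like this; the fix
must ensure that untracked siblings of staged files survive it (sketch
only):

    git_checkout_options opts = GIT_CHECKOUT_OPTIONS_INIT;
    opts.checkout_strategy = GIT_CHECKOUT_FORCE;

    /* must not delete untracked files living next to staged ones */
    git_checkout_head(repo, &opts);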
|
|
6c23704d
|
2017-06-08T21:40:18
|
|
settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION`
Initially, the setting has been solely used to enable the use of
`fsync()` when creating objects. Since then, the use has been extended
to also cover references and index files. As the option is not yet part
of any release, we can still correct this by renaming the option to
something more sensible that is not tied to objects only.
This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also
move the variable from the object to repository source code.
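After the rename, enabling synchronous writes for everything below the
gitdir looks like this (sketch):

    /* fsync() object files, references and the index when writing them */
    git_libgit2_opts(GIT_OPT_ENABLE_FSYNC_GITDIR, 1);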
|
|
92d3ea4e
|
2017-05-19T13:04:32
|
|
tests: index::version: improve write test for index v4
The current write test does not trigger some edge-cases in the index
version 4 path compression code. Rewrite the test to start off with an
empty standard repository and create index entries with interesting paths
itself. This allows for more fine-grained control over checked paths.
Furthermore, we now also verify that entry paths are actually
reconstructed correctly.
|
|
8fe33538
|
2017-05-19T12:45:48
|
|
tests: index::version: verify we write compressed index entries
While we do have a test which checks whether a written index of version
4 has the correct version set, we do not check whether this actually
enables path compression for index entries. This commit adds a new test
by adding a number of index entries with equal path prefixes to the
index and subsequently flushing that to disk. With suffix compression
enabled by index version 4, only the last few bytes of these paths will
actually have to be written to the index, saving a lot of disk space.
For the test, differences are about an order of magnitude, allowing us
to easily verify without taking a deeper look at actual on-disk
contents.
|
|
82368b1b
|
2017-05-12T10:04:42
|
|
tests: index::version: add test to read index version v4
While we have a simple test to determine whether we can write an index
of version 4, we never verified that we are able to read this kind of
index (and in fact, we were not able to do so). Add a new repository
which has an index of version 4. This repository is then read from a new
test.
|
|
fea0c81e
|
2017-05-12T09:09:07
|
|
tests: index::version: move up cleanup function
The init and cleanup functions for test suites are usually prepended to
our actual tests. The index::version test suite does not adhere to this
style. Fix this.
|
|
8a5e7aae
|
2017-05-22T12:53:44
|
|
varint: fix computation for remaining buffer space
When encoding varints to a buffer, we want to remain sure that the
required buffer space does not exceed what is actually available. Our
current check does not do the right thing, though, in that it does not
account for the fact that our `pos` variable counts down instead of up. As
such, we will require too much memory for small varints and not enough
memory for big varints.
Fix the issue by correctly calculating the required size as
`(sizeof(varint) - pos)`. Add a test which failed before.
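A condensed sketch of the corrected check, with `pos` counting down from the
end of the scratch buffer as described; `buf` and `bufsize` are the
caller-provided output buffer (simplified from the real encoder):

    unsigned char varint[16];
    size_t pos = sizeof(varint) - 1;

    /* ... encode the value backwards, decrementing pos ... */

    /* the encoded varint occupies (sizeof(varint) - pos) bytes, not pos bytes */
    if (bufsize < (sizeof(varint) - pos))
        return -1;
    memcpy(buf, varint + pos, sizeof(varint) - pos);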
|
|
dd0aa811
|
2017-06-04T22:46:07
|
|
Merge branch 'pr/4228'
|
|
82e929a8
|
2017-06-04T19:35:39
|
|
Merge pull request #4239 from roblg/toplevel-dir-ignore-fix
Fix issue with directory glob ignore in subdirectories
|
|
04de614b
|
2017-06-04T19:03:07
|
|
Merge pull request #4243 from pks-t/pks/submodule-workdir
Submodule working directory
|
|
a1023a43
|
2017-05-20T17:18:07
|
|
Merge pull request #4179 from libgit2/ethomson/expand_tilde
Introduce home directory expansion function for config files, attribute files
|
|
e694e4e9
|
2017-05-20T14:17:36
|
|
Merge pull request #4174 from libgit2/ethomson/set_head_to_tag
git_repository_set_head: use tag name in reflog
|
|
119bdd86
|
2017-05-20T14:13:27
|
|
Merge pull request #4231 from wabain/open-revrange
revparse: support open-ended ranges
|
|
c0e54155
|
2017-01-11T10:39:59
|
|
indexer: name pack files after trailer hash
Upstream git.git has changed the way packfiles are named.
Previously, they used a hash of the contained objects' OIDs, which
has then been changed to use the hash of the complete packfile instead.
See 1190a1acf (pack-objects: name pack files after trailer hash,
2013-12-05) in the git.git repository for more information on this
change.
This commit changes our logic to match the behavior of core git.
|
|
2696c5c3
|
2017-05-19T09:21:17
|
|
repository: make check if repo is a worktree more strict
To determine if a repository is a worktree or not, we currently check
for the existence of a "gitdir" file inside of the repository's gitdir.
While this is sufficient for non-broken repositories, we have at least
one case of a subtly broken repository where there exists a gitdir file
inside of a gitmodule. This will cause us to misidentify the submodule
as a worktree.
While this is not really a fault of ours, we can do better here by
observing that a repository can only ever be a worktree iff its common
directory and dotgit directory are different. This allows us to make our
check whether a repo is a worktree or not more strict by doing a simple
string comparison of these two directories. This will also allow us to
do the right thing in the above case of a broken repository, as for
submodules these directories will be the same. At the same time, this
allows us to skip the `stat` check for the "gitdir" file for most
repositories.
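In essence, the stricter check boils down to a string comparison; the field
names below are assumptions for illustration, not the repository struct's
actual members:

    /* a repository is a worktree iff its gitdir and common directory differ */
    static bool repo_is_worktree(const git_repository *repo)
    {
        return strcmp(repo->gitdir, repo->commondir) != 0;
    }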
|
|
c3b8e8b3
|
2017-05-14T10:28:05
|
|
Fix issue with directory glob ignore in subdirectories
|
|
e526fbc7
|
2017-05-17T09:23:06
|
|
tests: add test suite for opening submodules
|
|
98a5f081
|
2017-05-03T13:53:13
|
|
tests: threads::basic: remove unused function `exit_abruptly`
|
|
7d7f6d33
|
2017-05-03T13:52:55
|
|
tests: clone::local: compile UNC functions for Windows only
|
|
8b107dc5
|
2017-05-03T11:20:57
|
|
revparse: support open-ended ranges
Support '..' and '...' ranges where one side is not specified.
The unspecified side defaults to HEAD.
Closes #4223
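A short usage sketch of the new behavior (error handling elided):

    git_revspec spec;

    /* the missing right-hand side defaults to HEAD, i.e. "HEAD~3..HEAD" */
    git_revparse(&spec, repo, "HEAD~3..");

    /* spec.from and spec.to now hold the resolved endpoints */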
|
|
883eeb5f
|
2017-05-02T12:35:59
|
|
worktree: switch over worktree pruning to an opts structure
The current signature of `git_worktree_prune` accepts a flags field to
alter its behavior. This is not as flexible as we'd like it to be when
we want to enable passing additional options in the future. As the
function has not been part of any release yet, we are still free to
alter its current signature. This commit does so by using our usual
pattern of an options structure, which is easily extendable without
breaking the API.
|
|
8264a30f
|
2017-05-02T10:11:28
|
|
worktree: support creating locked worktrees
When creating a new worktree, we do have a potential race with us
creating the worktree and another process trying to delete the same
worktree as it is being created. As such, the upstream git project has
introduced a flag `git worktree add --locked`, which will cause the
newly created worktree to be locked immediately after its creation. This
mitigates the race condition.
We want to be able to mirror the same behavior. As such, a new flag
`locked` is added to the options structure of `git_worktree_add` which
allows the user to enable this behavior.
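A usage sketch of the new behavior; the flag is called `locked` in the text
above, so treat the exact field name used below as an assumption:

    git_worktree *wt;
    git_worktree_add_options opts = GIT_WORKTREE_ADD_OPTIONS_INIT;

    /* have the new worktree created in a locked state, as described above */
    opts.lock = 1;

    git_worktree_add(&wt, repo, "hotfix", "../hotfix", &opts);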
|
|
9364f274
|
2017-05-05T09:40:38
|
|
remote: test creating and fetching detached remotes
|
|
34b79391
|
2017-05-05T09:14:20
|
|
tests: online::remotes: add defines for URL and refspec
The repository URL is duplicated several times and can be de-duplicated
like this. Furthermore, exchange the static refspec variable with a
define to reduce BSS size.
|
|
8897f8fe
|
2017-05-05T09:47:54
|
|
remote: reject various actions for detached remotes
There are only few actions which actually make sense for a detached
remote, like e.g. `git_remote_ls`, `git_remote_prune`. For all the other
actions, we have to report an error when the remote has no repository
attached to it. This commit does so and implements some tests.
|
|
ffd264d9
|
2017-05-03T14:51:23
|
|
tests: repo: fix repo discovery tests on overlayfs
Debian and Ubuntu often use schroot to build their DEB packages in a
controlled environment. Depending on how schroot is configured, our
tests regarding repository discovery break due to not being able to find
the repositories anymore. It turns out that these errors occur when the
schroot is configured to use an overlayfs on the directory structures.
The reason for this failure is that we usually refrain from discovering
repositories across devices. But unfortunately, overlayfs does not have
consistent device identifiers for all its files but will instead use the
device number of the filesystem the file stems from. So whenever we
cross boundaries between the upper and lower layer of the overlay, we
will fail to properly detect the repository and bail out.
This commit fixes the issue by enabling cross-device discovery in our
tests. While it would be preferable to have this turned off, it probably
won't do much harm anyway as we set up our tests in a temporary location
outside of the parent repository.
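For context, repository discovery exposes a flag to allow crossing
filesystem boundaries; the tests now effectively enable it (a sketch of the
public API, not the literal test change):

    git_buf path = GIT_BUF_INIT;

    /* across_fs = 1 lets discovery continue past device-number changes */
    git_repository_discover(&path, start_path, 1, NULL);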
|
|
a7aa73a5
|
2017-05-02T10:02:36
|
|
worktree: introduce git_worktree_add options
The `git_worktree_add` function currently accepts only a path and name
for the new work tree. As we may want to expand these parameters in
future versions without adding additional parameters to the function for
every option, this commit introduces our typical pattern of an options
struct. Right now, this structure is still empty, which will change with
the next commit.
|
|
1dc89aab
|
2017-05-01T21:34:21
|
|
object validation: free some memleaks
|
|
13c1bf07
|
2017-05-01T16:17:48
|
|
Merge pull request #4197 from pks-t/pks/verify-object-hashes
Verify object hashes
|
|
5700ee9c
|
2017-05-01T16:10:50
|
|
Merge pull request #4216 from pks-t/pks/debian-test-failures
Debian HTTPS feature test failure
|
|
35079f50
|
2017-04-21T07:31:56
|
|
odb: add option to turn off hash verification
Verifying hashsums of objects we are reading from the ODB may be costly
as we have to perform an additional hashsum calculation on the object.
Especially when reading large objects, the penalty can be as high as
35%, as can be seen when executing the equivalent of `git cat-file` with
and without verification enabled. To mitigate for this, we add a global
option for libgit2 which enables the developer to turn off the
verification, e.g. when they can be reasonably sure that the objects on
disk won't be corrupted.
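A sketch of how a developer would turn the check off, assuming the global
option added for this purpose is the strict-hash-verification switch:

    /* skip re-hashing objects read from the ODB when they are trusted */
    git_libgit2_opts(GIT_OPT_ENABLE_STRICT_HASH_VERIFICATION, 0);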
|
|
28a0741f
|
2017-04-10T09:30:08
|
|
odb: verify object hashes
The upstream git.git project verifies objects when looking them up from
disk. This avoids scenarios where objects have somehow become corrupt on
disk, e.g. due to hardware failures or bit flips. While our mantra is
usually to follow upstream behavior, we do not do so in this case, as we
never check hashes of objects we have just read from disk.
To fix this, we create a new error class `GIT_EMISMATCH` which denotes
that we have looked up an object with a hashsum mismatch. `odb_read_1`
will then, after having read the object from its backend, hash the
object and compare the resulting hash to the expected hash. If hashes do
not match, it will return an error.
This obviously introduces another computation of checksums and could
potentially impact performance. Note though that we usually perform I/O
operations directly before doing this computation, and as such the
actual overhead should be drowned out by I/O. Running our test suite
seems to confirm this guess. On a Linux system with best-of-five
timings, we had 21.592s with the check enabled and 21.590s with the
check disabled. Note though that our test suite mostly contains very
small blobs only. It is expected that repositories with bigger blobs may
notice an increased hit by this check.
In addition to a new test, we also had to change the
odb::backend::nonrefreshing test suite, which now triggers a hashsum
mismatch when looking up the commit "deadbeef...". This is expected, as
the fake backend allocated inside of the test will return an empty
object for the OID "deadbeef...", which will obviously not hash back to
"deadbeef..." again. We can simply adjust the hash to equal the hash of
the empty object here to fix this test.
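Conceptually, the added check in `odb_read_1` amounts to the following
(simplified sketch; `raw` and `requested_oid` are illustrative names for the
backend's result and the OID being looked up):

    git_oid computed;

    /* re-hash what the backend returned and compare against the requested OID */
    git_odb_hash(&computed, raw.data, raw.len, raw.type);
    if (git_oid_cmp(&computed, requested_oid) != 0)
        return GIT_EMISMATCH;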
|
|
d59dabe5
|
2017-04-10T09:00:51
|
|
tests: object: test looking up corrupted objects
We currently have no tests which check whether we fail reading corrupted
objects. Add one which modifies contents of an object stored on disk and
then tries to read the object.
|
|
86c03552
|
2017-04-10T09:27:04
|
|
tests: object: create sandbox
The object::lookup tests do use the "testrepo.git" repository in a
read-only way, so we do not set up the repository as a sandbox but
simply open it. But in a future commit, we will want to test looking up
objects which are corrupted in some way, which requires us to modify the
on-disk data. Doing this in a repository without creating the sandbox
will modify contents of our libgit2 repository, though.
Create the repository in a sandbox to avoid this.
|
|
e29e8029
|
2017-04-10T10:31:22
|
|
tests: odb: make hash of fake backend configurable
In the odb::backend::nonrefreshing test suite, we set up a fake backend
so that we are able to determine if backend functions are called
correctly. During the setup, we also parse an OID which is later on used
to read out the pseudo-object. While this procedure works right now, it
will create problems later when we implement hash verification for
looked up objects. The current OID ("deadbeef") will not match the hash
of contents we give back to the ODB layer and thus cannot be verified.
Make the hash configurable so that we can simply switch the returned hash
for single tests.
|
|
7df580fa
|
2017-04-28T11:58:49
|
|
Merge pull request #4191 from pks-t/pks/wt-ref-renames
Branch renames with worktrees
|
|
2a7086fa
|
2017-04-25T13:23:04
|
|
tests: config: verify functionality with read-only backends
|
|
417319cc
|
2017-04-25T10:14:37
|
|
tests: core::features: only check for HTTPS if it is supported
|
|
a4de1ae3
|
2017-04-25T10:14:19
|
|
cmake: define GIT_HTTPS when HTTPS is supported
|
|
13c275ab
|
2017-04-21T07:49:08
|
|
tests: threads::diff: fix warning for unused variable
The threads::diff test suite has a static variable `_retries`, which is
used on Windows platforms only. As it is unused on other systems, the
compiler throws a warning there. Fix the warning by wrapping the
declaration in an ifdef.
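The fix boils down to compiling the variable only where it is used (sketch):

    #ifdef GIT_WIN32
    static int _retries;   /* only referenced by the Windows-specific code path */
    #endif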
|
|
8d89e409
|
2017-04-17T17:19:03
|
|
Merge pull request #4192 from libgit2/ethomson/win32_posix
Refactor some of the win32 POSIX emulation
|
|
78b8f039
|
2017-02-15T14:00:38
|
|
tests: fix indentation in checkout::head::with_index_only_tree
|
|
cf07db2f
|
2017-04-07T16:05:10
|
|
filter: only close filter if it's been initialized correctly
In the function `git_filter_list_stream_data`, we initialize, write and
subsequently close the stream which should receive content processed by
the filter. While we skip writing to the stream if its initialization
failed, we still try to close it unconditionally -- even if the
initialization failed, where the stream might not be set at all, leading
us to segfault.
The semantics of this code are not really clear. The function handling the
same logic for files instead of data seems to do the right thing here in
only closing the stream when initialization succeeded. When stepping
back a bit, this is only reasonable: if a stream cannot be initialized,
the caller would not expect it to be closed again. So actually, both
callers of `stream_list_init` fail to do so. The data streaming function
will always close the stream and the file streaming function will not
close the stream if writing to it has failed.
The fix is thus two-fold:
- callers of `stream_list_init` now close the stream iff it has been
initialized
- `stream_list_init` now closes the lastly initialized stream if
the current stream in the chain failed to initialize
Add a test which segfaulted previous to these changes.
|
|
2a485dab
|
2017-04-04T18:55:57
|
|
refs: update worktree HEADs when renaming branches
Whenever we rename a branch, we update the repository's symbolic HEAD
reference if it currently points to the branch that is to be renamed.
But with the introduction of worktrees, we also have to iterate over all
HEADs of linked worktrees to adjust them. Do so.
|
|
60297256
|
2017-04-04T16:12:27
|
|
tests: worktree::refs: convert spaces to tabs
|
|
48f09c6c
|
2017-04-05T11:59:03
|
|
win32: only set `git_win32__retries` where it exists
|
|
89d403cc
|
2017-04-05T09:50:12
|
|
win32: enable `p_utimes` for readonly files
Instead of failing to set the timestamp of a read-only file (like any
object file), set it writable temporarily to update the timestamp.
|
|
7ece9065
|
2017-04-03T23:07:16
|
|
win32: make posix emulation retries configurable
POSIX emulation retries should be configurable so that tests can disable
them. In particular, maniacally threading tests may end up trying to
open locked files and need retries, which will slow continuous
integration tests significantly.
|
|
e86d02f9
|
2017-04-03T00:10:47
|
|
git_repository_set_head: use tag name in reflog
When `git_repository_set_head` is provided a tag reference, update
the reflog with the tag name, like we do with a branch. This helps
consumers match the semantics of `git checkout tag_name`.
|