|
44d5e47d
|
2019-08-24T10:39:56
|
|
Fix include casing for case-sensitive filesystems.
|
|
08cfa43d
|
2019-08-13T18:17:11
|
|
Merge pull request #5202 from libgit2/users/ethomson/security_updates
Security updates from 0.28.3
|
|
57a9ccd5
|
2019-06-21T15:53:54
|
|
commit_list: fix possible buffer overflow in `commit_quick_parse`
The function `commit_quick_parse` provides a way to quickly parse
parts of a commit without storing or verifying most of its
metadata. The first thing it does is calculating the number of
parents by skipping "parent " lines until it finds the first
non-parent line. Afterwards, this parent count is passed to
`alloc_parents`, which will allocate an array to store all the
parents.
To calculate the amount of storage required for the parents
array, `alloc_parents` simply multiplies the number of parents
by the respective element's size. This already screams "buffer
overflow", and in fact the problem is made worse by the
result being cast to a `uint32_t`.
In fact, triggering this is possible: git-hash-object(1) will
happily write a commit with multiple millions of parents for you.
I've stopped at 67,108,864 parents as git-hash-object(1)
unfortunately soaks up the complete object without streaming
anything to disk and thus will cause an OOM situation at a later
point. The point here is: this commit was about 4.1GB of size but
compressed down to 24MB and thus easy to distribute.
The above doesn't yet trigger the buffer overflow, though. As the
array's elements are all pointers which are 8 bytes on 64 bit, we
need a total of 536,870,912 parents to trigger the overflow to
`0`. The effect is that we're now underallocating the array
and doing out-of-bound writes. As the buffer is kindly provided
by the adversary, this may easily result in code execution.
Extrapolating from the test file with 67m parents to the one with
536m parents results in a factor of 8. Thus the uncompressed
contents would be about 32GB in size and the compressed ones
192MB. While still easily distributable via the network, only
servers will have that amount of RAM and will not run into an
out-of-memory condition before triggering the overflow. This
at least means this attack is not an easy vector for client-side use
of libgit2.
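The fix boils down to checking the multiplication before it can wrap. A
minimal sketch in plain C follows; the helper name and the use of plain
`malloc` are illustrative, not libgit2's internal API:

    #include <stdint.h>
    #include <stdlib.h>

    static void *alloc_parents_checked(size_t parent_count)
    {
        /*
         * 536,870,912 parents x 8-byte pointers equals 2^32, which
         * truncates to 0 once the byte count is cast to uint32_t.
         * Compute the size in size_t and reject counts that would
         * overflow instead of casting the result down.
         */
        if (parent_count > SIZE_MAX / sizeof(void *))
            return NULL;

        return malloc(parent_count * sizeof(void *));
    }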
|
|
cb1439c9
|
2019-06-19T12:59:27
|
|
config: validate ownership of C:\ProgramData\Git\config before using it
When the VirtualStore feature is in effect, it is safe to let random
users write into C:\ProgramData because other users won't see those
files. This seemed to be the case when we introduced support for
C:\ProgramData\Git\config.
However, when that feature is not in effect (which seems to be the case
in newer Windows 10 versions), we'd rather not use those files unless
they come from a trusted source, such as an administrator.
This change imitates the strategy chosen by PowerShell's native OpenSSH
port to Windows regarding host key files: if a system file is owned
by neither an administrator, a system account, nor the current user,
it is ignored.
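A hedged sketch of such an ownership check with plain Win32 APIs follows; the
helper name is illustrative and libgit2's real implementation differs in
detail:

    #include <windows.h>
    #include <aclapi.h>
    #include <stdbool.h>

    /* Is the file at `path` owned by Administrators, LocalSystem or us? */
    static bool file_owner_is_trusted(const wchar_t *path)
    {
        PSID owner = NULL;
        PSECURITY_DESCRIPTOR descriptor = NULL;
        char admins[SECURITY_MAX_SID_SIZE], system_sid[SECURITY_MAX_SID_SIZE];
        char user[256];
        DWORD len;
        HANDLE token;
        bool trusted = false;

        if (GetNamedSecurityInfoW(path, SE_FILE_OBJECT,
                                  OWNER_SECURITY_INFORMATION, &owner,
                                  NULL, NULL, NULL, &descriptor) != ERROR_SUCCESS)
            return false;

        len = sizeof(admins);
        if (CreateWellKnownSid(WinBuiltinAdministratorsSid, NULL, admins, &len) &&
            EqualSid(owner, admins))
            trusted = true;

        len = sizeof(system_sid);
        if (!trusted &&
            CreateWellKnownSid(WinLocalSystemSid, NULL, system_sid, &len) &&
            EqualSid(owner, system_sid))
            trusted = true;

        if (!trusted &&
            OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token)) {
            len = sizeof(user);
            if (GetTokenInformation(token, TokenUser, user, len, &len))
                trusted = EqualSid(owner, ((TOKEN_USER *)user)->User.Sid);
            CloseHandle(token);
        }

        LocalFree(descriptor);
        return trusted;
    }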
|
|
df3f18ac
|
2019-08-05T00:32:11
|
|
changelog: include security updates
|
|
5774b2b1
|
2019-08-11T23:42:45
|
|
Merge pull request #5113 from pks-t/pks/stash-perf
stash: avoid recomputing tree when committing worktree
|
|
42bacbc6
|
2019-08-11T21:06:19
|
|
Merge pull request #5121 from pks-t/pks/variadic-errors
Variadic macros
|
|
b0692d6b
|
2019-08-09T09:01:56
|
|
Merge pull request #4913 from implausible/feature/signing-rebase-commits
Add sign capability to git_rebase_commit
|
|
998f9c15
|
2019-08-07T07:21:27
|
|
fixup: strange indentation
|
|
f627ba6c
|
2019-08-02T13:18:07
|
|
Merge pull request #5197 from pks-t/pks/remote-ifdeffed-block
remote: remove unused block of code
|
|
24c491ed
|
2019-08-02T07:58:11
|
|
Merge pull request #5146 from scottfurry/StaticFixesExamples
Adjust printf specifiers in examples code
|
|
e23c0b18
|
2019-08-02T07:52:58
|
|
remote: remove unused block of code
In "remote.c", we have a chunk of code that is #ifdef'fed out via
`#if 0` with a comment that we could export it as a helper function.
The code was implemented in 2013 and ifdef'fed in 2014, which shows that
there's clearly no interest in having such a helper at all.
As this block has recently created some confusion about `p_getenv` due
to it containing the only reference to that function in our codebase,
let's remove this block altogether.
|
|
d588de7c
|
2019-08-02T07:51:02
|
|
Merge pull request #5191 from eaigner/master
config: check if we are running in a sandboxed environment
|
|
73a186f2
|
2019-06-27T10:02:40
|
|
Adjust printf specifiers in examples code
Static analysis of the example code found multiple uses of `printf`
where the value being printed is a member of a git_indexer_progress
object. The specifiers used were for signed ints, but git_indexer_progress
members are typed as unsigned ints. The `printf` specifiers were altered
to match the type.
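A minimal illustration of the change (the callback shape mirrors the usual
indexer progress callback; the function name here is for demonstration only):

    #include <stdio.h>
    #include <git2.h>

    static int print_progress(const git_indexer_progress *stats, void *payload)
    {
        (void)payload;
        /* before: printf("%d/%d\n", ...) interpreted the unsigned counters as signed */
        printf("%u/%u objects\n", stats->received_objects, stats->total_objects);
        return 0;
    }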
|
|
952fbbfb
|
2019-08-01T20:04:11
|
|
config: check if we are running in a sandboxed environment
For sandboxed apps on macOS, the $HOME environment variable returns the
path to the sandbox container instead of the actual user $HOME. To get
the correct path, we have to read it from the password file entry.
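A minimal sketch of the lookup (not libgit2's exact code; the sandbox
detection itself is omitted here):

    #include <pwd.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        /* In a sandboxed app, getenv("HOME") points into the container;
         * the passwd entry still holds the real home directory. */
        struct passwd *entry = getpwuid(getuid());

        if (entry != NULL && entry->pw_dir != NULL)
            printf("real home: %s\n", entry->pw_dir);

        return 0;
    }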
|
|
ac171542
|
2019-08-01T17:45:14
|
|
Merge pull request #5184 from novalis/fix-example
Fix example checkout to forbid rather than require --
|
|
8f7fd981
|
2019-08-01T13:35:27
|
|
Merge pull request #5183 from pks-t/pks/editorconfig
editorconfig: update to match our coding style
|
|
56e7aaf0
|
2019-08-01T12:40:51
|
|
Merge pull request #5125 from albfan/wip/albfan/diff_buffers
Compare buffers in diff example
|
|
1721ab04
|
2019-06-16T11:25:47
|
|
unix: posix: avoid use of variadic macro `p_snprintf`
The macro `p_snprintf` is implemented as a variadic macro that
calls `snprintf` directly with `__VA_ARGS__`. In C89, variadic
macros are not allowed, but as the arguments of `p_snprintf` and
`snprintf` are matching 1:1, we can fix this by simply removing
the parameter list from `p_snprintf`.
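An illustration of the change described above (sketched, not the verbatim
libgit2 diff):

    /* before: a C99-style variadic macro forwarding __VA_ARGS__ */
    /* #define p_snprintf(buf, size, fmt, ...) snprintf(buf, size, fmt, __VA_ARGS__) */

    /* after: a plain object-like macro, valid in C89, that passes the
     * caller's argument list through untouched */
    #define p_snprintf snprintf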
|
|
63d8cd18
|
2019-06-16T11:17:17
|
|
apply: remove use of variadic error macro
The macro `apply_err` is implemented as a variadic macro, but
variadic macros are not defined by C89. Convert it to a variadic
function instead.
|
|
27b8b31e
|
2019-08-01T11:57:03
|
|
parse: remove use of variadic macros which are not C89 compliant
The macro `git_parse_error` is implemented in a variadic way so
that it's possible to pass printf-style parameters.
Unfortunately, variadic macros are not defined by C89 and thus we
cannot use that functionality. But as we have implemented
`git_error_vset` in the previous commit, we can now just use that
instead.
Convert `git_parse_error` to a variadic function and use
`git_error_vset` to fix the compliance violation. While at it,
move the function to "patch_parse.c".
|
|
c8e63812
|
2019-06-16T11:03:08
|
|
errors: introduce `git_error_vset` function
Right now, we only provide a `git_error_set` that has a variadic
function signature. It's impossible to drive this function in a
C89-compliant way from other functions that have a variadic
signature, though, like for example `git_parse_error`.
Implement a new `git_error_vset` function that gets a `va_list`
as parameter, fixing the above problem.
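A hedged sketch of the pattern: a `va_list`-taking setter lets other variadic
functions forward their arguments in C89. The signatures and the error buffer
below are illustrative; the real implementation formats into libgit2's error
state instead:

    #include <stdarg.h>
    #include <stdio.h>

    static char error_buffer[1024];

    void git_error_vset(int error_class, const char *fmt, va_list args)
    {
        (void)error_class;
        vsnprintf(error_buffer, sizeof(error_buffer), fmt, args);
    }

    /* A variadic function such as git_parse_error can now forward its
     * arguments instead of relying on a variadic macro. */
    int git_parse_error(const char *fmt, ...)
    {
        va_list args;

        va_start(args, fmt);
        git_error_vset(0 /* the real code passes a proper error class */, fmt, args);
        va_end(args);

        return -1;
    }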
|
|
eea128b5
|
2019-08-01T11:49:50
|
|
Merge pull request #5135 from j143-bot/jdev01
Include ahead_behind in the test suite
|
|
e8f63411
|
2019-08-01T11:29:58
|
|
Merge pull request #5186 from pks-t/pks/config-snapshot-separation
config: separate file and snapshot backends
|
|
f039c836
|
2019-07-29T13:02:37
|
|
Merge pull request #5192 from libgit2/cmn/object-size-nopublic
object: deprecate git_object__size for removal
|
|
c8e249b0
|
2019-07-29T10:51:22
|
|
object: deprecate git_object__size for removal
In #5118 we removed the double-underscore to make it a normally-named public
function. However, this is not an interesting function outside of the library
and it takes up a name for something that could be more useful.
Remove the single-underscore version as we have not done any releases with it.
|
|
37ebe9ad
|
2019-07-24T18:49:08
|
|
config_backend: rename internal structures
The internal backend structures are kind-of legacy and do not really
speak for themselves. Rename them accordingly to make them easier to
understand.
|
|
2bff84ba
|
2019-07-26T21:02:56
|
|
config_file: separate out read-only backend
To further distinguish the writeable and read-only file backends,
separate the readonly backend into its own "config_snapshot.c"
implementation. The snapshot backend can be generically used to snapshot
any type of backend.
|
|
f0b10066
|
2019-07-24T18:37:14
|
|
config_file: fix cast of readonly backend
In `backend_readonly_free`, the passed in config backend is being cast
to a `diskfile_backend` instead of to a `diskfile_readonly_backend`.
While this works out just fine because we only access its header values,
which were shared between both backends, it is undefined behaviour.
Use the correct type to fix this.
|
|
a3159df8
|
2019-07-24T18:31:43
|
|
config_file: remove shared `diskfile_header` struct
The `diskfile_header` structure is shared between both
`diskfile_backend` and `diskfile_readonly_backend`. The separation and
resulting casting is confusing at times and a source of programming
errors.
Remove the shared structure and inline them directly.
|
|
271e5fba
|
2019-07-24T18:18:18
|
|
config_file: duplicate accessors for readonly backend
While most functions of the readonly configuration backend are
implemented separately from the writeable configuration backend, the two
functions `config_iterator_new` and `config_get` are shared between
both. This sharing makes it necessary to have some shared data
structures, which is the `diskfile_header` structure. Unfortunately, this
makes the backends harder to grasp than necessary due to all the casting
between structs and also quite error prone.
Reimplement those functions for the readonly backends. As readonly
backends cannot be refreshed anyway, we can remove the calls to
`config_refresh` in there.
|
|
4e7ce1fb
|
2019-07-24T18:13:52
|
|
config_file: reimplement `config_readonly_open` generically
The `config_readonly_open` function currently receives as input a
diskfile backend and will copy its entries to a new snapshot. This is
rather intimate, as we need to assume that the source config backend is
in fact a diskfile backend. We can do better than this though by using
generic methods to copy contents of the provided backend, e.g. by using
a config iterator. This also allows us to decouple the read-only backend
from the read-write backend.
|
|
76182e84
|
2019-07-24T18:04:38
|
|
config_entries: fix possible segfault when duplicating entries
When duplicating a configuration entry, we allocate a new entry but do
not verify that we get a valid pointer back. As we're dereferencing the
pointer afterwards, we might thus run into a segfault in out-of-memory
situations.
Extract a new function `git_config_entries_dup_entry` that handles the
complete entry duplication. Fix the error by using
`GIT_ERROR_CHECK_ALLOC`.
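A generic sketch of the idea (not libgit2's internal API): duplicate the
entry and propagate allocation failure instead of dereferencing NULL:

    #include <stdlib.h>
    #include <string.h>

    struct entry { char *name; char *value; };

    static int dup_entry(struct entry **out, const struct entry *src)
    {
        struct entry *dup = calloc(1, sizeof(*dup));

        if (dup == NULL)
            return -1; /* the real code uses GIT_ERROR_CHECK_ALLOC here */

        dup->name = strdup(src->name);
        dup->value = src->value ? strdup(src->value) : NULL;
        if (dup->name == NULL || (src->value && dup->value == NULL)) {
            free(dup->name);
            free(dup->value);
            free(dup);
            return -1;
        }

        *out = dup;
        return 0;
    }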
|
|
ed387d4a
|
2019-07-24T12:01:27
|
|
Fix example checkout to forbid rather than require --
Make the example program for checkout follow git syntax, where
"--" indicates a file. This was likely just a strcmp return
value confusion.
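A minimal illustration of that confusion: strcmp returns 0 on equality, so
using its bare return value as a boolean inverts the test (the helper name is
illustrative):

    #include <string.h>

    static int is_separator(const char *arg)
    {
        /* buggy: non-zero (i.e. "true") for everything EXCEPT "--" */
        /* return strcmp(arg, "--"); */

        /* correct: true only when the argument actually is "--" */
        return strcmp(arg, "--") == 0;
    }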
|
|
ab545014
|
2019-07-24T08:20:08
|
|
editorconfig: update to match our coding style
Update editorconfig to match our coding style. Most importantly, we set
up the tab width to be 8 characters instead of the default and use
2 spaces to indent YAML files.
|
|
fdd10839
|
2019-06-30T00:41:10
|
|
Implement test for graph ahead and behind
|
|
e3adc99e
|
2019-07-22T11:02:38
|
|
Merge pull request #5181 from pks-t/pks/config-iterator-refresh
config_file: refresh when creating an iterator
|
|
a213fec6
|
2019-07-21T15:12:40
|
|
tests: remote: add test suite to test listing remotes
There was a bug where `git_config_iterator` failed to re-read a modified
configuration file. This bug also impacted `git_remote_list`, which thus
failed to provide an up-to-date list of remotes. Add a test suite
remote::list with a single test that verifies we do the right thing.
|
|
2766b92d
|
2019-07-21T15:10:34
|
|
config_file: refresh when creating an iterator
When creating a new iterator for a config file backend, then we should
always make sure that we're up to date by calling `config_refresh`.
Otherwise, we might not notice when another process has modified the
configuration file and would thus return outdated values.
Add two tests to config::stress that verify that we get up-to-date
values when reading configuration entries via `git_config_iterator`.
|
|
9fac8b78
|
2019-07-21T15:08:22
|
|
config_file: do not refresh read-only backends
If calling `config_refresh` on a read-only configuration file backend,
then we will segfault when comparing the timestamp of the file due to
`path` being uninitialized. As a read-only snapshot should not be
refreshed anyway and stay consistent, we can simply return early when
calling `config_refresh` on a read-only snapshot.
|
|
28d11b59
|
2019-07-21T14:41:21
|
|
config_file: consistently use `GIT_CONTAINER_OF`
|
|
82b1d1da
|
2019-07-21T12:25:10
|
|
Merge pull request #5141 from pks-t/pks/azure-drop-powershell
azure: drop powershell
|
|
90858192
|
2019-07-20T21:30:03
|
|
Merge pull request #5180 from libgit2/ethomson/futils
fuzzer: use futils instead of fileops
|
|
ecd4f97b
|
2019-07-20T21:15:47
|
|
fuzzer: use futils instead of fileops
|
|
2376cd26
|
2019-07-20T20:48:49
|
|
Merge pull request #5151 from pks-t/pks/w32-unlink-symlink
w32: fix unlinking of directory symlinks
|
|
6be5ac23
|
2019-07-11T15:30:51
|
|
checkout: postpone creation of symlinks to the end
On most platforms it's fine to create symlinks to nonexistent files. Not
so on Windows, where the type of a symlink (file or directory) needs to
be set at creation time. So depending on whether the target file exists
or not, we may end up with different symlink types. This creates a
problem when performing checkouts, where we simply iterate over all blobs
that need to be updated without treating symlinks specially. If the
target file of the symlink is going to be checked out after the symlink
itself, then the symlink will be created as directory symlink and not as
file symlink.
Fix the issue by iterating over blobs twice: once to perform postponed
deletions and updates to non-symlink blobs, and once to perform updates
to symlink blobs.
|
|
50194dcd
|
2019-07-11T15:14:42
|
|
win32: fix symlinks to relative file targets
When creating a symlink in Windows, one needs to tell Windows whether
the symlink should be a file or directory symlink. To determine which
flag to pass, we call `GetFileAttributesW` on the target file to see
whether it is a directory and then pass the flag accordingly. The
problem though is that if we create a symlink with a relative target path,
then we will check that relative path while not necessarily being inside
the working directory where the symlink is to be created. Thus, getting
its attributes will either fail or return attributes of the wrong
target.
Fix this by resolving the target path relative to the directory in which
the symlink is to be created.
|
|
93d37a1d
|
2019-06-29T09:59:36
|
|
tests: core: improve symlink test coverage
Add two more tests to verify that we're not deleting symlink targets,
but the symlinks themselves. Furthermore, convert several `cl_skip`s on
Win32 to conditional skips depending on whether the clar sandbox
supports symlinks or not. Windows is grown up now and may allow
unprivileged symlinks if the machine has been configured accordingly.
|
|
683ea2b0
|
2019-06-29T09:10:57
|
|
tests: core: add missing asserts for several function calls
Several function calls to `p_stat` and `p_close` do not verify whether
they actually succeeded. As these functions _may_ fail and as we also
want to make sure that we're not doing anything dumb, let's check them,
too.
|
|
a00842c4
|
2019-06-29T09:59:14
|
|
win32: correctly unlink symlinks to directories
When deleting a symlink on Windows, the way to delete it depends on
whether it is a directory symlink or a file symlink. In the first case,
we need to use `RemoveDirectory`, in the second `DeleteFile`. Right now,
`p_unlink` will only ever try to use `DeleteFile`, though, and thus fail
to remove directory symlinks. This mismatches how unlink(3P) is expected
to behave, as it shall remove any symlink regardless of whether
it is a file or directory symlink.
In order to correctly unlink a symlink, we thus need to check what kind
of file this is. If we were to first query file attributes of every file
upon calling `p_unlink`, then this would penalize the common case
though. Instead, we can try to first delete the file with `DeleteFile`
and only if the error returned is `ERROR_ACCESS_DENIED` will we query
file attributes and determine whether it is a directory symlink to use
`RemoveDirectory` instead.
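A hedged sketch of that strategy (not libgit2's exact code): try `DeleteFile`
first so the common case stays cheap, and only fall back to attribute checks
and `RemoveDirectory` on `ERROR_ACCESS_DENIED`:

    #include <windows.h>

    static int unlink_any_symlink(const wchar_t *path)
    {
        DWORD attrs;

        if (DeleteFileW(path))
            return 0;

        if (GetLastError() != ERROR_ACCESS_DENIED)
            return -1;

        /* only a directory symlink (reparse point + directory) may be
         * removed with RemoveDirectory */
        attrs = GetFileAttributesW(path);
        if (attrs == INVALID_FILE_ATTRIBUTES ||
            !(attrs & FILE_ATTRIBUTE_DIRECTORY) ||
            !(attrs & FILE_ATTRIBUTE_REPARSE_POINT))
            return -1;

        return RemoveDirectoryW(path) ? 0 : -1;
    }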
|
|
ded77bb1
|
2019-06-29T09:58:34
|
|
path: extract function to check whether a path supports symlinks
When initializing a repository, we need to check whether its working
directory supports symlinks to correctly set the initial value of the
"core.symlinks" config variable. The code to check the filesystem is
reusable in other parts of our codebase, like for example in our tests
to determine whether certain tests can be expected to succeed or not.
Extract the code into a new function `git_path_supports_symlinks` to
avoid duplicate implementations. Remove a duplicate implementation in
the repo test helper code.
|
|
e54343a4
|
2019-06-29T09:17:32
|
|
fileops: rename to "futils.h" to match function signatures
Our file utils functions all have a "futils" prefix, e.g.
`git_futils_touch`. One would thus naturally guess that their
definitions and implementation would live in files "futils.h" and
"futils.c", respectively, but in fact they live in "fileops.h".
Rename the files to match expectations.
|
|
a7d32d60
|
2019-07-20T18:46:32
|
|
stash: avoid recomputing tree when committing worktree
When creating a new stash, we need to create three separate
commits storing the differences in the index, untracked
changes, as well as differences in the working directory. The
first two will only be done conditionally if the equivalent
options "git stash --keep-index --include-untracked" are being
passed to `git_stash_save`, but even when only creating a stash
of worktree changes we're much slower than git.git. Using our new
stash example:
$ time git stash
Saved working directory and index state WIP on (no branch): 2f7d9d47575e Linux 5.1.7
real 0m0.528s
user 0m0.309s
sys 0m0.381s
$ time lg2 stash
real 0m27.165s
user 0m13.645s
sys 0m6.403s
As can be seen, libgit2 is more than 50x slower than git.git!
When creating the stash commit that includes all worktree
changes, we create a completely new index to prepare for the new
commit and populate it with the entries contained in the index'
tree. Here comes the catch: by populating the index with a tree's
contents, we do not have any stat caches in the index. This means
that we have to re-validate every single file from the worktree
and see whether it has changed.
The issue can be fixed by populating the new index with the
repo's existing index instead of with the tree. This retains all
stat cache information, and thus we really only need to check
files that have changed stat information. This is semantically
equivalent to what we previously did: previously, we used the
tree of the commit computed from the index. Now we're just using
the index directly.
And, in fact, the cache is doing wonders:
$ time lg2 stash
real 0m1.836s
user 0m1.166s
sys 0m0.663s
We're now performing 15x faster than before and are only 3x
slower than git.git.
|
|
88731e3c
|
2019-06-14T12:34:37
|
|
examples: implement git-stash example
Implement a new example that resembles the git-stash(1) command.
Right now, it only provides the apply, list, save and pop
subcommands without any options.
This example is mostly used to test libgit2's stashing
performance on big repositories.
|
|
fe3b5da3
|
2019-07-19T12:43:55
|
|
clar: provide ability to set summary file via environment
As different test suites for our CI are mostly defined via CMake, it's
hard to run those tests with a summary file path as that'd require us to
add another parameter to all unit tests. As we do not want to
unconditionally run unit tests with a summary file, we would have to add
another CMake build parameter for test execution, which is ugly.
Instead, implement a way to provide a summary file path via the
environment.
|
|
86ecd600
|
2019-06-28T15:11:27
|
|
fuzzers: provide test targets
Instead of having to find the fuzzer executables in our Azure test
scripts, provide test targets for each of our fuzzers that will
run them with the correct paths.
|
|
415ee616
|
2019-07-12T09:40:13
|
|
azure-pipelines: make gitdaemon tests work on Win32
On Win32 builds, the PID file created by git-daemon contained an invalid
PID that we were not able to kill afterwards. Somehow, it seems like the
contained PID was wrapped in braces. Consequently, kill(1) failed and
thus caused the build to error.
Fix this by directly grabbing the PID of the spawned git-daemon process.
|
|
f867bfa8
|
2019-07-20T18:35:46
|
|
azure: skip SSH tests on Win32 platforms
On Win32 build hosts, we do not have an SSH daemon readily available and
thus cannot perform the SSH tests. Let's skip the tests to not let Azure
Pipelines fail.
|
|
0cda5252
|
2019-07-19T12:09:58
|
|
azure: use bash scripts across all platforms
Right now, we maintain semantically equivalent build scripts in
both Bash and Powershell to support both Windows and non-Windows
hosts. Azure Pipelines supports Bash on Windows, too, via Git for
Windows, and as such it's not really required to maintain the
Powershell scripts at all.
Remove them to reduce our own maintenance burden.
|
|
1be4f896
|
2019-07-19T12:07:59
|
|
azure: avoid executing compiler if there is none
Until now, we always had the CC variable defined in the build.sh
pipeline. But as we're about to migrate the Windows jobs to Bash, as
well, those will not have CC defined and thus we cannot use "$CC" to
determine the compiler version.
Fix this by only executing "$CC" if the variable was set.
|
|
8e356f48
|
2019-07-20T18:35:20
|
|
azure: explicitly specify CMake generator
We currently specify the CMake generator as part of the CMAKE_OPTIONS
variable. This is fine in the current setup, but during the conversion
to drop PowerShell scripts this will prove problematic for all
generators that have spaces in their names due to quoting issues.
Convert to use an explicit CMAKE_GENERATOR variable that makes it easier
to get quoting right.
|
|
443df2df
|
2019-06-24T16:27:00
|
|
azure: replace mingw setup with bash
We're about to phase out our Powershell scripts as Azure
Pipelines does in fact support Bash scripts on all platforms. As
a preparatory step, let's replace our MinGW setup script with a
Bash script.
|
|
d8e85d57
|
2019-06-27T15:01:24
|
|
azure: fix building in MinGW via Bash
Azure Pipelines supports bash tasks on Windows hosts due to it always
having Git for Windows included. To support this, the Git for Windows
directory is added to the PATH environment to make the bash shell
available for execution. Unfortunately, this breaks CMake with the MinGW
generator, as it has sanity checks to verify that no bash executable is
in the PATH. So we can either remove Git for Windows from the path, but
then we're unable to execute bash jobs. Or we can add it to the path,
but then we're unable to execute CMake with the MinGW generator.
Let's re-model how we set the PATH environment. Instead of setting up
PATH for the complete build job, we now set a variable "BUILD_PATH" for
the job. This variable is only being used when executing CMake so that
it encounters a sanitized PATH environment without GfW's bash shell.
|
|
ffac520e
|
2019-06-24T16:19:35
|
|
azure: move build scripts into "azure-pipelines" directory
Since we have migrated to Azure Pipelines, we have deprecated and
subsequently removed all infrastructure for AppVeyor and
Travis. Thus it doesn't make a lot of sense to have the split
between "ci/" and "azure-pipelines/" directories anymore, as
"azure-pipelines/" is essentially our only CI.
Move all CI scripts into the "azure-pipelines/" directory to have
everything centrally located and to remove clutter in the
top-level directory.
|
|
d827b11b
|
2019-06-28T13:20:54
|
|
tests: execute leak checker via CTest directly
Right now, we have an awful hack in our test CI setup that extracts the
test command from CTest's output and then prepends the leak checker.
This is dependent on non-machine-parseable output from CMake and also
breaks on various occasions, like for example when we have spaces in the
current path or when the path contains backslashes. Both conditions may
easily be triggered on Win32 systems, and in fact they do break our
Azure Pipelines builds.
Remove the awful hack in favour of a new CMake build option
"USE_LEAK_CHECKER". If specifying e.g. "-DUSE_LEAK_CHECKER=valgrind",
then we will set up all tests to be run under valgrind. Like this, we
can again simply execute ctest without needing to rely on evil sorcery.
|
|
1f44079c
|
2019-07-20T18:08:40
|
|
Merge pull request #5179 from pks-t/pks/patch-parse-free
patch_parse: fix segfault due to line containing static contents
|
|
a613832e
|
2019-07-20T18:49:48
|
|
patch_parse: fix segfault due to line containing static contents
With commit dedf70ad2 (patch_parse: do not depend on parsed buffer's
lifetime, 2019-07-05), all lines of the patch are allocated with
`strdup` to make lifetime of the parsed patch independent of the buffer
that is currently being parsed. In commit b08932824 (patch_parse: ensure
valid patch output with EOFNL, 2019-07-11), we introduced another
code location where we add lines to the parsed patch. But as that one
was implemented via a separate pull request, it wasn't converted to use
`strdup` either. As a consequence, we generate a segfault when trying
to deallocate the potentially static buffer that's now in some of the
lines.
Use `git__strdup` to fix the issue.
|
|
e07dbc92
|
2019-07-20T11:26:00
|
|
Merge pull request #5173 from pks-t/pks/gitignore-wildmatch-error
ignore: fix determining whether a shorter pattern negates another
|
|
fd7a384b
|
2019-07-20T11:24:37
|
|
Merge pull request #5159 from pks-t/pks/patch-parse-old-missing-nl
patch_parse: handle missing newline indicator in old file
|
|
f33ca472
|
2019-07-20T11:06:23
|
|
Merge pull request #5158 from pks-t/pks/patch-parsed-lifetime
patch_parse: do not depend on parsed buffer's lifetime
|
|
d78a1b18
|
2019-07-20T11:04:53
|
|
Merge pull request #5174 from pks-t/pks/winhttp-hash
sha1: fix compilation of WinHTTP backend
|
|
964c1c60
|
2019-07-20T11:02:30
|
|
Merge pull request #5176 from pks-t/pks/repo-template-head
repository: do not initialize HEAD if it's provided by templates
|
|
9d46f167
|
2019-07-19T10:50:51
|
|
repository: do not initialize HEAD if it's provided by templates
When using templates to initialize a git repository, then git-init(1)
will copy over all contents of the template directory. These will be
preferred over the default ones created by git-init(1). While we mostly
do the same, there is the exception of "HEAD". While we do copy over the
template's HEAD file, afterwards we'll immediately re-initialize its
contents with either the default "ref: refs/heads/master" or the init
option's `initial_head` field.
Let's fix the inconsistency with upstream git-init(1) by not overwriting
the template HEAD, but only if the user hasn't set `opts.initial_head`.
If the `initial_head` field has been supplied, we should use that
regardless of whether the template contained a HEAD file or not. Add
tests to verify we correctly use the template directory's HEAD file and
that `initial_head` overrides the template.
|
|
f3134a84
|
2019-07-19T10:41:10
|
|
repository: update error handling in `init_ext`
Update `git_repository_init_ext` to use our typical style of error
handling. The function had multiple statements which didn't `goto out`
immediately but instead deferred it to later calls combined with `if`
statements.
|
|
869ae5a3
|
2019-07-19T10:15:43
|
|
repository: avoid swallowing error codes in `create_head`
The error handling in `git_repository_create_head` completely swallows
all error codes. While probably not too much of a problem, this also
violates our usual coding style.
Refactor the code to use a local `error` variable with the typical `goto
out` statements.
|
|
0d12b8dd
|
2019-07-19T09:43:34
|
|
tests: repo: refactor setup of templates and repos
All tests in repo::template have a common pattern of first setting up
templates, then setting up the repository that makes use of those
templates via several init options. Refactor this pattern into two
functions `setup_templates` and `setup_repo` that handle most of that
logic to make it easier to spot what a test actually wants to check.
Furthermore, this also refactors how we clean up after the tests.
Previously, it was a combination of manually calling
`cl_fixture_cleanup` and `cl_set_cleanup`, which really is kind of hard
to read. This commit refactors this to instead provide the cleaning
parameters in the setup functions. All cleanups are then performed in
the suite's cleanup function.
|
|
3b79ceaf
|
2019-07-19T08:58:12
|
|
tests: repo: refactor template path handling
The repo::template test suite makes use of quite a few local variables
that could be consolidated. Do so to make the code easier to read.
|
|
ee193480
|
2019-07-19T08:45:45
|
|
tests: repo: move template tests into their own suite
There's quite a lot of supporting code for our templates and they are an
obvious standalone feature. Thus, let's extract those tests into their
own suite to also make refactoring of them easier.
|
|
3424c210
|
2019-07-19T08:00:13
|
|
Merge pull request #5138 from libgit2/ethomson/cvar
configuration: cvar -> configmap
|
|
a33c0de2
|
2019-07-18T19:17:40
|
|
Merge pull request #5172 from bk2204/cache-efficient-eviction
Evict cache items more efficiently
|
|
e86d75f3
|
2019-07-18T19:00:42
|
|
Merge pull request #5175 from pks-t/pks/clar-fix-suite-count
clar: fix suite count
|
|
92109976
|
2019-07-18T14:20:18
|
|
tests: fix undercounting of suites
With the introduction of data variants for suites, we started
undercounting the number of suites as we didn't account for those that
were executed twice. This was then adjusted to count the number of
initializers instead, but this fails to account for suites without any
initializers at all.
Fix the suite count by counting either the number of initializers or, if
there is no initializer, count it as a single suite, only.
|
|
29fe79e6
|
2019-07-18T14:07:22
|
|
Merge pull request #5163 from csware/gitignore-vs2017
Ignore VS2017 specific files and folders
|
|
36558513
|
2019-06-24T23:31:23
|
|
configuration: deprecate git_cvar safely
|
|
658022c4
|
2019-07-18T13:53:41
|
|
configuration: cvar -> configmap
`cvar` is an unhelpful name. Refactor its usage to `configmap` for more
clarity.
|
|
343fb83a
|
2019-07-18T13:50:47
|
|
Merge pull request #5156 from pks-t/pks/attr-macros-in-subdir
gitattributes: ignore macros defined in subdirectories
|
|
270fd807
|
2019-07-18T13:44:10
|
|
azure: compile one Windows platform with the WinHTTP SHA1 backend
We currently have no job that compiles libgit2 with the WinHTTP backend
for SHA1. Due to this, a compile error has been introduced and not
noticed for several months. Change the x86 MSVC job to use the HTTPS
backend for SHA1. The x86 job was chosen for no particular reason.
|
|
7574564e
|
2019-07-18T13:40:34
|
|
sha1: win32: fix compilation due to unknown type
In commit bbf034ab9 (hash: move `git_hash_prov` into Win32 backend,
2019-02-22), the `git_hash_prov`'s structure name has been removed in
favour of its typedef'ed name. But as we have no CI that compiles with
the WinHTTP hashing backend right now, it wasn't noticed that the
implementation that uses this struct wasn't changed correctly.
Fix the struct type to make it compile again.
|
|
b7c247b3
|
2019-07-18T13:37:02
|
|
cmake: include SHA1 headers into our source files
When selecting the SHA1 backend, we only include the respective C
implementation of the selected backend. But since commit bd48bf3fb
(hash: introduce source files to break include circles, 2019-06-14), we
have introduced separate headers and compilation units for all hashes.
So by not including the headers, we may not honor them to compute
whether a file needs to be recompiled and they also will not be
displayed in IDEs.
Add the header files to fix this problem.
|
|
6f6340af
|
2019-07-18T11:57:55
|
|
ignore: fix determining whether a shorter pattern negates another
When computing whether we need to store a negative pattern, we iterate
through all previously known patterns and check whether the negative
pattern undoes any of the previous ones. In doing so we call `wildmatch`
and check its return value for negative error values. If there is a
negative return, we will abort and bubble up that error to the caller.
In fact, this check for negative values stems from the time where we
still used `fnmatch` instead of `wildmatch`. For `fnmatch`, negative
values indicate a "real" error, while for `wildmatch` a negative value
may be returned if the matching was prematurely aborted. A premature
abort may for example also happen if the pattern is shorter than the
haystack and only matches a prefix of it. Returning an error in that
case is the wrong thing to do.
Fix the code to compare for equality with `WM_MATCH`, only. Negative
values returned by `wildmatch` are perfectly fine and thus should be
ignored. Add a test that verifies we do not see the error.
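A minimal sketch of the corrected comparison; the constants mirror the
vendored wildmatch.h (WM_MATCH is 0, aborts are negative) and the helper is
illustrative only:

    #define WM_MATCH      0
    #define WM_NOMATCH    1
    #define WM_ABORT_ALL -1

    extern int wildmatch(const char *pattern, const char *text, unsigned int flags);

    static int pattern_matches(const char *pattern, const char *path)
    {
        int res = wildmatch(pattern, path, 0);

        /* buggy: `if (res < 0) goto error;` turned an aborted match into a failure */
        return res == WM_MATCH;
    }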
|
|
368b9795
|
2019-07-18T11:27:21
|
|
Merge pull request #5168 from tiennou/clar/fix-data-suite-count
clar: correctly account for "data" suites when counting
|
|
51124a5b
|
2019-07-17T17:33:34
|
|
Merge pull request #5170 from bk2204/packbuilder-efficient-realloc
Allocate memory more efficiently when packing objects
|
|
770b91b1
|
2019-07-17T15:59:54
|
|
cache: evict items more efficiently
When our object cache is full, we pick eight items (or the whole cache,
if there are fewer) and evict them. For small cache sizes, this is fine,
but when we're dealing with a large number of objects, we can repeatedly
exhaust the cache and spend a large amount of time in git_oidmap_iterate
trying to find items to evict.
Instead, let's assume that if the cache gets full, we have a large
number of objects that we're handling, and be more aggressive about
evicting items. Let's remove one item for every 2048 items, but not less
than 8. This causes us to scale our evictions in proportion to the size
of the cache and significantly reduces the time we spend in
git_oidmap_iterate.
Before this change, a full pack of all the non-blob objects in the Linux
repository took in excess of 30 minutes and spent 62.3% of total runtime
in odb_read_1 and its children, and 44.3% of the time in
git_oidmap_iterate. With this change, the same operation now takes 14
minutes and 44 seconds, and odb_read_1 accounts for only 35.9% of total
time, whereas git_oidmap_iterate accounts for just 6.2%.
Note that we do spend a little more time inflating objects and a decent
amount more time in memcmp. However, overall, the time taken is
significantly improved, and time in pack building is now dominated by
git_delta_create_from_index (33.7%), which is what we would expect.
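A minimal sketch of the eviction sizing described above: scale the number of
evicted items with the cache size, but never evict fewer than eight (the
function name is illustrative):

    #include <stddef.h>

    static size_t eviction_count(size_t cached_items)
    {
        size_t evict = cached_items / 2048;

        return evict < 8 ? 8 : evict;
    }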
|
|
c4df926b
|
2019-07-16T21:54:10
|
|
pack-objects: allocate memory more efficiently
The packbuilder code allocates memory in chunks. When it needs to
allocate, it tries to add 1024 to the number of objects and multiply by
3/2. However, it actually multiplies by 1 instead, since it performs an
integer division in the expression "3 / 2" and only then multiplies by
the increased number of objects.
The current behavior causes the code to waste massive amounts of time
copying memory when it reallocates, causing inserting all non-blob
objects in the Linux repository into a new pack to take some
indeterminate time greater than 5 minutes instead of 52 seconds.
Correct this error by first dividing by two, and only then multiplying
by 3. We still check for overflow for the multiplication, which is the
only part that can overflow. This appears to be the only place in the
code base which has this problem.
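A minimal illustration of the rounding bug described above ("3 / 2" is
evaluated in integer arithmetic and yields 1, so the intended 1.5x growth
never happens); the function names are illustrative:

    #include <stddef.h>

    static size_t grow_buggy(size_t nr_objects)
    {
        return 3 / 2 * (nr_objects + 1024);   /* == nr_objects + 1024 */
    }

    static size_t grow_fixed(size_t nr_objects)
    {
        return (nr_objects + 1024) / 2 * 3;   /* divide first, then multiply by 3 */
    }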
|
|
4cd8dfaa
|
2019-07-16T20:20:55
|
|
clar: correctly account for "data" suites when counting
Failing to do that makes clar miss the last of the suites, as all
duplicated "data" suites would not have been accounted for.
|
|
5f22f8d2
|
2019-07-12T17:25:50
|
|
Ignore VS2017 specific files and folders
Signed-off-by: Sven Strickroth <email@cs-ware.de>
|
|
f92d495d
|
2019-07-12T10:48:14
|
|
Merge pull request #5131 from pks-t/pks/fileops-mkdir-in-root
fileops: fix creation of directory in filesystem root
|
|
f8346905
|
2019-07-12T09:03:33
|
|
attr_file: ignore macros defined in subdirectories
Right now, we are unconditionally applying all macros found in a
gitattributes file. But quoting gitattributes(5):
Custom macro attributes can be defined only in top-level
gitattributes files ($GIT_DIR/info/attributes, the .gitattributes
file at the top level of the working tree, or the global or
system-wide gitattributes files), not in .gitattributes files in
working tree subdirectories. The built-in macro attribute "binary"
is equivalent to:
So gitattributes files in subdirectories of the working tree may
explicitly _not_ contain macro definitions, but we do not currently
enforce this limitation.
This patch introduces a new parameter to the gitattributes parser that
tells whether macros are allowed in the current file or not. If set to
`false`, we will still parse macros, but silently ignore them instead of
adding them to the list of defined macros. Update all callers to
correctly determine whether the to-be-parsed file may contain macros or
not. Most importantly, when walking up the directory hierarchy, we will
only set it to `true` once it reaches the root directory of the repo
itself.
Add a test that verifies that we are indeed not applying macros from
subdirectories. Previous to these changes, the test would've failed.
|
|
97968529
|
2019-07-05T08:05:16
|
|
attr_file: refactor `parse_buffer` function
The gitattributes code is one of our oldest and most-untouched codebases
in libgit2, and as such its code style doesn't quite match our current
best practices. Refactor the function `git_attr_file__parse_buffer` to
better match them.
|
|
dbc7e4b1
|
2019-07-05T07:53:02
|
|
attr_file: refactor `load_standalone` function
The gitattributes code is one of our oldest and most-untouched codebases
in libgit2, and as such its code style doesn't quite match our current
best practices. Refactor the function `git_attr_file__lookup_standalone`
to better match them.
|