|
44527f5c
|
2017-09-27T15:17:26
|
|
proxy: add a free function for the options' pointers
When we duplicate a user-provided options struct, we're stuck with freeing the
url in it. In case we add stuff to the proxy struct, let's add a function in
which to put the logic.
|
|
046b081a
|
2017-09-15T10:46:26
|
|
diff: cleanup hash ctx in `git_diff_patchid`
After initializing the hash context in `git_diff_patchid`, we never
proceed to call `git_hash_ctx_cleanup` on it. While this doesn't really
matter on most hash implementations, this causes a memory leak on Win32
due to the CNG system requiring a `malloc` call.
Fix the memory leak by always calling `git_hash_ctx_cleanup` before
exiting.
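A minimal sketch of the intended pattern (helper name `hash_patch` and the exact internal hash signatures are assumptions, not taken from this change):
    int hash_patch(git_oid *out, const void *data, size_t len)
    {
        git_hash_ctx ctx;
        int error;

        if ((error = git_hash_ctx_init(&ctx)) < 0)
            return error;

        if ((error = git_hash_update(&ctx, data, len)) == 0)
            error = git_hash_final(out, &ctx);

        git_hash_ctx_cleanup(&ctx);   /* always release backend resources (CNG allocates) */
        return error;
    }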
|
|
e098b5f5
|
2017-09-12T20:21:27
|
|
Merge pull request #4344 from slavikus/fix-dirty-buffer-in-git-push-update-tips
Clear the remote_ref_name buffer in git_push_update_tips()
|
|
26f531d3
|
2017-09-12T13:35:18
|
|
features.h: allow building without CMake-generated feature header
In commit a390a8464 (cmake: move defines into "features.h" header,
2017-07-01), we have introduced a new "features.h" header. This file is
being generated by the CMake build system based on how the libgit2 build
has been configured, replacing the preexisting method of simply setting
the defines inside of the CMake build system. This was done to help
split up the build instructions into multiple separate
subdirectories.
An overlooked shortcoming of this approach is that some projects making
use of libgit2 build the library with custom build systems, without
making use of CMake. For those users, the introduction of the
"features.h" file makes their life harder as they would have to also
generate this file.
Fix this issue by guarding all inclusions of the generated header file
by the `LIBGIT2_NO_FEATURES_H` define. Like this, other build systems
can skip the feature header and simply define all used features by
specifying `-D` flags for the compiler again.
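The guard could look roughly like this (include path illustrative); custom build systems then compile with e.g. `-DLIBGIT2_NO_FEATURES_H -DGIT_THREADS` instead:
    #ifndef LIBGIT2_NO_FEATURES_H
    # include "features.h"   /* CMake-generated feature defines */
    #endif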
|
|
b34fc3fd
|
2017-09-11T21:34:41
|
|
Clear the remote_ref_name buffer in git_push_update_tips()
If fetch_spec is a non-pattern and we are not in the first iteration over the push_status vector, then git_refspec_transform would append the new value via git_buf_puts onto the value left over from the previous iteration.
Forcibly clear the buffer on each iteration to prevent this behavior.
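A rough sketch of the intended loop shape (internal helper and field names assumed):
    git_buf remote_ref_name = GIT_BUF_INIT;

    git_vector_foreach(&push->status, i, status) {
        git_buf_clear(&remote_ref_name);   /* drop the previous iteration's value */
        if ((error = git_refspec_transform(&remote_ref_name, fetch_spec, status->ref)) < 0)
            goto on_error;
        /* ... update the tip using remote_ref_name ... */
    }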
|
|
3c216453
|
2017-08-25T21:06:46
|
|
Merge pull request #4296 from pks-t/pks/pattern-based-gitignore
Fix negative ignore rules with patterns
|
|
477b3e04
|
2017-07-10T12:25:43
|
|
submodule: refuse lookup in bare repositories
While it is technically possible to look up submodules inside of a
bare repository by reading the submodule configuration of a specific
commit, we do not offer this functionality right now. As such, calling
both `git_submodule_lookup` and `git_submodule_foreach` should error out
early when these functions encounter a bare repository. While
`git_submodule_lookup` already does return an error due to not being
able to parse the configuration, `git_submodule_foreach` simply returns
success and never invokes the callback function.
Fix the issue by having both functions check whether the repository is
bare and returning an error in that case.
|
|
2d9ff8f5
|
2017-07-10T09:36:19
|
|
ignore: honor case insensitivity for negative ignores
When computing negative ignores, we throw away any rule which does not
undo a previous rule to optimize. But on case insensitive file systems,
we need to keep in mind that a negative ignore can also undo a previous
rule with different case, which we did not yet honor while determining
whether a rule undoes a previous one. So in the following example, we
fail to unignore the "/Case" directory:
/case
!/Case
Make both code paths that check whether a plain- or wildcard-based rule
undoes a previous rule aware of case insensitivity. This fixes the described
issue.
|
|
b8922fc8
|
2017-07-07T13:27:27
|
|
ignore: keep negative rules containing wildcards
Ignore rules allow for reverting a previously ignored rule by prefixing
it with an exclamation mark. As such, a negative rule can only override
previously ignored files. While computing all ignore patterns, we try to
use this fact to optimize away some negative rules which do not override
any previous patterns, as they won't change the outcome anyway.
In some cases, though, this optimization causes us to get the actual
ignores wrong for some files. This may happen whenever the pattern
contains a wildcard, as we are unable to reason about whether a pattern
overrides a previous pattern in a sane way. This happens for example in
the case where a gitignore file contains "*.c" and "!src/*.c", where we
wouldn't un-ignore files inside of the "src/" subdirectory.
In this case, the first solution coming to mind may be to just strip the
"src/" prefix and simply compare the basenames. While that would work
here, it would stop working as soon as the basename pattern itself is
different, like for example with "*x.c" and "!src/*.c". As such, we
settle for the easier fix of just not optimizing away rules that contain
a wildcard.
|
|
4467543e
|
2017-07-07T12:27:43
|
|
ignore: return early to avoid useless indentation
|
|
9bd83622
|
2017-07-07T12:27:18
|
|
ignore: fix indentation of comment block
|
|
a3a35473
|
2017-08-17T08:38:47
|
|
cmake: fix output location of import libraries and DLLs
As observed by Edward Thomson, the libgit2 DLL built on Windows will not
end up in the top-level build directory but instead inside of the 'src/'
subdirectory. While confusing at first because we are actually setting
the LIBRARY_OUTPUT_DIRECTORY to the project's binary directory, the
manual page of LIBRARY_OUTPUT_DIRECTORY clears this up:
There are three kinds of target files that may be built: archive,
library, and runtime. Executables are always treated as runtime
targets. Static libraries are always treated as archive targets.
Module libraries are always treated as library targets. For non-DLL
platforms shared libraries are treated as library targets. For DLL
platforms the DLL part of a shared library is treated as a runtime
target and the corresponding import library is treated as an archive
target. All Windows-based systems including Cygwin are DLL
platforms.
So in fact, DLLs and import libraries are not treated as libraries at
all by CMake but instead as runtime and archive targets. To fix the
issue, we can thus simply set the variables RUNTIME_OUTPUT_DIRECTORY and
ARCHIVE_OUTPUT_DIRECTORY to the project's root binary directory.
|
|
8a43161b
|
2017-07-05T12:18:17
|
|
cmake: always include our own headers first
With c26ce7840 (Merge branch 'AndreyG/cmake/modernization', 2017-06-28),
we have recently introduced a regression in the way we are searching for
headers. We have made sure to always include our own headers first, but
due to the changes in c26ce7840 this is no longer guaranteed. In fact,
this already leads the compiler into picking "config.h" from the
"deps/regex" dependency, if it is used.
Fix the issue by declaring our internal include directories up front,
before any of the other search directories is added.
|
|
e5c9723d
|
2017-06-30T18:12:02
|
|
cmake: move library build instructions into subdirectory
To fix leaking build instructions into different targets and to make
the build instructions easier to handle, create a new CMakeLists.txt
file containing build instructions for the libgit2 target.
By now, the split is rather easy to achieve. Due to the preparatory
steps, we can now simply move over all related build instructions, only
needing to remove the "src/" prefix from some files.
|
|
8341d6cf
|
2017-07-04T10:57:28
|
|
cmake: move regcomp and futimens checks to "features.h"
In our CMakeLists.txt, we have to check multiple functions in order to
determine if we have to use our own or whether we can use the
platform-provided one. For two of these functions, namely `regcomp_l()`
and `futimens`, the defined macro is actually used inside of the header
file "src/unix/posix.h". As such, these macros are not only required by
the library, but also by our test suite, which makes use of internal
headers.
To prepare for the CMakeLists.txt split, move these two defines inside
of the "features.h" header.
|
|
a390a846
|
2017-07-01T13:06:00
|
|
cmake: move defines into "features.h" header
In a future commit, we will split out the build instructions for our
library directory and move them into a subdirectory. One of the benefits
is fixing scoping issues, where e.g. defines do not leak to build
targets to which they do not belong. But unfortunately, this does also
pose the problem of how to propagate some defines which are required by
both the library and the test suite.
One way would be to create another variable keeping track of all added
defines and declare it inside of the parent scope. While this is the
most obvious and simplest way of going ahead, it is kind of unfortunate.
The main reason to not use this is that these defines become implicit
dependencies between the build targets. By simply observing a define
inside of the CMakeLists.txt file, one cannot reason whether this define
is only required by the current target or whether it is required by
different targets, as well.
Another approach would be to use an internal header file keeping track
of all defines shared between targets. While configuring the library, we
will set various variables and let CMake configure the file, adding or
removing defines based on what has been configured. Like this, one can
easily keep track of the current environment by simply inspecting the
header file. Furthermore, these dependencies are becoming clear inside
the CMakeLists.txt, as instead of simply adding a define, we now call
e.g. `SET(GIT_THREADSAFE 1)`.
Having this header file though requires us to make sure it is always
included before any "#ifdef"-preprocessor checks are executed. As we
have already refactored code to always include the "common.h" header
file before any statement inside of a file, this becomes easy: just make
sure "common.h" includes the new "features.h" header file first.
|
|
1560b580
|
2017-08-15T10:35:47
|
|
Merge pull request #4288 from pks-t/pks/include-fixups
Include fixups
|
|
577aeef7
|
2017-08-14T22:02:26
|
|
Merge pull request #4328 from libgit2/peff/hashcmp-is-memcmp
oid: use memcmp in git_oid__hashcmp
|
|
f908b184
|
2017-08-14T22:00:51
|
|
Merge pull request #4327 from libgit2/peff/drop-sha1-entry-pos
sha1_lookup: drop sha1_entry_pos function
|
|
c9b1e646
|
2017-08-09T16:54:07
|
|
oid: use memcmp in git_oid__hashcmp
The open-coded version was inherited from git.git. But it
turns out it was based on an older version of glibc, whose
memcmp was not very optimized.
Modern glibc does much better, and some compilers (like gcc
7) can even inline the memcmp into a series of multi-byte
xors.
Upstream is switching to using memcmp in
git/git@0b006014c87f400bd9a86267ed30fd3e7b383884.
|
|
9842b327
|
2017-08-09T16:47:14
|
|
sha1_lookup: drop sha1_entry_pos function
This was pulled over from git.git, and is an experiment in
making binary-searching lists of sha1s faster. It was never
compiled by default (nor was it used upstream by default
without a special environment variable).
Unfortunately, it is actually slower in practice, and
upstream is planning to drop it in
git/git@f1068efefe6dd3beaa89484db5e2db730b094e0b (which has
some timing results). It's worth doing the same here for
simplicity.
|
|
09930192
|
2017-08-09T16:34:02
|
|
sha1_position: convert do-while to while
If we enter the sha1_position() function with "lo == hi",
we have no elements. But the do-while loop means that we'll
enter the loop body once anyway, picking "mi" at that same
value and comparing nonsense to our desired key. This is
unlikely to match in practice, but we still shouldn't be
looking at the memory in the first place.
This bug is inherited from git.git; it was fixed there in
e01580cfe01526ec2c4eb4899f776a82ade7e0e1.
|
|
a9d6b9d5
|
2017-07-31T01:20:21
|
|
Merge pull request #4304 from pks-t/pks/patch-buffers
patch_generate: represent buffers as void pointers
|
|
fb585d01
|
2017-07-31T00:58:58
|
|
Merge branch '4233'
|
|
ed00ac06
|
2017-07-26T23:24:28
|
|
Merge pull request #4314 from pks-t/pks/timsort
tsort: remove idempotent conditional assignment
|
|
20d30000
|
2017-07-26T11:03:27
|
|
Merge pull request #4311 from libgit2/ethomson/win32_remediate
win32: provide fast-path for retrying filesystem operations
|
|
bc35fd4b
|
2017-07-18T14:44:29
|
|
win32: provide fast-path for retrying filesystem operations
When using the `do_with_retries` macro for retrying filesystem
operations in the posix emulation layer, allow the remediation function
to return `GIT_RETRY`, meaning that the error was believed to be
remediated, and the operation should be retried immediately, without
a sleep.
This is a slightly more general solution to the problem fixed in #4312.
|
|
1bcdaba2
|
2017-07-18T14:47:28
|
|
fixed win32 p_unlink retry sleep issue
Fixed an issue where the retry logic on p_unlink sleeps before it tries setting a file to write mode, causing unnecessary slowdown.
|
|
fdbb40fd
|
2017-07-21T11:26:13
|
|
tsort: remove idempotent conditional assignment
The conditional `run < minrun` can never be true directly after
assigning `run = minrun`. Remove it to avoid confusion.
|
|
e0568621
|
2017-07-19T13:55:55
|
|
Merge pull request #4250 from pks-t/pks/config-file-iteration
Configuration file fixes with includes
|
|
a94a5402
|
2017-07-19T13:28:32
|
|
Merge pull request #4272 from pks-t/pks/patch-id
Patch ID calculation
|
|
1b329089
|
2017-05-31T22:27:19
|
|
config_file: refuse modifying included variables
Modifying variables pulled in by an included file currently succeeds,
but it doesn't actually do what one would expect, as refreshing the
configuration will cause the values to reappear. As we are currently not
really able to support this use case, we will instead just return an
error for deleting and setting variables which were included via an
include.
|
|
28c2cc3d
|
2017-05-31T16:41:44
|
|
config_file: move reader into `config_read` only
Right now, we have multiple call sites which initialize a `reader`
structure. As the structure is only actually used inside of
`config_read`, we can instead just move the reader inside of the
`config_read` function. We can then simply pass the configuration file
into `config_read`, which improves code readability.
|
|
83bcd3a1
|
2017-05-31T22:45:25
|
|
config_file: refresh all files if includes were modified
Currently, we only re-parse the top-level configuration file when it has
changed itself. This can cause problems when an include is changed, as
we were not updating all values correctly.
Instead of conditionally reparsing only refreshed files, the logic
becomes much clearer and easier to follow if we always re-parse the
top-level configuration file when either the file itself or one of its
included configuration files has changed on disk. This commit implements
this logic.
Note that this might impact performance in some cases, as we need to
re-read all configuration files whenever any of the included files
changed. It could increase performance to just re-parse include files
which have actually changed, but this would compromise maintainability
of the code without much gain. The only case where we will gain anything
is when we actually use includes and when only these includes are
updated, which is probably too unusual a scenario to be worth optimizing.
|
|
56a7a264
|
2017-05-31T14:50:40
|
|
config_file: remove unused backend field from parse data
The backend passed to `config_read` is never actually used anymore, so
we can remove it from the function and the `parse_data` structure.
|
|
3a7f7a6e
|
2017-05-31T14:43:46
|
|
config_file: pass reader directly to callbacks
Previously, the callbacks passed to `config_parse` got the reader via a
pointer to a pointer. This allowed the callbacks to update the caller's
`reader` variable when the array holding it had been reallocated. As the
array is no longer present, we can simplify the code by making the reader
a plain pointer.
|
|
73df75d8
|
2017-05-31T14:34:48
|
|
config_file: refactor include handling
Current code for configuration files uses the `reader` structure to
parse configuration files and store additional metadata like the file's
path and checksum. These structures are stored within an array in the
backend itself, which causes multiple problems.
First, it does not make sense to keep around the file's contents with
the backend itself. While this data is usually free'd before being added
to the backend, this brings along somewhat intricate lifecycle problems.
A better solution would be to store only the file paths as well as the
checksum of the currently parsed content.
The second problem is that the `reader` structures are stored inside an
array. When re-parsing configuration files due to changed contents, we
may cause this array to be reallocated, requiring us to update pointers
held by callers. Furthermore, we do not keep track of includes which
are already associated with a reader inside of this array. This causes us
to add readers multiple times to the backend, e.g. in the scenario of
refreshing configurations.
This commit fixes these shortcomings. We introduce a split between the
parsing data and the configuration file's metadata. The `reader` will
now only hold the file's contents and the parser state and the new
`config_file` structure holds the file's path and checksum. Furthermore,
the new structure is a recursive structure in that it will also hold
references to the files it directly includes. The diskfile is changed to
only store the top-level configuration file.
These changes allow us further refactorings and greatly simplify
understanding the code.
|
|
d1dbb3ae
|
2017-07-12T07:40:16
|
|
signature: don't leave a dangling pointer to the strings on parse failure
If the signature is invalid but we detect that after allocating the strings, we
free them. We however leave that pointer dangling in the structure the caller
gave us, which can lead to a double free.
Set these pointers to `NULL` after freeing their memory to avoid this.
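The shape of the fix, sketched (helper name `parse_failed` is hypothetical; the `name`/`email` fields are those of `git_signature`):
    static int parse_failed(git_signature *sig)
    {
        git__free(sig->name);
        git__free(sig->email);
        sig->name = NULL;    /* don't leave dangling pointers in the caller's struct */
        sig->email = NULL;
        return -1;
    }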
|
|
9093ced6
|
2017-07-10T11:42:26
|
|
patch_generate: represent buffers as void pointers
Pointers to general data should usually be passed as void pointers such
that it is possible to hand in variables of a different pointer type
without the need to cast. This is the same when creating patches from
buffers, where the buffers may contain arbitrary data. Instead of
requiring the caller to care whether their buffer is e.g. `char *` or
`unsigned char *`, we should just accept a `void *`. This is also
consistent with how we treat other types like for example `git_blob`,
which also just exposes a void pointer as its raw contents.
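For illustration, the buffer-based patch constructor would change along these lines (parameter layout as in the public header):
    /* before */
    int git_patch_from_buffers(git_patch **out,
        const char *old_buffer, size_t old_len, const char *old_as_path,
        const char *new_buffer, size_t new_len, const char *new_as_path,
        const git_diff_options *opts);

    /* after: arbitrary data is accepted without casting */
    int git_patch_from_buffers(git_patch **out,
        const void *old_buffer, size_t old_len, const char *old_as_path,
        const void *new_buffer, size_t new_len, const char *new_as_path,
        const git_diff_options *opts);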
|
|
0c7f49dd
|
2017-06-30T13:39:01
|
|
Make sure to always include "common.h" first
Next to including several files, our "common.h" header also declares
various macros which are then used throughout the project. As such, we
have to make sure to always include this file first in all
implementation files. Otherwise, we might encounter problems or even
silent behavioural differences due to macros or defines not being
defined as they should be. So in fact, our header and implementation
files should make sure to always include "common.h" first.
This commit does so by establishing a common include pattern. Header
files inside of "src" will now always include "common.h" as its first
other file, separated by a newline from all the other includes to make
it stand out as special. There are two cases for the implementation
files. If they do have a matching header file, they will always include
this one first, leading to "common.h" being transitively included as
first file. If they do not have a matching header file, they instead
include "common.h" as first file themselves.
This fixes the outlined problems and will become our standard practice
for header and source files inside of the "src/" directory from now on.
|
|
2480d0eb
|
2017-06-30T13:34:05
|
|
Add missing license headers
Some implementation files were missing the license headers. This commit
adds them.
|
|
0fb4b351
|
2017-06-30T13:27:26
|
|
Fix missing include for header files
Some of our header files are not included at all by any of their
implementing counterparts. Including them inside of these files leads
to some compile errors mostly due to unknown types because of missing
includes. But there's also one case where a declared function does not
match the implementation's prototype.
Fix all these errors by fixing up the prototype and adding missing
includes. This is preparatory work for fixing up missing includes in the
implementation files.
|
|
459fb8fe
|
2017-06-30T15:35:46
|
|
win32: fix circular include deps with w32_crtdbg
The current order of declarations and includes between "common.h" and
"w32_crtdbg_stacktrace.h" is rather complicated. Both header files make
use of things defined in the other one and are thus circularly dependent
on each other. This makes it currently impossible to compile the
"w32_crtdbg_stacktrace.c" file when including "common.h" inside of
"w32_crtdbg_stacktrace.h".
We can disentangle the mess by moving the declaration of the inline crtdbg
functions into the "w32_crtdbg_stacktrace.h" file and adding additional
includes inside of it, such that all required functions are available to
it. This allows us to break the dependency cycle.
|
|
d4e03be6
|
2017-06-30T11:21:18
|
|
git_reset_*: pass parameters as const pointers
|
|
89a34828
|
2017-06-16T13:34:43
|
|
diff: implement function to calculate patch ID
The upstream git project provides the ability to calculate a so-called
patch ID. Quoting from git-patch-id(1):
A "patch ID" is nothing but a sum of SHA-1 of the file diffs
associated with a patch, with whitespace and line numbers ignored.
Patch IDs can be used to identify two patches which are probably the
same thing, e.g. when a patch has been cherry-picked to another branch.
This commit implements a new function `git_diff_patchid`, which gets a
patch and derives an OID from the diff. Note the different terminology
here: a patch in libgit2 is the set of differences in a single file, and a diff
can contain multiple patches for different files. The implementation
matches the upstream implementation and should derive the same OID for
the same diff. In fact, some code has been directly derived from the
upstream implementation.
The upstream implementation has two different modes to calculate patch
IDs: a stable and an unstable mode. The old way of calculating
the patch IDs was unstable in the sense that a different ordering of the
diffs would lead to different results. This oversight was fixed in git
1.9, but as git tries hard to never break existing workflows, the old
and unstable way is still default. The newer and stable way does not
care for ordering of the diff hunks, and in fact it is the mode that
should probably be used today. So right now, we only implement the
stable way of generating the patch ID.
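A small usage sketch of the new API (error handling shortened; `diff` obtained earlier, e.g. via git_diff_tree_to_tree()):
    git_oid patchid;
    char hex[GIT_OID_HEXSZ + 1];
    int error;

    if ((error = git_diff_patchid(&patchid, diff, NULL)) == 0)
        git_oid_tostr(hex, sizeof(hex), &patchid);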
|
|
ef09eae1
|
2017-06-23T10:10:29
|
|
Convert port with htons() in p_getaddrinfo()
`sin_port` should be in network byte order.
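For reference, the conversion looks like this:
    struct sockaddr_in addr;
    uint16_t port = 9418;           /* example: the git daemon port */

    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);    /* host to network byte order */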
|
|
4dc87e72
|
2017-06-21T13:35:46
|
|
merge: fix potential free of uninitialized memory
The function `merge_diff_mark_similarity_exact` may error out early and,
when it does so, free the `ours_deletes_by_oid` and
`theirs_deletes_by_oid` variables. While the first one can never be
uninitialized due to the first call actually assigning to it, the second
variable can be freed without being initialized.
Fix the issue by initializing both variables to `NULL`.
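Sketch of the pattern (helper name `index_deletes` and its arguments are illustrative):
    git_oidmap *ours_deletes_by_oid = NULL, *theirs_deletes_by_oid = NULL;
    int error;

    if ((error = index_deletes(&ours_deletes_by_oid, ours_entries)) < 0)
        goto done;   /* may jump out before the second map is ever assigned */
    if ((error = index_deletes(&theirs_deletes_by_oid, theirs_entries)) < 0)
        goto done;
    /* ... exact rename detection ... */
done:
    git_oidmap_free(ours_deletes_by_oid);    /* freeing NULL is a no-op */
    git_oidmap_free(theirs_deletes_by_oid);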
|
|
40294f38
|
2017-06-21T12:25:52
|
|
Merge pull request #4202 from mitesch/linear_exact_rename
merge: perform exact rename detection in linear time
|
|
af720bb6
|
2017-06-16T23:19:31
|
|
repository: remove trailing whitespace
|
|
9a46c777
|
2017-06-16T21:02:26
|
|
repository: do not initialize templates if dir is an empty string
|
|
a78441bc
|
2017-06-13T11:05:40
|
|
Adding git_filter_init for initializing `git_filter` struct + unit test
|
|
99e40a67
|
2017-06-12T21:23:44
|
|
Merge pull request #4263 from libgit2/ethomson/config_for_inmemory_repo
Allow creation of a configuration object in an in-memory repository
|
|
2d486781
|
2017-06-12T12:02:27
|
|
repository: don't fail to create config option in inmemory repo
When in an in-memory repository - without a configuration file - do not
fail to create a configuration object.
|
|
9d49a43c
|
2017-06-12T12:01:10
|
|
repository_item_path: return ENOTFOUND when appropriate
Disambiguate error values: return `GIT_ENOTFOUND` when the item cannot
exist in the repository (perhaps because the repository is inmemory or
otherwise not backed by a filesystem), return `-1` when there is a hard
failure.
|
|
9927e958
|
2017-06-12T16:01:22
|
|
Merge pull request #4261 from RogerGee/fix_wait_while_ack
smart_protocol: fix parsing of server ACK responses
|
|
cb3010c5
|
2017-06-12T12:56:40
|
|
odb_read_prefix: reset error in backends loop
When looking for an object by prefix, we query all the backends so that
we can ensure that there is no ambiguity. We need to reset the `error`
value between backends; otherwise the first backend may find an object
by prefix, but subsequent backends may not. If we do not reset the
`error` value then it will remain at `GIT_ENOTFOUND` and `read_prefix_1`
will fail, despite having actually found an object.
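In pseudo-C, the intent is roughly this (helper name hypothetical, not the literal code):
    error = GIT_ENOTFOUND;

    for (i = 0; i < backends_count; ++i) {
        error = read_prefix_from_backend(&obj, backends[i], short_id, len);
        if (error == 0)
            found++;              /* remember the hit, keep scanning for ambiguity */
        else if (error != GIT_ENOTFOUND)
            break;                /* hard failure */
    }

    if (found == 1)
        error = 0;                /* a single hit wins even if a later backend said ENOTFOUND */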
|
|
fb3fc837
|
2017-06-12T11:45:09
|
|
repository_item_path: error messages lowercased
|
|
6f960b55
|
2017-06-11T10:37:46
|
|
Merge pull request #4088 from chescock/packfile-name-using-complete-hash
Ensure packfiles with different contents have different names
|
|
d2c4f764
|
2017-06-11T09:54:04
|
|
Merge pull request #4260 from libgit2/ethomson/forced_checkout_2
Update to forced checkout and untracked files
|
|
4a0df574
|
2017-06-10T18:46:35
|
|
git_futils_rmdir: only allow `EBUSY` when asked
Only ignore `EBUSY` from `rmdir` when the `GIT_RMDIR_SKIP_NONEMPTY` bit
is set.
|
|
83989d70
|
2017-06-08T22:23:53
|
|
checkout: cope with untracked files in directory deletion
When deleting a directory during checkout, do not simply delete the
directory, since there may be untracked files. Instead, go into
the iterator and examine each file.
In the original code (the code with the faulty assumption), we look to
see if there's an index entry beneath the directory that we want to
remove. E.g., it looks to see if we have a workdir entry foo and an
index entry foo/bar.txt. If this is not the case, then the working
directory must have precious files in that directory. This part is okay.
The part that's not okay is if there is an index entry foo/bar.txt. It
just blows away the whole damned directory.
That's not cool.
Instead, by simply pushing the directory itself onto the stack and
iterating each entry, we will deal with the files one by one - whether
they're in the index (and can be force removed) or not (and are
precious).
The original code was a bad optimization, assuming that we didn't need
to git_iterator_advance_into if there was any index entry in the folder.
That's wrong - we could have optimized this iff all folder entries are
in the index.
Instead, we need to simply dig into the directory and analyze its
entries.
|
|
e141f079
|
2017-06-10T11:46:09
|
|
smart_protocol: fix parsing of server ACK responses
Fix ACK parsing in wait_while_ack() internal function. This patch
handles the case where multi_ack_detailed mode sends 'ready' ACKs. The
existing functionality would bail out too early, thus causing the
processing of the ensuing packfile to fail if/when 'ready' ACKs were
sent.
|
|
6c23704d
|
2017-06-08T21:40:18
|
|
settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION`
Initially, the setting has been solely used to enable the use of
`fsync()` when creating objects. Since then, the use has been extended
to also cover references and index files. As the option is not yet part
of any release, we can still correct this by renaming the option to
something more sensible which does not suggest that it relates to objects only.
This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also
move the variable from the object to repository source code.
|
|
458cea5c
|
2017-06-08T14:22:24
|
|
Merge pull request #4255 from pks-t/pks/buffer-grow-errors
Buffer growing cleanups
|
|
90500d81
|
2017-06-08T13:56:22
|
|
Merge pull request #4253 from pks-t/pks/cov-fixes
Coverity fixes
|
|
90388aa8
|
2017-06-06T15:02:23
|
|
refdb_fs: be explicit about using null-OID if we cannot resolve ref
|
|
78a8f68f
|
2017-06-06T14:57:31
|
|
path: only set dotgit flags when configs were read
|
|
9be4c303
|
2017-06-06T14:54:48
|
|
worktree: use `git__free` instead of `free`
|
|
0f642f31
|
2017-06-06T14:54:19
|
|
refs: properly report errors from `update_wt_heads`
|
|
0c28c72d
|
2017-06-06T14:53:45
|
|
fileops: check return value of `git_path_dirname`
|
|
a693b873
|
2017-06-07T10:20:44
|
|
buffer: use `git_buf_init` with length
The `git_buf_init` function has an optional length parameter, which will
cause the buffer to be initialized and allocated in one step. This can
be used instead of static initialization with `GIT_BUF_INIT` followed by
a `git_buf_grow`. This patch does so for two functions where it is
applicable.
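For example (internal buffer API):
    git_buf buf;
    git_buf_init(&buf, 128);          /* initialize and pre-allocate in one step */

    /* previously:
     * git_buf buf = GIT_BUF_INIT;
     * git_buf_grow(&buf, 128);
     */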
|
|
4796c916
|
2017-06-07T09:56:31
|
|
buffer: return errors for `git_buf_init` and `git_buf_attach`
Both the `git_buf_init` and `git_buf_attach` functions may call
`git_buf_grow` in case they were given an allocation length as
parameter. As such, it is possible for these functions to fail when we
run out of memory. While it probably won't be needed anytime soon, it does
indeed make sense to also record this fact by returning an error code
from both functions. As they belong to the internal API only, this
change does not break our interface.
|
|
9a8386a2
|
2017-06-07T09:50:54
|
|
buffer: consistently use `ENSURE_SIZE` to grow buffers on-demand
The `ENSURE_SIZE` macro can be used to grow a buffer if its currently
allocated size does not suffice for a required target size. While most of
the code already uses this macro, the `git_buf_join` and `git_buf_join3`
functions do not yet use it. Due to the macro first checking whether we
have to grow the buffer at all, this has the benefit of saving a
function call when it is not needed. While this is nice to have, it will
probably not matter at all performance-wise -- instead, this only serves
for consistency across the code.
|
|
e82dd813
|
2017-06-08T11:52:32
|
|
buffer: fix `ENSURE_SIZE` macro referencing wrong variable
While the `ENSURE_SIZE` macro gets a reference to both the buffer that
is to be resized and a new size, we were not consistently referencing
the passed buffer, but instead a variable `buf`, which is not passed in.
Funnily enough, we never noticed because our buffers seem to always be
named `buf` whenever the macro was being used.
Fix the macro by always using the passed-in buffer. While at it, add
braces around all mentions of passed-in variables as should be done with
macros to avoid subtle errors.
Found-by: Edward Thomson
|
|
97eb5ef0
|
2017-06-07T10:05:54
|
|
buffer: rely on `GITERR_OOM` set by `git_buf_try_grow`
The function `git_buf_try_grow` consistently calls `giterr_set_oom`
whenever growing the buffer fails due to insufficient memory being
available. So in fact, we do not have to do this ourselves when a call
to any buffer-growing function has failed due to an OOM situation. But
we still do so in two functions, which this patch cleans up.
|
|
3a8801ae
|
2017-06-08T10:55:47
|
|
Merge pull request #4258 from pks-t/pks/sha1dc-update
SHA1DC update
|
|
63d86c27
|
2017-06-07T14:50:16
|
|
sha1dc: update to fix errors with endianness and unaligned access
This updates our version of SHA1DC to e139984 (Merge pull request #35
from lidl/master, 2017-05-30).
|
|
3bc95cfe
|
2017-06-07T14:42:12
|
|
Merge pull request #4236 from pks-t/pks/index-v4-fixes
Fix path computations for compressed index entries
|
|
f28744a5
|
2017-06-05T10:11:20
|
|
openssl_stream: fix building with libressl
OpenSSL v1.1 has introduced a new way of initializing the library
without having to call various functions of different subsystems. In
libgit2, we have been adapting to that change with 88520151f
(openssl_stream: use new initialization function on OpenSSL version
>=1.1, 2017-04-07), where we added an #ifdef depending on the OpenSSL
version. This change broke building with libressl, though, which has not
changed its API in the same way.
Fix the issue by expanding the #ifdef condition to use the old way of
initializing with libressl.
Signed-off-by: Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
|
|
064a60e9
|
2017-05-19T14:06:15
|
|
index: verify we have enough space left when writing index entries
In our code writing index entries, we carry around a `disk_size`
representing how much memory we have in total and pass this value to
`git_encode_varint` to do bounds checks. This does not make much sense,
as by the time we pass this variable on, it is already out of date.
Fix this by subtracting used memory from `disk_size` as we go along.
Furthermore, assert we've actually got enough space left to do the final
path memcpy.
|
|
c71dff7e
|
2017-05-19T13:49:34
|
|
index: fix shared prefix computation when writing index entry
When using compressed index entries, each entry's path is preceded by a
varint encoding how long the shared prefix with the previous index entry
actually is. We currently encode a length of `(path_len - same_len)`,
which is doubly wrong. First, `path_len` is already set to `path_len -
same_len` previously. Second, we want to encode the shared prefix rather
than the un-shared suffix length.
Fix this by using `same_len` as the varint value instead.
|
|
83e0392c
|
2017-05-19T13:39:05
|
|
index: also sanity check entry size with compressed entries
We have a check in place whether the index has enough data left for the
required footer after reading an index entry, but this was only used for
uncompressed entries. Move the check down a bit so that it is executed
for both compressed and uncompressed index entries.
|
|
350d2c47
|
2017-05-19T14:22:35
|
|
index: remove file-scope entry size macros
All index entry size computations are now performed in
`index_entry_size`. As such, we do not need the file-scope macros for
computing these sizes anymore. Remove them and move the `entry_size`
macro into the `index_entry_size` function.
|
|
46b67034
|
2017-05-19T13:59:53
|
|
index: don't right-pad paths when writing compressed entries
Our code to write index entries to disk does not check whether the
entry that is to be written should use prefix compression for the path.
As such, we were overallocating memory and adding bogus right-padding
into the resulting index entries. As there is no padding allowed in the
index version 4 format, this should actually result in an invalid index.
Fix this by re-using the newly extracted `index_entry_size` function.
|
|
29f498e0
|
2017-05-19T13:38:34
|
|
index: move index entry size computation into its own function
Create a new function `index_entry_size` which encapsulates the logic to
calculate how much space is needed for an index entry, whether it is
simple/extended or compressed/uncompressed. This can later be re-used by
our code writing index entries.
|
|
8ceb890b
|
2017-05-19T12:35:21
|
|
index: set last written index entry in foreach-entry-loop
The last written disk entry is currently being written inside of the
function `write_disk_entry`. Make the behavior a bit more obvious by
instead setting it inside of `write_entries` while iterating all
entries.
|
|
11d0be23
|
2017-05-12T10:01:43
|
|
index: set last entry when reading compressed entries
To calculate the path of a compressed index entry, we need to know the
preceding entry's path. While we do actually set the first predecessor
correctly to "", we fail to update this while reading the entries.
Fix the issue by updating `last` inside of the loop. Previously, we've
been passing a double-pointer to `read_entry`, which it didn't update.
As it is more obvious to update the pointer inside the loop itself,
though, we can simply convert it to a normal pointer.
|
|
febe8c14
|
2017-05-10T14:27:12
|
|
index: fix confusion with shared prefix in compressed path names
The index version 4 introduced compressed path names for the entries.
From the git.git index-format documentation:
At the beginning of an entry, an integer N in the variable width
encoding [...] is stored, followed by a NUL-terminated string S.
Removing N bytes from the end of the path name for the previous
entry, and replacing it with the string S yields the path name for
this entry.
But instead of stripping N bytes from the previous path's string and
using the remaining prefix, we were instead simply concatenating the
previous path with the current entry path, which is obviously wrong.
Fix the issue by correctly copying the first N bytes of the previous
entry only and concatenating the result with our current entry's path.
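A worked example of the format, with a hypothetical previous path:
    const char *last_path = "src/common.h";   /* previous entry's path (12 bytes) */
    size_t strip_n = 8;                        /* varint N read from disk */
    const char *suffix = "diff.c";             /* NUL-terminated string S */
    char path[64];

    size_t keep = strlen(last_path) - strip_n; /* 12 - 8 = 4 -> keep "src/" */
    memcpy(path, last_path, keep);
    strcpy(path + keep, suffix);               /* -> "src/diff.c" */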
|
|
8a5e7aae
|
2017-05-22T12:53:44
|
|
varint: fix computation for remaining buffer space
When encoding varints to a buffer, we want to remain sure that the
required buffer space does not exceed what is actually available. Our
current check does not do the right thing, though, in that it does not
honor that our `pos` variable counts the position down instead of up. As
such, we will require too much memory for small varints and not enough
memory for big varints.
Fix the issue by correctly calculating the required size as
`(sizeof(varint) - pos)`. Add a test which failed before.
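For context, the varint encoder (modeled on git.git's `encode_varint`) fills its scratch buffer from the end, so the number of bytes produced is `sizeof(varint) - pos`:
    static int encoded_size(uintmax_t value)
    {
        unsigned char varint[16];
        size_t pos = sizeof(varint) - 1;

        varint[pos] = value & 127;
        while (value >>= 7)
            varint[--pos] = 128 | (--value & 127);

        return (int)(sizeof(varint) - pos);   /* bytes needed, counting down from the end */
    }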
|
|
dd0aa811
|
2017-06-04T22:46:07
|
|
Merge branch 'pr/4228'
|
|
82e929a8
|
2017-06-04T19:35:39
|
|
Merge pull request #4239 from roblg/toplevel-dir-ignore-fix
Fix issue with directory glob ignore in subdirectories
|
|
04de614b
|
2017-06-04T19:03:07
|
|
Merge pull request #4243 from pks-t/pks/submodule-workdir
Submodule working directory
|
|
a1023a43
|
2017-05-20T17:18:07
|
|
Merge pull request #4179 from libgit2/ethomson/expand_tilde
Introduce home directory expansion function for config files, attribute files
|
|
9b1260d3
|
2017-05-20T14:18:32
|
|
Merge pull request #4097 from implausible/fix/auto-detect-proxy-callbacks
Fix proxy auto detect not utilizing callbacks
|
|
e694e4e9
|
2017-05-20T14:17:36
|
|
Merge pull request #4174 from libgit2/ethomson/set_head_to_tag
git_repository_set_head: use tag name in reflog
|
|
119bdd86
|
2017-05-20T14:13:27
|
|
Merge pull request #4231 from wabain/open-revrange
revparse: support open-ended ranges
|
|
c0e54155
|
2017-01-11T10:39:59
|
|
indexer: name pack files after trailer hash
Upstream git.git has changed the way packfiles are named.
Previously, they were using a hash of the contained objects' OIDs, which
has then been changed to use the hash of the complete packfile instead.
See 1190a1acf (pack-objects: name pack files after trailer hash,
2013-12-05) in the git.git repository for more information on this
change.
This commit changes our logic to match the behavior of core git.
|
|
2696c5c3
|
2017-05-19T09:21:17
|
|
repository: make check if repo is a worktree more strict
To determine if a repository is a worktree or not, we currently check
for the existence of a "gitdir" file inside of the repository's gitdir.
While this is sufficient for non-broken repositories, we have at least
one case of a subtly broken repository where there exists a gitdir file
inside of a gitmodule. This will cause us to misidentify the submodule
as a worktree.
While this is not really a fault of ours, we can do better here by
observing that a repository can only ever be a worktree iff its common
directory and dotgit directory are different. This allows us to make our
check whether a repo is a worktree or not more strict by doing a simple
string comparison of these two directories. This will also allow us to
do the right thing in the above case of a broken repository, as for
submodules these directories will be the same. At the same time, this
allows us to skip the `stat` check for the "gitdir" file for most
repositories.
|
|
9f9fd05f
|
2017-05-19T08:59:46
|
|
repository: factor out worktree check
The check whether a repository is a worktree or not is currently done
inside of `git_repository_open_ext`. As we want to extend this function
later on, pull it out into its own function `repo_is_worktree` to ease
working on it.
|
|
32841973
|
2017-05-19T08:38:47
|
|
repository: improve parameter names for `find_repo`
The out-parameters of `find_repo` containing found paths of a repository
are a tad confusing, as they are not as obvious as they could be. Rename
them like following to ease reading the code:
- `repo_path` -> `gitdir_path`
- `parent_path` -> `workdir_path`
- `link_path` -> `gitlink_path`
- `common_path` -> `commondir_path`
|