|
ed00ac06
|
2017-07-26T23:24:28
|
|
Merge pull request #4314 from pks-t/pks/timsort
tsort: remove idempotent conditional assignment
|
|
8f9d2bbf
|
2017-07-26T23:23:37
|
|
Merge pull request #4317 from libgit2/ethomson/libcurl_build
Build with patched libcurl
|
|
20d30000
|
2017-07-26T11:03:27
|
|
Merge pull request #4311 from libgit2/ethomson/win32_remediate
win32: provide fast-path for retrying filesystem operations
|
|
bc35fd4b
|
2017-07-18T14:44:29
|
|
win32: provide fast-path for retrying filesystem operations
When using the `do_with_retries` macro for retrying filesystem
operations in the posix emulation layer, allow the remediation function
to return `GIT_RETRY`, meaning that the error was believed to be
remediated, and the operation should be retried immediately, without
a sleep.
This is a slightly more general solution to the problem fixed in #4312.
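A hedged sketch of the retry strategy described above; `RETRY_MAX`, the helper names, and the value of `GIT_RETRY` are illustrative, and the real `do_with_retries` macro in the win32 layer may differ:

    #include <windows.h>

    #define RETRY_MAX 5
    #define GIT_RETRY 1   /* illustrative value: "error remediated, retry immediately" */

    /* Retry `op` a few times; the remediation callback may report GIT_RETRY,
     * in which case we retry at once instead of sleeping first. */
    static int retry_operation(int (*op)(void *), int (*remediate)(void *), void *data)
    {
        int tries, remediated;

        for (tries = 0; tries < RETRY_MAX; tries++) {
            if (op(data) == 0)
                return 0;

            remediated = remediate(data);
            if (remediated < 0)
                break;                  /* remediation itself failed: give up */
            if (remediated != GIT_RETRY)
                Sleep(5 << tries);      /* not fixed: back off before the next attempt */
            /* GIT_RETRY: believed fixed, loop around without sleeping */
        }

        return -1;
    }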
|
|
192a87e1
|
2017-07-18T14:55:56
|
|
Updated changelog
|
|
1bcdaba2
|
2017-07-18T14:47:28
|
|
fixed win32 p_unlink retry sleep issue
Fixed an issue where the retry logic in p_unlink sleeps before it tries setting a file to write mode, causing unnecessary slowdown.
|
|
c582fa4e
|
2017-07-24T17:53:32
|
|
travis: only install custom libcurl on trusty
|
|
697583ea
|
2017-07-24T16:48:04
|
|
travis: only kill our own sshd
|
|
4da38193
|
2017-07-24T13:10:43
|
|
travis: use trusty
|
|
f031e20b
|
2017-07-23T03:41:52
|
|
travis: build with patched libcurl
Ubuntu trusty has a bug in curl when using NTLM credentials in a proxy,
dereferencing a null pointer and causing segmentation faults. Use a
custom-patched version of libcurl that avoids this issue.
|
|
fdbb40fd
|
2017-07-21T11:26:13
|
|
tsort: remove idempotent conditional assignment
The conditional `run < minrun` can never be true directly after
assigning `run = minrun`. Remove it to avoid confusion.
|
|
e0568621
|
2017-07-19T13:55:55
|
|
Merge pull request #4250 from pks-t/pks/config-file-iteration
Configuration file fixes with includes
|
|
a94a5402
|
2017-07-19T13:28:32
|
|
Merge pull request #4272 from pks-t/pks/patch-id
Patch ID calculation
|
|
83bcd3a1
|
2017-05-31T22:45:25
|
|
config_file: refresh all files if includes were modified
Currently, we only re-parse the top-level configuration file when it has
changed itself. This can cause problems when an include is changed, as
we do not update all values correctly.
Instead of conditionally reparsing only refreshed files, the logic
becomes much clearer and easier to follow if we always re-parse the
top-level configuration file when either the file itself or one of its
included configuration files has changed on disk. This commit implements
this logic.
Note that this might impact performance in some cases, as we need to
re-read all configuration files whenever any of the included files
changed. It could increase performance to just re-parse include files
which have actually changed, but this would compromise maintainability
of the code without much gain. The only case where we will gain anything
is when we actually use includes and only these includes are updated,
which is probably too unusual a scenario to be worth optimizing.
|
|
56a7a264
|
2017-05-31T14:50:40
|
|
config_file: remove unused backend field from parse data
The backend passed to `config_read` is never actually used anymore, so
we can remove it from the function and the `parse_data` structure.
|
|
3a7f7a6e
|
2017-05-31T14:43:46
|
|
config_file: pass reader directly to callbacks
Previously, the callbacks passed to `config_parse` got the reader via a
pointer to a pointer. This allowed the callbacks to update the caller's
`reader` variable when the array holding it was reallocated. As the
array is no longer present, we can simplify the code by making the reader
a plain pointer.
|
|
73df75d8
|
2017-05-31T14:34:48
|
|
config_file: refactor include handling
Current code for configuration files uses the `reader` structure to
parse configuration files and store additional metadata like the file's
path and checksum. These structures are stored within an array in the
backend itself, which causes multiple problems.
First, it does not make sense to keep around the file's contents with
the backend itself. While this data is usually freed before being added
to the backend, this brings along somewhat intricate lifecycle problems.
A better solution is to store only the file paths as well as the
checksum of the currently parsed content.
The second problem is that the `reader` structures are stored inside an
array. When re-parsing configuration files due to changed contents, we
may cause this array to be reallocated, requiring us to update pointers
held by callers. Furthermore, we do not keep track of includes which
are already associated with a reader inside of this array. This causes us
to add readers multiple times to the backend, e.g. in the scenario of
refreshing configurations.
This commit fixes these shortcomings. We introduce a split between the
parsing data and the configuration file's metadata. The `reader` will
now only hold the file's contents and the parser state and the new
`config_file` structure holds the file's path and checksum. Furthermore,
the new structure is a recursive structure in that it will also hold
references to the files it directly includes. The diskfile is changed to
only store the top-level configuration file.
These changes allow us further refactorings and greatly simplify
understanding the code.
|
|
6f7aab0c
|
2017-06-06T09:45:11
|
|
tests: config::include: use init and cleanup functions
|
|
1b329089
|
2017-05-31T22:27:19
|
|
config_file: refuse modifying included variables
Modifying variables pulled in by an included file currently succeeds,
but it doesn't actually do what one would expect, as refreshing the
configuration will cause the values to reappear. As we are currently not
really able to support this use case, we will instead just return an
error for deleting and setting variables which were pulled in via an
include.
|
|
28c2cc3d
|
2017-05-31T16:41:44
|
|
config_file: move reader into `config_read` only
Right now, we have multiple call sites which initialize a `reader`
structure. As the structure is only actually used inside of
`config_read`, we can just move the reader into the `config_read`
function and pass the configuration file itself into it, which improves
code readability.
|
|
8149f850
|
2017-07-14T08:30:18
|
|
Merge pull request #4306 from libgit2/cmn/tag-bad-signature
signature: don't leave a dangling pointer to the strings on parse failure
|
|
d1dbb3ae
|
2017-07-12T07:40:16
|
|
signature: don't leave a dangling pointer to the strings on parse failure
If the signature is invalid but we detect that after allocating the strings, we
free them. We do, however, leave those pointers dangling in the structure the caller
gave us, which can lead to a double-free.
Set these pointers to `NULL` after freeing their memory to avoid this.
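A minimal sketch of the fix, using the public `git_signature` fields `name` and `email`; plain `free()` is used here for illustration rather than libgit2's internal allocator wrappers, and the surrounding parser code is omitted:

    #include <stdlib.h>
    #include <git2.h>

    /* After freeing the partially parsed strings, also reset the pointers so a
     * later git_signature_free() by the caller cannot double-free them. */
    static int fail_parse(git_signature *sig)
    {
        free(sig->name);
        free(sig->email);
        sig->name = NULL;
        sig->email = NULL;
        return -1;
    }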
|
|
c4c95bf4
|
2017-07-10T08:50:11
|
|
Merge pull request #4287 from AndreyG/reset/interface-cosmetics
git_reset_*: pass parameters as const pointers
|
|
0e165686
|
2017-07-07T10:03:34
|
|
Merge pull request #4291 from pks-t/pks/regex-header-confusion
tests: config: fix missing declaration causing error
|
|
1f7af277
|
2017-07-05T11:52:47
|
|
tests: config: fix missing declaration causing error
On systems where we pull in our distributed version of the regex
library, all tests in config::readonly fail. This error is actually
quite interesting: the test suite is unable to find the declaration of
`git_path_exists` and assumes it has a signature of `int
git_path_exists(const char *)`. But actually, it has a `bool` return
value. Due to this confusion, some wrong conversion is done by the
compiler and the `cl_assert(!git_path_exists("file"))` checks
erroneously fail, even when the function does in fact return the correct
value.
The error is actually introduced by 56893bb9a (cmake: consistently use
TARGET_INCLUDE_DIRECTORIES if available, 2017-06-28), unfortunately
introduced by myself. Due to the delayed addition of include
directories, we will now find the "config.h" header inside of the
"deps/regex" directory instead of inside the "src/" directory, where it
should be. As such, we are missing definitions for the
`git_config_file__ondisk` and `git_path_exists` symbols.
The correct fix here would be to fix the order in which include search
directories are added. But due to the current restructuring of
CMakeLists.txt, I'm refraining from doing so and will delay the proper fix a
bit. Instead, we paper over the issue by explicitly including "path.h"
to fix its prototype. This ignores the issue that
`git_config_file__ondisk` is undeclared, as its signature is correctly
identified by the compiler.
|
|
d4e03be6
|
2017-06-30T11:21:18
|
|
git_reset_*: pass parameters as const pointers
|
|
c26ce784
|
2017-06-28T12:26:04
|
|
Merge branch 'AndreyG/cmake/modernization'
|
|
56893bb9
|
2017-06-28T12:11:44
|
|
cmake: consistently use TARGET_INCLUDE_DIRECTORIES if available
Instead of using INCLUDE_DIRECTORIES for the libgit2_clar test
suite, we should just be using TARGET_INCLUDE_DIRECTORIES again if the
CMake version is greater than 2.8.11.
|
|
22de81e6
|
2017-06-21T08:25:36
|
|
cmake: use `target_include_directories` for modern cmake
Apply `target_include_directories` when CMAKE_VERSION >= 2.8.12
|
|
f5f13b50
|
2017-06-27T19:38:50
|
|
Merge pull request #4280 from ids1024/htons
Convert port with htons() in p_getaddrinfo()
|
|
6b2133b4
|
2017-06-27T12:46:15
|
|
Merge pull request #4235 from pks-t/pks/out-of-tree-builds
Out of tree builds
|
|
6f02a4d1
|
2017-06-26T08:17:20
|
|
Merge pull request #4278 from pks-t/pks/cmake-disable-http-parser
cmake: Permit disabling external http-parser
|
|
89a34828
|
2017-06-16T13:34:43
|
|
diff: implement function to calculate patch ID
The upstream git project provides the ability to calculate a so-called
patch ID. Quoting from git-patch-id(1):
A "patch ID" is nothing but a sum of SHA-1 of the file diffs
associated with a patch, with whitespace and line numbers ignored.
Patch IDs can be used to identify two patches which are probably the
same thing, e.g. when a patch has been cherry-picked to another branch.
This commit implements a new function `git_diff_patchid`, which gets a
patch and derives an OID from the diff. Note the different terminology
here: a patch in libgit2 is the set of differences in a single file, while a diff
can contain multiple patches for different files. The implementation
matches the upstream implementation and should derive the same OID for
the same diff. In fact, some code has been directly derived from the
upstream implementation.
The upstream implementation has two different modes to calculate patch
IDs: a stable and an unstable mode. The old way of calculating patch IDs
was unstable in the sense that a different ordering of the diffs led to
different results. This oversight was fixed in git 1.9, but as git tries
hard to never break existing workflows, the old and unstable way is
still the default. The newer, stable way does not care about the
ordering of the diff hunks and is in fact the mode that should probably
be used today. So right now, we only implement the stable way of
generating the patch ID.
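A hedged usage sketch of the new API, assuming the signature `git_diff_patchid(git_oid *out, git_diff *diff, git_diff_patchid_options *opts)` described here and a diff built with existing libgit2 calls:

    #include <stdio.h>
    #include <git2.h>

    /* Compute the (stable) patch ID of the diff between the index and the workdir. */
    static int print_patchid(git_repository *repo)
    {
        git_diff *diff = NULL;
        git_oid patchid;
        char hex[GIT_OID_HEXSZ + 1];
        int error;

        if ((error = git_diff_index_to_workdir(&diff, repo, NULL, NULL)) < 0)
            return error;

        /* NULL options: only the stable patch-ID mode is implemented. */
        if ((error = git_diff_patchid(&patchid, diff, NULL)) == 0) {
            git_oid_tostr(hex, sizeof(hex), &patchid);
            printf("patch id: %s\n", hex);
        }

        git_diff_free(diff);
        return error;
    }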
|
|
ef09eae1
|
2017-06-23T10:10:29
|
|
Convert port with htons() in p_getaddrinfo()
`sin_port` should be in network byte order.
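A minimal sketch of the conversion being referred to, using the standard BSD socket fields; the actual p_getaddrinfo() code may differ:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* sin_port must be stored in network byte order, hence htons(). */
    static void set_port(struct sockaddr_in *addr, unsigned short port)
    {
        addr->sin_family = AF_INET;
        addr->sin_port = htons(port);  /* host byte order -> network byte order */
    }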
|
|
b6ed67c2
|
2017-05-10T12:54:14
|
|
tests: refs::crashes: create sandbox for creating symref
The test `refs::crashes::double_free` operates on our in-source
"testrepo.git" repository without creating a copy first. As the test
will try to create a new symbolic reference, this will fail when we want
to do a pure out-of-tree build with a read-only source tree.
Fix the issue by creating a sandbox first.
|
|
6ee7d37a
|
2017-05-10T12:51:06
|
|
tests: index::tests: create sandboxed repo for locking
The test `index::tests::can_lock_index` operates on the "testrepo.git"
repository located inside of our source tree. While this is okay for
tests which do read-only operations on these resources, this specific
test tries to lock the index by creating a lock. This will obviously
fail on out-of-tree builds with read-only source trees.
Fix the issue by creating a sandbox first.
|
|
4305fcca
|
2017-05-10T12:14:36
|
|
cmake: generate clar.suite in binary directory
Change the output path of generate.py to generate the clar.suite file
inside of the binary directory. This fixes out of tree builds with
read-only source trees as we now refrain from writing anything into the
source tree.
|
|
8d22bcea
|
2017-05-10T12:21:53
|
|
generate.py: generate clar cache in binary directory
The source directory should usually not be touched when using
out-of-tree builds. In addition to the previously fixed "clar.suite" file, we
are also writing the ".clarcache" file into the project's source tree,
breaking such builds. Fix this by also honoring the output directory for
the ".clarcache" file.
|
|
9e240bd2
|
2017-05-10T12:02:03
|
|
generate.py: enable overriding path of generated clar.suite
The generate.py script will currently always write the generated
clar.suite file into the scanned directory, breaking out-of-tree builds
with read-only source directories. Fix this issue by adding another
option to allow overriding the output path of the generated file.
|
|
0a513a94
|
2017-05-10T11:46:00
|
|
generate.py: disallow generating test suites for multiple paths
Our generate.py script is used to extract and write test suite
declarations into the clar.suite file. As is, the script accepts
multiple directories on the command line and will generate this file for
each of these directories.
The generate.py script will always write the clar.suite file into the
directory which is about to be scanned. This actually breaks
out-of-tree builds with libgit2, as the file will be generated in the
source tree instead of in the build tree. This is noticed especially in
the case where the source tree is mounted read-only, rendering us unable
to build unit tests.
Due to us accepting multiple paths which are to be scanned, it is not
trivial to fix, though. The first solution that comes to mind would be
to re-create the directory hierarchy at a given output path or use
unique names for the clar.suite files, but this is rather cumbersome and
magical. The second and cleaner solution would be to fold all
directories into a single clar.suite file, but this would probably break
some use-cases.
Seeing that we do not ever pass multiple directories to generate.py, we
will now simply retire support for this. This allows us to later on
introduce an additional option to specify the path where the clar.suite
file will be generated at, defaulting to "clar.suite" inside of the
scanned directory.
|
|
845f661d
|
2017-06-21T20:02:48
|
|
cmake: Permit disabling external http-parser
When attempting to build libgit2 as an isolated static lib, CMake
gleefully attempts to use the system http-parser. This is typically
seen on Linux systems which install header files with every package,
such as Gentoo.
Allow developers to forcibly disable using the system http-parser with
the config switch USE_EXT_HTTP_PARSER. Defaults to ON to maintain
previous behavior.
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
|
|
fa948752
|
2017-06-21T13:08:45
|
|
Merge pull request #4277 from pks-t/pks/merge-coverity-fix
merge: fix potential free of uninitialized memory
|
|
4dc87e72
|
2017-06-21T13:35:46
|
|
merge: fix potential free of uninitialized memory
The function `merge_diff_mark_similarity_exact` may error out early and,
when it does so, free the `ours_deletes_by_oid` and
`theirs_deletes_by_oid` variables. While the first one can never be
uninitialized due to the first call actually assigning to it, the second
variable can be freed without being initialized.
Fix the issue by initializing both variables to `NULL`.
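A hedged sketch of the initialization pattern, with hypothetical helper names standing in for the internals of `merge_diff_mark_similarity_exact`:

    #include <stddef.h>

    /* Hypothetical stand-ins; only the initialization pattern matters here. */
    static int compute_deletes(void **out) { *out = NULL; return -1; /* simulate an early error */ }
    static void map_free(void *map) { (void)map; /* freeing NULL must be a no-op */ }

    static int mark_similarity_exact_sketch(void)
    {
        /* Initialize both to NULL so the cleanup path below never frees
         * uninitialized memory, even if the first call errors out early. */
        void *ours_deletes_by_oid = NULL, *theirs_deletes_by_oid = NULL;
        int error;

        if ((error = compute_deletes(&ours_deletes_by_oid)) < 0)
            goto done;
        if ((error = compute_deletes(&theirs_deletes_by_oid)) < 0)
            goto done;

    done:
        map_free(ours_deletes_by_oid);
        map_free(theirs_deletes_by_oid);
        return error;
    }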
|
|
40294f38
|
2017-06-21T12:25:52
|
|
Merge pull request #4202 from mitesch/linear_exact_rename
merge: perform exact rename detection in linear time
|
|
62c44c49
|
2017-06-21T12:25:26
|
|
Merge pull request #4211 from pks-t/pks/trusty
travis: upgrade container to Ubuntu 14.04
|
|
77850789
|
2017-06-21T08:11:12
|
|
Merge pull request #4273 from azdavis/fix-template-dir-empty-string
Fix template dir empty string
|
|
7c8d460f
|
2017-04-21T07:58:46
|
|
travis: upgrade container to Ubuntu 14.04
Ubuntu 12.04 (Precise Pangolin) reaches end of life on April 28th, 2017.
As such, we should update our build infrastructure to use the next
available LTS release, which is Ubuntu 14.04 LTS (Trusty Tahr). Note
that Trusty is still considered beta quality on Travis. But considering
we are able to correctly build and test libgit2, this seems to be a
non-issue for us.
Switch over our default distribution to Trusty. As Precise still has
extended support for paying customers, add an additional job which
compiles libgit2 on the old release.
|
|
06619904
|
2017-04-26T13:04:23
|
|
travis: cibuild: set up our own sshd server
Some of our tests require running against an SSH server.
Currently, we simply run against the SSH server provided and started by
Travis itself. As our Linux tests run in a sudo-less environment, we
have no control over its configuration and startup/shutdown procedure.
While this has been no problem until now, it will become a problem as
soon as we migrate over to newer Precise images, as the SSH server does
not have any host keys set up. Luckily, we can simply set up our own
unprivileged SSH server. This has the benefit of us being able to
modify its configuration even in a sudo-less environment.
This commit sets up the unprivileged SSH server on port 2222.
|
|
c2c95ad0
|
2017-04-26T13:16:18
|
|
tests: online::clone: use URL of test server
All our tests running against a local SSH server usually read the
server's URL from environment variables. But the online::clone::ssh_cert
test fails to do so and instead always connects to
"ssh://localhost/foo". This assumption breaks whenever the SSH server is
not running on the standard port, e.g. when it is running as a user.
Fix the issue by using the URL provided by the environment.
|
|
af720bb6
|
2017-06-16T23:19:31
|
|
repository: remove trailing whitespace
|
|
9a46c777
|
2017-06-16T21:02:26
|
|
repository: do not initialize templates if dir is an empty string
|
|
8e912e79
|
2017-06-16T21:05:58
|
|
tests: try to init with empty template path
|
|
15e11937
|
2017-06-14T13:31:20
|
|
CHANGELOG: document git_filter_init and GIT_FILTER_INIT
|
|
8296da5f
|
2017-06-14T10:49:28
|
|
Merge pull request #4267 from mohseenrm/master
adding GIT_FILTER_VERSION to GIT_FILTER_INIT as part of convention
|
|
4e257dab
|
2017-06-14T10:48:04
|
|
Merge pull request #4268 from pks-t/pks/homebrew-dupes-deprecation
travis: replace use of deprecated homebrew/dupes tap
|
|
953427b3
|
2017-06-14T10:47:55
|
|
Merge pull request #4269 from pks-t/pks/tests
Test improvements
|
|
a78441bc
|
2017-06-13T11:05:40
|
|
Adding git_filter_init for initializing `git_filter` struct + unit test
|
|
7f7dabda
|
2017-06-12T13:40:47
|
|
adding GIT_FILTER_VERSION to GIT_FILTER_INIT as part of convention
|
|
a180e7d9
|
2017-06-13T11:10:19
|
|
tests: odb: add more low-level backend tests
Introduce a new test suite "odb::backend::simple", which utilizes the
fake backend to exercise the ODB abstraction layer. While such tests
already exist for the case where multiple backends are put together, no
direct testing for functionality with a single backend exists yet.
|
|
b2e53f36
|
2017-06-13T11:39:36
|
|
tests: odb: implement `exists_prefix` for the fake backend
The fake backend currently implements all reading functions except for
the `exists_prefix` one. Implement it to enable further testing of the
ODB layer.
|
|
983e627d
|
2017-06-13T11:38:59
|
|
tests: odb: use correct OID length
The `search_object` function takes the OID length as one of its
parameters, where its maximum length is `GIT_OID_HEXSZ`. The `exists`
function of the fake backend used `GIT_OID_RAWSZ` though, leading to
only the first half of the OID being used when finding the correct
object.
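For context, `GIT_OID_RAWSZ` is the raw size in bytes (20) while `GIT_OID_HEXSZ` is the length of the hex representation (40), so passing the raw size as a hex length compares only half of the OID; a minimal check:

    #include <assert.h>
    #include <git2.h>

    int main(void)
    {
        /* Raw size in bytes vs. length of the hex representation. */
        assert(GIT_OID_RAWSZ == 20);
        assert(GIT_OID_HEXSZ == 2 * GIT_OID_RAWSZ);
        return 0;
    }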
|
|
c4cbb3b1
|
2017-06-13T11:38:14
|
|
tests: odb: have the fake backend detect ambiguous prefixes
In order to be able to test the ODB prefix functions, we need to be able
to detect ambiguous prefixes in case multiple objects with the same
prefix exist in the fake ODB. Extend `search_object` to detect ambiguous
queries and have callers return its error code instead of always
returning `GIT_ENOTFOUND`.
|
|
95170294
|
2017-06-13T11:08:28
|
|
tests: core: test initialization of `git_proxy_options`
Initialization of the `git_proxy_options` structure is never tested
anywhere. Include it in our usual initialization test in
"core::structinit::compare".
|
|
bee423cc
|
2017-06-13T10:29:23
|
|
tests: network: add missing include for `git_repository_new`
A newly added test uses the `git_repository_new` function without the
corresponding header file being included. While this works due to the
compiler deducing the correct function signature, we should obviously
just include the function's declaration file.
|
|
a64532e1
|
2017-06-13T11:05:09
|
|
cmake: disable optimization on debug builds
While our debug builds on MSVC platforms already tune the code optimizer
to aid debugging code, all the other platforms still use the default
optimization level. This makes it hard for developers on these platforms
to actually debug code while maintaining their sanity due to optimizations
like inlined code, elided variables, etc.
To help this common use case, we can simply follow the MSVC example and
turn off code optimization with "-O0" for debug builds. While it would
be preferable to instead use "-Og" supported by more modern compilers,
we cannot guarantee that this level is available on all supported
platforms.
|
|
61399953
|
2017-06-13T11:03:38
|
|
cmake: set "-D_DEBUG" on non-Windows platforms
In our code base, we have some occasions where we use the "_DEBUG"
preprocessor macro to enable additional code which should not be part of
release builds. While we define this flag on MSVC platforms, it is
guarded by the conditional `WIN32 AND NOT CYGWIN` on other platforms
since 19be3f9e6 (Improve MSVC compiler, linker flags, 2013-02-13). While
this condition can be fulfilled by the MSVC platform, it is never
encountered due to being part of the `ELSE` part of `IF (MSVC)`.
The intention of the conditional was most likely to avoid the
preprocessor macro on Cygwin platforms, but to include it on everything
else. As such, the correct condition here would be `IF (NOT CYGWIN)`
instead. But digging a bit further, the condition is only ever used in
two places:
1. To skip the test in "core::structinit", which should also work on
Cygwin.
2. In "src/win32/git2.rc", where it is used to set additional file
flags. As this file is included in MSVC builds only, it cannot cause
any harm to set "_DEBUG" on Cygwin here.
As such, we can simply drop the conditional and always set "-D_DEBUG" on
all platforms.
|
|
e94be4c0
|
2017-06-13T11:08:19
|
|
cmake: remove stale comment on precompiled headers
In commit 9f75a9ce7 (Turning on runtime checks when building debug under
MSVC., 2012-03-30), we introduced a comment "Precompiled headers", which
actually refers to no related commands. Seeing that the comment never
had anything to refer to, we can simply remove it here.
|
|
96d02989
|
2017-06-13T08:09:38
|
|
travis: replace use of deprecated homebrew/dupes tap
The formulae provided by the homebrew/dupes tap have been deprecated since at
least April 4, 2017, with formulae having been migrated to
homebrew/core.
Replace the deprecated reference to "homebrew/dupes/zlib" with only
"zlib".
|
|
2ca088bd
|
2017-06-12T22:47:54
|
|
Merge pull request #4265 from pks-t/pks/read-prefix-tests
Read prefix tests
|
|
99e40a67
|
2017-06-12T21:23:44
|
|
Merge pull request #4263 from libgit2/ethomson/config_for_inmemory_repo
Allow creation of a configuration object in an in-memory repository
|
|
d9914fb7
|
2017-06-12T21:22:27
|
|
Merge pull request #4266 from libgit2/ethomson/travis-explicit-openssl
travis: install openssl explicitly
|
|
844e85f2
|
2017-06-12T20:00:21
|
|
travis: install openssl explicitly
|
|
fe9a5dd3
|
2017-06-12T12:00:14
|
|
remote: ensure we can create an anon remote on inmemory repo
Given a wholly in-memory repository, ensure that we can create an
anonymous remote and perform actions on it.
|
|
2d486781
|
2017-06-12T12:02:27
|
|
repository: don't fail to create config option in inmemory repo
When in an in-memory repository - without a configuration file - do not
fail to create a configuration object.
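A hedged usage sketch of the behavior this enables, built on the public `git_repository_new` and `git_repository_config` APIs (error handling trimmed, library initialization assumed):

    #include <git2.h>

    /* Create a wholly in-memory repository (no config file on disk) and
     * still obtain a usable configuration object. */
    static int inmemory_config_example(void)
    {
        git_repository *repo = NULL;
        git_config *cfg = NULL;
        int error;

        if ((error = git_repository_new(&repo)) < 0)
            return error;

        /* Before this fix, this call could fail because no config file exists. */
        error = git_repository_config(&cfg, repo);

        git_config_free(cfg);
        git_repository_free(repo);
        return error;
    }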
|
|
9d49a43c
|
2017-06-12T12:01:10
|
|
repository_item_path: return ENOTFOUND when appropriate
Disambiguate error values: return `GIT_ENOTFOUND` when the item cannot
exist in the repository (perhaps because the repository is inmemory or
otherwise not backed by a filesystem), return `-1` when there is a hard
failure.
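A hedged sketch of how a caller can now distinguish the two failure modes, assuming the public `git_repository_item_path` API and the `GIT_REPOSITORY_ITEM_CONFIG` item:

    #include <stdio.h>
    #include <git2.h>

    /* Distinguish "this item cannot exist here" from a hard failure. */
    static void report_config_path(git_repository *repo)
    {
        git_buf path = GIT_BUF_INIT;
        int error = git_repository_item_path(&path, repo, GIT_REPOSITORY_ITEM_CONFIG);

        if (error == GIT_ENOTFOUND)
            printf("no on-disk config can exist (e.g. in-memory repository)\n");
        else if (error < 0)
            printf("hard failure looking up the config path\n");
        else
            printf("config lives at %s\n", path.ptr);

        git_buf_free(&path);
    }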
|
|
f148258a
|
2017-06-12T16:19:45
|
|
tests: odb: add tests with multiple backends
Prior to pulling out and extending the fake backend, it was quite
cumbersome to write tests for very specific scenarios regarding
backends. But as we have made it more generic, it has become much easier
to do so. As such, this commit adds multiple tests for scenarios with
multiple backends for the ODB.
The changes also include a test for a very targeted scenario: when one
backend finds a matching object via `read_prefix`, but the last backend
returns `GIT_ENOTFOUND` and object hash verification is turned off,
we fail to reset the error code to `GIT_OK`. This causes us to segfault
later on due to a double-free on the returned object.
|
|
6e010bb1
|
2017-06-12T15:43:56
|
|
tests: odb: allow passing fake objects to the fake backend
Right now, the fake backend is quite restricted in how it works: we pass
it an OID which it is to return later, as well as an error code we want
it to return. While this is sufficient for existing tests, we can make
the fake backend a little more generic in order to allow us to test
additional scenarios.
To do so, we change the backend to not accept an error code and OID
which it is to return for queries, but instead a simple array of OIDs
with their respective blob contents. On each query, the fake backend
simply iterates through this array and returns the first matching
object.
|
|
369cb45f
|
2017-06-12T15:21:58
|
|
tests: do not reuse OID from backend
In order to make the fake backend more useful, we want to enable it to
hold multiple object references. To do so, we need to decouple it
from the single fake OID it currently holds, which we simply move up
into the calling tests.
|
|
2add34d0
|
2017-06-12T14:53:46
|
|
tests: odb: move fake backend into its own file
The fake backend used by the test suite `odb::backend::nonrefreshing` is
useful to have some low-level tests for the ODB layer. As such, we move
the implementation into its own `backend_helpers` module.
|
|
9927e958
|
2017-06-12T16:01:22
|
|
Merge pull request #4261 from RogerGee/fix_wait_while_ack
smart_protocol: fix parsing of server ACK responses
|
|
2ade8fb0
|
2017-06-12T07:33:41
|
|
Merge pull request #4264 from libgit2/ethomson/read_prefix
odb_read_prefix: reset error in backends loop
|
|
cb3010c5
|
2017-06-12T12:56:40
|
|
odb_read_prefix: reset error in backends loop
When looking for an object by prefix, we query all the backends so that
we can ensure that there is no ambiguity. We need to reset the `error`
value between backends; otherwise the first backend may find an object
by prefix, but subsequent backends may not. If we do not reset the
`error` value then it will remain at `GIT_ENOTFOUND` and `read_prefix_1`
will fail, despite having actually found an object.
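A hedged sketch of the loop shape being described (hypothetical `backend_t`/`object_t` types; not the literal `read_prefix_1` code): once a backend has found the object, `error` must be reset so a later `GIT_ENOTFOUND` cannot override the hit.

    #include <stddef.h>

    #define GIT_ENOTFOUND (-3)  /* matches libgit2's error code value */

    /* Hypothetical, minimal stand-ins for ODB backends and objects. */
    typedef struct { int id; } object_t;
    typedef struct backend {
        int (*read_prefix)(object_t **out, struct backend *b, int prefix);
    } backend_t;

    /* Query every backend so ambiguity can be detected, and reset `error`
     * once any backend has found the object. */
    static int read_prefix_sketch(object_t **out, backend_t **backends, size_t n, int prefix)
    {
        int error = GIT_ENOTFOUND;

        for (size_t i = 0; i < n; i++) {
            object_t *result = NULL;
            int e = backends[i]->read_prefix(&result, backends[i], prefix);

            if (e == GIT_ENOTFOUND)
                continue;   /* keep searching the remaining backends */
            if (e < 0)
                return e;   /* hard failure */

            *out = result;
            error = 0;      /* the fix: a later GIT_ENOTFOUND must not override the hit */
        }

        return error;
    }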
|
|
fb3fc837
|
2017-06-12T11:45:09
|
|
repository_item_path: error messages lowercased
|
|
bd692809
|
2017-06-11T12:32:00
|
|
Merge pull request #4262 from libgit2/ethomson/bump-v26
Update version number to 0.26
|
|
2a3cc403
|
2017-06-11T12:23:34
|
|
Update version number to v0.26
|
|
a1b4cafd
|
2017-06-11T12:21:23
|
|
changelog: add some final 0.26 changes
|
|
29ef7d3f
|
2017-06-11T10:58:35
|
|
Merge pull request #4254 from pks-t/pks/changelog-v0.26
CHANGELOG: add various changes introduced since v0.25
|
|
6f960b55
|
2017-06-11T10:37:46
|
|
Merge pull request #4088 from chescock/packfile-name-using-complete-hash
Ensure packfiles with different contents have different names
|
|
d2c4f764
|
2017-06-11T09:54:04
|
|
Merge pull request #4260 from libgit2/ethomson/forced_checkout_2
Update to forced checkout and untracked files
|
|
4a0df574
|
2017-06-10T18:46:35
|
|
git_futils_rmdir: only allow `EBUSY` when asked
Only ignore `EBUSY` from `rmdir` when the `GIT_RMDIR_SKIP_NONEMPTY` bit
is set.
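A hedged sketch of the tightened condition; the flag name comes from the message, its value here is illustrative:

    #include <errno.h>

    #define GIT_RMDIR_SKIP_NONEMPTY (1 << 0)  /* illustrative value */

    /* Only tolerate EBUSY from rmdir() when the caller asked us to skip
     * non-empty directories; otherwise propagate the error. */
    static int ignorable_rmdir_error(int err, unsigned int flags)
    {
        if (err == EBUSY && (flags & GIT_RMDIR_SKIP_NONEMPTY) != 0)
            return 1;  /* ignore and continue */
        return 0;      /* report the failure */
    }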
|
|
83989d70
|
2017-06-08T22:23:53
|
|
checkout: cope with untracked files in directory deletion
When deleting a directory during checkout, do not simply delete the
directory, since there may be untracked files. Instead, go into
the iterator and examine each file.
In the original code (the code with the faulty assumption), we look to
see if there's an index entry beneath the directory that we want to
remove. Eg, it looks to see if we have a workdir entry foo and an
index entry foo/bar.txt. If this is not the case, then the working
directory must have precious files in that directory. This part is okay.
The part that's not okay is if there is an index entry foo/bar.txt. It
just blows away the whole damned directory.
That's not cool.
Instead, by simply pushing the directory itself onto the stack and
iterating each entry, we will deal with the files one by one - whether
they're in the index (and can be force removed) or not (and are
precious).
The original code was a bad optimization, assuming that we didn't need
to git_iterator_advance_into if there was any index entry in the folder.
That's wrong - we could have optimized this iff all folder entries are
in the index.
Instead, we need to simply dig into the directory and analyze its
entries.
|
|
0ef405b3
|
2017-02-15T14:05:10
|
|
checkout: do not delete directories with untracked entries
If the `GIT_CHECKOUT_FORCE` flag is given to any of the `git_checkout`
invocations, we remove files which were previously staged. But while
doing so, we unfortunately also remove unstaged files in a directory
which contains at least one staged file, resulting in potential data
loss.
This commit adds two tests to verify behavior.
|
|
e141f079
|
2017-06-10T11:46:09
|
|
smart_protocol: fix parsing of server ACK responses
Fix ACK parsing in wait_while_ack() internal function. This patch
handles the case where multi_ack_detailed mode sends 'ready' ACKs. The
existing functionality would bail out too early, thus causing the
processing of the ensuing packfile to fail if/when 'ready' ACKs were
sent.
|
|
a1510880
|
2017-06-07T08:32:41
|
|
CHANGELOG: add various changes introduced since v0.25
|
|
e476d528
|
2017-06-08T22:54:30
|
|
Merge pull request #4259 from pks-t/pks/fsync-option-rename
settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION`
|
|
6c23704d
|
2017-06-08T21:40:18
|
|
settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION`
Initially, the setting has been solely used to enable the use of
`fsync()` when creating objects. Since then, the use has been extended
to also cover references and index files. As the option is not yet part
of any release, we can still correct this by renaming the option to
something more sensible that does not imply it applies only to objects.
This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also
move the variable from the object to the repository source code.
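A hedged usage sketch of enabling the renamed option through the public options API:

    #include <git2.h>

    /* Opt in to fsync()ing files written into the gitdir (objects, refs, index). */
    static int enable_gitdir_fsync(void)
    {
        return git_libgit2_opts(GIT_OPT_ENABLE_FSYNC_GITDIR, 1);
    }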
|
|
458cea5c
|
2017-06-08T14:22:24
|
|
Merge pull request #4255 from pks-t/pks/buffer-grow-errors
Buffer growing cleanups
|
|
90500d81
|
2017-06-08T13:56:22
|
|
Merge pull request #4253 from pks-t/pks/cov-fixes
Coverity fixes
|
|
90388aa8
|
2017-06-06T15:02:23
|
|
refdb_fs: be explicit about using null-OID if we cannot resolve ref
|
|
78a8f68f
|
2017-06-06T14:57:31
|
|
path: only set dotgit flags when configs were read
|