Log

Author Commit Date Message
Edward Thomson 8631357e 2017-10-07T12:23:33 checkout: do not test file mode on Windows On Windows, we do not support file mode changes, so do not test for type changes between the disk and tree being checked out. We could have false positives since the on-disk file can only have an (effective) mode of 0100644 since NTFS does not support executable files. If the tree being checked out did have an executable file, we would erroneously decide that the file on disk had been changed.
Edward Thomson 24388179 2016-06-15T02:00:35 checkout: treat files as modified if mode differs When performing a forced checkout, treat files as modified when the workdir or the index is identical except for the mode. This ensures that force checkout will update the mode to the target. (Apply this check for regular files only, if one of the items was a file and the other was another type of item then this would be a typechange and handled independently.)
Patrick Steinhardt cda18f9b 2017-10-06T11:24:11 refs: do not use peeled OID if peeling to a tag If a reference stored in a packed-refs file does not directly point to a commit, tree or blob, the packed-refs file will also include a fully-peeled OID pointing to the first underlying object of that type. If we try to peel a reference to an object, we will use that peeled OID to speed up resolving the object. As a reference for an annotated tag does not directly point to a commit, tree or blob but instead to the tag object, the packed-refs file will have an accommodating fully-peeled OID pointing to the object referenced by that tag. If we use that fully-peeled OID when peeling, we obviously cannot peel it to the tag anymore. Fix this issue by not using the fully-peeled OID whenever we want to peel to a tag. Note that this does not include the case where we want to resolve to _any_ object type. Existing code may make use of the fact that we resolve those to commit objects instead of tag objects, even though that behaviour is inconsistent between packed and loose references. Furthermore, some tests of ours make the assumption that we in fact resolve those references to a commit.
Carlos Martín Nieto 8d7dcb10 2017-09-27T15:27:32 curl: free the proxy options
Patrick Steinhardt 8d86cdd4 2017-07-07T12:27:43 ignore: return early to avoid useless indentation
Carlos Martín Nieto e29ab6fe 2017-09-27T15:17:26 proxy: add a free function for the options' pointers When we duplicate a user-provided options struct, we're stuck with freeing the url in it. In case we add more fields to the proxy struct, let's add a function in which to put the freeing logic.
Andreas Smas c2702235 2017-01-20T23:14:19 Use SOCK_CLOEXEC when creating sockets
Slava Karpenko c3fbf905 2017-09-11T21:34:41 Clear the remote_ref_name buffer in git_push_update_tips() If fetch_spec is a non-pattern refspec and we are not in the first iteration over the push_status vector, git_refspec_transform would append the new value (via git_buf_puts) to the value left over from the previous iteration. Forcibly clear the buffer on each iteration to prevent this behavior.
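A hedged illustration of the fix pattern; the loop, the `names` array and the `update_names` wrapper are invented for this sketch, while `GIT_BUF_INIT`, `git_buf_clear`, `git_buf_puts` and `git_buf_free` are libgit2's internal buffer helpers (and `git_buf_puts` appends rather than overwrites):

```
#include "buffer.h"   /* libgit2's internal git_buf helpers */

/* Sketch only: clear a reused git_buf at the top of each iteration so the
 * previous iteration's contents are not appended to. */
static int update_names(const char **names, size_t n)
{
	git_buf buf = GIT_BUF_INIT;
	size_t i;
	int error = 0;

	for (i = 0; i < n; i++) {
		git_buf_clear(&buf);                  /* reset before reuse */
		if ((error = git_buf_puts(&buf, names[i])) < 0)
			break;
		/* ... look up and update the tip named buf.ptr ... */
	}

	git_buf_free(&buf);
	return error;
}
```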
Jeff King 3ca2bb39 2017-08-09T16:34:02 sha1_position: convert do-while to while If we enter the sha1_position() function with "lo == hi", we have no elements. But the do-while loop means that we'll enter the loop body once anyway, picking "mi" at that same value and comparing nonsense to our desired key. This is unlikely to match in practice, but we still shouldn't be looking at the memory in the first place. This bug is inherited from git.git; it was fixed there in e01580cfe01526ec2c4eb4899f776a82ade7e0e1.
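A minimal sketch of the while-loop shape described above; the signature and the 20-byte key width are assumptions for illustration, not git's exact code:

```
#include <string.h>
#include <stddef.h>

/* Binary search over a table of 20-byte SHA-1 entries. With "lo == hi" the
 * while condition fails immediately, so an empty range never dereferences
 * an element, unlike the previous do-while form. */
static int sha1_position(const unsigned char *table, size_t stride,
                         size_t lo, size_t hi, const unsigned char *key)
{
	while (lo < hi) {
		size_t mi = lo + (hi - lo) / 2;
		int cmp = memcmp(table + mi * stride, key, 20);

		if (cmp == 0)
			return (int)mi;   /* found */
		if (cmp > 0)
			hi = mi;          /* key sorts before entry mi */
		else
			lo = mi + 1;      /* key sorts after entry mi */
	}

	return -1;                        /* not found */
}
```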
Patrick Steinhardt f41e86d6 2017-10-06T12:05:26 transports: smart: fix memory leak when skipping symbolic refs When we set up the revision walk for negotiating references with a remote, we iterate over all references, ignoring tags and symbolic references. While skipping over symbolic references, we forget to free the looked-up reference, resulting in a memory leak when the next iteration simply overwrites the variable. Fix that issue by freeing the reference at the beginning of each iteration and collapsing the return paths for error and success.
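A minimal sketch of that pattern, assuming a libgit2 reference iterator is already set up; everything besides the public `git_reference_*` calls is illustrative:

```
#include <git2.h>

static int walk_refs(git_reference_iterator *iter)
{
	git_reference *ref = NULL;
	int error;

	while (1) {
		/* Free whatever the previous iteration left behind, including
		 * references that were merely skipped. */
		git_reference_free(ref);
		ref = NULL;

		if ((error = git_reference_next(&ref, iter)) != 0)
			break;

		if (git_reference_type(ref) == GIT_REF_SYMBOLIC)
			continue;   /* skipped, but freed on the next pass */

		/* ... use `ref` for negotiation ... */
	}

	git_reference_free(ref);
	return error == GIT_ITEROVER ? 0 : error;
}
```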
Carlos Martín Nieto 93ecb61a 2017-10-07T11:25:12 proxy: rename the options freeing function
Carlos Martín Nieto 21f77af9 2017-07-12T07:40:16 signature: don't leave a dangling pointer to the strings on parse failure If the signature is invalid but we only detect that after allocating the strings, we free them. We however leave the pointers dangling in the structure the caller gave us, which can lead to a double-free. Set these pointers to `NULL` after freeing their memory to avoid this.
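A hedged sketch of the fix pattern; the `signature` struct and the helper are hypothetical stand-ins, not libgit2's parser:

```
#include <stdlib.h>

struct signature { char *name; char *email; };

/* On a parse failure, free the strings and reset the pointers in the
 * caller-owned struct so a later cleanup cannot double-free them. */
static int signature_parse_failed(struct signature *sig)
{
	free(sig->name);
	free(sig->email);
	sig->name = NULL;    /* no dangling pointer left behind */
	sig->email = NULL;
	return -1;           /* report the parse error */
}
```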
Patrick Steinhardt 4296a36b 2017-07-10T09:36:19 ignore: honor case insensitivity for negative ignores When computing negative ignores, we optimize by throwing away any rule which does not undo a previous rule. But on case-insensitive file systems, we need to keep in mind that a negative ignore can also undo a previous rule with different case, which we did not yet honor while determining whether a rule undoes a previous one. So in the following example, we fail to unignore the "/Case" directory: /case !/Case Make both code paths that check whether a plain- or wildcard-based rule undoes a previous rule aware of case insensitivity. This fixes the described issue.
Carlos Martín Nieto 27a8092b 2017-09-27T15:30:19 curl: free the user-provided proxy credentials
Patrick Steinhardt 32cc5edc 2017-07-07T17:10:57 tests: status: additional test for negative ignores with pattern This test is by Carlos Martín Nieto.
Patrick Steinhardt 5c15cd94 2017-07-07T13:27:27 ignore: keep negative rules containing wildcards Ignore rules allow reverting a previous ignore rule by prefixing a pattern with an exclamation mark. As such, a negative rule can only override previously ignored files. While computing all ignore patterns, we try to use this fact to optimize away some negative rules which do not override any previous patterns, as they won't change the outcome anyway. In some cases, though, this optimization causes us to get the actual ignores wrong for some files. This may happen whenever the pattern contains a wildcard, as we are unable to reason about whether a pattern overrides a previous pattern in a sane way. This happens for example in the case where a gitignore file contains "*.c" and "!src/*.c", where we wouldn't un-ignore files inside of the "src/" subdirectory. In this case, the first solution coming to mind may be to just strip the "src/" prefix and simply compare the basenames. While that would work here, it would stop working as soon as the basename pattern itself is different, like for example with "*x.c" and "!src/*.c". As such, we settle for the easier fix of just not optimizing away rules that contain a wildcard.
Patrick Steinhardt 58197758 2017-07-07T12:27:18 ignore: fix indentation of comment block
Ian Douglas Scott f908bb8e 2017-06-23T10:10:29 Convert port with htons() in p_getaddrinfo() `sin_port` should be in network byte order.
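A small illustration of the byte-order rule mentioned above; the helper name and port value are made up:

```
#include <arpa/inet.h>    /* htons() */
#include <netinet/in.h>   /* struct sockaddr_in */
#include <string.h>

/* sin_port is stored in network byte order, so a host-order port number
 * must be converted with htons() before use. */
static void fill_addr(struct sockaddr_in *addr, unsigned short port)
{
	memset(addr, 0, sizeof(*addr));
	addr->sin_family = AF_INET;
	addr->sin_port = htons(port);   /* e.g. 9418 for git:// */
}
```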
Etienne Samson e7c24ea2 2017-07-20T21:00:15 tests: fix the rebase-submodule test
Etienne Samson 54d4e5de 2017-06-21T14:57:30 Remove invalid submodule Fixes #4274
Ariel Davis e4517af3 2017-06-16T23:19:31 repository: remove trailing whitespace
Ariel Davis 82bb59b4 2017-06-16T21:02:26 repository: do not initialize templates if dir is an empty string
Ariel Davis cc9b0b6c 2017-06-16T21:05:58 tests: try to init with empty template path
Patrick Steinhardt dd2d5381 2018-03-08T18:00:46 Merge pull request #4572 from pks-t/pks/index-secfixes Security fixes for reading index v4
Patrick Steinhardt 182e8e5e 2018-03-08T16:19:16 Bump version to v0.26.2
Patrick Steinhardt 01b5a161 2018-03-08T16:23:15 CHANGELOG: update for v0.26.2
Patrick Steinhardt 6f4d04b5 2018-03-08T12:36:46 index: error out on unreasonable prefix-compressed path lengths When computing the complete path length from the encoded prefix-compressed path, we end up just allocating the complete path without ever checking what the encoded path length actually is. This can easily lead to a denial of service by just encoding an unreasonably long path name inside of the index. Git already enforces a maximum path length of 4096 bytes. As we also have that enforcement ready in some places, just make sure that the resulting path is smaller than GIT_PATH_MAX. Reported-by: Krishna Ram Prakash R <krp@gtux.in> Reported-by: Vivek Parikh <viv0411.parikh@gmail.com>
Patrick Steinhardt 6ddd286e 2018-03-08T12:00:27 index: fix out-of-bounds read with invalid index entry prefix length The index format in version 4 has prefix-compressed entries, where every index entry can compress its path by using a path prefix of the previous entry. Since implementing support for this index format version in commit 5625d86b9 (index: support index v4, 2016-05-17), though, we do not correctly verify that the prefix length that we want to reuse is actually smaller than or equal to the length of the previous index entry's path. This can lead to an integer underflow and subsequently to an out-of-bounds read. Fix this by verifying that the prefix is actually smaller than the previous entry's path length. Reported-by: Krishna Ram Prakash R <krp@gtux.in> Reported-by: Vivek Parikh <viv0411.parikh@gmail.com>
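A hedged sketch of the bounds check described above; the variable and function names are illustrative, not libgit2's actual code:

```
#include <stddef.h>

/* Validate the v4 prefix length against the previous entry's path length
 * before reusing it, so `last_len - prefix_len` cannot wrap around. */
static int check_prefix_len(size_t prefix_len, size_t last_len, size_t *shared_out)
{
	if (prefix_len > last_len)
		return -1;                        /* corrupt index entry */

	*shared_out = last_len - prefix_len;  /* no underflow possible */
	return 0;
}
```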
Patrick Steinhardt b6756821 2018-03-08T11:49:19 index: convert `read_entry` to return entry size via an out-param The function `read_entry` does not conform to our usual coding style of returning values via an out parameter and using the return value to report errors. Because most of our code conforms to that pattern, it has become quite natural for us to actually return `-1` in case there is any error, which has also slipped in with commit 5625d86b9 (index: support index v4, 2016-05-17). As the function only returns a `size_t`, though, the return value wraps around, causing the caller of `read_tree` to continue with an invalid index entry. Ultimately, this can lead to a double-free. Improve the code and fix the bug by converting the function to return the index entry size via an out parameter and only using the return value to indicate errors. Reported-by: Krishna Ram Prakash R <krp@gtux.in> Reported-by: Vivek Parikh <viv0411.parikh@gmail.com>
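A generic sketch of the calling convention described above; the body is a placeholder, not the actual parser:

```
#include <stddef.h>

/* Report errors through the int return value and hand the entry size back
 * through an out parameter, so a failure can never be misread as a huge
 * size_t value by the caller. */
static int read_entry(size_t *out_size, const void *buffer, size_t buffer_size)
{
	size_t entry_size = 0;

	if (buffer == NULL || buffer_size == 0)
		return -1;                /* unambiguous error */

	/* ... parse the entry from `buffer`, filling `entry_size` ... */

	*out_size = entry_size;
	return 0;
}
```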
Edward Thomson 3f15bf8b 2018-03-07T17:46:15 Merge pull request #4568 from pks-t/pks/zlib-update-0.26 deps: upgrade embedded zlib to version 1.2.11
Patrick Steinhardt 67211f31 2018-03-07T10:42:44 Bump version to 0.26.1
Patrick Steinhardt aade4bd1 2018-03-07T16:00:05 CHANGELOG.md: update for version 0.26.1
Carlos Martín Nieto 490c7426 2018-01-10T15:13:23 travis: we use bintray's own key for signing The VM on Travis apparently will still proceed, but it's good practice.
Edward Thomson acbb435c 2018-01-10T12:33:56 travis: fetch trusty dependencies from bintray The trusty dependencies are now hosted on Bintray.
Patrick Steinhardt f05f90d8 2017-09-15T10:28:32 cmake: fix linker error with dbghelper library When the MSVC_CRTDBG option is set by the developer, we will link in the dbghelper library to enable memory leak detection in MSVC projects. We are doing so by adding it to the variable `CMAKE_C_STANDARD_LIBRARIES`, so that it is linked for every library and executable built by CMake. But this causes our builds to fail with a linker error: ``` LINK: fatal error LNK1104: cannot open file 'advapi32.lib;Dbghelp.lib' ``` The issue here is that we are treating the variable as if it were an array of libraries by setting it via the following command: ``` SET(CMAKE_C_STANDARD_LIBRARIES "${CMAKE_C_STANDARD_LIBRARIES}" "Dbghelp.lib") ``` The generated build commands will then simply stringify the variable, concatenating all the contained libraries with a ";". This causes the observed linking failure. To fix the issue, we should just treat the variable as a simple string. So instead of adding multiple members, we just add the "Dbghelp.lib" library to the existing string, separated by a space character.
Patrick Steinhardt edc03027 2018-03-07T10:28:21 deps: upgrade embedded zlib to version 1.2.11 The current version of zlib bundled with libgit2 is version 1.2.8. This version has several CVEs assigned: - CVE-2016-9843 - CVE-2016-9841 - CVE-2016-9842 - CVE-2016-9840 Upgrade the bundled version to the current release 1.2.11, which includes fixes for these vulnerabilities.
Edward Thomson 15e11937 2017-06-14T13:31:20 CHANGELOG: document git_filter_init and GIT_FILTER_INIT
Edward Thomson 8296da5f 2017-06-14T10:49:28 Merge pull request #4267 from mohseenrm/master adding GIT_FILTER_VERSION to GIT_FILTER_INIT as part of convention
Edward Thomson 4e257dab 2017-06-14T10:48:04 Merge pull request #4268 from pks-t/pks/homebrew-dupes-deprecation travis: replace use of deprecated homebrew/dupes tap
Edward Thomson 953427b3 2017-06-14T10:47:55 Merge pull request #4269 from pks-t/pks/tests Test improvements
Mohseen Mukaddam a78441bc 2017-06-13T11:05:40 Adding git_filter_init for initializing `git_filter` struct + unit test
Mohseen Mukaddam 7f7dabda 2017-06-12T13:40:47 adding GIT_FILTER_VERSION to GIT_FILTER_INIT as part of convention
Patrick Steinhardt a180e7d9 2017-06-13T11:10:19 tests: odb: add more low-level backend tests Introduce a new test suite "odb::backend::simple", which utilizes the fake backend to exercise the ODB abstraction layer. While such tests already exist for the case where multiple backends are put together, no direct testing of functionality with a single backend exists yet.
Patrick Steinhardt b2e53f36 2017-06-13T11:39:36 tests: odb: implement `exists_prefix` for the fake backend The fake backend currently implements all reading functions except for the `exists_prefix` one. Implement it to enable further testing of the ODB layer.
Patrick Steinhardt 983e627d 2017-06-13T11:38:59 tests: odb: use correct OID length The `search_object` function takes the OID length as one of its parameters, where its maximum length is `GIT_OID_HEXSZ`. The `exists` function of the fake backend used `GIT_OID_RAWSZ` though, leading to only the first half of the OID being used when finding the correct object.
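The distinction behind that bug, using libgit2's public OID constants: `GIT_OID_RAWSZ` (20) counts raw bytes, while `GIT_OID_HEXSZ` (40) counts hexadecimal characters, which is the unit a prefix lookup measured in hex characters must use. A small illustrative snippet:

```
#include <git2/oid.h>

/* Formatting an OID produces GIT_OID_HEXSZ characters; the raw binary
 * form occupies only GIT_OID_RAWSZ bytes. */
static void oid_sizes_example(const git_oid *oid)
{
	char hex[GIT_OID_HEXSZ + 1];

	git_oid_fmt(hex, oid);        /* writes GIT_OID_HEXSZ characters */
	hex[GIT_OID_HEXSZ] = '\0';
}
```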
Patrick Steinhardt c4cbb3b1 2017-06-13T11:38:14 tests: odb: have the fake backend detect ambiguous prefixes In order to be able to test the ODB prefix functions, we need to be able to detect ambiguous prefixes in case multiple objects with the same prefix exist in the fake ODB. Extend `search_object` to detect ambiguous queries and have callers return its error code instead of always returning `GIT_ENOTFOUND`.
Patrick Steinhardt 95170294 2017-06-13T11:08:28 tests: core: test initialization of `git_proxy_options` Initialization of the `git_proxy_options` structure is never tested anywhere. Include it in our usual initialization test in "core::structinit::compare".
Patrick Steinhardt bee423cc 2017-06-13T10:29:23 tests: network: add missing include for `git_repository_new` A newly added test uses the `git_repository_new` function without the corresponding header file being included. While this works due to the compiler deducing the correct function signature, we should obviously just include the function's declaration file.
Patrick Steinhardt a64532e1 2017-06-13T11:05:09 cmake: disable optimization on debug builds While our debug builds on MSVC platforms already tune the code optimizer to aid debugging code, all the other platforms still use the default optimization level. This makes it hard for developers on these platforms to actually debug code while maintaining their sanity, due to optimizations like inlined code, elided variables etc. To help this common use case, we can simply follow the MSVC example and turn off code optimization with "-O0" for debug builds. While it would be preferable to instead use "-Og", supported by more modern compilers, we cannot guarantee that this level is available on all supported platforms.
Patrick Steinhardt 61399953 2017-06-13T11:03:38 cmake: set "-D_DEBUG" on non-Windows platforms In our code base, we have some occasions where we use the "_DEBUG" preprocessor macro to enable additional code which should not be part of release builds. While we define this flag on MSVC platforms, it is guarded by the conditional `WIN32 AND NOT CYGWIN` on other platforms since 19be3f9e6 (Improve MSVC compiler, linker flags, 2013-02-13). While this condition can be fulfilled by the MSVC platform, it is never encountered due to being part of the `ELSE` part of `IF (MSVC)`. The intention of the conditional was most likely to avoid the preprocessor macro on Cygwin platforms, but to include it on everything else. As such, the correct condition here would be `IF (NOT CYGWIN)` instead. But digging a bit further, the condition is only ever used in two places: 1. To skip the test in "core::structinit", which should also work on Cygwin. 2. In "src/win32/git2.rc", where it is used to set additional file flags. As this file is included in MSVC builds only, it cannot cause any harm to set "_DEBUG" on Cygwin here. As such, we can simply drop the conditional and always set "-D_DEBUG" on all platforms.
Patrick Steinhardt e94be4c0 2017-06-13T11:08:19 cmake: remove stale comment on precompiled headers In commit 9f75a9ce7 (Turning on runtime checks when building debug under MSVC., 2012-03-30), we introduced a comment "Precompiled headers", which actually refers to no related commands. Seeing that the comment never had anything to refer to, we can simply remove it here.
Patrick Steinhardt 96d02989 2017-06-13T08:09:38 travis: replace use of deprecated homebrew/dupes tap The formulae provided by the homebrew/dupes tap are deprecated since at least April 4, 2017, with formulae having been migrated to homebrew/core. Replace the deprecated reference to "homebrew/dupes/zlib" with only "zlib".
Edward Thomson 2ca088bd 2017-06-12T22:47:54 Merge pull request #4265 from pks-t/pks/read-prefix-tests Read prefix tests
Edward Thomson 99e40a67 2017-06-12T21:23:44 Merge pull request #4263 from libgit2/ethomson/config_for_inmemory_repo Allow creation of a configuration object in an in-memory repository
Edward Thomson d9914fb7 2017-06-12T21:22:27 Merge pull request #4266 from libgit2/ethomson/travis-explicit-openssl travis: install openssl explicitly
Edward Thomson 844e85f2 2017-06-12T20:00:21 travis: install openssl explicitly
Edward Thomson fe9a5dd3 2017-06-12T12:00:14 remote: ensure we can create an anon remote on inmemory repo Given a wholly in-memory repository, ensure that we can create an anonymous remote and perform actions on it.
Edward Thomson 2d486781 2017-06-12T12:02:27 repository: don't fail to create config option in inmemory repo When in an in-memory repository - without a configuration file - do not fail to create a configuration object.
Edward Thomson 9d49a43c 2017-06-12T12:01:10 repository_item_path: return ENOTFOUND when appropriate Disambiguate error values: return `GIT_ENOTFOUND` when the item cannot exist in the repository (perhaps because the repository is inmemory or otherwise not backed by a filesystem), return `-1` when there is a hard failure.
Patrick Steinhardt f148258a 2017-06-12T16:19:45 tests: odb: add tests with multiple backends Prior to pulling out and extending the fake backend, it was quite cumbersome to write tests for very specific scenarios regarding backends. But as we have made it more generic, it has become much easier to do so. As such, this commit adds multiple tests for scenarios with multiple backends for the ODB. The changes also include a test for a very targeted scenario: when one backend finds a matching object via `read_prefix` but the last backend returns `GIT_ENOTFOUND`, and object hash verification is turned off, we fail to reset the error code to `GIT_OK`. This causes us to segfault later on, when doing a double-free on the returned object.
Patrick Steinhardt 6e010bb1 2017-06-12T15:43:56 tests: odb: allow passing fake objects to the fake backend Right now, the fake backend is quite restrained in the way it works: we pass it an OID which it is to return later as well as an error code we want it to return. While this is sufficient for existing tests, we can make the fake backend a little bit more generic in order to allow us to test additional scenarios. To do so, we change the backend to not accept an error code and an OID which it is to return for queries, but instead a simple array of OIDs with their respective blob contents. On each query, the fake backend simply iterates through this array and returns the first matching object.
Patrick Steinhardt 369cb45f 2017-06-12T15:21:58 tests: do not reuse OID from backend In order to make the fake backend more useful, we want to enable it holding multiple object references. To do so, we need to decouple it from the single fake OID it currently holds, which we simply move up into the calling tests.
Patrick Steinhardt 2add34d0 2017-06-12T14:53:46 tests: odb: move fake backend into its own file The fake backend used by the test suite `odb::backend::nonrefreshing` is useful to have some low-level tests for the ODB layer. As such, we move the implementation into its own `backend_helpers` module.
Edward Thomson 9927e958 2017-06-12T16:01:22 Merge pull request #4261 from RogerGee/fix_wait_while_ack smart_protocol: fix parsing of server ACK responses
Patrick Steinhardt 2ade8fb0 2017-06-12T07:33:41 Merge pull request #4264 from libgit2/ethomson/read_prefix odb_read_prefix: reset error in backends loop
Edward Thomson cb3010c5 2017-06-12T12:56:40 odb_read_prefix: reset error in backends loop When looking for an object by prefix, we query all the backends so that we can ensure that there is no ambiguity. We need to reset the `error` value between backends; otherwise the first backend may find an object by prefix, but subsequent backends may not. If we do not reset the `error` value then it will remain at `GIT_ENOTFOUND` and `read_prefix_1` will fail, despite having actually found an object.
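A hedged sketch of the loop shape described above; the backend callback type and the wrapper function are hypothetical, and only the `GIT_ENOTFOUND`/`GIT_PASSTHROUGH` handling mirrors the described fix:

```
#include <git2.h>     /* GIT_ENOTFOUND, GIT_PASSTHROUGH */
#include <stddef.h>

typedef int (*read_prefix_fn)(void *backend);

static int read_prefix_all(read_prefix_fn *fns, void **backends, size_t n)
{
	int error = 0;
	size_t i, found = 0;

	for (i = 0; i < n; i++) {
		error = fns[i](backends[i]);

		if (error == GIT_ENOTFOUND || error == GIT_PASSTHROUGH) {
			error = 0;     /* this backend simply doesn't have the object */
			continue;
		}
		if (error < 0)
			return error;  /* hard failure */

		found++;               /* keep querying to detect ambiguity */
	}

	return found ? 0 : GIT_ENOTFOUND;
}
```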
Edward Thomson fb3fc837 2017-06-12T11:45:09 repository_item_path: error messages lowercased
Edward Thomson bd692809 2017-06-11T12:32:00 Merge pull request #4262 from libgit2/ethomson/bump-v26 Update version number to 0.26
Edward Thomson 2a3cc403 2017-06-11T12:23:34 Update version number to v0.26
Edward Thomson a1b4cafd 2017-06-11T12:21:23 changelog: add some final 0.26 changes
Edward Thomson 29ef7d3f 2017-06-11T10:58:35 Merge pull request #4254 from pks-t/pks/changelog-v0.26 CHANGELOG: add various changes introduced since v0.25
Edward Thomson 6f960b55 2017-06-11T10:37:46 Merge pull request #4088 from chescock/packfile-name-using-complete-hash Ensure packfiles with different contents have different names
Edward Thomson d2c4f764 2017-06-11T09:54:04 Merge pull request #4260 from libgit2/ethomson/forced_checkout_2 Update to forced checkout and untracked files
Edward Thomson 4a0df574 2017-06-10T18:46:35 git_futils_rmdir: only allow `EBUSY` when asked Only ignore `EBUSY` from `rmdir` when the `GIT_RMDIR_SKIP_NONEMPTY` bit is set.
Edward Thomson 83989d70 2017-06-08T22:23:53 checkout: cope with untracked files in directory deletion When deleting a directory during checkout, do not simply delete the directory, since there may be untracked files. Instead, go into the iterator and examine each file. In the original code (the code with the faulty assumption), we look to see if there's an index entry beneath the directory that we want to remove. E.g., it looks to see if we have a workdir entry foo and an index entry foo/bar.txt. If this is not the case, then the working directory must have precious files in that directory. This part is okay. The part that's not okay is if there is an index entry foo/bar.txt. It just blows away the whole damned directory. That's not cool. Instead, by simply pushing the directory itself onto the stack and iterating each entry, we will deal with the files one by one - whether they're in the index (and can be force removed) or not (and are precious). The original code was a bad optimization, assuming that we didn't need to git_iterator_advance_into if there was any index entry in the folder. That's wrong - we could have optimized this iff all folder entries are in the index. Instead, we need to simply dig into the directory and analyze its entries.
Patrick Steinhardt 0ef405b3 2017-02-15T14:05:10 checkout: do not delete directories with untracked entries If the `GIT_CHECKOUT_FORCE` flag is given to any of the `git_checkout` invocations, we remove files which were previously staged. But while doing so, we unfortunately also remove unstaged files in a directory which contains at least one staged file, resulting in potential data loss. This commit adds two tests to verify behavior.
Roger Gee e141f079 2017-06-10T11:46:09 smart_protocol: fix parsing of server ACK responses Fix ACK parsing in wait_while_ack() internal function. This patch handles the case where multi_ack_detailed mode sends 'ready' ACKs. The existing functionality would bail out too early, thus causing the processing of the ensuing packfile to fail if/when 'ready' ACKs were sent.
Patrick Steinhardt a1510880 2017-06-07T08:32:41 CHANGELOG: add various changes introduced since v0.25
Edward Thomson e476d528 2017-06-08T22:54:30 Merge pull request #4259 from pks-t/pks/fsync-option-rename settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION`
Patrick Steinhardt 6c23704d 2017-06-08T21:40:18 settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION` Initially, the setting has been solely used to enable the use of `fsync()` when creating objects. Since then, its use has been extended to also cover references and index files. As the option is not yet part of any release, we can still correct this by renaming the option to something more sensible, indicating that it is not only related to objects. This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also move the variable from the object to the repository source code.
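The renamed option is set through libgit2's public options API; a minimal usage example:

```
#include <git2.h>

int main(void)
{
	git_libgit2_init();

	/* fsync() data written into the gitdir: objects, references, index */
	git_libgit2_opts(GIT_OPT_ENABLE_FSYNC_GITDIR, 1);

	git_libgit2_shutdown();
	return 0;
}
```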
Edward Thomson 458cea5c 2017-06-08T14:22:24 Merge pull request #4255 from pks-t/pks/buffer-grow-errors Buffer growing cleanups
Edward Thomson 90500d81 2017-06-08T13:56:22 Merge pull request #4253 from pks-t/pks/cov-fixes Coverity fixes
Patrick Steinhardt 90388aa8 2017-06-06T15:02:23 refdb_fs: be explicit about using null-OID if we cannot resolve ref
Patrick Steinhardt 78a8f68f 2017-06-06T14:57:31 path: only set dotgit flags when configs were read
Patrick Steinhardt 9be4c303 2017-06-06T14:54:48 worktree: use `git__free` instead of `free`
Patrick Steinhardt 0f642f31 2017-06-06T14:54:19 refs: properly report errors from `update_wt_heads`
Patrick Steinhardt 0c28c72d 2017-06-06T14:53:45 fileops: check return value of `git_path_dirname`
Patrick Steinhardt a693b873 2017-06-07T10:20:44 buffer: use `git_buf_init` with length The `git_buf_init` function has an optional length parameter, which will cause the buffer to be initialized and allocated in one step. This can be used instead of static initialization with `GIT_BUF_INIT` followed by a `git_buf_grow`. This patch does so for two functions where it is applicable.
Patrick Steinhardt 4796c916 2017-06-07T09:56:31 buffer: return errors for `git_buf_init` and `git_buf_attach` Both the `git_buf_init` and `git_buf_attach` functions may call `git_buf_grow` in case they were given an allocation length as parameter. As such, it is possible for these functions to fail when we run out of memory. While it probably won't be used anytime soon, it does indeed make sense to also record this fact by returning an error code from both functions. As they belong to the internal API only, this change does not break our interface.
Patrick Steinhardt 9a8386a2 2017-06-07T09:50:54 buffer: consistently use `ENSURE_SIZE` to grow buffers on-demand The `ENSURE_SIZE` macro can be used to grow a buffer if its currently allocated size does not suffice for a required target size. While most of the code already uses this macro, the `git_buf_join` and `git_buf_join3` functions do not yet use it. Due to the macro first checking whether we have to grow the buffer at all, this has the benefit of saving a function call when it is not needed. While this is nice to have, it will probably not matter at all performance-wise -- instead, this only serves for consistency across the code.
Patrick Steinhardt e82dd813 2017-06-08T11:52:32 buffer: fix `ENSURE_SIZE` macro referencing wrong variable While the `ENSURE_SIZE` macro gets a reference to both the buffer that is to be resized and a new size, we were not consistently referencing the passed buffer, but instead a variable `buf`, which is not passed in. Funnily enough, we never noticed because our buffers seem to always be named `buf` wherever the macro was being used. Fix the macro by always using the passed-in buffer. While at it, add parentheses around all mentions of passed-in variables, as should be done with macros to avoid subtle errors. Found-by: Edward Thomson
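A hedged sketch of the fixed macro shape (not necessarily libgit2's exact definition): every use of the parameter `b` references `b` itself, parenthesized, instead of a local that happens to be called `buf`.

```
/* Sketch only: grow the passed-in buffer `b` to at least `d` bytes,
 * bailing out of the enclosing function if growing fails. */
#define ENSURE_SIZE(b, d) \
	if ((d) > (b)->asize && git_buf_grow((b), (d)) < 0) \
		return -1
```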
Patrick Steinhardt 97eb5ef0 2017-06-07T10:05:54 buffer: rely on `GITERR_OOM` set by `git_buf_try_grow` The function `git_buf_try_grow` consistently calls `giterr_set_oom` whenever growing the buffer fails due to insufficient memory being available. So in fact, we do not have to do this ourselves when a call to any buffer-growing function has failed due to an OOM situation. But we still do so in two functions, which this patch cleans up.
Edward Thomson 3a8801ae 2017-06-08T10:55:47 Merge pull request #4258 from pks-t/pks/sha1dc-update SHA1DC update
Patrick Steinhardt db1abffa 2017-06-07T14:59:38 sha1dc: do not use standard includes The updated SHA1DC library allows us to use custom includes instead of the standard includes. Due to cross-platform requirements, we provide some custom system include files, for example the "stdint.h" file on Win32. Because of this, we want to make sure to avoid breaking cross-platform compatibility when SHA1DC is enabled. To use the new mechanism, we can simply define `SHA1DC_NO_STANDARD_INCLUDES`. Furthermore, we can specify custom include files via two defines, which we now use to include our "common.h" header.
Patrick Steinhardt 63d86c27 2017-06-07T14:50:16 sha1dc: update to fix errors with endianess and unaligned access This updates our version of SHA1DC to e139984 (Merge pull request #35 from lidl/master, 2017-05-30).
Edward Thomson 3bc95cfe 2017-06-07T14:42:12 Merge pull request #4236 from pks-t/pks/index-v4-fixes Fix path computations for compressed index entries
Edward Thomson 6a13cf1e 2017-06-07T13:56:22 Merge pull request #4256 from libgit2/ethomson/unc_tests (Temporarily) disable UNC tests
Edward Thomson f218508f 2017-06-07T10:54:48 ctest: temporarily disable UNC path tests (Temporarily) disable UNC path tests to work around AppVeyor issues.
Patrick Steinhardt 40139fe6 2017-06-07T07:38:06 Merge pull request #4251 from Keruspe/master Fix build with libressl
Marc-Antoine Perennou f28744a5 2017-06-05T10:11:20 openssl_stream: fix building with libressl OpenSSL v1.1 has introduced a new way of initializing the library without having to call various functions of different subsystems. In libgit2, we have been adapting to that change with 88520151f (openssl_stream: use new initialization function on OpenSSL version >=1.1, 2017-04-07), where we added an #ifdef depending on the OpenSSL version. This change broke building with libressl, though, which has not changed its API in the same way. Fix the issue by expanding the #ifdef condition to use the old way of initializing with libressl. Signed-off-by: Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
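A hedged sketch of the guard described above: LibreSSL advertises a large `OPENSSL_VERSION_NUMBER` but keeps the pre-1.1 initialization API, so the version check alone does not suffice. The helper name is illustrative.

```
#include <openssl/opensslv.h>
#include <openssl/ssl.h>

static void init_ssl_library(void)
{
#if OPENSSL_VERSION_NUMBER < 0x10100000L || defined(LIBRESSL_VERSION_NUMBER)
	/* old-style initialization, also used by LibreSSL */
	SSL_load_error_strings();
	SSL_library_init();
#else
	/* OpenSSL >= 1.1 initializes itself via a single call */
	OPENSSL_init_ssl(0, NULL);
#endif
}
```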