Log

Author Commit Date Message
Edward Thomson f031e20b 2017-07-23T03:41:52 travis: build with patched libcurl Ubuntu trusty has a bug in curl when using NTLM credentials in a proxy, dereferencing a null pointer and causing segmentation faults. Use a custom-patched version of libcurl that avoids this issue.
Edward Thomson e0568621 2017-07-19T13:55:55 Merge pull request #4250 from pks-t/pks/config-file-iteration Configuration file fixes with includes
Edward Thomson a94a5402 2017-07-19T13:28:32 Merge pull request #4272 from pks-t/pks/patch-id Patch ID calculation
Patrick Steinhardt 1b329089 2017-05-31T22:27:19 config_file: refuse modifying included variables Modifying variables pulled in by an included file currently succeeds, but it doesn't actually do what one would expect, as refreshing the configuration will cause the values to reappear. As we are currently not really able to support this use case, we will instead just return an error for deleting and setting variables which were included via an include.
Patrick Steinhardt 3a7f7a6e 2017-05-31T14:43:46 config_file: pass reader directly to callbacks Previously, the callbacks passed to `config_parse` got the reader via a pointer to a pointer. This allowed the callbacks to update the caller's `reader` variable when the array holding it had been reallocated. As the array is no longer present, we can simplify the code by making the reader a plain pointer.
Patrick Steinhardt 73df75d8 2017-05-31T14:34:48 config_file: refactor include handling Current code for configuration files uses the `reader` structure to parse configuration files and store additional metadata like the file's path and checksum. These structures are stored within an array in the backend itself, which causes multiple problems. First, it does not make sense to keep the file's contents around with the backend itself. While this data is usually free'd before being added to the backend, this brings along somewhat intricate lifecycle problems. A better solution is to store only the file paths as well as the checksum of the currently parsed content. The second problem is that the `reader` structures are stored inside an array. When re-parsing configuration files due to changed contents, we may cause this array to be reallocated, requiring us to update pointers held by callers. Furthermore, we do not keep track of includes which are already associated with a reader inside of this array. This causes us to add readers multiple times to the backend, e.g. in the scenario of refreshing configurations. This commit fixes these shortcomings. We introduce a split between the parsing data and the configuration file's metadata. The `reader` now only holds the file's contents and the parser state, while the new `config_file` structure holds the file's path and checksum. Furthermore, the new structure is recursive in that it also holds references to the files it directly includes. The diskfile is changed to only store the top-level configuration file. These changes allow for further refactorings and make the code much easier to understand.
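To make the described split concrete, here is a minimal illustrative sketch of the two structures; the field names and internal headers are assumptions and not necessarily what config_file.c actually uses:

```c
#include "buffer.h"     /* internal libgit2 header: git_buf */
#include "array.h"      /* internal libgit2 header: git_array_t */
#include "git2/oid.h"   /* git_oid */

/* Parse state only: contents of the file currently being parsed. */
typedef struct {
	git_buf buffer;        /* the file's contents */
	const char *read_ptr;  /* current position of the parser */
} reader;

/* File metadata: path, checksum and the files it directly includes. */
typedef struct config_file {
	git_oid checksum;                          /* checksum of last parsed content */
	char *path;                                /* path of this configuration file */
	git_array_t(struct config_file) includes;  /* directly included files */
} config_file;
```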
Patrick Steinhardt 28c2cc3d 2017-05-31T16:41:44 config_file: move reader into `config_read` only Right now, we have multiple call sites which initialize a `reader` structure. As the structure is only actually used inside of `config_read`, we can move the reader into the `config_read` function itself and simply pass in the configuration file instead, which improves readability.
Patrick Steinhardt 6f7aab0c 2017-06-06T09:45:11 tests: config::include: use init and cleanup functions
Patrick Steinhardt 83bcd3a1 2017-05-31T22:45:25 config_file: refresh all files if includes were modified Currently, we only re-parse the top-level configuration file when it has changed itself. This can cause problems when an include is changed, as we were not updating all values correctly. Instead of conditionally re-parsing only refreshed files, the logic becomes much clearer and easier to follow if we always re-parse the top-level configuration file when either the file itself or one of its included configuration files has changed on disk. This commit implements this logic. Note that this might impact performance in some cases, as we need to re-read all configuration files whenever any of the included files changed. It could increase performance to re-parse only those include files which have actually changed, but this would compromise maintainability of the code without much gain. The only case where we would gain anything is when includes are actually used and only those includes are updated, which is probably too unusual a scenario to be worth optimizing.
Patrick Steinhardt 56a7a264 2017-05-31T14:50:40 config_file: remove unused backend field from parse data The backend passed to `config_read` is never actually used anymore, so we can remove it from the function and the `parse_data` structure.
Patrick Steinhardt 8149f850 2017-07-14T08:30:18 Merge pull request #4306 from libgit2/cmn/tag-bad-signature signature: don't leave a dangling pointer to the strings on parse failure
Carlos Martín Nieto d1dbb3ae 2017-07-12T07:40:16 signature: don't leave a dangling pointer to the strings on parse failure If the signature is invalid but we only detect that after allocating the strings, we free them. We do, however, leave those pointers dangling in the structure the caller gave us, which can lead to a double-free. Set these pointers to `NULL` after freeing their memory to avoid this.
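A simplified sketch of the pattern behind this fix (not the exact parser code): free the partially filled fields and clear the caller-visible pointers so a later `git_signature_free` cannot free them again:

```c
#include <stdlib.h>
#include <git2.h>

/* On parse failure, release what was allocated and null the pointers in
 * the caller-provided structure to prevent a double-free later on. */
static int signature_parse_error(git_signature *sig)
{
	free(sig->name);
	free(sig->email);
	sig->name = NULL;
	sig->email = NULL;
	return -1;
}
```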
Patrick Steinhardt c4c95bf4 2017-07-10T08:50:11 Merge pull request #4287 from AndreyG/reset/interface-cosmetics git_reset_*: pass parameters as const pointers
Patrick Steinhardt 0e165686 2017-07-07T10:03:34 Merge pull request #4291 from pks-t/pks/regex-header-confusion tests: config: fix missing declaration causing error
Patrick Steinhardt 1f7af277 2017-07-05T11:52:47 tests: config: fix missing declaration causing error On systems where we pull in our distributed version of the regex library, all tests in config::readonly fail. This error is actually quite interesting: the test suite is unable to find the declaration of `git_path_exists` and assumes it has a signature of `int git_path_exists(const char *)`. But actually, it has a `bool` return value. Due to this confusion, the compiler performs an incorrect conversion and the `cl_assert(!git_path_exists("file"))` checks erroneously fail, even when the function does in fact return the correct value. The error was actually introduced by 56893bb9a (cmake: consistently use TARGET_INCLUDE_DIRECTORIES if available, 2017-06-28), unfortunately by myself. Due to the delayed addition of include directories, we will now find the "config.h" header inside of the "deps/regex" directory instead of inside the "src/" directory, where it should be. As such, we are missing definitions for the `git_config_file__ondisk` and `git_path_exists` symbols. The correct fix here would be to fix the order in which include search directories are added. But due to the current restructuring of CMakeLists.txt, I'm refraining from doing so and will delay the proper fix a bit. Instead, we paper over the issue by explicitly including "path.h" to fix its prototype. This ignores the issue that `git_config_file__ondisk` is undeclared, as its signature is correctly identified by the compiler.
Andrey Davydov d4e03be6 2017-06-30T11:21:18 git_reset_*: pass parameters as const pointers
Patrick Steinhardt c26ce784 2017-06-28T12:26:04 Merge branch 'AndreyG/cmake/modernization'
Patrick Steinhardt 56893bb9 2017-06-28T12:11:44 cmake: consistently use TARGET_INCLUDE_DIRECTORIES if available Instead of using INCLUDE_DIRECTORIES for the libgit2_clar test suite, we should just be using TARGET_INCLUDE_DIRECTORIES if the CMake version is greater than 2.8.11.
Andrey Davydov 22de81e6 2017-06-21T08:25:36 cmake: use `target_include_directories` for modern cmake Apply `target_include_directories` when CMAKE_VERSION >= 2.8.12
Patrick Steinhardt f5f13b50 2017-06-27T19:38:50 Merge pull request #4280 from ids1024/htons Convert port with htons() in p_getaddrinfo()
Patrick Steinhardt 6b2133b4 2017-06-27T12:46:15 Merge pull request #4235 from pks-t/pks/out-of-tree-builds Out of tree builds
Patrick Steinhardt 6f02a4d1 2017-06-26T08:17:20 Merge pull request #4278 from pks-t/pks/cmake-disable-http-parser cmake: Permit disabling external http-parser
Patrick Steinhardt 89a34828 2017-06-16T13:34:43 diff: implement function to calculate patch ID The upstream git project provides the ability to calculate a so-called patch ID. Quoting from git-patch-id(1): "A 'patch ID' is nothing but a sum of SHA-1 of the file diffs associated with a patch, with whitespace and line numbers ignored." Patch IDs can be used to identify two patches which are probably the same thing, e.g. when a patch has been cherry-picked to another branch. This commit implements a new function `git_diff_patchid`, which takes a diff and derives an OID from it. Note the different terminology here: a patch in libgit2 is the set of differences in a single file, while a diff can contain multiple patches for different files. The implementation matches the upstream implementation and should derive the same OID for the same diff. In fact, some code has been directly derived from the upstream implementation. The upstream implementation has two different modes to calculate patch IDs, namely a stable and an unstable mode. The old way of calculating the patch IDs was unstable in the sense that a different ordering of the diffs led to different results. This oversight was fixed in git 1.9, but as git tries hard to never break existing workflows, the old and unstable way is still the default. The newer and stable way does not care about the ordering of the diff hunks, and in fact it is the mode that should probably be used today. So right now, we only implement the stable way of generating the patch ID.
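A hedged usage sketch of the new function, assuming the signature `git_diff_patchid(git_oid *out, git_diff *diff, git_diff_patchid_options *opts)` introduced by this commit; error handling is omitted for brevity:

```c
#include <stdio.h>
#include <git2.h>

/* Compute the patch ID of an already-created diff and print it as hex. */
static void print_patchid(git_diff *diff)
{
	git_oid id;
	char hex[GIT_OID_HEXSZ + 1];

	git_diff_patchid(&id, diff, NULL);   /* NULL: default (stable) options */
	git_oid_tostr(hex, sizeof(hex), &id);
	printf("patch id: %s\n", hex);
}
```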
Ian Douglas Scott ef09eae1 2017-06-23T10:10:29 Convert port with htons() in p_getaddrinfo() `sin_port` should be in network byte order.
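The point of the fix, in one line of C: `sin_port` must hold the port in network byte order, so the host-order value has to go through `htons()` first (minimal sketch, not the actual emulation code):

```c
#include <netinet/in.h>
#include <arpa/inet.h>

/* struct sockaddr_in stores the port in network byte order. */
static void set_port(struct sockaddr_in *addr, unsigned short port)
{
	addr->sin_port = htons(port);
}
```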
Patrick Steinhardt b6ed67c2 2017-05-10T12:54:14 tests: refs::crashes: create sandbox for creating symref The test `refs::crashes::double_free` operates on our in-source "testrepo.git" repository without creating a copy first. As the test will try to create a new symbolic reference, this will fail when we want to do a pure out-of-tree build with a read-only source tree. Fix the issue by creating a sandbox first.
Patrick Steinhardt 6ee7d37a 2017-05-10T12:51:06 tests: index::tests: create sandboxed repo for locking The test `index::tests::can_lock_index` operates on the "testrepo.git" repository located inside of our source tree. While this is okay for tests which do read-only operations on these resources, this specific test tries to lock the index by creating a lock. This will obviously fail on out-of-tree builds with read-only source trees. Fix the issue by creating a sandbox first.
Patrick Steinhardt 4305fcca 2017-05-10T12:14:36 cmake: generate clar.suite in binary directory Change the output path of generate.py to generate the clar.suite file inside of the binary directory. This fixes out of tree builds with read-only source trees as we now refrain from writing anything into the source tree.
Patrick Steinhardt 8d22bcea 2017-05-10T12:21:53 generate.py: generate clar cache in binary directory The source directory should usually not be touched when using out-of-tree builds. But next to the previously fixed "clar.suite" file, we are also writing the ".clarcache" into the project's source tree, breaking the builds. Fix this by also honoring the output directory for the ".clarcache" file.
Patrick Steinhardt 9e240bd2 2017-05-10T12:02:03 generate.py: enable overriding path of generated clar.suite The generate.py script will currently always write the generated clar.suite file into the scanned directory, breaking out-of-tree builds with read-only source directories. Fix this issue by adding another option to allow overriding the output path of the generated file.
Patrick Steinhardt 0a513a94 2017-05-10T11:46:00 generate.py: disallow generating test suites for multiple paths Our generate.py script is used to extract and write test suite declarations into the clar.suite file. As is, the script accepts multiple directories on the command line and will generate this file for each of these directories. The generate.py script will always write the clar.suite file into the directory which is about to be scanned. This actually breaks out-of-tree builds with libgit2, as the file will be generated in the source tree instead of in the build tree. This is noticed especially in the case where the source tree is mounted read-only, rendering us unable to build unit tests. Due to us accepting multiple paths which are to be scanned, it is not trivial to fix though. The first solution which comes into mind would be to re-create the directory hierarchy at a given output path or use unique names for the clar.suite files, but this is rather cumbersome and magical. The second and cleaner solution would be to fold all directories into a single clar.suite file, but this would probably break some use-cases. Seeing that we do not ever pass multiple directories to generate.py, we will now simply retire support for this. This allows us to later on introduce an additional option to specify the path where the clar.suite file will be generated at, defaulting to "clar.suite" inside of the scanned directory.
Jason Cooper 845f661d 2017-06-21T20:02:48 cmake: Permit disabling external http-parser When attempting to build libgit2 as an isolated static lib, CMake gleefully attempts to use the system http-parser. This is typically seen on Linux systems which install header files with every package, such as Gentoo. Allow developers to forcibly disable using the system http-parser with the config switch USE_EXT_HTTP_PARSER. Defaults to ON to maintain previous behavior. Signed-off-by: Jason Cooper <jason@lakedaemon.net>
Edward Thomson fa948752 2017-06-21T13:08:45 Merge pull request #4277 from pks-t/pks/merge-coverity-fix merge: fix potential free of uninitialized memory
Patrick Steinhardt 4dc87e72 2017-06-21T13:35:46 merge: fix potential free of uninitialized memory The function `merge_diff_mark_similarity_exact` may error out early and, when it does so, free the `ours_deletes_by_oid` and `theirs_deletes_by_oid` variables. While the first one can never be uninitialized because the first call actually assigns to it, the second variable can be freed without being initialized. Fix the issue by initializing both variables to `NULL`.
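A hedged sketch of the pattern this fix relies on; `step_one` and `step_two` are hypothetical stand-ins for the two hash-map allocations in `merge_diff_mark_similarity_exact`:

```c
#include <git2.h>
#include "oidmap.h"   /* internal libgit2 header providing git_oidmap */

/* Hypothetical stand-ins for the two allocation steps. */
extern int step_one(git_oidmap **out);
extern int step_two(git_oidmap **out);

/* Initialize *both* maps to NULL so the cleanup path can free them
 * unconditionally, even if we error out before the second assignment. */
static int mark_similarity_sketch(void)
{
	git_oidmap *ours_deletes_by_oid = NULL;
	git_oidmap *theirs_deletes_by_oid = NULL;
	int error;

	if ((error = step_one(&ours_deletes_by_oid)) < 0)
		goto done;
	if ((error = step_two(&theirs_deletes_by_oid)) < 0)
		goto done;

done:
	git_oidmap_free(ours_deletes_by_oid);
	git_oidmap_free(theirs_deletes_by_oid);
	return error;
}
```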
Edward Thomson 40294f38 2017-06-21T12:25:52 Merge pull request #4202 from mitesch/linear_exact_rename merge: perform exact rename detection in linear time
Edward Thomson 62c44c49 2017-06-21T12:25:26 Merge pull request #4211 from pks-t/pks/trusty travis: upgrade container to Ubuntu 14.04
Patrick Steinhardt 77850789 2017-06-21T08:11:12 Merge pull request #4273 from azdavis/fix-template-dir-empty-string Fix template dir empty string
Patrick Steinhardt 7c8d460f 2017-04-21T07:58:46 travis: upgrade container to Ubuntu 14.04 Ubuntu 12.04 (Precise Pangolin) reaches end of life on April 28th, 2017. As such, we should update our build infrastructure to use the next available LTS release, which is Ubuntu 14.04 LTS (Trusty Tahr). Note that Trusty is still considered beta quality on Travis. But considering we are able to correctly build and test libgit2, this seems to be a non-issue for us. Switch over our default distribution to Trusty. As Precise still has extended support for paying customers, add an additional job which compiles libgit2 on the old release.
Patrick Steinhardt 06619904 2017-04-26T13:04:23 travis: cibuild: set up our own sshd server Some of our tests require a running SSH server. Currently, we simply run against the SSH server provided and started by Travis itself. As our Linux tests run in a sudo-less environment, we have no control over its configuration and startup/shutdown procedure. While this has been no problem until now, it will become a problem as soon as we migrate over to newer Precise images, as the SSH server does not have any host keys set up. Luckily, we can simply set up our own unprivileged SSH server. This has the benefit of us being able to modify its configuration even in a sudo-less environment. This commit sets up the unprivileged SSH server on port 2222.
Patrick Steinhardt c2c95ad0 2017-04-26T13:16:18 tests: online::clone: use URL of test server All our tests running against a local SSH server usually read the server's URL from environment variables. But the online::clone::ssh_cert test fails to do so and instead always connects to "ssh://localhost/foo". This assumption breaks whenever the SSH server is not running on the standard port, e.g. when it is running as a user. Fix the issue by using the URL provided by the environment.
Ariel Davis af720bb6 2017-06-16T23:19:31 repository: remove trailing whitespace
Ariel Davis 9a46c777 2017-06-16T21:02:26 repository: do not initialize templates if dir is an empty string
Ariel Davis 8e912e79 2017-06-16T21:05:58 tests: try to init with empty template path
Edward Thomson 15e11937 2017-06-14T13:31:20 CHANGELOG: document git_filter_init and GIT_FILTER_INIT
Edward Thomson 8296da5f 2017-06-14T10:49:28 Merge pull request #4267 from mohseenrm/master adding GIT_FILTER_VERSION to GIT_FILTER_INIT as part of convention
Edward Thomson 4e257dab 2017-06-14T10:48:04 Merge pull request #4268 from pks-t/pks/homebrew-dupes-deprecation travis: replace use of deprecated homebrew/dupes tap
Edward Thomson 953427b3 2017-06-14T10:47:55 Merge pull request #4269 from pks-t/pks/tests Test improvements
Mohseen Mukaddam a78441bc 2017-06-13T11:05:40 Adding git_filter_init for initializing `git_filter` struct + unit test
Mohseen Mukaddam 7f7dabda 2017-06-12T13:40:47 adding GIT_FILTER_VERSION to GIT_FILTER_INIT as part of convention
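A hedged sketch of how a custom filter would use the two pieces added here, assuming `git_filter_init(git_filter *, unsigned int)` and the `GIT_FILTER_INIT` macro from `git2/sys/filter.h`:

```c
#include <git2/sys/filter.h>

/* Static initialization now carries GIT_FILTER_VERSION as well. */
static git_filter my_filter_static = GIT_FILTER_INIT;

/* Runtime initialization via the new helper. */
static int setup_filter(git_filter *filter)
{
	return git_filter_init(filter, GIT_FILTER_VERSION);
}
```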
Patrick Steinhardt a180e7d9 2017-06-13T11:10:19 tests: odb: add more low-level backend tests Introduce a new test suite "odb::backend::simple", which utilizes the fake backend to exercise the ODB abstraction layer. While such tests already exist for the case where multiple backends are put together, no direct testing for functionality with a single backend exists yet.
Patrick Steinhardt b2e53f36 2017-06-13T11:39:36 tests: odb: implement `exists_prefix` for the fake backend The fake backend currently implements all reading functions except for the `exists_prefix` one. Implement it to enable further testing of the ODB layer.
Patrick Steinhardt 983e627d 2017-06-13T11:38:59 tests: odb: use correct OID length The `search_object` function takes the OID length as one of its parameters, where its maximum length is `GIT_OID_HEXSZ`. The `exists` function of the fake backend used `GIT_OID_RAWSZ` though, leading to only the first half of the OID being used when finding the correct object.
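For context, the two constants differ by a factor of two, which is why passing the raw size where a hex length is expected matches only half the OID; a tiny illustrative check:

```c
#include <assert.h>
#include <git2.h>

/* GIT_OID_RAWSZ is the binary SHA-1 size (20 bytes); GIT_OID_HEXSZ is
 * the length of its hexadecimal representation (40 characters). */
static void oid_size_check(void)
{
	assert(GIT_OID_RAWSZ == 20);
	assert(GIT_OID_HEXSZ == 2 * GIT_OID_RAWSZ);
}
```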
Patrick Steinhardt c4cbb3b1 2017-06-13T11:38:14 tests: odb: have the fake backend detect ambiguous prefixes In order to be able to test the ODB prefix functions, we need to be able to detect ambiguous prefixes in case multiple objects with the same prefix exist in the fake ODB. Extend `search_object` to detect ambiguous queries and have callers return its error code instead of always returning `GIT_ENOTFOUND`.
Patrick Steinhardt 95170294 2017-06-13T11:08:28 tests: core: test initialization of `git_proxy_options` Initialization of the `git_proxy_options` structure is never tested anywhere. Include it in our usual initialization test in "core::structinit::compare".
Patrick Steinhardt bee423cc 2017-06-13T10:29:23 tests: network: add missing include for `git_repository_new` A newly added test uses the `git_repository_new` function without the corresponding header file being included. While this works due to the compiler deducing the correct function signature, we should obviously just include the function's declaration file.
Patrick Steinhardt a64532e1 2017-06-13T11:05:09 cmake: disable optimization on debug builds While our debug builds on MSVC platforms already tune the code optimizer to aid debugging code, all the other platforms still use the default optimization level. This makes it hard for developers on these platforms to debug code while maintaining their sanity, due to optimizations like inlined code, elided variables, etc. To help this common use case, we can simply follow the MSVC example and turn off code optimization with "-O0" for debug builds. While it would be preferable to instead use "-Og", supported by more modern compilers, we cannot guarantee that this level is available on all supported platforms.
Patrick Steinhardt 61399953 2017-06-13T11:03:38 cmake: set "-D_DEBUG" on non-Windows platforms In our code base, we have some occasions where we use the "_DEBUG" preprocessor macro to enable additional code which should not be part of release builds. While we define this flag on MSVC platforms, it is guarded by the conditional `WIN32 AND NOT CYGWIN` on other platforms since 19be3f9e6 (Improve MSVC compiler, linker flags, 2013-02-13). While this condition can be fulfilled by the MSVC platform, it is never encountered due to being part of the `ELSE` part of `IF (MSVC)`. The intention of the conditional was most likely to avoid the preprocessor macro on Cygwin platforms, but to include it on everything else. As such, the correct condition here would be `IF (NOT CYGWIN)` instead. But digging a bit further, the condition is only ever used in two places: 1. To skip the test in "core::structinit", which should also work on Cygwin. 2. In "src/win32/git2.rc", where it is used to set additional file flags. As this file is included in MSVC builds only, it cannot cause any harm to set "_DEBUG" on Cygwin here. As such, we can simply drop the conditional and always set "-D_DEBUG" on all platforms.
Patrick Steinhardt e94be4c0 2017-06-13T11:08:19 cmake: remove stale comment on precompiled headers In commit 9f75a9ce7 (Turning on runtime checks when building debug under MSVC., 2012-03-30), we introduced a comment "Precompiled headers", which actually refers to no related commands. Seeing that the comment never had anything to refer to, we can simply remove it here.
Patrick Steinhardt 96d02989 2017-06-13T08:09:38 travis: replace use of deprecated homebrew/dupes tap The formulae provided by the homebrew/dupes tap are deprecated since at least April 4, 2017, with formulae having been migrated to homebrew/core. Replace the deprecated reference to "homebrew/dupes/zlib" with only "zlib".
Edward Thomson 2ca088bd 2017-06-12T22:47:54 Merge pull request #4265 from pks-t/pks/read-prefix-tests Read prefix tests
Edward Thomson 99e40a67 2017-06-12T21:23:44 Merge pull request #4263 from libgit2/ethomson/config_for_inmemory_repo Allow creation of a configuration object in an in-memory repository
Edward Thomson d9914fb7 2017-06-12T21:22:27 Merge pull request #4266 from libgit2/ethomson/travis-explicit-openssl travis: install openssl explicitly
Edward Thomson 844e85f2 2017-06-12T20:00:21 travis: install openssl explicitly
Edward Thomson fe9a5dd3 2017-06-12T12:00:14 remote: ensure we can create an anon remote on inmemory repo Given a wholly in-memory repository, ensure that we can create an anonymous remote and perform actions on it.
Edward Thomson 2d486781 2017-06-12T12:02:27 repository: don't fail to create config option in inmemory repo When in an in-memory repository - without a configuration file - do not fail to create a configuration object.
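A hedged sketch of the scenario these two commits enable, using `git_repository_new()` from `git2/sys/repository.h` to create a wholly in-memory repository; the URL is a placeholder and error handling is omitted:

```c
#include <git2.h>
#include <git2/sys/repository.h>

/* Create an in-memory repository (no .git directory on disk), then
 * obtain a configuration object and an anonymous remote from it. */
static void inmemory_example(void)
{
	git_repository *repo = NULL;
	git_config *cfg = NULL;
	git_remote *remote = NULL;

	git_repository_new(&repo);
	git_repository_config(&cfg, repo);
	git_remote_create_anonymous(&remote, repo, "https://example.com/repo.git");

	git_remote_free(remote);
	git_config_free(cfg);
	git_repository_free(repo);
}
```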
Edward Thomson 9d49a43c 2017-06-12T12:01:10 repository_item_path: return ENOTFOUND when appropriate Disambiguate error values: return `GIT_ENOTFOUND` when the item cannot exist in the repository (perhaps because the repository is inmemory or otherwise not backed by a filesystem), return `-1` when there is a hard failure.
Patrick Steinhardt f148258a 2017-06-12T16:19:45 tests: odb: add tests with multiple backends Prior to pulling out and extending the fake backend, it was quite cumbersome to write tests for very specific scenarios regarding backends. But as we have made it more generic, it has become much easier to do so. As such, this commit adds multiple tests for scenarios with multiple backends for the ODB. The changes also include a test for a very targeted scenario: when one backend finds a matching object via `read_prefix`, but the last backend returns `GIT_ENOTFOUND` and object hash verification is turned off, we fail to reset the error code to `GIT_OK`. This causes us to segfault later on, when doing a double-free on the returned object.
Patrick Steinhardt 6e010bb1 2017-06-12T15:43:56 tests: odb: allow passing fake objects to the fake backend Right now, the fake backend is quite limited in how it works: we pass it an OID which it is to return later, as well as an error code we want it to return. While this is sufficient for existing tests, we can make the fake backend a little bit more generic in order to allow testing additional scenarios. To do so, we change the backend to no longer accept an error code and an OID which it is to return for queries, but instead a simple array of OIDs with their respective blob contents. On each query, the fake backend simply iterates through this array and returns the first matching object.
Patrick Steinhardt 369cb45f 2017-06-12T15:21:58 tests: do not reuse OID from backend In order to make the fake backend more useful, we want to enable it holding multiple object references. To do so, we need to decouple it from the single fake OID it currently holds, which we simply move up into the calling tests.
Patrick Steinhardt 2add34d0 2017-06-12T14:53:46 tests: odb: move fake backend into its own file The fake backend used by the test suite `odb::backend::nonrefreshing` is useful to have some low-level tests for the ODB layer. As such, we move the implementation into its own `backend_helpers` module.
Edward Thomson 9927e958 2017-06-12T16:01:22 Merge pull request #4261 from RogerGee/fix_wait_while_ack smart_protocol: fix parsing of server ACK responses
Patrick Steinhardt 2ade8fb0 2017-06-12T07:33:41 Merge pull request #4264 from libgit2/ethomson/read_prefix odb_read_prefix: reset error in backends loop
Edward Thomson cb3010c5 2017-06-12T12:56:40 odb_read_prefix: reset error in backends loop When looking for an object by prefix, we query all the backends so that we can ensure that there is no ambiguity. We need to reset the `error` value between backends; otherwise the first backend may find an object by prefix, but subsequent backends may not. If we do not reset the `error` value then it will remain at `GIT_ENOTFOUND` and `read_prefix_1` will fail, despite having actually found an object.
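A simplified sketch of the loop logic described here, with a hypothetical backend type standing in for the real ODB backends; the essential line is the reset of `error` after a `GIT_ENOTFOUND`:

```c
#include <stddef.h>
#include <git2.h>

/* Hypothetical backend vtable entry used only for this sketch. */
typedef struct {
	int (*read_prefix)(void **out, const git_oid *prefix, size_t len);
} backend_sketch;

/* Query every backend so prefix ambiguity can be detected, but reset
 * `error` on GIT_ENOTFOUND so a backend that lacks the object cannot
 * turn an earlier hit into an overall failure. */
static int read_prefix_sketch(void **out, backend_sketch *backends, size_t n,
                              const git_oid *prefix, size_t len)
{
	void *found = NULL;
	int error = GIT_ENOTFOUND;
	size_t i;

	for (i = 0; i < n; i++) {
		void *result = NULL;

		error = backends[i].read_prefix(&result, prefix, len);
		if (error == GIT_ENOTFOUND) {
			error = 0;  /* the fix: do not carry ENOTFOUND into the next round */
			continue;
		}
		if (error < 0)
			return error;

		if (found != NULL)
			return GIT_EAMBIGUOUS;  /* two backends matched the prefix */
		found = result;
	}

	if (found == NULL)
		return GIT_ENOTFOUND;

	*out = found;
	return 0;
}
```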
Edward Thomson fb3fc837 2017-06-12T11:45:09 repository_item_path: error messages lowercased
Edward Thomson bd692809 2017-06-11T12:32:00 Merge pull request #4262 from libgit2/ethomson/bump-v26 Update version number to 0.26
Edward Thomson 2a3cc403 2017-06-11T12:23:34 Update version number to v0.26
Edward Thomson a1b4cafd 2017-06-11T12:21:23 changelog: add some final 0.26 changes
Edward Thomson 29ef7d3f 2017-06-11T10:58:35 Merge pull request #4254 from pks-t/pks/changelog-v0.26 CHANGELOG: add various changes introduced since v0.25
Edward Thomson 6f960b55 2017-06-11T10:37:46 Merge pull request #4088 from chescock/packfile-name-using-complete-hash Ensure packfiles with different contents have different names
Edward Thomson d2c4f764 2017-06-11T09:54:04 Merge pull request #4260 from libgit2/ethomson/forced_checkout_2 Update to forced checkout and untracked files
Edward Thomson 4a0df574 2017-06-10T18:46:35 git_futils_rmdir: only allow `EBUSY` when asked Only ignore `EBUSY` from `rmdir` when the `GIT_RMDIR_SKIP_NONEMPTY` bit is set.
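A minimal sketch of the rule (hypothetical helper and flag name stand in for the internal futils API): `EBUSY` from `rmdir()` is only tolerated when the skip-non-empty flag was passed:

```c
#include <errno.h>
#include <unistd.h>

#define RMDIR_SKIP_NONEMPTY (1 << 0)  /* stand-in for GIT_RMDIR_SKIP_NONEMPTY */

/* Only swallow EBUSY when the caller explicitly asked for non-empty
 * directories to be skipped; otherwise report it as an error. */
static int try_rmdir(const char *path, unsigned int flags)
{
	if (rmdir(path) == 0)
		return 0;

	if (errno == EBUSY && (flags & RMDIR_SKIP_NONEMPTY))
		return 0;

	return -1;
}
```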
Edward Thomson 83989d70 2017-06-08T22:23:53 checkout: cope with untracked files in directory deletion When deleting a directory during checkout, do not simply delete the directory, since there may be untracked files. Instead, go into the iterator and examine each file. In the original code (the code with the faulty assumption), we look to see if there's an index entry beneath the directory that we want to remove. Eg, it looks to see if we have a workdir entry foo and an index entry foo/bar.txt. If this is not the case, then the working directory must have precious files in that directory. This part is okay. The part that's not okay is if there is an index entry foo/bar.txt. It just blows away the whole damned directory. That's not cool. Instead, by simply pushing the directory itself onto the stack and iterating each entry, we will deal with the files one by one - whether they're in the index (and can be force removed) or not (and are precious). The original code was a bad optimization, assuming that we didn't need to git_iterator_advance_into if there was any index entry in the folder. That's wrong - we could have optimized this iff all folder entries are in the index. Instead, we need to simply dig into the directory and analyze its entries.
Patrick Steinhardt 0ef405b3 2017-02-15T14:05:10 checkout: do not delete directories with untracked entries If the `GIT_CHECKOUT_FORCE` flag is given to any of the `git_checkout` invocations, we remove files which were previously staged. But while doing so, we unfortunately also remove unstaged files in a directory which contains at least one staged file, resulting in potential data loss. This commit adds two tests to verify behavior.
Roger Gee e141f079 2017-06-10T11:46:09 smart_protocol: fix parsing of server ACK responses Fix ACK parsing in wait_while_ack() internal function. This patch handles the case where multi_ack_detailed mode sends 'ready' ACKs. The existing functionality would bail out too early, thus causing the processing of the ensuing packfile to fail if/when 'ready' ACKs were sent.
Patrick Steinhardt a1510880 2017-06-07T08:32:41 CHANGELOG: add various changes introduced since v0.25
Edward Thomson e476d528 2017-06-08T22:54:30 Merge pull request #4259 from pks-t/pks/fsync-option-rename settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION`
Patrick Steinhardt 6c23704d 2017-06-08T21:40:18 settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION` Initially, the setting has been solely used to enable the use of `fsync()` when creating objects. Since then, the use has been extended to also cover references and index files. As the option is not yet part of any release, we can still correct this by renaming the option to something more sensible, indicating that it no longer relates only to objects. This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also move the variable from the object to the repository source code.
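Callers reach the renamed setting through the global options entry point; a short sketch assuming the renamed constant:

```c
#include <git2.h>

/* Globally enable fsync() of files written into the gitdir
 * (objects, references and the index). */
static void enable_gitdir_fsync(void)
{
	git_libgit2_opts(GIT_OPT_ENABLE_FSYNC_GITDIR, 1);
}
```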
Edward Thomson 458cea5c 2017-06-08T14:22:24 Merge pull request #4255 from pks-t/pks/buffer-grow-errors Buffer growing cleanups
Edward Thomson 90500d81 2017-06-08T13:56:22 Merge pull request #4253 from pks-t/pks/cov-fixes Coverity fixes
Patrick Steinhardt 90388aa8 2017-06-06T15:02:23 refdb_fs: be explicit about using null-OID if we cannot resolve ref
Patrick Steinhardt 78a8f68f 2017-06-06T14:57:31 path: only set dotgit flags when configs were read
Patrick Steinhardt 9be4c303 2017-06-06T14:54:48 worktree: use `git__free` instead of `free`
Patrick Steinhardt 0f642f31 2017-06-06T14:54:19 refs: properly report errors from `update_wt_heads`
Patrick Steinhardt 0c28c72d 2017-06-06T14:53:45 fileops: check return value of `git_path_dirname`
Patrick Steinhardt a693b873 2017-06-07T10:20:44 buffer: use `git_buf_init` with length The `git_buf_init` function has an optional length parameter, which will cause the buffer to be initialized and allocated in one step. This can be used instead of static initialization with `GIT_BUF_INIT` followed by a `git_buf_grow`. This patch does so for two functions where it is applicable.
Patrick Steinhardt 4796c916 2017-06-07T09:56:31 buffer: return errors for `git_buf_init` and `git_buf_attach` Both the `git_buf_init` and `git_buf_attach` functions may call `git_buf_grow` in case they were given an allocation length as parameter. As such, it is possible for these functions to fail when we run out of memory. While it probably won't be used anytime soon, it does indeed make sense to also record this fact by returning an error code from both functions. As they belong to the internal API only, this change does not break our interface.
Patrick Steinhardt 9a8386a2 2017-06-07T09:50:54 buffer: consistently use `ENSURE_SIZE` to grow buffers on-demand The `ENSURE_SIZE` macro can be used to grow a buffer if its currently allocated size does not satisfy the required target size. While most of the code already uses this macro, the `git_buf_join` and `git_buf_join3` functions do not yet use it. Due to the macro first checking whether we have to grow the buffer at all, this has the benefit of saving a function call when it is not needed. While this is nice to have, it will probably not matter at all performance-wise -- instead, this only serves for consistency across the code.
Patrick Steinhardt e82dd813 2017-06-08T11:52:32 buffer: fix `ENSURE_SIZE` macro referencing wrong variable While the `ENSURE_SIZE` macro gets a reference to both the buffer that is to be resized and a new size, we were not consistently referencing the passed buffer, but instead a variable `buf`, which is not passed in. Funnily enough, we never noticed because our buffers seem to always be named `buf` whenever the macro was being used. Fix the macro by always using the passed-in buffer. While at it, add parentheses around all mentions of passed-in variables, as should be done with macros to avoid subtle errors. Found-by: Edward Thomson
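Conceptually, the corrected macro references its parameter rather than a hard-coded `buf` and parenthesizes every argument use; a hedged sketch (the real macro in buffer.c may differ in details):

```c
/* Grow buffer `b` to at least `d` bytes, failing the enclosing function
 * on allocation errors. Note: every use of `b` and `d` is parenthesized
 * and nothing refers to a variable named `buf`. */
#define ENSURE_SIZE(b, d) \
	if ((d) > (b)->asize && git_buf_grow((b), (d)) < 0) \
		return -1;
```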
Patrick Steinhardt 97eb5ef0 2017-06-07T10:05:54 buffer: rely on `GITERR_OOM` set by `git_buf_try_grow` The function `git_buf_try_grow` consistently calls `giterr_set_oom` whenever growing the buffer fails due to insufficient memory being available. So in fact, we do not have to do this ourselves when a call to any buffer-growing function has failed due to an OOM situation. But we still do so in two functions, which this patch cleans up.
Edward Thomson 3a8801ae 2017-06-08T10:55:47 Merge pull request #4258 from pks-t/pks/sha1dc-update SHA1DC update
Patrick Steinhardt db1abffa 2017-06-07T14:59:38 sha1dc: do not use standard includes The updated SHA1DC library allows us to use custom includes instead of the standard ones. Due to cross-platform requirements, we provide some custom system include files, like for example the "stdint.h" file on Win32. Because of this, we want to make sure to avoid breaking cross-platform compatibility when SHA1DC is enabled. To use the new mechanism, we can simply define `SHA1DC_NO_STANDARD_INCLUDES`. Furthermore, we can specify custom include files via two defines, which we now use to include our "common.h" header.
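A sketch of the mechanism; the two per-file macro names below are assumptions based on SHA1DC's documented custom-include hooks, not verified against the vendored copy:

```c
/* Build-time defines (normally passed via the compiler command line):
 * disable SHA1DC's standard includes and point it at libgit2's own
 * "common.h" for the pieces it would otherwise get from <stdint.h>. */
#define SHA1DC_NO_STANDARD_INCLUDES
#define SHA1DC_CUSTOM_INCLUDE_SHA1_C "common.h"
#define SHA1DC_CUSTOM_INCLUDE_UBC_CHECK_C "common.h"
```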