Log

Author Commit Date Message
Edward Thomson 6f960b55 2017-06-11T10:37:46 Merge pull request #4088 from chescock/packfile-name-using-complete-hash Ensure packfiles with different contents have different names
Edward Thomson d2c4f764 2017-06-11T09:54:04 Merge pull request #4260 from libgit2/ethomson/forced_checkout_2 Update to forced checkout and untracked files
Edward Thomson 4a0df574 2017-06-10T18:46:35 git_futils_rmdir: only allow `EBUSY` when asked Only ignore `EBUSY` from `rmdir` when the `GIT_RMDIR_SKIP_NONEMPTY` bit is set.
Edward Thomson 83989d70 2017-06-08T22:23:53 checkout: cope with untracked files in directory deletion When deleting a directory during checkout, do not simply delete the directory, since there may be untracked files. Instead, go into the iterator and examine each file. In the original code (the code with the faulty assumption), we look to see if there's an index entry beneath the directory that we want to remove. E.g., it looks to see if we have a workdir entry foo and an index entry foo/bar.txt. If this is not the case, then the working directory must have precious files in that directory. This part is okay. The part that's not okay is if there is an index entry foo/bar.txt. It just blows away the whole damned directory. That's not cool. Instead, by simply pushing the directory itself onto the stack and iterating each entry, we will deal with the files one by one - whether they're in the index (and can be force removed) or not (and are precious). The original code was a bad optimization, assuming that we didn't need to git_iterator_advance_into if there was any index entry in the folder. That's wrong - we could have optimized this iff all folder entries are in the index. Instead, we need to simply dig into the directory and analyze its entries.
Patrick Steinhardt 6c23704d 2017-06-08T21:40:18 settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION` Initially, the setting was solely used to enable the use of `fsync()` when creating objects. Since then, its use has been extended to also cover references and index files. As the option is not yet part of any release, we can still correct this by renaming the option to something more sensible that does not suggest it applies only to objects. This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also move the variable from the object to the repository source code.
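For illustration, enabling the renamed option from application code is a one-line fragment using the public `git_libgit2_opts` entry point:

```c
/* Opt in to fsync()ing object, reference and index writes in the gitdir;
 * the setting is global and defaults to off. */
git_libgit2_opts(GIT_OPT_ENABLE_FSYNC_GITDIR, 1);
```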
Edward Thomson 458cea5c 2017-06-08T14:22:24 Merge pull request #4255 from pks-t/pks/buffer-grow-errors Buffer growing cleanups
Edward Thomson 90500d81 2017-06-08T13:56:22 Merge pull request #4253 from pks-t/pks/cov-fixes Coverity fixes
Patrick Steinhardt 90388aa8 2017-06-06T15:02:23 refdb_fs: be explicit about using null-OID if we cannot resolve ref
Patrick Steinhardt 78a8f68f 2017-06-06T14:57:31 path: only set dotgit flags when configs were read
Patrick Steinhardt 9be4c303 2017-06-06T14:54:48 worktree: use `git__free` instead of `free`
Patrick Steinhardt 0f642f31 2017-06-06T14:54:19 refs: properly report errors from `update_wt_heads`
Patrick Steinhardt 0c28c72d 2017-06-06T14:53:45 fileops: check return value of `git_path_dirname`
Patrick Steinhardt a693b873 2017-06-07T10:20:44 buffer: use `git_buf_init` with length The `git_buf_init` function has an optional length parameter, which will cause the buffer to be initialized and allocated in one step. This can be used instead of static initialization with `GIT_BUF_INIT` followed by a `git_buf_grow`. This patch does so for two functions where it is applicable.
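A small fragment sketching the two equivalent internal-API initialization patterns, assuming a caller that wants 128 bytes up front:

```c
/* Static initialization followed by an explicit grow... */
git_buf a = GIT_BUF_INIT;
if (git_buf_grow(&a, 128) < 0)
	return -1;

/* ...versus initializing and allocating in one step (the error return
 * exists per the following entry). */
git_buf b;
if (git_buf_init(&b, 128) < 0)
	return -1;
```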
Patrick Steinhardt 4796c916 2017-06-07T09:56:31 buffer: return errors for `git_buf_init` and `git_buf_attach` Both the `git_buf_init` and `git_buf_attach` functions may call `git_buf_grow` in case they were given an allocation length as parameter. As such, it is possible for these functions to fail when we run out of memory. While it probably won't be used anytime soon, it does make sense to record this fact by returning an error code from both functions. As they belong to the internal API only, this change does not break our interface.
Patrick Steinhardt 9a8386a2 2017-06-07T09:50:54 buffer: consistently use `ENSURE_SIZE` to grow buffers on-demand The `ENSURE_SIZE` macro can be used to grow a buffer if its currently allocated size does not suffice for a required target size. While most of the code already uses this macro, the `git_buf_join` and `git_buf_join3` functions do not yet use it. Because the macro first checks whether we have to grow the buffer at all, this has the benefit of saving a function call when it is not needed. While this is nice to have, it will probably not matter at all performance-wise -- instead, this only serves for consistency across the code.
Patrick Steinhardt e82dd813 2017-06-08T11:52:32 buffer: fix `ENSURE_SIZE` macro referencing wrong variable While the `ENSURE_SIZE` macro gets a reference to both the buffer that is to be resized and a new size, we were not consistently referencing the passed buffer, but instead a variable `buf`, which is not passed in. Funnily enough, we never noticed because our buffers seem to always be named `buf` whenever the macro was being used. Fix the macro by always using the passed-in buffer. While at it, add braces around all mentions of passed-in variables as should be done with macros to avoid subtle errors. Found-by: Edward Thompson
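An approximate before/after sketch of the macro fix described above (the exact definition may differ slightly from the real source):

```c
/* Before: the condition refers to `buf`, which is not a macro parameter. */
#define ENSURE_SIZE(b, d) \
	if ((d) > buf->asize && git_buf_grow(b, (d)) < 0) \
		return -1;

#undef ENSURE_SIZE /* redefinition below is for illustration only */

/* After: consistently use the passed-in buffer and parenthesize parameters. */
#define ENSURE_SIZE(b, d) \
	if ((b)->asize < (d) && git_buf_grow((b), (d)) < 0) \
		return -1;
```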
Patrick Steinhardt 97eb5ef0 2017-06-07T10:05:54 buffer: rely on `GITERR_OOM` set by `git_buf_try_grow` The function `git_buf_try_grow` consistently calls `giterr_set_oom` whenever growing the buffer fails due to insufficient memory being available. So in fact, we do not have to do this ourselves when a call to any buffer-growing function has failed due to an OOM situation. But we still do so in two functions, which this patch cleans up.
Edward Thomson 3a8801ae 2017-06-08T10:55:47 Merge pull request #4258 from pks-t/pks/sha1dc-update SHA1DC update
Patrick Steinhardt 63d86c27 2017-06-07T14:50:16 sha1dc: update to fix errors with endianness and unaligned access This updates our version of SHA1DC to e139984 (Merge pull request #35 from lidl/master, 2017-05-30).
Edward Thomson 3bc95cfe 2017-06-07T14:42:12 Merge pull request #4236 from pks-t/pks/index-v4-fixes Fix path computations for compressed index entries
Marc-Antoine Perennou f28744a5 2017-06-05T10:11:20 openssl_stream: fix building with libressl OpenSSL v1.1 has introduced a new way of initializing the library without having to call various functions of different subsystems. In libgit2, we have been adapting to that change with 88520151f (openssl_stream: use new initialization function on OpenSSL version >=1.1, 2017-04-07), where we added an #ifdef depending on the OpenSSL version. This change broke building with libressl, though, which has not changed its API in the same way. Fix the issue by expanding the #ifdef condition to use the old way of initializing with libressl. Signed-off-by: Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
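The resulting initialization logic is roughly the following sketch of the expanded condition (not the verbatim source):

```c
#if OPENSSL_VERSION_NUMBER >= 0x10100000L && !defined(LIBRESSL_VERSION_NUMBER)
	/* OpenSSL >= 1.1: one call sets up error strings and SSL algorithms. */
	OPENSSL_init_ssl(0, NULL);
#else
	/* Older OpenSSL and libressl: keep the legacy initialization calls. */
	SSL_load_error_strings();
	SSL_library_init();
#endif
```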
Patrick Steinhardt 064a60e9 2017-05-19T14:06:15 index: verify we have enough space left when writing index entries In our code writing index entries, we carry around a `disk_size` representing how much memory we have in total and pass this value to `git_encode_varint` to do bounds checks. This does not make much sense, as the value is already out of date by the time we pass it on. Fix this by subtracting used memory from `disk_size` as we go along. Furthermore, assert that we've actually got enough space left to do the final path memcpy.
Patrick Steinhardt c71dff7e 2017-05-19T13:49:34 index: fix shared prefix computation when writing index entry When using compressed index entries, each entry's path is preceded by a varint encoding how long the shared prefix with the previous index entry actually is. We currently encode a length of `(path_len - same_len)`, which is doubly wrong. First, `path_len` is already set to `path_len - same_len` previously. Second, we want to encode the shared prefix rather than the un-shared suffix length. Fix this by using `same_len` as the varint value instead.
Patrick Steinhardt 83e0392c 2017-05-19T13:39:05 index: also sanity check entry size with compressed entries We have a check in place whether the index has enough data left for the required footer after reading an index entry, but this was only used for uncompressed entries. Move the check down a bit so that it is executed for both compressed and uncompressed index entries.
Patrick Steinhardt 350d2c47 2017-05-19T14:22:35 index: remove file-scope entry size macros All index entry size computations are now performed in `index_entry_size`. As such, we do not need the file-scope macros for computing these sizes anymore. Remove them and move the `entry_size` macro into the `index_entry_size` function.
Patrick Steinhardt 46b67034 2017-05-19T13:59:53 index: don't right-pad paths when writing compressed entries Our code to write index entries to disk does not check whether the entry that is to be written should use prefix compression for the path. As such, we were overallocating memory and adding bogus right-padding to the resulting index entries. As there is no padding allowed in the index version 4 format, this should actually result in an invalid index. Fix this by re-using the newly extracted `index_entry_size` function.
Patrick Steinhardt 29f498e0 2017-05-19T13:38:34 index: move index entry size computation into its own function Create a new function `index_entry_size` which encapsulates the logic to calculate how much space is needed for an index entry, whether it is simple/extended or compressed/uncompressed. This can later be re-used by our code writing index entries.
Patrick Steinhardt 8ceb890b 2017-05-19T12:35:21 index: set last written index entry in foreach-entry-loop The last written disk entry is currently being set inside of the function `write_disk_entry`. Make the behavior a bit more obvious by instead setting it inside of `write_entries` while iterating over all entries.
Patrick Steinhardt 11d0be23 2017-05-12T10:01:43 index: set last entry when reading compressed entries To calculate the path of a compressed index entry, we need to know the preceding entry's path. While we do actually set the first predecessor correctly to "", we fail to update this while reading the entries. Fix the issue by updating `last` inside of the loop. Previously, we've been passing a double-pointer to `read_entry`, which it didn't update. As it is more obvious to update the pointer inside the loop itself, though, we can simply convert it to a normal pointer.
Patrick Steinhardt febe8c14 2017-05-10T14:27:12 index: fix confusion with shared prefix in compressed path names The index version 4 introduced compressed path names for the entries. From the git.git index-format documentation: At the beginning of an entry, an integer N in the variable width encoding [...] is stored, followed by a NUL-terminated string S. Removing N bytes from the end of the path name for the previous entry, and replacing it with the string S yields the path name for this entry. But instead of stripping N bytes from the previous path's string and using the remaining prefix, we were instead simply concatenating the previous path with the current entry path, which is obviously wrong. Fix the issue by correctly copying the first N bytes of the previous entry only and concatenating the result with our current entry's path.
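A self-contained sketch of the corrected reconstruction; names and buffer handling are illustrative, not libgit2's actual code:

```c
#include <stdio.h>
#include <string.h>

/* Strip N bytes from the end of the previous entry's path, then append the
 * NUL-terminated suffix S, as described by the index-format documentation. */
static void reconstruct_path(char *out, const char *prev, size_t strip, const char *suffix)
{
	size_t keep = strlen(prev) - strip;

	memcpy(out, prev, keep);    /* keep the shared prefix */
	strcpy(out + keep, suffix); /* append the entry's own suffix */
}

int main(void)
{
	char path[64];

	/* previous path "src/buffer.c", N = 8, S = "index.c" -> "src/index.c" */
	reconstruct_path(path, "src/buffer.c", 8, "index.c");
	printf("%s\n", path);
	return 0;
}
```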
Patrick Steinhardt 8a5e7aae 2017-05-22T12:53:44 varint: fix computation for remaining buffer space When encoding varints to a buffer, we want to remain sure that the required buffer space does not exceed what is actually available. Our current check does not do the right thing, though, in that it does not honor that our `pos` variable counts the position down instead of up. As such, we will require too much memory for small varints and not enough memory for big varints. Fix the issue by correctly calculating the required size as `(sizeof(varint) - pos)`. Add a test which failed before.
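A minimal sketch of a downward-counting varint encoder with the corrected space check, `(sizeof(scratch) - pos)` being the number of bytes actually produced (illustrative, not the exact libgit2 implementation):

```c
#include <stdint.h>
#include <string.h>

static int encode_varint(unsigned char *buf, size_t bufsize, uintmax_t value)
{
	unsigned char scratch[16];
	size_t pos = sizeof(scratch) - 1; /* counts down from the end */

	scratch[pos] = value & 0x7f;
	while (value >>= 7)
		scratch[--pos] = 0x80 | (--value & 0x7f);

	if (bufsize < sizeof(scratch) - pos)
		return -1; /* caller's buffer is too small for the encoded varint */

	memcpy(buf, scratch + pos, sizeof(scratch) - pos);
	return (int)(sizeof(scratch) - pos);
}
```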
Edward Thomson dd0aa811 2017-06-04T22:46:07 Merge branch 'pr/4228'
Edward Thomson 82e929a8 2017-06-04T19:35:39 Merge pull request #4239 from roblg/toplevel-dir-ignore-fix Fix issue with directory glob ignore in subdirectories
Edward Thomson 04de614b 2017-06-04T19:03:07 Merge pull request #4243 from pks-t/pks/submodule-workdir Submodule working directory
Carlos Martín Nieto a1023a43 2017-05-20T17:18:07 Merge pull request #4179 from libgit2/ethomson/expand_tilde Introduce home directory expansion function for config files, attribute files
Carlos Martín Nieto 9b1260d3 2017-05-20T14:18:32 Merge pull request #4097 from implausible/fix/auto-detect-proxy-callbacks Fix proxy auto detect not utilizing callbacks
Carlos Martín Nieto e694e4e9 2017-05-20T14:17:36 Merge pull request #4174 from libgit2/ethomson/set_head_to_tag git_repository_set_head: use tag name in reflog
Carlos Martín Nieto 119bdd86 2017-05-20T14:13:27 Merge pull request #4231 from wabain/open-revrange revparse: support open-ended ranges
Chris Hescock c0e54155 2017-01-11T10:39:59 indexer: name pack files after trailer hash Upstream git.git has changed the way packfiles are named. Previously, they were named using a hash of the contained objects' OIDs; this has since been changed to use the hash of the complete packfile instead. See 1190a1acf (pack-objects: name pack files after trailer hash, 2013-12-05) in the git.git repository for more information on this change. This commit changes our logic to match the behavior of core git.
Patrick Steinhardt 2696c5c3 2017-05-19T09:21:17 repository: make check if repo is a worktree more strict To determine if a repository is a worktree or not, we currently check for the existence of a "gitdir" file inside of the repository's gitdir. While this is sufficient for non-broken repositories, we have at least one case of a subtly broken repository where there exists a gitdir file inside of a gitmodule. This will cause us to misidentify the submodule as a worktree. While this is not really a fault of ours, we can do better here by observing that a repository can only ever be a worktree iff its common directory and dotgit directory are different. This allows us to make our check whether a repo is a worktree or not more strict by doing a simple string comparison of these two directories. This will also allow us to do the right thing in the above case of a broken repository, as for submodules these directories will be the same. At the same time, this allows us to skip the `stat` check for the "gitdir" file for most repositories.
Patrick Steinhardt 9f9fd05f 2017-05-19T08:59:46 repository: factor out worktree check The check whether a repository is a worktree or not is currently done inside of `git_repository_open_ext`. As we want to extend this function later on, pull it out into its own function `repo_is_worktree` to ease working on it.
Patrick Steinhardt 32841973 2017-05-19T08:38:47 repository: improve parameter names for `find_repo` The out-parameters of `find_repo` containing found paths of a repository are a tad confusing, as they are not as obvious as they could be. Rename them like following to ease reading the code: - `repo_path` -> `gitdir_path` - `parent_path` -> `workdir_path` - `link_path` -> `gitlink_path` - `common_path` -> `commondir_path`
Patrick Steinhardt 57121a23 2017-05-19T08:34:32 repository: clear out-parameter instead of freeing it The `path` out-parameter of `find_repo` is being sanitized initially such that we do not try to append to existing content. The sanitization is done via `git_buf_free`, though, which forces us to needlessly reallocate the buffer later in the function. Fix this by using `git_buf_clear` instead.
Robert Gay c3b8e8b3 2017-05-14T10:28:05 Fix issue with directory glob ignore in subdirectories
Patrick Steinhardt 8d93a11c 2017-05-03T12:38:55 odb: fix printf formatter for git_off_t The fields `declared_size` and `received_bytes` of the `git_odb_stream` are both of type `git_off_t` which is defined as a signed integer. When passing these values to a printf-style string in `git_odb_stream__invalid_length`, though, we format these as PRIuZ, which is unsigned. Fix the issue by using PRIdZ instead, silencing warnings on macOS.
Patrick Steinhardt 7776db51 2017-05-03T12:15:12 odb: shut up gcc warnings regarding uninitialized variables The `error` variable is used as a return value in the out-section of both `odb_read_1` and `read_prefix_1`. While the value will actually always be initialized inside of this section, GCC fails to realize this due to interactions with the `found` variable: if `found` is set, the error will always be initialized. If it is not, we return early without reaching the out-statements. Shut up the warnings by initializing the error variable, even though it is unnecessary.
William Bain 8b107dc5 2017-05-03T11:20:57 revparse: support open-ended ranges Support '..' and '...' ranges where one side is not specified. The unspecified side defaults to HEAD. Closes #4223
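Usage would look roughly like the following fragment (the "v2.0" tag name is a made-up example):

```c
git_revspec revspec;

/* "v2.0.." is now equivalent to "v2.0..HEAD"; "..v2.0" to "HEAD..v2.0". */
if (git_revparse(&revspec, repo, "v2.0..") == 0) {
	/* revspec.from is the v2.0 object, revspec.to is HEAD */
	git_object_free(revspec.from);
	git_object_free(revspec.to);
}
```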
Patrick Steinhardt 883eeb5f 2017-05-02T12:35:59 worktree: switch over worktree pruning to an opts structure The current signature of `git_worktree_prune` accepts a flags field to alter its behavior. This is not as flexible as we'd like it to be when we want to enable passing additional options in the future. As the function has not been part of any release yet, we are still free to alter its current signature. This commit does so by using our usual pattern of an options structure, which is easily extendable without breaking the API.
Patrick Steinhardt 8264a30f 2017-05-02T10:11:28 worktree: support creating locked worktrees When creating a new worktree, we do have a potential race with us creating the worktree and another process trying to delete the same worktree as it is being created. As such, the upstream git project has introduced a flag `git worktree add --locked`, which will cause the newly created worktree to be locked immediately after its creation. This mitigates the race condition. We want to be able to mirror the same behavior. As such, a new flag `locked` is added to the options structure of `git_worktree_add` which allows the user to enable this behavior.
Patrick Steinhardt 2ce2a48f 2017-05-02T13:37:15 transports: ssh: clean up after libssh2 on exit After calling `libssh2_init`, we need to clean up after the library by executing `libssh2_exit` as soon as we exit. Register a shutdown handler to do so which simply calls `libssh2_exit`. This fixes several memory leaks.
Patrick Steinhardt 8c027351 2017-05-02T13:35:09 transports: ssh: report failure initializing libssh2 We unconditionally return success when initializing libssh2, regardless of whether `libssh2_init` signals success or an error. Fix this by checking its return code.
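Taken together, the two libssh2 entries above amount to roughly this sketch, using the internal `git__on_shutdown` registration (exact names and error text may differ from the real source):

```c
static void shutdown_ssh(void)
{
	libssh2_exit();
}

int git_transport_ssh_global_init(void)
{
	if (libssh2_init(0) < 0) {
		giterr_set(GITERR_SSH, "unable to initialize libssh2");
		return -1;
	}

	git__on_shutdown(shutdown_ssh);
	return 0;
}
```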
Patrick Steinhardt a7aa73a5 2017-05-02T10:02:36 worktree: introduce git_worktree_add options The `git_worktree_add` function currently accepts only a path and name for the new work tree. As we may want to expand these parameters in future versions without adding additional parameters to the function for every option, this commit introduces our typical pattern of an options struct. Right now, this structure is still empty, which will change with the next commit.
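With the options struct and the `locked` flag from the entries above, adding a locked worktree might look like this fragment; the field name and paths are assumptions based on the usual libgit2 options pattern:

```c
git_worktree *wt = NULL;
git_worktree_add_options opts = GIT_WORKTREE_ADD_OPTIONS_INIT;

opts.lock = 1; /* assumed field name: create the worktree locked right away */

if (git_worktree_add(&wt, repo, "hotfix", "/tmp/hotfix", &opts) == 0)
	git_worktree_free(wt);
```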
Edward Thomson 1dc89aab 2017-05-01T21:34:21 object validation: free some memleaks
Edward Thomson 34c13106 2017-05-01T21:32:24 signature: free dup'd buffers on parse error
Edward Thomson 4dbcf0e6 2017-05-01T19:34:04 remote: free the config snapshot This reverts commit 5552237 and frees the snapshot properly.
Edward Thomson be343b88 2017-05-01T18:56:55 worktrees: cleanup some memory leaks Be sure to clean up looked up references. Free buffers instead of merely clearing them. Use `git__free` instead of `free`.
Edward Thomson 13c1bf07 2017-05-01T16:17:48 Merge pull request #4197 from pks-t/pks/verify-object-hashes Verify object hashes
Edward Thomson d8702843 2017-05-01T16:11:56 Merge pull request #4206 from libgit2/cmn/transport-get-proxy transport: provide a getter for the proxy options
Edward Thomson 5700ee9c 2017-05-01T16:10:50 Merge pull request #4216 from pks-t/pks/debian-test-failures Debian HTTPS feature test failure
Edward Thomson f86f35d6 2017-05-01T15:23:54 Merge branch 'pr/4225'
Yichao Yu 90cdf44f 2017-04-29T13:00:07 Allow NULL refspec in git_remote_push Since this is allowed in `git_remote_upload`
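For example, given an already-looked-up `remote`, pushing with the remote's configured refspecs now works the same way as with `git_remote_upload`:

```c
git_push_options opts = GIT_PUSH_OPTIONS_INIT;

/* NULL refspecs: fall back to the push refspecs configured for the remote. */
int error = git_remote_push(remote, NULL, &opts);
```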
Yichao Yu 55522376 2017-04-29T12:28:35 Do not free config when creating remote The regression was introduced in 22261344de18b3cc60ee6937468d66a6a6a28875
Patrick Steinhardt e0973bc0 2017-04-28T14:05:15 odb: verify hashes in read_prefix_1 While the function reading an object from the complete OID already verifies OIDs, we do not yet do so for reading objects from a partial OID. Do so when strict OID verification is enabled.
Patrick Steinhardt 14109620 2017-04-28T14:03:54 odb: improve error handling in read_prefix_1 The read_prefix_1 function has several return statements sprinkled throughout the code. As we have to free memory upon getting an error, the free code has to be repeated at every single return -- which it is not, so we have a memory leak here. Refactor the code to use the typical `goto out` pattern, which will free data when an error has occurred. While we're at it, we can also improve the error message thrown when multiple ambiguous prefixes are found. It will now include the colliding prefixes.
Patrick Steinhardt 35079f50 2017-04-21T07:31:56 odb: add option to turn off hash verification Verifying hashsums of objects we are reading from the ODB may be costly as we have to perform an additional hashsum calculation on the object. Especially when reading large objects, the penalty can be as high as 35%, as can be seen when executing the equivalent of `git cat-file` with and without verification enabled. To mitigate for this, we add a global option for libgit2 which enables the developer to turn off the verification, e.g. when he can be reasonably sure that the objects on disk won't be corrupted.
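Turning the verification off is a one-liner through the global options; the option name below is an assumption, as the entry does not spell it out:

```c
/* Assumed option name; disables the hash check to trade safety for speed. */
git_libgit2_opts(GIT_OPT_ENABLE_STRICT_HASH_VERIFICATION, 0);
```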
Patrick Steinhardt 28a0741f 2017-04-10T09:30:08 odb: verify object hashes The upstream git.git project verifies objects when looking them up from disk. This avoids scenarios where objects have somehow become corrupt on disk, e.g. due to hardware failures or bit flips. While our mantra is usually to follow upstream behavior, we do not do so in this case, as we never check hashes of objects we have just read from disk. To fix this, we create a new error class `GIT_EMISMATCH` which denotes that we have looked up an object with a hashsum mismatch. `odb_read_1` will then, after having read the object from its backend, hash the object and compare the resulting hash to the expected hash. If hashes do not match, it will return an error. This obviously introduces another computation of checksums and could potentially impact performance. Note though that we usually perform I/O operations directly before doing this computation, and as such the actual overhead should be drowned out by I/O. Running our test suite seems to confirm this guess. On a Linux system with best-of-five timings, we had 21.592s with the check enabled and 21.590s with the check disabled. Note though that our test suite mostly contains very small blobs only. It is expected that repositories with bigger blobs may notice an increased hit by this check. In addition to a new test, we also had to change the odb::backend::nonrefreshing test suite, which now triggers a hashsum mismatch when looking up the commit "deadbeef...". This is expected, as the fake backend allocated inside of the test will return an empty object for the OID "deadbeef...", which will obviously not hash back to "deadbeef..." again. We can simply adjust the hash to equal the hash of the empty object here to fix this test.
Edward Thomson 7df580fa 2017-04-28T11:58:49 Merge pull request #4191 from pks-t/pks/wt-ref-renames Branch renames with worktrees
Edward Thomson 6cf25a39 2017-04-26T09:09:53 Merge pull request #4219 from pks-t/pks/socket-stream-addrinfo-loop socket_stream: continue to next addrinfo on socket creation failure
Edward Thomson cecd41fb 2017-04-26T09:08:51 Merge pull request #4217 from pks-t/pks/readonly-cfg-backend Honor read-only flag when writing to config backends
Patrick Steinhardt 954e06a8 2017-04-26T12:09:57 socket_stream: continue to next addrinfo on socket creation failure When connecting to a remote via socket stream, we first use getaddrinfo to obtain the possible connection methods followed by creating and connecting the socket. But when creating the socket, we error out as soon as we get an invalid socket instead of trying out other address hints returned by addrinfo. Fix this by continuing on invalid socket instead of returning an error. This fixes connection establishment with musl libc.
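The fixed connection loop boils down to something like this sketch (simplified, without libgit2's error reporting):

```c
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_to_host(const char *host, const char *port)
{
	struct addrinfo hints = {0}, *info, *p;
	int s = -1;

	hints.ai_socktype = SOCK_STREAM;
	hints.ai_family = AF_UNSPEC;

	if (getaddrinfo(host, port, &hints, &info) != 0)
		return -1;

	for (p = info; p != NULL; p = p->ai_next) {
		if ((s = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) < 0)
			continue; /* invalid socket: try the next candidate instead of bailing out */

		if (connect(s, p->ai_addr, p->ai_addrlen) == 0)
			break;    /* connected */

		close(s);
		s = -1;
	}

	freeaddrinfo(info);
	return s; /* -1 if no candidate worked */
}
```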
Patrick Steinhardt 95f29fb3 2017-04-25T12:40:13 config: skip r/o backends when writing Configuration backends have a readonly-flag which is currently used to distinguish configuration snapshots. But somewhat unexpectedly, we do not use the flag to prevent writing to a readonly backend but happily proceed to do so. This commit modifies logic to also honor the readonly flag for configuration setters. We will now traverse through all backends and pick the first one which is not marked as read-only whenever we want to write new configuration.
Patrick Steinhardt 64244131 2017-04-25T12:59:48 config_file: add missing include for `git_config_backend` The config_file.h header provides some inline declarations accessing the `git_config_backend`, but misses its declaration. Add the missing include for "git2/sys/config.h" to add it.
Patrick Steinhardt a4de1ae3 2017-04-25T10:14:19 cmake: define GIT_HTTPS when HTTPS is supported
Patrick Steinhardt 1cb30b1b 2017-04-25T09:48:59 diff_parse: free object instead of its pointer In e7330016a (diff_parse: check return value of `git_diff_init_options`, 2017-03-20), we've introduced an error check whether we're able to correctly initialize the diff options. This simple commit actually introduced a segfault in that we now try to free the pointer to the allocated diff in an error case, instead of the allocated diff itself. This commit fixes the issue.
Carlos Martín Nieto 8d89e409 2017-04-17T17:19:03 Merge pull request #4192 from libgit2/ethomson/win32_posix Refactor some of the win32 POSIX emulation
Edward Thomson 86536c7e 2017-04-17T15:40:03 win32: `remediation` not `cleanup` The `remediation` function is run in the retry loop in order to attempt to fix any problems that the prior run encountered. There is nothing "cleaned up". Clarify the name.
Carlos Martín Nieto 5c760960 2017-04-17T13:03:03 transport: provide a getter for the proxy options As with the callbacks, third-party implementations of smart subtransports cannot reach into the opaque struct and thus cannot know what options the user set. Add a getter for these options to copy the proxy options into something external implementors can use.
Edward Thomson f9d3b0d0 2017-04-12T09:21:26 Merge pull request #4201 from pks-t/pks/fileops-fd-leak fileops: fix leaking fd in `mmap_ro_file`
Patrick Steinhardt 38b6e700 2017-04-12T08:09:08 fileops: fix leaking fd in `mmap_ro_file` When the `git_futils_mmap_ro_file` function encounters an error after the file has been opened, it simply returns. Instead, we should close the opened file descriptor to avoid a leak. This commit fixes the issue.
Edward Thomson d476d024 2017-04-11T19:18:05 Merge pull request #4196 from pks-t/pks/filter-segfault filter: only close filter if it's been initialized correctly
Patrick Steinhardt 88520151 2017-04-07T13:02:50 openssl_stream: use new initialization function on OpenSSL version >=1.1 Previous to OpenSSL version 1.1, the user had to initialize at least the error strings as well as the SSL algorithms by himself. OpenSSL version 1.1 instead provides a new function `OPENSSL_init_ssl`, which handles initialization of all subsystems. As the new API call will by default load error strings and initialize the SSL algorithms, we can safely replace these calls when compiling against version 1.1 or later. This fixes a compiler error when compiling against OpenSSL version 1.1 which has been built without stubs for deprecated syntax.
Patrick Steinhardt 29081c2f 2017-04-07T12:54:33 openssl_stream: remove locking initialization on OpenSSL version >=1.1 Up to version 1.0, OpenSSL required us to provide a callback which implements a locking mechanism. Due to problems in the API design though this mechanism was inherently broken, especially regarding that the locking callback cannot report errors in an obvious way. Due to this shortcoming, the locking initialization has been completely removed in OpenSSL version 1.1. As the library has also been refactored to not make any use of these callback functions, we can safely remove all initialization of the locking subsystem if compiling against OpenSSL version 1.1 or higher. This fixes a compilation error when compiling against OpenSSL version 1.1 which has been built without stubs for deprecated syntax.
Patrick Steinhardt cf07db2f 2017-04-07T16:05:10 filter: only close filter if it's been initialized correctly In the function `git_filter_list_stream_data`, we initialize, write and subsequently close the stream which should receive content processed by the filter. While we skip writing to the stream if its initialization failed, we still try to close it unconditionally -- even if the initialization failed, where the stream might not be set at all, leading us to segfault. The semantics in this code are not really clear. The function handling the same logic for files instead of data seems to do the right thing here in only closing the stream when initialization succeeded. When stepping back a bit, this is only reasonable: if a stream cannot be initialized, the caller would not expect it to be closed again. So actually, both callers of `stream_list_init` fail to do so. The data streaming function will always close the stream and the file streaming function will not close the stream if writing to it has failed. The fix is thus two-fold: - callers of `stream_list_init` now close the stream iff it has been initialized - `stream_list_init` now closes the last initialized stream if the current stream in the chain failed to initialize Add a test which segfaulted previous to these changes.
Edward Thomson e572b631 2017-04-07T09:03:56 Merge pull request #4183 from pks-t/pks/coverity Coverity
Patrick Steinhardt 2a485dab 2017-04-04T18:55:57 refs: update worktree HEADs when renaming branches Whenever we rename a branch, we update the repository's symbolic HEAD reference if it currently points to the branch that is to be renamed. But with the introduction of worktrees, we also have to iterate over all HEADs of linked worktrees to adjust them. Do so.
Patrick Steinhardt 38fc5ab0 2017-04-04T18:44:43 branch: use `foreach_head` to see if a branch is checked out Previously, we have extracted the logic to find and iterate over all HEADs of a repository. Use this function in `git_branch_is_checked_out`.
Patrick Steinhardt 74511aa2 2017-04-04T18:44:29 repository: add function to iterate over all HEADs While we already provide functions to get the current repository's HEAD, it is quite involved to iterate over HEADs of both the repository and all linked work trees. This commit implements a function `git_repository_foreach_head`, which accepts a callback which is then called for all HEAD files.
Patrick Steinhardt 3e84aa50 2017-04-05T13:47:09 repository: get worktree HEAD via `git_reference__read_head` The functions `git_repository_head_for_worktree` and `git_repository_detached_head_for_worktree` both implement their own logic to read the HEAD reference file. Use the new function `git_reference__read_head` instead to unify the code paths.
Patrick Steinhardt 987f5659 2017-04-04T17:12:22 repository: extract function to get path to a file in a work tree The function `read_worktree_head` has the logic embedded to construct the path to `HEAD` in the work tree's git directory, which is quite useful for other callers. Extract the logic into its own function to make it reusable by others.
Patrick Steinhardt 8242cc1a 2017-04-04T18:18:45 repository: set error message if trying to set HEAD to a checked out one If trying to set the HEAD of a repository to another reference, we have to check whether this reference is already checked out in another linked work tree. If it is, we will refuse setting the HEAD and return an error, but do not set a meaningful error message. Add one.
Patrick Steinhardt 5b65ac25 2017-04-05T10:43:18 refs: implement function to read references from file Currently, we only provide functions to read references directly from a repository's reference store via e.g. `git_reference_lookup`. But in some cases, we may want to read files not connected to the current repository, e.g. when looking up HEAD of connected work trees. This commit implements `git_reference__read_head`, which will read out and allocate a reference at an arbitrary path.
Edward Thomson 89d403cc 2017-04-05T09:50:12 win32: enable `p_utimes` for readonly files Instead of failing to set the timestamp of a read-only file (like any object file), set it writable temporarily to update the timestamp.
Patrick Steinhardt 9daba9f4 2017-03-28T10:12:23 fileops: do not overwrite correct error message on mmap When executing `git_futils_mmap_ro_file`, we first try to guess whether the file is mmapable at all. Part of this check is whether the file is too large to be mmaped, which can be true on systems with 32 bit `size_t` types. The check is performed by first getting the file size with `git_futils_filesize` and then checking whether the returned size can be represented as `size_t`, returning an error if it cannot. While this test also catches the case where the function returned an error (as `-1` is not representable by `size_t`), we will set the misleading error message "file too large to mmap". But in fact, a negative return value from `git_futils_filesize` will be caused by the inability to fstat the file. Fix the error message by handling negative return values separately and not overwriting the error message in that case.
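A fragment sketching the corrected error handling; names and the return convention are taken from the description above, not the verbatim source:

```c
git_off_t len = git_futils_filesize(fd);

if (len < 0)
	return -1; /* fstat failed; keep the error message it already set */

if ((git_off_t)(size_t)len != len) {
	giterr_set(GITERR_OS, "file too large to mmap");
	return -1;
}
```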
Patrick Steinhardt 756138e4 2017-03-28T09:15:53 blame_git: check return value of `git__calloc` We do not check the return value of `git__calloc`, which may return `NULL` in out-of-memory situations. Fix the error by using `GITERR_CHECK_ALLOC`.
Patrick Steinhardt a76d7502 2017-03-28T09:12:34 path: short-circuit `git_path_apply_relative` on error Short-circuit the call to `git_path_resolve_relative` in case `git_buf_joinpath` returns an error. While this does not fix any immediate errors, the resulting code is easier to read and handles potential new error conditions raised by `git_buf_joinpath`.
Patrick Steinhardt cffd616a 2017-03-28T09:08:41 path: handle error returned by `git_buf_joinpath` In the `_check_dir_contents` function, we first allocate memory for joining the directory and subdirectory together and afterwards use `git_buf_joinpath`. While this function in fact should not fail as memory is already allocated, err on the safe side and check for returned errors.
Patrick Steinhardt 4467aeac 2017-03-28T09:00:48 config_file: handle errors other than OOM while parsing section headers The current code in `parse_section_header_ext` is only prepared to properly handle out-of-memory conditions for the `git_buf` structure. While very unlikely and probably caused by a programming error, it is also possible to run into error conditions other than out-of-memory previous to reaching the actual parsing loop. In these cases, we will run into undefined behavior as the `rpos` variable is only initialized after these triggerable errors, but we use it in the cleanup-routine. Fix the issue by unifying the function's cleanup code with an `end_error` section, which will not use the `rpos` variable.
Edward Thomson 7ece9065 2017-04-03T23:07:16 win32: make posix emulation retries configurable POSIX emulation retries should be configurable so that tests can disable them. In particular, maniacally threading tests may end up trying to open locked files and need retries, which will slow continuous integration tests significantly.
Edward Thomson 1069ad3c 2017-04-03T23:05:53 win32: do not inherit file descriptors
Sven Strickroth d5e6ca1e 2017-01-14T18:39:32 Allow configuring the default file share mode for opening files This can prevent sharing violations (`ERROR_SHARING_VIOLATION`) when libgit2 is used alongside tools such as TortoiseGit's TGitCache: with FILE_SHARE_DELETE set, files can be opened without being locked any more. Signed-off-by: Sven Strickroth <email@cs-ware.de>
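By way of illustration, the share mode in question is the `dwShareMode` argument of `CreateFileW`; a fully permissive value looks like the following hypothetical wrapper, not libgit2's actual code:

```c
#include <windows.h>

/* A permissive share mode lets other processes (e.g. TortoiseGit's TGitCache)
 * read, write or delete the file while we hold it open. */
static HANDLE open_shared(const wchar_t *path)
{
	return CreateFileW(path, GENERIC_READ,
		FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
		NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
}
```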