|
1305cd4e
|
2019-01-09T09:55:26
|
|
Merge pull request #4926 from csware/warning-c4133
Fix warning 'function': incompatible types - from 'git_cvar_value *' to 'int *' (C4133) on VS
|
|
8b599528
|
2019-01-08T17:26:14
|
|
Fix Linux warnings
This change fixes -Wmaybe-uninitialized and -Wdeprecated-declarations
warnings on Linux builds
|
|
45001906
|
2019-01-07T16:14:51
|
|
Fix warning 'function': incompatible types - from 'git_cvar_value *' to 'int *' (C4133) on VS
Signed-off-by: Sven Strickroth <email@cs-ware.de>
|
|
d9eae98b
|
2018-10-24T01:30:12
|
|
refs: assert that we're passed valid refs when renaming
CID 1382962
|
|
0a8745f2
|
2018-10-24T01:26:48
|
|
diff: assert that we're passed a valid git_diff object
CID 1386176, 1386177, 1388219
|
|
9c23552c
|
2018-10-24T01:21:21
|
|
submodule: grab the error while loading from config
Previously, an error in `git_config_next` would be mistaken as a
successful load, because the previous call would have succeeded.
Coverity saw the subsequent check for a completed iteration as dead, so
let's make it useful again.
CID 1391374
|
|
9f714dec
|
2018-08-17T18:51:56
|
|
config: assert that our parameters are valid
CID 1395011
|
|
fba70a9d
|
2019-01-03T12:02:06
|
|
Merge pull request #4919 from pks-t/pks/shutdown-cb-count
Shutdown callback count
|
|
9084712b
|
2019-01-03T12:01:52
|
|
Merge pull request #4904 from libgit2/ethomson/crlf
Update CRLF filtering to match modern git
|
|
b46c3594
|
2019-01-02T09:33:55
|
|
global: move init callbacks into an array
We currently have an explicit callchain of all the initialization
callbacks in our `init_common` function. This is perfectly fine, but
requires us to manually keep track of how many shutdown callbacks there
may be installed: to avoid allocations before libgit2 is fully
initialized, we assume that every initializer may register at most one
shutdown function. These shutdown functions are stored in a static array
of size `MAX_SHUTDOWN_CB`, which then needs to be updated manually
whenever a new initializer function is being added.
The situation can be easily fixed: convert the callchain of init
functions into an array and iterate over it to initialize all
subsystems. This allows us to define the `git__shutdown_callbacks` array
with the same size as the initializer array and rids us of the need to
always update `MAX_SHUTDOWN_CB`.
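A minimal sketch of the resulting arrangement, with stub initializers standing in for the real subsystems; `git__shutdown_callbacks` is the array named above, while `ARRAY_SIZE` and the stub names are illustrative:
```c
#include <stddef.h>

#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

typedef int (*git_global_init_fn)(void);
typedef void (*git_global_shutdown_fn)(void);

/* Stand-ins for the real subsystem initializers (hash, sysdir, filter, ...). */
static int hash_init(void)   { return 0; }
static int sysdir_init(void) { return 0; }

static git_global_init_fn git__init_callbacks[] = {
	hash_init,
	sysdir_init,
};

/* Sized from the initializer array, so no MAX_SHUTDOWN_CB constant is needed. */
static git_global_shutdown_fn git__shutdown_callbacks[ARRAY_SIZE(git__init_callbacks)];

static int init_common(void)
{
	int error = 0;
	size_t i;

	/* Iterate over the table instead of hand-writing an explicit callchain. */
	for (i = 0; !error && i < ARRAY_SIZE(git__init_callbacks); i++)
		error = git__init_callbacks[i]();

	return error;
}
```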
|
|
03dc6480
|
2019-01-02T09:27:44
|
|
hash: convert `global_init` macros to real function
The `git_hash_global_init` function is simply defined as a macro to zero
for most of the different hash implementations. This makes it impossible
to treat it like a function pointer, which is required for a later
commit where we want to improve the way global initialization works.
Fix the issue by converting all no-op macros to an inline function
returning zero.
There's a small gotcha here, though: as most hash implementations only
have a header file, but not a corresponding implementation file, we
cannot declare the function as non-static. But declaring it as `static
inline` fails, too, as there is a previous declaration as non-static. So
we have to move the function declaration after the include that brings
in the function definition, as it is allowed to have a non-static
declaration after a static definition, but not the other way round.
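A sketch of the conversion, using plain `static inline` in place of libgit2's `GIT_INLINE` wrapper:
```c
/* Before: the no-op initializer was a macro, which cannot be stored in a
 * function-pointer table:
 *
 *     #define git_hash_global_init() 0
 *
 * After: an inline function returning zero, usable as a callback alongside
 * the other subsystem initializers. */
static inline int git_hash_global_init(void)
{
	return 0;
}
```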
|
|
8dde7e11
|
2018-12-19T11:04:58
|
|
refdb_fs: refactor error handling in `refdb_reflog_fs__delete`
The function `refdb_reflog_fs__delete` uses the `if (!error && foobar())`
pattern of checking, where error conditions are being checked by following calls
to different code. This does not match our current style, where the call-site of
a function is usually directly responsible for checking the return value.
Convert the function to use `if ((error = foobar()) < 0) goto out;` style. Note
that this changes the code flow a bit: previously, we were always trying to
delete empty reference hierarchies even if deleting the reflog entry had failed.
This wasn't much of a problem -- if deletion failed, the hierarchy would still
contain at least one file, and thus the function call was an expensive no-op.
Now, we will only perform this deletion if we have successfully removed the
reflog.
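The style change in the abstract, with hypothetical `foobar()`/`delete_empty_hierarchy()` calls standing in for the real reflog and directory operations:
```c
static int foobar(void)                 { return 0; }   /* hypothetical call */
static int delete_empty_hierarchy(void) { return 0; }   /* hypothetical call */

static int example(void)
{
	int error;

	/* Old style gated later work on earlier success:
	 *     if (!error && foobar())
	 *         error = -1;
	 * New style: each call site checks its own return value. */
	if ((error = foobar()) < 0)
		goto out;

	/* Only reached if the previous step succeeded. */
	if ((error = delete_empty_hierarchy()) < 0)
		goto out;

out:
	return error;
}
```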
|
|
bc219657
|
2018-12-19T11:01:55
|
|
Merge pull request #4833 from csware/drop-empty-dirs
Remove empty (sub-)directories when deleting refs
|
|
6ea9381b
|
2018-12-14T14:43:09
|
|
annotated_commit: peel to commit instead of assuming we have one
We want to allow the creation of annotated commits out of annotated tags and for
that we have to peel the reference all the way to the commit instead of stopping
at the first id it provides.
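From the public API side, peeling all the way to a commit looks roughly like this; a minimal sketch, assuming an already-looked-up reference:
```c
#include <git2.h>

/* Given a reference that may point at an annotated tag, peel it down to
 * the commit it ultimately refers to. */
static int peel_ref_to_commit(git_commit **out, const git_reference *ref)
{
	git_object *obj = NULL;
	int error;

	if ((error = git_reference_peel(&obj, ref, GIT_OBJECT_COMMIT)) < 0)
		return error;

	*out = (git_commit *)obj;
	return 0;
}
```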
|
|
5bd78c48
|
2018-12-14T14:41:17
|
|
refs: constify git_reference_peel
We have no need to take a non-const reference. This does involve some other work
to make sure we don't mix const and non-const variables, but by splitting what
we want each variable to do we can also simplify the logic for when we do want
to free a new reference we might have allocated.
|
|
da8138b0
|
2018-12-06T12:59:17
|
|
Merge pull request #4906 from QBobWatson/bugfix
Fix segfault in loose_backend__readstream
|
|
2f3c4b69
|
2018-12-06T10:48:20
|
|
Typesetting conventions
|
|
f4835e44
|
2018-12-04T21:48:12
|
|
make proxy_stream_close close target stream even on errors
When a git_filter_apply_fn callback returns an error while smudging, proxy_stream_close
ends up returning without closing the stream. This in turn makes blob_content_to_file
crash, as it asserts that the stream has been closed whether or not there were errors.
Closing the target stream on error fixes this problem.
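A sketch of the fix: remember the filter error but close the target stream unconditionally, so downstream code never sees a half-open stream. The stream type and field names here are illustrative, not the actual filter internals:
```c
struct writestream {
	int (*close)(struct writestream *stream);
};

static int proxy_stream_close_sketch(struct writestream *target, int filter_error)
{
	/* Close the target stream whether or not the filter failed... */
	int close_error = target->close(target);

	/* ...but still surface the filter error if there was one. */
	return filter_error < 0 ? filter_error : close_error;
}
```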
|
|
08afdb57
|
2018-12-04T10:59:25
|
|
Removed one null check
|
|
36f80742
|
2018-12-04T10:12:24
|
|
Fix segfault in loose_backend__readstream
If the routine exits with an error before stream or hash_ctx is initialized, the
program will segfault when trying to free them.
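A sketch of the defensive pattern: initialize everything that the cleanup path frees, so an early exit cannot free uninitialized pointers. Names are illustrative, not the actual loose_backend__readstream locals:
```c
#include <stdlib.h>

struct hash_ctx   { int dummy; };
struct readstream { int dummy; };

static int readstream_sketch(void)
{
	struct readstream *stream = NULL;   /* previously left uninitialized */
	struct hash_ctx *hash_ctx = NULL;
	int error = -1;                     /* pretend an early check failed */

	if (error < 0)
		goto done;

done:
	free(hash_ctx);   /* safe: free(NULL) is a no-op */
	free(stream);
	return error;
}
```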
|
|
ef8f8ec6
|
2018-12-03T13:35:30
|
|
crlf: update to match git's logic
Examine the recent CRLF changes to git by Torsten Bögershausen and
include similar changes to update our CRLF logic to match.
Note: Torsten Bögershausen has previously agreed to allow his changes to
be included in libgit2.
|
|
168fe39b
|
2018-11-28T14:26:57
|
|
object_type: use new enumeration names
Use the new object_type enumeration names within the codebase.
|
|
18e71e6d
|
2018-11-28T13:31:06
|
|
index: use new enum and structure names
Use the new-style index names throughout our own codebase.
|
|
0ddc6094
|
2018-11-30T09:46:14
|
|
Merge pull request #4770 from tiennou/feature/merge-analysis-any-branch
Allow merge analysis against any reference
|
|
e7873eb2
|
2018-11-29T08:00:31
|
|
Merge pull request #4888 from TheBB/add-cb
revwalk: Allow changing hide_cb
|
|
487233fa
|
2018-11-29T07:21:41
|
|
Merge pull request #4895 from pks-t/pks/unused-warnings
Unused function warnings
|
|
a904fc6d
|
2018-11-28T20:31:30
|
|
Merge pull request #4870 from libgit2/ethomson/proxy
Add builtin proxy support for the http transport
|
|
30ac46aa
|
2018-11-28T10:12:43
|
|
http: reset replay_count upon connection
Reset the replay_count upon a successful connection. It's possible that
we could encounter a situation where we connect successfully but need to
replay a request - for example, a connection and initial request
succeeds without authentication but a subsequent call does require
authentication. Reset the replay count upon any successful request to
afford subsequent replays room to maneuver.
|
|
02bb39f4
|
2018-11-22T08:49:09
|
|
stream registration: take an enum type
Accept an enum (`git_stream_t`) during custom stream registration that
indicates whether the registration structure should be used for standard
(non-TLS) streams or TLS streams.
|
|
41f620d9
|
2018-11-18T19:10:50
|
|
http: only load proxy configuration during connection
Only load the proxy configuration during connection; we need this data
when we're going to connect to the server, however we may mutate it
after connection (connecting through a CONNECT proxy means that we
should send requests like normal). If we reload the proxy configuration
but do not actually reconnect (because we're in a keep-alive session)
then we will reload the proxy configuration that we should have mutated.
Thus, only load the proxy configuration when we know that we're going to
reconnect.
|
|
52478d7d
|
2018-11-18T19:54:49
|
|
http: don't allow SSL connections to a proxy
Temporarily disallow SSL connections to a proxy until we can understand
the valgrind warnings when tunneling OpenSSL over OpenSSL.
|
|
df2cc108
|
2018-11-18T10:29:07
|
|
stream: provide generic registration API
Update the new stream registration API to be `git_stream_register`
which takes a registration structure and a TLS boolean. This allows
callers to register non-TLS streams as well as TLS streams.
Provide `git_stream_register_tls` that takes just the init callback for
backward compatibility.
|
|
0467606f
|
2018-11-18T11:00:11
|
|
http: disallow repeated headers from servers
Don't allow servers to send us multiple Content-Type, Content-Length
or Location headers.
|
|
21142c5a
|
2018-10-29T10:04:48
|
|
http: remove cURL
We previously used cURL to support HTTP proxies. Now that we've added
this support natively, we can remove the curl dependency.
|
|
2878ad08
|
2018-10-29T08:59:33
|
|
streams: remove unused tls functions
The implementations of git_openssl_stream_new and
git_mbedtls_stream_new have callers protected by #ifdefs and
are never called unless compiled in. There's no need for a
dummy implementation. Remove them.
|
|
5d4e1e04
|
2018-10-28T21:27:56
|
|
http: use CONNECT to talk to proxies
Natively support HTTPS connections through proxies by speaking CONNECT
to the proxy and then adding a TLS connection on top of the socket.
|
|
43b592ac
|
2018-10-25T08:49:01
|
|
tls: introduce a wrap function
Introduce `git_tls_stream_wrap` which will take an existing `stream`
with an already connected socket and begin speaking TLS on top of it.
This is useful if you've built a connection to a proxy server and you
wish to begin CONNECT over it to tunnel a TLS connection.
Also update the pluggable TLS stream layer so that it can accept a
registration structure that provides an `init` and `wrap` function,
instead of a single initialization function.
|
|
b2ed778a
|
2018-11-18T22:20:10
|
|
http transport: reset error message on cert failure
Store the error message from the underlying TLS library before calling
the certificate callback. If it refuses to act (demonstrated by
returning GIT_PASSTHROUGH) then restore the error message. Otherwise,
if the callback does not set an error message, set a sensible default
that implicates the callback itself.
|
|
2ce2315c
|
2018-10-22T17:33:45
|
|
http transport: support cert check for proxies
Refactor certificate checking so that it can easily be called for
proxies or the remote server.
|
|
74c6e08e
|
2018-10-22T14:56:53
|
|
http transport: provide proxy credentials
|
|
496da38c
|
2018-10-22T12:48:45
|
|
http transport: refactor storage
Create a simple data structure that contains information about the
server being connected to, whether that's the actual remote endpoint
(git server) or an intermediate proxy. This allows for organization of
streams, authentication state, etc.
|
|
6af8572c
|
2018-10-22T11:29:01
|
|
http transport: cap number of authentication replays
Put a limit on the number of authentication replays in the HTTP
transport. Standardize on 7 replays for authentication or redirects,
which matches the behavior of the WinHTTP transport.
|
|
22654812
|
2018-10-22T11:24:05
|
|
http transport: prompt for proxy credentials
Teach the HTTP transport how to prompt for proxy credentials.
|
|
0328eef6
|
2018-10-22T11:14:06
|
|
http transport: further refactor credential handling
Prepare credential handling to understand both git server and proxy
server authentication.
|
|
32cb56ce
|
2018-10-22T10:16:54
|
|
http transport: refactor credential handling
Factor credential handling into its own function. Additionally, add
safety checks to ensure that we are in a valid state - that we have
received a valid challenge from the server and that we have
configuration to respond to that challenge.
|
|
e6e399ab
|
2018-10-22T09:49:54
|
|
http transport: use HTTP proxies when requested
The HTTP transport should understand how to apply proxies when
configured with `GIT_PROXY_SPECIFIED` and `GIT_PROXY_AUTO`.
When a proxy is configured, the HTTP transport will now connect
to the proxy (instead of directly to the git server), and will
request the properly-formed URL of the git server endpoint.
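From the caller's perspective this goes through the public `git_proxy_options`; a minimal sketch, assuming an already-created `git_remote` and a placeholder proxy URL:
```c
#include <git2.h>

/* Fetch through an explicitly configured HTTP proxy. */
static int fetch_via_proxy(git_remote *remote)
{
	git_fetch_options opts = GIT_FETCH_OPTIONS_INIT;

	opts.proxy_opts.type = GIT_PROXY_SPECIFIED;
	opts.proxy_opts.url  = "http://proxy.example.com:8080";

	return git_remote_fetch(remote, NULL, &opts, NULL);
}
```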
|
|
e6f1931a
|
2018-10-22T00:09:24
|
|
http: rename http subtransport's `io` to `gitserver_stream`
Rename `http_subtransport->io` to `http_subtransport->gitserver_stream`
to clarify its use, especially as we might have additional streams (eg
for a proxy) in the future.
|
|
c07ff4cb
|
2018-10-21T14:17:06
|
|
http: rename `connection_data` -> `gitserver_data`
Rename the `connection_data` struct member to `gitserver_data`, to
disambiguate future `connection_data`s that apply to the proxy, not the
final server endpoint.
|
|
ed72465e
|
2018-10-13T19:16:54
|
|
proxy: propagate proxy configuration errors
|
|
dcd00638
|
2018-11-28T14:45:55
|
|
Merge pull request #4898 from pks-t/pks/config-parent-is-file
config: fix adding files if their parent directory is a file
|
|
f2f5ec84
|
2018-11-23T19:27:09
|
|
khash: move khash include into implementation files
The current map implementations directly include the "khash.h" headers
into their own headers to make available a set of static functions,
defines et cetera. Besides leaking the complete khash namespace into
files wherever khashes are used, this also triggers Clang's
-Wunused-function warnings when some of the static functions are not
being used at all.
Fix the issue by moving the includes into the respective map
implementation files. Add forward declares for all the map types to make
them known.
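A sketch of the header/implementation split; the type tag and function names are illustrative rather than the exact strmap API:
```c
/* --- strmap.h (sketch) ----------------------------------------------
 * Users of the map only ever see a forward declaration, so nothing from
 * khash.h leaks into their namespace. */
typedef struct git_strmap git_strmap;

int   git_strmap_new(git_strmap **out);
void *git_strmap_get(git_strmap *map, const char *key);

/* --- strmap.c (sketch) ----------------------------------------------
 * The implementation file is the only place that includes khash.h and
 * knows the concrete layout, which also confines Clang's
 * -Wunused-function warnings about unused khash helpers to a single
 * translation unit:
 *
 *     #include "khash.h"
 *     KHASH_MAP_INIT_STR(str, void *)
 */
```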
|
|
852bc9f4
|
2018-11-23T19:26:24
|
|
khash: remove intricate knowledge of khash types
Instead of using the `khiter_t`, `git_strmap_iter` and `khint_t` types,
simply use `size_t` instead. This decouples code from the khash stuff
and makes it possible to move the khash includes into the implementation
files.
|
|
5bfb3b58
|
2018-11-23T18:48:40
|
|
khash: implement map-specific foreach macros
The current foreach map macros simply redirect to the type-indifferent
`kh_foreach` macro. As this type-indifferent macro directly accesses the
structures, the current implementation makes it impossible to make the
structures private to the implementation only. And making them private is
required to move out the khash include into the implementations to
decrease the namespace leak.
|
|
382b668b
|
2018-11-23T18:38:18
|
|
khash: implement begin/end via functions instead of macros
Right now, the `git_*map_begin()` and `git_*map_end()` helpers are
implemented via macros which simply redirect to `kh_begin` and `kh_end`.
As these macros refer to members of the map structures, they make it
impossible to move the khash include into the implementation files.
Implement these helpers as real functions instead to further decouple
the headers from implementations.
|
|
ae765d00
|
2018-11-23T19:26:48
|
|
submodule: remove string map implementation that strips trailing slashes
The submodule code currently has its own implementation of a string map,
which overrides the hashing and hash equals functions with functions
that ignore potential trailing slashes. These functions aren't actually
used by our code, making them useless.
|
|
02789782
|
2018-11-23T18:37:57
|
|
idxmap: remove unused foreach macros
The foreach macros of the idxmap types are not used anywhere. As we are
about to open-code all foreach macros for the maps in order to be able
to make the khash structure internal, removing these unused macros will
leave a few places less that need conversion.
|
|
b2af13f2
|
2018-11-21T12:07:23
|
|
iterator: remove unused function `tree_iterator_entry_cmp`
The function `tree_iterator_entry_cmp` was introduced in commit be30387e8
(iterators: refactored tree iterator, 2016-02-25), but in fact it has never been
used at all. Remove it to avoid unused function warnings as soon as we re-enable
"-Wunused-function".
|
|
0836f069
|
2018-11-14T16:08:30
|
|
revwalk: Allow changing hide_cb
Since git_revwalk objects are encouraged to be reused, a public
interface for changing hide_cb is desirable.
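A usage sketch with the public callback type; the payload handling is illustrative:
```c
#include <git2.h>

/* Returning non-zero from the callback hides that commit (and everything
 * reachable from it) from the walk. */
static int hide_if_seen(const git_oid *commit_id, void *payload)
{
	const git_oidarray *seen = payload;
	size_t i;

	for (i = 0; i < seen->count; i++)
		if (git_oid_equal(commit_id, &seen->ids[i]))
			return 1;

	return 0;
}

static int reuse_walker(git_revwalk *walk, git_oidarray *seen)
{
	/* The walker itself is reused; only the hide callback changes. */
	return git_revwalk_add_hide_cb(walk, hide_if_seen, seen);
}
```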
|
|
c97d302d
|
2018-11-28T13:45:41
|
|
Merge pull request #4879 from libgit2/ethomson/defer_cert_cred_cb
Allow certificate and credential callbacks to decline to act
|
|
43cbe6b7
|
2018-11-28T13:36:47
|
|
config: fix adding files if their parent directory is a file
When we try to add a configuration file with `git_config_add_file_ondisk`, we
treat nonexisting files as empty. We do this by performing a stat call, ignoring
ENOENT errors. This works just fine in case the file or any of its parents
simply does not exist, but there is also the case where any of the parent
directories is not a directory, but a file. So e.g. trying to add a
configuration file "/dev/null/.gitconfig" will fail, as `errno` will be ENOTDIR
instead of ENOENT.
Catch ENOTDIR in addition to ENOENT to fix the issue. Add a test that verifies
we are able to add configuration files with such an invalid path just fine.
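A sketch of the relaxed check; the helper name is illustrative:
```c
#include <errno.h>
#include <sys/stat.h>

/* Treat the configuration file as empty whether the path is simply absent
 * (ENOENT) or one of its parent components is a regular file (ENOTDIR). */
static int config_file_exists(const char *path, int *exists)
{
	struct stat st;

	if (stat(path, &st) < 0) {
		if (errno == ENOENT || errno == ENOTDIR) {
			*exists = 0;
			return 0;       /* treated as an empty configuration */
		}
		return -1;          /* genuine error */
	}

	*exists = 1;
	return 0;
}
```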
|
|
f0714daf
|
2018-11-25T13:36:29
|
|
Fix warning C4133 incompatible types in MSVC
Introduced in commit b433a22a979ae78c28c8b16f8c3487e2787cb73e.
Signed-off-by: Sven Strickroth <email@cs-ware.de>
|
|
a2e6e0ea
|
2018-11-06T14:15:43
|
|
transport: allow cred/cert callbacks to return GIT_PASSTHROUGH
Allow credential and certificate checking callbacks to return
GIT_PASSTHROUGH, indicating that they do not want to act.
Introduce support for this in both the http and ssh callbacks.
Additionally, enable the same mechanism for certificate validation.
This is most useful to disambiguate any meaning in the publicly exposed
credential and certificate functions (`git_transport_smart_credentials`
and `git_transport_smart_certificate_check`) but it may be more
generally useful for callers to be able to defer back to libgit2.
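For example, a credential callback that only knows how to answer SSH-key requests can decline everything else; a sketch against the public callback signature, with placeholder key paths:
```c
#include <git2.h>

static int acquire_cred(git_cred **out, const char *url,
                        const char *username_from_url,
                        unsigned int allowed_types, void *payload)
{
	(void)url; (void)payload;

	if (allowed_types & GIT_CREDTYPE_SSH_KEY)
		return git_cred_ssh_key_new(out, username_from_url,
		                            "/home/user/.ssh/id_rsa.pub",
		                            "/home/user/.ssh/id_rsa", NULL);

	return GIT_PASSTHROUGH;   /* decline to act; let libgit2 fall back */
}
```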
|
|
0e3e832d
|
2018-11-21T13:30:01
|
|
Merge pull request #4884 from libgit2/ethomson/index_iterator
index: introduce git_index_iterator
|
|
cb23c3ef
|
2018-11-21T10:54:29
|
|
commit: fix out-of-bound reads when parsing truncated author fields
While commit objects usually should have only one author field, our commit
parser actually handles the case where a commit has multiple author fields
because some tools that exist in the wild actually write them. Detection of
those additional author fields is done by using a simple `git__prefixcmp`,
checking whether the current line starts with the string "author ". In case
where we are handed a non-NUL-terminated string that ends directly after the
space, though, we may have an out-of-bounds read of one byte when trying to
compare the expected final NUL byte.
Fix the issue by using `git__prefixncmp` instead of `git__prefixcmp`.
Unfortunately, a test cannot be easily written to catch this case. While we
could test the last error message and verify that it didn't in fact fail parsing
a signature (because that would indicate that it has in fact tried to parse the
additional "author " field, which it shouldn't be able to detect in the first
place), this doesn't work as the next line needs to be the "committer" field,
which would error out with the same error message even if we hadn't done an
out-of-bounds read.
As objects read from the object database are always NUL terminated, this issue
cannot be triggered in normal code and thus it's not security critical.
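A sketch of why the bounded comparison matters; the real `git__prefixncmp` lives in libgit2's util code, this version is only illustrative:
```c
#include <stddef.h>

/* A bounded prefix comparison never reads past the `len` bytes we were
 * actually handed, while the unbounded variant walks until it reaches the
 * prefix's NUL terminator. */
static int prefixncmp(const char *str, size_t len, const char *prefix)
{
	size_t i;

	for (i = 0; i < len && prefix[i]; i++) {
		if (str[i] != prefix[i])
			return (unsigned char)str[i] - (unsigned char)prefix[i];
	}

	/* Buffer exhausted before the prefix was fully matched: no match. */
	return prefix[i] ? -1 : 0;
}
```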
|
|
11d33df8
|
2018-11-18T23:39:43
|
|
Merge branch 'tiennou/fix/logallrefupdates-always'
|
|
e226ad8f
|
2018-11-17T17:55:10
|
|
refs: add support for core.logAllRefUpdates=always
Since we were not expecting this config entry to contain a string, we
would fail as soon as its (cached) value would be accessed. Hence,
provide some constants for the 4 states we use, and account for "always"
when we decide to reflog changes.
|
|
646a94be
|
2018-11-18T23:15:56
|
|
Merge pull request #4847 from noahp/noahp/null-arg-fixes
tests: 🌀 address two null argument instances
|
|
5c213e29
|
2018-11-18T22:59:03
|
|
Merge pull request #4875 from tiennou/fix/openssl-errors
Some OpenSSL issues
|
|
4ef2b889
|
2018-11-18T22:56:28
|
|
Merge pull request #4882 from kc8apf/include_port_in_host_header
transport/http: Include non-default ports in Host header
|
|
7321cff0
|
2018-11-15T09:17:51
|
|
Merge pull request #4713 from libgit2/ethomson/win_symlinks
Support symlinks on Windows when core.symlinks=true
|
|
8ee10098
|
2018-11-06T13:10:30
|
|
transport: see if cert/cred callbacks exist before calling them
Custom transports may want to ask libgit2 to invoke a configured
credential or certificate callback; however they likely do not know if a
callback was actually configured. Return a sentinel value
(GIT_PASSTHROUGH) if there is no callback configured instead of crashing.
|
|
c358bbc5
|
2018-11-12T17:22:47
|
|
index: introduce git_index_iterator
Provide a public git_index_iterator API that is backed by an index
snapshot. This allows consumers to provide a stable iteration even
while manipulating the index during iteration.
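A usage sketch of the new API:
```c
#include <stdio.h>
#include <git2.h>

/* Iterate a snapshot of the index; entries added or removed after the
 * iterator is created do not disturb the iteration. */
static int print_index_paths(git_index *index)
{
	git_index_iterator *iter = NULL;
	const git_index_entry *entry;
	int error;

	if ((error = git_index_iterator_new(&iter, index)) < 0)
		return error;

	while ((error = git_index_iterator_next(&entry, iter)) == 0)
		printf("%s\n", entry->path);

	git_index_iterator_free(iter);
	return error == GIT_ITEROVER ? 0 : error;
}
```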
|
|
4b84db6a
|
2018-11-14T12:33:38
|
|
patch_parse: remove unused function `parse_number`
The function `parse_number` was replaced by `git_parse_advance_digit`
which is provided by the parser interface in commit 252f2eeee (parse:
implement and use `git_parse_advance_digit`, 2017-07-14). As there are
no remaining callers, remove it.
|
|
4209a512
|
2018-11-14T12:04:42
|
|
strntol: fix out-of-bounds reads when parsing numbers with leading sign
When parsing a number, we accept a leading plus or minus sign to return
a positive or negative number. When the parsed string has such a leading
sign, we set up a flag indicating that the number is negative and
advance the pointer to the next character in that string. This misses
updating the number of bytes in the string, though, which is why the
parser may later on do an out-of-bounds read.
Fix the issue by correctly updating both the pointer and the number of
remaining bytes. Furthermore, we need to check whether we actually have
any bytes left after having advanced the pointer, as otherwise the
auto-detection of the base may do an out-of-bounds access. Add a test
that detects the out-of-bounds read (a minimal sketch of the fix follows the
call-site list below).
Note that this is not actually security critical. While there are a lot
of places where the function is called, all of these places are guarded
or irrelevant:
- commit list: this operates on objects from the ODB, which are always
NUL terminated and may thus not trigger the off-by-one OOB read.
- config: the configuration is NUL terminated.
- curl stream: user input is being parsed that is always NUL terminated
- index: the index is read via `git_futils_readbuffer`, which always NUL
terminates it.
- loose objects: used to parse the length from the object's header. As
we check previously that the buffer contains a NUL byte, this is safe.
- rebase: this parses numbers from the rebase instruction sheet. As the
rebase code uses `git_futils_readbuffer`, the buffer is always NUL
terminated.
- revparse: this parses a user provided buffer that is NUL terminated.
- signature: this parses the header information of objects. As objects
read from the ODB are always NUL terminated, this is a non-issue. The
constructor `git_signature_from_buffer` does not accept a length
parameter for the buffer, so the buffer needs to be NUL terminated, as
well.
- smart transport: the buffer that is parsed is NUL terminated
- tree cache: this parses the tree cache from the index extension. The
index itself is read via `git_futils_readbuffer`, which always NUL
terminates it.
- winhttp transport: user input is being parsed that is always NUL
terminated
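A minimal sketch of the sign-handling fix described above; the helper is illustrative, not the actual `git__strntol64` code:
```c
#include <stddef.h>

/* Advancing the read pointer past a leading sign must be paired with
 * shrinking the remaining length, and the caller must re-check that any
 * bytes are left before peeking at the next character for base
 * auto-detection. */
static int parse_sign(const char **p, size_t *remaining, int *negative)
{
	if (*remaining && (**p == '+' || **p == '-')) {
		*negative = (**p == '-');
		(*p)++;
		(*remaining)--;           /* previously forgotten, causing the OOB read */
	}

	return *remaining ? 0 : -1;   /* nothing left to parse after the sign */
}
```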
|
|
cf83809b
|
2018-11-13T14:26:26
|
|
Merge pull request #4883 from pks-t/pks/signature-tz-oob
signature: fix out-of-bounds read when parsing timezone offset
|
|
f127ce35
|
2018-11-13T08:22:25
|
|
tests: address two null argument instances
Handle two null argument cases that occur in the unit tests.
One is in library code, the other is in test code.
Detected by running unit tests with undefined behavior sanitizer:
```bash
# build
mkdir build && cd build
cmake -DBUILD_CLAR=ON -DCMAKE_C_FLAGS="-fsanitize=address \
-fsanitize=undefined -fstack-usage -static-libasan" ..
cmake --build .
# run with asan
ASAN_OPTIONS="allocator_may_return_null=1" ./libgit2_clar
...
............../libgit2/src/apply.c:316:3: runtime error: null pointer \
passed as argument 1, which is declared to never be null
...................../libgit2/tests/apply/fromfile.c:46:3: runtime \
error: null pointer passed as argument 1, which is declared to never be null
```
|
|
20cb30b6
|
2018-11-13T13:40:17
|
|
Merge pull request #4667 from tiennou/feature/remote-create-api
Remote creation API
|
|
28239be3
|
2018-11-13T13:27:41
|
|
Merge pull request #4818 from pks-t/pks/index-collision
Index collision fixes
|
|
11fbead8
|
2018-11-11T16:40:56
|
|
Merge pull request #4705 from libgit2/ethomson/apply
Patch (diff) application
|
|
83b35181
|
2018-10-19T10:54:38
|
|
transport/http: Include non-default ports in Host header
When the port is omitted, the server assumes the default port for the
service is used (see
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Host). In
cases where the client provided a non-default port, it should be passed
along.
This hasn't been an issue so far as the git protocol doesn't include
server-generated URIs. I encountered this when implementing Rust
registry support for Sonatype Nexus. Rust's registry uses a git
repository for the package index. Clients look at a file in the root of
the package index to find the base URL for downloading the packages.
Sonatype Nexus looks at the incoming HTTP request (Host header and URL)
to determine the client-facing URL base as it may be running behind a
load balancer or reverse proxy. This client-facing URL base is then used
to construct the package download base URL. When libgit2 fetches the
index from Nexus on a non-default port, Nexus trusts the incorrect Host
header and generates an incorrect package download base URL.
|
|
58b60fcc
|
2018-11-08T09:31:28
|
|
netops: add method to return default http port for a connection
Constant strings and logic for HTTP(S) default ports were starting to be
spread throughout netops.c. Instead of duplicating this again to
determine if a Host header should include the port, move the default
port constants and logic into an internal method in netops.{c,h}.
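A sketch of the consolidated logic; the names are illustrative, not the actual netops.c symbols:
```c
#include <stdio.h>
#include <string.h>

static const char *default_http_port(int use_ssl)
{
	return use_ssl ? "443" : "80";
}

/* The Host header only carries ":port" when the port is non-default. */
static void write_host_header(char *buf, size_t len, const char *host,
                              const char *port, int use_ssl)
{
	if (strcmp(port, default_http_port(use_ssl)) == 0)
		snprintf(buf, len, "Host: %s\r\n", host);
	else
		snprintf(buf, len, "Host: %s:%s\r\n", host, port);
}
```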
|
|
52f859fd
|
2018-11-09T19:32:08
|
|
signature: fix out-of-bounds read when parsing timezone offset
When parsing a signature's timezone offset, we first check whether there
is a timezone at all by verifying that there are still bytes left to
read following the time itself. The check thus looks like `time_end + 1
< buffer_end`, which is actually correct in this case. After setting the
timezone's start pointer to that location, we compute the remaining
bytes by using the formula `buffer_end - tz_start + 1`, re-using the
previous `time_end + 1`. But this is in fact missing the parentheses around
`(tz_start + 1)`, thus leading to an overestimation of the remaining
bytes by a length of two. In case of a non-NUL terminated buffer, this
will result in an overflow.
The function `git_signature__parse` is only used in two locations. First
is `git_signature_from_buffer`, which only accepts a string without a
length. The string thus necessarily has to be NUL terminated and cannot
trigger the issue.
The other function is `git_commit__parse_raw`, which can in fact trigger
the error as it may receive non-NUL terminated commit data. But as
objects read from the ODB are always NUL-terminated by us as a
cautionary measure, it cannot trigger the issue either.
In other words, this error does not have any impact on security.
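The arithmetic from the message, side by side; only the parenthesization differs:
```c
#include <stddef.h>

/* `tz_start` is `time_end + 1`, as described above. */
static void length_comparison(const char *tz_start, const char *buffer_end,
                              size_t *buggy, size_t *fixed)
{
	*buggy = (size_t)(buffer_end - tz_start + 1);    /* two bytes too many  */
	*fixed = (size_t)(buffer_end - (tz_start + 1));  /* the intended length */
}
```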
|
|
9ad96367
|
2018-11-07T15:31:21
|
|
smart transport: only clear url on hard reset
After creating a transport for a server, we expect to be able to call
`connect`, then invoke subsequent `action` calls. We provide the URL to
these `action` calls, although our built-in transports happen to ignore
it since they've already parsed it into an internal format that they
intend to use (`gitno_connection_data`).
In ca2eb4608243162a13c427e74526b6422d5a6659, we began clearing the URL
field after a connection, meaning that subsequent calls to transport
`action` callbacks would get a NULL URL, which went undetected since the
builtin transports ignore the URL when they're already connected
(instead of re-parsing it into an internal format).
Downstream custom transport implementations (eg, LibGit2Sharp) did
notice this change, however.
Since `reset_stream` is called even when we're not closing the
subtransport, update to only clear the URL when we're closing the
subtransport. This ensures that `action` calls will get the correct URL
information even after a connection.
|
|
f8b9493b
|
2018-11-05T15:46:08
|
|
apply: test re-adding a file after removing it
Ensure that we can add a file back after it's been removed. Update the
renamed/deleted validation in application to not apply to deltas that
are adding files to support this.
|
|
78580ad3
|
2018-11-05T15:34:59
|
|
apply: test modifying a file after renaming it
Ensure that we cannot modify a file after it's been renamed out of the
way. If multiple deltas exist for a single path, ensure that we do not
attempt to modify a file after it's been renamed out of the way.
To support this, we must track the paths that have been removed or
renamed; add to a string map when we remove a path and remove from the
string map if we recreate a path. Validate that we are not applying to
a path that is in this map, unless the delta is a rename, since git
supports renaming one file to two different places in two different
deltas.
Further, test that we cannot apply a modification delta to a path that
will be created in the future by a rename (a path that does not yet
exist.)
|
|
df4258ad
|
2018-11-04T13:01:03
|
|
apply: handle multiple deltas to the same file
git allows a patch file to contain multiple deltas to the same file:
although it does not produce files in this format itself, this could
be the result of concatenating two different patch files that affected
the same file.
git apply behaves by applying this next delta to the existing postimage
of the file. We should do the same. If we have previously seen a file,
and produced a postimage for it, we will load that postimage and apply
the current delta to that. If we have not, get the file from the
preimage.
|
|
6fecf4d1
|
2018-11-04T11:47:46
|
|
apply: handle exact renames
Deltas containing exact renames are special; they simply indicate that a
file was renamed without providing additional metadata (like the
filemode). Teach the reader to provide the file mode and use the
preimage's filemode in the case that the delta does not provide one.
|
|
12f9ac17
|
2018-11-04T11:26:42
|
|
apply: validate unchanged mode when applying both
When applying to both the index and the working directory, ensure that
the working directory's mode matches the index's mode. It's not
sufficient to look only at the hashed object id to determine that the
file is unchanged, git also takes the mode into account.
|
|
52e27b84
|
2018-10-10T12:42:54
|
|
reader: free is unused and unnecessary
None of the reader implementations actually allocate anything
themselves, so they don't need a free function. Remove it.
|
|
47cc5f85
|
2018-09-29T19:32:51
|
|
apply: introduce a hunk callback
Introduce a callback to patch application that allows consumers to
cancel hunk application.
|
|
af33210b
|
2018-07-10T16:10:03
|
|
apply: introduce a delta callback
Introduce a callback to the application options that allow callers to
add a per-delta callback. The callback can return an error code to stop
patch application, or can return a value to skip the application of a
particular delta.
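A usage sketch against the public options structure; the vendor/ filter is an arbitrary example:
```c
#include <string.h>
#include <git2.h>

/* Return semantics as described above: a negative value aborts the whole
 * application, a positive value skips this delta, zero applies it. */
static int skip_vendor_deltas(const git_diff_delta *delta, void *payload)
{
	(void)payload;

	if (strncmp(delta->new_file.path, "vendor/", 7) == 0)
		return 1;    /* skip this delta, keep applying the rest */

	return 0;        /* apply this delta */
}

static int apply_with_filter(git_repository *repo, git_diff *diff)
{
	git_apply_options opts = GIT_APPLY_OPTIONS_INIT;

	opts.delta_cb = skip_vendor_deltas;
	return git_apply(repo, diff, GIT_APPLY_LOCATION_WORKDIR, &opts);
}
```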
|
|
37b25ac5
|
2018-07-08T16:12:58
|
|
apply: move location to an argument, not the opts
Move the location option to an argument, out of the options structure.
This allows the options structure to be re-used for functions that don't
need to know the location, since it's implicit in their functionality.
For example, `git_apply_tree` should not take a location, but is
expected to take all the other options.
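The resulting call shape, as a sketch; the same options structure can now be shared across targets:
```c
#include <git2.h>

/* The location is an explicit argument rather than part of the options;
 * GIT_APPLY_LOCATION_WORKDIR, _INDEX and _BOTH select the target. */
static int apply_to_both(git_repository *repo, git_diff *diff,
                         const git_apply_options *opts)
{
	return git_apply(repo, diff, GIT_APPLY_LOCATION_BOTH, opts);
}
```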
|
|
2d27ddc0
|
2018-07-01T21:35:51
|
|
apply: use an indexwriter
Place the entire `git_apply` operation inside an indexwriter, so that we
lock the index before we begin performing patch application. This
ensures that there are no other processes modifying things in the
working directory.
|
|
9be89bbd
|
2018-07-01T11:08:26
|
|
reader: apply working directory filters
When reading a file from the working directory, ensure that we apply any
necessary filters to the item. This ensures that we get the
repository-normalized data as the preimage, and further ensures that we
can accurately compare the working directory contents to the index
contents for accurate safety validation in the `BOTH` case.
|
|
813f0802
|
2018-07-01T15:14:36
|
|
apply: validate workdir contents match index for BOTH
When applying to both the index and the working directory, ensure that
the index contents match the working directory. This mirrors the
requirement in `git apply --index`.
This also means that - along with the prior commit that uses the working
directory contents as the checkout baseline - we no longer expect
conflicts during checkout. So remove the special-case error handling
for checkout conflicts. (Any checkout conflict now would be because the
file was actually modified between the start of patch application and
the checkout.)
|
|
0f4b2f02
|
2018-07-01T15:13:50
|
|
reader: optionally validate index matches workdir
When using a workdir reader, optionally validate that the index contents
match the working directory contents.
|
|
5b8d5a22
|
2018-07-01T13:42:53
|
|
apply: use preimage as the checkout baseline
Use the preimage as the checkout's baseline. This allows us to support
applying patches to files that are modified in the working directory
(those that differ from the HEAD and index). Without this, files will
be reported as (checkout) conflicts. With this, we expect the on-disk
data when we began the patch application (the "preimage") to be on-disk
during checkout.
We could have also simply used the `FORCE` flag to checkout to
accomplish a similar mechanism. However, `FORCE` ignores all
differences, while providing a preimage ensures that we will only
overwrite the file contents that we actually read.
Modify the reader interface to provide the OID to support this.
|
|
dddfff77
|
2018-06-30T17:12:16
|
|
apply: convert checkout conflicts to apply failures
When there's a checkout conflict during apply, that means that the
working directory was modified in a conflicting manner and the postimage
cannot be written. During application, convert this to an application
failure for consistency across workdir/index/both applications.
|
|
5b66b667
|
2018-06-29T12:39:41
|
|
apply: when preimage file is missing, return EAPPLYFAIL
The preimage file being missing entirely is simply a case of an
application failure; return the correct error value for the caller.
|
|
e0224121
|
2018-06-29T12:09:02
|
|
apply: simplify checkout vs index application
Separate the concerns of applying via checkout and updating the
repository's index. This results in simpler functionality and allows us
to not build the temporary collection of paths in the index case.
|