src/transports


Log

Author Commit Date Message
Patrick Steinhardt c2dd895a 2019-08-08T10:47:29 transports: http: check for memory allocation failures When allocating a chunk that is used to write to HTTP streams, we do not check for memory allocation errors. This may lead us to write to a `NULL` pointer and thus cause a segfault. Fix this by adding a call to `GIT_ERROR_CHECK_ALLOC`.
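The pattern behind this fix is easy to miss; here is a minimal sketch, assuming libgit2's internal `git__malloc`/`git__free` helpers and the `GIT_ERROR_CHECK_ALLOC` macro (the function name and chunk layout are illustrative, not the actual transport code):

```c
/* Illustrative only: allocate the chunk buffer and bail out on allocation
 * failure instead of writing through a NULL pointer. */
static int write_chunk_sketch(git_stream *io, const char *buffer, size_t len)
{
	/* 16 is an assumed bound for the chunk-size header, not the real layout */
	char *chunk = git__malloc(len + 16);
	GIT_ERROR_CHECK_ALLOC(chunk); /* reports out-of-memory and returns -1 when chunk is NULL */

	/* ... format the chunk header, copy `buffer` in, write it to `io` ... */

	git__free(chunk);
	return 0;
}
```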
Edward Thomson 3cd123e9 2019-06-15T21:56:53 win32: define DWORD_MAX if it's not defined MinGW does not define DWORD_MAX. Specify it when it's not defined.
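For reference, a guard along these lines is enough (the exact definition libgit2 uses may differ, but `DWORD` is a 32-bit unsigned type, so its maximum is 0xffffffff):

```c
/* Compatibility sketch: some toolchains (e.g. MinGW) do not define DWORD_MAX. */
#ifndef DWORD_MAX
# define DWORD_MAX 0xffffffffUL
#endif
```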
Edward Thomson d93b0aa0 2019-06-15T21:47:40 win32: decorate unused parameters
Edward Thomson 8c925ef8 2019-05-21T14:30:28 smart protocol: validate progress message length Ensure that the server has not sent us overly-large sideband messages (ensure that they are no more than `INT_MAX` bytes), then cast to `int`.
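A minimal sketch of that bounds check, assuming libgit2's `git_error_set` and the `GIT_ERROR_NET` error class; the function and callback names here are illustrative:

```c
#include <limits.h>

/* Validate the size_t length before handing it to a callback that takes an int. */
static int report_progress_sketch(int (*progress_cb)(const char *str, int len, void *payload),
                                  const char *msg, size_t len, void *payload)
{
	if (len > INT_MAX) {
		git_error_set(GIT_ERROR_NET, "oversized progress message");
		return -1;
	}

	return progress_cb(msg, (int)len, payload);
}
```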
Edward Thomson 7afe788c 2019-05-21T14:27:46 smart transport: use size_t for sizes
Edward Thomson db7f1d9b 2019-05-21T14:21:58 local transport: cast message size to int explicitly Our progress information messages are short (and bounded by their format string), so cast the length to int for callers.
Edward Thomson 8048ba70 2019-05-21T14:18:40 winhttp: safely cast length to DWORD
Edward Thomson 2a4bcf63 2019-06-23T18:24:23 errors: use lowercase Use lowercase for our error messages, per our custom.
Edward Thomson 5d92e547 2019-06-08T17:28:35 oid: `is_zero` instead of `iszero` The only function that is named `issomething` (without underscore) was `git_oid_iszero`. Rename it to `git_oid_is_zero` for consistency with the rest of the library.
Edward Thomson 0b5ba0d7 2019-06-06T16:36:23 Rename opt init functions to `options_init` In libgit2 nomenclature, when we need to verb a direct object, we name a function `git_directobject_verb`. Thus, if we need to init an options structure named `git_foo_options`, then the name of the function that does that should be `git_foo_options_init`. The previous name, `git_foo_init_options`, is close - it _sounds_ as if it's initializing the options of a `foo`, but in fact `git_foo_options` is its own noun that should be respected. Deprecate the old names; they'll now call directly to the new ones.
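A sketch of that deprecation shim, using a hypothetical `git_foo_options` structure and `GIT_FOO_OPTIONS_INIT` template (the real shims follow this shape, but each structure's initializer differs):

```c
/* New, consistently named initializer. */
int git_foo_options_init(git_foo_options *opts, unsigned int version)
{
	GIT_INIT_STRUCTURE_FROM_TEMPLATE(
		opts, version, git_foo_options, GIT_FOO_OPTIONS_INIT);
	return 0;
}

/* Deprecated old name: forwards directly to the new function. */
int git_foo_init_options(git_foo_options *opts, unsigned int version)
{
	return git_foo_options_init(opts, version);
}
```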
Edward Thomson 7ea8630e 2019-04-07T20:11:59 http: free auth context on failure When we send HTTP credentials but the server rejects them, tear down the authentication context so that we can start fresh. To maintain this state, additionally move all of the authentication handling into `on_auth_required`.
Edward Thomson 005b5bc2 2019-04-07T17:55:23 http: reconnect to proxy on connection close When we're issuing a CONNECT to a proxy, we expect to keep-alive to the proxy. However, during authentication negotiations, the proxy may close the connection. Reconnect if the server closes the connection.
Edward Thomson d171fbee 2019-04-07T17:40:23 http: allow server to drop a keepalive connection When we have a keep-alive connection to the server, that server may legally drop the connection for any reason once a successful request and response has occurred. It's common for servers to drop the connection after some amount of time or number of requests have occurred.
Edward Thomson 9af1de5b 2019-03-24T20:49:57 http: stop on server EOF We stop the read loop when we have read all the data. We should also consider the server's feelings. If the server hangs up on us, we need to stop our read loop. Otherwise, we'll try to read from the server - and fail - ad infinitum.
Edward Thomson 539e6293 2019-03-22T19:06:46 http: teach auth mechanisms about connection affinity Instead of using `is_complete` to decide whether we have connection or request affinity for authentication mechanisms, set a boolean on the mechanism definition itself.
Edward Thomson 3e0b4b43 2019-03-22T18:52:03 http: maintain authentication across connections For request-based authentication mechanisms (Basic, Digest) we should keep the authentication context alive across socket connections, since the authentication headers must be transmitted with every request. However, we should continue to remove authentication contexts for mechanisms with connection affinity (NTLM, Negotiate) since we need to reauthenticate for every socket connection.
Edward Thomson ce72ae95 2019-03-22T10:53:30 http: simplify authentication mechanisms Hold an individual authentication context instead of trying to maintain all the contexts; we can select the preferred context during the initial negotiation. Subsequent authentication steps will re-use the chosen authentication (until such time as it's rejected) instead of trying to manage multiple contexts when all but one will never be used (since we can only authenticate with a single mechanism at a time). Also, when we're given a 401 or 407 in the middle of challenge/response handling, short-circuit immediately without incrementing the retry count. Multi-step authentication is expected, not a "retry", and should not be penalized as such. This means that we don't need to keep the contexts around and ensures that we do not unnecessarily fail for too many retries when we have challenge/response auth on a proxy and a server and potentially redirects in play as well.
Edward Thomson 6d931ba7 2019-03-22T16:35:59 http: don't set the header in the auth token
Edward Thomson 10718526 2019-03-09T13:53:16 http: don't reset replay count after connection A "connection" to a server is transient, and we may reconnect to a server in the midst of authentication failures (if the remote indicates that we should, via `Connection: close`) or in a redirect.
Edward Thomson 3192e3c9 2019-03-07T16:57:11 http: provide an NTLM authentication provider
Edward Thomson 79fc8281 2019-03-21T16:49:25 http: validate server's authentication types Ensure that the server supports the particular credential type that we're specifying. Previously we considered credential types as an input to an auth mechanism - since the HTTP transport only supported default credentials (via negotiate) and username/password credentials (via basic), this worked. However, if we are to add another mechanism that uses username/password credentials, we'll need to be careful to identify the types that are accepted.
Edward Thomson 5ad99210 2019-03-07T16:43:45 http: consume body on proxy auth failure We must always consume the full parser body if we're going to keep-alive. So in the authentication failure case, continue advancing the http message parser until it's complete, then we can retry the connection. Not doing so would mean that we have to tear the connection down and start over. Advancing through fully (even though we don't use the data) will ensure that we can retry a connection with keep-alive.
Edward Thomson 75b20458 2019-03-07T16:34:55 http: always consume body on auth failure When we get an authentication failure, we must consume the entire body of the response. If we only read half of the body (on the assumption that we can ignore the rest) then we will never complete the parsing of the message. This means that we will never set the complete flag, and our replay must actually tear down the connection and try again. This is particularly problematic for stateful authentication mechanisms (SPNEGO, NTLM) that require that we keep the connection alive. Note that the prior code is only a problem when the 401 that we are parsing is too large to be read in a single chunked read from the http parser. But now we will continue to invoke the http parser until we've got a complete message in the authentication failed scenario. Note that we need not do anything with the message, so when we get an authentication failed, we'll stop adding data to our buffer, we'll simply loop in the parser and let it advance its internal state.
Edward Thomson e87f912b 2019-03-21T15:29:52 http: don't realloc the request
Edward Thomson 10e8fe55 2019-03-21T13:55:54 transports: add an `is_complete` function for auth Some authentication mechanisms (like HTTP Basic and Digest) have a one-step mechanism to create credentials, but there are more complex mechanisms like NTLM and Negotiate that require challenge/response after negotiation, requiring several round-trips. Add an `is_complete` function to know when they have round-tripped enough to be a single authentication and should now either have succeeded or failed to authenticate.
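A hypothetical sketch of what such a mechanism definition could look like; the structure and field names below are illustrative, not the actual header contents:

```c
typedef struct {
	/* Mechanism name as it appears in WWW-Authenticate, e.g. "Basic" or "NTLM". */
	const char *name;

	/* Produce the next token to send, possibly in response to a server challenge. */
	int (*next_token)(git_buf *out, void *ctx, const char *challenge);

	/* Non-zero once the mechanism has round-tripped enough to have either
	 * succeeded or failed; one-step mechanisms can return 1 unconditionally. */
	int (*is_complete)(void *ctx);
} auth_mechanism_sketch;

static int basic_is_complete(void *ctx)
{
	(void)ctx;
	return 1; /* Basic needs exactly one request to succeed or fail */
}
```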
Edward Thomson 9050c69c 2019-03-09T17:24:16 http: examine keepalive status at message end We cannot examine the keep-alive status of the http parser in `http_connect`; it's too late and the critical information about whether keep-alive is supported has been destroyed. Per the documentation for `http_should_keep_alive`: > If http_should_keep_alive() in the on_headers_complete or > on_message_complete callback returns 0, then this should be > the last message on the connection. Query then and set the state.
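In http_parser terms that means querying inside the callback itself; a small sketch (`http_should_keep_alive` is the real http_parser call, the context type is hypothetical):

```c
static int on_message_complete(http_parser *parser)
{
	http_parser_context *ctx = parser->data; /* hypothetical per-connection state */

	/* Record keep-alive support now, while the parser state is still valid. */
	if (!http_should_keep_alive(parser))
		ctx->connection_should_close = 1;

	return 0;
}
```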
Edward Thomson 956ba48b 2019-03-14T10:36:40 http: increase the replay count Increase the permissible replay count; with multiple-step authentication schemes (NTLM, Negotiate), proxy authentication and redirects, we need to be mindful of the number of steps it takes to get connected. 7 seems high but can be exhausted quickly with just a single authentication failure over a redirected multi-state authentication pipeline.
Edward Thomson ee3d35cf 2019-04-07T19:15:21 http: support https for proxies
Edward Thomson 3d11b6c5 2019-03-11T20:36:09 winhttp: support default credentials for proxies We did not properly support default credentials for proxies, only for destination servers. Refactor the credential handling to support sending either username/password _or_ default credentials to either the proxy or the destination server. This actually shares the authentication logic between proxy servers and destination servers. Due to copy/pasta drift over time, they had diverged. Now they share a common logic which is: first, use credentials specified in the URL (if there were any), treating empty username and password (ie, "http://:@foo.com/") as default credentials, for compatibility with git. Next, call the credential callbacks. Finally, fallback to WinHTTP compatibility layers using built-in authentication like we always have. Allowing default credentials for proxies requires moving the security level downgrade into the credential setting routines themselves. We will update our security level to "high" by default which means that we will never send default credentials without prompting. (A lower setting, like the WinHTTP default of "medium" would allow WinHTTP to handle credentials for us, despite what a user may have requested with their structures.) Now we start with "high" and downgrade to "low" only after a user has explicitly requested default credentials.
Edward Thomson c6ab183e 2019-03-11T11:43:08 net: rename gitno_connection_data to git_net_url "Connection data" is an imprecise and largely incorrect name; these structures are actually parsed URLs. Provide a parser that takes a URL string and produces a URL structure (if it is valid). Separate the HTTP redirect handling logic from URL parsing, keeping a `gitno_connection_data_handle_redirect` whose only job is redirect handling logic and does not parse URLs itself.
Edward Thomson 810cefd9 2019-06-08T17:14:00 credentials: suffix the callbacks with `_cb` The credential callbacks should match the other callback naming conventions, using the `_cb` suffix instead of a `_callback` suffix.
Etienne Samson b51789ac 2019-04-16T13:20:08 transports: make use of the `GIT_CONTAINER_OF` macro
Edward Thomson 59001e83 2019-02-21T11:41:19 remote: rename git_push_transfer_progress callback The `git_push_transfer_progress` is a callback and as such should be suffixed with `_cb` for consistency. Rename `git_push_transfer_progress` to `git_push_transfer_progress_cb`.
Edward Thomson a1ef995d 2019-02-21T10:33:30 indexer: use git_indexer_progress throughout Update internal usage of `git_transfer_progress` to `git_indexer_progress`.
Patrick Steinhardt 5265b31c 2019-01-23T15:00:20 streams: fix callers potentially only writing partial data Similar to the write(3) function, implementations of `git_stream_write` do not guarantee that all bytes are written. Instead, they return the number of bytes that actually have been written, which may be smaller than the total number of bytes. Furthermore, due to an interface design issue, we cannot ever write more than `SSIZE_MAX` bytes at once, as otherwise we cannot represent the number of bytes written to the caller. Unfortunately, no caller of `git_stream_write` ever checks the return value, except to verify that no error occurred. Due to this, they are susceptible to the case where only partial data has been written. Fix this by introducing a new function `git_stream__write_full`. In contrast to `git_stream_write`, it will always return either success or failure, without returning the number of bytes written. Thus, it is able to write up to `SIZE_MAX` bytes by looping around `git_stream_write` until all data has been written. Adjust all callers except the BIO callbacks in our mbedtls and OpenSSL streams, which already do the right thing and require the number of bytes written.
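A minimal sketch of such a write-full loop, assuming the internal `git_stream_write` signature (a `ssize_t` count of bytes written, or a negative error code); the function name is illustrative:

```c
static int stream_write_full_sketch(git_stream *stream, const char *data, size_t len)
{
	while (len > 0) {
		ssize_t written = git_stream_write(stream, data, len, 0);

		if (written < 0)
			return (int)written; /* propagate the underlying error */

		data += written;
		len -= (size_t)written;
	}

	return 0;
}
```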
Edward Thomson 4947216f 2019-01-21T11:11:27 git transport: only write INT_MAX bytes The transport code returns an `int` with the number of bytes written; thus only attempt to write at most `INT_MAX`.
Sven Strickroth bff7aed2 2019-01-24T16:44:04 Don't use deprecated constants Follow up for PR #4917. Signed-off-by: Sven Strickroth <email@cs-ware.de>
Edward Thomson f673e232 2018-12-27T13:47:34 git_error: use new names in internal APIs and usage Move to the `git_error` name in the internal API for error-related functions.
Edward Thomson 1758636b 2019-01-19T01:38:34 Merge pull request #4939 from libgit2/ethomson/git_ref Move `git_ref_t` to `git_reference_t`
Edward Thomson abe23675 2019-01-17T20:09:05 Merge pull request #4925 from lhchavez/fix-a-bunch-of-warnings Fix a bunch of warnings
Edward Thomson ed8cfbf0 2019-01-17T00:32:31 references: use new names in internal usage Update internal usage to use the `git_reference` names for constants.
Jason Haslam 35d86c77 2019-01-14T10:14:36 proxy: fix crash on remote connection with GIT_PROXY_AUTO but no proxy is detected
lhchavez 321d19c1 2019-01-06T08:36:06 Windows is hard.
lhchavez 7b453e7e 2019-01-05T22:12:48 Fix a bunch of warnings This change fixes a bunch of warnings that were discovered by compiling with `clang -target=i386-pc-linux-gnu`. It turned out that the intrinsics were not necessarily being used on all platforms, especially with GCC, since it does not support `__has_builtin`. Some more warnings were gleaned from the Windows build, but I stopped when I saw that some third-party dependencies (e.g. zlib) have warnings of their own, so we might never be able to enable -Werror there.
Edward Thomson 168fe39b 2018-11-28T14:26:57 object_type: use new enumeration names Use the new object_type enumeration names within the codebase.
Edward Thomson 30ac46aa 2018-11-28T10:12:43 http: reset replay_count upon connection Reset the replay_count upon a successful connection. It's possible that we could encounter a situation where we connect successfully but need to replay a request - for example, a connection and initial request succeeds without authentication but a subsequent call does require authentication. Reset the replay count upon any successful request to afford subsequent replays room to maneuver.
Edward Thomson 52478d7d 2018-11-18T19:54:49 http: don't allow SSL connections to a proxy Temporarily disallow SSL connections to a proxy until we can understand the valgrind warnings when tunneling OpenSSL over OpenSSL.
Edward Thomson 41f620d9 2018-11-18T19:10:50 http: only load proxy configuration during connection Only load the proxy configuration during connection; we need this data when we're going to connect to the server, however we may mutate it after connection (connecting through a CONNECT proxy means that we should send requests like normal). If we reload the proxy configuration but do not actually reconnect (because we're in a keep-alive session) then we will reload the proxy configuration that we should have mutated. Thus, only load the proxy configuration when we know that we're going to reconnect.
Edward Thomson 0467606f 2018-11-18T11:00:11 http: disallow repeated headers from servers Don't allow servers to send us multiple Content-Type, Content-Length or Location headers.
Edward Thomson 21142c5a 2018-10-29T10:04:48 http: remove cURL We previously used cURL to support HTTP proxies. Now that we've added this support natively, we can remove the curl dependency.
Edward Thomson 5d4e1e04 2018-10-28T21:27:56 http: use CONNECT to talk to proxies Natively support HTTPS connections through proxies by speaking CONNECT to the proxy and then adding a TLS connection on top of the socket.
Edward Thomson 43b592ac 2018-10-25T08:49:01 tls: introduce a wrap function Introduce `git_tls_stream_wrap` which will take an existing `stream` with an already connected socket and begin speaking TLS on top of it. This is useful if you've built a connection to a proxy server and you wish to begin CONNECT over it to tunnel a TLS connection. Also update the pluggable TLS stream layer so that it can accept a registration structure that provides an `init` and `wrap` function, instead of a single initialization function.
Edward Thomson b2ed778a 2018-11-18T22:20:10 http transport: reset error message on cert failure Store the error message from the underlying TLS library before calling the certificate callback. If it refuses to act (demonstrated by returning GIT_PASSTHROUGH) then restore the error message. Otherwise, if the callback does not set an error message, set a sensible default that implicates the callback itself.
Edward Thomson 2ce2315c 2018-10-22T17:33:45 http transport: support cert check for proxies Refactor certificate checking so that it can easily be called for proxies or the remote server.
Edward Thomson 74c6e08e 2018-10-22T14:56:53 http transport: provide proxy credentials
Edward Thomson 496da38c 2018-10-22T12:48:45 http transport: refactor storage Create a simple data structure that contains information about the server being connected to, whether that's the actual remote endpoint (git server) or an intermediate proxy. This allows for organization of streams, authentication state, etc.
Edward Thomson 6af8572c 2018-10-22T11:29:01 http transport: cap number of authentication replays Put a limit on the number of authentication replays in the HTTP transport. Standardize on 7 replays for authentication or redirects, which matches the behavior of the WinHTTP transport.
Edward Thomson 22654812 2018-10-22T11:24:05 http transport: prompt for proxy credentials Teach the HTTP transport how to prompt for proxy credentials.
Edward Thomson 0328eef6 2018-10-22T11:14:06 http transport: further refactor credential handling Prepare credential handling to understand both git server and proxy server authentication.
Edward Thomson 32cb56ce 2018-10-22T10:16:54 http transport: refactor credential handling Factor credential handling into its own function. Additionally, add safety checks to ensure that we are in a valid state - that we have received a valid challenge from the server and that we have configuration to respond to that challenge.
Edward Thomson e6e399ab 2018-10-22T09:49:54 http transport: use HTTP proxies when requested The HTTP transport should understand how to apply proxies when configured with `GIT_PROXY_SPECIFIED`. When a proxy is configured, the HTTP transport will now connect to the proxy (instead of directly to the git server), and will request the properly-formed URL of the git server endpoint.
Edward Thomson e6f1931a 2018-10-22T00:09:24 http: rename http subtransport's `io` to `gitserver_stream` Rename `http_subtransport->io` to `http_subtransport->gitserver_stream` to clarify its use, especially as we might have additional streams (eg for a proxy) in the future.
Edward Thomson c07ff4cb 2018-10-21T14:17:06 http: rename `connection_data` -> `gitserver_data` Rename the `connection_data` struct member to `gitserver_data`, to disambiguate future `connection_data`s that apply to the proxy, not the final server endpoint.
Edward Thomson ed72465e 2018-10-13T19:16:54 proxy: propagate proxy configuration errors
Patrick Steinhardt c97d302d 2018-11-28T13:45:41 Merge pull request #4879 from libgit2/ethomson/defer_cert_cred_cb Allow certificate and credential callbacks to decline to act
Edward Thomson a2e6e0ea 2018-11-06T14:15:43 transport: allow cred/cert callbacks to return GIT_PASSTHROUGH Allow credential and certificate checking callbacks to return GIT_PASSTHROUGH, indicating that they do not want to act. Introduce this to support in both the http and ssh callbacks. Additionally, enable the same mechanism for certificate validation. This is most useful to disambiguate any meaning in the publicly exposed credential and certificate functions (`git_transport_smart_credentials` and `git_transport_smart_certificate_check`) but it may be more generally useful for callers to be able to defer back to libgit2.
Edward Thomson 4ef2b889 2018-11-18T22:56:28 Merge pull request #4882 from kc8apf/include_port_in_host_header transport/http: Include non-default ports in Host header
Edward Thomson 8ee10098 2018-11-06T13:10:30 transport: see if cert/cred callbacks exist before calling them Custom transports may want to ask libgit2 to invoke a configured credential or certificate callback; however they likely do not know if a callback was actually configured. Return a sentinel value (GIT_PASSTHROUGH) if there is no callback configured instead of crashing.
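A sketch of that guard; the `transport_smart` field names are illustrative, while `GIT_PASSTHROUGH` is the real sentinel error code:

```c
static int acquire_cred_sketch(git_cred **out, transport_smart *t,
                               const char *user, unsigned int allowed_types)
{
	if (!t->cred_acquire_cb)
		return GIT_PASSTHROUGH; /* nothing configured: let the caller decide */

	return t->cred_acquire_cb(out, t->url, user, allowed_types,
	                          t->cred_acquire_payload);
}
```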
Rick Altherr 83b35181 2018-10-19T10:54:38 transport/http: Include non-default ports in Host header When the port is omitted, the server assumes the default port for the service is used (see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Host). In cases where the client provided a non-default port, it should be passed along. This hasn't been an issue so far as the git protocol doesn't include server-generated URIs. I encountered this when implementing Rust registry support for Sonatype Nexus. Rust's registry uses a git repository for the package index. Clients look at a file in the root of the package index to find the base URL for downloading the packages. Sonatype Nexus looks at the incoming HTTP request (Host header and URL) to determine the client-facing URL base as it may be running behind a load balancer or reverse proxy. This client-facing URL base is then used to construct the package download base URL. When libgit2 fetches the index from Nexus on a non-default port, Nexus trusts the incorrect Host header and generates an incorrect package download base URL.
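A standalone sketch of the Host-header rule this commit describes (function name and buffer handling are illustrative):

```c
#include <stdio.h>
#include <string.h>

/* Append the port only when it differs from the scheme's default
 * (80 for http, 443 for https). */
static void format_host_header(char *out, size_t outlen,
                               const char *host, const char *port, int use_ssl)
{
	const char *default_port = use_ssl ? "443" : "80";

	if (strcmp(port, default_port) == 0)
		snprintf(out, outlen, "Host: %s\r\n", host);
	else
		snprintf(out, outlen, "Host: %s:%s\r\n", host, port);
}
```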
Edward Thomson 9ad96367 2018-11-07T15:31:21 smart transport: only clear url on hard reset After creating a transport for a server, we expect to be able to call `connect`, then invoke subsequent `action` calls. We provide the URL to these `action` calls, although our built-in transports happen to ignore it since they've already parsed it into an internal format that they intend to use (`gitno_connection_data`). In ca2eb4608243162a13c427e74526b6422d5a6659, we began clearing the URL field after a connection, meaning that subsequent calls to transport `action` callbacks would get a NULL URL, which went undetected since the builtin transports ignore the URL when they're already connected (instead of re-parsing it into an internal format). Downstream custom transport implementations (eg, LibGit2Sharp) did notice this change, however. Since `reset_stream` is called even when we're not closing the subtransport, update to only clear the URL when we're closing the subtransport. This ensures that `action` calls will get the correct URL information even after a connection.
Patrick Steinhardt 2613fbb2 2018-10-18T11:58:14 global: replace remaining use of `git__strtol32` Replace remaining uses of the `git__strtol32` function. While these uses are all safe as the strings were either sanitized or from a trusted source, we want to remove `git__strtol32` altogether to avoid future misuse.
Carlos Martín Nieto 9b6e4081 2018-10-15T17:08:38 Merge commit 'afd10f0' (Follow 308 redirects)
Zander Brown afd10f0b 2018-10-13T09:31:20 Follow 308 redirects (as used by GitLab)
Anders Borum 475db39b 2018-10-06T12:58:06 ignore unsupported http authentication schemes auth_context_match returns 0 instead of -1 for unknown schemes to not fail in situations where some authentication schemes are supported and others are not. apply_credentials is adjusted to handle auth_context_match returning 0 without producing an authentication context.
Patrick Steinhardt 1bc5b05c 2018-10-03T16:17:21 smart_pkt: do not accept callers passing in no line length Right now, we simply ignore the `linelen` parameter of `git_pkt_parse_line` in case the caller passed in zero. But in fact, we never want to assume anything about the provided buffer length and always want the caller to pass in the available number of bytes. Checking all the callers, one can see that the function is never called with a buffer length of zero, and thus we are safe to remove this check.
Patrick Steinhardt c05790a8 2018-08-09T11:16:15 smart_pkt: return parsed length via out-parameter The `parse_len` function currently directly returns the parsed length of a packet line or an error code in case there was an error. Instead, convert this to our usual style of using the return value as error code only and returning the actual value via an out-parameter. Thus, we can now convert the output parameter to an unsigned type, as the size of a packet cannot ever be negative. While at it, we also move the check whether the input buffer is long enough into `parse_len` itself. We don't really want to pass around potentially non-NUL-terminated buffers to functions without also passing along the length, as this is dangerous in the unlikely case where other callers for that function get added. Note that we need to make sure, though, not to mess with `GIT_EBUFS` error codes, as these do not indicate an error to the caller but rather that more data needs to be fetched.
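A sketch of that calling convention: the return value carries only an error code (or `GIT_EBUFS` for "need more data"), and the parsed length travels through the out-parameter. The hex parsing below is illustrative, not the exact `parse_len` body:

```c
static int parse_len_sketch(size_t *out, const char *line, size_t linelen)
{
	size_t len = 0, i;

	if (linelen < 4)
		return GIT_EBUFS; /* not an error: the caller should read more data */

	for (i = 0; i < 4; i++) {
		char c = line[i];

		len <<= 4;
		if (c >= '0' && c <= '9')
			len += (size_t)(c - '0');
		else if (c >= 'a' && c <= 'f')
			len += (size_t)(c - 'a' + 10);
		else if (c >= 'A' && c <= 'F')
			len += (size_t)(c - 'A' + 10);
		else
			return -1; /* invalid hex digit in the length prefix */
	}

	*out = len;
	return 0;
}
```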
Patrick Steinhardt 0b3dfbf4 2018-08-09T11:13:59 smart_pkt: reorder and rename parameters of `git_pkt_parse_line` The parameters of the `git_pkt_parse_line` function are quite confusing. First, there is no real indicator what the `out` parameter is actually all about, and it's not really clear what the `bufflen` parameter refers to. Reorder and rename the parameters to make this more obvious.
Patrick Steinhardt 5fabaca8 2018-08-09T11:04:42 smart_pkt: fix buffer overflow when parsing "unpack" packets When checking whether an "unpack" packet returned the "ok" status or not, we use a call to `git__prefixcmp`. In case where the passed line isn't properly NUL terminated, though, this may overrun the line buffer. Fix this by using `git__prefixncmp` instead.
Patrick Steinhardt b5ba7af2 2018-08-09T11:03:37 smart_pkt: fix "ng" parser accepting non-space character When parsing "ng" packets, we blindly assume that the character immediately following the "ng" prefix is a space and skip it. As the calling function doesn't make sure that this is the case, we can thus end up blindly accepting an invalid packet line. Fix the issue by using `git__prefixncmp`, checking whether the line starts with "ng ".
Patrick Steinhardt a9f1ca09 2018-08-09T11:01:00 smart_pkt: fix buffer overflow when parsing "ok" packets There are two different buffer overflows present when parsing "ok" packets. First, we never verify whether the line already ends after "ok", but directly go ahead and also try to skip the expected space after "ok". Second, we then go ahead and use `strchr` to scan for the terminating newline character. But in case where the line isn't terminated correctly, this can overflow the line buffer. Fix the issues by using `git__prefixncmp` to check for the "ok " prefix and only checking for a trailing '\n' instead of using `memchr`. This also fixes the issue of us always requiring a trailing '\n'. Reported by oss-fuzz, issue 9749: Crash Type: Heap-buffer-overflow READ {*} Crash Address: 0x6310000389c0 Crash State: ok_pkt git_pkt_parse_line git_smart__store_refs Sanitizer: address (ASAN)
Patrick Steinhardt bc349045 2018-08-09T10:38:10 smart_pkt: fix buffer overflow when parsing "ACK" packets We are being quite lenient when parsing "ACK" packets. First, we didn't correctly verify that we're not overrunning the provided buffer length, which we fix here by using `git__prefixncmp` instead of `git__prefixcmp`. Second, we do not verify that the actual contents make any sense at all, as we simply ignore errors when parsing the ACKs OID and any unknown status strings. This may result in a parsed packet structure with invalid contents, which is being silently passed to the caller. This is being fixed by performing proper input validation and checking of return codes.
Patrick Steinhardt 5edcf5d1 2018-08-09T10:57:06 smart_pkt: adjust style of "ref" packet parsing function While the function parsing ref packets doesn't have any immediately obvious buffer overflows, its style is different from all the other parsing functions. Instead of checking the buffer length as we go, it does a check up-front. This causes the code to seem a lot more magical than it really is due to some magic constants. Refactor the function to instead make use of the style of the other packet parsers and verify buffer lengths as we go.
Patrick Steinhardt 786426ea 2018-08-09T10:46:58 smart_pkt: check whether error packets are prefixed with "ERR " In the `git_pkt_parse_line` function, we determine what kind of packet a given packet line contains by simply checking for the prefix of that line. Except for "ERR" packets, we always only check for the immediate identifier without the trailing space (e.g. we check for an "ACK" prefix, not for "ACK "). But for "ERR" packets, we do in fact include the trailing space in our check. This is not really much of a problem at all, but it is inconsistent with all the other packet types and thus causes confusion when the `err_pkt` function just immediately skips the space without checking whether it overflows the line buffer. Adjust the check in `git_pkt_parse_line` to not include the trailing space and instead move it into `err_pkt` for consistency.
Patrick Steinhardt 40fd84cc 2018-08-09T10:46:26 smart_pkt: explicitly avoid integer overflows when parsing packets When parsing data, progress or error packets, we need to copy the contents of the rest of the current packet line into the flex-array of the parsed packet. To keep track of this array's length, we then assign the remaining length of the packet line to the structure. We do have a mismatch of types here, as the structure's `len` field is a signed integer, while the length that we are assigning has type `size_t`. On nearly all platforms, this shouldn't pose any problems at all. The line length can at most be 16^4, as the line's length is being encoded by exactly four hex digits. But on a platforms with 16 bit integers, this assignment could cause an overflow. While such platforms will probably only exist in the embedded ecosystem, we still want to avoid this potential overflow. Thus, we now simply change the structure's `len` member to be of type `size_t` to avoid any integer promotion.
Patrick Steinhardt 4a5804c9 2018-08-09T10:36:44 smart_pkt: honor line length when determining packet type When we parse the packet type of an incoming packet line, we do not verify that we don't overflow the provided line buffer. Fix this by using `git__prefixncmp` instead and passing in `len`. As we have previously already verified that `len <= linelen`, we thus won't ever overflow the provided buffer length.
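For reference, a standalone sketch of the bounded prefix comparison these commits switch to; it never reads more than `str_n` bytes of a possibly non-NUL-terminated line (the real `git__prefixncmp` lives in libgit2's util code and may differ in detail):

```c
#include <stddef.h>

/* Returns 0 if the first str_n bytes of str start with prefix. */
static int prefixncmp_sketch(const char *str, size_t str_n, const char *prefix)
{
	size_t i;

	for (i = 0; i < str_n && prefix[i] != '\0'; i++) {
		if (str[i] != prefix[i])
			return (unsigned char)str[i] - (unsigned char)prefix[i];
	}

	return prefix[i] != '\0' ? -1 : 0; /* -1: line too short to contain the prefix */
}
```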
Patrick Steinhardt 9a193102 2018-08-24T11:01:39 Merge pull request #4774 from tiennou/fix/clang-analyzer Coverity flavored clang analyzer fixes
Etienne Samson 1c949ce1 2018-08-21T02:11:32 transport/http: do not return success if we failed to get a scheme Otherwise we return a NULL context, which will get dereferenced in apply_credentials.
Christian Schlack 50dd7fea 2018-08-11T13:06:14 Fix 'invalid packet line' for ng packets containing errors
Patrick Steinhardt 9ada072e 2018-08-06T13:31:23 Merge pull request #4758 from pks-t/pks/smart-pkt-oob-read smart_pkt: fix potential OOB-read when processing ng packet
Edward Thomson ba55592f 2018-08-02T20:34:56 Merge pull request #4743 from Agent00Log/dev/winbugfixes Windows: default credentials / fallback credential handling
Henning Schaffaf ccbffbae 2018-07-30T13:39:21 Only uninitialize if the call to CoInitializeEx was successful
Edward Thomson b00a09b0 2018-07-27T20:14:27 Merge pull request #4731 from libgit2/ethomson/wintls_fix winhttp: retry erroneously failing requests
Henning Schaffaf 8c21cb5c 2018-07-26T09:52:32 Fix fallback credentials: The call to CoInitializeEx fails if it has previously been set to a different mode.
Henning Schaffaf c9dc30ff 2018-07-26T09:52:21 Fix default credentials: The WinHttpSetCredentials auth scheme must only be one of the supported schemes.
Edward Thomson ca2eb460 2018-07-20T21:50:58 smart subtransport: free url when resetting stream Free the url field when resetting the stream to avoid leaking it.
Edward Thomson dc371e3c 2018-07-20T08:20:48 winhttp: retry erroneously failing requests Early Windows TLS 1.2 implementations have an issue during key exchange with OpenSSL implementations that cause negotiation to fail with the error "the buffer supplied to a function was too small." This is a transient error on the connection, so when that error is received, retry up to 5 times to create a connection to the remote server before actually giving up.
Patrick Steinhardt 0652abaa 2018-07-20T12:56:49 Merge pull request #4702 from tiennou/fix/coverity Assorted Coverity fixes
Patrick Steinhardt 19bed3e2 2018-07-19T13:00:42 smart_pkt: fix potential OOB-read when processing ng packet OSS-fuzz has reported a potential out-of-bounds read when processing a "ng" smart packet: ==1==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6310000249c0 at pc 0x000000493a92 bp 0x7ffddc882cd0 sp 0x7ffddc882480 READ of size 65529 at 0x6310000249c0 thread T0 SCARINESS: 26 (multi-byte-read-heap-buffer-overflow) #0 0x493a91 in __interceptor_strchr.part.35 /src/llvm/projects/compiler-rt/lib/sanitizer_common/sanitizer_common_interceptors.inc:673 #1 0x813960 in ng_pkt libgit2/src/transports/smart_pkt.c:320:14 #2 0x810f79 in git_pkt_parse_line libgit2/src/transports/smart_pkt.c:478:9 #3 0x82c3c9 in git_smart__store_refs libgit2/src/transports/smart_protocol.c:47:12 #4 0x6373a2 in git_smart__connect libgit2/src/transports/smart.c:251:15 #5 0x57688f in git_remote_connect libgit2/src/remote.c:708:15 #6 0x52e59b in LLVMFuzzerTestOneInput /src/download_refs_fuzzer.cc:145:9 #7 0x52ef3f in ExecuteFilesOnyByOne(int, char**) /src/libfuzzer/afl/afl_driver.cpp:301:5 #8 0x52f4ee in main /src/libfuzzer/afl/afl_driver.cpp:339:12 #9 0x7f6c910db82f in __libc_start_main /build/glibc-Cl5G7W/glibc-2.23/csu/libc-start.c:291 #10 0x41d518 in _start When parsing an "ng" packet, we keep track of both the current position as well as the remaining length of the packet itself. But instead of taking care not to exceed the length, we pass the current pointer's position to `strchr`, which will search for a certain character until hitting NUL. It is thus possible to create a crafted packet which doesn't contain a NUL byte to trigger an out-of-bounds read. Fix the issue by instead using `memchr`, passing the remaining length as restriction. Furthermore, verify that we actually have enough bytes left to produce a match at all. OSS-Fuzz-Issue: 9406
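The heart of the fix is the bounded scan; a minimal sketch of the difference (function and variable names are illustrative, not the actual `ng_pkt` code):

```c
#include <string.h>

/* Find the terminating newline without reading past the `len` bytes that
 * actually belong to this packet line; strchr would scan until a NUL byte,
 * which a crafted packet need not contain. */
static const char *find_pkt_line_end(const char *line, size_t len)
{
	return memchr(line, '\n', len); /* NULL if the line is unterminated */
}
```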
Patrick Steinhardt fa401a32 2018-07-19T08:20:04 Merge pull request #4704 from nelhage/no-pkt-pack Remove GIT_PKT_PACK entirely
Nelson Elhage 388149f5 2018-07-15T17:25:26 No need for this placeholder.