src/transports


Log

Author Commit Date Message
Edward Thomson d299a7aa 2022-02-08T20:42:45 Merge pull request #6205 from ccstolley/ccs_fix_http_push_timeout push: Prepare pack before sending pack header.
Colin Stolley aceac672 2022-02-08T12:14:50 Rename prepare_pack() to git_packbuilder__prepare()
Colin Stolley 19ec5923 2022-02-07T09:29:40 push: Prepare pack before sending pack header. For large pushes, preparing the pack can take a while. Currently we send the pack header first, followed by preparing the pack and then finally sending the pack. Unfortunately github.com will terminate a git-receive-pack command over http if it is idle for more than 10 seconds. This is easily exceeded for a large push, and so the push is rejected with a Broken Pipe error. This patch moves the pack preparation ahead of sending the pack header, so that the timeout is avoided. prepare_pack() can be called multiple times but will only do the work once, so the original PREPARE_PACK call inside git_packbuilder_foreach() remains.
Peter Pettersson 53e8deb9 2022-01-23T22:33:37 Remove stray '// TODO'
Edward Thomson d298059e 2022-01-17T21:41:12 Merge pull request #6167 from libgit2/ethomson/scp_urls_with_ports Support scp style paths with ports
Edward Thomson 616628dd 2022-01-17T21:39:35 Merge branch 'main' into typos
Edward Thomson 27307ed6 2022-01-11T10:39:57 ssh: use url parsing functionality Instead of trying to figure out a repo's path from a URL by hand, parse a URL using the parsing functionality.
Edward Thomson e2bda60a 2022-01-10T21:12:13 url: introduce git_net_url_parse_scp Provide a mechanism for parsing scp-style paths (eg `git@github.com:libgit2/libgit2` into the url form `ssh://git@github.com/libgit2/libgit2`).
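A minimal illustration of the conversion that commit describes, assuming a standalone helper (`scp_to_url` is a made-up name, not the internal `git_net_url_parse_scp` implementation):

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: turn an scp-style path into its ssh:// URL form,
 * as the commit above describes. Not the actual libgit2 code. */
static int scp_to_url(char *out, size_t outsz, const char *scp)
{
	const char *colon = strchr(scp, ':');

	if (!colon || strstr(scp, "://"))
		return -1; /* not an scp-style path (or already a URL) */

	if (snprintf(out, outsz, "ssh://%.*s/%s",
	             (int)(colon - scp), scp, colon + 1) >= (int)outsz)
		return -1; /* truncated */

	return 0;
}

int main(void)
{
	char url[256];

	if (scp_to_url(url, sizeof(url), "git@github.com:libgit2/libgit2") == 0)
		printf("%s\n", url); /* ssh://git@github.com/libgit2/libgit2 */

	return 0;
}
```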
Edward Thomson 2bfd8ddc 2022-01-17T21:05:17 Merge pull request #6175 from libgit2/ethomson/follow_redirects_initial remote: support `http.followRedirects` (`false` and `initial`) and follow initial redirects by default
Peter Pettersson a979cf3d 2021-11-17T22:19:47 c99: change single bit flags to unsigned
Edward Thomson fda59a76 2022-01-04T07:05:20 remote: honor `http.followRedirects` configuration option
Edward Thomson 515daeaf 2022-01-04T06:16:30 remote: introduce `follow_redirects` connect option Give callers the ability to select how to handle redirects - either supporting redirects during the initial connection (so that, for example, `git.example.com/repo` can redirect to `github.com/example/repo`) or all/no redirects. This is for compatibility with git.
Edward Thomson 342e55ac 2021-12-18T10:13:18 url: optionally allow off-site redirects In redirect application logic, (optionally) allow off-site redirects.
Edward Thomson 6fc6eeb6 2021-12-24T15:14:38 remote: introduce `git_remote_connect_options` The existing mechanism for providing options to remote fetch/push calls, and subsequently to transports, is unsatisfactory. It requires an options structure to avoid breaking the API and callback signatures. 1. Introduce `git_remote_connect_options` to satisfy those needs. 2. Add a new remote connection API, `git_remote_connect_ext`, that will take this new options structure. Existing `git_remote_connect` calls will proxy to that. `git_remote_fetch` and `git_remote_push` will proxy their fetch/push options to that as well. 3. Define the interaction between `git_remote_connect` and fetch/push. Connect _may_ be called before fetch/push, but _need not_ be. The semantics of which options would be used for these operations were not specified if you specified options for both connect _and_ fetch. Now these are defined: the fetch or push options will be used _if_ they were specified. Otherwise, the connect options will be used if they were specified. Otherwise, the library's defaults will be used. 4. Update the transports to understand `git_remote_connect_options`. This is a breaking change to the systems API.
Dimitris Apostolou 90df4302 2022-01-05T12:18:05 Fix typos
Peter Pettersson 7dcc29fc 2021-10-22T22:51:59 Make enum in src,tests and examples C90 compliant by removing trailing comma.
Edward Thomson 95117d47 2021-10-31T09:45:46 path: separate git-specific path functions from util Introduce `git_fs_path`, which operates on generic filesystem paths. `git_path` will be kept for only git-specific path functionality (for example, checking for `.git` in a path).
Edward Thomson f0e693b1 2021-09-07T17:53:49 str: introduce `git_str` for internal, `git_buf` is external libgit2 has two distinct requirements that were previously solved by `git_buf`. We require: 1. A general purpose string class that provides a number of utility APIs for manipulating data (eg, concatenating, truncating, etc). 2. A structure that we can use to return strings to callers that they can take ownership of. By using a single class (`git_buf`) for both of these purposes, we have confused the API to the point that refactorings are difficult and reasoning about correctness is also difficult. Move the utility class `git_buf` to be called `git_str`: this represents its general purpose, as an internal string buffer class. The name also is an homage to Junio Hamano ("gitstr"). The public API remains `git_buf`, and has a much smaller footprint. It is generally only used as an "out" param with strict requirements that follow the documentation. (Exceptions exist for some legacy APIs to avoid breaking callers unnecessarily.) Utility functions exist to convert a user-specified `git_buf` to a `git_str` so that we can call internal functions, then converting it back again.
punkymaniac 379c4646 2021-09-09T19:49:04 Fix coding style for pointer Make some syntax change to follow coding style.
Edward Thomson 3c0f14cc 2021-09-01T20:34:28 remote: refactor proxy detection Update the proxy detection for a remote. 1. Honor `http.<url>.proxy` syntax for a remote's direct URL and parent URLs. 2. Honor an empty configuration URL to override a proxy configuration. Add tests to ensure that configuration specificity is honored.
Mathieu Parent e5a32774 2021-02-11T22:53:16 Add NO_PROXY env support Item 2 of 3 from #4164 Signed-off-by: Mathieu Parent <math.parent@gmail.com>
Edward Thomson 4e8840fd 2021-08-30T18:20:35 Merge pull request #6022 from lollipopman/connect-proxy-host-header Set Host Header to match CONNECT authority target
Jesse Hathaway fc5d0e80 2021-08-30T21:24:54 Set Host Header to match CONNECT authority target Prior to this change, for CONNECT requests, the Host header was set to the host and port of the target http proxy. However, per RFC 7230 for HTTP/1.1 this is incorrect, as the Host header should match the target of the CONNECT request, as detailed in sections 5.3.3 and 5.4. 5.3.3. authority-form The authority-form of request-target is only used for CONNECT requests (Section 4.3.6 of [RFC7231]). authority-form = authority When making a CONNECT request to establish a tunnel through one or more proxies, a client MUST send only the target URI's authority component (excluding any userinfo and its "@" delimiter) as the request-target. For example, CONNECT www.example.com:80 HTTP/1.1 5.4. Host <snip> A client MUST send a Host header field in all HTTP/1.1 request messages. If the target URI includes an authority component, then a client MUST send a field-value for Host that is identical to that authority component, excluding any userinfo subcomponent and its "@" delimiter (Section 2.7.1). If the authority component is missing or undefined for the target URI, then a client MUST send a Host header field with an empty field-value. This issue was noticed when proxying requests through HAProxy 2.2, which rejects these invalid HTTP requests.
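To make the RFC 7230 requirement concrete, here is a hedged sketch (not libgit2's request writer) of the corrected request: both the authority-form request-target and the Host header name the tunnel target, not the proxy the request is sent to.

```c
#include <stdio.h>

/* Sketch only: format a CONNECT request per RFC 7230 §5.3.3/§5.4. */
int main(void)
{
	const char *target_host = "www.example.com";
	int target_port = 443;
	char request[256];

	snprintf(request, sizeof(request),
	         "CONNECT %s:%d HTTP/1.1\r\n"
	         "Host: %s:%d\r\n"
	         "\r\n",
	         target_host, target_port, target_host, target_port);

	fputs(request, stdout);
	return 0;
}
```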
Edward Thomson 5eb2b0b3 2021-08-30T08:27:27 httpclient: actually return `GIT_EAUTH`
Edward Thomson 9937967e 2021-08-29T21:29:14 Merge branch 'main' into http-use-eauth
Edward Thomson 5158b0b7 2021-08-24T11:56:22 ntlmclient: update to ntlmclient 0.9.1 The ntlmclient dependency can now dynamically load OpenSSL.
Edward Thomson 28841241 2021-08-05T08:12:28 http: don't require a password Attempt authentication when a username is presented but a password is not; this can happen in particular when users are doing token authentication and specifying the token in the URL itself. For example, `https://token@host/` is a valid URI and should be treated as a username of `token` with an empty password.
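A small illustration of how a caller can cooperate with that behavior, assuming a hypothetical credential callback named `cred_cb` (the callback and its wiring are not part of the commit above):

```c
#include <git2.h>

/* Hypothetical credential callback: if the URL carried a token as its
 * username (e.g. https://token@host/), return it with an empty password,
 * matching the behavior described in the commit above. */
static int cred_cb(
	git_credential **out,
	const char *url,
	const char *username_from_url,
	unsigned int allowed_types,
	void *payload)
{
	(void)url;
	(void)payload;

	if ((allowed_types & GIT_CREDENTIAL_USERPASS_PLAINTEXT) && username_from_url)
		return git_credential_userpass_plaintext_new(out, username_from_url, "");

	return GIT_PASSTHROUGH;
}
```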
Peter Pettersson e4e173e8 2021-07-15T21:00:02 Allow compilation on systems without CLOCK_MONOTONIC Makes usage of CLOCK_MONOTONIC conditional and makes functions that use git__timer handle clock resynchronization. Call gettimeofday with tzp set to NULL as required by https://pubs.opengroup.org/onlinepubs/9699919799/functions/gettimeofday.html
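The POSIX constraint referenced there is narrow but easy to get wrong; a minimal sketch of the fallback timer call, assuming CLOCK_MONOTONIC is unavailable:

```c
#include <stdio.h>
#include <sys/time.h>

/* Fallback timing sketch for systems without CLOCK_MONOTONIC: the
 * timezone argument to gettimeofday must be NULL, per the POSIX page
 * cited above. */
int main(void)
{
	struct timeval tv;

	if (gettimeofday(&tv, NULL) != 0)
		return 1;

	printf("%lld.%06ld\n", (long long)tv.tv_sec, (long)tv.tv_usec);
	return 0;
}
```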
Edward Thomson 95a2966f 2021-07-13T17:32:02 Merge pull request #5908 from punkymaniac/patch-mem-leak Fix memory leak in git_smart__connect
Jacques Germishuys a2cd10e5 2021-06-26T18:52:21 define WINHTTP_NO_CLIENT_CERT_CONTEXT if needed
punkymaniac d07a0dc1 2021-06-04T16:34:32 Remove useless condition
punkymaniac 2934b447 2021-06-03T11:21:39 Fix memory leak in git_smart__connect The call to git_proxy_options_dup will replace the url pointer of the proxy. But if the url pointer is already set, the old address will be lost forever and will never be freed.
Edward Thomson b5dcdad3 2021-05-16T12:53:58 Merge pull request #5852 from implausible/httpclient/skip-entire-body Fix issues with Proxy Authentication after httpclient refactor
Tyler Ang-Wanek 3473a088 2021-05-12T11:48:23 httpclient: no proxy creds in requests if proxy is CONNECT type
Tyler Ang-Wanek 049618ce 2021-04-30T15:11:54 httpclient: git_http_client_skip_body should drain socket of body
Edward Thomson b31795ef 2021-05-06T01:46:19 test: clean up memory leaks
Tobias Nießen 3fa8ec52 2021-04-18T20:30:13 src: fix typos in header files
Ian Hattendorf edffea15 2021-03-01T16:26:58 winhttp: skip certificate check if unable to send request In some circumstances (e.g. when proxies are involved), winhttp will fail to reach the WINHTTP_CALLBACK_STATUS_SENDING_REQUEST phase. If this occurs, we'll error with ERROR_WINHTTP_INCORRECT_HANDLE_STATE when attempting to query the server certificate context (see https://docs.microsoft.com/en-us/windows/win32/api/winhttp/nf-winhttp-winhttpsendrequest#remarks). To avoid this, verify that WinHttpSendRequest has reached the WINHTTP_CALLBACK_STATUS_SENDING_REQUEST phase before checking the certificate. Since we're using WinHTTP in synchronous mode, we know for sure that once WinHttpSendRequest returns we've either sent it successfully or not. NOTE: WINHTTP_CALLBACK_STATUS_SENDING_REQUEST appears to be deprecated with no direct replacement. WINHTTP_CALLBACK_STATUS_SENDREQUEST_COMPLETE is only available in async mode, and there doesn't appear to be a method of querying this flag outside of the status callback.
Aaron Franke 7efddeb7 2021-02-15T15:47:28 Fix some typos
Edward Thomson fe41e582 2020-12-23T12:32:58 Merge pull request #5741 from libgit2/ethomson/ipv6 Handle ipv6 addresses
Edward Thomson b7ffc63b 2020-12-17T18:38:01 winhttp: handle ipv6 addresses
Edward Thomson 2807de5c 2020-12-17T15:18:31 http: handle ipv6 addresses
Miguel Arroz 3433acd9 2020-12-21T21:27:58 Wrap newer hostkeys in #ifdefs This allows the library to be built using a pre-1.9.0 version of libssh2.
Miguel Arroz ed7b20e7 2020-12-21T17:26:34 Add support for additional hostkey types. Specifically: ECDSA_256, ECDSA_384, ECDSA_521 and ED25519.
Edward Thomson 86a1cdd3 2020-12-13T13:44:56 Merge pull request #5384 from ianhattendorf/fix/winhttp-client-cert winhttp: support optional client cert
Edward Thomson 37763d38 2020-12-05T15:26:59 threads: rename git_atomic to git_atomic32 Clarify the `git_atomic` type and functions now that we have a 64 bit version as well (`git_atomic64`).
lhchavez 29fe5f61 2020-11-22T18:25:00 Also add the raw hostkey to `git_cert_hostkey` `git_cert_x509` has the raw encoded certificate. Let's do the same for the SSH certificate for symmetry.
Edward Thomson 4f5f1127 2020-11-22T00:01:09 transports: use GIT_ASSERT
Edward Thomson 05816a98 2020-04-05T17:20:08 netops: use GIT_ASSERT
Edward Thomson e316b0d3 2020-05-15T11:47:09 runtime: move init/shutdown into the "runtime" Provide a mechanism for system components to register for initialization and shutdown of the libgit2 runtime.
Edward Thomson 6554b40e 2020-05-13T10:39:33 settings: localize global data Move the settings global data teardown into its own separate function, instead of intermingled with the global state.
Edward Thomson 2ab99c6d 2020-10-04T18:30:10 Merge pull request #5576 from lollipopman/double-auth httpclient: only free challenges for current_server type
Patrick Steinhardt 9e81711b 2020-09-18T10:31:50 Merge pull request #5632 from csware/winhttp_typo Fix typo: Make ifndef macroname the same as the define name
Sven Strickroth 797535b6 2020-09-12T00:14:41 WinHTTP: Try to use TLS1.3 Signed-off-by: Sven Strickroth <email@cs-ware.de>
Sven Strickroth 621e501c 2020-09-10T10:32:02 Fix typo: Make ifndef macroname the same as the define name Signed-off-by: Sven Strickroth <email@cs-ware.de>
Sven Strickroth 2dea3eb4 2020-09-08T13:03:07 Don't fail if an HTTP server announces it supports a protocol upgrade cf. RFC 7230 section 6.7, an Upgrade header in a normal response merely informs the client that the server supports upgrading to other protocols, and the client can ask for such an upgrade in a later request. A server requires an upgrade via the 426 Upgrade Required response code, not the mere presence of the Upgrade response header. (closes issue #5573) Signed-off-by: Sven Strickroth <email@cs-ware.de>
Jesse Hathaway bd346313 2020-07-10T15:37:08 httpclient: only free challenges for current_server type Prior to this commit we freed both the server and proxy auth challenges in git_http_client_read_response. This works when the proxy needs auth or when the server needs auth, but it does not work when both the proxy and the server need auth as we erroneously remove the server auth challenge before we have added them as server credentials. Instead only remove the challenges for the current_server type. Co-authored-by: Stephen Gelman <ssgelm@gmail.com>
Ian Hattendorf 6a10b802 2020-06-24T08:53:46 winhttp: clarify invalid cert case
Patrick Steinhardt c6184f0c 2020-06-08T21:07:36 tree-wide: do not compile deprecated functions with hard deprecation When compiling libgit2 with -DDEPRECATE_HARD, we add a preprocessor definition `GIT_DEPRECATE_HARD` which causes the "git2/deprecated.h" header to be empty. As a result, no function declarations are made available to callers, but the implementations are still available to link against. This has the problem that function declarations also aren't visible to the implementations, meaning that the symbol's visibility will not be set up correctly. As a result, the resulting library may not expose those deprecated symbols at all on some platforms and thus cause linking errors. Fix the issue by conditionally compiling deprecated functions, only. While it becomes impossible to link against such a library in case one uses deprecated functions, distributors of libgit2 aren't expected to pass -DDEPRECATE_HARD anyway. Instead, users of libgit2 should manually define GIT_DEPRECATE_HARD to hide deprecated functions. Using "real" hard deprecation still makes sense in the context of CI to test we don't use deprecated symbols ourselves and in case a dependant uses libgit2 in a vendored way and knows it won't ever use any of the deprecated symbols anyway.
Patrick Steinhardt a6c9e0b3 2020-06-08T12:40:47 tree-wide: mark local functions as static We've accumulated quite some functions which are never used outside of their respective code unit, but which are lacking the `static` keyword. Add it to reduce their linkage scope and allow the compiler to optimize better.
Patrick Steinhardt 53a8f463 2020-06-03T07:40:59 Merge pull request #5536 from libgit2/ethomson/http httpclient: support googlesource
Edward Thomson 04c7bdb4 2020-06-01T22:44:14 httpclient: clear the read_buf on new requests The httpclient implementation keeps a `read_buf` that holds the data in the body of the response after the headers have been written. We store that data for subsequent calls to `git_http_client_read_body`. If we want to stop reading body data and send another request, we need to clear that cached data. Clear the cached body data on new requests, just like we read any outstanding data from the socket.
Edward Thomson aa8b2c0f 2020-06-01T23:53:55 httpclient: don't read more than the client wants When `git_http_client_read_body` is invoked, it provides the size of the buffer that can be read into. This will be set as the parser context's `output_size` member. Use this as an upper limit on our reads, and ensure that we do not read more than the client requests.
Edward Thomson 51eff5a5 2020-05-29T13:13:19 strarray: we should `dispose` instead of `free` We _dispose_ the contents of objects; we _free_ objects (and their contents). Update `git_strarray_free` to be `git_strarray_dispose`. `git_strarray_free` remains as a deprecated proxy function.
Edward Thomson 570f0340 2020-06-01T19:10:38 httpclient: read_body should return 0 at EOF When users call `git_http_client_read_body`, it should return 0 at the end of a message. When the `on_message_complete` callback is called, this will set `client->state` to `DONE`. In our read loop, we look for this condition and exit. Without this, when there is no data left except the end of message chunk (`0\r\n`) in the http stream, we would block by reading the three bytes off the stream but not making progress in any `on_body` callbacks. Listening to the `on_message_complete` callback allows us to stop trying to read from the socket when we've read the end of message chunk.
Utkarsh Gupta e7a1fd88 2020-03-26T11:42:47 Fix spelling error Signed-off-by: Utkarsh Gupta <utkarsh@debian.org>
Edward Thomson 502e5d51 2020-03-01T12:44:39 httpclient: use a 16kb read buffer for macOS Use a 16kb read buffer for compatibility with macOS SecureTransport. SecureTransport `SSLRead` has the following behavior: 1. It will return _at most_ one TLS packet's worth of data, and 2. It will try to give you as much data as you asked for. This means that if you call `SSLRead` with a buffer size that is smaller than what _it_ reads (in other words, the maximum size of a TLS packet), then it will buffer that data for subsequent calls. However, it will also attempt to give you as much data as you requested in your SSLRead call. This means that it will guarantee a network read in the event that it has buffered data. Consider our 8kb buffer and a server sending us 12kb of data on an HTTP Keep-Alive session. Our first `SSLRead` will read the TLS packet off the network. It will return us the 8kb that we requested and buffer the remaining 4kb. Our second `SSLRead` call will see the 4kb that's buffered and decide that it could give us an additional 4kb. So it will do a network read. But there's nothing left to read; that was the end of the data. The HTTP server is waiting for us to provide a new request. The server will eventually time out, our `read` system call will return, `SSLRead` can return back to us and we can make progress. While technically correct, this is wildly inefficient. (Thanks, Tim Apple!) Moving us to use an internal buffer that is the maximum size of a TLS packet (16kb) ensures that `SSLRead` will never buffer and it will always return everything that it read (albeit decrypted).
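A sketch of the sizing idea under illustrative names (none of these are libgit2's types): with the buffer equal to the maximum TLS record payload, `SSLRead` can always return everything it decrypted and never needs a follow-up network read to drain leftovers.

```c
#include <stddef.h>
#include <sys/types.h>

/* Illustrative: size the read buffer to the maximum TLS record payload. */
#define TLS_MAX_RECORD_PAYLOAD (16 * 1024)

struct http_reader {
	char buf[TLS_MAX_RECORD_PAYLOAD];
	size_t len;
};

/* Stands in for the TLS stream's read function (e.g. an SSLRead wrapper). */
typedef ssize_t (*stream_read_fn)(void *stream, char *out, size_t outsz);

static ssize_t fill_buffer(struct http_reader *r, void *stream, stream_read_fn read_cb)
{
	ssize_t n = read_cb(stream, r->buf, sizeof(r->buf));

	r->len = (n > 0) ? (size_t)n : 0;
	return n;
}
```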
Patrick Steinhardt ebade233 2020-02-24T21:49:43 transports: auth_ntlm: fix use of strdup/strndup In the NTLM authentication code, we accidentally use strdup(3P) and strndup(3P) instead of our own wrappers git__strdup and git__strndup, respectively. Fix the issue by using our own functions.
Josh Bleecher Snyder 216165ec 2020-02-07T10:06:28 transports: use GIT_EAUTH for authentication failures When the failure is clearly an auth failure (as opposed to possibly an auth failure), use the error code GIT_EAUTH instead of GIT_ERROR. While we're here, fix a typo and improve an error message. Fixes #5389.
Patrick Steinhardt 46228d86 2020-02-06T11:10:27 transports: http: fix custom headers not being applied In commit b9c5b15a7 (http: use the new httpclient, 2019-12-22), the HTTP code got refactored to extract a generic HTTP client that operates independently of the Git protocol. Part of refactoring was the creation of a new `git_http_request` struct that encapsulates the generation of requests. Our Git-specific HTTP transport was converted to use that in `generate_request`, but during the process we forgot to set up custom headers for the `git_http_request` and as a result we do not send out these headers anymore. Fix the issue by correctly setting up the request's custom headers and add a test to verify we correctly send them.
Ian Hattendorf 1697e90f 2020-02-06T09:22:46 winhttp: variable and switch case scoping
Ian Hattendorf 5b2cd755 2020-02-03T16:24:02 winhttp: support optional client cert
Edward Thomson 3f54ba8b 2020-01-18T13:51:40 credential: change git_cred to git_credential We avoid abbreviations where possible; rename git_cred to git_credential. In addition, we have standardized on a trailing `_t` for enum types, instead of using "type" in the name. So `git_credtype_t` has become `git_credential_t` and its members have become `GIT_CREDENTIAL` instead of `GIT_CREDTYPE`. Finally, the source and header files have been renamed to `credential` instead of `cred`. Keep previous name and values as deprecated, and include the new header files from the previous ones.
Edward Thomson e9cef7c4 2020-01-11T23:53:45 http: introduce GIT_ERROR_HTTP Disambiguate between general network problems and HTTP problems in error codes.
Edward Thomson 29762e40 2020-01-01T16:14:37 httpclient: use defines for status codes
Edward Thomson 76fd406a 2019-12-26T16:37:01 http: send probe packets When we're authenticating with a connection-based authentication scheme (NTLM, Negotiate), we need to make sure that we're still connected between the initial GET where we did the authentication and the POST that we're about to send. Our keep-alive session may have not kept alive, but more likely, some servers do not authenticate the entire keep-alive connection and may have "forgotten" that we were authenticated, namely Apache and nginx. Send a "probe" packet, that is an HTTP POST request to the upload-pack or receive-pack endpoint, that consists of an empty git pkt ("0000"). If we're authenticated, we'll get a 200 back. If we're not, we'll get a 401 back, and then we'll resend that probe packet with the first step of our authentication (asking to start authentication with the given scheme). We expect _yet another_ 401 back, with the authentication challenge. Finally, we will send our authentication response with the actual POST data. This will allow us to authenticate without draining the POST data in the initial request that gets us a 401.
Edward Thomson b9c5b15a 2019-12-22T14:12:24 http: use the new httpclient Untangle the notion of the http transport from the actual http implementation. The http transport now uses the httpclient.
Edward Thomson 7372573b 2019-10-25T12:22:10 httpclient: support expect/continue Allow users to opt-in to expect/continue handling when sending a POST and we're authenticated with a "connection-based" authentication mechanism like NTLM or Negotiate. If the response is a 100, return to the caller (to allow them to post their body). If the response is *not* a 100, buffer the response for the caller. HTTP expect/continue is generally safe, but some legacy servers have not implemented it correctly. Require it to be opt-in.
Edward Thomson 6c21c989 2019-12-14T21:32:07 httpclient: support CONNECT proxies Fully support HTTP proxies, in particular CONNECT proxies, that allow us to speak TLS through a proxy.
Edward Thomson 6b208836 2019-12-18T21:55:28 httpclient: handle chunked responses Detect responses that are sent with Transfer-Encoding: chunked, and record that information so that we can consume the entire message body.
Edward Thomson 6a095679 2019-12-14T10:34:36 httpclient: support authentication Store the last-seen credential challenges (eg, all the 'WWW-Authenticate' headers in a response message). Given some credentials, find the best (first) challenge whose mechanism supports these credentials. (eg, 'Basic' supports username/password credentials, 'Negotiate' supports default credentials). Set up an authentication context for this mechanism and these credentials. Continue exchanging challenge/responses until we're authenticated.
Edward Thomson 1152f361 2019-12-13T18:37:19 httpclient: consume final chunk message When sending a new request, ensure that we got the entirety of the response body. Our caller may have decided that they were done reading. If we were not at the end of the message, this means that we need to tear down the connection and cannot do keep-alive. However, if the caller read all of the message, but we still have a final end-of-response chunk signifier (ie, "0\r\n\r\n") on the socket, then we should consider that the response was successfully completed. If we're asked to send a new request, try to read from the socket, just to clear out that end-of-chunk message, marking ourselves as disconnected on any errors.
Edward Thomson 84b99a95 2019-12-12T13:53:43 httpclient: add chunk support to POST Teach httpclient how to support chunking when POSTing request bodies.
Edward Thomson eacecebd 2019-12-12T13:25:32 httpclient: introduce a simple http implementation Introduce a new http client implementation that can GET and POST to remote URLs. Consumers can use `git_http_client_init` to create a new client, `git_http_client_send_request` to send a request to the remote server and `git_http_client_read_response` to read the response. The http client implementation will perform the I/O with the remote server (http or https) but does not understand the git smart transfer protocol. This allows us to split the concerns of the http subtransport from the actual http implementation.
Edward Thomson 471daeea 2019-12-01T14:00:49 net: refactor gitno redirect handling Move the redirect handling into `git_net_url` for consistency.
Edward Thomson a194e17f 2019-11-27T18:43:36 winhttp: refactor request sending Clarify what it means to not send a length; this allows us to refactor requests further.
Edward Thomson 7e0f5a6a 2019-10-22T22:37:14 smart protocol: correct case in error messages
Edward Thomson 2d6a61bd 2019-10-22T09:52:31 gssapi: validate that we were requested Negotiate
Edward Thomson e761df5c 2019-10-22T09:35:48 gssapi: dispose after completion for retry Disposal pattern; dispose on completion, allowing us to retry authentication, which may happen on web servers that close connection-based authenticated sessions (NTLM/SPNEGO) unexpectedly.
Jonathan Turcotte 5625892b 2019-09-20T12:06:11 gssapi: delete half-built security context so auth can continue
Edward Thomson 2174aa0a 2019-10-21T11:47:23 gssapi: correct incorrect case in error message
Edward Thomson 3f6fe054 2019-10-20T17:23:01 gssapi: protect GSS_ERROR macro The GSS_ERROR(x) macro may expand to `(x & value)` on some implementations, instead of `((x) & value)`. This is the case on macOS, which means that if we attempt to wrap an expression in that macro, like `a = b`, then that would expand to `(a = b & value)`. Since `&` has a higher precedence, this is not at all what we want, and will set our result code to an incorrect value. Evaluate the expression then test it with `GSS_ERROR` independently to avoid this.
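The precedence hazard is easier to see with a toy macro that expands the same way (the macro and values here are illustrative, not the macOS GSSAPI definitions):

```c
#include <stdio.h>

/* Toy stand-in for a GSS_ERROR-style macro that expands to (x & MASK)
 * without parenthesizing its argument. */
#define ERROR_MASK 0xffff0000u
#define HAS_ERROR(x) (x & ERROR_MASK)

static unsigned int fake_call(void)
{
	return 0x00010000u; /* pretend the call reported an error bit */
}

int main(void)
{
	unsigned int status;

	/* Buggy pattern: HAS_ERROR(status = fake_call()) would expand to
	 * (status = fake_call() & ERROR_MASK), assigning only the masked
	 * bits because & binds tighter than =. */

	/* Fix from the commit: evaluate first, then test independently. */
	status = fake_call();
	if (HAS_ERROR(status))
		printf("error bits set: 0x%x\n", status);

	return 0;
}
```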
Edward Thomson 73fe690d 2019-10-20T17:22:27 gssapi: protect against empty messages
Edward Thomson 917ba762 2020-01-18T14:14:00 auth: update enum type name for consistency libgit2 does not use `type_t` suffixes as it's redundant; thus, rename `git_http_authtype_t` to `git_http_auth_t` for consistency.
Patrick Steinhardt dbb6429c 2020-01-10T14:30:18 Merge pull request #5305 from kas-luthor/bugfix/multiple-auth Adds support for multiple SSH auth mechanisms being used sequentially
Patrick Steinhardt 33f93bf3 2020-01-06T11:53:53 Merge pull request #5325 from josharian/no-double-slash http: avoid generating double slashes in url
Josh Bleecher Snyder 05c1fb8a 2019-12-06T11:04:40 http: avoid generating double slashes in url Prior to this change, given a remote url with a trailing slash, such as http://localhost/a/, service requests would contain a double slash: http://localhost/a//info/refs?service=git-receive-pack. Detect and prevent that. Updates #5321
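An illustrative join (not the actual libgit2 routine) that avoids the duplicated separator described there:

```c
#include <stdio.h>
#include <string.h>

/* Sketch only: append a service path to a base URL without emitting "//". */
static void join_url(char *out, size_t outsz, const char *base, const char *path)
{
	size_t len = strlen(base);

	if (len > 0 && base[len - 1] == '/' && path[0] == '/')
		path++; /* drop the redundant separator */

	snprintf(out, outsz, "%s%s", base, path);
}

int main(void)
{
	char url[256];

	join_url(url, sizeof(url), "http://localhost/a/", "/info/refs?service=git-receive-pack");
	puts(url); /* http://localhost/a/info/refs?service=git-receive-pack */
	return 0;
}
```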
kas cb7fd1ed 2019-12-13T15:11:38 Fixes code styling
Patrick Steinhardt 86852613 2019-12-13T12:13:05 smart_pkt: fix overflow resulting in OOB read/write of one byte When parsing OK packets, we copy any information after the initial "ok " prefix into the resulting packet. As newlines act as packet boundaries, we also strip the trailing newline if there is any. We do not check whether there is any data left after the initial "ok " prefix though, which leads to a pointer overflow in that case as `len == 0`: if (line[len - 1] == '\n') --len; This out-of-bounds read is a rather useless gadget, as we can only deduce whether at some offset there is a newline character. In case there accidentally is one, we overflow `len` to `SIZE_MAX` and then write a NUL byte into an array indexed by it: pkt->ref[len] = '\0'; Again, this doesn't seem like something that's possible to be exploited in any meaningful way, but it may surely lead to inconsistencies or DoS. Fix the issue by checking whether there is any trailing data after the packet prefix.
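A standalone sketch of the guarded parse described above (names are illustrative; this is not the actual smart_pkt code): rejecting a bare "ok " prefix up front means `len` can never underflow when the trailing newline is stripped.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative parser for an "ok <ref>\n" packet line. */
static int parse_ok_line(const char *line, size_t len, char *ref, size_t refsz)
{
	if (len < 4 || memcmp(line, "ok ", 3) != 0)
		return -1; /* no data after the prefix: refuse instead of underflowing */

	line += 3;
	len -= 3;

	if (line[len - 1] == '\n') /* safe: len >= 1 here */
		--len;

	if (len >= refsz)
		return -1;

	memcpy(ref, line, len);
	ref[len] = '\0';
	return 0;
}
```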
Anders Borum 48c3f7e1 2019-11-20T11:21:14 ssh: include sha256 host key hash when supported