src/transports


Log

Author Commit Date CI Message
Patrick Steinhardt 46228d86 2020-02-06T11:10:27 transports: http: fix custom headers not being applied In commit b9c5b15a7 (http: use the new httpclient, 2019-12-22), the HTTP code got refactored to extract a generic HTTP client that operates independently of the Git protocol. Part of that refactoring was the creation of a new `git_http_request` struct that encapsulates the generation of requests. Our Git-specific HTTP transport was converted to use that in `generate_request`, but during the process we forgot to set up custom headers for the `git_http_request`, and as a result we no longer send those headers. Fix the issue by correctly setting up the request's custom headers and add a test to verify that we correctly send them.
Edward Thomson 3f54ba8b 2020-01-18T13:51:40 credential: change git_cred to git_credential We avoid abbreviations where possible; rename git_cred to git_credential. In addition, we have standardized on a trailing `_t` for enum types, instead of using "type" in the name. So `git_credtype_t` has become `git_credential_t` and its members have become `GIT_CREDENTIAL` instead of `GIT_CREDTYPE`. Finally, the source and header files have been renamed to `credential` instead of `cred`. Keep previous name and values as deprecated, and include the new header files from the previous ones.
Edward Thomson e9cef7c4 2020-01-11T23:53:45 http: introduce GIT_ERROR_HTTP Disambiguate between general network problems and HTTP problems in error codes.
Edward Thomson 29762e40 2020-01-01T16:14:37 httpclient: use defines for status codes
Edward Thomson 76fd406a 2019-12-26T16:37:01 http: send probe packets When we're authenticating with a connection-based authentication scheme (NTLM, Negotiate), we need to make sure that we're still connected between the initial GET where we did the authentication and the POST that we're about to send. Our keep-alive session may not have stayed alive, but more likely, some servers (notably Apache and nginx) do not authenticate the entire keep-alive connection and may have "forgotten" that we were authenticated. Send a "probe" packet: an HTTP POST request to the upload-pack or receive-pack endpoint that consists of an empty git pkt ("0000"). If we're authenticated, we'll get a 200 back. If we're not, we'll get a 401 back, and then we'll resend that probe packet with the first step of our authentication (asking to start authentication with the given scheme). We expect _yet another_ 401 back, with the authentication challenge. Finally, we will send our authentication response with the actual POST data. This allows us to authenticate without draining the POST data in the initial request that gets us a 401.
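
The probe flow above boils down to posting an empty pkt-line and branching on the status code. The snippet below is only an illustration of that decision, not libgit2 code; `send_probe` and its endpoint argument are hypothetical stand-ins for the real HTTP machinery.

```c
/* Illustrative sketch of the probe-packet flow; send_probe() is a
 * hypothetical helper, not part of libgit2. */
#include <stdio.h>

#define FLUSH_PKT "0000"  /* an empty git pkt-line acts as the probe body */

/* Stub standing in for an HTTP POST to the upload-pack/receive-pack
 * endpoint; a real implementation would perform network I/O. */
static int send_probe(const char *endpoint, const char *body)
{
	(void)endpoint; (void)body;
	return 401; /* pretend the server "forgot" our authentication */
}

int main(void)
{
	int status = send_probe("/repo.git/git-upload-pack", FLUSH_PKT);

	if (status == 200) {
		/* still authenticated: send the real POST with the request body */
		printf("connection still authenticated\n");
	} else if (status == 401) {
		/* resend the probe with the first authentication step, expect
		 * another 401 carrying the challenge, then POST the real data
		 * along with the authentication response */
		printf("replaying authentication before sending POST data\n");
	}
	return 0;
}
```
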
Edward Thomson b9c5b15a 2019-12-22T14:12:24 http: use the new httpclient Untangle the notion of the http transport from the actual http implementation. The http transport now uses the httpclient.
Edward Thomson 7372573b 2019-10-25T12:22:10 httpclient: support expect/continue Allow users to opt-in to expect/continue handling when sending a POST and we're authenticated with a "connection-based" authentication mechanism like NTLM or Negotiate. If the response is a 100, return to the caller (to allow them to post their body). If the response is *not* a 100, buffer the response for the caller. HTTP expect/continue is generally safe, but some legacy servers have not implemented it correctly. Require it to be opt-in.
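
As a rough illustration of the opt-in rule, the predicate below mirrors the conditions described in the commit; the enum and the `use_expect_continue` helper are invented for the example and are not libgit2 internals.

```c
/* Minimal sketch of when expect/continue is used; names are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { AUTH_NONE, AUTH_BASIC, AUTH_NTLM, AUTH_NEGOTIATE } auth_mechanism_t;

/* Expect/continue only applies to a POST over a connection-based
 * authentication mechanism, and only when the caller opted in, since some
 * legacy servers handle the 100-continue handshake incorrectly. */
static bool use_expect_continue(bool opt_in, bool is_post, auth_mechanism_t auth)
{
	bool connection_based = (auth == AUTH_NTLM || auth == AUTH_NEGOTIATE);
	return opt_in && is_post && connection_based;
}

int main(void)
{
	if (use_expect_continue(true, true, AUTH_NTLM))
		printf("send 'Expect: 100-continue', wait for the interim 100 response\n");
	else
		printf("send the request body immediately\n");
	return 0;
}
```
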
Edward Thomson 6c21c989 2019-12-14T21:32:07 httpclient: support CONNECT proxies Fully support HTTP proxies, in particular CONNECT proxies, that allow us to speak TLS through a proxy.
Edward Thomson 6b208836 2019-12-18T21:55:28 httpclient: handle chunked responses Detect responses that are sent with Transfer-Encoding: chunked, and record that information so that we can consume the entire message body.
Edward Thomson 6a095679 2019-12-14T10:34:36 httpclient: support authentication Store the last-seen credential challenges (eg, all the 'WWW-Authenticate' headers in a response message). Given some credentials, find the best (first) challenge whose mechanism supports these credentials. (eg, 'Basic' supports username/password credentials, 'Negotiate' supports default credentials). Set up an authentication context for this mechanism and these credentials. Continue exchanging challenge/responses until we're authenticated.
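
A minimal sketch of that challenge selection might look like the following; the `challenge_t` struct, the credential-type bitmask, and `select_challenge` are all hypothetical names used only to show the "first supporting mechanism wins" scan.

```c
/* Sketch of picking the first challenge whose mechanism supports the
 * caller's credential type; names and values are illustrative. */
#include <stddef.h>
#include <stdio.h>

#define CRED_USERPASS (1u << 0)  /* username/password credentials */
#define CRED_DEFAULT  (1u << 1)  /* default (single sign-on) credentials */

typedef struct {
	const char *scheme;        /* e.g. from a WWW-Authenticate header */
	unsigned int supported;    /* credential types this mechanism accepts */
} challenge_t;

static const challenge_t *select_challenge(
	const challenge_t *challenges, size_t n, unsigned int cred_type)
{
	for (size_t i = 0; i < n; i++)
		if (challenges[i].supported & cred_type)
			return &challenges[i]; /* first match wins */
	return NULL;
}

int main(void)
{
	const challenge_t seen[] = {
		{ "Negotiate", CRED_DEFAULT },
		{ "Basic",     CRED_USERPASS },
	};
	const challenge_t *pick =
		select_challenge(seen, sizeof(seen) / sizeof(seen[0]), CRED_USERPASS);

	printf("authenticating with %s\n", pick ? pick->scheme : "(nothing)");
	return 0;
}
```
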
Edward Thomson 1152f361 2019-12-13T18:37:19 httpclient: consume final chunk message When sending a new request, ensure that we got the entirety of the response body. Our caller may have decided that they were done reading. If we were not at the end of the message, this means that we need to tear down the connection and cannot do keep-alive. However, if the caller read all of the message, but we still have a final end-of-response chunk signifier (ie, "0\r\n\r\n") on the socket, then we should consider that the response was successfully completed. If we're asked to send a new request, try to read from the socket, just to clear out that end-of-chunk message, marking ourselves as disconnected on any errors.
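
The check itself is small; the sketch below shows the idea of treating a lone "0\r\n\r\n" as a completed response, with invented names and without the socket handling the real code needs.

```c
/* Sketch of draining a trailing end-of-chunk marker before reusing a
 * keep-alive connection; buffer handling here is illustrative only. */
#include <stdio.h>
#include <string.h>

/* The terminating chunk of a chunked HTTP response body. */
#define FINAL_CHUNK "0\r\n\r\n"

/* Returns 1 if the leftover bytes are exactly the final chunk marker (so the
 * connection can be kept alive), 0 if real data remains (tear it down). */
static int only_final_chunk_remains(const char *leftover, size_t len)
{
	return len == strlen(FINAL_CHUNK) &&
	       memcmp(leftover, FINAL_CHUNK, len) == 0;
}

int main(void)
{
	const char leftover[] = "0\r\n\r\n";

	if (only_final_chunk_remains(leftover, sizeof(leftover) - 1))
		printf("response complete; connection reusable\n");
	else
		printf("unread body remains; disconnect before the next request\n");
	return 0;
}
```
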
Edward Thomson 84b99a95 2019-12-12T13:53:43 httpclient: add chunk support to POST Teach httpclient how to support chunking when POSTing request bodies.
Edward Thomson eacecebd 2019-12-12T13:25:32 httpclient: introduce a simple http implementation Introduce a new http client implementation that can GET and POST to remote URLs. Consumers can use `git_http_client_init` to create a new client, `git_http_client_send_request` to send a request to the remote server and `git_http_client_read_response` to read the response. The http client implementation will perform the I/O with the remote server (http or https) but does not understand the git smart transfer protocol. This allows us to split the concerns of the http subtransport from the actual http implementation.
Edward Thomson 7e0f5a6a 2019-10-22T22:37:14 smart protocol: correct case in error messages
Edward Thomson 2d6a61bd 2019-10-22T09:52:31 gssapi: validate that we were requested Negotiate
Edward Thomson e761df5c 2019-10-22T09:35:48 gssapi: dispose after completion for retry Disposal pattern; dispose on completion, allowing us to retry authentication, which may happen on web servers that close connection-based authenticated sessions (NTLM/SPNEGO) unexpectedly.
Edward Thomson 471daeea 2019-12-01T14:00:49 net: refactor gitno redirect handling Move the redirect handling into `git_net_url` for consistency.
Edward Thomson a194e17f 2019-11-27T18:43:36 winhttp: refactor request sending Clarify what it means to not send a length; this allows us to refactor requests further.
Jonathan Turcotte 5625892b 2019-09-20T12:06:11 gssapi: delete half-built security context so auth can continue
Edward Thomson 2174aa0a 2019-10-21T11:47:23 gssapi: correct incorrect case in error message
Edward Thomson 3f6fe054 2019-10-20T17:23:01 gssapi: protect GSS_ERROR macro The GSS_ERROR(x) macro may expand to `(x & value)` on some implementations, instead of `((x) & value)`. This is the case on macOS, which means that if we attempt to wrap an expression in that macro, like `a = b`, then that would expand to `(a = b & value)`. Since `&` has higher precedence than `=`, this is not at all what we want, and will set our result code to an incorrect value. Evaluate the expression, then test it with `GSS_ERROR` independently to avoid this.
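
The hazard is easy to reproduce with a stand-in macro that expands the way the problematic platforms do; the real `GSS_ERROR` lives in the GSSAPI headers and its exact expansion varies, so the macro and mask below are purely illustrative.

```c
/* Demonstration of the precedence hazard with an unparenthesized macro. */
#include <stdio.h>

#define CALLING_ERROR_MASK 0xFF000000u

/* A problematic expansion: the argument is not parenthesized. */
#define GSS_ERROR_UNSAFE(x) (x & CALLING_ERROR_MASK)

int main(void)
{
	unsigned int major = 0, status = 0x01000001u;

	/* Wrong: expands to (major = status & MASK); '&' binds tighter than '=',
	 * so 'major' receives the masked value rather than the full status. */
	if (GSS_ERROR_UNSAFE(major = status))
		printf("unsafe: major=0x%08x (masked, not the real status)\n", major);

	/* Right: perform the assignment first, then test it separately. */
	major = status;
	if (GSS_ERROR_UNSAFE(major))
		printf("safe: major=0x%08x (full status preserved)\n", major);

	return 0;
}
```
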
Edward Thomson 73fe690d 2019-10-20T17:22:27 gssapi: protect against empty messages
Edward Thomson 917ba762 2020-01-18T14:14:00 auth: update enum type name for consistency libgit2 does not use `type_t` suffixes, as they're redundant; thus, rename `git_http_authtype_t` to `git_http_auth_t` for consistency.
Patrick Steinhardt dbb6429c 2020-01-10T14:30:18 Merge pull request #5305 from kas-luthor/bugfix/multiple-auth Adds support for multiple SSH auth mechanisms being used sequentially
Patrick Steinhardt 33f93bf3 2020-01-06T11:53:53 Merge pull request #5325 from josharian/no-double-slash http: avoid generating double slashes in url
Josh Bleecher Snyder 05c1fb8a 2019-12-06T11:04:40 http: avoid generating double slashes in url Prior to this change, given a remote url with a trailing slash, such as http://localhost/a/, service requests would contain a double slash: http://localhost/a//info/refs?service=git-receive-pack. Detect and prevent that. Updates #5321
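
A sketch of the guard: only insert the separating slash when the remote URL does not already end with one. The helper name and the way the path is assembled here are illustrative, not the transport's actual code.

```c
/* Sketch of avoiding a double slash between the remote URL and the
 * service path; build_service_url() is a hypothetical helper. */
#include <stdio.h>
#include <string.h>

static void build_service_url(char *out, size_t outlen,
                              const char *url, const char *service_path)
{
	size_t len = strlen(url);
	const char *sep = (len > 0 && url[len - 1] == '/') ? "" : "/";

	snprintf(out, outlen, "%s%s%s", url, sep, service_path);
}

int main(void)
{
	char buf[256];

	build_service_url(buf, sizeof(buf), "http://localhost/a/",
	                  "info/refs?service=git-receive-pack");
	printf("%s\n", buf); /* no doubled slash after "/a" */
	return 0;
}
```
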
kas cb7fd1ed 2019-12-13T15:11:38 Fixes code styling
Patrick Steinhardt 86852613 2019-12-13T12:13:05 smart_pkt: fix overflow resulting in OOB read/write of one byte When parsing OK packets, we copy any information after the initial "ok " prefix into the resulting packet. As newlines act as packet boundaries, we also strip the trailing newline if there is any. We do not check whether there is any data left after the initial "ok " prefix though, which leads to a pointer overflow in that case as `len == 0`: if (line[len - 1] == '\n') --len; This out-of-bounds read is a rather useless gadget, as we can only deduce whether at some offset there is a newline character. In case there accidentally is one, we overflow `len` to `SIZE_MAX` and then write a NUL byte into an array indexed by it: pkt->ref[len] = '\0'; Again, this doesn't seem like something that's possible to be exploited in any meaningful way, but it may surely lead to inconsistencies or DoS. Fix the issue by checking whether there is any trailing data after the packet prefix.
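
The fix amounts to rejecting an empty payload before indexing `line[len - 1]`; a self-contained sketch follows, with invented function and buffer names.

```c
/* Sketch of the bounds check: only strip a trailing newline when there is
 * data after the "ok " prefix; names are illustrative. */
#include <stdio.h>
#include <string.h>

static int parse_ok_pkt(const char *line, size_t len, char *ref, size_t refsize)
{
	/* Without this guard, len == 0 wraps around to SIZE_MAX below. */
	if (len == 0)
		return -1; /* malformed: nothing after the "ok " prefix */

	if (line[len - 1] == '\n')
		--len;

	if (len + 1 > refsize)
		return -1;

	memcpy(ref, line, len);
	ref[len] = '\0';
	return 0;
}

int main(void)
{
	char ref[64];

	if (parse_ok_pkt("refs/heads/main\n", strlen("refs/heads/main\n"),
	                 ref, sizeof(ref)) == 0)
		printf("ok ref: %s\n", ref);

	if (parse_ok_pkt("", 0, ref, sizeof(ref)) != 0)
		printf("rejected empty payload instead of reading out of bounds\n");
	return 0;
}
```
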
Anders Borum 48c3f7e1 2019-11-20T11:21:14 ssh: include sha256 host key hash when supported
kas fbdf5bdd 2019-11-16T22:41:25 Adds support for multiple SSH auth mechanisms being used sequentially
pcpthm 3f998aee 2019-10-26T17:21:29 Follow 308 redirect in WinHTTP transport
Etienne Samson dbc17a7e 2019-09-21T08:46:08 negotiate: use GSS.framework on macOS
Etienne Samson 49a3289e 2019-09-21T08:25:23 cred: add missing private header in GSSAPI block Should have been part of 8bf0f7eb26c65b2b937b1f40a384b9b269b0b76d
Etienne Samson 8bf0f7eb 2019-09-09T13:00:27 cred: separate public interface from low-level details
Patrick Steinhardt 5d8a4659 2019-09-13T10:31:49 Merge pull request #5195 from tiennou/fix/commitish-smart-push smart: use push_glob instead of manual filtering
Edward Thomson 39028eb6 2019-09-09T13:00:53 Merge pull request #5212 from libgit2/ethomson/creds_for_scheme Use an HTTP scheme that supports the given credentials
Ian Hattendorf 4de51f9e 2019-08-23T16:05:28 http: ensure the scheme supports the credentials When a server responds with multiple scheme support - for example, Negotiate and NTLM are commonly used together - we need to ensure that we choose a scheme that supports the credentials.
Etienne Samson 53f51c60 2019-08-21T19:48:05 smart: implement by-date insertion when revwalking
Patrick Steinhardt c2dd895a 2019-08-08T10:47:29 transports: http: check for memory allocation failures When allocating a chunk that is used to write to HTTP streams, we do not check for memory allocation errors. This may lead us to write to a `NULL` pointer and thus cause a segfault. Fix this by adding a call to `GIT_ERROR_CHECK_ALLOC`.
Edward Thomson 1c847169 2019-08-21T16:38:59 http: allow dummy negotiation scheme to fail to act The dummy negotiation scheme is used for known authentication strategies that do not wish to act. For example, when a server requests the "Negotiate" scheme but libgit2 is not built with Negotiate support, libgit2 will use the "dummy" strategy, which simply does not act. Instead of setting `out` to NULL and returning a successful code, return `GIT_PASSTHROUGH` to indicate that it did not act, and catch that error code.
Etienne Samson 39d18fe6 2019-07-31T08:37:10 smart: use push_glob instead of manual filtering The code worked under the assumption that anything under `refs/tags` is a tag object, and all the rest would be peelable to a commit. As it is completely valid to have tags to blobs under a non-`refs/tags` ref, this would cause failures when trying to peel a tag to a commit. Fix the broken filtering by switching to `git_revwalk_push_glob`, which already handles this case.
Edward Thomson 3cd123e9 2019-06-15T21:56:53 win32: define DWORD_MAX if it's not defined MinGW does not define DWORD_MAX. Specify it when it's not defined.
Edward Thomson d93b0aa0 2019-06-15T21:47:40 win32: decorate unused parameters
Edward Thomson 8c925ef8 2019-05-21T14:30:28 smart protocol: validate progress message length Ensure that the server has not sent us overly-large sideband messages (ensure that they are no more than `INT_MAX` bytes), then cast to `int`.
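
Conceptually the validation is a single bound check before the narrowing cast; the callback shape below is invented for illustration and is not the smart protocol's real sideband callback.

```c
/* Sketch of bounding a size_t length before narrowing it to int. */
#include <limits.h>
#include <stdio.h>
#include <string.h>

static int forward_progress(const char *msg, size_t len,
                            void (*cb)(const char *msg, int len))
{
	/* Reject anything the int-based callback could not represent. */
	if (len > INT_MAX)
		return -1; /* unreasonably large sideband message */

	cb(msg, (int)len);
	return 0;
}

static void print_progress(const char *msg, int len)
{
	printf("remote: %.*s", len, msg);
}

int main(void)
{
	const char *msg = "Counting objects: 100% (42/42), done.\n";
	return forward_progress(msg, strlen(msg), print_progress);
}
```
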
Edward Thomson 7afe788c 2019-05-21T14:27:46 smart transport: use size_t for sizes
Edward Thomson db7f1d9b 2019-05-21T14:21:58 local transport: cast message size to int explicitly Our progress information messages are short (and bounded by their format string), cast the length to int for callers.
Edward Thomson 8048ba70 2019-05-21T14:18:40 winhttp: safely cast length to DWORD
Edward Thomson 2a4bcf63 2019-06-23T18:24:23 errors: use lowercase Use lowercase for our error messages, per our custom.
Edward Thomson 5d92e547 2019-06-08T17:28:35 oid: `is_zero` instead of `iszero` The only function that is named `issomething` (without underscore) was `git_oid_iszero`. Rename it to `git_oid_is_zero` for consistency with the rest of the library.
Edward Thomson 0b5ba0d7 2019-06-06T16:36:23 Rename opt init functions to `options_init` In libgit2 nomenclature, when we need to verb a direct object, we name a function `git_directobject_verb`. Thus, if we need to init an options structure named `git_foo_options`, then the name of the function that does that should be `git_foo_options_init`. The previous name, `git_foo_init_options`, is close - it _sounds_ as if it's initializing the options of a `foo`, but in fact `git_foo_options` is its own noun that should be respected. Deprecate the old names; they'll now call directly to the new ones.
Edward Thomson 7ea8630e 2019-04-07T20:11:59 http: free auth context on failure When we send HTTP credentials but the server rejects them, tear down the authentication context so that we can start fresh. To maintain this state, additionally move all of the authentication handling into `on_auth_required`.
Edward Thomson 005b5bc2 2019-04-07T17:55:23 http: reconnect to proxy on connection close When we're issuing a CONNECT to a proxy, we expect to keep-alive to the proxy. However, during authentication negotiations, the proxy may close the connection. Reconnect if the server closes the connection.
Edward Thomson d171fbee 2019-04-07T17:40:23 http: allow server to drop a keepalive connection When we have a keep-alive connection to the server, that server may legally drop the connection for any reason once a successful request and response has occurred. It's common for servers to drop the connection after some amount of time or number of requests have occurred.
Edward Thomson 9af1de5b 2019-03-24T20:49:57 http: stop on server EOF We stop the read loop when we have read all the data. We should also consider the server's feelings. If the server hangs up on us, we need to stop our read loop. Otherwise, we'll try to read from the server - and fail - ad infinitum.
Edward Thomson 539e6293 2019-03-22T19:06:46 http: teach auth mechanisms about connection affinity Instead of using `is_complete` to decide whether we have connection or request affinity for authentication mechanisms, set a boolean on the mechanism definition itself.
Edward Thomson 3e0b4b43 2019-03-22T18:52:03 http: maintain authentication across connections For request-based authentication mechanisms (Basic, Digest) we should keep the authentication context alive across socket connections, since the authentication headers must be transmitted with every request. However, we should continue to remove authentication contexts for mechanisms with connection affinity (NTLM, Negotiate) since we need to reauthenticate for every socket connection.
Edward Thomson ce72ae95 2019-03-22T10:53:30 http: simplify authentication mechanisms Hold an individual authentication context instead of trying to maintain all the contexts; we can select the preferred context during the initial negotiation. Subsequent authentication steps will re-use the chosen authentication (until such time as it's rejected) instead of trying to manage multiple contexts when all but one will never be used (since we can only authenticate with a single mechanism at a time.) Also, when we're given a 401 or 407 in the middle of challenge/response handling, short-circuit immediately without incrementing the retry count. The multi-step authentication is expected, and not a "retry" and should not be penalized as such. This means that we don't need to keep the contexts around and ensures that we do not unnecessarily fail for too many retries when we have challenge/response auth on a proxy and a server and potentially redirects in play as well.
Edward Thomson 6d931ba7 2019-03-22T16:35:59 http: don't set the header in the auth token
Edward Thomson 10718526 2019-03-09T13:53:16 http: don't reset replay count after connection A "connection" to a server is transient, and we may reconnect to a server in the midst of authentication failures (if the remote indicates that we should, via `Connection: close`) or in a redirect.
Edward Thomson 3192e3c9 2019-03-07T16:57:11 http: provide an NTLM authentication provider
Edward Thomson 79fc8281 2019-03-21T16:49:25 http: validate server's authentication types Ensure that the server supports the particular credential type that we're specifying. Previously we considered credential types as an input to an auth mechanism - since the HTTP transport only supported default credentials (via negotiate) and username/password credentials (via basic), this worked. However, if we are to add another mechanism that uses username/password credentials, we'll need to be careful to identify the types that are accepted.
Edward Thomson 5ad99210 2019-03-07T16:43:45 http: consume body on proxy auth failure We must always consume the full parser body if we're going to keep-alive. So in the authentication failure case, continue advancing the http message parser until it's complete, then we can retry the connection. Not doing so would mean that we have to tear the connection down and start over. Advancing through fully (even though we don't use the data) will ensure that we can retry a connection with keep-alive.
Edward Thomson 75b20458 2019-03-07T16:34:55 http: always consume body on auth failure When we get an authentication failure, we must consume the entire body of the response. If we only read half of the body (on the assumption that we can ignore the rest) then we will never complete the parsing of the message. This means that we will never set the complete flag, and our replay must actually tear down the connection and try again. This is particularly problematic for stateful authentication mechanisms (SPNEGO, NTLM) that require that we keep the connection alive. Note that the prior code is only a problem when the 401 that we are parsing is too large to be read in a single chunked read from the http parser. But now we will continue to invoke the http parser until we've got a complete message in the authentication failed scenario. Note that we need not do anything with the message, so when we get an authentication failed, we'll stop adding data to our buffer, we'll simply loop in the parser and let it advance its internal state.
Edward Thomson e87f912b 2019-03-21T15:29:52 http: don't realloc the request
Edward Thomson 10e8fe55 2019-03-21T13:55:54 transports: add an `is_complete` function for auth Some authentication mechanisms (like HTTP Basic and Digest) have a one-step mechanism to create credentials, but there are more complex mechanisms like NTLM and Negotiate that require challenge/response after negotiation, requiring several round-trips. Add an `is_complete` function to know when they have round-tripped enough to be a single authentication and should now either have succeeded or failed to authenticate.
Edward Thomson 9050c69c 2019-03-09T17:24:16 http: examine keepalive status at message end We cannot examine the keep-alive status of the http parser in `http_connect`; it's too late and the critical information about whether keep-alive is supported has been destroyed. Per the documentation for `http_should_keep_alive`: > If http_should_keep_alive() in the on_headers_complete or > on_message_complete callback returns 0, then this should be > the last message on the connection. Query then and set the state.
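
A sketch of querying the parser at the right moment, assuming the nodejs/http-parser API that libgit2 vendors; the `connection_state` struct and `register_callbacks` helper are invented for the example.

```c
/* Record keep-alive status inside the on_message_complete callback,
 * per the http_should_keep_alive() documentation quoted above. */
#include "http_parser.h"

typedef struct {
	int keepalive; /* remembered at message end, consulted when reconnecting */
} connection_state;

static int on_message_complete(http_parser *parser)
{
	connection_state *conn = parser->data;

	/* Query now: once the next request starts, the parser state that
	 * backs this answer is gone. */
	conn->keepalive = http_should_keep_alive(parser);
	return 0;
}

void register_callbacks(http_parser_settings *settings)
{
	http_parser_settings_init(settings);
	settings->on_message_complete = on_message_complete;
}
```
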
Edward Thomson 956ba48b 2019-03-14T10:36:40 http: increase the replay count Increase the permissible replay count; with multiple-step authentication schemes (NTLM, Negotiate), proxy authentication and redirects, we need to be mindful of the number of steps it takes to get connected. 7 seems high but can be exhausted quickly with just a single authentication failure over a redirected multi-state authentication pipeline.
Edward Thomson ee3d35cf 2019-04-07T19:15:21 http: support https for proxies
Edward Thomson 3d11b6c5 2019-03-11T20:36:09 winhttp: support default credentials for proxies We did not properly support default credentials for proxies, only for destination servers. Refactor the credential handling to support sending either username/password _or_ default credentials to either the proxy or the destination server. This actually shares the authentication logic between proxy servers and destination servers. Due to copy/pasta drift over time, they had diverged. Now they share a common logic which is: first, use credentials specified in the URL (if there were any), treating empty username and password (ie, "http://:@foo.com/") as default credentials, for compatibility with git. Next, call the credential callbacks. Finally, fallback to WinHTTP compatibility layers using built-in authentication like we always have. Allowing default credentials for proxies requires moving the security level downgrade into the credential setting routines themselves. We will update our security level to "high" by default which means that we will never send default credentials without prompting. (A lower setting, like the WinHTTP default of "medium" would allow WinHTTP to handle credentials for us, despite what a user may have requested with their structures.) Now we start with "high" and downgrade to "low" only after a user has explicitly requested default credentials.
Edward Thomson c6ab183e 2019-03-11T11:43:08 net: rename gitno_connection_data to git_net_url "Connection data" is an imprecise and largely incorrect name; these structures are actually parsed URLs. Provide a parser that takes a URL string and produces a URL structure (if it is valid). Separate the HTTP redirect handling logic from URL parsing, keeping a `gitno_connection_data_handle_redirect` whose only job is redirect handling logic and does not parse URLs itself.
Edward Thomson 810cefd9 2019-06-08T17:14:00 credentials: suffix the callbacks with `_cb` The credential callbacks should match the other callback naming conventions, using the `_cb` suffix instead of a `_callback` suffix.
Etienne Samson b51789ac 2019-04-16T13:20:08 transports: make use of the `GIT_CONTAINER_OF` macro
Edward Thomson 59001e83 2019-02-21T11:41:19 remote: rename git_push_transfer_progress callback The `git_push_transfer_progress` is a callback and as such should be suffixed with `_cb` for consistency. Rename `git_push_transfer_progress` to `git_push_transfer_progress_cb`.
Edward Thomson a1ef995d 2019-02-21T10:33:30 indexer: use git_indexer_progress throughout Update internal usage of `git_transfer_progress` to `git_indexer_progress`.
Patrick Steinhardt 5265b31c 2019-01-23T15:00:20 streams: fix callers potentially only writing partial data Similar to the write(3) function, implementations of `git_stream_write` do not guarantee that all bytes are written. Instead, they return the number of bytes that actually have been written, which may be smaller than the total number of bytes. Furthermore, due to an interface design issue, we cannot ever write more than `SSIZE_MAX` bytes at once, as otherwise we cannot represent the number of bytes written to the caller. Unfortunately, no caller of `git_stream_write` ever checks the return value, except to verify that no error occurred. Due to this, they are susceptible to the case where only partial data has been written. Fix this by introducing a new function `git_stream__write_full`. In contrast to `git_stream_write`, it will always return either success or failure, without returning the number of bytes written. Thus, it is able to write all `SIZE_MAX` bytes and loop around `git_stream_write` until all data has been written. Adjust all callers except the BIO callbacks in our mbedtls and OpenSSL streams, which already do the right thing and require the amount of bytes written.
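
The pattern is the classic write-until-done loop. It is sketched below over POSIX `write(2)` rather than `git_stream`, purely to show the shape of a `*_write_full` helper that reports only success or failure rather than a byte count.

```c
/* Generic analogue of the git_stream__write_full idea: loop until every
 * byte is written or an error occurs. Illustration only. */
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

static int write_full(int fd, const void *data, size_t len)
{
	const char *p = data;

	while (len > 0) {
		ssize_t written = write(fd, p, len);

		if (written < 0) {
			if (errno == EINTR)
				continue;   /* interrupted; retry */
			return -1;          /* real error: report failure only */
		}

		p += (size_t)written;
		len -= (size_t)written;     /* short write: keep going */
	}

	return 0; /* success; no byte count to truncate into a signed return */
}
```

Because the helper never reports how much was written, it is free to accept any `size_t` length, which sidesteps the `SSIZE_MAX` limitation described in the commit.
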
Edward Thomson 4947216f 2019-01-21T11:11:27 git transport: only write INT_MAX bytes The transport code returns an `int` with the number of bytes written; thus only attempt to write at most `INT_MAX`.
Sven Strickroth bff7aed2 2019-01-24T16:44:04 Don't use deprecated constants Follow up for PR #4917. Signed-off-by: Sven Strickroth <email@cs-ware.de>
Edward Thomson f673e232 2018-12-27T13:47:34 git_error: use new names in internal APIs and usage Move to the `git_error` name in the internal API for error-related functions.
Edward Thomson 1758636b 2019-01-19T01:38:34 Merge pull request #4939 from libgit2/ethomson/git_ref Move `git_ref_t` to `git_reference_t`
Edward Thomson abe23675 2019-01-17T20:09:05 Merge pull request #4925 from lhchavez/fix-a-bunch-of-warnings Fix a bunch of warnings
Edward Thomson ed8cfbf0 2019-01-17T00:32:31 references: use new names in internal usage Update internal usage to use the `git_reference` names for constants.
Jason Haslam 35d86c77 2019-01-14T10:14:36 proxy: fix crash on remote connection with GIT_PROXY_AUTO but no proxy is detected
lhchavez 321d19c1 2019-01-06T08:36:06 Windows is hard.
lhchavez 7b453e7e 2019-01-05T22:12:48 Fix a bunch of warnings This change fixes a bunch of warnings that were discovered by compiling with `clang -target=i386-pc-linux-gnu`. It turned out that the intrinsics were not necessarily being used on all platforms, especially with GCC, since it does not support __has_builtin. Some more warnings were gleaned from the Windows build, but I stopped when I saw that some third-party dependencies (e.g. zlib) have warnings of their own, so we might never be able to enable -Werror there.
Edward Thomson 168fe39b 2018-11-28T14:26:57 object_type: use new enumeration names Use the new object_type enumeration names within the codebase.
Edward Thomson 30ac46aa 2018-11-28T10:12:43 http: reset replay_count upon connection Reset the replay_count upon a successful connection. It's possible that we could encounter a situation where we connect successfully but need to replay a request - for example, a connection and initial request succeeds without authentication but a subsequent call does require authentication. Reset the replay count upon any successful request to afford subsequent replays room to maneuver.
Edward Thomson 52478d7d 2018-11-18T19:54:49 http: don't allow SSL connections to a proxy Temporarily disallow SSL connections to a proxy until we can understand the valgrind warnings when tunneling OpenSSL over OpenSSL.
Edward Thomson 41f620d9 2018-11-18T19:10:50 http: only load proxy configuration during connection Only load the proxy configuration during connection; we need this data when we're going to connect to the server, however we may mutate it after connection (connecting through a CONNECT proxy means that we should send requests like normal). If we reload the proxy configuration but do not actually reconnect (because we're in a keep-alive session) then we will reload the proxy configuration that we should have mutated. Thus, only load the proxy configuration when we know that we're going to reconnect.
Edward Thomson 0467606f 2018-11-18T11:00:11 http: disallow repeated headers from servers Don't allow servers to send us multiple Content-Type, Content-Length or Location headers.
Edward Thomson 21142c5a 2018-10-29T10:04:48 http: remove cURL We previously used cURL to support HTTP proxies. Now that we've added this support natively, we can remove the curl dependency.
Edward Thomson 5d4e1e04 2018-10-28T21:27:56 http: use CONNECT to talk to proxies Natively support HTTPS connections through proxies by speaking CONNECT to the proxy and then adding a TLS connection on top of the socket.
Edward Thomson 43b592ac 2018-10-25T08:49:01 tls: introduce a wrap function Introduce `git_tls_stream_wrap` which will take an existing `stream` with an already connected socket and begin speaking TLS on top of it. This is useful if you've built a connection to a proxy server and you wish to begin CONNECT over it to tunnel a TLS connection. Also update the pluggable TLS stream layer so that it can accept a registration structure that provides an `init` and `wrap` function, instead of a single initialization function.
Edward Thomson b2ed778a 2018-11-18T22:20:10 http transport: reset error message on cert failure Store the error message from the underlying TLS library before calling the certificate callback. If it refuses to act (demonstrated by returning GIT_PASSTHROUGH) then restore the error message. Otherwise, if the callback does not set an error message, set a sensible default that implicates the callback itself.
Edward Thomson 2ce2315c 2018-10-22T17:33:45 http transport: support cert check for proxies Refactor certificate checking so that it can easily be called for proxies or the remote server.
Edward Thomson 74c6e08e 2018-10-22T14:56:53 http transport: provide proxy credentials
Edward Thomson 496da38c 2018-10-22T12:48:45 http transport: refactor storage Create a simple data structure that contains information about the server being connected to, whether that's the actual remote endpoint (git server) or an intermediate proxy. This allows for organization of streams, authentication state, etc.
Edward Thomson 6af8572c 2018-10-22T11:29:01 http transport: cap number of authentication replays Put a limit on the number of authentication replays in the HTTP transport. Standardize on 7 replays for authentication or redirects, which matches the behavior of the WinHTTP transport.
Edward Thomson 22654812 2018-10-22T11:24:05 http transport: prompt for proxy credentials Teach the HTTP transport how to prompt for proxy credentials.
Edward Thomson 0328eef6 2018-10-22T11:14:06 http transport: further refactor credential handling Prepare credential handling to understand both git server and proxy server authentication.
Edward Thomson 32cb56ce 2018-10-22T10:16:54 http transport: refactor credential handling Factor credential handling into its own function. Additionally, add safety checks to ensure that we are in a valid state - that we have received a valid challenge from the server and that we have configuration to respond to that challenge.