|
6d8a34ad
|
2019-06-23T19:57:05
|
|
ci: add flaky test re-execution on Unix
Our online tests are occasionally flaky since they hit real network
endpoints. Re-run them up to 5 times if they fail, so that a transient
failure doesn't fail the whole build.
|
|
a080037c
|
2019-06-24T15:49:31
|
|
Merge pull request #5137 from libgit2/ethomson/error_messages
errors: use lowercase
|
|
6ffc49e1
|
2019-06-23T19:29:55
|
|
Merge pull request #5136 from libgit2/ethomson/largefiles_32bit
largefile tests: only write 2GB on 32-bit platforms
|
|
2a4bcf63
|
2019-06-23T18:24:23
|
|
errors: use lowercase
Use lowercase for our error messages, per our custom.
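For illustration only (a sketch, assuming the internal `git_error_set` helper that the library uses for reporting; the message text and variables are made up):
```
/* illustrative only: lowercase message text, no trailing period */
git_error_set(GIT_ERROR_NET, "unexpected http status code: %d", status);
```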
|
|
8eb910b0
|
2019-06-23T11:26:10
|
|
largefile tests: only write 2GB on 32-bit platforms
Don't try to feed 4 GB of data to APIs that only take a `size_t` on
32-bit platforms.
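A minimal sketch of the idea, picking the payload size at compile time based on how wide `size_t` is (the macro name is illustrative, not the one used in the test suite):
```
#include <stddef.h>
#include <stdint.h>

/* illustrative only: 4 GB does not fit in a 32-bit size_t, so cap the test at 2 GB there */
#if SIZE_MAX > UINT32_MAX
# define LARGEFILE_TEST_SIZE ((size_t)4 * 1024 * 1024 * 1024)
#else
# define LARGEFILE_TEST_SIZE ((size_t)2 * 1024 * 1024 * 1024)
#endif
```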
|
|
4df9f3c6
|
2019-06-21T08:24:47
|
|
Merge pull request #5129 from ehuss/patch-1
Fix broken link in README
|
|
84262643
|
2019-06-20T10:32:09
|
|
Fix broken link in README
|
|
55a1535d
|
2019-06-20T12:32:31
|
|
Merge pull request #5122 from libgit2/ethomson/deprecate_headlist
net: remove unused `git_headlist_cb`
|
|
89f36f1b
|
2019-06-17T13:07:56
|
|
Merge pull request #5124 from pks-t/pks/cmake-ntlm-without-https
cmake: default NTLM client to off if no HTTPS support
|
|
393fb8a1
|
2019-06-17T12:15:19
|
|
cmake: default NTLM client to off if no HTTPS support
If building libgit2 with `-DUSE_HTTPS=NO`, CMake will generate an
error complaining that there's no usable HTTPS backend for NTLM.
It doesn't make sense to support NTLM when we don't support HTTPS,
so let's just have NTLM default to OFF when HTTPS is disabled. This
makes life easier and fixes our failing OSS-Fuzz builds.
|
|
2c642918
|
2019-06-16T17:55:40
|
|
net: remove unused `git_headlist_cb`
|
|
37e4c1ba
|
2019-06-16T14:35:53
|
|
Merge pull request #5119 from libgit2/ethomson/attr
attr: rename constants and macros for consistency
|
|
91a300b7
|
2019-06-16T00:46:30
|
|
attr: rename constants and macros for consistency
Our enumeration values are not generally suffixed with `T`. Further,
our enumeration names are generally more descriptive.
|
|
c3bbbcf5
|
2019-06-16T12:30:56
|
|
Merge pull request #5117 from libgit2/ethomson/to_from
Change API instances of `fromnoun` to `from_noun` (with an underscore)
|
|
2bdb617a
|
2019-06-16T11:10:58
|
|
Merge pull request #5118 from libgit2/ethomson/object_size
object: rename git_object__size to git_object_size
|
|
e45350fe
|
2019-06-16T00:10:02
|
|
tag: add underscore to `from` function
The majority of functions are named `from_something` (with an
underscore) instead of `fromsomething`. Update the tag function for
consistency with the rest of the library.
|
|
6574cd00
|
2019-06-08T19:25:36
|
|
index: rename `frombuffer` to `from_buffer`
The majority of functions are named `from_something` (with an
underscore) instead of `fromsomething`. Update the index functions for
consistency with the rest of the library.
|
|
b7791d04
|
2019-06-16T00:23:01
|
|
object: rename git_object__size to git_object_size
We don't use double-underscores in the public API.
|
|
08f39208
|
2019-06-08T17:46:04
|
|
blob: add underscore to `from` functions
The majority of functions are named `from_something` (with an
underscore) instead of `fromsomething`. Update the blob functions for
consistency with the rest of the library.
|
|
5d92e547
|
2019-06-08T17:28:35
|
|
oid: `is_zero` instead of `iszero`
The only function that is named `issomething` (without underscore) was
`git_oid_iszero`. Rename it to `git_oid_is_zero` for consistency with
the rest of the library.
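For illustration, the rename in practice (a sketch; the old spelling should remain available as a deprecated compatibility shim):
```
#include <git2.h>

static int oid_is_all_zeros(const git_oid *id)
{
	/* new, consistently named form; previously spelled git_oid_iszero(id) */
	return git_oid_is_zero(id);
}
```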
|
|
fef847ae
|
2019-06-15T15:47:41
|
|
Merge pull request #5110 from pks-t/pks/wildmatch
Replace fnmatch with wildmatch
|
|
2b6594de
|
2019-06-15T15:43:49
|
|
Merge pull request #5111 from tiennou/fix/docs
Documentation fixes
|
|
764196ff
|
2019-06-13T20:17:01
|
|
doc: add missing documentation comments
|
|
2376fa6c
|
2019-06-13T19:42:55
|
|
indexer: correct missing includes
Docurium seems to choke on this header because it cannot see both
git_indexer_progress and git_indexer_progress_cb; add the missing include.
|
|
a9f57629
|
2019-06-13T15:03:00
|
|
wildmatch: import wildmatch from git.git
In commit 70a8fc999d (stop using fnmatch (either native or
compat), 2014-02-15), upstream git switched all of its code from
its internal fnmatch copy to the new wildmatch code. We haven't
followed suit, and have thus developed some incompatibilities in
how we match patterns.
Import git's wildmatch from v2.22.0 and add a test suite based on
their t3070-wildmatch.sh tests.
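A hedged sketch of how the imported matcher is driven, assuming the upstream `wildmatch()` entry point and `WM_*` flags are kept as-is by the import:
```
#include "wildmatch.h"

/* wildmatch() returns WM_MATCH (0) when the pattern matches the path */
static int patterns_match(const char *pattern, const char *path)
{
	return wildmatch(pattern, path, WM_PATHNAME) == WM_MATCH;
}
```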
|
|
13ded47c
|
2019-06-13T19:57:17
|
|
fnmatch: remove unused code
The `fnmatch` code has now been completely replaced by
`wildmatch`, same as upstream git.git has been doing in 2014.
Remove it.
|
|
05f9986a
|
2019-06-14T08:06:05
|
|
attr_file: convert to use `wildmatch`
Upstream git has converted to use `wildmatch` instead of
`fnmatch`. Convert our gitattributes logic, the last user of
`fnmatch`, to use `wildmatch` as well. I won't claim to fully
understand every corner here: the fnmatch parser has a load of
weird special cases, so in all honesty I'm relying on our tests,
which are by now rather comprehensive in that area.
The conversion actually fixes compatibility with how git.git
parses "**" patterns when the given path does not contain any
directory separators. Previously, a pattern "**.foo" erroneously
wouldn't match a file "x.foo", while git.git would match it.
Remove the now-unused LEADINGDIR/NOLEADINGDIR flags from
`git_attr_fnmatch`.
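A small sketch of the behavioural difference this fixes, expressed directly against the matcher (assuming the same `wildmatch()` entry point and flags as in the import above; not the actual test code):
```
#include <assert.h>
#include "wildmatch.h"

/* with wildmatch, "**.foo" now matches a path without directory separators */
static void check_doublestar(void)
{
	assert(wildmatch("**.foo", "x.foo", WM_PATHNAME) == WM_MATCH);
}
```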
|
|
5811e3ba
|
2019-06-13T19:16:32
|
|
config_file: use `wildmatch` to evaluate conditionals
We currently use `p_fnmatch` to compute whether a given "gitdir:"
or "gitdir/i:" conditional matches the current configuration file
path. As git.git has moved to use `wildmatch` instead of
`p_fnmatch` throughout its entire codebase, we evaluate these
conditionals inconsistently with git.git in some special cases.
Convert `p_fnmatch` to use `wildmatch`. The `FNM_LEADINGDIR` flag
cannot be translated to `wildmatch`, but git.git doesn't use it
here either; in fact, dropping it increases compatibility with
git.git.
|
|
cf1a114b
|
2019-06-13T19:10:22
|
|
config_file: do not include trailing '/' for "gitdir" conditionals
When evaluating "gitdir:" and "gitdir/i:" conditionals, we
currently compare the given pattern with the value of
`git_repository_path`. The problem is that `git_repository_path`
returns the gitdir path with a trailing '/', while we actually
need to match against the gitdir without it.
Fix this issue by stripping the trailing '/' prior to matching.
Add various tests to ensure we get this right.
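A minimal sketch of the stripping step (the helper name is illustrative, under the assumption that the matcher then works on an explicit length):
```
#include <string.h>

/* length of the gitdir without the trailing '/' that git_repository_path() reports */
static size_t gitdir_match_len(const char *gitdir)
{
	size_t len = strlen(gitdir);

	if (len > 0 && gitdir[len - 1] == '/')
		len--;

	return len;
}
```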
|
|
5d987f7d
|
2019-06-13T19:00:06
|
|
config_file: refactor `do_match_gitdir` to improve readability
The function `do_match_gitdir` has some horribly named parameters
and variables. Rename them to improve readability. Furthermore,
fix a potentially undetected out-of-memory condition when
appending "**" to the pattern.
|
|
de70bb46
|
2019-06-13T15:27:22
|
|
global: convert trivial `fnmatch` users to use `wildmatch`
Upstream git.git converted its codebase to use wildmatch in
favor of fnmatch in commit 70a8fc999d (stop using fnmatch (either
native or compat), 2014-02-15). To keep our own pattern matching
in line with what git does, convert all trivial instances of
`fnmatch` usage to use `wildmatch` instead. Trivial usage is
defined as use of `fnmatch` with either no flags or flags that
have a 1:1 equivalent in wildmatch (PATHNAME, IGNORECASE).
|
|
451df793
|
2019-06-13T15:20:23
|
|
posix: remove implicit include of "fnmatch.h"
We're about to phase out our bundled fnmatch implementation, as
git.git moved to wildmatch long ago in 2014. To make it
easier to spot which files are still using fnmatch, remove the
implicit "fnmatch.h" include in "posix.h" and instead include it
explicitly.
|
|
f0a720d5
|
2019-06-14T18:22:39
|
|
Merge pull request #5114 from pks-t/pks/bigfile-refactoring
Removal of `p_fallocate`
|
|
2d85c7e8
|
2019-06-14T14:12:19
|
|
posix: remove `p_fallocate` abstraction
By now, we have repeatedly failed to provide a nice
cross-platform implementation of `p_fallocate`. Recent attempts
to do so quickly escalated into a set of different CMake checks,
implementations, fallbacks, etc., which started to look really
awkward to maintain. In fact, `p_fallocate` had only been
introduced in commit 4e3949b73 (tests: test that largefiles can
be read through the tree API, 2019-01-30) to support a test with
large files, but given the maintenance costs it just doesn't seem
to be worth it.
As we have removed the sole user of `p_fallocate` in the previous
commit, let's drop it altogether.
|
|
0c2d0d4b
|
2019-06-14T14:07:26
|
|
tests: object: refactor largefile test to not use `p_fallocate`
The `p_fallocate` platform abstraction is currently used only in
our tests, but it proved to be quite burdensome to implement in a
cross-platform way. The only "real" user is the test
object::tree::read::largefile, where it's used to allocate a
large file in the filesystem only to commit it to the repo and
read its object back again. We can simplify this quite a bit by
just using an in-memory buffer of 4GB. Sure, this cannot be used
on platforms with low resources. But creating 4GB files is not
any better, and we already skip the test if the environment
variable "GITTEST_INVASIVE_FS_SIZE" is not set. So we're arguably
not worse off than before.
|
|
c3179eff
|
2019-06-14T13:34:13
|
|
Merge pull request #5055 from tiennou/cmake/backend-detect
Modularize our TLS & hash detection
|
|
94fc83b6
|
2019-06-13T16:48:35
|
|
cmake: modularize our TLS & hash detection
The interactions between `USE_HTTPS` and `SHA1_BACKEND` have been
streamlined. Previously we would have accepted not-quite-working
configurations (like `-DUSE_HTTPS=OFF -DSHA1_BACKEND=OpenSSL`) and, as
the OpenSSL detection only ran with `USE_HTTPS`, the link would fail.
The detection was moved to a new `USE_SHA1` option, modeled after
`USE_HTTPS`, which takes the values "CollisionDetection", "Backend", or
"Generic" to better match how the hashing backend is selected; the
default (ON) maps to "CollisionDetection".
Note that, as `SHA1_BACKEND` is still used internally, you might need to
check what customization you're using it for.
|
|
231ccbeb
|
2019-06-14T10:36:23
|
|
Merge pull request #5109 from pks-t/pks/test-mergeanalysis-variant
tests: merge::analysis: use test variants to avoid duplicated test suites
|
|
1ab0523d
|
2019-06-14T10:34:52
|
|
Merge pull request #5101 from libgit2/ethomson/opts_init
Rename options initialization functions
|
|
bed33a6f
|
2019-06-14T09:59:34
|
|
Merge pull request #5112 from pks-t/pks/ntlmclient-implicit-fallthrough
deps: ntlmclient: disable implicit fallthrough warnings
|
|
c0dd7122
|
2019-06-06T16:48:04
|
|
apply: add an options struct initializer
|
|
0b5ba0d7
|
2019-06-06T16:36:23
|
|
Rename opt init functions to `options_init`
In libgit2 nomenclature, when we need to verb a direct object, we name
a function `git_directobject_verb`. Thus, if we need to init an options
structure named `git_foo_options`, then the name of the function that
does that should be `git_foo_options_init`.
The previous name, `git_foo_init_options`, is close - it _sounds_ as
if it's initializing the options of a `foo`, but in fact
`git_foo_options` is its own noun that should be respected.
Deprecate the old names; they'll now call directly to the new ones.
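For illustration, with `git_checkout_options` as an example (a sketch; the wrapper function name here is made up):
```
#include <git2.h>

static int prepare_checkout_options(git_checkout_options *opts)
{
	/* new, consistent name: the `git_foo_options` noun gets the `_init` verb;
	 * the deprecated git_checkout_init_options() now forwards to this function */
	return git_checkout_options_init(opts, GIT_CHECKOUT_OPTIONS_VERSION);
}
```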
|
|
358b7a9d
|
2019-06-14T08:44:13
|
|
deps: ntlmclient: disable implicit fallthrough warnings
The ntlmclient dependency has quite a lot of places with implicit
fallthroughs. As at least modern GCC has enabled warnings on
implicit fallthroughs by default, the developer is greeted with a
wall of warnings when compiling that dependency.
Disable implicit fallthrough warnings for ntlmclient to fix this
issue.
|
|
a5ddae68
|
2019-06-13T22:00:48
|
|
Merge pull request #5097 from pks-t/pks/ignore-escapes
gitignore with escapes
|
|
e277ff4d
|
2019-06-13T21:41:55
|
|
Merge pull request #5108 from libgit2/ethomson/urlparse_empty_port
Handle URLs with a colon after host but no port
|
|
fb529a01
|
2019-06-11T22:03:29
|
|
http-parser: use our bundled http-parser by default
Our bundled http-parser includes bugfixes, therefore we should prefer
our http-parser until such time as we can identify that the system
http-parser has these bugfixes (using a version check).
Since these bugs are - at present - minor, retain the ability for users
to force that they want to use the system http-parser anyway. This does
change the cmake specification so that people _must_ knowingly opt in
to the new behavior.
|
|
70fae43c
|
2019-06-13T11:57:16
|
|
tests: merge::analysis: use variants to deduplicate test suites
Since commit 394951ad4 (tests: allow for simple data-driven
tests, 2019-06-07), we have the ability to run a given test suite
with multiple variants. Use this new feature to deduplicate the
test suites for merge::{trees,workdir}::analysis into a single
test suite.
|
|
0c1029be
|
2019-06-13T11:41:39
|
|
Merge pull request #5022 from rcoup/merge-analysis-bare-repo-5017
Merge analysis support for bare repos
|
|
758d1b9c
|
2019-06-13T11:38:14
|
|
Merge pull request #5104 from rcoup/patch-1
Add memleak check docs
|
|
3b517351
|
2019-06-07T10:13:34
|
|
attr_file: remove invalid TODO comment
In our attributes pattern parsing code, we have a comment that
states we might have to convert '\' characters to '/' to have
proper POSIX paths. But in fact, '\' characters are valid inside
the string and act as escape mechanism for various characters,
which is why we never want to convert those to POSIX directory
separators. Furthermore, gitignore patterns are specified to only
treat '/' as directory separators.
Remove the comment to avoid future confusion.
|
|
b3b6a39d
|
2019-06-07T11:12:54
|
|
attr_file: account for escaped escapes when searching trailing space
When determining the trailing space length, we need to honor
whether spaces are escaped or not. Currently, we do not check
whether the escape itself is escaped, though, which might
generate an off-by-one in that case as we will simply treat the
space as escaped.
Fix this by checking whether the backslashes preceding the space
are themselves escaped.
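A hedged sketch of the check (the helper name and parameters are illustrative): a space only counts as escaped when it is preceded by an odd number of backslashes.
```
/* sketch: `start` is the beginning of the pattern, `space` points at the space */
static int space_is_escaped(const char *start, const char *space)
{
	size_t backslashes = 0;
	const char *p = space;

	while (p > start && p[-1] == '\\') {
		backslashes++;
		p--;
	}

	/* an even count means the backslashes escape each other, not the space */
	return (backslashes % 2) == 1;
}
```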
|
|
10ac298c
|
2019-06-07T11:12:42
|
|
attr_file: fix unescaping of escapes required for fnmatch
When parsing attribute patterns, we eventually unescape the
parsed pattern. This is required because we use custom escapes
for whitespace characters, which would normally terminate the
current pattern. The problem is that we don't only unescape those
whitespace characters, but in fact all escaped sequences. So for
example if the pattern was "\*", we unescape that to "*". As this
is directly passed to fnmatch(3) later, fnmatch would treat it as
a simple glob matching all files, where it should instead only
match a file named "*".
Fix the issue by unescaping spaces only. Add a bunch of tests to
exercise escape parsing.
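A minimal sketch of the narrowed unescaping (the helper name is illustrative, and escaped backslashes are ignored here for brevity; the real parser also has to handle those, as the neighbouring commits show):
```
#include <string.h>

/* in-place unescape that only drops a backslash in front of a space,
 * leaving sequences like "\*" intact so fnmatch(3) still sees them escaped */
static void unescape_spaces(char *pattern)
{
	char *out = pattern;
	const char *in = pattern;

	while (*in) {
		if (in[0] == '\\' && in[1] == ' ')
			in++; /* skip the backslash, keep the space */
		*out++ = *in++;
	}
	*out = '\0';
}
```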
|
|
eb146e58
|
2019-06-07T09:17:23
|
|
attr_file: properly handle escaped '\' when searching non-escaped spaces
When parsing attributes, we need to search for the first
unescaped whitespace character to determine where the pattern is
to be cut off. The scan fails to account for the case where the
escaping '\' character is itself escaped, though, and thus we
would not recognize the cut-off point in patterns like "\\ ".
Refactor the scanning loop to remember whether the last character
was an escape character. If it was and the next character is a
'\', too, then we will reset to non-escaped mode again. Thus, we
now handle escaped whitespaces as well as escaped wildcards
correctly.
|
|
f7c6795f
|
2019-06-07T10:20:35
|
|
path: only treat paths starting with '\' as absolute on Win32
Windows-based systems treat paths starting with '\' as absolute,
either referring to the current drive's root (e.g. "\foo" might
refer to "C:\foo") or to a network path (e.g. "\\host\foo"). On
the other hand, (most?) systems that are not based on Win32
accept backslashes as valid characters that may be part of the
filename, and thus we cannot treat them to identify absolute
paths.
Change the logic to only treat paths starting with '\' as absolute on
the Win32 platform. Add tests to avoid regressions and document the
behaviour.
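A hedged sketch of the platform split (the `GIT_WIN32` define is the one libgit2 already uses for Win32-only code; the helper name is illustrative):
```
#include <stdbool.h>

static bool path_root_is_backslash(const char *path)
{
#ifdef GIT_WIN32
	/* "\foo" (current drive) and "\\host\share" (UNC) are absolute on Win32 */
	return path[0] == '\\';
#else
	/* elsewhere, '\' is an ordinary filename character */
	(void)path;
	return false;
#endif
}
```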
|
|
b3196a60
|
2019-06-10T12:27:12
|
|
Add memleak check docs
Document how to run it locally on macOS & Linux
|
|
1bbdec69
|
2019-06-11T21:55:31
|
|
http_parser: handle URLs with colon but no port
When we reach the end of the host and we're at the colon separating
the host from the port (i.e. there is no numeric port), do not error.
This is allowed by RFC 3986.
|
|
938cbd03
|
2019-06-11T21:53:35
|
|
net: handle urls with a colon after host but no port
Core git copes with URLs that have a colon after the host, but no
actual port number, e.g. `http://example.com:/foo.git` or
`http://example.com:`. That's horrible, but RFC 3986 says:
> URI producers and normalizers should omit the port component and its
> ":" delimiter if port is empty or if its value would be the same as
> that of the scheme's default.
This indicates that producers may emit such URLs, and therefore we must accept them.
Test that we can handle URLs with a colon but no following port number.
|
|
ff7652c1
|
2019-06-11T17:05:27
|
|
Merge pull request #5098 from pks-t/pks/clar-data-driven
Data-driven tests
|
|
fd734f7d
|
2019-06-11T12:45:27
|
|
Merge pull request #5107 from pks-t/pks/sha1dc-update
sha1dc: update to fix endianness issues on AIX/HP-UX
|
|
110b5895
|
2019-06-11T08:07:48
|
|
Merge pull request #5052 from libgit2/ethomson/netrefactor
Add NTLM support for HTTP(s) servers and proxies
|
|
230a451e
|
2019-06-10T13:54:11
|
|
sha1dc: update to fix endianness issues on AIX/HP-UX
Update our copy of sha1dc to the upstream commit 855827c (Detect
endianess on HP-UX, 2019-05-09). Changes include fixes to endian
detection on AIX and HP-UX systems as well as a define that
allows us to force aligned access, which we're not using yet.
|
|
d171fbee
|
2019-04-07T17:40:23
|
|
http: allow server to drop a keepalive connection
When we have a keep-alive connection to the server, that server may
legally drop the connection for any reason once a successful request and
response has occurred. It's common for servers to drop the connection
after some amount of time or number of requests have occurred.
|
|
9af1de5b
|
2019-03-24T20:49:57
|
|
http: stop on server EOF
We stop the read loop when we have read all the data. We should also
consider the server's feelings.
If the server hangs up on us, we need to stop our read loop. Otherwise,
we'll try to read from the server - and fail - ad infinitum.
|
|
4c2ca1ba
|
2019-03-23T12:10:57
|
|
ci: test NTLM proxy authentication on Unix
|
|
539e6293
|
2019-03-22T19:06:46
|
|
http: teach auth mechanisms about connection affinity
Instead of using `is_complete` to decide whether we have connection or
request affinity for authentication mechanisms, set a boolean on the
mechanism definition itself.
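A hedged sketch of what "a boolean on the mechanism definition" might look like (the struct and field names are illustrative, not the exact internal definitions):
```
/* each mechanism declares whether its state is tied to the connection */
typedef struct {
	const char *name;                 /* "Basic", "Digest", "NTLM", "Negotiate" */
	unsigned int connection_affinity; /* 1: reauthenticate per socket connection,
	                                   * 0: resend credentials with every request */
	/* ... callbacks for producing challenge responses, is_complete, etc. ... */
} auth_mechanism_definition;
```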
|
|
3e0b4b43
|
2019-03-22T18:52:03
|
|
http: maintain authentication across connections
For request-based authentication mechanisms (Basic, Digest) we should
keep the authentication context alive across socket connections, since
the authentication headers must be transmitted with every request.
However, we should continue to remove authentication contexts for
mechanisms with connection affinity (NTLM, Negotiate) since we need to
reauthenticate for every socket connection.
|
|
ce72ae95
|
2019-03-22T10:53:30
|
|
http: simplify authentication mechanisms
Hold an individual authentication context instead of trying to maintain
all the contexts; we can select the preferred context during the initial
negotiation.
Subsequent authentication steps will re-use the chosen authentication
(until such time as it's rejected) instead of trying to manage multiple
contexts when all but one will never be used (since we can only
authenticate with a single mechanism at a time.)
Also, when we're given a 401 or 407 in the middle of challenge/response
handling, short-circuit immediately without incrementing the retry
count. The multi-step authentication is expected, not a "retry", and
should not be penalized as such.
This means that we don't need to keep the contexts around and ensures
that we do not unnecessarily fail for too many retries when we have
challenge/response auth on a proxy and a server and potentially
redirects in play as well.
|
|
6d931ba7
|
2019-03-22T16:35:59
|
|
http: don't set the header in the auth token
|
|
10718526
|
2019-03-09T13:53:16
|
|
http: don't reset replay count after connection
A "connection" to a server is transient, and we may reconnect to a
server in the midst of authentication failures (if the remote indicates
that we should, via `Connection: close`) or in a redirect.
|
|
3192e3c9
|
2019-03-07T16:57:11
|
|
http: provide an NTLM authentication provider
|
|
a7f65f03
|
2019-03-21T15:42:57
|
|
ntlm: add ntlmclient as a dependency
Include https://github.com/ethomson/ntlmclient as a dependency.
|
|
79fc8281
|
2019-03-21T16:49:25
|
|
http: validate server's authentication types
Ensure that the server supports the particular credential type that
we're specifying. Previously we considered credential types as an
input to an auth mechanism - since the HTTP transport only supported
default credentials (via negotiate) and username/password credentials
(via basic), this worked. However, if we are to add another mechanism
that uses username/password credentials, we'll need to be careful to
identify the types that are accepted.
|
|
5ad99210
|
2019-03-07T16:43:45
|
|
http: consume body on proxy auth failure
We must always consume the full parser body if we want to keep the
connection alive. So in the authentication failure case, continue
advancing the http message parser until it's complete; then we can
retry the connection.
Not doing so would mean that we have to tear the connection down and
start over. Advancing through fully (even though we don't use the data)
ensures that we can retry a connection with keep-alive.
|
|
75b20458
|
2019-03-07T16:34:55
|
|
http: always consume body on auth failure
When we get an authentication failure, we must consume the entire body
of the response. If we only read half of the body (on the assumption
that we can ignore the rest) then we will never complete the parsing of
the message. This means that we will never set the complete flag, and
our replay must actually tear down the connection and try again.
This is particularly problematic for stateful authentication mechanisms
(SPNEGO, NTLM) that require that we keep the connection alive.
Note that the prior code is only a problem when the 401 that we are
parsing is too large to be read in a single chunked read from the http
parser.
But now we will continue to invoke the http parser until we've got a
complete message in the authentication failed scenario. Note that we
need not do anything with the message, so when we get an authentication
failed, we'll stop adding data to our buffer, we'll simply loop in the
parser and let it advance its internal state.
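A hedged sketch of the "keep parsing, discard the data" loop: the context fields and `read_socket()` helper are illustrative, while `http_parser_execute` is the real http-parser entry point.
```
/* after a 401/407, drain the rest of the message so keep-alive survives */
while (!ctx->parse_complete) {
	char buf[4096];
	int len = read_socket(ctx, buf, sizeof(buf)); /* illustrative helper */

	if (len <= 0)
		return -1;

	/* the callbacks set ctx->parse_complete in on_message_complete; the
	 * parsed body is simply not buffered in the auth-failed case */
	http_parser_execute(&ctx->parser, &ctx->settings, buf, (size_t)len);
}
```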
|
|
e87f912b
|
2019-03-21T15:29:52
|
|
http: don't realloc the request
|
|
10e8fe55
|
2019-03-21T13:55:54
|
|
transports: add an `is_complete` function for auth
Some authentication mechanisms (like HTTP Basic and Digest) have a
one-step mechanism to create credentials, but there are more complex
mechanisms like NTLM and Negotiate that require challenge/response after
negotiation, requiring several round-trips. Add an `is_complete`
function to know when they have round-tripped enough to be a single
authentication and should now either have succeeded or failed to
authenticate.
|
|
9050c69c
|
2019-03-09T17:24:16
|
|
http: examine keepalive status at message end
We cannot examine the keep-alive status of the http parser in
`http_connect`; it's too late and the critical information about whether
keep-alive is supported has been destroyed.
Per the documentation for `http_should_keep_alive`:
> If http_should_keep_alive() in the on_headers_complete or
> on_message_complete callback returns 0, then this should be
> the last message on the connection.
Query then and set the state.
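A hedged sketch of querying keep-alive at the right time (the context struct is illustrative; `http_should_keep_alive()` and the callback signature come from http-parser itself):
```
#include "http_parser.h"

typedef struct {
	unsigned int connection_should_close : 1;
} parser_context; /* illustrative context type */

/* record keep-alive support while the parser state is still meaningful */
static int on_message_complete(http_parser *parser)
{
	parser_context *ctx = parser->data;

	if (http_should_keep_alive(parser) == 0)
		ctx->connection_should_close = 1;

	return 0;
}
```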
|
|
956ba48b
|
2019-03-14T10:36:40
|
|
http: increase the replay count
Increase the permissible replay count; with multiple-step authentication
schemes (NTLM, Negotiate), proxy authentication and redirects, we need
to be mindful of the number of steps it takes to get connected.
7 seems high but can be exhausted quickly with just a single authentication
failure over a redirected multi-state authentication pipeline.
|
|
7912db49
|
2019-03-14T10:35:03
|
|
ci: enable all proxy tests
|
|
ad5419b5
|
2019-03-14T10:32:09
|
|
ci: enable SKIP_OFFLINE_TESTS for windows
|
|
1ef77e37
|
2019-03-11T23:33:20
|
|
ci: test NTLM proxy authentication on Windows
Update our CI tests to start a proxy that requires NTLM authentication;
ensure that our Windows HTTP client can speak NTLM.
|
|
ee3d35cf
|
2019-04-07T19:15:21
|
|
http: support https for proxies
|
|
3d11b6c5
|
2019-03-11T20:36:09
|
|
winhttp: support default credentials for proxies
We did not properly support default credentials for proxies, only for
destination servers. Refactor the credential handling to support sending
either username/password _or_ default credentials to either the proxy or
the destination server.
This actually shares the authentication logic between proxy servers and
destination servers. Due to copy/pasta drift over time, they had
diverged. Now they share a common logic which is: first, use
credentials specified in the URL (if there were any), treating empty
username and password (ie, "http://:@foo.com/") as default credentials,
for compatibility with git. Next, call the credential callbacks.
Finally, fallback to WinHTTP compatibility layers using built-in
authentication like we always have.
Allowing default credentials for proxies requires moving the security
level downgrade into the credential setting routines themselves.
We will update our security level to "high" by default which means that
we will never send default credentials without prompting. (A lower
setting, like the WinHTTP default of "medium" would allow WinHTTP to
handle credentials for us, despite what a user may have requested with
their structures.) Now we start with "high" and downgrade to "low" only
after a user has explicitly requested default credentials.
|
|
757411a0
|
2019-03-11T12:56:09
|
|
network: don't add arbitrary url rules
There's no reason a git repository couldn't be at the root of a server,
and URLs should have an implicit path of '/' when one is not specified.
|
|
c6ab183e
|
2019-03-11T11:43:08
|
|
net: rename gitno_connection_data to git_net_url
"Connection data" is an imprecise and largely incorrect name; these
structures are actually parsed URLs. Provide a parser that takes a URL
string and produces a URL structure (if it is valid).
Separate the HTTP redirect handling logic from URL parsing, keeping a
`gitno_connection_data_handle_redirect` whose only job is redirect
handling logic and does not parse URLs itself.
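A hedged sketch of the new parser's shape, using the `git_net_url` names described here (exact fields, macros, and return conventions may differ slightly from the internal header):
```
#include "net.h" /* internal header providing git_net_url */

static int parse_example_url(void)
{
	git_net_url url = GIT_NET_URL_INIT;

	if (git_net_url_parse(&url, "https://example.com/foo.git") < 0)
		return -1;

	/* the parsed pieces are individual fields: url.scheme, url.host,
	 * url.port, url.path, url.username, url.password */
	git_net_url_dispose(&url);
	return 0;
}
```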
|
|
7ea8630e
|
2019-04-07T20:11:59
|
|
http: free auth context on failure
When we send HTTP credentials but the server rejects them, tear down the
authentication context so that we can start fresh. To maintain this
state, additionally move all of the authentication handling into
`on_auth_required`.
|
|
005b5bc2
|
2019-04-07T17:55:23
|
|
http: reconnect to proxy on connection close
When we're issuing a CONNECT to a proxy, we expect the connection to
the proxy to be kept alive. However, during authentication negotiations,
the proxy may close the connection. Reconnect if the server closes the
connection.
|
|
f4584a1e
|
2019-06-10T12:08:57
|
|
Merge pull request #5102 from libgit2/ethomson/callback_names
Callback type names should be suffixed with `_cb`
|
|
dd47a3ef
|
2019-06-10T11:49:17
|
|
Merge pull request #5099 from pks-t/pks/tests-fix-symlink-outside-sandbox
tests: checkout: fix symlink.git being created outside of sandbox
|
|
178df697
|
2019-06-08T17:16:19
|
|
trace: suffix the callbacks with `_cb`
The trace logging callbacks should match the other callback naming
conventions, using the `_cb` suffix instead of a `_callback` suffix.
|
|
810cefd9
|
2019-06-08T17:14:00
|
|
credentials: suffix the callbacks with `_cb`
The credential callbacks should match the other callback naming
conventions, using the `_cb` suffix instead of a `_callback` suffix.
|
|
438c9958
|
2019-06-10T10:52:01
|
|
Fix memleaks in analysis tests.
Wrap some missed setup API calls in asserts.
|
|
21ddeabe
|
2019-06-07T15:22:42
|
|
Review fixes:
- whitespace -> tabs
- comment style
- improve repo naming in merge/trees/analysis tests.
|
|
7b27b6cf
|
2019-06-06T16:32:09
|
|
Refactor testing:
- move duplication between merge/trees/ and merge/workdir/ into merge/analysis{.c,.h}
- remove merge-resolve.git resource, open the existing merge-resolve as a bare repo instead.
|
|
5427461f
|
2019-03-20T11:51:24
|
|
merge: add doc header to analysis tests
|
|
1d04f477
|
2019-03-19T23:43:56
|
|
merge: tests for bare repo merge analysis
Duplicate the workdir/analysis.c tests against a bare repo.
|
|
6d2ab2cf
|
2019-03-19T23:43:10
|
|
merge: analysis support for bare repositories
|
|
cb28df20
|
2019-06-07T14:29:47
|
|
tests: checkout: fix symlink.git being created outside of sandbox
The function `populate_symlink_workdir` creates a new
"symlink.git" repository with a relative path "../symlink.git".
As the current working directory is the sandbox, the new
repository will be created just outside of the sandbox.
Fix this by using `clar_sandbox_path`.
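A hedged sketch of the fix, assuming the `clar_sandbox_path()` helper and the internal `git_buf_joinpath()` the test suite already uses (the repository initialization itself is elided):
```
/* build the target path inside the clar sandbox instead of using "../symlink.git" */
git_buf path = GIT_BUF_INIT;

cl_git_pass(git_buf_joinpath(&path, clar_sandbox_path(), "symlink.git"));
/* ...initialize the repository at path.ptr as before... */
git_buf_dispose(&path);
```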
|
|
1f47efc4
|
2019-06-07T14:20:54
|
|
tests: object: consolidate cache tests
The object::cache test module has two tests that do nearly the
same thing: given a cache limit, load a certain set of objects
and verify whether those objects have been cached or not.
Convert those tests to the new data-driven initializers and add
some additional test data. This conversion is mainly done to
demonstrate the new facility.
|
|
394951ad
|
2019-06-07T14:11:29
|
|
tests: allow for simple data-driven tests
Right now, we're not able to use data-driven tests at all. E.g.
given a set of tests which we'd like to repeat with different
test data, one has to hand-write an outer loop that iterates
over the test data and then calls each of the test functions.
Besides being bothersome, this also has the downside that error
reporting is lacking, as one never knows which test data
actually produced a failure.
So let's implement the ability to re-run complete test modules
with changing test data. To retain backwards compatibility and
enable trivial addition of new runs with changed test data, we
simply extend clar's generate.py. Instead of scanning for a
single `_initialize` function, each test module may now implement
multiple `_initialize_foo` functions. The generated test harness
will then run all test functions for each of the provided
initializer functions, making it possible to provide different
test data inside each of the initializer functions. Example:
```
void test_git_example__initialize_with_nonbare_repo(void)
{
	g_repo = cl_git_sandbox_init("testrepo");
}

void test_git_example__initialize_with_bare_repo(void)
{
	g_repo = cl_git_sandbox_init("testrepo.git");
}

void test_git_example__cleanup(void)
{
	cl_git_sandbox_cleanup();
}

void test_git_example__test1(void)
{
	test1();
}

void test_git_example__test2(void)
{
	test2();
}
```
Executing this test module will cause the following output:
```
$ ./libgit2_clar -sgit::example
git::example (with nonbare repo)..
git::example (with bare repo)..
```
|