|
c690136c
|
2020-02-26T19:34:49
|
|
deps: ntlmclient: fix htonll on big endian FreeBSD
In commit 3828ea67b (deps: ntlmclient: fix missing htonll symbols on
FreeBSD and SunOS, 2020-02-21), we've fixed compilation on BSDs due to
missing `htonll` wrappers. While we now use `htobe64` for both Linux and
OpenBSD, we chose `bswap64` for FreeBSD. That is correct on little-endian
systems, where it converts from little- to big-endian, but the swap is
also performed on big-endian systems, so the result is not in network
byte order there.
Fix the issue by using `htobe64` on FreeBSD as well.
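For reference, a minimal sketch of the resulting per-platform mapping (not
the literal ntlmclient patch; header and macro names are the
platform-typical ones and may differ in detail):

    #if defined(__linux__)
    # include <endian.h>
    # define htonll(x) htobe64(x)
    #elif defined(__FreeBSD__) || defined(__OpenBSD__)
    # include <sys/endian.h>
    # define htonll(x) htobe64(x)   /* byte-order aware, unlike bswap64() */
    #elif defined(__sun)
    # include <sys/byteorder.h>     /* already declares htonll() */
    #endif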
|
|
a48da8fa
|
2020-02-25T22:49:16
|
|
Merge pull request #5417 from pks-t/pks/ntlmclient-htonll
deps: ntlmclient: fix missing htonll symbols on FreeBSD and SunOS
|
|
ebade233
|
2020-02-24T21:49:43
|
|
transports: auth_ntlm: fix use of strdup/strndup
In the NTLM authentication code, we accidentally use strdup(3P) and
strndup(3P) instead of our own wrappers git__strdup and git__strndup,
respectively.
Fix the issue by using our own functions.
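A tiny sketch of the substitution (the surrounding variables are
illustrative; GIT_ERROR_CHECK_ALLOC is assumed to be libgit2's usual
allocation-failure check):

    char *copy = git__strdup(username);
    GIT_ERROR_CHECK_ALLOC(copy);            /* bails out with -1 on allocation failure */

    char *prefix = git__strndup(header, header_len);
    GIT_ERROR_CHECK_ALLOC(prefix);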
|
|
3828ea67
|
2020-02-21T11:26:19
|
|
deps: ntlmclient: fix missing htonll symbols on FreeBSD and SunOS
The ntlmclient dependency defines htonll on Linux-based systems only.
As a result, non-Linux systems will run into compiler and/or linker
errors due to undefined symbols.
Fix this issue for FreeBSD, OpenBSD and SunOS/OpenSolaris by including
the proper headers and defining the symbol accordingly.
|
|
45ed17bd
|
2020-02-24T21:39:23
|
|
Merge pull request #5420 from petersalomonsen/wasm-git-links
README: add language binding link to wasm-git
|
|
d91c6eda
|
2020-02-23T18:26:47
|
|
README: add language binding link to wasm-git
|
|
705f4e85
|
2020-02-21T11:34:41
|
|
Merge pull request #5412 from kloczek/master
Fix #5410: fix installing libgit2.pc in wrong location
|
|
de1865fc
|
2020-02-21T11:10:05
|
|
Merge pull request #5413 from csware/nsectypo
Fix typo on GIT_USE_NEC
|
|
ff46c5d3
|
2020-02-20T20:47:22
|
|
Fix typo on GIT_USE_NEC
Signed-off-by: Sven Strickroth <email@cs-ware.de>
|
|
81370261
|
2020-02-19T15:57:39
|
|
Merge pull request #5374 from pks-t/pks/diff-with-empty-subtree
tests: diff: verify that we are able to diff with empty subtrees
|
|
ff3297df
|
2020-02-19T14:31:55
|
|
Merge pull request #5408 from pks-t/pks/ci-cleanups
README: update our build matrix to reflect current releases
|
|
dabd0130
|
2020-02-19T14:31:42
|
|
Merge pull request #5409 from libgit2/pks/coverity-fix-sudo
azure: docker: set up HOME variable to fix Coverity builds
|
|
fbda0575
|
2020-02-19T12:54:19
|
|
Fix #5410: fix installing libgit2.pc in wrong location
Stop using the custom PKG_BUILD_PREFIX, PKG_BUILD_LIBDIR and
PKG_BUILD_INCLUDEDIR variables.
Use CMake's CMAKE_INSTALL_PREFIX, LIB_INSTALL_DIR and INCLUDE_INSTALL_DIR
instead.
This patch installs the libgit2.pc file in the correct location and
simplifies the CMake module.
|
|
4f1923e8
|
2020-02-19T12:14:32
|
|
Merge pull request #5390 from pks-t/pks/sha1-lookup
sha1_lookup: inline its only function into "pack.c"
|
|
8aa04a37
|
2020-02-19T12:14:16
|
|
Merge pull request #5391 from pks-t/pks/coverity-fixes
Coverity fixes
|
|
6efe3d35
|
2020-02-19T11:34:55
|
|
azure: docker: set up HOME variable to fix Coverity builds
In commit 01a834066 (azure: docker: fix ARM builds by replacing gosu(1),
2020-02-18), we've switched our entrypoint from gosu(1) to use sudo(1)
instead to fix our ARM builds. The switch introduced an incompatibility
that now causes our Coverity builds to fail, as the "--preserve-env"
switch will also keep HOME at its current value. As a result, Coverity
now tries to set up its configuration directory in root's home
directory, which it naturally can't write to.
Fix the issue by adding the "--set-home" flag to sudo(1).
|
|
add54e6c
|
2020-02-19T11:31:01
|
|
README: update our build matrix to reflect current releases
As noted in docs/release.md, we only provide security updates for the
latest two releases. Let's thus drop the build status of both v0.27 and
v0.26 branches, adding the new v0.99 branch instead.
|
|
17223902
|
2020-02-19T11:27:00
|
|
Merge pull request #5291 from libgit2/ethomson/0_99
Release 0.99
|
|
b31cd05f
|
2020-02-19T11:25:31
|
|
Merge pull request #5372 from pks-t/pks/release-script
Release script
|
|
70062e28
|
2019-10-31T17:46:21
|
|
version: update the version number to v0.99
This commit also switches our SOVERSION to be "$MAJOR.$MINOR" instead of
just "$MINOR". This is in preparation for v1.0, where the previous scheme
would have stopped working in an obvious way.
|
|
a552c103
|
2019-10-31T17:45:16
|
|
docs: update changelog for v0.99
Give the release a name, "Torschlusspanik" (the fear that time is
running out to act). Indeed, the time is running out for changes to be
included in v1.0.
|
|
1256b462
|
2020-02-18T18:10:06
|
|
Merge pull request #5406 from libgit2/pks/azure-fix-arm32
azure: fix ARM32 builds by replacing gosu(1)
|
|
5254c9bb
|
2020-02-18T18:49:41
|
|
Merge pull request #5398 from libgit2/pks/valgrind-openssl
openssl: fix Valgrind issues in nightly builds
|
|
e8660708
|
2020-02-18T18:42:12
|
|
Merge pull request #5400 from lhchavez/fix-packfile-fuzzer
fuzzers: Fix the documentation
|
|
eaa70c6c
|
2020-02-18T18:09:11
|
|
tests: object: decrease number of concurrent cache accesses
In our test case object::cache::fast_thread_rush, we're creating 100
concurrent threads opening a repository and reading objects from it.
This test actually fails on ARM32 with an out-of-memory error, which
isn't entirely unexpected.
Work around the issue by halving the number of threads.
|
|
01a83406
|
2020-02-18T15:20:43
|
|
azure: docker: fix ARM builds by replacing gosu(1)
Our nightly builds are currently failing due to our ARM-based jobs.
These jobs crash immediately when entering the Docker container with an
exception thrown by Go's language runtime. As we're able to successfully
build the Docker images in previous steps, it's unlikely to be a bug in
Docker itself. Instead, the exception is thrown by gosu(1), a Go-based
utility to drop privileges that is run by our entrypoint.
Fix the issue by dropping gosu(1) in favor of sudo(1).
|
|
76b49caf
|
2020-02-18T15:20:08
|
|
azure: docker: synchronize Xenial/Bionic build instructions
Our two Docker build instructions for Xenial and Bionic have diverged a
bit. Let's re-synchronize them with each other to make them as similar
as possible.
|
|
f9985688
|
2020-02-18T15:17:45
|
|
azure: docker: detect errors when building images
The build step for our Docker images currently succeeds even if building
the Docker image fails due to missing && chains in the build script. Fix
this by adding them in.
|
|
68bfacb1
|
2020-02-18T15:17:17
|
|
azure: remove unused Linux setup script
Since migrating to Docker containers for our build and test
infrastructure, we do not use the "setup-linux.sh" script anymore.
Remove it to avoid any confusion.
|
|
795a5b2c
|
2020-02-15T04:20:17
|
|
fuzzers: Fix the documentation
Some of the commands are now out of date.
|
|
0119e57d
|
2020-02-11T10:37:32
|
|
streams: openssl: switch approach to silence Valgrind errors
As OpenSSL loves using uninitialized bytes as another source of entropy,
we need to mark them as defined so that Valgrind won't complain about
use of these bytes. Traditionally, we've been using the macro
`VALGRIND_MAKE_MEM_DEFINED` provided by Valgrind, but starting with
OpenSSL 1.1 the code doesn't compile anymore due to `struct SSL` having
become opaque. As such, we also can't set it as defined anymore, as we
have no way of knowing its size.
Let's change gears instead by just swapping out the allocator functions
of OpenSSL with our own ones. The twist is that instead of calling
`malloc`, we just call `calloc` to have the bytes initialized
automatically. Besides soothing Valgrind, this approach has the benefit
of being completely agnostic of the memory sanitizer and is neatly
contained in a single place.
Note that we should only do this for Valgrind builds: as we cannot set up
memory functions for a given SSL context only, we need to swap them
globally. Furthermore, as `OPENSSL_set_mem_functions` can only be called
once, we would prevent users of libgit2 from setting up their own
allocators.
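A minimal sketch of the calloc-based allocator swap, assuming OpenSSL
1.1's CRYPTO_set_mem_functions() hook; function names are illustrative:

    #include <stdlib.h>
    #include <openssl/crypto.h>

    /* calloc() instead of malloc() so every allocated byte starts out defined */
    static void *git_openssl__malloc(size_t len, const char *file, int line)
    {
        (void)file; (void)line;
        return calloc(1, len);
    }

    static void *git_openssl__realloc(void *ptr, size_t len, const char *file, int line)
    {
        (void)file; (void)line;
        return realloc(ptr, len);   /* note: a grown region is not re-zeroed */
    }

    static void git_openssl__free(void *ptr, const char *file, int line)
    {
        (void)file; (void)line;
        free(ptr);
    }

    /* must run before any other OpenSSL call; the hook can only be set once */
    int git_openssl_set_allocators(void)
    {
        return CRYPTO_set_mem_functions(git_openssl__malloc,
                                        git_openssl__realloc,
                                        git_openssl__free) ? 0 : -1;
    }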
|
|
877054f3
|
2020-02-10T12:35:13
|
|
cmake: consolidate Valgrind option
OpenSSL doesn't initialize bytes on purpose in order to generate
additional entropy. Valgrind isn't too happy about that though, causing
it to generate warnings about various issues regarding the use of
uninitialized bytes.
We traditionally had some infrastructure to silence these errors in our
OpenSSL stream implementation, where we invoke the Valgrind macro
`VALGRIND_MAKE_MEM_DEFINED` in various callbacks that we provide to
OpenSSL. Naturally, we only include these instructions if a preprocessor
define "VALGRIND" is set, and that in turn is only set if passing
"-DVALGRIND" to CMake. We do that in our usual Azure pipelines, but we
in fact forgot to do this in our nightly build. As a result, we get a
slew of warnings for these nightly builds, but not for our normal
builds.
To fix this, we could just add "-DVALGRIND" to our nightly builds. But
starting with commit d827b11b6 (tests: execute leak checker via CTest
directly, 2019-06-28), we do have a secondary variable that directs
whether we want to use memory sanitizers for our builds. As such, every
user wishing to use Valgrind for our tests needs to pass both options
"VALGRIND" and "USE_LEAK_CHECKER", which is cumbersome and error prone,
as can be seen by our own builds.
Instead, let's consolidate this into a single option, removing the old
"-DVALGRIND" one: we now add the preprocessor define if USE_LEAK_CHECKER
equals "valgrind" and drop "-DVALGRIND" from our own pipelines.
|
|
ee3307a1
|
2020-02-08T17:34:53
|
|
Merge pull request #5392 from pks-t/pks/ci-warnings
azure: fix misleading messages being printed to stderr
|
|
6a61a418
|
2020-02-08T17:32:51
|
|
Merge pull request #5393 from pks-t/pks/tests-iterator-missing-ref
tests: iterator: fix iterator expecting too few items
|
|
2ae45bc3
|
2020-01-30T11:40:13
|
|
scripts: add script to create releases
The current release process is not documented in any way. As a result,
it's not obvious how releases should be done at all, e.g. which
locations need adjusting.
To fix this, let's introduce a new script that shall from now on be used
to do all releases. As input it gets the tree that shall be released,
the repository in which to do the release, credentials to
authenticate against GitHub and the new version. E.g. executing the
following will create a new release v0.32:
$ ./script/release.py 0.32.0 --user pks-t --password ****
While the password may currently be your usual GitHub password, it's
recommended to use a personal access token instead.
The script will then perform the following steps:
1. Verify that "include/git2/version.h" matches the new version.
2. Verify that "docs/changelog.md" has a section for that new
version.
3. Extract the changelog entries for the current release from
"docs/changelog.md".
4. Generate two archives in "tar.gz" and "zip" format via "git
archive" from the tree passed by the user. If no tree was passed,
we will use "HEAD".
5. Create the GitHub release using the extracted changelog entries
as well as tag and name information derived from the version
passed by the user.
6. Upload both code archives to that release.
This should cover all steps required for a new release and thus ensures
that nothing gets forgotten.
|
|
1e556eeb
|
2020-01-30T11:37:49
|
|
editorconfig: special-case Python scripts
Python's PEP 8 specifies that one shall use spaces instead of tabs as
coding style, and we actually honor that currently. Our EditorConfig
does not special-case Python scripts, though, which is why we end up
with our C coding style and thus with tabs.
Special-case "*.py" files to override that default with spaces to fix
this.
|
|
17670ef2
|
2020-02-04T10:58:51
|
|
tests: diff: add test to verify behaviour with empty dir ordering
It was reported that, given a file "abc.txt", a diff will be shown if an
empty directory "abb/" is created, but not if "abd/" is created. Add a
test to verify that we do the right thing here and do not depend on any
ordering.
|
|
b0691db3
|
2020-01-31T09:39:12
|
|
tests: diff: verify that we are able to diff with empty subtrees
While it is not allowed for a tree to have an empty tree as child (e.g.
an empty directory), libgit2's tree builder makes it easy to create such
trees. As a result, some applications may inadvertently end up with such
an invalid tree, and we should try our best to handle them.
One such case is when diffing two trees, where one of the two trees has
such an empty subtree. It was reported that this will cause our diff
code to fail. While I wasn't able to reproduce this error, let's still
add a test that verifies we continue to handle them correctly.
|
|
26b71d60
|
2020-02-07T14:36:10
|
|
tests: iterator: fix iterator expecting too few items
The testcase iterator::workdir::filesystem_gunk sets up quite a lot of
directories, which is why it only runs in case GITTEST_INVASIVE_SPEED is
set in the environment. Because we do not run our default CI with this
variable, we didn't notice commit 852c83ee4 (refs: refuse to delete
HEAD, 2020-01-15) breaking the test as it introduced a new reference to
the "testrepo" repository.
Fix the oversight by increasing the number of expected iterator items.
|
|
49bb4237
|
2020-02-07T14:04:07
|
|
azure: test: silence termination message when killing git-daemon(1)
In order to properly tear down the test environment, we will kill
git-daemon(1) if we've exercised it. As git-daemon(1) is spawned as a
background process, it is still owned by the shell and thus killing it
later on will print a termination message to the shell's stderr, causing
Azure to report it as an error.
Fix this by disowning the background process.
|
|
fb03f02a
|
2020-02-07T13:56:36
|
|
azure: docker: avoid re-creating libgit2 home directory
The Docker entrypoint currently creates the libgit2 user with "useradd
--create-home". As we start the Docker container with two volumes
pointing into "/home/libgit2/", the home directory will already exist.
While useradd(1) copes with this just fine, it will print error messages
to stderr which end up as failures in our Azure pipelines.
Fix this by simply removing the "--create-home" parameter.
|
|
52cb4040
|
2020-02-07T13:52:07
|
|
azure: test: silence curl to not cause Azure to trip up
Without the "--silent" parameter, curl will print a progress meter to
stderr. Azure has the nice feature of interpreting any output to stderr
as errors with a big red warning towards the end of the build. Let's
thus silence curl to not generate any misleading messages.
|
|
a3ec07d7
|
2020-02-07T13:34:18
|
|
azure: docker: pipe downloaded archives into tar(1) directly
When building dependencies for our Docker images, we first download the
sources to disk, unpack them and finally remove the archive again. This
can be sped up by piping the downloaded archive into tar(1) directly,
parallelizing both tasks. Furthermore, let's silence curl(1) so that it
does not print status information to stderr, which tends to be
interpreted as errors by Azure Pipelines.
|
|
b3b92e09
|
2020-02-07T12:56:26
|
|
streams: openssl: ignore return value of `git_mutex_lock`
OpenSSL pre-v1.1 required us to set up a locking function to properly
support multithreading. The locking function signature cannot return any
error codes, and as a result we can't do anything if `git_mutex_lock`
fails. To silence static analysis tools, let's just explicitly ignore
its return value by casting it to `void`.
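For context, a sketch of the pre-1.1 locking callback shape, assuming the
usual CRYPTO_set_locking_callback() signature (the lock array and names
are illustrative):

    static void openssl_locking_function(int mode, int n, const char *file, int line)
    {
        (void)file; (void)line;

        if (mode & CRYPTO_LOCK)
            (void)git_mutex_lock(&openssl_locks[n]);   /* cannot report failure here */
        else
            git_mutex_unlock(&openssl_locks[n]);
    }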
|
|
7d1b1774
|
2020-02-07T12:50:39
|
|
cache: fix invalid memory access in case updating cache entry fails
When adding a new entry to our cache and an entry with the same OID
already exists, we only update the existing entry if it is unparsed and
the new entry is parsed. Currently, we do not check the return value of
`git_oidmap_set` when updating the existing entry. As a result, if
`git_oidmap_set` fails we will _not_ have updated the existing entry,
but will still have decremented its refcount and incremented the new
entry's refcount. Later on, this is likely to lead to dereferencing
invalid memory.
Fix the issue by checking the return value of `git_oidmap_set`. In case
it fails, we simply keep the existing entry stored, even though
it's unparsed.
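Roughly, the corrected update path looks like this (a hedged sketch using
libgit2-internal names; `existing` and `parsed` are illustrative
git_cached_obj pointers):

    /* only swap refcounts once the map update is known to have succeeded */
    if (git_oidmap_set(map, &parsed->oid, parsed) == 0) {
        git_cached_obj_decref(existing);
        git_cached_obj_incref(parsed);
    }
    /* on failure, the existing (unparsed) entry simply stays in the cache */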
|
|
775af015
|
2020-02-07T12:31:58
|
|
worktree: report errors when unable to read locking reason
Git worktrees have the ability to be locked in order to spare them from
deletion, e.g. if a worktree is absent due to being located on a
removable disk it is a good idea to lock it. When locking such
worktrees, it is possible to give a locking reason in order to help the
user later on when inspecting the status of any such locked trees.
The function `git_worktree_is_locked` serves to read out the locking
status. It currently does not properly report any errors when reading
the reason file, and callers do not expect any negative return values
either. Fix this by converting callers to expect error codes and
checking the return code of `git_futils_readbuffer`.
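After this change, callers treat the return value like any other libgit2
error code; a usage sketch:

    #include <stdio.h>
    #include <git2.h>

    /* print the lock reason, if any; returns < 0 on error */
    static int show_lock_status(git_worktree *worktree)
    {
        git_buf reason = GIT_BUF_INIT;
        int locked = git_worktree_is_locked(&reason, worktree);

        if (locked < 0)
            return locked;                     /* e.g. unreadable reason file */

        if (locked > 0)
            printf("worktree is locked: %s\n", reason.ptr);

        git_buf_dispose(&reason);
        return 0;
    }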
|
|
2288a713
|
2020-02-07T12:15:34
|
|
repository: check error codes when reading common link
When checking whether a path is a valid repository path, we try to read
the "commondir" link file. In the process, we neither confirm that
constructing the file's path succeeded nor do we verify that reading the
file succeeded, which might cause us to verify repositories on an empty
or bogus path later on.
Fix this by checking return values. As the function to verify repos
doesn't currently support returning errors, this commit also refactors
the function to return an error code, passing validity of the repo via
an out parameter instead, and adjusts all existing callers.
|
|
b169cd52
|
2020-02-07T12:13:42
|
|
pack-objects: check return code of `git_zstream_set_input`
While `git_zstream_set_input` cannot fail right now, it might change in
the future if we ever decide to have it check its parameters more
vigorously. Let's thus check whether its return code signals an error.
|
|
90450d88
|
2020-02-07T12:10:12
|
|
indexer: check return code of `git_hash_ctx_init`
Initialization of the hashing context may fail on some systems, most
notably on Win32 via the legacy hashing context. As such, we need to
always check the error code of `git_hash_ctx_init`, which is not done
when creating a new indexer.
Fix the issue by adding checks.
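The added check follows the usual pattern (a sketch; git_hash_ctx_init is
a libgit2-internal API):

    git_hash_ctx ctx;

    if (git_hash_ctx_init(&ctx) < 0)
        return -1;          /* e.g. the Win32 legacy provider failed to initialize */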
|
|
6eebfc06
|
2020-02-07T11:57:48
|
|
push: check error code returned by `git_revwalk_hide`
When queueing objects we want to push, we call `git_revwalk_hide` to
hide all objects already known to the remote from our revwalk. We do not
check its return value though; the original intent was to ignore
the case where the pushed OID is not a known committish. As
`git_revwalk_hide` can fail due to other reasons like out-of-memory
exceptions, we should still check its return value.
Fix the issue by checking the function's return value, ignoring
errors hinting that it's not a committish. As `git_revwalk__push_commit`
currently clobbers these error codes, we need to adjust it as well in
order to make it available downstream.
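The resulting call-site pattern looks roughly like this (the exact set of
ignorable error codes is an assumption for illustration):

    error = git_revwalk_hide(walk, &remote_oid);

    if (error < 0 && error != GIT_ENOTFOUND && error != GIT_EINVALIDSPEC)
        return error;     /* real failures (e.g. out-of-memory) still propagate */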
|
|
31a577d0
|
2020-02-07T11:55:23
|
|
notes: check error code returned by `git_iterator_advance`
When calling `git_note_next`, we end up calling `git_iterator_advance`
but ignore its error code. The intent is that we do not want to return
an error if it returns `GIT_ITEROVER`, as we want to return that value
on the next invocation of `git_note_next`. We should still check for any
other error codes returned by `git_iterator_advance` to catch unexpected
internal errors.
Fix this by checking the function's return value, ignoring
`GIT_ITEROVER`.
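Sketched against the internal iterator API (variable names assumed), the
check becomes:

    error = git_iterator_advance(&item, iter);

    if (error < 0 && error != GIT_ITEROVER)
        return error;         /* unexpected internal errors now propagate */

    return 0;                 /* a pending GIT_ITEROVER is reported by the next call */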
|
|
2e6cbff8
|
2020-02-07T11:53:51
|
|
tests: add missing error checks
We should always verify error codes returned by function calls in our
test suite to not accidentally miss any weird results. Coverity reported
missing checks in several locations, which this commit fixes.
|
|
7d65d4cb
|
2020-02-07T11:39:24
|
|
tests: blame: fix conversion specifiers in format string
While the blame helper function `hunk_message` accepts a printf-style
format string, we didn't add a compiler attribute to let the compiler
check for correct conversion specifiers. As a result, some users of the
function used wrong specifiers.
Add the GIT_FORMAT_PRINTF attribute to the function and fix resulting
warnings by using the correct specifiers.
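For illustration, the attribute is attached like this (a hypothetical
declaration, not the helper's real signature):

    GIT_FORMAT_PRINTF(1, 2)
    static char *hunk_message(const char *fmt, ...);

    /* with the attribute in place, passing e.g. a size_t to "%d" now warns;
     * libgit2's "%"PRIuZ is the matching specifier */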
|
|
03ac24b1
|
2020-02-07T11:36:36
|
|
Merge pull request #5387 from pks-t/pks/transport-http-custom-headers
transports: http: fix custom headers not being applied
|
|
65ac33ae
|
2020-02-07T11:18:24
|
|
Merge pull request #5382 from libgit2/pks/azure-coverity
azure: fix Coverity pipeline
|
|
46228d86
|
2020-02-06T11:10:27
|
|
transports: http: fix custom headers not being applied
In commit b9c5b15a7 (http: use the new httpclient, 2019-12-22), the HTTP
code got refactored to extract a generic HTTP client that operates
independently of the Git protocol. Part of the refactoring was the creation
of a new `git_http_request` struct that encapsulates the generation of
requests. Our Git-specific HTTP transport was converted to use that in
`generate_request`, but during the process we forgot to set up custom
headers for the `git_http_request` and as a result we do not send out
these headers anymore.
Fix the issue by correctly setting up the request's custom headers and
add a test to verify we correctly send them.
|
|
f0f1cd1d
|
2020-02-07T10:51:17
|
|
sha1_lookup: inline its only function into "pack.c"
The file "sha1_lookup.c" contains a single function `sha1_position`
only which is used only in the packfile implementation. As the function
is comparatively small, to enable the compiler to optimize better and to
remove symbol visibility, move it into "pack.c".
|
|
86c54cc8
|
2020-02-04T11:51:56
|
|
azure: coverity: fix Coverity builds due to various issues
There are several issues with our Coverity builds, e.g. missing wget
in our containers. Simplify our Coverity pipeline and fix these issues.
|
|
ccffea6b
|
2020-02-04T11:47:16
|
|
azure: coverity: convert to use self-built containers
Back in commit 5a6740e7f (azure: build Docker images as part of the
pipeline, 2019-08-02), we have converted our pipelines to use self-built
Docker images to ease making changes to our Dockerfiles. The commit
didn't adjust our Coverity pipeline, though, so let's do this now.
|
|
b4eb0282
|
2020-02-04T11:46:05
|
|
azure: coverity: fix invalid syntax for Docker image
In commit bbc0b20bd (azure: fix Coverity's build due to wrong container
name, 2019-08-02), Coverity builds were fixed to use the correct
container names. Unfortunately, the "fix" completely broke our Coverity
builds due to using wrong syntax for the Docker task. Let's fix this by
using "imageName" instead of the Docker dict.
|
|
bd6b1c41
|
2020-02-06T06:14:20
|
|
Merge pull request #5381 from pks-t/pks/tests-flaky-proxy
azure: tests: re-run flaky proxy tests
|
|
c51bd2f2
|
2020-02-04T12:15:56
|
|
azure: tests: reset FAILED status if flaky re-run succeeds
While we already do have logic to re-run flaky tests, the FAILED
variable currently does not get reset to "0". As a result, successful
reruns will still cause the test to be registered as failed.
Fix this by resetting the variable accordingly.
|
|
b33ad764
|
2020-02-04T11:26:57
|
|
azure: tests: re-run flaky proxy tests
The proxy tests regularly fail in our CI environment. Unfortunately,
this is expected due to the network layer. Thus, let's re-try the proxy
tests up to five times in case they fail.
|
|
55975171
|
2020-02-01T09:03:00
|
|
Merge pull request #5373 from pks-t/pks/fetchhead-strip-creds
fetchhead: strip credentials from remote URL
|
|
93a9044f
|
2020-01-31T08:49:34
|
|
fetchhead: strip credentials from remote URL
If fetching from an anonymous remote via its URL, then the URL gets
written into the FETCH_HEAD reference. This is mainly done to give
valuable context to some commands, like for example git-merge(1), which
will put the URL into the generated MERGE_MSG. As a result, what gets
written into FETCH_HEAD may become public in some cases. This is
especially important considering that URLs may contain credentials, e.g.
when cloning 'https://foo:bar@example.com/repo' we persist the complete
URL into FETCH_HEAD and put it without any kind of sanitization into the
MERGE_MSG. This is obviously bad, as your login data has now just leaked
as soon as you do git-push(1).
When writing the URL into FETCH_HEAD, upstream git does strip
credentials first. Let's do the same by trying to parse the remote URL
as a "real" URL, removing any credentials and then re-formatting the
URL. In case this fails, e.g. when it's a file path or not a valid URL,
we just fall back to using the URL as-is without any sanitization. Add
tests to verify our behaviour.
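A sketch of the sanitization step, assuming libgit2's internal
git_net_url helpers; the wrapping function is illustrative:

    static int sanitized_remote_url(git_buf *out, const char *url)
    {
        git_net_url parsed = GIT_NET_URL_INIT;
        int error;

        if (git_net_url_parse(&parsed, url) < 0)
            return git_buf_puts(out, url);   /* file path or bogus URL: keep as-is */

        git__free(parsed.username);
        git__free(parsed.password);
        parsed.username = parsed.password = NULL;

        error = git_net_url_fmt(out, &parsed);
        git_net_url_dispose(&parsed);
        return error;
    }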
|
|
a1bff63b
|
2020-01-31T13:44:47
|
|
Merge pull request #5375 from pks-t/pks/test-ci
azure-pipelines: properly expand negotiate passwords
|
|
7aa99dd3
|
2020-01-31T11:41:43
|
|
azure-pipelines: properly expand negotiate passwords
To allow testing against a Kerberos instance, we have added variables
for the Kerberos password to allow authentication against LIBGIT2.ORG in
commit e5fb5fe5a (ci: perform SPNEGO tests, 2019-10-20). To set up the
password, we assign
"GITTEST_NEGOTIATE_PASSWORD=$(GITTEST_NEGOTIATE_PASSWORD)"
in the environmentVariables section which is then passed through to a
template. As the template does build-time expansion of the environment
variables, it will expand the above line verbatim, and due to the
envVar section not doing any further expansion the password variable
will end up with the value "$(GITTEST_NEGOTIATE_PASSWORD)" in the
container's environment.
Fix this by doing the expansion of GITTEST_NEGOTIATE_PASSWORD at
build-time as well.
|
|
aa4cd778
|
2020-01-30T10:40:44
|
|
Merge pull request #5336 from libgit2/ethomson/credtype
cred: change enum to git_credential_t and GIT_CREDENTIAL_*
|
|
f9b41a66
|
2020-01-30T10:30:12
|
|
Merge pull request #5371 from ayush-1506/julia_link
Update link to libgit2 Julia language binding
|
|
103a76b4
|
2020-01-30T17:58:43
|
|
Update link to Julia libgit2
|
|
3f54ba8b
|
2020-01-18T13:51:40
|
|
credential: change git_cred to git_credential
We avoid abbreviations where possible; rename git_cred to
git_credential.
In addition, we have standardized on a trailing `_t` for enum types,
instead of using "type" in the name. So `git_credtype_t` has become
`git_credential_t` and its members have become `GIT_CREDENTIAL` instead
of `GIT_CREDTYPE`.
Finally, the source and header files have been renamed to `credential`
instead of `cred`.
Keep the previous names and values as deprecated, and include the new header
files from the previous ones.
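The compatibility shims then look roughly like this (a sketch of the
deprecation layer, not the complete list):

    typedef git_credential git_cred;
    typedef git_credential_t git_credtype_t;

    #define GIT_CREDTYPE_USERPASS_PLAINTEXT GIT_CREDENTIAL_USERPASS_PLAINTEXT
    #define GIT_CREDTYPE_SSH_KEY            GIT_CREDENTIAL_SSH_KEY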
|
|
4383ab40
|
2020-01-24T15:32:09
|
|
Merge pull request #5365 from libgit2/ethomson/no_void
Return int from non-free functions
|
|
4cae9e71
|
2020-01-18T18:02:08
|
|
git_libgit2_version: return an int
Stop returning a void for functions, future-proofing them to allow them
to fail.
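Callers can now check the result; a usage sketch:

    #include <stdio.h>
    #include <git2.h>

    static int print_version(void)
    {
        int major, minor, rev;

        if (git_libgit2_version(&major, &minor, &rev) < 0)
            return -1;

        printf("using libgit2 %d.%d.%d\n", major, minor, rev);
        return 0;
    }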
|
|
f78f6bd5
|
2020-01-18T18:00:39
|
|
error functions: return an int
Stop returning a void for functions, future-proofing them to allow them
to fail.
|
|
4b331f02
|
2020-01-18T17:56:05
|
|
revwalk functions: return an int
Stop returning a void for functions, future-proofing them to allow them
to fail.
|
|
82050fa1
|
2020-01-18T17:53:26
|
|
mempack functions: return an int
Stop returning a void for functions, future-proofing them to allow them
to fail.
|
|
a3126a72
|
2020-01-18T17:50:38
|
|
repository functions: return an int
Stop returning a void for functions, future-proofing them to allow them
to fail.
|
|
cb43274a
|
2020-01-18T17:42:52
|
|
index functions: return an int
Stop returning a void for functions, future-proofing them to allow them
to fail.
|
|
82154e58
|
2020-01-18T17:41:21
|
|
remote functions: return an int
Stop returning a void for functions, future-proofing them to allow them
to fail.
|
|
3351506a
|
2020-01-18T17:38:36
|
|
tree functions: return an int
Stop returning a void for functions, future-proofing them to allow them
to fail.
|
|
2e8c3b0b
|
2020-01-18T17:17:46
|
|
oid functions: return an int
Stop returning a void for functions, future-proofing them to allow them
to fail.
|
|
9893d376
|
2020-01-18T15:41:20
|
|
git_attr_cache_flush: return an int
Stop returning a void for functions, future-proofing them to allow them
to fail.
|
|
4460bf40
|
2020-01-24T11:08:44
|
|
Merge pull request #5286 from libgit2/ethomson/gssapi
HTTP: Support Apache-based servers with Negotiate
|
|
e9cef7c4
|
2020-01-11T23:53:45
|
|
http: introduce GIT_ERROR_HTTP
Disambiguate between general network problems and HTTP problems in error
codes.
|
|
7fd9b3f5
|
2020-01-01T20:48:15
|
|
ci: add NTLM tests
Download poxygit, a debugging git server, and clone from it using NTLM,
both IIS-style (with connection affinity) and Apache-style ("broken",
requiring constant reauthentication).
|
|
29762e40
|
2020-01-01T16:14:37
|
|
httpclient: use defines for status codes
|
|
3e9ee04f
|
2019-12-29T18:46:44
|
|
trace: compare against an int value
When tracing is disabled, don't let `git_trace__level` return a void,
since that can't be compared against.
|
|
76fd406a
|
2019-12-26T16:37:01
|
|
http: send probe packets
When we're authenticating with a connection-based authentication scheme
(NTLM, Negotiate), we need to make sure that we're still connected
between the initial GET where we did the authentication and the POST
that we're about to send. Our keep-alive session may not have kept
alive, but more likely, some servers do not authenticate the entire
keep-alive connection and may have "forgotten" that we were
authenticated, namely Apache and nginx.
Send a "probe" packet, that is an HTTP POST request to the upload-pack
or receive-pack endpoint, that consists of an empty git pkt ("0000").
If we're authenticated, we'll get a 200 back. If we're not, we'll get a
401 back, and then we'll resend that probe packet with the first step of
our authentication (asking to start authentication with the given
scheme). We expect _yet another_ 401 back, with the authentication
challenge.
Finally, we will send our authentication response with the actual POST
data. This will allow us to authenticate without draining the POST data
in the initial request that gets us a 401.
|
|
b9c5b15a
|
2019-12-22T14:12:24
|
|
http: use the new httpclient
Untangle the notion of the http transport from the actual http
implementation. The http transport now uses the httpclient.
|
|
bf55facf
|
2019-10-25T12:24:34
|
|
tests: allow users to use expect/continue
|
|
7372573b
|
2019-10-25T12:22:10
|
|
httpclient: support expect/continue
Allow users to opt-in to expect/continue handling when sending a POST
and we're authenticated with a "connection-based" authentication
mechanism like NTLM or Negotiate.
If the response is a 100, return to the caller (to allow them to post
their body). If the response is *not* a 100, buffer the response for
the caller.
HTTP expect/continue is generally safe, but some legacy servers
have not implemented it correctly. Require it to be opt-in.
|
|
6c21c989
|
2019-12-14T21:32:07
|
|
httpclient: support CONNECT proxies
Fully support HTTP proxies, in particular CONNECT proxies, that allow us
to speak TLS through a proxy.
|
|
6b208836
|
2019-12-18T21:55:28
|
|
httpclient: handle chunked responses
Detect responses that are sent with Transfer-Encoding: chunked, and
record that information so that we can consume the entire message body.
|
|
6a095679
|
2019-12-14T10:34:36
|
|
httpclient: support authentication
Store the last-seen credential challenges (eg, all the
'WWW-Authenticate' headers in a response message). Given some
credentials, find the best (first) challenge whose mechanism supports
these credentials. (eg, 'Basic' supports username/password credentials,
'Negotiate' supports default credentials).
Set up an authentication context for this mechanism and these
credentials. Continue exchanging challenge/responses until we're
authenticated.
|
|
0e39a8fa
|
2019-12-29T10:05:14
|
|
net: free the url's query component
|
|
0b8358c8
|
2019-12-14T11:04:58
|
|
net: introduce path formatting function
Introduce a function to format the path and query string for a URL,
suitable for creating an HTTP request.
|
|
1152f361
|
2019-12-13T18:37:19
|
|
httpclient: consume final chunk message
When sending a new request, ensure that we got the entirety of the
response body. Our caller may have decided that they were done reading.
If we were not at the end of the message, this means that we need to
tear down the connection and cannot do keep-alive.
However, if the caller read all of the message, but we still have a
final end-of-response chunk signifier (ie, "0\r\n\r\n") on the socket,
then we should consider the response successfully completed.
If we're asked to send a new request, try to read from the socket, just
to clear out that end-of-chunk message, marking ourselves as
disconnected on any errors.
|
|
dcd3b815
|
2019-12-13T15:28:57
|
|
tests: support CLAR_TRACE_LEVEL
The CLAR_TRACE_LEVEL environment variable was supported when building
with GIT_TRACE. Now we always build with GIT_TRACE, but that variable
is not provided to tests. Simply support clar tracing always.
|
|
84b99a95
|
2019-12-12T13:53:43
|
|
httpclient: add chunk support to POST
Teach httpclient how to support chunking when POSTing request bodies.
|
|
eacecebd
|
2019-12-12T13:25:32
|
|
httpclient: introduce a simple http implementation
Introduce a new http client implementation that can GET and POST to
remote URLs.
Consumers can use `git_http_client_init` to create a new client,
`git_http_client_send_request` to send a request to the remote server
and `git_http_client_read_response` to read the response.
The http client implementation will perform the I/O with the remote
server (http or https) but does not understand the git smart transfer
protocol. This allows us to split the concerns of the http subtransport
from the actual http implementation.
|