|
894ccf4b
|
2018-02-20T16:14:54
|
|
Merge pull request #4535 from libgit2/ethomson/checkout_typechange_with_index_and_wd
checkout: when examining index (instead of workdir), also examine mode
|
|
ce7080a0
|
2018-02-20T10:38:27
|
|
diff_tform: fix rename detection with rewrite/delete pair
A rewritten file can either be classified as a modification of its
contents or as a deletion of the complete file followed by an addition
of the new content. This distinction becomes important when we want to
detect renames for rewrites. Given a scenario where a file "a" has been
deleted and another file "b" has been renamed to "a", this should be
detected as a deletion of "a" followed by a rename of "b" -> "a". Thus,
splitting the original rewrite into a delete/add pair is important
here.
This splitting is represented by a flag we can set on the current delta.
While the flag is already being set in the case where we want to break
rewrites, we do not do so in the case where the
`GIT_DIFF_FIND_RENAMES_FROM_REWRITES` flag is set. This can trigger an
assert when we try to match the source and target deltas.
Fix the issue by setting the `GIT_DIFF_FLAG__TO_SPLIT` flag on the delta
when it is a rename target and `GIT_DIFF_FIND_RENAMES_FROM_REWRITES` is
set.
|
|
d7fea1e1
|
2018-02-18T16:10:33
|
|
checkout: take mode into account when comparing index to baseline
When checking out a file, we determine whether the baseline (what we
expect to be in the working directory) actually matches the contents
of the working directory. This is safe behavior to prevent us from
overwriting changes in the working directory.
We look at the index to optimize this test: if we know that the index
matches the working directory, then we can simply look at the index
data compared to the baseline.
We have historically compared the baseline to the index entry by oid.
However, we must also compare the mode of the two items to ensure that
they are identical. Otherwise, we will refuse to update the working
directory for a mode change.
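As an illustration, a minimal sketch (not the actual checkout code) of the comparison now taking both fields into account; `baseline` and `idx_entry` are hypothetical names:

    #include <git2.h>

    static int baseline_matches_index(const git_index_entry *baseline,
                                      const git_index_entry *idx_entry)
    {
        /* a match requires both the same blob (oid) and the same mode bits */
        return git_oid_equal(&baseline->id, &idx_entry->id) &&
               baseline->mode == idx_entry->mode;
    }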
|
|
f1ad004c
|
2018-02-18T22:29:48
|
|
Merge pull request #4529 from libgit2/ethomson/index_add_requires_files
git_index_add_frombuffer: only accept files/links
|
|
5f774dbf
|
2018-02-11T10:14:13
|
|
git_index_add_frombuffer: only accept files/links
Ensure that the buffer given to `git_index_add_frombuffer` represents a
regular blob, an executable blob, or a link. Explicitly reject commit
entries (submodules) - it makes little sense to allow users to add a
submodule from a string; there's no possible path to success.
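For illustration, a hedged usage sketch of the accepted case (the path and contents are made up):

    #include <git2.h>
    #include <string.h>

    static int add_blob_from_memory(git_index *index)
    {
        git_index_entry entry;
        const char *content = "hello, world\n";

        memset(&entry, 0, sizeof(entry));
        entry.path = "README";
        entry.mode = GIT_FILEMODE_BLOB;   /* accepted, as are exec blobs and links */
        /* GIT_FILEMODE_COMMIT (a submodule entry) would now be rejected */

        return git_index_add_frombuffer(index, &entry, content, strlen(content));
    }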
|
|
06b8a40f
|
2018-02-16T11:29:46
|
|
Explicitly mark fallthrough cases with comments
A lot of compilers nowadays generate warnings when there are cases in a
switch statement which implicitly fall through to the next case. To
avoid this warning, the last line in the case that is falling through
can have a comment matching a regular expression, where one possible
comment body would be `/* fall through */`.
An alternative to the comment would be an explicit attribute like e.g.
`[[clang::fallthrough]]` or `__attribute__ ((fallthrough))`. But GCC only
introduced support for such an attribute recently with GCC 7. Thus, and
also because the fallthrough comment is supported by most compilers, we
settle for using comments instead.
One shortcoming of that method is that compilers are very strict about
its placement: the comment _really_ has to be the last line of the case.
If a closing brace follows the comment, the heuristic will fail.
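For illustration, a minimal example of the annotation style (not code from this change):

    static void count_line_endings(int c, int *crlf, int *lines)
    {
        switch (c) {
        case '\r':
            (*crlf)++;
            /* fall through */
        case '\n':
            (*lines)++;
            break;
        default:
            break;
        }
    }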
|
|
92324d84
|
2018-02-16T11:28:53
|
|
util: clean up header includes
While "util.h" declares the macro `git__tolower`, which simply resorts
to tolower(3P) on Unix-like systems, the <ctype.h> header is only being
included in "util.c". Thus, anybody who has included "util.h" without
having <ctype.h> included will fail to compile as soon as the macro is
in use.
Furthermore, we can clean up additional includes in "util.c" and simply
replace them with an include for "common.h".
|
|
7c6e9175
|
2018-02-16T11:11:11
|
|
index: shut up warning on uninitialized variable
Even though the `entry` variable will always be initialized when
`read_entry` returns success and even though we never dereference
`entry` in case `read_entry` fails, GCC prints a warning about
uninitialized use. Just initialize the pointer to `NULL` in order to
shut GCC up.
|
|
84f03b3a
|
2018-02-16T10:48:55
|
|
streams: openssl: fix use of uninitialized variable
When verifying the server certificate, we do try to make sure that the
hostname actually matches the certificate alternative names. In cases
where the host is either an IPv4 or IPv6 address, we have to compare the
binary representations of the hostname with the declared IP address of
the certificate. We only do that comparison in case we were successfully
able to parse the hostname as an IP, which would always result in the
memory region being initialized. Still, GCC 6.4.0 was complaining about
usage of non-initialized memory.
Fix the issue by simply asserting that `addr` needs to be initialized.
This shuts up the GCC warning.
|
|
ee6be190
|
2018-01-31T08:36:19
|
|
http: standardize user-agent addition
The winhttp and posix http transports each need to add the user-agent to
their requests. Standardize on a single function to include this so that
we do not get the version numbers we're sending out of sync.
Assemble the complete user agent in `git_http__user_agent` and return
the assembled string.
Co-authored-by: Patrick Steinhardt <ps@pks.im>
|
|
178fda8a
|
2018-02-09T17:55:18
|
|
hash: win32: fix missing comma in `giterr_set`
|
|
638c6b8c
|
2018-02-09T17:32:15
|
|
odb_loose: only close file descriptor if it was opened successfully
|
|
a43bcd2c
|
2018-02-09T17:31:50
|
|
odb: fix memory leaks due to not freeing hash context
|
|
619f61a8
|
2018-02-01T06:22:36
|
|
odb: error when we can't create object header
Return an error to the caller when we can't create an object header for
some reason (printf failure) instead of simply asserting.
|
|
9985edb5
|
2018-02-01T06:32:55
|
|
hash: set error messages on failure
|
|
7ec7aa4a
|
2018-02-01T05:54:57
|
|
odb: assert on logic errors when writing objects
There's no recovery possible if we're so confused or corrupted that
we're trying to overwrite our memory. Simply assert.
|
|
138e4c2b
|
2018-02-01T06:35:31
|
|
git_odb__hashfd: propagate error on failures
|
|
35ed256b
|
2018-02-01T05:11:05
|
|
git_odb__hashobj: provide error messages on failures
Provide error messages on hash failures: assert when given invalid
input instead of failing with a user error; provide error messages
on program errors.
|
|
59d99adc
|
2018-01-31T09:34:52
|
|
odb: check for alloc errors on hardcoded objects
It's unlikely that we'll fail to allocate a single byte, but let's check
for allocation failures for good measure. Untangle the use of `-1` as a
marker for not having found the hardcoded odb object so that it can
instead be used to reflect actual errors.
|
|
ef902864
|
2018-01-31T09:30:51
|
|
odb: error when we can't alloc an object
At the moment, we're swallowing the allocation failure. We need to
return the error to the caller.
|
|
0fd0bfe4
|
2018-02-08T22:51:46
|
|
Merge pull request #4450 from libgit2/ethomson/odb_loose_readstream
Streaming read support for the loose ODB backend
|
|
d749822c
|
2018-02-08T22:50:58
|
|
Merge pull request #4491 from libgit2/ethomson/recursive
Recursive merge: reverse the order of merge bases
|
|
ba4faf6e
|
2018-02-08T17:15:33
|
|
buf_text: remove `offset` parameter of BOM detection function
The function to detect a BOM takes an offset at which it shall look for
the BOM. No caller uses it, and searching for a BOM in the middle of a
buffer is very unlikely to be needed, as a BOM should only ever exist at
the start of a file.
Remove the parameter, as it has already caused confusion due to its
unusual semantics.
|
|
2eea5f1c
|
2018-02-08T10:27:31
|
|
config_parse: fix reading files with BOM
The function `skip_bom` is being used to detect and skip BOM marks prior
to parsing a configuration file. To do so, it simply uses
`git_buf_text_detect_bom`. But since the refactoring to use the parser
interface in commit 9e66590bd (config_parse: use common parser
interface, 2017-07-21), the BOM detection was actually broken.
The issue stems from a misunderstanding of `git_buf_text_detect_bom`. It
was assumed that its third parameter limits the length of the character
sequence to be analyzed, while in fact it is an offset at which the BOM
is to be detected. Fix the parameter to be `0` instead of the buffer
length, as we always want to check the beginning of the configuration
file.
|
|
848153f3
|
2018-02-08T10:02:29
|
|
config_parse: handle empty lines with CRLF
Currently, the configuration parser will fail reading empty lines with
just a CRLF-style line ending. Special-case the '\r' character in order
to handle it the same as Unix-style line endings. Add tests to spot this
regression in the future.
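A minimal sketch of the idea (not the parser's actual code): when deciding whether a line is empty, treat a trailing '\r' the same as '\n'.

    #include <stddef.h>

    /* hypothetical helper: returns 1 if the line holds only whitespace
     * before its line ending, whether that ending is LF or CRLF */
    static int line_is_empty(const char *line)
    {
        size_t i = 0;

        while (line[i] == ' ' || line[i] == '\t')
            i++;

        return line[i] == '\0' || line[i] == '\n' || line[i] == '\r';
    }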
|
|
5340ca77
|
2018-02-08T09:31:51
|
|
config_parse: add comment to clarify logic getting next character
Upon each line, the configuration parser tries to get either the first
non-whitespace character or the first whitespace character, in case
there is no non-whitespace character. The logic handling this looks
rather odd and doesn't immediately convey this meaning, so add a comment
to clarify what happens.
|
|
1403c612
|
2018-01-22T14:44:31
|
|
merge: virtual commit should be last argument to merge-base
Our virtual commit must be the last argument to merge-base: since our
algorithm pushes _both_ parents of the virtual commit, they need to come
last, because merge-base:
> Given three commits A, B and C, git merge-base A B C will compute the
> merge base between A and a hypothetical commit M
We want to calculate the merge base between the actual commit ("two")
and the virtual commit ("one"). Since "one" actually pushes its parents
into the merge-base calculation, we need to calculate the merge base of
"two" and the parents of "one".
|
|
b924df1e
|
2018-01-21T18:05:45
|
|
merge: reverse merge bases for recursive merge
When the commits being merged have multiple merge bases, reverse the
order when creating the virtual merge base. This is for compatibility
with git's merge-recursive algorithm, and ensures that we build
identical trees.
Git does this to try to use older merge bases first. Per 8918b0c:
> It seems to be the only sane way to do it: when a two-head merge is
> done, and the merge-base and one of the two branches agree, the
> merge assumes that the other branch has something new.
>
> If we start creating virtual commits from newer merge-bases, and go
> back to older merge-bases, and then merge with newer commits again,
> chances are that a patch is lost, _because_ the merge-base and the
> head agree on it. Unlikely, yes, but it happened to me.
|
|
ed51feb7
|
2018-01-21T18:01:20
|
|
oidarray: introduce git_oidarray__reverse
Provide a simple function to reverse an oidarray.
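A minimal sketch of such a reversal, assuming only the public `ids`/`count` layout of `git_oidarray`:

    #include <git2.h>

    static void oidarray_reverse(git_oidarray *arr)
    {
        size_t i;
        git_oid tmp;

        /* swap pairwise from both ends towards the middle */
        for (i = 0; i < arr->count / 2; i++) {
            tmp = arr->ids[i];
            arr->ids[i] = arr->ids[arr->count - 1 - i];
            arr->ids[arr->count - 1 - i] = tmp;
        }
    }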
|
|
26f5d36d
|
2018-02-04T10:27:39
|
|
Merge pull request #4489 from libgit2/ethomson/conflicts_crlf
Conflict markers should match EOL style in conflicting files
|
|
8abd514c
|
2018-02-02T17:37:12
|
|
Merge pull request #4499 from pks-t/pks/setuid-config
sysdir: do not use environment in setuid case
|
|
2553cbe3
|
2018-02-02T11:33:46
|
|
Merge pull request #4512 from libgit2/ethomson/header_guards
Consistent header guards
|
|
53454b68
|
2018-02-02T11:31:15
|
|
Merge pull request #4510 from pks-t/pks/attr-file-bare-stat
attr: avoid stat'ting files for bare repositories
|
|
0967459e
|
2018-01-25T13:11:34
|
|
sysdir: do not use environment in setuid case
In order to derive the location of some Git directories, we currently
use the environment variables $HOME and $XDG_CONFIG_HOME. This might
prove to be problematic whenever the binary is run with setuid, that is
when the effective user does not equal the real user. In case the
environment variables do not get sanitized by the caller, we thus might
end up using the real user's configuration when acting as the
effective user.
The fix is to use the passwd entry's directory instead of $HOME in this
situation. As this might break scenarios where the user explicitly sets
$HOME to another path, this fix is only applied in case the effective
user does not equal the real user.
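A hedged sketch of the described behaviour (not the actual sysdir code); which uid is resolved via the passwd database is an assumption of this sketch:

    #include <pwd.h>
    #include <stdlib.h>
    #include <unistd.h>

    static const char *guess_home_dir(void)
    {
        struct passwd *pwd;

        /* only trust $HOME when we are not running setuid */
        if (geteuid() == getuid()) {
            const char *home = getenv("HOME");
            if (home != NULL)
                return home;
        }

        /* otherwise (or if $HOME is unset) fall back to the passwd entry;
         * resolving the effective uid here is an assumption */
        pwd = getpwuid(geteuid());
        return pwd != NULL ? pwd->pw_dir : NULL;
    }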
|
|
09df354e
|
2018-02-01T16:52:43
|
|
odb_loose: HEADER_LEN -> MAX_HEADER_LEN
`MAX_HEADER_LEN` is a more descriptive constant name.
|
|
624614b2
|
2017-12-19T00:43:49
|
|
odb_loose: validate length when checking for zlib content
When checking to see if a file has zlib deflate content, make sure that
we actually have read at least two bytes before examining the array.
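For illustration, a self-contained version of such a check; the CMF/FLG test follows the zlib format, and the length guard is the point of this change:

    #include <stddef.h>

    static int looks_like_zlib_deflate(const unsigned char *data, size_t len)
    {
        unsigned int header;

        if (len < 2)                      /* need both the CMF and FLG bytes */
            return 0;

        header = ((unsigned int)data[0] << 8) | data[1];

        /* deflate compression method and a valid header checksum */
        return (data[0] & 0x8f) == 0x08 && (header % 31) == 0;
    }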
|
|
1118ba3e
|
2017-12-18T23:08:40
|
|
odb_loose: `read_header` for packlike loose objects
Support `read_header` for "packlike loose objects", which were a
temporarily and uncommonly used format loose object format that encodes
the header before the zlib deflate data.
This will never actually be seen in the wild, but add support for it for
completeness and (more importantly) because our corpus of test data has
objects in this format, so it's easier to support it than to try to
special case it.
|
|
4c7a16b7
|
2017-12-18T15:56:21
|
|
odb_loose: read_header should use zstream
Make `read_header` use the common zstream implementation.
Remove the now unnecessary zlib wrapper in odb_loose.
|
|
6155e06b
|
2017-12-17T18:44:02
|
|
zstream: introduce a single chunk reader
Introduce `get_output_chunk` that will inflate/deflate all the available
input buffer into the output buffer. `get_output` will call
`get_output_chunk` in a loop, while other consumers can use it to
inflate only a piece of the data.
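A minimal sketch of the chunked idea using raw zlib (libgit2's `git_zstream` wraps this; the helper name here is hypothetical). The caller sets up the input once and repeatedly asks for output until the stream ends.

    #include <zlib.h>

    /* Inflate as much as fits into `out`; returns 0 on success, -1 on a
     * zlib error, and reports via *written how many bytes were produced. */
    static int inflate_one_chunk(z_stream *z, unsigned char *out,
                                 size_t out_len, size_t *written)
    {
        int zerr;

        z->next_out = out;
        z->avail_out = (uInt)out_len;

        zerr = inflate(z, Z_NO_FLUSH);
        *written = out_len - z->avail_out;

        return (zerr == Z_OK || zerr == Z_STREAM_END) ? 0 : -1;
    }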
|
|
80dc3946
|
2017-12-17T16:26:48
|
|
odb_loose: packlike loose objects use `git_zstream`
Refactor packlike loose object reads to use `git_zstream` for
simplification.
|
|
7cb5bae7
|
2017-12-17T11:55:18
|
|
odb: loose object streaming for packlike loose objects
A "packlike" loose object was a briefly lived loose object format where
the type and size were encoded in uncompressed space at the beginning of
the file, followed by the compressed object contents. Handle these in a
streaming manner as well.
|
|
b61846f2
|
2017-12-17T02:14:29
|
|
odb: introduce streaming loose object reader
Provide a streaming loose object reader.
|
|
97f9a5f0
|
2017-12-17T01:12:49
|
|
odb: provide length and type with streaming read
The streaming read functionality should provide the length and the type
of the object, like the normal read functionality does.
|
|
c74e9271
|
2017-12-16T22:10:11
|
|
odb_loose: stream -> writestream
There are two streaming functions; one for reading, one for writing.
Disambiguate function names between `stream` and `writestream` to make
allowances for a read stream.
|
|
abb04caa
|
2018-02-01T15:55:48
|
|
consistent header guards
Use consistent names for the #include / #define header guard pattern.
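The pattern in question, for reference; the `INCLUDE_..._h__` naming shown here follows what libgit2 headers use:

    #ifndef INCLUDE_example_h__
    #define INCLUDE_example_h__

    /* declarations go here */

    #endif /* INCLUDE_example_h__ */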
|
|
e28e17e6
|
2018-02-01T10:36:33
|
|
attr: avoid stat'ting files for bare repositories
Depending on whether the path we want to look up an attribute for is a
file or a directory, the fnmatch function will be called with different
flags. Because of this, we have to first stat(3) the path to determine
whether it is a file or directory in `git_attr_path__init`. This is
wasteful though in bare repositories, where we can already be assured
that the path will never exist at all due to there being no worktree. In
this case, we will execute an unnecessary syscall, which might be
noticeable on networked file systems.
What happens right now is that we always pass the `GIT_DIR_FLAG_UNKNOWN`
flag to `git_attr_path__init`, which causes it to `stat` the file itself
to determine its type. As it is calling `git_path_isdir` on the path,
which will always return `false` in case the path does not exist, we end
up with the path always being treated as a file in case of a bare
repository. As such, we can just check the bare-repository case in all
callers and then pass in `GIT_DIR_FLAG_FALSE` ourselves, avoiding the
need to `stat`. While this may not always be correct, it at least is no
different from our current behavior.
|
|
341608dc
|
2018-01-31T14:48:42
|
|
Merge pull request #4507 from tomas/patch-1
Honor 'GIT_USE_NSEC' option in `filesystem_iterator_set_current`
|
|
9d8510b3
|
2018-01-31T09:28:43
|
|
Merge pull request #4488 from libgit2/ethomson/conflict_marker_size
Use longer conflict markers in recursive merge base
|
|
054e4c08
|
2018-01-31T14:28:25
|
|
Set ctime/mtime nanosecs to 0 if USE_NSEC is not defined
|
|
752006dd
|
2018-01-30T23:21:19
|
|
Honor 'GIT_USE_NSEC' option in `filesystem_iterator_set_current`
This should have been part of PR #3638. Without this we still get
nsec-related errors, even when using -DGIT_USE_NSEC:
error: ‘struct stat’ has no member named ‘st_mtime_nsec’
|
|
275f103d
|
2018-01-12T08:59:40
|
|
odb: reject reading and writing null OIDs
The null OID (hash with all zeroes) indicates a missing object in
upstream git and is thus not a valid object ID. Add defensive
measures to avoid writing such a hash to the object database in the
very unlikely case where some data results in the null OID. Furthermore,
add shortcuts when reading the null OID from the ODB to avoid ever
returning an object when a faulty repository may contain the null OID.
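For illustration, a hedged sketch of such a defensive check, using the era's public `git_oid_iszero` and `giterr_set_str` calls:

    #include <git2.h>

    static int refuse_null_oid(const git_oid *id)
    {
        if (git_oid_iszero(id)) {
            giterr_set_str(GITERR_ODB, "cannot read or write the null OID");
            return -1;
        }
        return 0;
    }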
|
|
c0487bde
|
2018-01-12T08:23:43
|
|
tree: reject writing null-OID entries to a tree
In commit a96d3cc3f (cache-tree: reject entries with null sha1,
2017-04-21), the git.git project has changed its stance on null OIDs in
tree objects. Previously, null OIDs were accepted in tree entries to
help tools repair broken history. This resulted in some problems though
in that many code paths mistakenly passed null OIDs to be added to a
tree, which was not properly detected.
Align our own code base with the upstream change and reject
writing tree entries early when the OID is all-zero.
|
|
d23ce187
|
2018-01-22T11:55:28
|
|
odb: export mempack backend
Fixes #4492, #4496.
|
|
7f52bc5a
|
2018-01-20T18:19:26
|
|
xdiff: upgrade to git's included xdiff
Upgrade xdiff to git's most recent version, which includes changes to
CR/LF handling. Now CR/LF included in the input files will be detected
and conflict markers will be emitted with CR/LF when appropriate.
|
|
185b0d08
|
2018-01-20T19:41:28
|
|
merge: recursive uses larger conflict markers
Git uses longer conflict markers in the recursive merge base - two more
than the default (thus, conflict markers that are 9 characters long). This allows
users to tell the difference between the recursive merge conflicts and
conflicts between the ours and theirs branches.
This was introduced in git d694a17986a28bbc19e2a6c32404ca24572e400f.
Update our tests to expect this as well.
|
|
b8e9467a
|
2018-01-20T19:39:34
|
|
merge: allow custom conflict marker size
Allow for a custom conflict marker size, allowing callers to override
the default size of the "<<<<<<<" and ">>>>>>>" markers in the
conflicted output file.
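A hedged usage sketch; it assumes the `marker_size` field on `git_merge_file_options` introduced around this change, and omits error handling:

    #include <git2.h>

    static int merge_with_long_markers(git_merge_file_result *out,
                                       const git_merge_file_input *ancestor,
                                       const git_merge_file_input *ours,
                                       const git_merge_file_input *theirs)
    {
        git_merge_file_options opts = GIT_MERGE_FILE_OPTIONS_INIT;

        opts.marker_size = 9;   /* instead of the default 7-character markers */

        return git_merge_file(out, ancestor, ours, theirs, &opts);
    }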
|
|
45f58409
|
2018-01-20T15:15:40
|
|
Merge pull request #4484 from pks-t/pks/fail-creating-branch-HEAD
branch: refuse creating branches named 'HEAD'
|
|
4ea8035d
|
2018-01-20T14:56:51
|
|
Merge pull request #4478 from libgit2/cmn/packed-refs-sorted
refs: include " sorted " in our packed-refs header
|
|
a9677e01
|
2018-01-19T09:20:59
|
|
branch: refuse creating branches named 'HEAD'
Since a625b092c (branch: correctly reject refs/heads/{-dash,HEAD},
2017-11-14), which is included in v2.16.0, upstream git refuses to
create branches which are named HEAD to avoid ambiguity with the
symbolic HEAD reference. Adjust our own code to match that behaviour and
reject creating branches named HEAD.
|
|
4893a9c0
|
2018-01-17T13:54:42
|
|
Merge pull request #4451 from libgit2/charliesome/trailer-info
Implement message trailer parsing API
|
|
d4a3a4b5
|
2018-01-17T12:52:08
|
|
rename find_trailer to extract_trailer_block
|
|
d43974fb
|
2018-01-16T13:40:26
|
|
Change trailer API to return a simple array
|
|
9bf37ddd
|
2018-01-12T15:17:41
|
|
refs: include " sorted " in our packed-refs header
This lets git know that we have in fact written our packed-refs file
sorted (which is apparently not necessarily the case), so that it can use
the new-ish mmapped access, which lets it avoid significant amounts of
effort parsing potentially large files to get to a single piece of data.
|
|
90f81f9f
|
2018-01-12T12:56:57
|
|
transports: local: fix memory leak in reference walk
Upon downloading the pack file, the local transport will iterate through
every reference using `git_reference_foreach`. The function is a bit
tricky though in that it requires the passed callback to free the
references, which does not currently happen.
Fix the memory leak by freeing all passed references in the callback.
|
|
5734768b
|
2018-01-10T19:19:34
|
|
Merge remote-tracking branch 'origin/master' into charliesome/trailer-info
|
|
b21c5408
|
2018-01-08T12:33:07
|
|
cmake: add openssl to the private deps list when it's the TLS implementation
We might want OpenSSL to be the implementation for SHA-1 and/or TLS. If we only
want it for TLS (e.g. we're building with the collision-detecting SHA-1
implementation) then we did not indicate this to the systems including
us as a static library.
Add OpenSSL to the list also during the TLS decision to make sure we say we
should link to it if we use it for TLS.
|
|
b85548ed
|
2018-01-08T12:30:50
|
|
cmake: treat LIBGIT2_PC_REQUIRES as a list
It is indeed a list of dependencies for those which include the static archive.
This is in preparation for adding two possible places where we might add openssl
as a dependency.
|
|
70db57d4
|
2018-01-05T15:31:51
|
|
Merge pull request #4398 from pks-t/pks/generic-sha1
cmake: allow explicitly choosing SHA1 backend
|
|
70aa6146
|
2017-12-05T08:48:31
|
|
cmake: allow explicitly choosing SHA1 backend
Right now, if SHA1DC is disabled, the SHA1 backend is mostly chosen
based on which system libgit2 is being compiled on and which libraries
have been found. To give developers and distributions more choice,
enable them to request specific backends by passing in a
`-DSHA1_BACKEND=<BACKEND>` option instead. This completely replaces the
previous auto-selection.
|
|
f315cd14
|
2018-01-03T18:44:12
|
|
make separators const a macro as well
|
|
1cda43ba
|
2018-01-03T18:30:04
|
|
make comment_line_char const a macro
|
|
a223bae5
|
2018-01-03T14:57:25
|
|
Merge pull request #4437 from pks-t/pks/openssl-hash-errors
hash: openssl: check return values of SHA1_* functions
|
|
399c0b19
|
2018-01-03T14:55:06
|
|
Merge pull request #4462 from pks-t/pks/diff-generated-excessive-stats
diff_generate: avoid excessive stats of .gitattribute files
|
|
d8896bda
|
2018-01-03T16:07:36
|
|
diff_generate: avoid excessive stats of .gitattribute files
When generating a diff between two trees, for each file that is to be
diffed we have to determine whether it shall be treated as text or as
binary files. While git has heuristics to determine which kind of diff
to generate, users can also that default behaviour by setting or
unsetting the 'diff' attribute for specific files.
Because of that, we have to query gitattributes in order to determine
how to diff the current files. Instead of hitting the '.gitattributes'
file every time we need to query an attribute, which can get expensive
especially on networked file systems, we try to cache them instead. This
works perfectly fine for every '.gitattributes' file that is found, but
we hit cache invalidation problems when we determine that an attribuse
file is _not_ existing. We do create an entry in the cache for missing
'.gitattributes' files, but as soon as we hit that file again we
invalidate it and stat it again to see if it has now appeared.
In the case of diffing large trees with each other, this behaviour is
very suboptimal. For each pair of files that is to be diffed, we will
repeatedly query every directory component leading towards their
respective location for an attributes file. This leads to thousands or
even hundreds of thousands of wasted syscalls.
The attributes cache already has a mechanism to help in that scenario in
form of the `git_attr_session`. As long as the same attributes session
is still active, we will not try to re-query the gitattributes files at all
but simply retain our currently cached results. To fix our problem, we
can create a session at the top-most level, which is the initialization
of the `git_diff` structure, and use it in order to look up the correct
diff driver. As the `git_diff` structure is used to generate patches for
multiple files at once, this neatly solves our problem by retaining the
session until patches for all files have been generated.
The fix has been tested with linux.git by calling
`git_diff_tree_to_tree` and `git_diff_to_buf` with v4.10^{tree} and
v4.14^{tree}.
            | time    | .gitattributes stats
without fix | 33.201s | 844614
with fix    | 30.327s | 4441
While execution only improved by roughly 10%, the stat(3) syscalls for
.gitattributes files decreased by 99.5%. The benchmarks were quite
simple with best-of-three timings on Linux ext4 systems. One can assume
that for network based file systems the performance gain will be a lot
larger due to a much higher latency.
|
|
30455a56
|
2018-01-03T13:09:21
|
|
Merge pull request #4439 from tiennou/fix/4352
cmake: create a dummy file for Xcode
|
|
ba56f781
|
2018-01-03T12:54:42
|
|
streams: openssl: fix thread-safety for OpenSSL error messages
The function `ERR_error_string` can be invoked without providing a
buffer, in which case OpenSSL will simply return a string printed into a
static buffer. Obviously and as documented in ERR_error_string(3), this
is not thread-safe at all. As libgit2 is a library, though, it is easily
possible that other threads may be using OpenSSL at the same time, which
might lead to clobbered error strings.
Fix the issue by instead using a stack-allocated buffer. According to
the documentation, the caller has to provide a buffer of at least 256
bytes of size. While we do so, make sure that the buffer can never be
overflowed by switching to `ERR_error_string_n` to specify the buffer's
size.
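For illustration, the thread-safe call looks like this, with a buffer of at least 256 bytes as ERR_error_string(3) requires:

    #include <openssl/err.h>

    static void report_last_openssl_error(void)
    {
        char errmsg[256];

        /* format into a stack buffer instead of OpenSSL's static one */
        ERR_error_string_n(ERR_get_error(), errmsg, sizeof(errmsg));

        /* hand errmsg to the library's error reporting here */
    }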
|
|
75e1737a
|
2017-12-08T10:10:19
|
|
hash: openssl: check return values of SHA1_* functions
The OpenSSL functions `SHA1_Init`, `SHA1_Update` and `SHA1_Final` all
return 1 for success and 0 otherwise, but we never check their return
values. Do so.
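For illustration, a checked variant; the OpenSSL SHA1_* calls return 1 on success and 0 on failure:

    #include <openssl/sha.h>
    #include <stddef.h>

    static int sha1_buffer(unsigned char out[SHA_DIGEST_LENGTH],
                           const void *data, size_t len)
    {
        SHA_CTX ctx;

        if (SHA1_Init(&ctx) != 1 ||
            SHA1_Update(&ctx, data, len) != 1 ||
            SHA1_Final(out, &ctx) != 1)
            return -1;   /* propagate the failure instead of ignoring it */

        return 0;
    }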
|
|
98303ea3
|
2018-01-03T11:27:12
|
|
Merge pull request #4457 from libgit2/ethomson/tree_error_messages
tree: standard error messages are lowercase
|
|
e8bc8558
|
2018-01-02T13:29:49
|
|
Merge remote-tracking branch 'origin/master' into charliesome/trailer-info
|
|
7610638e
|
2018-01-01T17:52:06
|
|
Merge pull request #4453 from libgit2/ethomson/spnego
winhttp: properly support ntlm and negotiate
|
|
2c99011a
|
2017-12-31T09:33:19
|
|
tree: standard error messages are lowercase
Our standard error messages begin with a lower case letter so that they
can be prefixed or embedded nicely.
These error messages were missed during the standardization pass since
they use the `tree_error` helper function.
|
|
d6210245
|
2017-12-30T13:09:43
|
|
Merge pull request #4159 from richardipsum/notes-commit
Support using notes via a commit rather than a ref
|
|
8cdf439b
|
2017-12-30T13:07:03
|
|
Merge pull request #4028 from chescock/improve-local-fetch
Transfer fewer objects on push and local fetch
|
|
2b7a3393
|
2017-12-30T12:47:57
|
|
Merge pull request #4455 from libgit2/ethomson/branch_symlinks
refs: traverse symlinked directories
|
|
e14bf97e
|
2017-12-30T08:09:22
|
|
Merge pull request #4443 from libgit2/ethomson/large_loose_blobs
Inflate large loose blobs
|
|
9e94b6af
|
2017-12-30T00:12:46
|
|
iterator: cleanups with symlink dir handling
Perform some error checking when examining symlink directories.
|
|
e9628e7b
|
2017-10-30T11:38:33
|
|
branches: Check symlinked subdirectories
Native Git allows symlinked directories under .git/refs. This
change allows libgit2 to also look for references that live under
symlinked directories.
Signed-off-by: Andy Doan <andy@opensourcefoundries.com>
|
|
526dea1c
|
2017-12-29T17:41:24
|
|
winhttp: properly support ntlm and negotiate
When parsing unauthorized responses, properly parse headers looking for
both NTLM and Negotiate challenges. Set the HTTP credentials to default
credentials (using a `NULL` username and password) with the schemes
supported by ourselves and the server.
|
|
083b1a2e
|
2017-12-28T10:38:31
|
|
Merge pull request #4021 from carlosmn/cmn/refspecs-fetchhead
FETCH_HEAD and multiple refspecs
|
|
1b4fbf2e
|
2017-11-19T09:47:07
|
|
remote: append to FETCH_HEAD rather than overwrite for each refspec
We treat each refspec on its own, but the code currently overwrites the contents
of FETCH_HEAD, so we end up with only the entries for the last refspec we processed.
Instead, truncate it before performing the updates and append to it when
updating the references.
|
|
3ccc1a4d
|
2017-11-19T09:46:02
|
|
futils: add a function to truncate a file
We want to do this in order to get FETCH_HEAD to be empty when we start updating
it due to fetching from the remote.
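A minimal sketch of truncating a file before appending to it, using plain POSIX calls (libgit2's own futils wrapper differs):

    #include <fcntl.h>
    #include <unistd.h>

    /* open with O_TRUNC so any previous contents are discarded; later
     * writers then append their entries one refspec at a time */
    static int truncate_file(const char *path)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0666);

        if (fd < 0)
            return -1;

        return close(fd);
    }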
|
|
4110fc84
|
2017-12-23T23:30:29
|
|
Merge pull request #4285 from pks-t/pks/patches-with-whitespace
patch_parse: fix parsing unquoted filenames with spaces
|
|
c3514b0b
|
2017-12-23T14:59:07
|
|
Fix unpack double free
If an element has been cached but the call to
packfile_unpack_compressed() then fails, its data is freed right away;
the element, however, is not removed from the cache, which leads to the
data being freed a second time.
This change sets obj->data to NULL to avoid the double-free. It also
stops trying to resolve deltas after two consecutive failed rounds of
resolution, and adds a test for this.
|
|
9f7ad3c5
|
2017-12-23T10:55:13
|
|
Merge pull request #4430 from tiennou/fix/openssl-x509-leak
Free OpenSSL peer certificate
|
|
30d91760
|
2017-12-23T10:52:08
|
|
Merge pull request #4435 from lhchavez/ubsan-shift-overflow
libFuzzer: Prevent a potential shift overflow
|
|
1ddc57b3
|
2017-12-23T10:09:12
|
|
Merge pull request #4402 from libgit2/ethomson/iconv
cmake: let USE_ICONV be optional on macOS
|
|
06f3aa5f
|
2017-12-23T10:07:44
|
|
Merge pull request #4429 from novalis/delete-modify-submodule-merge
Do not attempt to check out a submodule as a blob when merging a submodule modify/delete conflict
|
|
bdb54214
|
2017-12-11T16:46:05
|
|
hash: commoncrypto hash should support large files
Teach the CommonCrypto hash mechanisms to support large files. The hash
primitives take a `CC_LONG` (aka `uint32_t`) at a time, so loop, giving
the hash function at most an unsigned 32-bit value's worth of bytes,
until we have hashed the entire file.
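A hedged sketch of such a loop (the helper name is hypothetical):

    #include <CommonCrypto/CommonDigest.h>
    #include <stddef.h>
    #include <stdint.h>

    static void sha1_update_large(CC_SHA1_CTX *ctx,
                                  const unsigned char *data, size_t len)
    {
        while (len > 0) {
            /* CC_SHA1_Update takes a CC_LONG (uint32_t), so cap each call */
            CC_LONG chunk = len > UINT32_MAX ? (CC_LONG)UINT32_MAX : (CC_LONG)len;

            CC_SHA1_Update(ctx, data, chunk);
            data += chunk;
            len -= chunk;
        }
    }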
|
|
a89560d5
|
2017-12-10T17:26:43
|
|
hash: win32 hash mechanism should support large files
Teach the win32 hash mechanisms to support large files. The hash
primitives take at most `ULONG_MAX` bytes at a time. Loop, giving the
hash function the maximum supported number of bytes, until we have
hashed the entire file.
|
|
3e6533ba
|
2017-12-10T17:25:00
|
|
odb_loose: reject objects that cannot fit in memory
Check the size of objects being read from the loose odb backend and
reject those that would not fit in memory with an error message that
reflects the actual problem, instead of error'ing later with an
unintuitive error message regarding truncation or invalid hashes.
|