|
f0e693b1
|
2021-09-07T17:53:49
|
|
str: introduce `git_str` for internal, `git_buf` is external
libgit2 has two distinct requirements that were previously solved by
`git_buf`. We require:
1. A general-purpose string class that provides a number of utility APIs
for manipulating data (e.g., concatenating, truncating, etc.).
2. A structure that we can use to return strings to callers that they
can take ownership of.
By using a single class (`git_buf`) for both of these purposes, we have
confused the API to the point that refactorings are difficult and
reasoning about correctness is also difficult.
Rename the utility class `git_buf` to `git_str`: this reflects its
general purpose as an internal string buffer class. The name is also an
homage to Junio Hamano ("gitstr").
The public API remains `git_buf`, and has a much smaller footprint. It
is generally only used as an "out" param with strict requirements that
follow the documentation. (Exceptions exist for some legacy APIs to
avoid breaking callers unnecessarily.)
Utility functions exist to convert a user-specified `git_buf` to a
`git_str` so that we can call internal functions, then convert it back
again.
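As a rough sketch of the resulting pattern (the conversion helpers named
below are hypothetical stand-ins for the real internal utilities), a
public entry point accepts a `git_buf` out parameter, adopts it into a
`git_str` for the internal implementation, and hands the result back
before returning:

    /* Sketch only: buf_to_str()/str_to_buf() are hypothetical helper names. */
    int git_example_path(git_buf *out, git_repository *repo)
    {
        git_str str = GIT_STR_INIT;  /* assumes an initializer analogous to GIT_BUF_INIT */
        int error;

        if ((error = buf_to_str(&str, out)) < 0)   /* adopt caller-provided storage */
            return error;

        error = git_example__path(&str, repo);     /* internal, git_str-based implementation */

        str_to_buf(out, &str);                     /* give ownership back to the caller */
        return error;
    }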
|
|
31ecaca2
|
2021-09-30T08:11:40
|
|
hash: hash functions operate on byte arrays not git_oids
Separate the concerns of the hash functions from the git_oid functions.
The git_oid structure will need to understand either SHA1 or SHA256; the
hash functions should only deal with the appropriate one of these.
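In practice this means the hash layer fills a caller-provided byte array
and knows nothing about `git_oid`; the oid layer copies the digest
itself. A minimal sketch, assuming a `git_hash_buf` that writes raw
digest bytes and a SHA1 digest-size constant (exact names and prototypes
may differ):

    #include <string.h>

    int example_oid_from_data(git_oid *out, const void *data, size_t len)
    {
        unsigned char digest[GIT_HASH_SHA1_SIZE];     /* assumed digest-size constant */

        if (git_hash_buf(digest, data, len) < 0)      /* hash layer: bytes in, bytes out */
            return -1;

        memcpy(out->id, digest, GIT_HASH_SHA1_SIZE);  /* oid layer owns the git_oid */
        return 0;
    }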
|
|
2a713da1
|
2021-09-29T21:31:17
|
|
hash: accept the algorithm in inputs
|
|
168fe39b
|
2018-11-28T14:26:57
|
|
object_type: use new enumeration names
Use the new object_type enumeration names within the codebase.
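For reference, the substitution is mechanical; the older spellings
remain available as deprecated aliases:

    /* before */
    git_otype type = GIT_OBJ_BLOB;

    /* after */
    git_object_t type = GIT_OBJECT_BLOB;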
|
|
ecf4f33a
|
2018-02-08T11:14:48
|
|
Convert usage of `git_buf_free` to new `git_buf_dispose`
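The call is a drop-in rename:

    git_buf buf = GIT_BUF_INIT;
    /* ... */
    git_buf_dispose(&buf);   /* formerly git_buf_free(&buf) */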
|
|
619f61a8
|
2018-02-01T06:22:36
|
|
odb: error when we can't create object header
Return an error to the caller when we can't create an object header for
some reason (printf failure) instead of simply asserting.
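A minimal sketch of the shape of that check, assuming the header is
formatted with snprintf (names here are illustrative, not the exact
libgit2 internals):

    #include <stdio.h>

    static int format_object_header(char *hdr, size_t hdr_size,
                                    size_t obj_len, const char *type_name)
    {
        int len = snprintf(hdr, hdr_size, "%s %zu", type_name, obj_len);

        if (len < 0 || (size_t)len >= hdr_size)
            return -1;     /* report the failure instead of asserting */

        return len + 1;    /* header length includes the trailing NUL */
    }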
|
|
909a1992
|
2017-12-31T09:56:30
|
|
odb_loose: largefile tests only on 64-bit platforms
Only run the large file tests on 64-bit platforms.
Even though we support streaming reads on objects, and do not need to
fit them in memory, we use `size_t` in various places to reflect the
size of an object.
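A sketch of the guard in a test, assuming a `GIT_ARCH_64` feature macro
(the exact macro name is an assumption) and clar's `cl_skip`:

    void test_odb_largefiles__example(void)
    {
    #ifndef GIT_ARCH_64
        cl_skip();   /* object sizes flow through size_t; skip on 32-bit builds */
    #else
        /* ... write and read back a >4GB object ... */
    #endif
    }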
|
|
27078e58
|
2017-12-18T23:11:42
|
|
odb_loose: test read_header on large blobs
Test that we can `read_header` on large blobs. This should succeed on all
platforms since we read only a few bytes into memory to be able to
parse the header.
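The call being exercised is `git_odb_read_header`, roughly like this
(clar assertions shown; type names are the current ones):

    size_t len;
    git_object_t type;

    /* only a few bytes are inflated to parse the "blob <size>\0" header */
    cl_git_pass(git_odb_read_header(&len, &type, odb, &oid));
    cl_assert_equal_i(GIT_OBJECT_BLOB, type);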
|
|
dbe3d3e9
|
2017-12-17T02:12:19
|
|
odb_loose: test reading a large file in stream
Since some test situations may have generous disk space but limited RAM
(e.g., hosted build agents), test that we can stream a large file into a
loose object, and then stream it out of the loose object storage.
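The round trip uses the ODB streaming API, roughly as follows (error
checking trimmed; `total_size` and `fill_chunk` are hypothetical, and
the exact `git_odb_open_rstream` signature has varied across versions):

    git_odb_stream *ws, *rs;
    git_object_t type;
    git_oid oid;
    size_t len, n;
    char chunk[4096];

    /* stream the large payload in without ever holding it all in memory */
    git_odb_open_wstream(&ws, odb, total_size, GIT_OBJECT_BLOB);
    while ((n = fill_chunk(chunk, sizeof(chunk))) > 0)
        git_odb_stream_write(ws, chunk, n);
    git_odb_stream_finalize_write(&oid, ws);
    git_odb_stream_free(ws);

    /* ... then stream it back out chunk by chunk */
    git_odb_open_rstream(&rs, &len, &type, odb, &oid);
    while (git_odb_stream_read(rs, chunk, sizeof(chunk)) > 0)
        /* consume the chunk */;
    git_odb_stream_free(rs);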
|
|
456e5218
|
2017-12-20T16:13:31
|
|
tests: add GITTEST_SLOW env var check
Writing very large files may be slow, particularly on inefficient
filesystems and when running instrumented code to detect invalid memory
accesses (e.g., within Valgrind or similar tools).
Introduce `GITTEST_SLOW` so that slow tests can be skipped by the CI
system.
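In a clar test this reduces to an environment check at the top of the
slow tests, something along these lines (using the standard getenv; the
helper the test suite actually uses may differ):

    #include <stdlib.h>

    void test_odb_largefiles__slow_write(void)
    {
        if (!getenv("GITTEST_SLOW"))
            cl_skip();   /* only run when the environment opts in */

        /* ... the multi-gigabyte write ... */
    }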
|
|
3e6533ba
|
2017-12-10T17:25:00
|
|
odb_loose: reject objects that cannot fit in memory
Check the size of objects being read from the loose odb backend and
reject those that would not fit in memory with an error message that
reflects the actual problem, instead of erroring later with an
unintuitive error message regarding truncation or invalid hashes.
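Conceptually the check is a single early guard (the helper and error
class names below follow libgit2 conventions but are illustrative):

    /* reject up front, with a message about memory, rather than failing
     * later with a confusing truncation or hash error */
    if (!git__is_sizet(declared_size)) {
        git_error_set(GIT_ERROR_OBJECT, "object too large to fit in memory");
        return -1;
    }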
|
|
dacc3291
|
2017-11-30T15:49:05
|
|
odb: test loose reading/writing large objects
Introduce a test for very large objects in the ODB. Write a large
object (5 GB) and ensure that the write succeeds and produces the
expected object ID. Introduce a second test that writes that file and
ensures that we can subsequently read it back.
|