|
6e010bb1
|
2017-06-12T15:43:56
|
|
tests: odb: allow passing fake objects to the fake backend
Right now, the fake backend is quite restricted in how it works: we
pass it a single OID, which it is to return later, as well as an error
code we want it to return. While this is sufficient for existing tests,
we can make the fake backend a little more generic in order to allow us
to test additional scenarios.
To do so, we change the backend so that it no longer accepts an error
code and a single OID to return for queries, but instead a simple array
of OIDs with their respective blob contents. On each query, the fake
backend simply iterates through this array and returns the first
matching object.
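The idea translates into roughly the following C sketch. Names like
`fake_object`, `fake_backend` and `fake_backend__read` are illustrative
assumptions rather than the exact helper code; the read callback merely
follows the usual `git_odb_backend` read signature:

    #include <string.h>
    #include <git2.h>
    #include <git2/sys/odb_backend.h>

    /* Sketch only; layout and names are assumptions. */
    typedef struct {
        const char *oid;     /* hex OID this entry answers for */
        const char *content; /* fake blob contents for that OID */
    } fake_object;

    typedef struct {
        git_odb_backend parent;
        const fake_object *objects; /* array terminated by a NULL oid */
    } fake_backend;

    static int fake_backend__read(void **out, size_t *len_p,
        git_otype *type_p, git_odb_backend *backend, const git_oid *oid)
    {
        const fake_backend *fake = (const fake_backend *) backend;
        const fake_object *obj;
        git_oid current;

        for (obj = fake->objects; obj->oid; obj++) {
            if (git_oid_fromstr(&current, obj->oid) < 0)
                continue;

            if (git_oid_equal(&current, oid)) {
                *len_p = strlen(obj->content);
                *out = git_odb_backend_malloc(backend, *len_p);
                if (*out == NULL)
                    return -1;
                memcpy(*out, obj->content, *len_p);
                *type_p = GIT_OBJ_BLOB;
                return GIT_OK;
            }
        }

        return GIT_ENOTFOUND;
    }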
|
|
369cb45f
|
2017-06-12T15:21:58
|
|
tests: do not reuse OID from backend
In order to make the fake backend more useful, we want to enable it to
hold multiple object references. To do so, we need to decouple it from
the single fake OID it currently holds, which we simply move up into
the calling tests.
|
|
2add34d0
|
2017-06-12T14:53:46
|
|
tests: odb: move fake backend into its own file
The fake backend used by the test suite `odb::backend::nonrefreshing`
is useful for writing low-level tests against the ODB layer. As such,
we move the implementation into its own `backend_helpers` module.
|
|
28a0741f
|
2017-04-10T09:30:08
|
|
odb: verify object hashes
The upstream git.git project verifies objects when looking them up from
disk. This avoids scenarios where objects have somehow become corrupt on
disk, e.g. due to hardware failures or bit flips. While our mantra is
usually to follow upstream behavior, we do not do so in this case, as we
never check hashes of objects we have just read from disk.
To fix this, we create a new error code `GIT_EMISMATCH`, which denotes
that we have looked up an object with a hashsum mismatch. `odb_read_1`
will then, after having read the object from its backend, hash the
object and compare the resulting hash to the expected hash. If the
hashes do not match, it will return an error.
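Conceptually, the check boils down to something like the following
sketch; `check_object_hash` is an illustrative helper name, not the
actual code in `odb_read_1`:

    #include <git2.h>

    /* Simplified sketch of the verification step. */
    static int check_object_hash(const git_rawobj *raw,
        const git_oid *expected)
    {
        git_oid actual;
        int error;

        /* Re-hash the data the backend just handed back. */
        if ((error = git_odb_hash(&actual, raw->data,
                raw->len, raw->type)) < 0)
            return error;

        /* Refuse to hand out the object if the checksum differs. */
        if (!git_oid_equal(&actual, expected)) {
            giterr_set_str(GITERR_ODB, "object hash mismatch");
            return GIT_EMISMATCH;
        }

        return 0;
    }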
This obviously introduces another computation of checksums and could
potentially impact performance. Note though that we usually perform I/O
operations directly before doing this computation, and as such the
actual overhead should be drowned out by I/O. Running our test suite
seems to confirm this guess. On a Linux system with best-of-five
timings, we had 21.592s with the check enabled and 21.590s with the
check disabled. Note, though, that our test suite contains mostly very
small blobs; it is expected that repositories with bigger blobs may
notice a bigger hit from this check.
In addition to a new test, we also had to change the
odb::backend::nonrefreshing test suite, which now triggers a hashsum
mismatch when looking up the commit "deadbeef...". This is expected, as
the fake backend allocated inside the test will return an empty
object for the OID "deadbeef...", which will obviously not hash back to
"deadbeef..." again. We can simply adjust the hash to equal the hash of
the empty object here to fix this test.
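For illustration, one way to compute such a matching hash inside a
clar test, assuming the empty fake object is treated as a blob
(variable names are made up):

    /* Sketch: derive an OID that actually matches empty contents. */
    git_oid empty_oid;
    char hex[GIT_OID_HEXSZ + 1] = { 0 };

    cl_git_pass(git_odb_hash(&empty_oid, "", 0, GIT_OBJ_BLOB));
    git_oid_fmt(hex, &empty_oid);

    /* `hex` now holds the hash the fake backend has to answer for. */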
|
|
e29e8029
|
2017-04-10T10:31:22
|
|
tests: odb: make hash of fake backend configurable
In the odb::backend::nonrefreshing test suite, we set up a fake backend
so that we are able to determine if backend functions are called
correctly. During the setup, we also parse an OID which is later used
to read out the pseudo-object. While this procedure works right now, it
will create problems later when we implement hash verification for
looked-up objects: the current OID ("deadbeef") will not match the hash
of the contents we give back to the ODB layer and thus cannot be
verified. Make the hash configurable so that we can simply switch the
returned hash for single tests.
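A hypothetical sketch of such a configurable fixture; the helper
`build_fake_backend`, the globals and the fixture repository name are
illustrative assumptions:

    #include "clar_libgit2.h"

    /* Assumed helper that builds a fake backend answering for `hash`. */
    extern int build_fake_backend(
        git_odb_backend **out, const char *hash);

    static git_repository *_repo;
    static git_oid _oid;

    static void setup_repository_and_backend(const char *hash)
    {
        git_odb *odb = NULL;
        git_odb_backend *backend = NULL;

        _repo = cl_git_sandbox_init("testrepo.git");

        /* The fake backend answers for whatever hash the test picks. */
        cl_git_pass(build_fake_backend(&backend, hash));

        cl_git_pass(git_repository_odb(&odb, _repo));
        cl_git_pass(git_odb_add_backend(odb, backend, 10));
        git_odb_free(odb);

        cl_git_pass(git_oid_fromstr(&_oid, hash));
    }

Each test can then invoke the setup with whichever hash matches the
contents its scenario hands back to the ODB layer.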
|
|
17820381
|
2013-11-14T14:05:52
|
|
Rename tests-clar to tests
|