|
d730d3f4
|
2013-07-31T16:40:42
|
|
Major rename detection changes
After doing further profiling, I found that a lot of time was
being spent inserting hashes into the file hash signature when
using the rolling hash, because that approach generates a hash
per byte of the file instead of one per run/line of data.
To optimize this, I decided to convert back to a run-based file
signature algorithm, which is more like what core Git does.
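A minimal sketch of that run-based direction (the helper names
here are hypothetical, not the actual libgit2 code): hash each
run/line once instead of producing a hash at every byte position.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative only: add_hash() stands in for whatever records a
     * sample in the file signature; it is not the real libgit2 API. */
    static uint32_t hash_run(const char *data, size_t len)
    {
        uint32_t h = 5381;                      /* djb2-style accumulator */
        while (len--)
            h = (h << 5) + h + (uint8_t)*data++;
        return h;
    }

    static void signature_from_buffer(
        const char *buf, size_t len, void (*add_hash)(uint32_t))
    {
        size_t start = 0, i;

        for (i = 0; i < len; ++i) {
            if (buf[i] == '\n') {               /* end of a run/line */
                add_hash(hash_run(buf + start, i - start));
                start = i + 1;
            }
        }
        if (start < len)                        /* trailing run, no newline */
            add_hash(hash_run(buf + start, len - start));
    }

The point is that add_hash() is called once per run rather than
once per byte, so the cost of maintaining the signature scales
with the number of runs instead of the byte length of the file.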
After converting to the run-based signature, a number of the
existing tests started to fail. In some cases, this was because
the test was coded too specifically to the particular results of
the file similarity metric; in other cases there appear to have
been bugs in the core rename detection code that only produced
the expected results by coincidence of the file similarity
scoring.
This renames all the variables in the core rename detection code
to be more consistent and hopefully easier to follow, which made
it a bit easier to reason about the behavior of that code and fix
the problems I was seeing. I think it's in better shape now.
There are now a couple of tests that stress test the rename
detection code, and they are quite slow. Most of the time is
spent setting up the test data on disk and in the index. When we
roll out performance improvements for index insertion, I hope
these tests will speed up as well.
|
|
427cc255
|
2013-07-24T13:11:11
|
|
Use local variables in hash calc to avoid aliasing
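A small sketch of the pattern this refers to (hypothetical
function, not the code from this commit): keep the hash state in
a local variable so the compiler does not have to assume that
writes through other pointers alias the state inside the loop.

    #include <stddef.h>
    #include <stdint.h>

    static void hash_update(uint32_t *state, const uint8_t *data, size_t len)
    {
        uint32_t h = *state;     /* local copy: no aliasing inside the loop */
        size_t i;

        for (i = 0; i < len; ++i)
            h = (h << 5) + h + data[i];

        *state = h;              /* single write back at the end */
    }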
|
|
1fed6b07
|
2013-05-13T21:57:37
|
|
Fix trailing whitespaces
|
|
e40f1c2d
|
2013-03-08T16:39:57
|
|
Make tree iterator handle icase equivalence
There is a serious bug in the previous tree iterator implementation.
If case insensitivity resulted in member elements being equivalent
to one another, and those member elements were trees, then the
children of the colliding elements would be processed in sequence
instead of in a single flattened list. This meant that the tree
iterator was not truly acting like a case-insensitive list.
This completely reworks the tree iterator to manage lists with
case insensitive equivalence classes and advance through the items
in a unified manner in a single sorted frame.
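A rough sketch of the equivalence-class idea (illustrative names,
not the iterator code itself): given entry names sorted
case-insensitively, names that differ only by case form one
class, and the children of that whole class are walked as a
single flattened, sorted frame rather than one subtree at a time.

    #include <stddef.h>
    #include <strings.h>   /* strcasecmp (POSIX) */

    /* Find the end of the case-insensitive equivalence class that starts
     * at names[start], assuming names[] is sorted case-insensitively. */
    static size_t icase_class_end(
        const char **names, size_t count, size_t start)
    {
        size_t end = start + 1;
        while (end < count && strcasecmp(names[start], names[end]) == 0)
            end++;
        return end;    /* names[start..end) differ only by case */
    }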
It is possible that at a future date we might want to update this
to separate the case insensitive and case sensitive tree iterators
so that the case sensitive one could be a minimal amount of code
and the insensitive one would always know what it needed to do
without checking flags.
But there would be so much shared code between the two that I'm
not sure that's a win. For now, this gets us what we need.
More tests are needed, though.
|
|
97b71374
|
2013-02-28T14:14:45
|
|
Add GIT_STDLIB_CALL
This removes the one-off GIT_CDECL and adds a new standard macro
for this purpose named GIT_STDLIB_CALL, with a src/win32-specific
definition on the Windows platform.
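For illustration, a calling-convention macro of this kind is
typically shaped like the following; the exact definition under
src/win32 is an assumption here, not a quote of the commit.

    /* Sketch only: expand to __cdecl under MSVC so callbacks handed to
     * C runtime functions (e.g. qsort comparators) use the convention
     * the runtime expects, and expand to nothing everywhere else. */
    #ifdef _MSC_VER
    # define GIT_STDLIB_CALL __cdecl
    #else
    # define GIT_STDLIB_CALL
    #endif

    /* Example use on a qsort comparator: */
    static int GIT_STDLIB_CALL compare_ints(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }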
|
|
f708c89f
|
2013-02-27T15:15:39
|
|
Fix some warnings on Windows
|
|
11b5beb7
|
2013-02-27T15:07:28
|
|
Use cdecl for hashsig sorting functions on Windows
|
|
9bc8be3d
|
2013-02-19T10:25:41
|
|
Refine pluggable similarity API
This plugs in the three basic similarity strategies for handling
whitespace via internal use of the pluggable API. In so doing, I
realized that the use of git_buf in the hashsig API was not needed
and actually just made it harder to use, so I tweaked that API as
well.
Note that the similarity metric is still not hooked up in the
find_similarity code - this is just setting out the function that
will be used.
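As a sketch of what "pluggable" means here (the field names are
illustrative, not the exact libgit2 declaration), the metric can
be a small struct of callbacks, so the three whitespace
strategies become three metric instances rather than special
cases inside the rename code.

    #include <stddef.h>

    /* Illustrative shape of a pluggable similarity metric. */
    typedef struct {
        int  (*buffer_signature)(void **out, const char *buf, size_t len,
                                 void *payload);
        void (*free_signature)(void *sig, void *payload);
        int  (*similarity)(int *score, void *sig_a, void *sig_b,
                           void *payload);
        void *payload;
    } similarity_metric;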
|
|
5e5848eb
|
2013-02-14T17:25:10
|
|
Change similarity metric to sampled hashes
This moves the similarity metric code out of buf_text and into a
new file. Also, this implements a different approach to similarity
measurement based on a Rabin-Karp rolling hash where we only keep
the top 100 and bottom 100 hashes. In theory, that should be
sufficient samples to give a fairly accurate measurement while
limiting the amount of data we keep for file signatures, no matter
how large the file is.
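A minimal sketch of the sampling approach (window size, base, and
function names are assumptions, not the actual implementation):
slide a Rabin-Karp hash across the buffer and hand every window
hash to a sampler, which would keep only the N smallest and N
largest values it has seen.

    #include <stddef.h>
    #include <stdint.h>

    #define RK_WINDOW 16u   /* illustrative window length */
    #define RK_BASE   31u

    static void rolling_hashes(
        const uint8_t *buf, size_t len, void (*sample)(uint32_t))
    {
        uint32_t h = 0, pow = 1;
        size_t i;

        for (i = 0; i + 1 < RK_WINDOW; ++i)
            pow *= RK_BASE;                  /* pow = RK_BASE^(RK_WINDOW-1) */

        for (i = 0; i < len; ++i) {
            if (i >= RK_WINDOW)
                h -= (uint32_t)buf[i - RK_WINDOW] * pow;   /* drop oldest */
            h = h * RK_BASE + buf[i];                      /* add newest */
            if (i + 1 >= RK_WINDOW)
                sample(h);           /* one hash per byte past the window */
        }
    }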
|