| Commit message | Author | Age | Files | Lines |

Instead of calling out to strtol, which parses generic numbers, copy a
parse function from git that is specialised for octal file modes.
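A specialised parser can reject anything that is not a valid mode field and stop exactly at the separating space. The sketch below shows the idea; `parse_tree_mode` is an illustrative name, not the actual libgit2 or git function.

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Parse an octal file mode (e.g. "100644") in the spirit of git's own
 * helper: accept only the digits 0-7 and require a space terminator,
 * which in a tree object separates the mode from the filename.
 * Returns the parsed mode, or 0 on malformed input.
 * (parse_tree_mode is a hypothetical name for illustration.) */
static uint32_t parse_tree_mode(const char *buffer, const char **end)
{
    uint32_t mode = 0;

    if (*buffer < '0' || *buffer > '7')
        return 0; /* empty or non-octal mode field */

    while (*buffer >= '0' && *buffer <= '7')
        mode = (mode << 3) + (uint32_t)(*buffer++ - '0');

    if (*buffer != ' ')
        return 0; /* anything but a space after the digits is malformed */

    if (end)
        *end = buffer + 1;
    return mode;
}
```

Unlike strtol, this never skips leading whitespace, never honours a sign or base prefix, and fails fast on bytes that cannot appear in a mode.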
tree: mark cloned tree entries as un-pooled
When duplicating a `struct git_tree_entry` with
`git_tree_entry_dup` the resulting structure is not allocated
inside a memory pool. As we do a 1:1 copy of the original struct,
though, we also copy the `pooled` field, which is set to `true`
for pooled entries. This results in a huge memory leak as we
never free tree entries that were duplicated from a pooled
tree entry.
Fix this by marking the newly duplicated entry as un-pooled.
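In miniature, the bug and its fix look like the following; `tree_entry` stands in for libgit2's `git_tree_entry`, reduced to the fields that matter here.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Simplified model of the fix: a byte-wise copy of a pooled entry also
 * copies the `pooled` flag, so the copy would wrongly claim to live in a
 * memory pool and would never be freed individually.
 * (tree_entry is a stand-in for libgit2's git_tree_entry.) */
struct tree_entry {
    bool pooled;   /* true when the entry's memory belongs to a pool */
    unsigned mode;
};

static struct tree_entry *tree_entry_dup(const struct tree_entry *src)
{
    struct tree_entry *copy = malloc(sizeof(*copy));
    if (!copy)
        return NULL;

    memcpy(copy, src, sizeof(*copy));
    copy->pooled = false; /* the copy was malloc'd, not pool-allocated */
    return copy;
}
```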
Improvements to tree parsing speed
Return an error in case the length is too big. Also take this
opportunity to have a single allocating function for the size and
overflow logic.
This reduces the size of the struct from 32 to 26 bytes, and leaves a
single padding byte at the end of the struct (which comes from the
zero-length array).
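A layout in this spirit stores the filename inline after the fixed fields via a flexible array member, so each entry is a single allocation. The field names and sizes below are assumptions for illustration; the commit only states the 32-to-26-byte shrink.

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Illustrative layout: a raw 20-byte object id plus two 16-bit fields,
 * followed by the filename stored inline in a flexible array member.
 * (Field names and widths are an assumption, not the exact libgit2
 * struct; the commit reports the struct shrinking from 32 to 26 bytes.) */
struct entry {
    uint8_t  oid[20];       /* raw object id */
    uint16_t filename_len;  /* length of the inline filename */
    uint16_t attr;          /* file mode */
    char     filename[];    /* filename bytes follow the struct */
};
```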
We already know the size due to the `memchr()` so use that information
instead of calling `strlen()` on it.
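The point can be shown in a few lines: once `memchr()` has located the terminating NUL, the filename length falls out of the pointer difference, so a second pass with `strlen()` is pure waste.

```c
#include <string.h>
#include <stddef.h>
#include <assert.h>

/* The filename in a tree object is NUL-terminated; once memchr() has
 * found that NUL we already know the filename's length, so pass it
 * along instead of re-scanning the same bytes with strlen(). */
static size_t filename_len_from_scan(const char *buffer, size_t buffer_len)
{
    const char *nul = memchr(buffer, '\0', buffer_len);
    return nul ? (size_t)(nul - buffer) : buffer_len;
}
```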
These are rather small allocations, so we end up spending a non-trivial
amount of time asking the OS for memory. Since these entries are tied to
the lifetime of their tree, we can give the tree a pool so we speed up
the allocations.
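The idea can be sketched as a minimal bump-allocator pool: grab one large block up front and hand out small entry-sized chunks from it, freeing everything at once when the tree dies. This is a sketch of the concept only, not libgit2's actual `git_pool` implementation.

```c
#include <stdlib.h>
#include <stddef.h>
#include <assert.h>

/* Minimal bump-allocator pool. All entries die together with the pool,
 * matching tree entries whose lifetime is tied to their tree.
 * (A sketch of the idea, not libgit2's real git_pool.) */
struct pool {
    char  *block;
    size_t used;
    size_t capacity;
};

static int pool_init(struct pool *p, size_t capacity)
{
    p->block = malloc(capacity);
    p->used = 0;
    p->capacity = capacity;
    return p->block ? 0 : -1;
}

static void *pool_alloc(struct pool *p, size_t size)
{
    void *ptr;
    size = (size + 7) & ~(size_t)7; /* keep 8-byte alignment */
    if (p->used + size > p->capacity)
        return NULL; /* a real pool would grow by chaining another block */
    ptr = p->block + p->used;
    p->used += size;
    return ptr;
}

static void pool_free(struct pool *p)
{
    free(p->block); /* releases every entry at once */
}
```

Each `pool_alloc` is a pointer bump rather than a trip into the allocator, which is exactly where the time goes for many tiny tree entries.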
We've already looked at the filename with `memchr()` and then used
`strlen()` to allocate the entry. We already know how much we have to
advance to get to the object id, so add the filename length instead of
looking at each byte again.
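Since a tree entry is laid out as `<mode> <filename>\0<20-byte id>`, knowing the filename length lets us jump straight to the object id. A sketch, with `entry_oid` as a hypothetical helper name:

```c
#include <stddef.h>
#include <assert.h>

/* Having found the terminating NUL with memchr(), we know the filename
 * length, so compute the object id's position directly rather than
 * walking byte by byte. (entry_oid is an illustrative name.) */
#define OID_RAWSZ 20

static const unsigned char *entry_oid(const char *filename_start,
                                      size_t filename_len,
                                      const char **next_entry)
{
    const unsigned char *oid =
        (const unsigned char *)filename_start + filename_len + 1;
    if (next_entry)
        *next_entry = (const char *)oid + OID_RAWSZ;
    return oid;
}
```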
Compiler warning fixes
Recursive Merge
When building a recursive merge base, allow conflicts to occur.
Use the file (with conflict markers) as the common ancestor.
The user has already seen and dealt with this conflict by virtue
of having a criss-cross merge. If they resolved this conflict
identically in both branches, then there will be no conflict in the
result. This is the best case scenario.
If they did not resolve the conflict identically in the two branches,
then we will generate a new conflict. If the user is simply using
standard conflict output then the results will be fairly sensible.
But if the user is using a mergetool or using diff3 output, then the
common ancestor will be a conflict file (itself with diff3 output,
haha!). This is quite terrible, but it matches git's behavior.
Use annotated commits to act as our virtual bases, instead of regular
commits, to avoid polluting the odb with virtual base commits and
trees. Instead, build an annotated commit with an index and pointers
to the commits that it was merged from.
When there are more than two common ancestors, continue merging the
virtual base with the additional common ancestors, effectively
octopus merging a new virtual base.
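The shape of that loop is a fold over the ancestor list: start from the first ancestor and keep merging the running virtual base with each remaining one. The sketch below abstracts the two-way recursive merge step behind a function pointer; the real code merges annotated commits, not integers.

```c
#include <stddef.h>
#include <assert.h>

/* Build a virtual base over N common ancestors by repeatedly merging
 * the running base with the next ancestor. merge_fn stands in for the
 * two-way recursive merge step. (Illustrative shape only.) */
typedef int (*merge_fn)(int base, int next);

static int build_virtual_base(const int *ancestors, size_t count,
                              merge_fn merge)
{
    size_t i;
    int base = ancestors[0];

    for (i = 1; i < count; i++)
        base = merge(base, ancestors[i]); /* fold in one more ancestor */

    return base;
}

/* toy merge for demonstration: combine as a bitmask */
static int toy_merge(int base, int next) { return base | next; }
```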
When the commits to merge have multiple common ancestors, build a
"virtual" base tree by merging the common ancestors.
Add a simple recursive test - where multiple ancestors exist and
creating a virtual merge base from them would prevent a conflict.
Memleak fixes
checkout: only consider nsecs when built that way
|
|/ /
| |
| |
| |
| |
| |
| |
| | |
When examining the working directory and determining whether it's
up-to-date, only consider the nanoseconds in the index entry when
built with `GIT_USE_NSEC`. This prevents us from believing that
the working directory is always dirty when the index was originally
written by a git client that understands nsecs (such as git 2.x).
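The comparison reduces to a conditionally-compiled check. The sketch below models the timestamp fields only; the struct name is illustrative, while `GIT_USE_NSEC` is the build flag named in the commit.

```c
#include <stdbool.h>
#include <stdint.h>
#include <assert.h>

/* Compare an index entry's timestamp with the working file's: only look
 * at nanoseconds when built with GIT_USE_NSEC. Otherwise an index
 * written by an nsec-aware git (git 2.x) stores nonzero nsecs that a
 * non-nsec stat can never match, making every file look dirty forever. */
struct timestamp { int64_t seconds; uint32_t nanoseconds; };

static bool timestamps_differ(const struct timestamp *index_ts,
                              const struct timestamp *wd_ts)
{
    if (index_ts->seconds != wd_ts->seconds)
        return true;
#ifdef GIT_USE_NSEC
    if (index_ts->nanoseconds != wd_ts->nanoseconds)
        return true;
#endif
    return false; /* without GIT_USE_NSEC, stored nsecs are ignored */
}
```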
Fix <0 unsigned comparison (stat.st_size should be an off_t)
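The warning behind this fix: if the size is held in an unsigned type, `size < 0` is always false and the check is dead code. `stat(2)`'s `st_size` is a signed `off_t`, so the variable receiving it must be `off_t` for the comparison to mean anything. A minimal sketch:

```c
#include <sys/types.h>
#include <stdbool.h>
#include <assert.h>

/* With an unsigned size, `size < 0` can never be true and compilers
 * warn about the tautology. st_size is a signed off_t, so keeping the
 * value in off_t makes the validity check meaningful. */
static bool size_is_invalid(off_t size)
{
    return size < 0;
}
```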
Fix some warnings
Stat fixes
repository: distinguish sequencer cherry-pick and revert
These are not quite like their plain counterparts and require special handling.
Racy fixes for writing new indexes
Ensure that `git_index_read_index` clears the uptodate bit on
files that it modifies.
Further, do not propagate the cache from an on-disk index into
another on-disk index. Although this should not happen in practice,
since `git_index_read_index` is meant to bring an in-memory index
into another index (which may or may not be on-disk), ensure that we
do not accidentally carry these bits over when the function is misused.
Ensure that `git_index_read_tree` clears the uptodate bit on files
that it modifies.
The uptodate bit should have a lifecycle of a single read->write
on the index. Once the index is written, the files within it should
be scanned for racy timestamps against the new index timestamp.
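The post-write scan amounts to comparing each entry's mtime against the new index timestamp and smudging anything that is not strictly older. The sketch below uses the conventional racy-git trick of zeroing the cached file size so the next read falls back to a content comparison; the struct and function names are illustrative.

```c
#include <stdint.h>
#include <assert.h>

/* After writing the index, scan its entries against the new index
 * timestamp: an entry whose mtime is not older than the index itself
 * is racy (the file may have changed within the same clock tick), so
 * smudge its cached size to 0 to force a content check on next read.
 * (Names are illustrative; the zero-size smudge is the racy-git trick.) */
struct idx_entry { int64_t mtime; uint32_t file_size; };

static unsigned smudge_racy_entries(struct idx_entry *entries,
                                    unsigned count, int64_t index_mtime)
{
    unsigned i, smudged = 0;

    for (i = 0; i < count; i++) {
        if (entries[i].mtime >= index_mtime) {
            entries[i].file_size = 0; /* force rehash next time */
            smudged++;
        }
    }
    return smudged;
}
```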
Test that entries are only smudged when we write the index: entry
smudging prevents us from updating an index in a way that would make
it impossible to tell that an item was racy.
Consider when we load an index: any entries that have the same
(or newer) timestamp than the index itself are considered racy,
and are subject to further scrutiny.
If we *save* that index with the same entries that we loaded,
then the index would now have a newer timestamp than the entries,
and they would no longer be given that additional scrutiny, failing
our racy detection! So test that we smudge those entries only on
writing the new index, but that we can detect them (in diff) without
having to write.
When there's no matching index entry (for whatever reason), don't
try to dereference the null return value to get at the id.
Otherwise when we break something in the index API, the checkout
test crashes for confusing reasons and causes us to step through
it in a debugger thinking that we had broken much more than we
actually did.
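The defensive shape of the fix is simply to check the lookup result before reaching through it; the struct and helper below are illustrative stand-ins for the index API.

```c
#include <stddef.h>
#include <assert.h>

/* Model of the fix: when the index lookup returns no entry, report
 * "not found" instead of dereferencing NULL to reach the entry's id.
 * (index_entry/safe_entry_id are illustrative names.) */
struct index_entry { const char *path; const void *id; };

static const void *safe_entry_id(const struct index_entry *entry)
{
    if (entry == NULL)
        return NULL; /* no matching entry: don't crash, signal absence */
    return entry->id;
}
```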