The object::cache test module has two tests that do nearly the
same thing: given a cache limit, load a certain set of objects
and verify whether or not those objects have been cached.
Convert those tests to the new data-driven initializers, mainly
to demonstrate how the facility is meant to be used, and add some
additional test data.
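
As a sketch, the data-driven shape looks roughly like this (the struct,
names and values are illustrative, not the actual clar API):

    struct cache_case {
        size_t cache_limit;   /* cache size to configure, in bytes */
        const char *oid;      /* object to load from the test repository */
        int expect_cached;    /* whether the object must end up cached */
    };

    static struct cache_case cases[] = {
        {           0, "8496071c1b46c854b31185ea97743be6a8774479", 0 },
        { 1024 * 1024, "8496071c1b46c854b31185ea97743be6a8774479", 1 },
    };

A single test body then iterates over `cases` instead of duplicating the
load-and-verify logic per cache limit.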
Move to the `git_error` name in the internal API for error-related
functions.
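
In practice this is a mechanical rename of the internal calls, e.g.:

    /* before */
    giterr_set(GITERR_ODB, "failed to read object");

    /* after */
    git_error_set(GIT_ERROR_ODB, "failed to read object");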
We use the term "invalid" to refer to bad or malformed data, e.g.
`GIT_REF_INVALID` and `GIT_EINVALIDSPEC`. Since we're changing the
names of the `git_object_t`s in this release, update it to be
`GIT_OBJECT_INVALID` instead of `BAD`.
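
A call site changes accordingly:

    /* before */
    if (git_object_type(obj) == GIT_OBJ_BAD)
        return -1;

    /* after */
    if (git_object_type(obj) == GIT_OBJECT_INVALID)
        return -1;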
Use the new object_type enumeration names within the codebase.
Use the new-style index names throughout our own codebase.
The `parse_mode` function uses an open-coded octal number parser. The
parser is quite naive in that it simply parses until hitting a character
that is not in the accepted range '0' - '7', completely ignoring the
fact that a filemode is at most a 16 bit unsigned integer. If the
filemode is bigger than UINT16_MAX, it will thus overflow and produce an
invalid filemode for the object entry.

Fix the issue by using `git__strntol32` instead and doing a bounds
check. As this function already handles overflows, it neatly solves the
problem.

Note that previously, `parse_mode` also skipped the character
immediately following the filemode. In proper trees, this should be a
simple space, but in fact the parser accepted any character and simply
skipped over it. As a consequence of using `git__strntol32`, we now need
an explicit check for a trailing space after having parsed the filemode.
Because of the newly introduced error message, the test
object::tree::parse::mode_doesnt_cause_oob_read needs its error message
check adjusted, which is in fact a good thing, as it demonstrates that
we now fail when looking for the space immediately following the
filemode.

Add a test that shows that we now fail to parse such invalid filemodes.
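
A sketch of the resulting parse; the exact internal signature may
differ, but `git__strntol32` is the length-aware, overflow-checking
parser described above:

    static int parse_mode(uint16_t *mode_out, const char *buffer,
                          size_t buffer_len, const char **buffer_out)
    {
        int32_t mode;
        const char *end;

        /* bounded octal parse; fails on overflow of an int32 */
        if (git__strntol32(&mode, buffer, buffer_len, &end, 8) < 0)
            return -1;

        /* a filemode must fit into a 16 bit unsigned integer */
        if (mode < 0 || mode > UINT16_MAX)
            return -1;

        /* the filemode must be followed by a single space */
        if (end == buffer + buffer_len || *end != ' ')
            return -1;

        *mode_out = (uint16_t)mode;
        *buffer_out = end + 1;
        return 0;
    }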
When parsing a tree entry's mode, we eagerly parse until we hit a
character that is not in the accepted set of octal digits '0' - '7'. If
the provided buffer is not NUL terminated, we may thus read
out-of-bounds.

Fix the issue by passing the buffer length to `parse_mode` and paying
attention to it. Note that this is not a vulnerability in our usual code
paths, as all object data read from the ODB is NUL terminated.
We currently don't have any tests that directly exercise the tree
parser. This is because the parsers for raw object data have only
recently been introduced with commit ca4db5f4a (object: implement
function to parse raw data, 2017-10-13); prior to that, the setup was
simply too cumbersome, as it always required going through the ODB.
Now that we have the infrastructure, add a suite of tests that directly
exercise the tree parser and various edge cases.
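
One such test, roughly (the test name and raw data here are
illustrative; `git_object__from_raw` is the entry point added in the
commit referenced above):

    void test_object_tree_parse__single_entry(void)
    {
        /* "100644 a\0" followed by a fake 20 byte binary oid */
        const char data[] = "100644 a\0aaaaaaaaaaaaaaaaaaaa";
        git_object *object;

        cl_git_pass(git_object__from_raw(&object, data,
                                         sizeof(data) - 1, GIT_OBJ_TREE));
        cl_assert(git_tree_entrycount((git_tree *)object) == 1);
        git_object_free(object);
    }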
The commit message encoding is currently being parsed with the
`git__prefixcmp` function. As this function does not accept a buffer
length, it will happily read past the buffer's end if the buffer is not
NUL terminated.

Fix the issue by using `git__prefixncmp` instead. Add a test that
verifies that we are unable to parse the encoding field if it's cut off
by the supplied buffer length.
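
The change at the parsing site, assuming `git__prefixncmp` takes the
string, its length, and the prefix:

    /* before: may read past the end of an unterminated buffer */
    if (git__prefixcmp(buffer, "encoding ") == 0)
        buffer += strlen("encoding ");

    /* after: the comparison is bounded by the bytes available */
    if (git__prefixncmp(buffer, buffer_end - buffer, "encoding ") == 0)
        buffer += strlen("encoding ");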
We currently do not have any test suites dedicated to parsing commits
from their raw representations. Add one based on `git_object__from_raw`
to be able to test special cases more easily.
When parsing tags, we skip all unknown fields that appear before the tag
message. This skipping is done with a plain `strstr(buffer, "\n\n")`
searching for the two newlines that separate the tag fields from the tag
message. As it is not possible to supply a buffer length to `strstr`,
this call may run past the buffer's end and thus result in an out of
bounds read. And as `strstr` may then return a pointer that is itself
out of bounds, the following computation of `buffer_end - buffer` will
overflow and result in an allocation of an invalid length.

Fix the issue by using `git__memmem` instead. Add a test that verifies
that parsing such a tag now fails because the tag has no message, not
because of an invalid allocation.
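
The bounded search, assuming a memmem-style signature for the new
helper:

    /* before: strstr requires a NUL terminated buffer and may walk
     * past buffer_end */
    search = strstr(buffer, "\n\n");

    /* after: the search cannot leave the supplied buffer */
    search = git__memmem(buffer, buffer_end - buffer, "\n\n", 2);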
While the tests in object::tag::read exercise reading and parsing valid
tags from the ODB, they barely try to verify that the parser fails in a
sane way when parsing invalid tags. Create a new test suite
object::tag::parse that directly exercises the parser via
`git_object__from_raw` and add various tests for valid and invalid tags.
When we add entries to a treebuilder we validate them. But we even
validate those that we're adding because they exist in the base tree.
This prevents using the normal mechanisms on such trees, even to fix
them.

Keep track of whether the entry we're appending comes from an existing
tree and bypass the name and id validation if it's from existing data.
C++ style comments ("//") are not specified by the ISO C90 standard and
thus do not conform to it. While libgit2 aims to conform to C90, we did
not enforce this until now, which is why quite a lot of these
non-conforming comments have snuck into our codebase. Do a tree-wide
conversion of all C++ style comments to the supported C style comments
to allow us to enforce strict C90 compliance in a later commit.
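
For example:

    // A C++ style comment: not part of ISO C90.
    /* The equivalent C style comment, accepted by a strict C90 compiler. */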
Instead of leaving it uninitialized and relying on luck for it to be
non-zero, let's give it a dummy hash so we make valgrind happy (in this
case the hash comes from `sha1sum </dev/null`).
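
For reference, the SHA-1 of empty input is well known (the variable name
here is illustrative):

    /* the output of `sha1sum </dev/null` */
    static const char dummy_sha1[] =
        "da39a3ee5e6b4b0d3255bfef95601890afd80709";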
In commit a96d3cc3f (cache-tree: reject entries with null sha1,
2017-04-21), the git.git project changed its stance on null OIDs in
tree objects. Previously, null OIDs were accepted in tree entries to
help tools repair broken history. This turned out to cause problems,
though, in that many code paths mistakenly passed null OIDs to be added
to a tree, which was not properly detected.

Align our own code base with the upstream change and reject writing
tree entries early when the OID is all-zero.
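
The check is a sketch along these lines (error class and message are
illustrative):

    /* reject tree entries whose id is all-zero */
    if (git_oid_iszero(id)) {
        giterr_set(GITERR_TREE, "cannot add a tree entry with a null id");
        return -1;
    }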
Verifying the hashsums of objects we read from the ODB may be costly,
as we have to perform an additional hash computation on the object.
Especially when reading large objects, the penalty can be as high as
35%, as can be seen when executing the equivalent of `git cat-file` with
and without verification enabled. To mitigate this, add a global option
for libgit2 which enables developers to turn off the verification, e.g.
when they can be reasonably sure that the objects on disk won't be
corrupted.
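
Turning verification off is then a one-liner via the global option
setter:

    /* sketch: disable ODB hash verification process-wide */
    git_libgit2_opts(GIT_OPT_ENABLE_STRICT_HASH_VERIFICATION, 0);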
The upstream git.git project verifies objects when looking them up from
disk. This avoids scenarios where objects have somehow become corrupt on
disk, e.g. due to hardware failures or bit flips. While our mantra is
usually to follow upstream behavior, we have not done so in this case,
as we never check the hashes of objects we have just read from disk.

To fix this, create a new error class `GIT_EMISMATCH` which denotes
that we have looked up an object with a hashsum mismatch. `odb_read_1`
will then, after having read the object from its backend, hash the
object and compare the resulting hash to the expected hash. If the
hashes do not match, it will return an error.

This obviously introduces another computation of checksums and could
potentially impact performance. Note though that we usually perform I/O
operations directly before doing this computation, and as such the
actual overhead should be drowned out by I/O. Running our test suite
seems to confirm this guess: on a Linux system with best-of-five
timings, we had 21.592s with the check enabled and 21.590s with the
check disabled. Note though that our test suite mostly contains very
small blobs; it is expected that repositories with bigger blobs may see
a larger hit from this check.

In addition to a new test, we also had to change the
odb::backend::nonrefreshing test suite, which now triggers a hashsum
mismatch when looking up the commit "deadbeef...". This is expected, as
the fake backend allocated inside the test returns an empty object for
the OID "deadbeef...", which will obviously not hash back to
"deadbeef..." again. We can simply adjust the expected hash to equal the
hash of the empty object to fix this test.
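
A sketch of the verification step; the helper name and error message
here are illustrative:

    static int check_object_hash(const git_oid *expected, const void *data,
                                 size_t len, git_otype type)
    {
        git_oid actual;

        /* re-hash what the backend returned */
        if (git_odb_hash(&actual, data, len, type) < 0)
            return -1;

        /* compare against the id that was looked up */
        if (git_oid_cmp(expected, &actual) != 0) {
            giterr_set(GITERR_ODB, "hashsum mismatch in object");
            return GIT_EMISMATCH;
        }

        return 0;
    }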
We currently have no tests which check whether we fail when reading
corrupted objects. Add one which modifies the contents of an object
stored on disk and then tries to read the object.
The object::lookup tests use the "testrepo.git" repository in a
read-only way, so we do not set the repository up as a sandbox but
simply open it. But in a future commit, we will want to test looking up
objects which are corrupted in some way, which requires us to modify the
on-disk data. Doing this in a repository without creating a sandbox
would modify the contents of our own libgit2 repository, though.

Create the repository in a sandbox to avoid this.
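
With clar's sandbox helpers, the setup changes roughly like this:

    /* before: open the fixture in place, read-only */
    cl_git_pass(git_repository_open(&g_repo, cl_fixture("testrepo.git")));

    /* after: copy the fixture into a sandbox we are free to corrupt */
    g_repo = cl_git_sandbox_init("testrepo.git");

    /* ... and in the suite's cleanup function ... */
    cl_git_sandbox_cleanup();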
We do not currently use the sorted version of this input in the
function, which means we produce bad results.
When we remove all entries in a tree, we should remove that tree from
its parent rather than include the empty tree.
This gives us trees with subdirectories, which the new test needs.
Instead of going through the usual steps of reading a tree recursively
into an index, modifying it and writing it back out as a tree, introduce
a function to perform simple updates more efficiently.
`git_tree_create_updated` avoids reading trees which are not modified
and supports upsert and delete operations. It is not as versatile as
modifying the index, but it makes some common operations much more
efficient.
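
Usage is roughly as follows, assuming `repo`, `baseline` and `blob_id`
are already at hand:

    git_tree_update updates[2];
    git_oid new_tree_id;
    int error;

    updates[0].action = GIT_TREE_UPDATE_UPSERT;
    updates[0].id = blob_id;
    updates[0].filemode = GIT_FILEMODE_BLOB;
    updates[0].path = "docs/README.md";

    updates[1].action = GIT_TREE_UPDATE_REMOVE;
    updates[1].path = "old-name.txt"; /* id and filemode are ignored */

    error = git_tree_create_updated(&new_tree_id, repo, baseline,
                                    2, updates);

Only the trees along the touched paths are rewritten; unrelated subtrees
are reused as-is.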
While no extra header fields are defined for tags, git accepts them by
ignoring them and continuing the search for the message. There are a few
tags like this in the wild which git parses just fine, so we should do
the same.
The callback mechanism makes it awkward to write data from an IO
source; move to `_fromstream()` which lets the caller remain in control,
in the same vein as we prefer iterators over foreach callbacks.
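
The streaming shape, with error handling elided (`repo`, `data` and
`len` are assumed):

    git_writestream *stream;
    git_oid blob_id;

    git_blob_create_fromstream(&stream, repo, NULL);

    /* the caller writes whenever data becomes available */
    stream->write(stream, data, len);

    /* finalize the stream and obtain the new blob's id */
    git_blob_create_fromstream_commit(&blob_id, stream);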
By returning when the count goes to zero rather than below it, setting
`howmany` to 7 in fact writes out the string only 6 times.

Correct the termination condition to write out the string the number of
times we specify.
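
The off-by-one in miniature (`write_once` stands in for the actual
write):

    int howmany = 7;

    /* before: the pre-decrement ends the loop one pass early,
     * so the body runs only 6 times */
    while (--howmany > 0)
        write_once();

    howmany = 7;

    /* after: test first, then decrement: exactly 7 writes */
    while (howmany-- > 0)
        write_once();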
The pair of `git_blob_create_frombuffer()` and
`git_blob_create_frombuffer_commit()` is meant to replace
`git_blob_create_fromchunks()` by providing a way for a user to write a
new blob when they want filtering or do not know the size.

This approach allows the caller to retain control over when to add data
to the buffer, and it fits more naturally into higher-level languages'
own stream abstractions instead of having to handle IO waits in a
callback.

The in-memory buffer size of 2MB is chosen somewhat arbitrarily: it is a
round number of pages and a multiple of the usual page size, and a value
which most blobs seem likely to fall either well below or well above.

This implementation re-uses the helper we have from `_fromchunks()`, so
we end up writing everything to disk, though hopefully more efficiently
than with a default filebuf. A later optimisation could be to avoid
writing the in-memory contents to disk, at the cost of some extra
complexity.
Instead of copying over the data into the individual entries, point to
the originals, which are already in a format we can use.
Submodules don't exist in the object database, and the code makes us try
to look up a blob with the submodule's commit id, which is obviously not
going to work.

Skip the test if the user wants to insert a submodule.
Use legitimate (existing) object IDs in tests so that we have the
ability to turn on strict object validation when running tests.
When `GIT_OPT_ENABLE_STRICT_OBJECT_CREATION` is turned on, validate
the tree and parent ids given to treebuilder insertion.
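
Callers opt in through the global option setter:

    /* enable the stricter treebuilder validation */
    git_libgit2_opts(GIT_OPT_ENABLE_STRICT_OBJECT_CREATION, 1);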
This function is a constructor, so let's name it like one and leave
_create() for the reference functions, which do create/write the
reference.
Path validation may be influenced by `core.protectHFS` and
`core.protectNTFS` configuration settings, thus treebuilders
can take a repository to influence their configuration.
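
Together with the `_new` rename above, the constructor now reads:

    git_treebuilder *bld;
    int error;

    /* repo supplies core.protectHFS / core.protectNTFS for path
     * validation; the last argument is an optional base tree */
    error = git_treebuilder_new(&bld, repo, NULL);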
There are some combinations of objects and target types which we know
cannot be fulfilled. Return EINVALIDSPEC for those to signify that there
is a mismatch between the user-provided data and what the object model
is capable of satisfying.

If we start at a tag and in the course of peeling find out that we
cannot reach a particular type, we return EPEEL.
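
For example, as a sketch (`tag_obj` and `blob_obj` are assumed to be
already looked-up objects):

    git_object *peeled;
    int error;

    /* an annotated tag of a commit can be peeled down to a tree */
    error = git_object_peel(&peeled, tag_obj, GIT_OBJ_TREE);

    /* a blob can never peel to a commit: fails with EINVALIDSPEC */
    error = git_object_peel(&peeled, blob_obj, GIT_OBJ_COMMIT);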
The old `allocfmt` is of no use to callers, as they are not able to free
the returned buffer. Export a new API that returns a static string that
doesn't need to be freed.
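
Usage of the static variant (`oid` assumed):

    /* no allocation, no free; the buffer is reused by the next call,
     * so copy the string if it must outlive it */
    printf("oid: %s\n", git_oid_tostr_s(&oid));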
Finding a filename in a vector means we need to re-sort it every time we
want to read from it, and that includes every time we want to write to
it as well, since we need to find duplicate keys.

A hash-map fits what we want to do much more accurately, as we do not
care about sorting, just about the particular filename.

We still keep removed entries around, as the interface lets you assume
they will stay around until the treebuilder is cleared or freed; in this
case that involves an append to a vector in the filter case, which can
now fail.

The only time we care about sorting is when we write out the tree, so
let's make that the only time we do any sorting.