Ensure that the buffer given to `git_index_add_frombuffer` represents a
regular blob, an executable blob, or a link. Explicitly reject commit
entries (submodules) - it makes little sense to allow users to add a
submodule from a string; there's no possible path to success.
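
The mode check described above might be sketched as follows. This is a
minimal illustration, not the actual libgit2 implementation: the
`valid_entry_mode` helper is hypothetical, though the inlined constants
mirror libgit2's `GIT_FILEMODE_*` values.

```c
#include <stdbool.h>
#include <stdint.h>

#define GIT_FILEMODE_BLOB            0100644
#define GIT_FILEMODE_BLOB_EXECUTABLE 0100755
#define GIT_FILEMODE_LINK            0120000
#define GIT_FILEMODE_COMMIT          0160000 /* submodule: rejected */

static bool valid_entry_mode(uint32_t mode)
{
	/* only regular blobs, executable blobs and links can be created
	 * from a buffer; commit entries (submodules) have no blob
	 * content, so there is no possible path to success */
	return mode == GIT_FILEMODE_BLOB ||
	       mode == GIT_FILEMODE_BLOB_EXECUTABLE ||
	       mode == GIT_FILEMODE_LINK;
}
```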

docs: fix typo in "release.md" filename

docs: add release documentation
|
| | |
|
| |
| |
| |
| |
| | |
This should provide the release manager enough to know which steps to take when
it's time to cut a new release.

CHANGELOG: update for v0.27.0

http: standardize user-agent addition
|
|/ / /
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The winhttp and posix http each need to add the user-agent to their
requests. Standardize on a single function to include this so that we
do not get the version numbers we're sending out of sync.
Assemble the complete user agent in `git_http__user_agent`, returning
assembled strings.
Co-authored-by: Patrick Steinhardt <ps@pks.im>
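
A sketch of the idea, assuming a hypothetical `user_agent` helper and an
illustrative version string; this is not the actual
`git_http__user_agent` internals.

```c
#include <stdio.h>
#include <string.h>

#define LIBGIT2_VERSION "0.27.0" /* illustrative */

/* assemble the complete user agent into buf in one place, so every
 * transport sends the same string; returns the length, or -1 on
 * truncation or error */
static int user_agent(char *buf, size_t len)
{
	int n = snprintf(buf, len, "git/2.0 (libgit2 %s)", LIBGIT2_VERSION);

	if (n < 0 || (size_t)n >= len)
		return -1;

	return n;
}
```

Both transports would then call this one function instead of formatting
their own version strings.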

Plug resource leaks

odb: error when we can't alloc an object
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | | |
Return an error to the caller when we can't create an object header for
some reason (printf failure) instead of simply asserting.
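
The error path might look like this sketch, where `format_object_header`
is an illustrative stand-in for the real header-formatting routine:

```c
#include <stdio.h>
#include <string.h>

/* format "<type> <size>" into hdr; on printf failure or truncation,
 * return an error to the caller instead of asserting */
static int format_object_header(char *hdr, size_t n,
                                size_t obj_len, const char *type_name)
{
	int len = snprintf(hdr, n, "%s %zu", type_name, obj_len);

	if (len < 0 || (size_t)len >= n)
		return -1;

	return len + 1; /* header size includes the trailing NUL */
}
```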

There's no recovery possible if we're so confused or corrupted that
we're trying to overwrite our memory. Simply assert.
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Provide error messages on hash failures: assert when given invalid
input instead of failing with a user error; provide error messages
on program errors.

It's unlikely that we'll fail to allocate a single byte, but let's check
for allocation failures for good measure. Untangle the use of `-1` as a
marker of not having found the hardcoded odb object; use it to reflect
actual errors instead.
|
|/ / /
| | |
| | |
| | |
| | | |
At the moment, we're swallowing the allocation failure. We need to
return the error to the caller.

Streaming read support for the loose ODB backend

`MAX_HEADER_LEN` is a more descriptive constant name.

Only run the large file tests on 64-bit platforms.
Even though we support streaming reads on objects, and do not need to
fit them in memory, we use `size_t` in various places to reflect the
size of an object.

When checking to see if a file has zlib deflate content, make sure that
we actually have read at least two bytes before examining the array.
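
The guard can be sketched as follows; the CMF/FLG check shown is the
standard zlib header test, but `is_zlib_deflate` is an illustrative
name, not the libgit2 function:

```c
#include <stdbool.h>
#include <stddef.h>

/* a zlib stream starts with a CMF/FLG byte pair whose big-endian value
 * is a multiple of 31 and whose low CMF nibble is 8 (deflate) */
static bool is_zlib_deflate(const unsigned char *data, size_t len)
{
	/* examining data[0] and data[1] without this check would read
	 * past the end of a short buffer */
	if (len < 2)
		return false;

	return (data[0] & 0x0f) == 0x08 &&
	       ((data[0] << 8 | data[1]) % 31) == 0;
}
```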

Test that we can call `read_header` on large blobs. This should succeed
on all platforms, since we read only a few bytes into memory to be able
to parse the header.
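
Parsing the header out of those first few bytes might look like this
sketch; `parse_loose_header` is an illustrative name, and the real code
operates on the inflated prefix of the object.

```c
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* parse a loose object header of the form "<type> <size>\0" from the
 * first few inflated bytes; returns 0 on success, -1 otherwise */
static int parse_loose_header(const char *hdr, size_t hdr_len,
                              char *type, size_t type_len, size_t *size)
{
	const char *space = memchr(hdr, ' ', hdr_len);
	const char *end;
	size_t n = 0;

	if (!space || (size_t)(space - hdr) >= type_len)
		return -1;

	memcpy(type, hdr, space - hdr);
	type[space - hdr] = '\0';

	for (end = space + 1; end < hdr + hdr_len && *end != '\0'; end++) {
		if (!isdigit((unsigned char)*end))
			return -1;
		n = n * 10 + (*end - '0');
	}

	if (end == hdr + hdr_len) /* header must be NUL-terminated */
		return -1;

	*size = n;
	return 0;
}
```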

Support `read_header` for "packlike loose objects", a short-lived and
uncommonly used loose object format that encodes the header before the
zlib deflate data.
This will never actually be seen in the wild, but add support for it for
completeness and (more importantly) because our corpus of test data has
objects in this format, so it's easier to support it than to try to
special case it.

Make `read_header` use the common zstream implementation.
Remove the now unnecessary zlib wrapper in odb_loose.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Introduce `get_output_chunk` that will inflate/deflate all the available
input buffer into the output buffer. `get_output` will call
`get_output_chunk` in a loop, while other consumers can use it to
inflate only a piece of the data.
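
The loop structure can be sketched as follows, with a trivial
byte-copying stream standing in for the zlib stream; the names mirror
the description above, not the actual `git_zstream` code.

```c
#include <stddef.h>
#include <string.h>

#define CHUNK 8 /* small internal chunk, to force several iterations */

typedef struct {
	const unsigned char *in;
	size_t in_len;
} stream;

/* process at most one chunk of the available input into the output
 * buffer; returns the number of bytes produced */
static size_t get_output_chunk(stream *s, unsigned char *out, size_t out_len)
{
	size_t n = out_len < CHUNK ? out_len : CHUNK;

	if (s->in_len < n)
		n = s->in_len;

	memcpy(out, s->in, n);
	s->in += n;
	s->in_len -= n;
	return n;
}

/* fill `out` by calling get_output_chunk in a loop; other consumers
 * can call get_output_chunk directly for a single piece */
static size_t get_output(stream *s, unsigned char *out, size_t out_len)
{
	size_t total = 0, n;

	while (total < out_len &&
	       (n = get_output_chunk(s, out + total, out_len - total)) > 0)
		total += n;

	return total;
}
```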

Refactor packlike loose object reads to use `git_zstream` for
simplification.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
A "packlike" loose object was a briefly lived loose object format where
the type and size were encoded in uncompressed space at the beginning of
the file, followed by the compressed object contents. Handle these in a
streaming manner as well.

Since some test situations may have generous disk space but limited RAM
(e.g. hosted build agents), test that we can stream a large file into a
loose object, and then stream it out of the loose object storage.
|
| | | |
| | | |
| | | |
| | | | |
Provide a streaming loose object reader.

The streaming read functionality should provide the length and the type
of the object, like the normal read functionality does.

There are two streaming functions; one for reading, one for writing.
Disambiguate function names between `stream` and `writestream` to make
allowances for a read stream.

Recursive merge: reverse the order of merge bases

Our virtual commit must be the last argument to merge-base, since our
algorithm pushes _both_ parents of the virtual commit. Per the
merge-base documentation:
> Given three commits A, B and C, git merge-base A B C will compute the
> merge base between A and a hypothetical commit M
We want to calculate the merge base between the actual commit ("two")
and the virtual commit ("one"): since "one" actually pushes its parents
into the merge-base calculation, we need to calculate the merge base of
"two" and the parents of "one".

Virtual base building: ensure that the virtual base is created and
revwalked in the same way as git.

When the commits being merged have multiple merge bases, reverse the
order when creating the virtual merge base. This is for compatibility
with git's merge-recursive algorithm, and ensures that we build
identical trees.
Git does this to try to use older merge bases first. Per 8918b0c:
> It seems to be the only sane way to do it: when a two-head merge is
> done, and the merge-base and one of the two branches agree, the
> merge assumes that the other branch has something new.
>
> If we start creating virtual commits from newer merge-bases, and go
> back to older merge-bases, and then merge with newer commits again,
> chances are that a patch is lost, _because_ the merge-base and the
> head agree on it. Unlikely, yes, but it happened to me.

Provide a simple function to reverse an oidarray.
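
Such a reversal might be sketched like this; the `git_oid` struct is
reduced to its raw 20-byte id for illustration, and `oidarray_reverse`
is an illustrative name.

```c
#include <stddef.h>

typedef struct { unsigned char id[20]; } git_oid;

/* reverse an array of oids in place by swapping ends toward the middle */
static void oidarray_reverse(git_oid *ids, size_t count)
{
	git_oid tmp;
	size_t i;

	for (i = 0; i < count / 2; i++) {
		tmp = ids[i];
		ids[i] = ids[count - i - 1];
		ids[count - i - 1] = tmp;
	}
}
```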

config: handle CRLF-only lines and BOM
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
The function to detect a BOM takes an offset where it shall look for a
BOM. No caller uses that, and searching for the BOM in the middle of a
buffer seems to be very unlikely, as a BOM should only ever exist at
file start.
Remove the parameter, as it has already caused confusion due to its
weirdness.

The function `skip_bom` is used to detect and skip BOM marks prior to
parsing a configuration file. To do so, it simply uses
`git_buf_text_detect_bom`. But since the refactoring to use the parser
interface in commit 9e66590bd (config_parse: use common parser
interface, 2017-07-21), the BOM detection has actually been broken.
The issue stems from a misunderstanding of `git_buf_text_detect_bom`: it
was assumed that its third parameter limits the length of the character
sequence that is to be analyzed, while in fact it is an offset at which
we want to detect the BOM. Fix the parameter to be `0` instead of the
buffer length, as we always want to check the beginning of the
configuration file.
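
A sketch of detection pinned to the start of the buffer, showing only
the UTF-8 BOM; `detect_bom` is an illustrative name, not the
`git_buf_text_detect_bom` API.

```c
#include <stddef.h>
#include <string.h>

/* look for a BOM at offset 0 only, since a BOM can only ever appear at
 * the start of a file; returns the number of bytes to skip */
static size_t detect_bom(const unsigned char *data, size_t len)
{
	static const unsigned char utf8_bom[] = { 0xef, 0xbb, 0xbf };

	if (len >= sizeof(utf8_bom) &&
	    memcmp(data, utf8_bom, sizeof(utf8_bom)) == 0)
		return sizeof(utf8_bom);

	return 0;
}
```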
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Currently, the configuration parser will fail reading empty lines with
just an CRLF-style line ending. Special-case the '\r' character in order
to handle it the same as Unix-style line endings. Add tests to spot this
regression in the future.
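
The special case can be sketched as follows; `line_ending_len` is an
illustrative helper, not the parser's actual code.

```c
#include <stddef.h>

/* return the length of the line terminator at *p, or 0 if p is not at
 * a line ending; an empty line is then just a terminator by itself */
static size_t line_ending_len(const char *p, size_t remaining)
{
	if (remaining >= 2 && p[0] == '\r' && p[1] == '\n')
		return 2; /* special-case '\r' followed by '\n' (CRLF) */
	if (remaining >= 1 && p[0] == '\n')
		return 1; /* Unix-style line ending */
	return 0;
}
```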