Commit message | Author | Age | Files | Lines
The WinHTTP and POSIX HTTP transports each need to add the user agent to
their requests. Standardize on a single function to build it so that the
version numbers we send do not get out of sync.
Assemble the complete user agent in `git_http__user_agent`, returning
the assembled string.
Co-authored-by: Patrick Steinhardt <ps@pks.im>
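The idea of assembling the user agent in one place can be sketched as follows; the function name, format, and version strings here are hypothetical stand-ins, not the actual libgit2 values.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical version strings; a real implementation would pull
 * these from a single version header so they cannot diverge. */
#define PRODUCT_NAME    "git"
#define PRODUCT_VERSION "2.0"
#define LIB_VERSION     "0.26.0"

/* Build the complete user agent into `out`; returns its length,
 * or -1 if the buffer is too small or formatting fails. */
static int http_user_agent(char *out, size_t out_len)
{
	int n = snprintf(out, out_len, "%s/%s (lib %s)",
	                 PRODUCT_NAME, PRODUCT_VERSION, LIB_VERSION);
	if (n < 0 || (size_t)n >= out_len)
		return -1;
	return n;
}
```

Both the WinHTTP and POSIX transports would then call this one function, so the version string sent on the wire is defined in exactly one place.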
Plug resource leaks
odb: error when we can't alloc an object
Return an error to the caller when we can't create an object header for
some reason (printf failure) instead of simply asserting.
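The change can be illustrated with a sketch of formatting a loose-object header; the function name and error convention are illustrative, not the actual libgit2 code.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Format a "type size" header into `out`. On printf failure or a
 * too-small buffer, return -1 to the caller instead of asserting. */
static int format_object_header(char *out, size_t out_len,
                                const char *type_name, uint64_t size)
{
	int n = snprintf(out, out_len, "%s %" PRIu64, type_name, size);
	if (n < 0 || (size_t)n >= out_len)
		return -1;    /* report the error; don't abort the process */
	return n + 1;     /* header length includes the trailing NUL */
}
```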
There's no recovery possible if we're so confused or corrupted that
we're trying to overwrite our memory. Simply assert.
Provide error messages on hash failures: assert when given invalid
input instead of reporting a user error, and provide proper error
messages on program errors.
It's unlikely that we'll fail to allocate a single byte, but let's check
for allocation failures for good measure. Untangle the use of `-1` as a
marker for not having found the hardcoded odb object; use it to reflect
actual errors instead.
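The separation of "not found" from "error" can be sketched with toy return codes; the names, values, and lookup table below are hypothetical, chosen only to show the pattern.

```c
#include <stddef.h>

#define TOY_OK         0
#define TOY_ERROR     (-1)   /* a real failure (e.g. allocation) */
#define TOY_ENOTFOUND (-3)   /* distinct "no such object" result */

/* Look up an id in a toy table of hardcoded one-byte objects. */
static int hardcoded_lookup(const unsigned char *id, size_t len,
                            size_t *out_idx)
{
	static const unsigned char table[] = { 0xaa, 0xbb, 0xcc };
	size_t i;

	if (id == NULL || len == 0)
		return TOY_ERROR;      /* invalid input is an actual error */

	for (i = 0; i < sizeof(table); i++) {
		if (table[i] == id[0]) {
			*out_idx = i;
			return TOY_OK;
		}
	}
	return TOY_ENOTFOUND;      /* not an error, just not hardcoded */
}
```

Callers can then propagate `TOY_ERROR` while treating `TOY_ENOTFOUND` as an ordinary miss, rather than overloading `-1` for both.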
At the moment, we're swallowing the allocation failure. We need to
return the error to the caller.
Streaming read support for the loose ODB backend
`MAX_HEADER_LEN` is a more descriptive constant name.
Only run the large file tests on 64 bit platforms.
Even though we support streaming reads on objects, and do not need to
fit them in memory, we use `size_t` in various places to reflect the
size of an object.
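The constraint can be sketched as a simple guard; `SIZE_MAX` comes from `<stdint.h>`, and the helper name is hypothetical.

```c
#include <stdint.h>

/* An object's declared size must fit in size_t before we can
 * allocate or index memory for it; on 32-bit platforms this
 * rejects objects larger than the address space allows. */
static int object_size_fits(uint64_t declared_size)
{
	return declared_size <= (uint64_t)SIZE_MAX;
}
```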
When checking to see if a file has zlib deflate content, make sure that
we actually have read at least two bytes before examining the array.
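A zlib stream begins with a two-byte header: the low nibble of the first byte is 8 (deflate), and the two bytes read big-endian are a multiple of 31. A sketch of the check, guarding the length first:

```c
#include <stddef.h>

/* Return 1 if `data` starts with a valid zlib (deflate) header.
 * We must have at least two bytes before examining the array. */
static int is_zlib_compressed_data(const unsigned char *data, size_t len)
{
	unsigned int w;

	if (len < 2)
		return 0;

	w = ((unsigned int)data[0] << 8) | data[1];
	return (data[0] & 0x8f) == 0x08 && (w % 31) == 0;
}
```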
Test that we can read_header on large blobs. This should succeed on all
platforms since we read only a few bytes into memory to be able to
parse the header.
Support `read_header` for "packlike loose objects", a briefly and
uncommonly used loose object format that encodes the header before the
zlib deflate data.
This format will never actually be seen in the wild, but add support for
it for completeness and (more importantly) because our corpus of test
data has objects in this format, so it's easier to support it than to
try to special-case it.
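The packlike header uses the same varint encoding as packfile object headers: the first byte carries the type in bits 4-6 and the low four size bits, with the high bit marking continuation. A sketch of parsing it (the function name is hypothetical):

```c
#include <stddef.h>
#include <stdint.h>

/* Parse a pack-style object header: returns the number of header
 * bytes consumed, or -1 if the buffer ends mid-header. */
static int parse_packlike_header(const unsigned char *data, size_t len,
                                 int *type, uint64_t *size)
{
	size_t used = 0;
	unsigned char c;
	int shift = 4;

	if (len == 0)
		return -1;

	c = data[used++];
	*type = (c >> 4) & 7;  /* object type lives in bits 4-6 */
	*size = c & 15;        /* low four bits of the size */

	while (c & 0x80) {     /* continuation bit set? */
		if (used >= len)
			return -1;
		c = data[used++];
		*size += (uint64_t)(c & 0x7f) << shift;
		shift += 7;
	}
	return (int)used;
}
```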
Make `read_header` use the common zstream implementation.
Remove the now unnecessary zlib wrapper in odb_loose.
Introduce `get_output_chunk` that will inflate/deflate all the available
input buffer into the output buffer. `get_output` will call
`get_output_chunk` in a loop, while other consumers can use it to
inflate only a piece of the data.
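The relationship between the two can be sketched with a toy stream that just copies bytes; the structure mirrors the loop described above, while the real code drives zlib inflate/deflate instead of `memcpy`.

```c
#include <string.h>

typedef struct {
	const unsigned char *in;   /* remaining input */
	size_t in_len;
} toy_stream;

/* Move one chunk of available input into the output buffer;
 * returns the number of bytes produced. */
static size_t get_output_chunk(toy_stream *s,
                               unsigned char *out, size_t out_len)
{
	size_t n = s->in_len < out_len ? s->in_len : out_len;
	memcpy(out, s->in, n);
	s->in += n;
	s->in_len -= n;
	return n;
}

/* Fill the output buffer by calling get_output_chunk in a loop;
 * other consumers can call get_output_chunk directly to process
 * only a piece of the data. */
static size_t get_output(toy_stream *s,
                         unsigned char *out, size_t out_len)
{
	size_t total = 0;
	while (total < out_len && s->in_len > 0)
		total += get_output_chunk(s, out + total, out_len - total);
	return total;
}
```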
Refactor packlike loose object reads to use `git_zstream` for
simplification.
A "packlike" loose object was a briefly lived loose object format where
the type and size were encoded in uncompressed space at the beginning of
the file, followed by the compressed object contents. Handle these in a
streaming manner as well.
Since some test situations may have generous disk space, but limited RAM
(e.g. hosted build agents), test that we can stream a large file into a
loose object, and then stream it out of the loose object storage.
Provide a streaming loose object reader.
The streaming read functionality should provide the length and the type
of the object, like the normal read functionality does.
There are two streaming functions; one for reading, one for writing.
Disambiguate function names between `stream` and `writestream` to make
allowances for a read stream.
Recursive merge: reverse the order of merge bases
Our virtual commit must be the last argument to merge-base: since our
algorithm pushes _both_ parents of the virtual commit, the virtual
commit has to come last, per merge-base's documentation:
> Given three commits A, B and C, git merge-base A B C will compute the
> merge base between A and a hypothetical commit M
We want to calculate the merge base between the actual commit ("two")
and the virtual commit ("one") - since "one" actually pushes its parents
into the merge-base calculation, we need to calculate the merge base of
"two" and the parents of "one".
Virtual base building: ensure that the virtual base is created and
revwalked in the same way as git.
When the commits being merged have multiple merge bases, reverse the
order when creating the virtual merge base. This is for compatibility
with git's merge-recursive algorithm, and ensures that we build
identical trees.
Git does this to try to use older merge bases first. Per 8918b0c:
> It seems to be the only sane way to do it: when a two-head merge is
> done, and the merge-base and one of the two branches agree, the
> merge assumes that the other branch has something new.
>
> If we start creating virtual commits from newer merge-bases, and go
> back to older merge-bases, and then merge with newer commits again,
> chances are that a patch is lost, _because_ the merge-base and the
> head agree on it. Unlikely, yes, but it happened to me.
Provide a simple function to reverse an oidarray.
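Reversing an id array can be sketched as a simple in-place swap; the `toy_oid` type is a stand-in for the real object-id struct.

```c
#include <stddef.h>

typedef struct {
	unsigned char id[20];  /* stand-in for a raw object id */
} toy_oid;

/* Reverse the array in place by swapping from both ends. */
static void oidarray_reverse(toy_oid *arr, size_t count)
{
	size_t i;
	toy_oid tmp;

	for (i = 0; i < count / 2; i++) {
		tmp = arr[i];
		arr[i] = arr[count - 1 - i];
		arr[count - 1 - i] = tmp;
	}
}
```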
config: handle CRLF-only lines and BOM
The function to detect a BOM takes an offset at which it shall look for
the BOM. No caller uses that, and searching for a BOM in the middle of a
buffer is very unlikely to be useful, as a BOM should only ever exist at
the start of a file.
Remove the parameter, as its odd semantics have already caused
confusion.
The function `skip_bom` is used to detect and skip BOM marks prior to
parsing a configuration file. To do so, it simply uses
`git_buf_text_detect_bom`. But since the refactoring to use the parser
interface in commit 9e66590bd (config_parse: use common parser
interface, 2017-07-21), the BOM detection has actually been broken.
The issue stems from a misunderstanding of `git_buf_text_detect_bom`: it
was assumed that its third parameter limits the length of the character
sequence to be analyzed, while in fact it is an offset at which we want
to detect the BOM. Fix the parameter to be `0` instead of the buffer
length, as we always want to check the beginning of the configuration
file.
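With the offset gone, BOM detection reduces to a fixed check at the start of the buffer; this sketch handles the UTF-8 BOM only, and the helper name is hypothetical.

```c
#include <stddef.h>

/* Return the number of BOM bytes at the start of the buffer
 * (3 for a UTF-8 BOM, 0 otherwise), so callers can skip them. */
static size_t detect_utf8_bom(const unsigned char *data, size_t len)
{
	if (len >= 3 &&
	    data[0] == 0xef && data[1] == 0xbb && data[2] == 0xbf)
		return 3;
	return 0;
}
```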
Currently, the configuration parser fails to read empty lines with just
a CRLF-style line ending. Special-case the '\r' character in order to
handle it the same as Unix-style line endings. Add tests to catch this
regression in the future.
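The special case can be sketched as: after stripping a trailing LF, also strip a trailing CR, and a line is empty if nothing remains. The helper is illustrative, not the parser's actual code.

```c
#include <stddef.h>

/* Return 1 if the line holds only its terminator: "\n", "\r\n",
 * or nothing at all. */
static int line_is_empty(const char *line, size_t len)
{
	if (len > 0 && line[len - 1] == '\n')
		len--;
	if (len > 0 && line[len - 1] == '\r')  /* CRLF-only line */
		len--;
	return len == 0;
}
```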
Upon each line, the configuration parser tries to get either the first
non-whitespace character or the first whitespace character, in case
there is no non-whitespace character. The logic handling this looks
rather odd and doesn't immediately convey this meaning, so add a comment
to clarify what happens.
CMake: minor fixups
Move the odd code that provides a hierarchical display for projects
within the IDEs to its own module.
Move the nanosecond detection in time structures to its own module.
Enable CMake policy CMP0042, if supported:
> CMake 2.8.12 and newer has support for using ``@rpath`` in a target's
> install name. This was enabled by setting the target property
> ``MACOSX_RPATH``. The ``@rpath`` in an install name is a more
> flexible and powerful mechanism than ``@executable_path`` or
> ``@loader_path`` for locating shared libraries.
We can use policy checks to see if a policy exists in CMake, like
CMP0051, instead of relying on the version.
Conflict markers should match EOL style in conflicting files
Ensure that when the files being merged have CR/LF line endings that the
conflict markers produced in the conflict file also have CR/LF line
endings.
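Choosing the marker line ending can be sketched by scanning the conflicting content for a CRLF sequence; the helper name and heuristic are hypothetical simplifications of what a merge driver might do.

```c
#include <stddef.h>

/* Return the line ending to use for conflict markers: CRLF if the
 * buffer contains any CRLF sequence, plain LF otherwise. */
static const char *marker_eol(const char *buf, size_t len)
{
	size_t i;

	for (i = 0; i + 1 < len; i++) {
		if (buf[i] == '\r' && buf[i + 1] == '\n')
			return "\r\n";
	}
	return "\n";
}
```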
Upgrade xdiff to git's most recent version, which includes changes to
CR/LF handling. Now CR/LF included in the input files will be detected
and conflict markers will be emitted with CR/LF when appropriate.