Commit message | Author | Age | Files | Lines
* git_index_add_frombuffer: only accept files/linksethomson/index_add_requires_filesEdward Thomson2018-02-182-5/+70
| | | | | | | Ensure that the buffer given to `git_index_add_frombuffer` represents a regular blob, an executable blob, or a link. Explicitly reject commit entries (submodules) - it makes little sense to allow users to add a submodule from a string; there's no possible path to success.
* Merge pull request #4532 from pks-t/pks/release-doc-filenamePatrick Steinhardt2018-02-151-0/+0
|\ | | | | docs: fix typo in "release.md" filename
| * docs: fix typo in "release.md" filenamePatrick Steinhardt2018-02-151-0/+0
|/
* Merge pull request #4485 from libgit2/cmn/release-docsPatrick Steinhardt2018-02-151-0/+74
|\ | | | | docs: add release documentation
| * docs: updates to wording in release documentationcmn/release-docsCarlos Martín Nieto2018-01-271-5/+17
| |
| * docs: add release documentationCarlos Martín Nieto2018-01-191-0/+62
| | | | | | | | | | This should provide the release manager enough to know which steps to take when it's time to cut a new release.
* | Merge pull request #4501 from pks-t/pks/v0.27.0-release-notesPatrick Steinhardt2018-02-151-5/+77
|\ \ | | | | | | CHANGELOG: update for v0.27.0
| * | CHANGELOG: update for v0.27.0, second batchPatrick Steinhardt2018-02-091-6/+15
| | |
| * | CHANGELOG: update for v0.27.0Patrick Steinhardt2018-02-091-0/+63
| | |
* | | Merge pull request #4508 from libgit2/ethomson/user_agentEdward Thomson2018-02-103-27/+30
|\ \ \ | | | | | | | | http: standardize user-agent addition
| * | | http: standardize user-agent additionethomson/user_agentEdward Thomson2018-02-103-27/+30
|/ / / | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The winhttp and posix http each need to add the user-agent to their requests. Standardize on a single function to include this so that we do not get the version numbers we're sending out of sync. Assemble the complete user agent in `git_http__user_agent`, returning assembled strings. Co-authored-by: Patrick Steinhardt <ps@pks.im>
* | | Merge pull request #4527 from pks-t/pks/resource-leaksEdward Thomson2018-02-093-2/+6
|\ \ \ | | | | | | | | Plug resource leaks
| * | | hash: win32: fix missing comma in `giterr_set`Patrick Steinhardt2018-02-091-1/+1
| | | |
| * | | odb_loose: only close file descriptor if it was opened successfullyPatrick Steinhardt2018-02-091-1/+2
| | | |
| * | | odb: fix memory leaks due to not freeing hash contextPatrick Steinhardt2018-02-092-0/+3
|/ / /
* | | Merge pull request #4509 from libgit2/ethomson/odb_alloc_errorEdward Thomson2018-02-096-53/+126
|\ \ \ | | | | | | | | odb: error when we can't alloc an object
| * | | hash: set error messages on failureethomson/odb_alloc_errorEdward Thomson2018-02-091-8/+33
| | | |
| * | | odb: error when we can't create object headerEdward Thomson2018-02-095-30/+63
| | | | | | | | | | | | | | | | | | | | Return an error to the caller when we can't create an object header for some reason (printf failure) instead of simply asserting.
| * | | odb: assert on logic errors when writing objectsEdward Thomson2018-02-091-2/+1
| | | | | | | | | | | | | | | | | | | | There's no recovery possible if we're so confused or corrupted that we're trying to overwrite our memory. Simply assert.
| * | | git_odb__hashfd: propagate error on failuresEdward Thomson2018-02-091-1/+1
| | | |
| * | | git_odb__hashobj: provide error messages on failuresEdward Thomson2018-02-091-4/+8
| | | | | | | | | | | | | | | | | | | | | | | | Provide error messages on hash failures: assert when given invalid input instead of failing with a user error; provide error messages on program errors.
| * | | odb: check for alloc errors on hardcoded objectsEdward Thomson2018-02-091-6/+14
| | | | | | | | | | | | | | | | | | | | | | | | | | | | It's unlikely that we'll fail to allocate a single byte, but let's check for allocation failures for good measure. Untangle `-1` being a marker of not having found the hardcoded odb object; use that to reflect actual errors.
| * | | odb: error when we can't alloc an objectEdward Thomson2018-02-091-2/+6
|/ / / | | | | | | | | | | | | At the moment, we're swallowing the allocation failure. We need to return the error to the caller.
* | | Merge pull request #4450 from libgit2/ethomson/odb_loose_readstreamEdward Thomson2018-02-088-225/+555
|\ \ \ | | | | | | | | Streaming read support for the loose ODB backend
| * | | odb_loose: HEADER_LEN -> MAX_HEADER_LENethomson/odb_loose_readstreamEdward Thomson2018-02-011-7/+7
| | | | | | | | | | | | | | | | `MAX_HEADER_LEN` is a more descriptive constant name.
| * | | odb_loose: largefile tests only on 64 bit platformsEdward Thomson2018-02-011-1/+14
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Only run the large file tests on 64 bit platforms. Even though we support streaming reads on objects, and do not need to fit them in memory, we use `size_t` in various places to reflect the size of an object.
| * | | odb_loose: validate length when checking for zlib contentEdward Thomson2018-02-011-4/+7
| | | | | | | | | | | | | | | | | | | | When checking to see if a file has zlib deflate content, make sure that we actually have read at least two bytes before examining the array.
| * | | odb_loose: test read_header on large blobsEdward Thomson2018-02-011-0/+16
| | | | | | | | | | | | | | | | | | | | | | | | Test that we can read_header on large blobs. This should succeed on all platforms since we read only a few bytes into memory to be able to parse the header.
| * | | odb_loose: test read_header explicitlyEdward Thomson2018-02-011-0/+30
| | | |
| * | | odb_loose: `read_header` for packlike loose objectsEdward Thomson2018-02-011-20/+46
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Support `read_header` for "packlike loose objects", a short-lived and uncommonly used loose object format that encodes the header before the zlib deflate data. This will never actually be seen in the wild, but add support for it for completeness and (more importantly) because our corpus of test data has objects in this format, so it's easier to support it than to try to special case it.
| * | | odb_loose: read_header should use zstreamEdward Thomson2018-02-011-85/+24
| | | | | | | | | | | | | | | | | | | | Make `read_header` use the common zstream implementation. Remove the now unnecessary zlib wrapper in odb_loose.
| * | | zstream: introduce a single chunk readerEdward Thomson2018-02-012-36/+55
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Introduce `get_output_chunk` that will inflate/deflate all the available input buffer into the output buffer. `get_output` will call `get_output_chunk` in a loop, while other consumers can use it to inflate only a piece of the data.
| * | | odb: test loose object streamingEdward Thomson2018-02-011-0/+54
| | | |
| * | | odb_loose: packlike loose objects use `git_zstream`Edward Thomson2018-02-011-88/+71
| | | | | | | | | | | | | | | | | | | | Refactor packlike loose object reads to use `git_zstream` for simplification.
| * | | odb: loose object streaming for packlike loose objectsEdward Thomson2018-02-011-37/+84
| | | | | | | | | | | | | | | | | | | | | | | | | | | | A "packlike" loose object was a short-lived loose object format where the type and size were encoded in uncompressed space at the beginning of the file, followed by the compressed object contents. Handle these in a streaming manner as well.
| * | | odb_loose: test reading a large file in streamEdward Thomson2018-02-011-1/+47
| | | | | | | | | | | | | | | | | | | | | | | | Since some test situations may have generous disk space, but limited RAM (eg hosted build agents), test that we can stream a large file into a loose object, and then stream it out of the loose object storage.
| * | | odb: introduce streaming loose object readerEdward Thomson2018-02-011-7/+148
| | | | | | | | | | | | | | | | Provide a streaming loose object reader.
| * | | odb: provide length and type with streaming readEdward Thomson2018-02-013-4/+17
| | | | | | | | | | | | | | | | | | | | The streaming read functionality should provide the length and the type of the object, like the normal read functionality does.
| * | | odb_loose: stream -> writestreamEdward Thomson2018-02-011-8/+8
| | | | | | | | | | | | | | | | | | | | | | | | | | | | There are two streaming functions; one for reading, one for writing. Disambiguate function names between `stream` and `writestream` to make allowances for a read stream.
* | | | Merge pull request #4491 from libgit2/ethomson/recursiveEdward Thomson2018-02-08100-29/+120
|\ \ \ \ | | | | | | | | | | Recursive merge: reverse the order of merge bases
| * | | | merge: virtual commit should be last argument to merge-baseethomson/recursiveTyrie Vella2018-02-041-2/+8
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Our virtual commit must be the last argument to merge-base: since our algorithm pushes _both_ parents of the virtual commit, it needs to be the last argument, since merge-base: > Given three commits A, B and C, git merge-base A B C will compute the > merge base between A and a hypothetical commit M We want to calculate the merge base between the actual commit ("two") and the virtual commit ("one") - since one actually pushes its parents to the merge-base calculation, we need to calculate the merge base of "two" and the parents of one.
| * | | | Add failing test case for virtual commit merge base issueEdward Thomson2018-02-0427-0/+32
| | | | |
| * | | | merge::trees::recursive: test for virtual base buildingEdward Thomson2018-02-041-0/+25
| | | | | | | | | | | | | | | | | | | | | | | | | Virtual base building: ensure that the virtual base is created and revwalked in the same way as git.
| * | | | merge: reverse merge bases for recursive mergeEdward Thomson2018-02-044-28/+32
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When the commits being merged have multiple merge bases, reverse the order when creating the virtual merge base. This is for compatibility with git's merge-recursive algorithm, and ensures that we build identical trees. Git does this to try to use older merge bases first. Per 8918b0c: > It seems to be the only sane way to do it: when a two-head merge is > done, and the merge-base and one of the two branches agree, the > merge assumes that the other branch has something new. > > If we start creating virtual commits from newer merge-bases, and go > back to older merge-bases, and then merge with newer commits again, > chances are that a patch is lost, _because_ the merge-base and the > head agree on it. Unlikely, yes, but it happened to me.
| * | | | oidarray: introduce git_oidarray__reverseEdward Thomson2018-02-042-0/+13
| | | | | | | | | | | | | | | | | | | | Provide a simple function to reverse an oidarray.
| * | | | Introduce additional criss-cross merge branchesEdward Thomson2018-02-0468-0/+11
| | | | |
* | | | | Merge pull request #4521 from pks-t/pks/config-crlf-linesEdward Thomson2018-02-084-11/+60
|\ \ \ \ \ | |_|_|/ / |/| | | | config: handle CRLF-only lines and BOM
| * | | | buf_text: remove `offset` parameter of BOM detection functionPatrick Steinhardt2018-02-083-11/+9
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The function to detect a BOM takes an offset where it shall look for a BOM. No caller uses that, and searching for the BOM in the middle of a buffer seems to be very unlikely, as a BOM should only ever exist at file start. Remove the parameter, as it has already caused confusion due to its weirdness.
| * | | | config_parse: fix reading files with BOMPatrick Steinhardt2018-02-082-1/+16
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The function `skip_bom` is being used to detect and skip BOM marks prior to parsing a configuration file. To do so, it simply uses `git_buf_text_detect_bom`. But since the refactoring to use the parser interface in commit 9e66590bd (config_parse: use common parser interface, 2017-07-21), the BOM detection was actually broken. The issue stems from a misunderstanding of `git_buf_text_detect_bom`. It was assumed that its third parameter limits the length of the character sequence that is to be analyzed, while in fact it was an offset at which we want to detect the BOM. Fix the parameter to be `0` instead of the buffer length, as we always want to check the beginning of the configuration file.
| * | | | config_parse: handle empty lines with CRLFPatrick Steinhardt2018-02-082-0/+31
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently, the configuration parser will fail reading empty lines with just an CRLF-style line ending. Special-case the '\r' character in order to handle it the same as Unix-style line endings. Add tests to spot this regression in the future.