Commit message [branch/tag] (Author, Date; Files changed, Lines -/+)

* merge_blobs: use strbuf instead of manually-sized mmfile_t [jk/merge-tree-merge-blobs] (Jeff King, 2016-02-16; 1 file, -11/+12)

    The ancient merge_blobs function (which is used nowhere except in the equally ancient git-merge-tree, which does not itself seem to be called by any modern git code) tries to create a plausible base object for an add/add conflict by finding the common parts of the "ours" and "theirs" blobs.

    It does so by calling xdiff with XDIFF_EMIT_COMMON, and stores the result in a buffer that is as big as the smaller of "ours" and "theirs". In theory, this is right; we cannot have more common content than is in the smaller of the two blobs. But in practice, xdiff may give us more: if neither file ends in a newline, we get the "\nNo newline at end of file" marker. This is somewhat of a bug in itself (the "no newline" string becomes part of the blob output!), but much worse is that we may overflow our output buffer with this string (if the common content was otherwise close to the size of the smaller blob).

    The minimal fix for the memory corruption is to size the buffer appropriately. We could do so by manually adding in an extra 29 bytes for the "no newline" string to our buffer size. But that's somewhat fragile. Instead, let's replace the fixed-size output buffer with a strbuf which can grow as necessary.

    Reported-by: Stefan Frühwirth <stefan.fruehwirth@uni-graz.at>
    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

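    A minimal sketch of the buffer-handling pattern this commit describes, using git's in-tree strbuf API; the helper names below are made up for illustration and this is not the actual merge_blobs() hunk:

        #include "cache.h"  /* strbuf API (in-tree header) */

        /*
         * Append each chunk of "common" content that xdiff emits to a strbuf.
         * The buffer grows on demand, so output larger than the smaller input
         * (e.g. with the "no newline at end of file" marker) cannot overflow it.
         */
        static void append_common(struct strbuf *res, const char *chunk, size_t len)
        {
                strbuf_add(res, chunk, len);
        }

        static void example(void)
        {
                struct strbuf res = STRBUF_INIT;
                append_common(&res, "shared line\n", 12);
                /* ... res.buf / res.len now hold the reconstructed base ... */
                strbuf_release(&res);
        }
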
* Git 2.4.10 [v2.4.10] (Junio C Hamano, 2015-09-28; 4 files, -3/+22)

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Sync with 2.3.10 (Junio C Hamano, 2015-09-28; 27 files, -28/+446)
|\
| * Git 2.3.10 [v2.3.10, maint-2.3] (Junio C Hamano, 2015-09-28; 4 files, -3/+22)

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * Merge branch 'jk/xdiff-memory-limits' into maint-2.3 (Junio C Hamano, 2015-09-28; 11 files, -26/+57)
| |\
| | * merge-file: enforce MAX_XDIFF_SIZE on incoming files (Jeff King, 2015-09-28; 1 file, -1/+2)

    The previous commit enforces MAX_XDIFF_SIZE at the interfaces to xdiff: xdi_diff (which calls xdl_diff) and ll_xdl_merge (which calls xdl_merge). But we have another direct call to xdl_merge in merge-file.c. If it were written today, this probably would just use the ll_merge machinery. But it predates that code, and uses slightly different options to xdl_merge (e.g., ZEALOUS_ALNUM).

    We could try to abstract out an xdi_merge to match the existing xdi_diff, but even that is difficult. Rather than simply report error, we try to treat large files as binary, and that distinction would happen outside of xdi_merge.

    The simplest fix is to just replicate the MAX_XDIFF_SIZE check in merge-file.c.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

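    A sketch of the kind of size gate described here, assuming the in-tree MAX_XDIFF_SIZE constant from xdiff-interface.h; the helper, its placement and the error message are illustrative, not the actual builtin/merge-file.c hunk:

        #include "cache.h"
        #include "xdiff-interface.h"   /* MAX_XDIFF_SIZE, mmfile_t */

        /* Refuse to hand an oversized input to xdl_merge(). */
        static int check_xdiff_size(const mmfile_t *f, const char *name)
        {
                if (f->size > MAX_XDIFF_SIZE)
                        return error("file '%s' is too large for xdiff", name);
                return 0;
        }
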
| | * xdiff: reject files larger than ~1GB (Jeff King, 2015-09-28; 3 files, -1/+14)

    The xdiff code is not prepared to handle extremely large files. It uses "int" in many places, which can overflow if we have a very large number of lines or even bytes in our input files. This can cause us to produce incorrect diffs, with no indication that the output is wrong. Or worse, we may even underallocate a buffer whose size is the result of an overflowing addition.

    We're much better off to tell the user that we cannot diff or merge such a large file. This patch covers both cases, but in slightly different ways:

      1. For merging, we notice the large file and cleanly fall back to a binary merge (which is effectively "we cannot merge this").

      2. For diffing, we make the binary/text distinction much earlier, and in many different places. For this case, we'll use xdi_diff as our choke point, and reject any diff there before it hits the xdiff code.

         This means in most cases we'll die() immediately after. That's not ideal, but in practice we shouldn't generally hit this code path unless the user is trying to do something tricky. We already consider files larger than core.bigfilethreshold to be binary, so this code would only kick in when that is circumvented (either by bumping that value, or by using .gitattributes to mark a file as diffable).

         In other words, we can avoid being "nice" here, because there is already nice code that tries to do the right thing. We are adding the suspenders to the nice code's belt, so notice when it has been worked around (both to protect the user from malicious inputs, and because it is better to die() than generate bogus output).

    The maximum size was chosen after experimenting with feeding large files to the xdiff code. It's just under a gigabyte, which leaves room for two obvious cases:

      - a diff3 merge conflict result on files of maximum size X could be 3*X plus the size of the markers, which would still be only about 3G, which fits in a 32-bit int.

      - some of the diff code allocates arrays of one int per record. Even if each file consists only of blank lines, then a file smaller than 1G will have fewer than 1G records, and therefore the int array will fit in 4G.

    Since the limit is arbitrary anyway, I chose to go under a gigabyte, to leave a safety margin (e.g., we would not want to overflow by allocating "(records + 1) * sizeof(int)" or similar).

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

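    For reference, a sketch of what such a cap looks like. The constant name matches the one these commits talk about, but the exact value and the check below are a reconstruction of the reasoning above ("just under a gigabyte"), not a quotation from xdiff-interface.h:

        /*
         * Illustrative cap, just under 1GB. With inputs at most this large:
         *   - a diff3 conflict result is roughly at most 3x this plus markers,
         *     which still fits in 32-bit arithmetic;
         *   - an array of one int per record stays under 4GB even if every
         *     record is a single blank line.
         */
        #define MAX_XDIFF_SIZE (1024UL * 1024 * 1023)

        static int xdiff_can_handle(unsigned long size_a, unsigned long size_b)
        {
                return size_a <= MAX_XDIFF_SIZE && size_b <= MAX_XDIFF_SIZE;
        }
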
| | * react to errors in xdi_diff (Jeff King, 2015-09-28; 7 files, -24/+41)

    When we call into xdiff to perform a diff, we generally lose the return code completely. Typically by ignoring the return of our xdi_diff wrapper, but sometimes we even propagate that return value up and then ignore it later. This can lead to us silently producing incorrect diffs (e.g., "git log" might produce no output at all, not even a diff header, for a content-level diff).

    In practice this does not happen very often, because the typical reason for xdiff to report failure is that it malloc() failed (it uses straight malloc, and not our xmalloc wrapper). But it could also happen when xdiff triggers one of our callbacks, which returns an error (e.g., outf() in builtin/rerere.c tries to report a write failure in this way). And the next patch also plans to add more failure modes.

    Let's notice an error return from xdiff and react appropriately. In most of the diff.c code, we can simply die(), which matches the surrounding code (e.g., that is what we do if we fail to load a file for diffing in the first place). This is not that elegant, but we are probably better off dying to let the user know there was a problem, rather than simply generating bogus output.

    We could also just die() directly in xdi_diff, but the callers typically have a bit more context, and can provide a better message (and if we do later decide to pass errors up, we're one step closer to doing so).

    There is one interesting case, which is in diff_grep(). Here if we cannot generate the diff, there is nothing to match, and we silently return "no hits". This is actually what the existing code does already, but we make it a little more explicit.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

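    The pattern boils down to checking the wrapper's return value at the call site. A hedged sketch, using the xdi_diff_outf() entry point of this era; the wrapper function and message are illustrative, not the actual diff.c call sites:

        #include "cache.h"
        #include "xdiff-interface.h"

        /* Fail loudly instead of silently emitting a truncated or empty diff. */
        static void run_diff_or_die(mmfile_t *a, mmfile_t *b, const char *path,
                                    xdiff_emit_consume_fn fn, void *state,
                                    xpparam_t *xpp, xdemitconf_t *xecfg)
        {
                if (xdi_diff_outf(a, b, fn, state, xpp, xecfg))
                        die("unable to generate diff for %s", path);
        }
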
| * | Merge branch 'jk/transfer-limit-redirection' into maint-2.3 (Junio C Hamano, 2015-09-28; 6 files, -15/+78)
| |\ \
| | * | http: limit redirection depth (Blake Burkhart, 2015-09-25; 3 files, -0/+8)

    By default, libcurl will follow circular http redirects forever. Let's put a cap on this so that somebody who can trigger an automated fetch of an arbitrary repository (e.g., for CI) cannot convince git to loop infinitely. The value chosen is 20, which is the same default that Firefox uses.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

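    The relevant libcurl knob is CURLOPT_MAXREDIRS. A minimal sketch of the cap described above (illustrative, not the actual http.c hunk):

        #include <curl/curl.h>

        static void limit_redirects(CURL *curl)
        {
                curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
                /* cap redirect chains; 20 matches the value mentioned above */
                curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 20L);
        }
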
| | * | http: limit redirection to protocol-whitelist (Blake Burkhart, 2015-09-25; 4 files, -5/+27)

    Previously, libcurl would follow redirection to any protocol it was compiled with support for. This is desirable to allow redirection from HTTP to HTTPS. However, it would even successfully allow redirection from HTTP to SFTP, a protocol that git does not otherwise support at all. Furthermore, git's new protocol-whitelisting could be bypassed by following a redirect within the remote helper, as it was only enforced at transport selection time.

    This patch limits redirects within libcurl to HTTP, HTTPS, FTP and FTPS. If there is a protocol-whitelist present, this list is limited to those also allowed by the whitelist. As redirection happens from within libcurl, it is impossible for an HTTP redirect to reach a protocol implemented within another remote helper.

    When the curl version git was compiled with is too old to support restrictions on protocol redirection, we warn the user if GIT_ALLOW_PROTOCOL restrictions were requested. This is a little inaccurate, as even without that variable in the environment, we would still restrict SFTP, etc, and we do not warn in that case. But anything else means we would literally warn every time git accesses an http remote.

    This commit includes a test, but it is not as robust as we would hope. It redirects an http request to ftp, and checks that curl complained about the protocol, which means that we are relying on curl's specific error message to know what happened. Ideally we would redirect to a working ftp server and confirm that we can clone without protocol restrictions, and not with them. But we do not have a portable way of providing an ftp server, nor any other protocol that curl supports (https is the closest, but we would have to deal with certificates).

    [jk: added test and version warning]

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

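    The libcurl side of this is CURLOPT_REDIR_PROTOCOLS, available since curl 7.19.4 (the "too old" case the warning covers). A hedged sketch of the base restriction, before any whitelist narrowing; not the actual http.c code:

        #include <curl/curl.h>

        static void limit_redirect_protocols(CURL *curl)
        {
        #if LIBCURL_VERSION_NUM >= 0x071304   /* 7.19.4: CURLOPT_REDIR_PROTOCOLS */
                curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                                 CURLPROTO_HTTP | CURLPROTO_HTTPS |
                                 CURLPROTO_FTP | CURLPROTO_FTPS);
        #endif
        }
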
| | * | transport: refactor protocol whitelist code (Jeff King, 2015-09-25; 2 files, -10/+43)

    The current callers only want to die when their transport is prohibited. But future callers want to query the mechanism without dying. Let's break out a few query functions, and also save the results in a static list so we don't have to re-parse for each query.

    Based-on-a-patch-by: Blake Burkhart <bburky@bburky.com>
    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | Merge branch 'jk/transfer-limit-protocol' into maint-2.3 (Junio C Hamano, 2015-09-28; 13 files, -1/+306)
| |\ \ \
| | |/ /
| | | /
| | |/
| |/|
| | * submodule: allow only certain protocols for submodule fetches (Jeff King, 2015-09-23; 2 files, -0/+52)

    Some protocols (like git-remote-ext) can execute arbitrary code found in the URL. The URLs that submodules use may come from arbitrary sources (e.g., .gitmodules files in a remote repository). Let's restrict submodules to fetching from a known-good subset of protocols.

    Note that we apply this restriction to all submodule commands, whether the URL comes from .gitmodules or not. This is more restrictive than we need to be; for example, in the tests we run:

        git submodule add ext::...

    which should be trusted, as the URL comes directly from the command line provided by the user. But doing it this way is simpler, and makes it much less likely that we would miss a case. And since such protocols should be an exception (especially because nobody who clones from them will be able to update the submodules!), it's not likely to inconvenience anyone in practice.

    Reported-by: Blake Burkhart <bburky@bburky.com>
    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| | * transport: add a protocol-whitelist environment variable (Jeff King, 2015-09-23; 11 files, -1/+254)

    If we are cloning an untrusted remote repository into a sandbox, we may also want to fetch remote submodules in order to get the complete view as intended by the other side. However, that opens us up to attacks where a malicious user gets us to clone something they would not otherwise have access to (this is not necessarily a problem by itself, but we may then act on the cloned contents in a way that exposes them to the attacker).

    Ideally such a setup would sandbox git entirely away from high-value items, but this is not always practical or easy to set up (e.g., OS network controls may block multiple protocols, and we would want to enable some but not others).

    We can help this case by providing a way to restrict particular protocols. We use a whitelist in the environment. This is more annoying to set up than a blacklist, but defaults to safety if the set of protocols git supports grows. If no whitelist is specified, we continue to default to allowing all protocols (this is an "unsafe" default, but since the minority of users will want this sandboxing effect, it is the only sensible one).

    A note on the tests: ideally these would all be in a single test file, but the git-daemon and httpd test infrastructure is an all-or-nothing proposition rather than a test-by-test prerequisite. By putting them all together, we would be unable to test the file-local code on machines without apache.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

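    The variable involved is GIT_ALLOW_PROTOCOL (named in the redirection commit above), a colon-separated whitelist. A hypothetical sketch of the lookup using git's string_list helpers; the real logic lives in transport.c and differs in detail:

        #include "cache.h"
        #include "string-list.h"

        static int protocol_allowed(const char *protocol)
        {
                const char *whitelist = getenv("GIT_ALLOW_PROTOCOL");
                struct string_list allowed = STRING_LIST_INIT_DUP;
                int ok;

                if (!whitelist)
                        return 1;   /* no whitelist set: allow everything, as before */

                string_list_split(&allowed, whitelist, ':', -1);
                ok = unsorted_string_list_has_string(&allowed, protocol);
                string_list_clear(&allowed, 0);
                return ok;
        }

    So, for example, a sandboxed clone might run with something like GIT_ALLOW_PROTOCOL=http:https:git in its environment to shut out ssh, file and ext URLs.
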
* | Git 2.4.9 [v2.4.9] (Junio C Hamano, 2015-09-04; 4 files, -3/+13)

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | Sync with 2.3.9 (Junio C Hamano, 2015-09-04; 7 files, -26/+49)
|\ \
| |/
| * Git 2.3.9 [v2.3.9] (Junio C Hamano, 2015-09-04; 4 files, -3/+13)

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * Sync with 2.2.3 (Junio C Hamano, 2015-09-04; 6 files, -25/+38)
| |\
| | * Git 2.2.3 [v2.2.3, maint-2.2] (Junio C Hamano, 2015-09-04; 4 files, -3/+13)

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| | * Merge branch 'jk/long-paths' into maint-2.2 (Junio C Hamano, 2015-09-04; 4 files, -24/+27)
| | |\
| | | * show-branch: use a strbuf for reflog descriptions (Jeff King, 2015-09-04; 1 file, -2/+4)

    When we show "branch@{0}", we format into a fixed-size buffer using sprintf. This can overflow if you have long branch names. We can fix it by using a temporary strbuf.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

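    The general shape of that fix, as a hedged sketch (the real hunk is in builtin/show-branch.c; the helper name here is made up):

        #include "cache.h"

        /* Format "name@{N}" without a fixed-size buffer. */
        static char *reflog_label(const char *ref, int n)
        {
                struct strbuf buf = STRBUF_INIT;
                strbuf_addf(&buf, "%s@{%d}", ref, n);
                return strbuf_detach(&buf, NULL);   /* caller frees */
        }
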
| | | * read_info_alternates: handle paths larger than PATH_MAX (Jeff King, 2015-09-04; 1 file, -6/+3)

    This function assumes that the relative_base path passed into it is no larger than PATH_MAX, and writes into a fixed-size buffer. However, this path may not have actually come from the filesystem; for example, add_submodule_odb generates a path using a strbuf and passes it in. This is hard to trigger in practice, though, because the long submodule directory would have to exist on disk before we would try to open its info/alternates file.

    We can easily avoid the bug, though, by simply creating the filename on the heap.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

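    A sketch of "create the filename on the heap", assuming git's xstrfmt() helper; the actual patch may build the path differently:

        #include "cache.h"

        static char *alternates_path(const char *relative_base)
        {
                /* heap-allocated, so the length of relative_base does not matter */
                return xstrfmt("%s/info/alternates", relative_base);
        }
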
| | | * notes: use a strbuf in add_non_note (Jeff King, 2015-09-04; 1 file, -9/+10)

    When we are loading a notes tree into our internal hash table, we also collect any files that are clearly non-notes. We format the name of the file into a PATH_MAX buffer, but unlike true notes (which cannot be larger than a fanned-out sha1 hash), these tree entries can be arbitrarily long, overflowing our buffer.

    We can fix this by switching to a strbuf. It doesn't even cost us an extra allocation, as we can simply hand ownership of the buffer over to the non-note struct.

    This is of moderate security interest, as you might fetch notes trees from an untrusted remote. However, we do not do so by default, so you would have to manually fetch into the notes namespace.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| | | * verify_absent: allow filenames longer than PATH_MAX (Jeff King, 2015-09-04; 1 file, -7/+10)

    When unpack-trees wants to know whether a path will overwrite anything in the working tree, we use lstat() to see if there is anything there. But if we are going to write "foo/bar", we can't just lstat("foo/bar"); we need to look for leading prefixes (e.g., "foo"). So we use the lstat cache to find the length of the leading prefix, and copy the filename up to that length into a temporary buffer (since the original name is const, we cannot just stick a NUL in it).

    The copy we make goes into a PATH_MAX-sized buffer, which will overflow if the prefix is longer than PATH_MAX. How this happens is a little tricky, since in theory PATH_MAX is the biggest path we will have read from the filesystem. But this can happen if:

      - the compiled-in PATH_MAX does not accurately reflect what the filesystem is capable of

      - the leading prefix is not _quite_ what is on disk; it contains the next element from the name we are checking. So if we want to write "aaa/bbb/ccc/ddd" and "aaa/bbb" exists, the prefix of interest is "aaa/bbb/ccc". If "aaa/bbb" approaches PATH_MAX, then "ccc" can overflow it.

    So this can be triggered, but it's hard to do. In particular, you cannot just "git clone" a bogus repo. The verify_absent checks happen before unpack-trees writes anything to the filesystem, so there are never any leading prefixes during the initial checkout, and the bug doesn't trigger. And by definition, these files are larger than PATH_MAX, so writing them will fail, and clone will complain (though it may write a partial path, which will cause a subsequent "git checkout" to hit the bug).

    We can fix it by creating the temporary path on the heap. The extra malloc overhead is not important, as we are already making at least one stat() call (and probably more for the prefix discovery).

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

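    Again, a hedged sketch of the heap-allocation idea, assuming git's xmemdupz() helper; the function below is hypothetical, not the unpack-trees.c hunk:

        #include "cache.h"

        /*
         * Copy the leading prefix to a NUL-terminated heap buffer and lstat()
         * it, instead of truncating into a char[PATH_MAX].
         */
        static int prefix_exists(const char *name, int prefix_len)
        {
                struct stat st;
                char *prefix = xmemdupz(name, prefix_len);
                int ret = !lstat(prefix, &st);
                free(prefix);
                return ret;
        }
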
* | | Git 2.4.8 [v2.4.8] (Junio C Hamano, 2015-08-03; 4 files, -5/+25)

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | Merge branch 'js/rebase-i-clean-up-upon-continue-to-skip' into maint (Junio C Hamano, 2015-08-03; 2 files, -1/+26)

    Abandoning an already applied change in "git rebase -i" with "--continue" left CHERRY_PICK_HEAD and confused later steps.

    * js/rebase-i-clean-up-upon-continue-to-skip:
      rebase -i: do not leave a CHERRY_PICK_HEAD file behind
      t3404: demonstrate CHERRY_PICK_HEAD bug

| * | | rebase -i: do not leave a CHERRY_PICK_HEAD file behind [js/rebase-i-clean-up-upon-continue-to-skip] (Johannes Schindelin, 2015-06-29; 2 files, -2/+6)

    When skipping commits whose changes were already applied via `git rebase --continue`, we need to clean up said file explicitly. The same is not true for `git rebase --skip` because that will execute `git reset --hard` as part of the "skip" handling in git-rebase.sh, even before git-rebase--interactive.sh is called.

    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | t3404: demonstrate CHERRY_PICK_HEAD bug (Johannes Schindelin, 2015-06-29; 1 file, -0/+21)

    When rev-list's --cherry option does not detect that a patch has already been applied upstream, an interactive rebase would offer to reapply it and consequently stop at that patch with a failure, mentioning that the diff is empty. Traditionally, a `git rebase --continue` simply skips the commit in such a situation.

    However, as pointed out by Gábor Szeder, this leaves a CHERRY_PICK_HEAD behind, making the Git prompt believe that a cherry pick is still going on. This commit adds a test case demonstrating this bug.

    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | Merge branch 'ss/clone-guess-dir-name-simplify' into maint (Junio C Hamano, 2015-08-03; 1 file, -13/+6)

    Code simplification.

    * ss/clone-guess-dir-name-simplify:
      clone: simplify string handling in guess_dir_name()

| * | | clone: simplify string handling in guess_dir_name() [ss/clone-guess-dir-name-simplify] (Sebastian Schuberth, 2015-07-09; 1 file, -13/+6)

    Signed-off-by: Sebastian Schuberth <sschuberth@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | Merge branch 'sg/completion-commit-cleanup' into maint (Junio C Hamano, 2015-08-03; 1 file, -1/+1)

    * sg/completion-commit-cleanup:
      completion: teach 'scissors' mode to 'git commit --cleanup='

| * | | | completion: teach 'scissors' mode to 'git commit --cleanup=' [sg/completion-commit-cleanup] (SZEDER Gábor, 2015-06-08; 1 file, -1/+1)

    Signed-off-by: SZEDER Gábor <szeder@ira.uka.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | | Merge branch 'pt/am-abort-fix' into maint (Junio C Hamano, 2015-08-03; 2 files, -8/+104)

    Various fixes around "git am" that applies a patch to a history that is not there yet.

    * pt/am-abort-fix:
      am --abort: keep unrelated commits on unborn branch
      am --abort: support aborting to unborn branch
      am --abort: revert changes introduced by failed 3way merge
      am --skip: support skipping while on unborn branch
      am -3: support 3way merge on unborn branch
      am --skip: revert changes introduced by failed 3way merge

| * | | | | am --abort: keep unrelated commits on unborn branch [pt/am-abort-fix] (Paul Tan, 2015-06-08; 2 files, -1/+12)

    Since 7b3b7e3 (am --abort: keep unrelated commits since the last failure and warn, 2010-12-21), git-am would refuse to rewind HEAD if commits were made since the last git-am failure. This check was implemented in safe_to_abort(), which checked to see if HEAD's hash matched the abort-safety file.

    However, this check was skipped if the abort-safety file was empty, which can happen if git-am failed while on an unborn branch. As such, if any commits were made since then, they would be discarded. Fix this by carrying on the abort safety check even if the abort-safety file is empty.

    Signed-off-by: Paul Tan <pyokagan@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | am --abort: support aborting to unborn branch (Paul Tan, 2015-06-08; 2 files, -1/+25)

    When git-am is first run on an unborn branch, no ORIG_HEAD is created. As such, any applied commits will remain even after a git am --abort.

    To be consistent with the behavior of git am --abort when it is not run from an unborn branch, we empty the index, and then destroy the branch pointed to by HEAD if there is no ORIG_HEAD.

    Signed-off-by: Paul Tan <pyokagan@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | am --abort: revert changes introduced by failed 3way merge (Paul Tan, 2015-06-08; 2 files, -1/+28)

    Even when a merge conflict occurs with am --3way, the index will be modified with the results of any successfully merged files. These changes to the index will not be reverted with a "git read-tree --reset -u HEAD ORIG_HEAD", as git read-tree will not be aware of how the current index differs from HEAD or ORIG_HEAD.

    To fix this, we first reset any conflicting entries in the index. The resulting index will contain the results of successfully merged files introduced by the failed merge. We write this index to a tree, and then use git read-tree to fast-forward this "index tree" back to ORIG_HEAD, thus undoing all the changes from the failed merge.

    When we are on an unborn branch, HEAD and ORIG_HEAD will not point to valid trees. In this case, use an empty tree.

    Signed-off-by: Paul Tan <pyokagan@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | am --skip: support skipping while on unborn branch (Paul Tan, 2015-06-08; 2 files, -3/+11)

    When git am --skip is run, git am will copy HEAD's tree entries to the index with "git reset HEAD". However, on an unborn branch, HEAD does not point to a tree, so "git reset HEAD" will fail. Fix this by treating HEAD as an empty tree when we are on an unborn branch.

    Signed-off-by: Paul Tan <pyokagan@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | am -3: support 3way merge on unborn branch (Paul Tan, 2015-06-08; 2 files, -1/+11)

    While on an unborn branch, git am -3 will fail to do a threeway merge as it references HEAD as "our tree", but HEAD does not point to a valid tree. Fix this by using an empty tree as "our tree" when we are on an unborn branch.

    Signed-off-by: Paul Tan <pyokagan@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | am --skip: revert changes introduced by failed 3way merge (Paul Tan, 2015-06-08; 2 files, -1/+17)

    Even when a merge conflict occurs with am --3way, the index will be modified with the results of any successfully merged files (such as a new file). These changes to the index will not be reverted with a "git read-tree --reset -u HEAD HEAD", as git read-tree will not be aware of how the current index differs from HEAD.

    To fix this, we first reset any conflicting entries from the index. The resulting index will contain the results of successfully merged files. We write the index to a tree, then use git read-tree -m to fast-forward the "index tree" back to HEAD, thus undoing all the changes from the failed merge.

    Signed-off-by: Paul Tan <pyokagan@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | | Merge branch 'mh/reporting-broken-refs-from-for-each-ref' into maint (Junio C Hamano, 2015-08-03; 3 files, -7/+83)

    "git for-each-ref" reported "missing object" for 0{40} when it encounters a broken ref. The lack of an object whose name is 0{40} is not the problem; the ref being broken is.

    * mh/reporting-broken-refs-from-for-each-ref:
      read_loose_refs(): treat NULL_SHA1 loose references as broken
      read_loose_refs(): simplify function logic
      for-each-ref: report broken references correctly
      t6301: new tests of for-each-ref error handling

| * | | | | read_loose_refs(): treat NULL_SHA1 loose references as broken [mh/reporting-broken-refs-from-for-each-ref] (Michael Haggerty, 2015-06-08; 2 files, -1/+11)

    NULL_SHA1 is used to indicate an "invalid object name" throughout our code (and the code of other git implementations), so it is vastly more likely that an on-disk reference was set to this value due to a software bug than that NULL_SHA1 is the legitimate SHA-1 of an actual object. Therefore, if a loose reference has the value NULL_SHA1, consider it to be broken.

    Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

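    A sketch of the brokenness test in the spirit of this change, using helpers that exist in the tree (is_null_sha1() and the REF_ISBROKEN flag mentioned in the commits below); the wrapper itself is hypothetical:

        #include "cache.h"
        #include "refs.h"

        static int loose_ref_is_broken(const unsigned char *sha1, int flag)
        {
                /* an all-zero value is almost certainly a bug, not a real object */
                return (flag & REF_ISBROKEN) || is_null_sha1(sha1);
        }
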
| * | | | | read_loose_refs(): simplify function logic (Michael Haggerty, 2015-06-03; 1 file, -7/+12)

    Make it clearer that there are two possible ways to read the reference, but that we handle read errors uniformly regardless of which way it was read. This refactoring also makes the following change easier to implement.

    Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | for-each-ref: report broken references correctly (Michael Haggerty, 2015-06-02; 2 files, -1/+6)

    If there is a loose reference file with invalid contents, "git for-each-ref" incorrectly reports the problem as being a missing object with name NULL_SHA1:

        $ echo '12345678' >.git/refs/heads/nonsense
        $ git for-each-ref
        fatal: missing object 0000000000000000000000000000000000000000 for refs/heads/nonsense

    With an explicit "--format" string, it can even report that the reference validly points at NULL_SHA1:

        $ git for-each-ref --format='%(objectname) %(refname)'
        0000000000000000000000000000000000000000 refs/heads/nonsense
        $ echo $?
        0

    This has been broken since b7dd2d2 (for-each-ref: Do not lookup objects when they will not be used, 2009-05-27), which changed for-each-ref from using for_each_ref() to using git_for_each_rawref() in order to avoid looking up the referred-to objects unnecessarily. (When "git for-each-ref" is given a "--format" string that doesn't include information about the pointed-to object, it does not look up the object at all, which makes it considerably faster. Iterating with DO_FOR_EACH_INCLUDE_BROKEN is essential to this optimization because otherwise for_each_ref() would itself need to check whether the object exists as part of its brokenness test.)

    But for_each_rawref() includes broken references in the iteration, and "git for-each-ref" doesn't itself reject references with REF_ISBROKEN. The result is that broken references are processed *as if* they had the value NULL_SHA1, which is the value stored in entries for broken references.

    Change "git for-each-ref" to emit warnings for references that are REF_ISBROKEN but to otherwise skip them.

    Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | t6301: new tests of for-each-ref error handling (Michael Haggerty, 2015-06-02; 1 file, -0/+56)

    Add tests that for-each-ref correctly reports broken loose reference files and references that point at missing objects. In fact, two of these tests fail, because (1) NULL_SHA1 is not recognized as an invalid reference value, and (2) for-each-ref doesn't respect REF_ISBROKEN. Fixes to come.

    Note that when for-each-ref is run with a --format option that doesn't require the object to be looked up, then we should still notice if a loose reference file is corrupt or contains NULL_SHA1, but we don't notice if it points at a missing object because we don't do an object lookup. This is OK.

    Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | | Merge branch 'sg/commit-cleanup-scissors' into maint (Junio C Hamano, 2015-08-03; 2 files, -5/+28)

    "git commit --cleanup=scissors" was not careful enough to protect against getting fooled by a line that looked like scissors.

    * sg/commit-cleanup-scissors:
      commit: cope with scissors lines in commit message

| * | | | | commit: cope with scissors lines in commit message [sg/commit-cleanup-scissors] (SZEDER Gábor, 2015-06-09; 2 files, -5/+28)

    The diff and submodule shortlog appended to the commit message template by 'git commit --verbose' are not stripped when the commit message contains an indented scissors line.

    When cleaning up a commit message with 'git commit --verbose' or '--cleanup=scissors' the code is careful and triggers only on a pure scissors line, i.e. a line containing nothing but a comment character, a space, and the scissors cut. This is good, because people can embed scissors lines in the commit message while using 'git commit --verbose', and the text they write after their indented scissors line doesn't get deleted.

    While doing so, however, the cleanup function only looks at the first line matching the scissors pattern and if it doesn't start at the beginning of the line, then the function just returns without performing any cleanup. This is wrong, because a "real" scissors line added by 'git commit --verbose' might follow, and in that case the diff and submodule shortlog get included in the commit message.

    Fix this by changing the scissors pattern to match only at the beginning of the line, yet be careful to catch scissors on the first line as well.

    Helped-by: Junio C Hamano <gitster@pobox.com>
    Signed-off-by: SZEDER Gábor <szeder@ira.uka.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

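    For illustration, a simplified sketch of a scissors test anchored at the beginning of the line; the real cleanup code compares against the full dashed cut line rather than just the ">8" token, so treat this as a hypothetical reduction of the idea:

        #include "cache.h"

        static int looks_like_scissors(const char *line, char comment_char)
        {
                /* must start in column 0 with the comment char and a space */
                if (line[0] != comment_char || line[1] != ' ')
                        return 0;
                /* simplified: look for the ">8" cut marker on that line */
                return !!strstr(line + 2, ">8");
        }
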
* | | | | Git 2.4.7 [v2.4.7] (Junio C Hamano, 2015-07-27; 4 files, -3/+57)

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | | Merge branch 'jk/pretty-encoding-doc' into maint (Junio C Hamano, 2015-07-27; 1 file, -1/+4)

    Doc update.

    * jk/pretty-encoding-doc:
      docs: clarify that --encoding can produce invalid sequences

| * | | | | docs: clarify that --encoding can produce invalid sequences [jk/pretty-encoding-doc] (Jeff King, 2015-06-17; 1 file, -1/+4)

    In the common case that the commit encoding matches the output encoding, we do not touch the buffer at all, which makes things much more efficient. But it might be unclear to a consumer that we will pass through bogus sequences.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>