Commit message (branch)  [Author, Date, Files changed, Lines -deleted/+added]
* git-p4: correct hasBranchPrefix verbose output (ao/p4-has-branch-prefix-fix)  [Andrew Oakley, 2016-06-22, 1 file changed, -1/+1]

    The logic here was inverted: you got a message saying the file is ignored for each file that is not ignored.

    Signed-off-by: Andrew Oakley <aoakley@roku.com>
    Acked-by: Lars Schneider <larsxschneider@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* git-p4: add option to keep empty commits (ls/p4-keep-empty-commits)  [Lars Schneider, 2015-12-10, 3 files changed, -17/+165]

    A changelist that contains only files excluded by a client spec was imported as an empty commit. Fix that issue by ignoring such commits. Add the option "git-p4.keepEmptyCommits" to make the previous behavior available.

    Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
    Helped-by: Pete Harlan
    Acked-by: Luke Diamand <luke@diamand.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'ls/p4-changes-block-size'  [Junio C Hamano, 2015-05-11, 3 files changed, -14/+119]

    "git p4" learned "--changes-block-size <n>" to read the changes in chunks from Perforce, instead of making one call to "p4 changes" that may trigger "too many rows scanned" error from Perforce.

    * ls/p4-changes-block-size:
      git-p4: use -m when running p4 changes

| * git-p4: use -m when running p4 changes (ls/p4-changes-block-size)  [Lex Spoon, 2015-04-20, 3 files changed, -14/+119]

    Simply running "p4 changes" on a large branch can result in a "too many rows scanned" error from the Perforce server. It is better to use a sequence of smaller calls to "p4 changes", using the "-m" option to limit the size of each call.

    Signed-off-by: Lex Spoon <lex@lexspoon.org>
    Acked-by: Luke Diamand <luke@diamand.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | Merge branch 'jc/plug-fmt-merge-msg-leak'  [Junio C Hamano, 2015-05-11, 1 file changed, -5/+11]

    * jc/plug-fmt-merge-msg-leak:
      fmt-merge-msg: plug small leak of commit buffer

| * | fmt-merge-msg: plug small leak of commit buffer (jc/plug-fmt-merge-msg-leak)  [Junio C Hamano, 2015-04-20, 1 file changed, -5/+11]

    A broken or badly formatted commit might not record author or committer lines or we may not find a valid name on them. The function record_person() returned after calling get_commit_buffer() without calling unuse_commit_buffer() on the memory it obtained in such cases, potentially leaking it.

    Helped-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | Merge branch 'nd/slim-index-pack-memory-usage'  [Junio C Hamano, 2015-05-11, 1 file changed, -111/+179]

    Memory usage of "git index-pack" has been trimmed by tens of per-cent.

    * nd/slim-index-pack-memory-usage:
      index-pack: kill union delta_base to save memory
      index-pack: reduce object_entry size to save memory

| * | | index-pack: kill union delta_base to save memory  [Nguyễn Thái Ngọc Duy, 2015-04-18, 1 file changed, -100/+160]

    Once we know the number of objects in the input pack, we allocate an array of nr_objects of struct delta_entry. On x86-64, this struct is 32 bytes long. The union delta_base, which is part of struct delta_entry, provides enough space to store either an ofs-delta (8 bytes) or a ref-delta (20 bytes).

    Because ofs-delta encoding is more efficient space-wise and more performant at runtime than ref-delta encoding, Git packers try to use ofs-delta whenever possible, and objects encoded as ref-delta are expected to be a minority. In the best clone case, where no ref-delta object is present, we waste (20-8) * nr_objects bytes because of this union. That's about 38MB out of 100MB for deltas[] with 3.4M objects, or 38%. deltas[] would be around 62MB without the waste.

    This patch attempts to eliminate that. The deltas[] array is split into two: one for ofs-delta and one for ref-delta. Many functions are also duplicated because of this split. With this patch, the ofs_deltas[] array takes 51MB. The ref_deltas[] array should remain unallocated in the clone case (0 bytes); it grows as we see ref-deltas. We save about half in this case, or 25% of total bookkeeping.

    The saving is more than the calculation above because some padding in the old delta_entry struct is removed. ofs_delta_entry is 16 bytes, including the 4 bytes of padding. That's 13MB for padding, but packing the struct could break platforms that do not support unaligned access. If someone on 32-bit is really low on memory and only deals with packs smaller than 2G, using a 32-bit off_t would eliminate the padding and save 27MB on top.

    A note about ofs_deltas allocation: we could use the ref_deltas memory allocation strategy for ofs_deltas, but that probably just adds more overhead on top. ofs-deltas are generally the majority (1/2 to 2/3) in any pack. Incremental realloc may lead to too many memcpy calls, and if we preallocate, say, 1/2 or 2/3 of nr_objects initially, the growth rate of ALLOC_GROW() could make this array larger than nr_objects, wasting more memory.

    Brought-up-by: Matthew Sporleder <msporleder@gmail.com>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

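    A sketch of the split described above, in C (illustrative only; member names follow the commit message, not necessarily the actual patch):

        #include <sys/types.h>   /* off_t */

        /* Before: one array; the union always pays for the larger ref-delta member. */
        struct delta_entry {
                union {
                        off_t offset;            /* ofs-delta base: 8 bytes  */
                        unsigned char sha1[20];  /* ref-delta base: 20 bytes */
                } base;
                int obj_no;
        };

        /* After: two arrays, so the (usually dominant) ofs-deltas stop carrying
         * 12 unused bytes each; ref_deltas[] stays empty on a typical clone. */
        struct ofs_delta_entry {
                off_t offset;
                int obj_no;
        };

        struct ref_delta_entry {
                unsigned char sha1[20];
                int obj_no;
        };
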
| * | | index-pack: reduce object_entry size to save memory  [Nguyễn Thái Ngọc Duy, 2015-02-27, 1 file changed, -11/+19]

    For each object in the input pack, we need one struct object_entry. On x86-64, this struct is 64 bytes long. Although:

    - The 8 bytes for delta_depth and base_object_no are only useful when show_stat is set, and it's never set unless someone is debugging.

    - The three fields hdr_size, type and real_type take 4 bytes each even though they never use more than 4 bits.

    By moving delta_depth and base_object_no out of struct object_entry and making the other three fields one byte long instead of four, we shrink this struct by 25%. On a 3.4M object repo (*) that's about 53MB. The saving is less impressive compared to index-pack memory use for basic bookkeeping (**), about 16%.

    (*) linux-2.6.git already has 4M objects as of v3.19-rc7, so this is not an unrealistic number of objects that we have to deal with.

    (**) 3.4M * (sizeof(object_entry) + sizeof(delta_entry)) = 311MB

    Brought-up-by: Matthew Sporleder <msporleder@gmail.com>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

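    One way to picture the change (a sketch under the commit's description; the real struct has more members and the exact field types may differ):

        #include <stdint.h>

        /* The three small fields shrink from 4-byte ints to single bytes, and the
         * debugging-only counters move to a side array used only when show_stat
         * is set. */
        struct object_entry_slim {
                uint8_t hdr_size;   /* pack header length fits easily in a byte */
                int8_t  type;       /* OBJ_* enum values need only a few bits   */
                int8_t  real_type;
                /* ... offset, size, idx, etc. unchanged and omitted here ... */
        };

        struct object_stat {        /* allocated only for statistics/debugging */
                int delta_depth;
                int base_object_no;
        };
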
* | | | Merge branch 'jk/still-interesting'  [Junio C Hamano, 2015-05-11, 1 file changed, -4/+19]

    "git rev-list --objects $old --not --all" to see if everything that is reachable from $old is already connected to the existing refs was very inefficient.

    * jk/still-interesting:
      limit_list: avoid quadratic behavior from still_interesting

| * | | | limit_list: avoid quadratic behavior from still_interesting  [Jeff King, 2015-04-17, 1 file changed, -4/+19]

    When we are limiting a rev-list traversal due to UNINTERESTING refs, we have to walk down the tips (both interesting and uninteresting) to find where they intersect. We keep a queue of commits to examine, pop commits off the queue one by one, and potentially add their parents. The size of the queue will naturally fluctuate based on the "width" of the history graph; i.e., the number of simultaneous lines of development. But for the most part it will stay in the same ballpark as the initial number of tips we fed, shrinking over time (as we hit common ancestors of the tips). So roughly speaking, if we start with `N` tips, we'll spend much of the time with a queue around `N` items.

    For each UNINTERESTING commit we pop, we call still_interesting to check whether marking its parents as UNINTERESTING has made the whole queue uninteresting (in which case we can quit early). Because the queue is stored as a linked list, this is `O(N)`, where `N` is the number of items in the queue. So processing a queue with `N` commits marked UNINTERESTING (and one or more interesting commits) will take `O(N^2)`.

    If you feed a lot of positive tips, this isn't a problem. They aren't UNINTERESTING, so they don't incur the still_interesting check. It also isn't a problem if you traverse from an interesting tip to some UNINTERESTING bases. We order the queue by recency, so the interesting commits stay at the front of the queue as we walk down them. The linear check can exit early as soon as it sees one interesting commit left in the queue.

    But if you want to know whether an older commit is reachable from a set of newer tips, we end up processing in the opposite direction: from the UNINTERESTING ones down to the interesting one. This may happen when we call:

        git rev-list $commits --not --all

    in check_everything_connected after a fetch. If we fetched something much older than most of our refs, and if we have a large number of refs, the traversal cost is dominated by the quadratic behavior.

    These commands simulate the connectivity check of such a fetch, when you have `$n` distinct refs in the receiver:

        # positive ref is 100,000 commits deep
        git rev-list --all | head -100000 | tail -1 >input
        # huge number of more recent negative refs
        git rev-list --all | head -$n | sed s/^/^/ >>input

        time git rev-list --stdin <input

    Here are timings for various `n` on the linux.git repository. The `n=1` case provides a baseline for just walking the commits, which lets us see the still_interesting overhead. The times marked with `+` subtract that baseline to show just the extra time growth due to the large number of refs. The `x` numbers show the slowdown of the adjusted time versus the prior trial.

            n |         before          |          after
        ----------------------------------------------------------
            1 | 0.991s                  | 0.848s
        10000 | 1.120s (+0.129s)        | 0.885s (+0.037s)
        20000 | 1.451s (+0.460s, 3.5x)  | 0.923s (+0.075s, 2.0x)
        40000 | 2.731s (+1.740s, 3.8x)  | 0.994s (+0.146s, 1.9x)
        80000 | 8.235s (+7.244s, 4.2x)  | 1.123s (+0.275s, 1.9x)

    Each trial doubles `n`, so you can see the quadratic (`4x`) behavior before this patch. Afterwards, we have a roughly linear relationship.

    The implementation is fairly straightforward. Whenever we do the linear search, we cache the interesting commit we find, and next time check it before doing another linear search. If that commit is removed from the list or becomes UNINTERESTING itself, then we fall back to the linear search. This is very similar to the trick used by fce87ae (Fix quadratic performance in rewrite_one., 2008-07-12).

    I considered and rejected several possible alternatives:

      1. Keep a count of UNINTERESTING commits in the queue. This requires managing the count not only when removing an item from the queue, but also when marking an item as UNINTERESTING. That requires touching the other functions which mark commits, and would require knowing quickly which commits are in the queue (lookup in the queue is linear, so we would need an auxiliary structure or to also maintain an IN_QUEUE flag in each commit object).

      2. Keep a separate list of interesting commits. Drop items from it when they are dropped from the queue, or if they become UNINTERESTING. This again suffers from extra complexity to maintain the list, not to mention CPU and memory.

      3. Use a better data structure for the queue. This is something that could help the fix in fce87ae, because we order the queue by recency, and it is about inserting quickly in recency order. So a normal priority queue would help there. But here, we cannot disturb the order of the queue, which makes things harder. We really do need an auxiliary index to track the flag we care about, which is basically option (2) above.

    The "cache" trick is simple, and the numbers above show that it works well in practice. This is because the length of time it takes to find an interesting commit is proportional to the length of time it will remain cached (i.e., if we have to walk a long way to find it, it also means we have to pop a lot of elements in the queue until we get rid of it and have to find another interesting commit).

    The worst case is still quadratic, though. We could have `N` uninteresting commits at the front of the queue, followed by `N` interesting commits, where commit `i` has parent `i+N`. When we pop commit `i`, we will notice that the parent of the next commit, `i+1+N`, is still interesting and cache it. But then handling commit `i+1`, we will mark its parent `i+1+N` uninteresting, and immediately invalidate our cache.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

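    A minimal sketch of the caching trick in C (illustrative stand-in types; the real revision.c code differs in detail, and the caller is responsible for clearing the cache when the cached commit is popped off the queue):

        #include <stddef.h>

        #define UNINTERESTING 0x01

        struct commit { unsigned flags; };
        struct commit_list { struct commit *item; struct commit_list *next; };

        /* Remember the last interesting commit we found so the next call can
         * usually skip the linear scan entirely. */
        static struct commit *interesting_cache;

        static int still_interesting(struct commit_list *src)
        {
                if (interesting_cache &&
                    !(interesting_cache->flags & UNINTERESTING))
                        return 1;                        /* cheap cache hit */

                for (; src; src = src->next) {
                        if (!(src->item->flags & UNINTERESTING)) {
                                interesting_cache = src->item;  /* remember it */
                                return 1;
                        }
                }
                interesting_cache = NULL;
                return 0;                                /* whole queue is dull */
        }
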
* | | | | Merge branch 'jk/reading-packed-refs'  [Junio C Hamano, 2015-05-11, 8 files changed, -6/+77]

    An earlier rewrite to use strbuf_getwholeline() instead of fgets(3) to read packed-refs file revealed that the former is unacceptably inefficient.

    * jk/reading-packed-refs:
      t1430: add another refs-escape test
      read_packed_refs: avoid double-checking sane refs
      strbuf_getwholeline: use getdelim if it is available
      strbuf_getwholeline: avoid calling strbuf_grow
      strbuf_addch: avoid calling strbuf_grow
      config: use getc_unlocked when reading from file
      strbuf_getwholeline: use getc_unlocked
      git-compat-util: add fallbacks for unlocked stdio
      strbuf_getwholeline: use getc macro

| * | | | | t1430: add another refs-escape test (jk/reading-packed-refs)  [Jeff King, 2015-04-16, 1 file changed, -0/+8]

    In t1430, we check whether deleting the branch "../../foo" will delete ".git/foo". However, this is not that interesting a test; the precious file ".git/foo" does not look like a ref, so even if we did not notice the "escape" from the "refs/" hierarchy, we would fail for that reason (i.e., if you turned refname_is_safe into a noop, the test still passes).

    Let's add an additional test for the same thing, but with a file that actually looks like a ref. That will make sure we are exercising the refname_is_safe code. While we're at it, let's also make the code work a little harder by adding some extra paths and some empty path components.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | read_packed_refs: avoid double-checking sane refs  [Jeff King, 2015-04-16, 1 file changed, -2/+4]

    Prior to d0f810f (refs.c: allow listing and deleting badly named refs, 2014-09-03), read_packed_refs would barf on any malformed refnames by virtue of calling create_ref_entry with the "check" parameter set to 1. That commit loosened our reading so that we call check_refname_format ourselves and just set a REF_BAD_NAME flag. We then call create_ref_entry with the check parameter set to 0. That function learned to do an extra safety check even when the check parameter is 0, so that we don't load any dangerous refnames (like "../../../etc/passwd"). This is implemented by calling refname_is_safe() in create_ref_entry().

    However, we can observe that refname_is_safe() can only be true if check_refname_format() also failed. So in the common case of a sanely named ref, we perform _both_ checks, even though we know that the latter will never trigger. This has a noticeable performance impact when the packed-refs file is large.

    Let's drop the refname_is_safe check from create_ref_entry(), and make it the responsibility of the caller. Of the three callers that pass a check parameter of "0", two will have just called check_refname_format(), and can check the refname-safety only when it fails. The third case, pack_if_possible_fn, is copying from an existing ref entry, which must have previously passed our safety check.

    With this patch, running "git rev-parse refs/heads/does-not-exist" on a repo with a large (1.6GB) packed-refs file went from:

        real 0m6.768s   user 0m6.340s   sys 0m0.432s

    to:

        real 0m5.703s   user 0m5.276s   sys 0m0.432s

    for a wall-clock speedup of 15%.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | strbuf_getwholeline: use getdelim if it is available  [Jeff King, 2015-04-16, 3 files changed, -0/+49]

    We spend a lot of time in strbuf_getwholeline in a tight loop reading characters from a stdio handle into a buffer. The libc getdelim() function can do this for us with less overhead. It's in POSIX.1-2008, and was a GNU extension before that. Therefore we can't rely on it, but can fall back to the existing getc loop when it is not available.

    The HAVE_GETDELIM knob is turned on automatically for Linux, where we have glibc. We don't need to set any new feature-test macros, because we already define _GNU_SOURCE. Other systems that implement getdelim may need other macros (probably _POSIX_C_SOURCE >= 200809L), but we can address that along with setting the Makefile knob after testing the feature on those systems.

    Running "git rev-parse refs/heads/does-not-exist" on a repo with an extremely large (1.6GB) packed-refs file went from (best-of-5):

        real 0m8.601s   user 0m8.084s   sys 0m0.524s

    to:

        real 0m6.768s   user 0m6.340s   sys 0m0.432s

    for a wall-clock speedup of 21%.

    Based on a patch from Rasmus Villemoes <rv@rasmusvillemoes.dk>.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

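    A self-contained sketch of the two code paths (illustrative; git's real strbuf_getwholeline() works on its strbuf type and keeps the buffer across calls, and HAVE_GETDELIM stands in for the Makefile knob):

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/types.h>

        /* Read one 'term'-delimited record into *buf, growing it as needed. */
        static ssize_t read_record(char **buf, size_t *alloc, int term, FILE *fp)
        {
        #ifdef HAVE_GETDELIM
                return getdelim(buf, alloc, term, fp);  /* one libc call per record */
        #else
                size_t len = 0;                         /* portable getc fallback */
                int ch;

                while ((ch = getc(fp)) != EOF) {
                        if (len + 2 > *alloc) {         /* room for char + NUL */
                                *alloc = *alloc ? 2 * *alloc : 128;
                                *buf = realloc(*buf, *alloc);
                                if (!*buf)
                                        return -1;
                        }
                        (*buf)[len++] = ch;
                        if (ch == term)
                                break;
                }
                if (!len)
                        return -1;                      /* EOF with nothing read */
                (*buf)[len] = '\0';
                return (ssize_t)len;
        #endif
        }
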
| * | | | | strbuf_getwholeline: avoid calling strbuf_grow  [Jeff King, 2015-04-16, 1 file changed, -1/+2]

    As with the recent speedup to strbuf_addch, we can avoid calling strbuf_grow() in a tight loop of single-character adds by instead checking strbuf_avail. Note that we could instead call strbuf_addch directly here, but it does more work than necessary: it will NUL-terminate the result for each character read. Instead, in this loop we read the characters one by one and then add the terminator manually at the end.

    Running "git rev-parse refs/heads/does-not-exist" on a repo with an extremely large (1.6GB) packed-refs file went from (best-of-5):

        real 0m10.948s   user 0m10.548s   sys 0m0.412s

    to:

        real 0m8.601s   user 0m8.084s   sys 0m0.524s

    for a wall-clock speedup of 21%.

    Helped-by: Eric Sunshine <sunshine@sunshineco.com>
    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | strbuf_addch: avoid calling strbuf_grow  [Jeff King, 2015-04-16, 1 file changed, -1/+2]

    We mark strbuf_addch as inline, because we expect it may be called from a tight loop. However, the first thing it does is call the non-inline strbuf_grow(), which can handle arbitrary-sized growth. Since we know that we only need a single character, we can use the inline strbuf_avail() to quickly check whether we need to grow at all. Our check is redundant when we do call strbuf_grow(), but that's OK. The common case is that we avoid calling it at all, and we have made that case faster.

    On a silly pathological case:

        perl -le '
            print "[core]";
            print "key$_ = value$_" for (1..1000000)
        ' >input

        git config -f input core.key1

    this dropped the time to run git-config from:

        real 0m0.159s   user 0m0.152s   sys 0m0.004s

    to:

        real 0m0.140s   user 0m0.136s   sys 0m0.004s

    for a savings of 12%.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

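    A sketch of the fast path described above (the strbuf stand-in is minimal so the snippet stands alone; git's real strbuf has the same buf/len/alloc shape, and strbuf_grow() lives out of line in strbuf.c):

        #include <stddef.h>

        struct strbuf { char *buf; size_t len, alloc; };

        void strbuf_grow(struct strbuf *sb, size_t extra);   /* out-of-line */

        static inline size_t strbuf_avail(const struct strbuf *sb)
        {
                return sb->alloc ? sb->alloc - sb->len - 1 : 0;
        }

        static inline void strbuf_addch(struct strbuf *sb, int c)
        {
                if (!strbuf_avail(sb))      /* common case: room left, no call  */
                        strbuf_grow(sb, 1); /* rare case: grow (it re-checks)   */
                sb->buf[sb->len++] = c;
                sb->buf[sb->len] = '\0';
        }
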
| * | | | | config: use getc_unlocked when reading from file  [Jeff King, 2015-04-16, 1 file changed, -1/+3]

    We read config files character-by-character from a stdio handle using fgetc(). This incurs significant locking overhead, even though we know that only one thread can possibly access the handle. We can speed this up by taking the lock ourselves, and then using getc_unlocked to read each character.

    On a silly pathological case:

        perl -le '
            print "[core]";
            print "key$_ = value$_" for (1..1000000)
        ' >input

        git config -f input core.key1

    this dropped the time to run git-config from:

        real 0m0.263s   user 0m0.260s   sys 0m0.000s

    to:

        real 0m0.159s   user 0m0.152s   sys 0m0.004s

    for a savings of 39%. Most config files are not this big, but the savings should be proportional to the size of the file (i.e., we always save 39%, just of a much smaller number).

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

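    The pattern looks like this (a self-contained POSIX sketch, not git's actual config parser): take the stdio lock once for the whole parse instead of paying for it on every character.

        #include <stdio.h>

        static long count_lines(FILE *fp)
        {
                long lines = 0;
                int c;

                flockfile(fp);                            /* we are the only reader */
                while ((c = getc_unlocked(fp)) != EOF) {  /* no per-call locking    */
                        if (c == '\n')
                                lines++;
                }
                funlockfile(fp);
                return lines;
        }
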
| * | | | | strbuf_getwholeline: use getc_unlocked  [Jeff King, 2015-04-16, 1 file changed, -1/+3]

    strbuf_getwholeline calls getc in a tight loop. On modern libc implementations, the stdio code locks the handle for every operation, which means we are paying a significant overhead. We can get around this by locking the handle for the whole loop and using the unlocked variant.

    Running "git rev-parse refs/heads/does-not-exist" on a repo with an extremely large (1.6GB) packed-refs file went from:

        real 0m18.900s   user 0m18.472s   sys 0m0.448s

    to:

        real 0m10.953s   user 0m10.384s   sys 0m0.580s

    for a wall-clock speedup of 42%. All times are best-of-3, and done on a glibc 2.19 system.

    Note that we call into strbuf_grow while holding the lock. It's possible for that function to call other stdio functions (e.g., printing to stderr when dying due to malloc error); however, the POSIX.1-2001 definition of flockfile makes it clear that the locks are per-handle, so we are fine unless somebody else tries to read from our same handle. This doesn't ever happen in the current code, and is unlikely to be added in the future (we would have to do something exotic like add a die_routine that tried to read from stdin).

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | git-compat-util: add fallbacks for unlocked stdio  [Jeff King, 2015-04-16, 1 file changed, -0/+6]

    POSIX.1-2001 specifies some functions for optimizing the locking out of tight getc() loops. Not all systems are POSIX, though, and even not all POSIX systems are required to implement these functions. We can check for the feature-test macro to see if they are available, and if not, provide a noop implementation.

    There's no Makefile knob here, because we should just detect this automatically. If there are very bizarre systems, we may need to add one, but it's not clear yet in which direction:

      1. If a system defines _POSIX_THREAD_SAFE_FUNCTIONS but these functions are missing or broken, we would want a knob to manually turn them off.

      2. If a system has these functions but does not define _POSIX_THREAD_SAFE_FUNCTIONS, we would want a knob to manually turn them on.

    We can add such a knob when we find a real-world system that matches this.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

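    A sketch of the kind of fallback this describes (the actual git-compat-util.h may differ in detail, e.g. by using static inline functions):

        #include <stdio.h>

        #ifndef _POSIX_THREAD_SAFE_FUNCTIONS
        /* No unlocked stdio on this system: make the locking calls no-ops and
         * let the "unlocked" getc fall back to plain getc. */
        #define flockfile(fh)     do { (void)(fh); } while (0)
        #define funlockfile(fh)   do { (void)(fh); } while (0)
        #define getc_unlocked(fh) getc(fh)
        #endif
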
| * | | | | strbuf_getwholeline: use getc macro  [Jeff King, 2015-04-16, 1 file changed, -1/+1]

    strbuf_getwholeline calls fgetc in a tight loop. Using the getc form, which can be implemented as a macro, should be faster (and we do not care about it evaluating our argument twice, as we just have a plain variable).

    On my glibc system, running "git rev-parse refs/heads/does-not-exist" on a repo with an extremely large (1.6GB) packed-refs file went from (best of 3 runs):

        real 0m19.383s   user 0m18.876s   sys 0m0.528s

    to:

        real 0m18.900s   user 0m18.472s   sys 0m0.448s

    for a wall-clock speedup of 2.5%.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | | | Merge branch 'lm/squelch-bg-progress'  [Junio C Hamano, 2015-05-11, 2 files changed, -8/+22]

    Many long-running operations show progress eye-candy, even when they are later backgrounded. Hide the eye-candy when the process is sent to the background instead.

    * lm/squelch-bg-progress:
      compat/mingw: stubs for getpgid() and tcgetpgrp()
      progress: no progress in background

| * | | | | | compat/mingw: stubs for getpgid() and tcgetpgrp()  [Johannes Sixt, 2015-04-15, 1 file changed, -2/+6]

    Windows does not have process groups. It is therefore simplest to pretend that each process is in its own process group. While here, move the getppid() stub from its old location (between two sync-related functions) next to the two new functions.

    Signed-off-by: Johannes Sixt <j6t@kdbg.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | progress: no progress in background  [Luke Mewburn, 2015-04-15, 1 file changed, -6/+16]

    Disable the progress display if the process writing to stderr is not in the terminal's foreground process group. Still display the final result when done.

    Signed-off-by: Luke Mewburn <luke@mewburn.net>
    Acked-by: Nicolas Pitre <nico@fluxnic.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

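    A sketch of such a check using POSIX calls (illustrative; the function name is made up and git's actual progress.c code may differ):

        #include <unistd.h>

        /* Progress goes to stderr, so only show it when stderr's terminal says
         * we are the foreground job. */
        static int is_foreground(int fd)
        {
                pid_t tpgrp = tcgetpgrp(fd);   /* foreground group of the terminal */
                return tpgrp < 0               /* not a terminal: show as before   */
                    || tpgrp == getpgid(0);    /* or the foreground group is ours  */
        }

        /* In the display path one would then skip the eye-candy when
         * !is_foreground(2), while still printing the final "done" line. */
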
* | | | | | | Merge branch 'jk/sha1-file-reduce-useless-warnings'  [Junio C Hamano, 2015-05-11, 2 files changed, -6/+2]

    * jk/sha1-file-reduce-useless-warnings:
      sha1_file: squelch "packfile cannot be accessed" warnings

| * | | | | | | sha1_file: squelch "packfile cannot be accessed" warnings (jk/sha1-file-reduce-useless-warnings)  [Jeff King, 2015-03-30, 2 files changed, -6/+2]

    When we find an object in a packfile index, we make sure we can still open the packfile itself (or that it is already open), as it might have been deleted by a simultaneous repack. If we can't access the packfile, we print a warning for the user and tell the caller that we don't have the object (we can then look in other packfiles, or find a loose version, before giving up).

    The warning we print to the user isn't really accomplishing anything, and it is potentially confusing to users. In the normal case, it is complete noise; we find the object elsewhere, and the user does not have to care that we racily saw a packfile index that became stale. It didn't affect the operation at all.

    A possibly more interesting case is when we later can't find the object, and report failure to the user. In this case the warning could be considered a clue toward that ultimate failure. But it's not really a useful clue in practice. We wouldn't even print it consistently (since we are racing with another process, we might not even see the .idx file, or we might win the race and open the packfile, completing the operation).

    This patch drops the warning entirely (not only from the fill_pack_entry site, but also from an identical use in pack-objects). If we did find the warning interesting in the error case, we could stuff it away and reveal it to the user when we later die() due to the broken object. But that complexity just isn't worth it.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | | | | | Merge branch 'nd/multiple-work-trees'  [Junio C Hamano, 2015-05-11, 52 files changed, -250/+1391]

    A replacement for contrib/workdir/git-new-workdir that does not rely on symbolic links and makes sharing of objects and refs safer by making the borrowee and borrowers aware of each other.

    * nd/multiple-work-trees: (41 commits)
      prune --worktrees: fix expire vs worktree existence condition
      t1501: fix test with split index
      t2026: fix broken &&-chain
      t2026 needs precondition SANITY
      git-checkout.txt: a note about multiple checkout support for submodules
      checkout: add --ignore-other-worktrees
      checkout: pass whole struct to parse_branchname_arg instead of individual flags
      git-common-dir: make "modules/" per-working-directory directory
      checkout: do not fail if target is an empty directory
      t2025: add a test to make sure grafts is working from a linked checkout
      checkout: don't require a work tree when checking out into a new one
      git_path(): keep "info/sparse-checkout" per work-tree
      count-objects: report unused files in $GIT_DIR/worktrees/...
      gc: support prune --worktrees
      gc: factor out gc.pruneexpire parsing code
      gc: style change -- no SP before closing parenthesis
      checkout: clean up half-prepared directories in --to mode
      checkout: reject if the branch is already checked out elsewhere
      prune: strategies for linked checkouts
      checkout: support checking out into a new working directory
      ...

| * | | | | | | | prune --worktrees: fix expire vs worktree existence condition  [Max Kirillov, 2015-03-31, 2 files changed, -3/+19]

    `git prune --worktrees` was pruning worktrees which were non-existent OR expired, while it should instead prune those which are orphaned AND expired, as the git-checkout documentation describes. Fix it.

    Add a test 'not prune proper checkouts', which uses a valid but expired worktree. Modify the test 'not prune recent checkouts' to remove the worktree before pruning - the link in worktrees still must survive. In its older form the test was useless, because it would always pass when the other test passes.

    Signed-off-by: Max Kirillov <max@max630.net>
    Acked-by: Duy Nguyen <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | t1501: fix test with split index  [Thomas Gummerer, 2015-03-24, 1 file changed, -0/+1]

    t1501-worktree.sh does not copy the shared index in the "relative $GIT_WORK_TREE and git subprocesses" test, which makes the test fail when GIT_TEST_SPLIT_INDEX is set. Copy the shared index as well in order to fix this.

    Helped-by: Junio C Hamano <gitster@pobox.com>
    Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | t2026: fix broken &&-chain  [Junio C Hamano, 2015-03-20, 1 file changed, -2/+2]

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | t2026 needs precondition SANITY  [Torsten Bögershausen, 2015-01-27, 1 file changed, -1/+1]

    When running t2026 as root, 'prune directories with unreadable gitdir' fails. Skip this test if SANITY is not set (the use of POSIXPERM is wrong here).

    Signed-off-by: Torsten Bögershausen <tboegi@web.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | git-checkout.txt: a note about multiple checkout support for submodules  [Nguyễn Thái Ngọc Duy, 2015-01-07, 1 file changed, -0/+3]

    The goal seems to be using multiple checkouts to reduce disk space. But we have not reached an agreement on how things should be. There are a couple of options.

    - You may want to keep $SUB repos elsewhere (perhaps in a central place) outside $SUPER. This is also true for nested submodules, where a superproject may be a submodule of another superproject.

    - You may want to keep all $SUB repos in $SUPER/modules (or some other place in $SUPER).

    - We could even push it further and merge all $SUB repos into $SUPER instead of storing them separately. But that would at least require ref namespaces enabled.

    On top of that, git-submodule.sh expects $GIT_DIR/config to be per-worktree, at least for the submodule.* part. Here I think we have two options: either update config.c to also read $GIT_DIR/config.worktree (which is per worktree) in addition to $GIT_DIR/config (shared) and store worktree-specific vars in the new place, or update git-submodule.sh to read/write submodule.* directly from $GIT_DIR/config.submodule (per worktree).

    These take time to address properly. Meanwhile, make a note to the user that they should not use multiple worktrees in a submodule context.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | checkout: add --ignore-other-worktrees  [Nguyễn Thái Ngọc Duy, 2015-01-07, 3 files changed, -1/+18]

    Noticed-by: Mark Levedahl <mlevedahl@gmail.com>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | checkout: pass whole struct to parse_branchname_arg instead of individual flags  [Nguyễn Thái Ngọc Duy, 2015-01-07, 1 file changed, -7/+6]

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | git-common-dir: make "modules/" per-working-directory directory  [Max Kirillov, 2014-12-01, 3 files changed, -4/+52]

    Each working directory of the main repository has its own working directory of a submodule, and in most cases they should be checked out to different revisions. So they should be separated.

    It looks logical to make submodule instances in different working directories reuse the submodule directory in the common dir of the main repository, and probably this is how "checkout --to" should initialize them when called on the main repository, but they also should work fine as completely separate clones. The test file t7410-submodule-checkout-to.sh demonstrates the behavior.

    Signed-off-by: Max Kirillov <max@max630.net>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | checkout: do not fail if target is an empty directory  [Max Kirillov, 2014-12-01, 2 files changed, -2/+7]

    A non-recursive checkout creates empty directories in place of submodules. If I then try to "checkout --to" submodules there, it refuses to do so, because the directory already exists. Fix by allowing checking out to an empty directory. Add a test, and modify the existing one so that it uses a non-empty directory.

    Signed-off-by: Max Kirillov <max@max630.net>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | t2025: add a test to make sure grafts is working from a linked checkout  [Nguyễn Thái Ngọc Duy, 2014-12-01, 1 file changed, -0/+18]

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | checkout: don't require a work tree when checking out into a new one  [Dennis Kaarsemaker, 2014-12-01, 3 files changed, -1/+19]

    For normal use cases, it does not make sense for 'checkout' to work on a bare repository, without a worktree. But "checkout --to" is an exception because it _creates_ a new worktree. Allow this option to run on bare repositories.

    People who check out from a bare repository should remember that core.logallrefupdates is off by default and it should be turned back on. `--to` cannot do this automatically behind the user's back because some users may deliberately want no reflog.

    For people interested in repository setup/discovery code, is_bare_repository_cfg (aka "core.bare") is unchanged by this patch, which means 'true' by default for bare repos. Fortunately, when we get the repo through a linked checkout, is_bare_repository_cfg is never used. So all is still good.

    [nd: commit message]

    Signed-off-by: Dennis Kaarsemaker <dennis@kaarsemaker.net>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | git_path(): keep "info/sparse-checkout" per work-tree  [Nguyễn Thái Ngọc Duy, 2014-12-01, 2 files changed, -1/+3]

    Currently git_path("info/sparse-checkout") resolves to $GIT_COMMON_DIR/info/sparse-checkout in multiple worktree mode. It makes more sense for the sparse checkout patterns to be per worktree, so you can have multiple checkouts with different parts of the tree.

    With this, "git checkout --to <new>" on a sparse checkout will create <new> as a full checkout. Which is expected, it's how a new checkout is made. The user can reshape the worktree afterwards.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | count-objects: report unused files in $GIT_DIR/worktrees/...  [Nguyễn Thái Ngọc Duy, 2014-12-01, 3 files changed, -3/+31]

    In linked checkouts, borrowed parts like config are taken from $GIT_COMMON_DIR. $GIT_DIR/config is never used. Report such files as garbage.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | gc: support prune --worktrees  [Nguyễn Thái Ngọc Duy, 2014-12-01, 3 files changed, -4/+24]

    Helped-by: Marc Branchaud <marcnarc@xiplink.com>
    Signed-off-by: Marc Branchaud <marcnarc@xiplink.com>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | gc: factor out gc.pruneexpire parsing code  [Nguyễn Thái Ngọc Duy, 2014-12-01, 1 file changed, -10/+12]

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | gc: style change -- no SP before closing parenthesis  [Nguyễn Thái Ngọc Duy, 2014-12-01, 1 file changed, -1/+1]

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | checkout: clean up half-prepared directories in --to mode  [Nguyễn Thái Ngọc Duy, 2014-12-01, 2 files changed, -0/+54]

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | checkout: reject if the branch is already checked out elsewhere  [Nguyễn Thái Ngọc Duy, 2014-12-01, 2 files changed, -7/+104]

    One branch obviously can't be checked out in two places (but detached heads are OK). Give the user a choice in this case: --detach, -b new-branch, switch the branch in the other checkout first, or simply 'cd' and continue to work there.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | prune: strategies for linked checkouts  [Nguyễn Thái Ngọc Duy, 2014-12-01, 7 files changed, -2/+251]

    (alias R=$GIT_COMMON_DIR/worktrees/<id>)

    - Linked checkouts are supposed to keep their location in $R/gitdir up to date. The use case is auto fixup after a manual checkout move.

    - Linked checkouts are supposed to update the mtime of $R/gitdir. If $R/gitdir's mtime is older than a limit, and it points to nowhere, worktrees/<id> is to be pruned.

    - If $R/locked exists, worktrees/<id> is not supposed to be pruned. If $R/locked exists and $R/gitdir's mtime is older than a really long limit, warn about the old unused repo.

    - "git checkout --to" is supposed to make a hard link named $R/link pointing to the .git file on supported file systems, to help detect the user manually deleting the checkout. If $R/link exists and its link count is greater than 1, the repo is kept.

    Helped-by: Marc Branchaud <marcnarc@xiplink.com>
    Helped-by: Eric Sunshine <sunshine@sunshineco.com>
    Helped-by: Johannes Sixt <j6t@kdbg.org>
    Signed-off-by: Marc Branchaud <marcnarc@xiplink.com>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | checkout: support checking out into a new working directory  [Nguyễn Thái Ngọc Duy, 2014-12-01, 6 files changed, -4/+212]

    "git checkout --to" sets up a new working directory with a .git file pointing to $GIT_DIR/worktrees/<id>. It then executes "git checkout" again on the new worktree with the same arguments, except that "--to" is taken out. The second checkout execution, which is not contaminated with any info from the current repository, will actually check out files and do everything that a normal "git checkout" does.

    Helped-by: Marc Branchaud <marcnarc@xiplink.com>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | use new wrapper write_file() for simple file writing  [Nguyễn Thái Ngọc Duy, 2014-12-01, 5 files changed, -31/+8]

    This fixes common error-handling problems in this code, such as forgetting to close the file handle after fprintf() fails, or not printing out the error string.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | wrapper.c: wrapper to open a file, fprintf then close  [Nguyễn Thái Ngọc Duy, 2014-12-01, 2 files changed, -0/+33]

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

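    A sketch of what such a helper can look like (a hypothetical standalone version; the name, signature and error reporting of git's actual wrapper may differ):

        #include <stdarg.h>
        #include <stdio.h>

        static int write_file_sketch(const char *path, const char *fmt, ...)
        {
                va_list ap;
                int err;
                FILE *fp = fopen(path, "w");

                if (!fp)
                        return -1;             /* caller decides whether to die */
                va_start(ap, fmt);
                vfprintf(fp, fmt, ap);
                va_end(ap);
                fputc('\n', fp);
                err = ferror(fp);              /* did any write fail?           */
                if (fclose(fp) || err)         /* always close, even on error   */
                        return -1;
                return 0;
        }
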
| * | | | | | | | setup.c: support multi-checkout repo setup  [Nguyễn Thái Ngọc Duy, 2014-12-01, 9 files changed, -14/+115]

    The repo setup procedure is updated to detect $GIT_DIR/commondir and set $GIT_COMMON_DIR properly. core.worktree is ignored when $GIT_COMMON_DIR is set. This is because the config file is shared in a multi-checkout setup, but checkout directories _are_ different. Making core.worktree effective in all checkouts would mean we are back to a single checkout.

    Helped-by: Johannes Sixt <j6t@kdbg.org>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

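    A sketch of the commondir discovery step described above (illustrative only; the function name is made up, and the real setup.c logic handles more edge cases and environment overrides):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* If $GIT_DIR/commondir exists, its contents name the directory that
         * holds the shared parts of the repository; relative paths are taken
         * relative to $GIT_DIR. */
        static char *resolve_common_dir(const char *gitdir)
        {
                char path[4096], line[4096];
                FILE *fp;

                snprintf(path, sizeof(path), "%s/commondir", gitdir);
                fp = fopen(path, "r");
                if (!fp)
                        return strdup(gitdir);   /* not a linked checkout */

                if (!fgets(line, sizeof(line), fp)) {
                        fclose(fp);
                        return strdup(gitdir);
                }
                fclose(fp);
                line[strcspn(line, "\n")] = '\0';

                if (line[0] == '/')
                        return strdup(line);     /* absolute path to shared .git */

                snprintf(path, sizeof(path), "%s/%s", gitdir, line);
                return strdup(path);
        }
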