* add get_size_from_delta()  (Nicolas Pitre, 2007-04-16, 2 files, -34/+40)
| | | | | | | | ... which consists of existing code split out of packed_delta_info() for other callers to use it as well. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
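For context, the size query such a helper answers comes from the head of the delta data itself: a git delta buffer starts with the source size and the result size, each stored base-128 with the high bit of every byte acting as a continuation flag. A minimal sketch of that decoding (illustrative names, not the code split out by this patch):

    #include <stddef.h>

    /* Decode one base-128 size field from the head of an (inflated)
     * delta buffer and advance *p past it; returns 0 on truncation. */
    static unsigned long decode_delta_size(const unsigned char **p,
                                           const unsigned char *end)
    {
        const unsigned char *data = *p;
        unsigned long size = 0;
        int shift = 0;
        unsigned char c;

        do {
            if (data >= end)
                return 0;   /* truncated delta header */
            c = *data++;
            size |= (unsigned long)(c & 0x7f) << shift;
            shift += 7;
        } while (c & 0x80);

        *p = data;
        return size;
    }

Calling it twice in a row yields the expected base size and then the expected result size, the latter being what callers asking for an object's size need.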
* pack-objects: make in_pack_header_size a variable of its own  (Nicolas Pitre, 2007-04-16, 1 file, -14/+12)
| | | | | | | | | | | | | | | | | | | | | | | | It currently aliases delta_size on the principle that reused deltas won't go through the whole delta matching loop, hence delta_size was unused. This is not true if a given delta doesn't find its base in the pack, though. But we need that information even for whole object data reuse. Well, in short, the current state looks awful and is prone to bugs. It just works fine now because try_delta() tests trg_entry->delta before using trg_entry->delta_size, but that is a bit subtle and I was wondering for a while why things just worked fine... even if I'm guilty of having introduced this abomination myself in the first place. Let's do the sensible thing instead with no ambiguity, which is to have a separate variable for in_pack_header_size. This might even help future optimizations. While at it, let's reorder some struct object_entry members so they all align well with their own width, regardless of the architecture or the size of off_t. Some memory saving is to be expected with this alone. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
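As an aside on the member reordering mentioned above, the saving comes from grouping members of equal width so the compiler inserts no padding holes. The structs below are purely illustrative (not the real struct object_entry) and assume an LP64 build:

    #include <sys/types.h>     /* off_t */

    struct badly_ordered {
        unsigned int hash;     /* 4 bytes + 4 bytes padding */
        off_t in_pack_offset;  /* 8 bytes */
        unsigned int depth;    /* 4 bytes + 4 bytes padding */
        unsigned long size;    /* 8 bytes on LP64 */
    };                         /* 32 bytes total */

    struct well_ordered {
        off_t in_pack_offset;  /* 8-byte members first */
        unsigned long size;
        unsigned int hash;     /* 4-byte members packed together */
        unsigned int depth;
    };                         /* 24 bytes total */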
* pack-objects: get rid of create_final_object_list()  (Nicolas Pitre, 2007-04-16, 1 file, -55/+72)
| | | | | | | | | | | | | | | | | Because we don't have to know the SHA1 (hence the name) of the pack up front anymore, let's get rid of yet another global sorted object list and sort them only in write_index_file(), then compute the object list SHA1 on the fly. This has the advantage of saving another chunk of memory, and the sorted list SHA1 won't be computed needlessly on servers during a fetch. Of course the cunning plan is also to make write_index_file() much like the function with the same name in index-pack.c, for eventual easy sharing. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* pack-objects: get rid of reuse_cached_pack  (Nicolas Pitre, 2007-04-16, 1 file, -72/+14)
| | | | | | | | | | | | | | | | | | This capability is practically never useful, and therefore never tested, because it is fairly unlikely that the requested pack will be already available. Furthermore it is of little gain over the ability to reuse existing pack data. In fact the ability to change delta type on the fly when reusing delta data is a nice thing that has almost no cost and allows greater backward compatibility with a client's capabilities than if the client is blindly sent a whole pack without any discrimination. And this "feature" is simply in the way of other cleanups. Let's get rid of it. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* pack-objects: clean up list sorting  (Nicolas Pitre, 2007-04-16, 1 file, -31/+22)
| | | | | | | | | | | | Get rid of sort_comparator() as it imposes a run-time double indirect function call for little compile-time type checking gain. Also get rid of create_sorted_list() as it only has one user, which is just as fine doing its sorting locally. Eventually the list of deltifiable objects might be shorter than the whole object list. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* pack-objects: rework check_delta_limit usage  (Nicolas Pitre, 2007-04-16, 1 file, -44/+32)
| | | | | | | | | | | | | | | | | | | | | | | | | | Objects that have delta "children" from pack data reuse must consider the depth of their deepest child when they try to deltify themselves, so that those children do not become too deep. However, in the context of a "thin" pack, the delta children depth was skipped entirely on the presumption that the pack was always going to be exploded on the receiving end, hence the delta length wasn't an issue. Now that we keep received packs as is and reuse pack data when repacking, those packs do contain delta chains that are longer than expected. Worse, those delta chains may even grow longer when the pack is further repacked into another thin pack for a subsequent transmission. So this patch restores strict delta length even for thin packs, and it moves check_delta_limit() usage directly into the delta loop where it is needed. This way the delta_limit can be removed from struct object_entry as well. Oh, and the initial value was wrong too. The progress_interval() function was moved to a more logical location in the process. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* pack-objects: equal objects in size should delta against newer objects  (Nicolas Pitre, 2007-04-16, 1 file, -1/+1)
| | | | | | | | | | | | | Before finding best delta combinations, we sort objects by name hash, then by size, then by their position in memory. Then we walk the list backwards to test delta candidates. We hope that a bigger size usually means a newer object. But a bigger address in memory does not mean a newer object. So the last comparison must be reversed. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
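The ordering described above boils down to a comparator over an array of entry pointers, roughly as sketched below (illustrative struct; the real comparator also considers object type and preferred-base status). The assumption is that entries are allocated in traversal order, so a lower address means an earlier-seen, i.e. newer, object, and the fix flips the sign of that final tie-breaker:

    #include <stdlib.h>

    struct entry {
        unsigned int name_hash;
        unsigned long size;
    };

    static int type_size_cmp(const void *a_, const void *b_)
    {
        const struct entry *a = *(const struct entry * const *)a_;
        const struct entry *b = *(const struct entry * const *)b_;

        if (a->name_hash != b->name_hash)
            return a->name_hash > b->name_hash ? -1 : 1;
        if (a->size != b->size)
            return a->size > b->size ? -1 : 1;  /* bigger (hopefully newer) first */
        return a < b ? -1 : (a > b);            /* reversed tie-breaker: newest first */
    }

    /* used as: qsort(list, nr, sizeof(struct entry *), type_size_cmp); */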
* pack-objects: optimize preferred base handling a bit  (Nicolas Pitre, 2007-04-16, 1 file, -15/+12)
| | | | | | | | Let's avoid some cycles when there is no base to test against, and avoid unnecessary object lookups. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* clean up add_object_entry()  (Nicolas Pitre, 2007-04-11, 1 file, -27/+25)
| | | | | | | | This function used to call locate_object_entry_hash() _twice_ per added object while only once should suffice. Let's reorganize that code a bit. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* tests for various pack index features  (Nicolas Pitre, 2007-04-11, 1 file, -0/+146)
| | | | | | | | | | | | This is a fairly complete list of tests for various aspects of pack index versions 1 and 2. Tests on index v2 include 32-bit and 64-bit offsets, as well as a nice demonstration of the flawed repacking integrity checks that index version 2 intends to solve over index version 1 with the per-object CRC. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* use test-genrandom in tests instead of /dev/urandom  (Nicolas Pitre, 2007-04-11, 1 file, -1/+1)
| | | | | | This way tests are completely deterministic and possibly more portable. Signed-off-by: Nicolas Pitre <nico@cam.org>
* simple random data generator for tests  (Nicolas Pitre, 2007-04-11, 3 files, -2/+40)
| | | | | | | | | | | | | Reliance on /dev/urandom produces test vectors that are, well, random. This can cause problems impossible to track down when the data is different from one test invocation to another. The goal is not to have random data to test, but rather to have a convenient way to create sets of large files with non-compressible and non-deltifiable data in a reproducible way. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
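The generator only has to be cheap and repeatable, not cryptographically random; a minimal sketch of the idea (the constants and argument handling of the actual test-genrandom may differ):

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        unsigned long next = 0, count;
        const char *seed = argc > 1 ? argv[1] : "seed";

        while (*seed)                    /* mix the seed string in */
            next = next * 11 + *seed++;

        count = argc > 2 ? strtoul(argv[2], NULL, 10) : 1024;
        while (count--) {                /* plain linear congruential generator */
            next = next * 1103515245 + 12345;
            putchar((next >> 16) & 0xff);
        }
        return 0;
    }

The same seed string and count always produce the same byte stream, which is exactly what the pack tests need.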
* validate reused pack data with CRC when possible  (Nicolas Pitre, 2007-04-10, 1 file, -12/+34)
| | | | | | | | | This replaces the inflate validation with a CRC validation when reusing data from a pack which uses index version 2. That makes repacking much safer against corruptions, and it should be a bit faster too. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
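With a version-2 index the check becomes a straight CRC comparison over the still-deflated bytes, roughly as in this hedged sketch (illustrative names; the recorded value comes from the index):

    #include <stdint.h>
    #include <zlib.h>

    static int reused_object_is_valid(const unsigned char *raw, uInt len,
                                      uint32_t crc_from_index)
    {
        uLong crc = crc32(0L, Z_NULL, 0);   /* zlib's CRC32 initial value */

        crc = crc32(crc, raw, len);
        return (uint32_t)crc == crc_from_index;
    }

No inflation is involved, which is where the speedup comes from.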
* allow forcing index v2 and 64-bit offset threshold  (Nicolas Pitre, 2007-04-10, 2 files, -6/+32)
| | | | | | | | This is necessary for testing the new capabilities in some automated way without having an actual 4GB+ pack. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* pack-redundant.c: learn about index v2  (Nicolas Pitre, 2007-04-10, 1 file, -20/+27)
| | | | | | | | | | | Initially the conversion was made using nth_packed_object_sha1(), which made this file completely index version agnostic. Unfortunately the overhead was quite significant, so I went back to raw index walking, but with selectable base and step values, which brought back performance similar to the original. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
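The "selectable base and step values" amount to knowing where the first SHA1 sits in the mapped index and how far apart consecutive SHA1s are. A sketch under the index layouts described in the surrounding commits (illustrative names):

    #include <stddef.h>

    static void sha1_walk_params(unsigned version, const unsigned char *idx_map,
                                 const unsigned char **first, size_t *step)
    {
        if (version == 1) {
            /* v1: 256 x 4-byte fan-out, then {4-byte offset, 20-byte SHA1} records */
            *first = idx_map + 256 * 4 + 4;
            *step = 24;
        } else {
            /* v2: 8-byte header, fan-out, then a contiguous 20-byte SHA1 table */
            *first = idx_map + 8 + 256 * 4;
            *step = 20;
        }
    }

A caller can then walk p = first; p += step over all objects without caring which index version it is looking at.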
* show-index.c: learn about index v2  (Nicolas Pitre, 2007-04-10, 1 file, -9/+59)
| | | | | | | | When index v2 is encountered, the CRC32 of each object is also displayed in parentheses at the end of the line. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* sha1_file.c: learn about index version 2  (Nicolas Pitre, 2007-04-10, 1 file, -29/+89)
| | | | | | | | | | | | | With this patch, packs larger than 4GB are usable, even on a 32-bit machine (at least on Linux). If off_t is not large enough to deal with a large pack then die() is called instead of attempting to use the pack and producing garbage. This was tested with an 8GB pack specially created for the occasion on a 32-bit machine. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
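A read-side sketch of how an offset comes out of a version-2 index (parameter names are made up; a real 32-bit build would additionally die() when the result does not fit in off_t, as described above):

    #include <stdint.h>
    #include <arpa/inet.h>   /* ntohl() */

    static uint64_t nth_offset_v2(const uint32_t *ofs32, const uint32_t *ofs64,
                                  uint32_t nth)
    {
        uint32_t off = ntohl(ofs32[nth]);

        if (!(off & 0x80000000))
            return off;                  /* fits in 31 bits */
        off &= 0x7fffffff;               /* low bits index the 8-byte table */
        return ((uint64_t)ntohl(ofs64[2 * off]) << 32) | ntohl(ofs64[2 * off + 1]);
    }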
* index-pack: learn about pack index version 2  (Nicolas Pitre, 2007-04-10, 1 file, -9/+57)
| | | | | | | | | | | Like the previous patch, but for index-pack. [ There is quite some code duplication between pack-objects and index-pack for generating a pack index (and fast-import as well, I suppose). This should be reworked into a common function eventually. But not now. ] Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* pack-objects: learn about pack index version 2  (Nicolas Pitre, 2007-04-10, 1 file, -12/+87)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Pack index version 2 goes as follows: - 8 bytes of header with signature and version. - 256 entries of 4-byte first-level fan-out table. - Table of sorted 20-byte SHA1 records for each object in pack. - Table of 4-byte CRC32 entries for raw pack object data. - Table of 4-byte offset entries for objects in the pack if offset is representable with 31 bits or less, otherwise it is an index into the next table, with the top bit set. - Table of 8-byte offset entries indexed from the previous table for offsets which are 32 bits or more (optional). - 20-byte SHA1 checksum of sorted object names. - 20-byte SHA1 checksum of the above. The object SHA1 table is all contiguous, so a future pack format that contained this table directly wouldn't require big changes to the code. It is also tighter for slightly better cache locality when looking up entries. Support for large packs exceeding 31 bits in size won't impose any index size bloat for packs within that range that don't need a 64-bit offset. And because newer objects, which are likely to be the most frequently used, are located at the beginning of the pack, they won't pay the 64-bit offset lookup at run time either, even if the pack is large. Right now an index version 2 is created only when the biggest offset in a pack reaches 31 bits. It might be a good idea to always use index version 2 eventually, to benefit from the CRC32 it contains when reusing pack data while repacking. [jc: with the "oops" fix to keep track of the last offset correctly] Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
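The two offset tables in the list above fit together as in this write-side sketch (illustrative names, not the actual write_index_file(); the caller is assumed to have sized both arrays appropriately):

    #include <stdint.h>
    #include <arpa/inet.h>   /* htonl() */

    struct idx_v2_offsets {
        uint32_t *ofs32;   /* one entry per object, in SHA1 order */
        uint32_t *ofs64;   /* two 4-byte halves per large offset */
        uint32_t nr64;     /* entries used in the 8-byte table */
    };

    static void record_offset(struct idx_v2_offsets *t, uint32_t nth, uint64_t offset)
    {
        if (offset < 0x80000000ULL) {
            t->ofs32[nth] = htonl((uint32_t)offset);   /* representable in 31 bits */
            return;
        }
        /* top bit set: the remaining bits point into the 8-byte table */
        t->ofs32[nth] = htonl(0x80000000U | t->nr64);
        t->ofs64[2 * t->nr64]     = htonl((uint32_t)(offset >> 32));
        t->ofs64[2 * t->nr64 + 1] = htonl((uint32_t)(offset & 0xffffffffU));
        t->nr64++;
    }

Packs whose offsets all stay below 2GB therefore pay nothing for the large-offset support.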
* compute object CRC32 with index-pack  (Nicolas Pitre, 2007-04-10, 1 file, -3/+13)
| | | | | | | Same as the previous patch, but for index-pack. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* compute a CRC32 for each object as stored in a pack  (Nicolas Pitre, 2007-04-10, 3 files, -0/+24)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The most important optimization for performance when repacking is the ability to reuse data from a previous pack as is and bypass any delta or even SHA1 computation by simply copying the raw data from one pack to another directly. The problem with this is that any data corruption within a copied object would go unnoticed and the new (repacked) pack would be self-consistent with its own checksum despite containing a corrupted object. This is a real issue that already happened at least once in the past. In some attempt to prevent this, we validate the copied data by inflating it and making sure no error is signaled by zlib. But this is still not perfect, as a significant portion of a pack's content is made of object headers and references to delta base objects which are not deflated and therefore not validated when repacking, actually making pack data reuse not as safe as it could be. Of course a full SHA1 validation could be performed, but that implies full data inflating and delta replaying which is extremely costly, a cost the data reuse optimization was designed to avoid in the first place. So the best solution to this is simply to store a CRC32 of the raw pack data for each object in the pack index. This way any object in a pack can be validated before being copied as is into another pack, including the header and any other non-deflated data. Why CRC32 instead of a faster checksum like Adler32? Quoting Wikipedia: Jonathan Stone discovered in 2001 that Adler-32 has a weakness for very short messages. He wrote "Briefly, the problem is that, for very short packets, Adler32 is guaranteed to give poor coverage of the available bits. Don't take my word for it, ask Mark Adler. :-)" The problem is that sum A does not wrap for short messages. The maximum value of A for a 128-byte message is 32640, which is below the value 65521 used by the modulo operation. An extended explanation can be found in RFC 3309, which mandates the use of CRC32 instead of Adler-32 for SCTP, the Stream Control Transmission Protocol. In the context of a GIT pack, we have lots of small objects, especially deltas, which are likely to be quite small and in a size range for which Adler32 is deemed not to be sufficient. Another advantage of CRC32 is the possibility of recovery from certain types of small corruptions like single-bit errors, which are the most probable type of corruption. OK, what this patch does is to compute the CRC32 of each object written to a pack within pack-objects. It is not written to the index yet and it is obviously not validated when reusing pack data yet either. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
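The bookkeeping itself is small, since zlib already provides crc32(); a sketch assuming the pack writer funnels every byte of an object's raw representation (header, base reference and deflated payload alike) through one place:

    #include <stdint.h>
    #include <zlib.h>

    static uLong object_crc_begin(void)
    {
        return crc32(0L, Z_NULL, 0);     /* standard CRC32 initial value */
    }

    static uLong object_crc_feed(uLong crc, const void *buf, uInt len)
    {
        return crc32(crc, buf, len);     /* accumulate over raw pack bytes */
    }

    static uint32_t object_crc_end(uLong crc)
    {
        return (uint32_t)crc;            /* the value destined for the index */
    }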
* add overflow tests on pack offset variables  (Nicolas Pitre, 2007-04-10, 3 files, -16/+34)
| | | | | | | | | | | | Change a few size and offset variables to more appropriate types, then add overflow tests on those offsets. This prevents any bad data from being generated/processed if off_t happens to not be large enough to handle some big packs. Better safe than sorry. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* make overflow test on delta base offset work regardless of variable size  (Nicolas Pitre, 2007-04-10, 5 files, -4/+12)
| | | | | | | | | | | | | This patch introduces the MSB() macro to obtain the desired number of most significant bits from a given variable independently of the variable type. It is then used to better implement the overflow test on the OBJ_OFS_DELTA base offset variable with the property of always working correctly regardless of the type/size of that variable. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
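The idea behind such a macro (the exact definition in the patch may differ) is to mask off the top bits of a value whatever its width, so the same overflow check works for a 32-bit or a 64-bit offset variable:

    #include <limits.h>   /* CHAR_BIT */

    #define bitsizeof(x)  (CHAR_BIT * sizeof(x))
    #define MSB(x, bits)  ((x) & (~0ULL << (bitsizeof(x) - (bits))))

    /*
     * Example: while accumulating an OBJ_OFS_DELTA base offset 7 bits
     * at a time, "if (MSB(base_offset, 7)) die(...)" detects, before
     * the next left shift, that the shift would lose bits -- whether
     * base_offset is an unsigned long or an off_t.
     */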
* get rid of num_packed_objects()  (Nicolas Pitre, 2007-04-10, 7 files, -22/+17)
| | | | | | | | | | | | | The coming index format change doesn't allow for the number of objects to be determined from the size of the index file directly. Instead, let's initialize a field in the packed_git structure with the object count when the index is validated, since the count is always known at that point. While at it, let's reorder some struct packed_git fields to avoid padding due to the 64-bit alignment needed by some of them. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
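A sketch of where the stored count can come from at index validation time (illustrative names, not the actual packed_git fields):

    #include <stdint.h>
    #include <stddef.h>
    #include <arpa/inet.h>   /* ntohl() */

    static uint32_t index_object_count(unsigned version, size_t idx_size,
                                       const uint32_t *fanout)
    {
        if (version == 1)
            /* v1: 1024-byte fan-out + 24 bytes per object + 2 x 20-byte trailer */
            return (uint32_t)((idx_size - 256 * 4 - 40) / 24);
        return ntohl(fanout[255]);   /* v2: fan-out entries are cumulative counts */
    }

With index v2 the file size alone no longer determines the count, because the 64-bit offset table is variable length, hence the cached field.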
* git-archive: make tar the default format  (René Scharfe, 2007-04-09, 3 files, -5/+12)
| | | | | | | | As noted by Junio, --format=tar should be assumed if no format was specified. Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx> Signed-off-by: Junio C Hamano <junkio@cox.net>
* Merge branch 'jc/push'  (Junio C Hamano, 2007-04-08, 1 file, -7/+10)
|\ | | | | | | | | | | * jc/push: git-push to multiple locations does not stop at the first failure; git-push reports the URL after failing.
| * git-push to multiple locations does not stop at the first failure  (Junio C Hamano, 2007-04-07, 1 file, -7/+8)
| | | | | | | | | | | | | | | | | | | | | | When pushing into multiple repositories with git push, via multiple URLs in .git/remotes/$shorthand or multiple url variables in a [remote "$shorthand"] section, we used to stop upon the first failure. Continue the operation and report the failure at the end. Signed-off-by: Junio C Hamano <junkio@cox.net>
| * git-push reports the URL after failing.  (Junio C Hamano, 2007-04-07, 1 file, -0/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This came up on #git when somebody was getting 'unable to create ./objects/tmp_oXXXX' but swore he had write permission to that directory. It turned out that the repository URL had been changed and he was accessing a repository he no longer had write permission to. I am not sure how much this would have helped somebody who believed he was accessing one location when the permission of that location was changed while he was looking the other way, though. But giving more information on the error path would be better, and the next change would be helped with this as well. Signed-off-by: Junio C Hamano <junkio@cox.net>
* | Merge branch 'jc/merge-subtree'  (Junio C Hamano, 2007-04-08, 7 files, -3/+373)
|\ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | * jc/merge-subtree: A new merge strategy 'subtree'. It is safe to merge this early as this is a feature that the user explicitly needs to ask for and would not trigger otherwise. A known issue with the current implementation is that the subtree matching heuristic is very stupid. It could run ls-tree twice and try to count intersection. Giving it a wider audience would help it to get improved by motivated volunteers. Signed-off-by: Junio C Hamano <junkio@cox.net>
| * | A new merge strategy 'subtree'.  (Junio C Hamano, 2007-04-07, 7 files, -3/+373)
| |/ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This merge strategy largely piggy-backs on git-merge-recursive. When merging trees A and B, if B corresponds to a subtree of A, B is first adjusted to match the tree structure of A, instead of reading the trees at the same level. This adjustment is also done to the common ancestor tree. If you are pulling updates from the git-gui repository into the git.git repository, the root level of the former corresponds to the git-gui/ subdirectory of the latter. The tree object of git-gui's toplevel is wrapped in a fake tree object, whose sole entry has the name 'git-gui' and records the object name of the true tree, before being used by the 3-way merge code. If you are merging the other way, only the git-gui/ subtree of git.git is extracted and merged into git-gui's toplevel. The detection of the corresponding subtree is done by comparing the pathnames and types in the toplevel of the tree. Heuristics galore! That's the git way ;-). Signed-off-by: Junio C Hamano <junkio@cox.net>
* | Merge branch 'js/fetch-progress'  (Junio C Hamano, 2007-04-08, 1 file, -4/+10)
|\ \ | | | | | | | | | | | | * js/fetch-progress: git-fetch: add --quiet
| * | git-fetch: add --quiet  (Junio C Hamano, 2007-03-09, 1 file, -4/+10)
| | | | | | | | | | | | | | | | | | | | | Pass it to the underlying fetch-pack, and also have it affect whether -v is passed to http-fetch and rsync. Signed-off-by: Junio C Hamano <junkio@cox.net>
* | | Merge branch 'maint'  (Junio C Hamano, 2007-04-08, 2 files, -22/+31)
|\ \ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | * maint: Add Documentation/cmd-list.made to .gitignore; git-svn: fix log command to avoid infinite loop on long commit messages; git-svn: dcommit/rebase confused by patches with git-svn-id: lines; git-svn: bail out on incorrect command-line options
| * | | Add Documentation/cmd-list.made to .gitignore  (Junio C Hamano, 2007-04-08, 1 file, -0/+1)
| | | | | | | | | | | | | | | | | | | | | | | | Noticed by Randal L. Schwartz. Signed-off-by: Junio C Hamano <junkio@cox.net>
| * | | git-svn: fix log command to avoid infinite loop on long commit messages  (Eric Wong, 2007-04-08, 1 file, -2/+9)
| | | | | | | | | | | | | | | | | | | | This bug has been around since the conversion to use the Git.pm library back in October or November. Eventually I'd like "git rev-list/log" to have the option to not truncate overly long messages. Signed-off-by: Eric Wong <normalperson@yhbt.net> Signed-off-by: Junio C Hamano <junkio@cox.net>
| * | | git-svn: dcommit/rebase confused by patches with git-svn-id: lines  (Eric Wong, 2007-04-08, 1 file, -20/+21)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When patches are merged from another git-svn managed branch, they will have the git-svn-id: metadata line in them (generated by git-format-patch). When doing rebase or dcommit via git-svn, this would cause git-svn to find the wrong upstream branch. We now verify that the commit is consistent with the value in the .rev_db file. Signed-off-by: Eric Wong <normalperson@yhbt.net> Signed-off-by: Junio C Hamano <junkio@cox.net>
| * | | git-svn: bail out on incorrect command-line options  (Eric Wong, 2007-04-08, 1 file, -1/+4)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | "git svn log" is the only command that needs the pass-through option in Getopt::Long; otherwise we will bail out and let the user know something is wrong. Also, avoid printing out unaccepted mixed-case options (that are reserved for the command-line) such as --useSvmProps in the usage() function. Signed-off-by: Eric Wong <normalperson@yhbt.net> Signed-off-by: Junio C Hamano <junkio@cox.net>
* | | | Start 1.5.2 cycle by preparing RelNotes for it.  (Junio C Hamano, 2007-04-07, 2 files, -1/+77)
| | | | | | | | | | | | | | | | Signed-off-by: Junio C Hamano <junkio@cox.net>
* | | | Merge branch 'jc/read-tree-df' (early part)  (Junio C Hamano, 2007-04-07, 4 files, -28/+167)
|\ \ \ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | * 'jc/read-tree-df' (early part): Fix switching to a branch with D/F when current branch has file D. Fix twoway_merge that passed d/f conflict marker to merged_entry(). Fix read-tree --prefix=dir/. unpack-trees: get rid of *indpos parameter. unpack_trees.c: pass unpack_trees_options structure to keep_entry() as well. add_cache_entry(): removal of file foo does not conflict with foo/bar
| * | | | Fix switching to a branch with D/F when current branch has file D.  (Junio C Hamano, 2007-04-04, 1 file, -1/+112)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This loosens the over-eager verify_absent() check that gets upset when it finds directory D in the current working tree while switching to a branch that has a file there. The check needs to make sure that we do not lose precious working tree files as a result of removing directory D and replacing it with the file from the other branch, which is a tad expensive, but this is a less common case. Signed-off-by: Junio C Hamano <junkio@cox.net>
| * | | | Fix twoway_merge that passed d/f conflict marker to merged_entry().  (Junio C Hamano, 2007-04-04, 1 file, -2/+8)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When switching from one tree to another, we should not send a marker that says "this file does not exist in the new tree -- I am a placeholder to tell you that, and not a real blob" down to merged_entry() as the result of the merge.
| * | | | Fix read-tree --prefix=dir/.  (Junio C Hamano, 2007-04-04, 1 file, -0/+1)
| | | | | | | | | | | | | | | | | | | | The existing code is not wrong per se, but it started scanning the index from a location that does not match the tree being read, and wasted cycles. Signed-off-by: Junio C Hamano <junkio@cox.net>
| * | | | unpack-trees: get rid of *indpos parameter.  (Junio C Hamano, 2007-04-04, 2 files, -9/+8)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This variable keeps track of which entry in the original index the traversal is looking at, and belongs to the unpack_trees_options structure along with other traversal status information. Signed-off-by: Junio C Hamano <junkio@cox.net>
| * | | | unpack_trees.c: pass unpack_trees_options structure to keep_entry() as well.  (Junio C Hamano, 2007-04-04, 1 file, -7/+7)
| | | | | | | | | | | | | | | | | | | | | | | | | Other decision functions, deleted_entry() and merged_entry(), take one as their parameter, and this function should too. I'll be introducing a separate index to build the result in, and am planning to pass it as part of the structure. Signed-off-by: Junio C Hamano <junkio@cox.net>
| * | | | add_cache_entry(): removal of file foo does not conflict with foo/bar  (Junio C Hamano, 2007-04-04, 1 file, -9/+31)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Similarly, removal of file foo/bar does not conflict with a file foo. Signed-off-by: Junio C Hamano <junkio@cox.net>
* | | | | Merge branch 'maint'  (Junio C Hamano, 2007-04-07, 2 files, -4/+50)
|\ \ \ \ \ | | |/ / / | |/| | / | |_|_|/ |/| | | | | | | * maint: Prepare for 1.5.1.1; cvsserver: small corrections to asciidoc documentation
| * | | Prepare for 1.5.1.1  (Junio C Hamano, 2007-04-07, 2 files, -1/+47)
| | | | | | | | | | | | | | | | Signed-off-by: Junio C Hamano <junkio@cox.net>
| * | | cvsserver: small corrections to asciidoc documentation  (Frank Lichtenheld, 2007-04-07, 1 file, -4/+4)
| | | | | | | | | | | | | | | | | | | | Fix a typo: s/Not/Note/. Some formatting fixes: use ` ` syntax for all filenames and ' ' syntax for all command-line switches. Signed-off-by: Frank Lichtenheld <frank@lichtenheld.de> Signed-off-by: Junio C Hamano <junkio@cox.net>
* | | | Merge branch 'jc/index-output'  (Junio C Hamano, 2007-04-07, 13 files, -25/+66)
|\ \ \ \ | | | | | | | | | | | | | | | | | | | | | * jc/index-output: git-read-tree --index-output=<file>; _GIT_INDEX_OUTPUT: allow plumbing to output to an alternative index file. Conflicts: builtin-apply.c
| * | | | git-read-tree --index-output=<file>  (Junio C Hamano, 2007-04-03, 6 files, -16/+30)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This corrects the interface mistake of the previous one, and gives a command line parameter to the only plumbing command that currently needs it: "git-read-tree". We can add the calls to set_alternate_index_output() to other plumbing commands that update the index if/when needed. Signed-off-by: Junio C Hamano <junkio@cox.net>