path: root/builtin-pack-objects.c
Commit message | Author | Age | Files | Lines
...
* pack-objects: clean up list sorting | Nicolas Pitre | 2007-04-16 | 1 | -31/+22
Get rid of sort_comparator() as it imposes a run-time double indirect function call for little compile-time type checking gain. Also get rid of create_sorted_list() as it only has one user, which can just as well do its sorting locally. Eventually the list of deltifiable objects might be shorter than the whole object list.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: rework check_delta_limit usage | Nicolas Pitre | 2007-04-16 | 1 | -44/+32
Objects that have delta "children" from pack data reuse must consider the depth of their deepest child when they try to deltify themselves, so that those children do not become too deep. However, in the context of a "thin" pack, the delta child depth was skipped entirely on the presumption that the pack was always going to be exploded on the receiving end, hence the delta length wasn't an issue.

Now that we keep received packs as is and reuse pack data when repacking, those packs do contain delta chains that are longer than expected. Worse, those delta chains may even grow longer when the pack is further repacked into another thin pack for a subsequent transmission.

So this patch restores strict delta length even for thin packs, and it moves check_delta_limit() usage directly into the delta loop where it is needed. This way the delta_limit can be removed from struct object_entry as well. Oh, and the initial value was wrong too.

The progress_interval() function was moved to a more logical location in the process.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: equal objects in size should delta against newer objects | Nicolas Pitre | 2007-04-16 | 1 | -1/+1
Before finding the best delta combinations, we sort objects by name hash, then by size, then by their position in memory. Then we walk the list backwards to test delta candidates.

We hope that a bigger size usually means a newer object. But a bigger address in memory does not mean a newer object. So the last comparison must be reversed.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: optimize preferred base handling a bit | Nicolas Pitre | 2007-04-16 | 1 | -15/+12
Let's avoid some cycles when there is no base to test against, and avoid unnecessary object lookups.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* clean up add_object_entry() | Nicolas Pitre | 2007-04-11 | 1 | -27/+25
This function used to call locate_object_entry_hash() _twice_ per added object while only once should suffice. Let's reorganize that code a bit.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* validate reused pack data with CRC when possible | Nicolas Pitre | 2007-04-10 | 1 | -12/+34
This replaces the inflate validation with a CRC validation when reusing data from a pack which uses index version 2. That makes repacking much safer against corruptions, and it should be a bit faster too.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* allow forcing index v2 and 64-bit offset threshold | Nicolas Pitre | 2007-04-10 | 1 | -3/+17
This is necessary for testing the new capabilities in some automated way without having an actual 4GB+ pack.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: learn about pack index version 2 | Nicolas Pitre | 2007-04-10 | 1 | -12/+87
Pack index version 2 goes as follows:

- 8 bytes of header with signature and version.
- 256 entries of 4-byte first-level fan-out table.
- Table of sorted 20-byte SHA1 records for each object in pack.
- Table of 4-byte CRC32 entries for raw pack object data.
- Table of 4-byte offset entries for objects in the pack if the offset is representable with 31 bits or less, otherwise it is an index into the next table with the top bit set.
- Table of 8-byte offset entries indexed from the previous table for offsets which are 32 bits or more (optional).
- 20-byte SHA1 checksum of sorted object names.
- 20-byte SHA1 checksum of the above.

The object SHA1 table is all contiguous, so a future pack format that contained this table directly wouldn't require big changes to the code. It is also tighter for slightly better cache locality when looking up entries.

Support for large packs exceeding 31 bits in size won't impose an index size bloat for packs within that range that don't need a 64-bit offset. And because newer objects, which are likely to be the most frequently used, are located at the beginning of the pack, they won't pay the 64-bit offset lookup at run time either, even if the pack is large.

Right now an index version 2 is created only when the biggest offset in a pack reaches 31 bits. It might be a good idea to always use index version 2 eventually, to benefit from the CRC32 it contains when reusing pack data while repacking.

[jc: with the "oops" fix to keep track of the last offset correctly]

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

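For illustration only, here is a minimal sketch of how an object's pack offset could be looked up against the layout listed above. The helper names (get_be32, idx_v2_offset) are made up for this note and the code is not taken from the patch; it merely restates the v2 table arithmetic, including the 31-bit/64-bit offset split.

    #include <stddef.h>
    #include <stdint.h>

    /* read a big-endian 32-bit value from possibly unaligned memory */
    static uint32_t get_be32(const unsigned char *p)
    {
            return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                   ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }

    /* Hypothetical lookup of the pack offset of the n-th object in a
     * version 2 .idx file mmap'ed at "index", holding "nr" objects. */
    static uint64_t idx_v2_offset(const unsigned char *index, uint32_t nr, uint32_t n)
    {
            const unsigned char *sha1s = index + 8 + 256 * 4;   /* header + fan-out */
            const unsigned char *crcs  = sha1s + (size_t)nr * 20;
            const unsigned char *off32 = crcs  + (size_t)nr * 4;
            const unsigned char *off64 = off32 + (size_t)nr * 4;
            uint32_t o = get_be32(off32 + (size_t)n * 4);

            if (!(o & 0x80000000))
                    return o;                    /* fits in 31 bits */
            /* top bit set: the low 31 bits index the 8-byte offset table */
            off64 += (uint64_t)(o & 0x7fffffff) * 8;
            return ((uint64_t)get_be32(off64) << 32) | get_be32(off64 + 4);
    }
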
* compute a CRC32 for each object as stored in a pack | Nicolas Pitre | 2007-04-10 | 1 | -0/+6
The most important optimization for performance when repacking is the ability to reuse data from a previous pack as is and bypass any delta or even SHA1 computation by simply copying the raw data from one pack to another directly. The problem with this is that any data corruption within a copied object would go unnoticed and the new (repacked) pack would be self-consistent with its own checksum despite containing a corrupted object. This is a real issue that already happened at least once in the past.

In some attempt to prevent this, we validate the copied data by inflating it and making sure no error is signaled by zlib. But this is still not perfect, as a significant portion of a pack's content is made of object headers and references to delta base objects which are not deflated and therefore not validated when repacking, which still leaves pack data reuse not as safe as it could be.

Of course a full SHA1 validation could be performed, but that implies full data inflating and delta replaying, which is extremely costly; that cost is exactly what the data reuse optimization was designed to avoid in the first place.

So the best solution to this is simply to store a CRC32 of the raw pack data for each object in the pack index. This way any object in a pack can be validated before being copied as is into another pack, including its header and any other non-deflated data.

Why CRC32 instead of a faster checksum like Adler32? Quoting Wikipedia:

    Jonathan Stone discovered in 2001 that Adler-32 has a weakness for very short messages. He wrote "Briefly, the problem is that, for very short packets, Adler32 is guaranteed to give poor coverage of the available bits. Don't take my word for it, ask Mark Adler. :-)" The problem is that sum A does not wrap for short messages. The maximum value of A for a 128-byte message is 32640, which is below the value 65521 used by the modulo operation. An extended explanation can be found in RFC 3309, which mandates the use of CRC32 instead of Adler-32 for SCTP, the Stream Control Transmission Protocol.

In the context of a GIT pack, we have lots of small objects, especially deltas, which are likely to be quite small and in a size range for which Adler32 is deemed not to be sufficient. Another advantage of CRC32 is the possibility of recovery from certain types of small corruptions like single-bit errors, which are the most probable type of corruption.

OK, what this patch does is compute the CRC32 of each object written to a pack within pack-objects. It is not written to the index yet, and it is obviously not validated when reusing pack data yet either.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

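As a rough sketch of what such a per-object checksum amounts to (not the patch's code; the helper name is hypothetical), zlib's own crc32() can be run over an object's raw on-disk bytes, header and deflated payload included:

    #include <stddef.h>
    #include <stdint.h>
    #include <zlib.h>

    /* Hypothetical helper: CRC32 of one object's raw representation in
     * the pack, i.e. the bytes exactly as they sit on disk, so that a
     * later copy of those same bytes can be verified without inflating. */
    static uint32_t pack_entry_crc32(const unsigned char *raw, size_t len)
    {
            uLong crc = crc32(0L, Z_NULL, 0);        /* zlib's canonical seed */
            return (uint32_t)crc32(crc, raw, (uInt)len);
    }
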
* add overflow tests on pack offset variables | Nicolas Pitre | 2007-04-10 | 1 | -6/+13
Change a few size and offset variables to more appropriate types, then add overflow tests on those offsets. This prevents any bad data from being generated/processed if off_t happens to not be large enough to handle some big packs.

Better be safe than sorry.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* make overflow test on delta base offset work regardless of variable size | Nicolas Pitre | 2007-04-10 | 1 | -1/+1
This patch introduces the MSB() macro to obtain the desired number of most significant bits from a given variable independently of the variable type. It is then used to better implement the overflow test on the OBJ_OFS_DELTA base offset variable, with the property of always working correctly regardless of the type/size of that variable.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

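A sketch of what such an MSB-style macro can look like is below; the exact definition in git may differ, so treat the names and the usage comment as illustrative only:

    #include <limits.h>

    /* number of bits in x's type */
    #define BITSIZEOF(x)  (CHAR_BIT * sizeof(x))

    /* non-zero if any of the top "bits" most significant bits of x are set,
     * whatever integer type x happens to be */
    #define MSB(x, bits)  ((x) & (~0ULL << (BITSIZEOF(x) - (bits))))

    /* illustrative use: before shifting a base offset left by 7 bits while
     * decoding an OBJ_OFS_DELTA header, bail out if that shift would lose
     * high bits:
     *
     *     if (MSB(base_offset, 7))
     *             return 0;        // overflow
     */
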
* get rid of num_packed_objects() | Nicolas Pitre | 2007-04-10 | 1 | -2/+2
The coming index format change doesn't allow for the number of objects to be determined from the size of the index file directly. Instead, let's initialize a field in the packed_git structure with the object count when the index is validated, since the count is always known at that point.

While at it, let's reorder some struct packed_git fields to avoid padding due to needed 64-bit alignment for some of them.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* clean up and optimize nth_packed_object_sha1() usage | Nicolas Pitre | 2007-04-05 | 1 | -1/+1
Let's avoid the open-coded pack index reference in pack-objects and use nth_packed_object_sha1() instead. This will help encapsulate index format differences in one place.

And while at it, there is no reason to copy SHA1s over and over when a direct pointer into the index will do just fine.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* Initialize tree descriptors with a helper function rather than by hand. | Linus Torvalds | 2007-03-21 | 1 | -4/+2
This removes slightly more lines than it adds, but the real reason for doing this is that future optimizations will require more setup of the tree descriptor, and so we want to do it in one place.

Also renamed the "desc.buf" field to "desc.buffer" just to trigger compiler errors for old-style manual initializations, making sure I didn't miss anything.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* Remove "pathlen" from "struct name_entry"Linus Torvalds2007-03-211-1/+1
| | | | | | | | | Since we have the "tree_entry_len()" helper function these days, and don't need to do a full strlen(), there's no point in saving the path length - it's just redundant information. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
* [PATCH] clean up pack index handling a bit | Nicolas Pitre | 2007-03-16 | 1 | -6/+8
Especially with the new index format to come, it is more appropriate to encapsulate more into check_packed_git_idx() and assume less of the index format in struct packed_git.

To that effect, index_base is renamed to index_data with void * type, so it is not used directly but through other pointers initialized with it. This allows for a couple of pointer cast removals, as well as providing a better generic name to grep for when adding support for new index versions or formats.

And index_data is declared const too while at it.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* Use off_t in pack-objects/fast-import when we mean an offset | Shawn O. Pearce | 2007-03-07 | 1 | -26/+26
Always use an off_t value in pack-objects anytime we are dealing with an offset to some data within a packfile.

Also fixed a minor uintmax_t that was incorrectly defined before.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* Use uint32_t for pack-objects counters. | Shawn O. Pearce | 2007-03-07 | 1 | -33/+30
As we technically try to support up to a maximum of 2**32-1 objects in a single packfile, we should act like it and use unsigned 32-bit integers for all of our object counts and progress output.

This change does not modify everything in pack-objects that probably needs to change to fully support the maximum of 2**32-1 objects. I'm intentionally breaking the improvements into slightly smaller commits to make them easier to follow.

No logic change should be occurring here, with the exception that some comparisons will now work properly when the number of objects exceeds 2**31-1.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* convert object type handling from a string to a number | Nicolas Pitre | 2007-02-27 | 1 | -17/+15
We currently have two parallel notations for dealing with object types in the code: a string and a numerical value. One of them is obviously redundant, and the most used one requires more stack space and a bunch of strcmp() calls all over the place.

This is an initial step for the removal of the version using a char array found in object reading code paths. The patch is unfortunately large, but there is no sane way to split it in smaller parts without breaking the system.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* formalize typename(), and add its reverse type_from_string() | Nicolas Pitre | 2007-02-27 | 1 | -12/+1
Sometimes typename() is used, sometimes type_names[] is accessed directly. Let's enforce typename() all the time, which allows for validating the type.

Also let's add a function to go from a name to a type and use it instead of manual memcpy() when appropriate.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

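The two directions the message describes could look roughly like the sketch below. The enum values match the numeric object types used in the pack format, but the function bodies are illustrative rather than the actual git implementation (which, for instance, dies on an invalid name):

    #include <stdio.h>
    #include <string.h>

    enum object_type { OBJ_NONE = 0, OBJ_COMMIT, OBJ_TREE, OBJ_BLOB, OBJ_TAG };

    static const char *type_names[] = { "none", "commit", "tree", "blob", "tag" };

    /* number -> name, with bounds checking */
    static const char *typename(unsigned int type)
    {
            return type < sizeof(type_names) / sizeof(type_names[0])
                    ? type_names[type] : NULL;
    }

    /* name -> number, replacing ad hoc memcmp()/memcpy() on the string form */
    static int type_from_string(const char *str)
    {
            unsigned int i;
            for (i = 1; i < sizeof(type_names) / sizeof(type_names[0]); i++)
                    if (!strcmp(str, type_names[i]))
                            return (int)i;
            fprintf(stderr, "invalid object type \"%s\"\n", str);
            return -1;
    }
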
* Merge branch 'maint' | Junio C Hamano | 2007-02-25 | 1 | -3/+9
* maint:
  Add Release Notes to prepare for 1.5.0.2
  Allow arbitrary number of arguments to git-pack-objects
  rerere: do not deal with symlinks.
  rerere: do not skip two conflicted paths next to each other.
  Don't modify CREDITS-FILE if it hasn't changed.

| * Allow arbitrary number of arguments to git-pack-objects | Roland Dreier | 2007-02-25 | 1 | -3/+9
If a repository ever gets in a situation where there are too many packs (more than 60 or so), perhaps because of frequent use of git-fetch -k or incremental git-repack, then it becomes impossible to fully repack the repository with git-repack -a. That command just dies with the cryptic message

    fatal: too many internal rev-list options

This message comes from git-pack-objects, which is passed one command line option like --unpacked=pack-<SHA1>.pack for each pack file to be repacked. However, the current code has a static limit of 64 command line arguments and just aborts if more arguments are passed to it.

Fix this by dynamically allocating the array of command line arguments, and doubling the size each time it overflows.

Signed-off-by: Roland Dreier <roland@digitalvampire.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

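A sketch of the growth strategy described above, for illustration only; the variable names mimic the style of the patch but are not lifted from it:

    #include <stdlib.h>

    static const char **rp_av;            /* rev-list argument vector */
    static int rp_ac, rp_ac_alloc;        /* used / allocated entries */

    /* append one rev-list argument, doubling the array when it fills up */
    static void add_rev_list_arg(const char *arg)
    {
            if (rp_ac >= rp_ac_alloc) {
                    rp_ac_alloc = rp_ac_alloc ? rp_ac_alloc * 2 : 64;
                    rp_av = realloc(rp_av, rp_ac_alloc * sizeof(*rp_av));
                    if (!rp_av)
                            exit(1);      /* out of memory */
            }
            rp_av[rp_ac++] = arg;
    }
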
* | prefixcmp(): fix-up mechanical conversion. | Junio C Hamano | 2007-02-20 | 1 | -3/+3
Previous step converted use of strncmp() with literal string mechanically even when the result is only used as a boolean:

    if (!strncmp("foo", arg, 3))  ==>  if (!(-prefixcmp(arg, "foo")))

This step manually cleans them up to read:

    if (!prefixcmp(arg, "foo"))

Signed-off-by: Junio C Hamano <junkio@cox.net>

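For reference, a minimal prefixcmp() consistent with this usage could be written as below; git's actual implementation differs in detail, so treat this as a sketch:

    #include <string.h>

    /* return 0 if "str" starts with "prefix", non-zero otherwise,
     * comparing only as many bytes as the prefix has */
    static int prefixcmp(const char *str, const char *prefix)
    {
            return strncmp(str, prefix, strlen(prefix));
    }

    /* so the cleaned-up call sites read naturally, e.g.:
     *     if (!prefixcmp(arg, "--window="))
     *             ...
     */
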
* | Mechanical conversion to use prefixcmp() | Junio C Hamano | 2007-02-20 | 1 | -3/+3
This mechanically converts strncmp() to use prefixcmp(), but only when the parameters match specific patterns, so that they can be verified easily. Leftovers from this will be fixed in a separate step, including idiotic conversions like

    if (!strncmp("foo", arg, 3))  =>  if (!(-prefixcmp(arg, "foo")))

This was done by using this script in px.perl

    #!/usr/bin/perl -i.bak -p
    if (/strncmp\(([^,]+), "([^\\"]*)", (\d+)\)/ && (length($2) == $3)) {
            s|strncmp\(([^,]+), "([^\\"]*)", (\d+)\)|prefixcmp($1, "$2")|;
    }
    if (/strncmp\("([^\\"]*)", ([^,]+), (\d+)\)/ && (length($1) == $3)) {
            s|strncmp\("([^\\"]*)", ([^,]+), (\d+)\)|(-prefixcmp($2, "$1"))|;
    }

and running:

    $ git grep -l strncmp -- '*.c' | xargs perl px.perl

Signed-off-by: Junio C Hamano <junkio@cox.net>

* Use fixed-size integers for .idx file I/O | Junio C Hamano | 2007-01-18 | 1 | -3/+3
This attempts to finish what Simon started in the previous commit.

Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: fix use of use_pack(). | Junio C Hamano | 2006-12-29 | 1 | -2/+5
The code calls use_pack() to make sure that the variably encoded offset fits in the mmap'ed window, but it forgot that the operation gives the pointer to the beginning of the asked region.

Signed-off-by: Junio C Hamano <junkio@cox.net>

* Fix random segfaults in pack-objects. | Shawn O. Pearce | 2006-12-29 | 1 | -3/+4
Junio noticed that 'non-trivial' pushes were failing if executed using the sliding window mmap changes. This was somewhat difficult to track down as the failure was appearing randomly.

It turns out this was a failure caused by the delta base reference (either ref or offset format) spanning over the end of a mmap window.

The error in pack-objects was that we were not recalling use_pack after the object header was unpacked, and therefore we did not get the promise of at least 20 bytes in the buffer for the delta base parsing. This would cause later memcmp() calls to walk into unassigned address space at the end of the window.

The reason Junio and I had a hard time tracking this down in current Git repositories is that we were both probably packing with offset deltas, which minimized the odds of the delta base reference spanning over the end of the mmap window. Stepping back and repacking with version 1.3.3 (which only supported reference deltas) increased the likelihood of seeing the bug.

The correct technique (as used in sha1_file.c) is to invoke use_pack() after unpack_object_header_gently to ensure we have enough data available for the delta base decoding.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* Loop over pack_windows when inflating/accessing data. | Shawn O. Pearce | 2006-12-29 | 1 | -6/+52
When multiple mmaps start getting used for all pack file access, it is not possible to get all data associated with a specific object in one contiguous memory region. This limitation prevents simply passing a single address and length to SHA1_Update or to inflate.

Instead we need to loop until we have processed all data of interest.

As we loop over the data we are always interested in reusing the same window 'cursor', as the prior window will no longer be of any use to us. This allows the use_pack() call to automatically decrement the use count of the prior window before setting up access for us to the next window.

Within each loop we need to make use of the available length output parameter of use_pack() to tell us how many bytes are available in the current memory region, as we cannot tell otherwise.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

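The looping pattern this describes could be sketched as follows. The packed_git/pack_window types, use_pack()/unuse_pack() (see the window-cursor commit further down) and SHA1_Update() are assumed from the surrounding code, and this is not the exact hunk from the patch:

    /* Feed "len" bytes of pack "p" starting at "offset" to a SHA-1
     * context, one mapped window at a time (illustrative sketch). */
    static void sha1_over_pack_region(struct packed_git *p, off_t offset,
                                      unsigned long len, SHA_CTX *ctx)
    {
            struct pack_window *w_curs = NULL;

            while (len) {
                    unsigned int avail;
                    unsigned char *in = use_pack(p, &w_curs, offset, &avail);
                    if (avail > len)
                            avail = (unsigned int)len;
                    SHA1_Update(ctx, in, avail);   /* or inflate(), crc32(), ... */
                    offset += avail;
                    len -= avail;
            }
            unuse_pack(&w_curs);
    }
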
* Replace use_packed_git with window cursors. | Shawn O. Pearce | 2006-12-29 | 1 | -9/+7
Part of the implementation concept of the sliding mmap window for pack access is to permit multiple windows per pack to be mapped independently. Since the inuse_cnt is associated with the mmap and not with the file, this value is in struct pack_window and needs to be incremented/decremented for each pack_window accessed by any code.

To facilitate that implementation we need to replace all uses of use_packed_git() and unuse_packed_git() with a different API that follows struct pack_window objects rather than struct packed_git.

The way this works is that when we need to start accessing a pack for the first time, we should set up a new window 'cursor' by declaring a local and setting it to NULL:

    struct pack_window *w_curs = NULL;

To obtain the memory region which contains a specific section of the pack file we invoke use_pack(), supplying the address of our current window cursor:

    unsigned int len;
    unsigned char *addr = use_pack(p, &w_curs, offset, &len);

The returned address `addr` will be the first byte at `offset` within the pack file. The optional variable len will also be updated with the number of bytes remaining following the address.

Multiple calls to use_pack() with the same window cursor will update the window cursor, moving it from one window to another when necessary. In this way each window cursor variable maintains only one struct pack_window in use at a time.

Finally, before exiting the scope which originally declared the window cursor, we must invoke unuse_pack() to unuse the current window (which may be different from the one that was first obtained from use_pack):

    unuse_pack(&w_curs);

This implementation is still not complete with regards to multiple windows, as only one window per pack file is supported right now.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* Refactor packed_git to prepare for sliding mmap windows. | Shawn O. Pearce | 2006-12-29 | 1 | -2/+2
The idea behind the sliding mmap window pack reader implementation is to have multiple mmap regions active against the same pack file, thereby allowing the process to mmap in only the active/hot sections of the pack and reduce overall virtual address space usage.

To implement this we need to refactor the mmap related data (pack_base, pack_use_cnt) out of struct packed_git and move them into a new struct pack_window. We are refactoring the code to support a single struct pack_window per packfile, thereby emulating the prior behavior of mmap'ing the entire pack file.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* Teach git-repack to preserve objects referred to by reflog entries. | Junio C Hamano | 2006-12-20 | 1 | -1/+2
This adds a new option --reflog to pack-objects and the revision machinery; do not bother documenting it for now, since this is only useful for local repacking.

When the option is passed, objects reachable from reflog entries are marked as interesting while computing the set of objects to pack.

Signed-off-by: Junio C Hamano <junkio@cox.net>

* simplify inclusion of system header files. | Junio C Hamano | 2006-12-20 | 1 | -2/+0
This is a mechanical clean-up of the way *.c files include system header files.

(1) sources under compat/, platform sha-1 implementations, and xdelta code are exempt from the following rules;

(2) the first #include must be "git-compat-util.h" or one of our own header files that includes it first (e.g. config.h, builtin.h, pkt-line.h);

(3) system headers that are included in "git-compat-util.h" need not be included in individual C source files;

(4) "git-compat-util.h" does not have to include subsystem-specific header files (e.g. expat.h).

Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: remove redundant status information | Nicolas Pitre | 2006-11-29 | 1 | -2/+4
The final 'nr_result' and 'written' values must always be the same, otherwise we're in deep trouble. So let's remove a redundant report.

And for paranoia's sake let's make sure those two variables are actually equal after all objects are written (one never knows).

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: tweak "do not even attempt delta" heuristics | Junio C Hamano | 2006-11-17 | 1 | -1/+3
The heuristics to give up deltification when the source and the target are both in the same pack hurts when we are repacking a subset of objects in the existing pack. This caused any incremental updates to use suboptimal packs. Tweak the heuristics to avoid this problem.

Signed-off-by: Junio C Hamano <junkio@cox.net>

* git-pack-objects progress flag documentation and cleanup | Nicolas Pitre | 2006-11-07 | 1 | -9/+10
This adds documentation for --progress and --all-progress, removes a duplicate --progress handling, and makes the usage string more readable.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* make git-push a bit more verbose | Nicolas Pitre | 2006-11-01 | 1 | -7/+11
Currently git-push displays progress status for the local packing of objects to send, but nothing once it starts to push them over the connection. Having progress status in that latter case is especially nice when pushing lots of objects over a slow network link.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: document --delta-base-offset option | Junio C Hamano | 2006-10-10 | 1 | -1/+1
Signed-off-by: Junio C Hamano <junkio@cox.net>

* allow delta data reuse even if base object is a preferred base | Nicolas Pitre | 2006-09-27 | 1 | -1/+1
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* zap a debug remnant | Nicolas Pitre | 2006-09-27 | 1 | -1/+0
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* make pack data reuse compatible with both delta types | Nicolas Pitre | 2006-09-27 | 1 | -74/+130
This is the missing part to git-pack-objects allowing it to reuse delta data to/from any of the two delta types. It can reuse delta from any type, and it outputs base offsets when --allow-delta-base-offset is provided and the base is also included in the pack. Otherwise it outputs base sha1 references just like it always did.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* make git-pack-objects able to create deltas with offset to base | Nicolas Pitre | 2006-09-27 | 1 | -14/+33
This is enabled with --delta-base-offset only, and doesn't work with pack data reuse yet. The idea is to allow for the fetch protocol to use an extension flag to notify the remote end that --delta-base-offset can be used with git-pack-objects. Eventually git-repack will always provide this flag.

With this, all delta base objects are now pushed before deltas that depend on them. This is a requirement for OBJ_OFS_DELTA. It is not a requirement for OBJ_REF_DELTA, but always doing so makes the code simpler.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* introduce delta objects with offset to base | Nicolas Pitre | 2006-09-27 | 1 | -8/+8
This adds a new object type, namely OBJ_OFS_DELTA, renames OBJ_DELTA to OBJ_REF_DELTA to better make the distinction between those two delta objects, and adds support for the handling of those new delta objects in sha1_file.c only.

The OBJ_OFS_DELTA object contains a relative offset from the delta object's position in a pack instead of the 20-byte SHA1 reference to identify the base object. Since the base is likely to be not so far away, the relative offset is more likely to have a smaller encoding on average than an absolute offset. And for those delta objects the base must always be stored first because there is no way to know the distance of later objects when streaming a pack. Hence this relative offset is always meant to be negative.

The offset encoding is slightly denser than the one used for object size -- credits to <linux@horizon.com> (whoever this is) for bringing it to my attention.

This allows for pack size reduction between 3.2% (Linux-2.6) to over 5% (linux-historic). Runtime pack access should be faster too since delta replay skips a search in the pack index for each delta in a chain.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

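A sketch of that denser offset encoding (the function name is illustrative): the distance back to the base is emitted 7 bits per byte, most significant group first, with the high bit marking "more bytes follow"; the extra decrement folded into each continuation byte removes redundant encodings, so n bytes cover a strictly larger range than plain base-128 would.

    #include <stdint.h>
    #include <string.h>

    /* Encode "ofs" (the positive distance back to the delta base) into
     * "buf"; returns the number of bytes written (at most 10). */
    static int encode_ofs_delta_offset(unsigned char *buf, uint64_t ofs)
    {
            unsigned char tmp[10];
            int pos = (int)sizeof(tmp) - 1;

            tmp[pos] = ofs & 127;                 /* least significant 7 bits last */
            while (ofs >>= 7)
                    tmp[--pos] = 128 | (--ofs & 127);

            memcpy(buf, tmp + pos, sizeof(tmp) - pos);
            return (int)(sizeof(tmp) - pos);
    }

The decoder undoes this by adding one back on every continuation byte, which is why, for example, a distance of 128 fits in two bytes (0x80 0x00) with no wasted second encoding of 0..127.
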
* many cleanups to sha1_file.c | Nicolas Pitre | 2006-09-23 | 1 | -4/+4
Those cleanups are mainly to set the table for the support of deltas with base objects referenced by offsets instead of sha1. This means that many pack lookup functions are converted to take a pack/offset tuple instead of a sha1.

This eliminates many struct pack_entry usages, since this structure carried redundant information in many cases, and it increased stack footprint needlessly for a couple of recursively called functions that used to declare a local copy of it for every recursion loop.

In the process, packed_object_info_detail() has been reorganized as well so as to look much saner and more amenable to deltas with offset support.

Finally the appropriate adjustments have been made to functions that depend on the above changes. But there are no functionality changes yet, simply some code refactoring at this point.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: document --revs, --unpacked and --all. | Junio C Hamano | 2006-09-12 | 1 | -1/+1
Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: further work on internal rev-list logic. | Junio C Hamano | 2006-09-07 | 1 | -60/+39
This teaches the internal rev-list logic to understand options that are needed for pack handling: --all, --unpacked, and --thin.

It also moves two functions from builtin-rev-list to list-objects so that the two programs can share more code.

Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: run rev-list equivalent internally. | Junio C Hamano | 2006-09-07 | 1 | -82/+215
Instead of piping the rev-list output from its standard input, you can say:

    pack-objects --all --unpacked --revs pack

and feed the rev parameters you would otherwise give rev-list on its command line from the standard input. In other words:

    echo 'master..next' | pack-objects --revs pack

and

    rev-list --objects master..next | pack-objects pack

are equivalent.

Signed-off-by: Junio C Hamano <junkio@cox.net>

* more lightweight revalidation while reusing deflated stream in packing | Junio C Hamano | 2006-09-03 | 1 | -29/+52
When copying from an existing pack and when copying from a loose object with a new-style header, the code makes sure that the piece we are going to copy out inflates well and that inflate() consumes the data in full while doing so.

The check to see if the xdelta really applies is quite expensive as you described, because you would need to have the image of the base object, which can be represented as a delta against something else.

Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: fix thinko in revalidate code | Junio C Hamano | 2006-09-03 | 1 | -6/+7
When revalidating an entry from an existing pack, entry->size and entry->type are not necessarily the size of the final object when the entry is deltified, but for base objects they must match.

Signed-off-by: Junio C Hamano <junkio@cox.net>

* pack-objects: re-validate data we copy from elsewhere. | Junio C Hamano | 2006-09-02 | 1 | -4/+63
When reusing data from an existing pack and from new-style loose objects, we used to just copy it straight into the resulting pack. Instead, make sure they are not corrupt, but do so only when we are not streaming to stdout, in which case the receiving end will do the validation either by unpacking the stream or by constructing the .idx file.

Signed-off-by: Junio C Hamano <junkio@cox.net>

* Convert memcpy(a,b,20) to hashcpy(a,b). | Shawn Pearce | 2006-08-23 | 1 | -3/+3
This abstracts away the size of the hash values when copying them from one memory location to another, much as the introduction of hashcmp abstracted away hash value comparison.

A few call sites were using char* rather than unsigned char*, so I added the cast rather than opening hashcpy up to void*. This is a reasonable tradeoff, as most call sites already use unsigned char* and the existing hashcmp is also declared to take unsigned char*.

[jc: Split the patch into the "master" part, to be followed by a patch for merge-recursive.c which is not in "master" yet. Fixed the cast in the latter hunk to combine-diff.c which was wrong in the original. Also converted ones left over in combine-diff.c, diff-lib.c and upload-pack.c]

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

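The helper being introduced is essentially a thin wrapper around memcpy(); a sketch consistent with the description (the real definitions live in git's own headers) looks like this:

    #include <string.h>

    /* hide the 20-byte SHA-1 size behind helpers, so call sites do not
     * hard-code it; hashcmp() shown alongside for symmetry */
    static inline void hashcpy(unsigned char *sha_dst, const unsigned char *sha_src)
    {
            memcpy(sha_dst, sha_src, 20);
    }

    static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
    {
            return memcmp(sha1, sha2, 20);
    }
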