path: root/sha1_file.c
Commit message (author, date, files changed, lines -/+)
* Use xrealloc instead of realloc (Jonas Fonseca, 2006-08-26, 1 file, -1/+1)
  Change places that use realloc, without a proper error path, to instead use
  xrealloc. Drop an erroneous error path in the daemon code that used errno
  in the die message in favour of the simpler xrealloc.
  Signed-off-by: Jonas Fonseca <fonseca@diku.dk>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
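  As an illustration of the pattern this change moves to, a minimal
  xrealloc-style wrapper might look like the sketch below; the die() helper
  and the zero-size fallback are assumptions here, not the verbatim git code:

      #include <stdio.h>
      #include <stdlib.h>

      static void die(const char *msg)
      {
              fprintf(stderr, "fatal: %s\n", msg);
              exit(128);
      }

      static void *xrealloc(void *ptr, size_t size)
      {
              void *ret = realloc(ptr, size);
              if (!ret && !size)
                      ret = realloc(ptr, 1);  /* some libcs return NULL for size 0 */
              if (!ret)
                      die("Out of memory, realloc failed");
              return ret;
      }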
* Convert unpack_entry_gently and friends to use offsets. (Shawn Pearce, 2006-08-26, 1 file, -18/+15)
  Change unpack_entry_gently and its helper functions to use offsets rather
  than addresses and left counts to supply pack position information. In most
  cases this makes the code easier to follow, and it reduces the number of
  local variables in a few functions. It also better prepares this code for
  mapping partial segments of packs and altering what regions of a pack are
  mapped while unpacking an entry.
  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Cleanup unpack_object_header to use only offsets. (Shawn Pearce, 2006-08-26, 1 file, -7/+3)
  If we're always incrementing both the offset and the pointer, we aren't
  gaining anything by keeping both. Instead just use the offset since that's
  what we were given and what we are expected to return. Also, using the
  offset is likely to make it easier to remap the pack in the future should
  partial mapping of very large packs get implemented.
  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
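  For context, a hedged sketch of decoding a pack entry header at an offset:
  the documented encoding keeps the size's low 4 bits in the first byte,
  the object type in bits 4-6, and a continuation flag in the high bit. The
  function name and signature below are illustrative, not the exact
  sha1_file.c interface:

      static unsigned long decode_entry_header(const unsigned char *map,
                                               unsigned long offset,
                                               unsigned *type,
                                               unsigned long *size)
      {
              unsigned char c = map[offset++];
              unsigned long sz = c & 15;
              int shift = 4;

              *type = (c >> 4) & 7;
              while (c & 0x80) {              /* continuation bit set */
                      c = map[offset++];
                      sz += (unsigned long)(c & 0x7f) << shift;
                      shift += 7;
              }
              *size = sz;
              return offset;                  /* offset just past the header */
      }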
* Cleanup unpack_entry_gently and friends to use type_name array. (Shawn Pearce, 2006-08-26, 1 file, -28/+6)
  [PATCH 3/5] Cleanup unpack_entry_gently and friends to use type_name array.
  This change allows combining all of the non-delta entries into a single
  case, as well as removing an unnecessary local variable in
  unpack_entry_gently.
  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Reuse compression code in unpack_compressed_entry. (Shawn Pearce, 2006-08-26, 1 file, -21/+4)
  [PATCH 2/5] Reuse compression code in unpack_compressed_entry.
  This cleans up the code by reusing a perfectly good decompression
  implementation, at the expense of 1 extra byte of temporary memory
  allocated while the delta is being decompressed and applied to the base.
  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Reorganize/rename unpack_non_delta_entry to unpack_compressed_entry. (Shawn Pearce, 2006-08-26, 1 file, -28/+28)
  This function was moved above unpack_delta_entry so we can call it from
  within unpack_delta_entry without a forward declaration. The change looks
  worse than it is: it is really just a relocation of unpack_non_delta_entry
  to earlier in the file, renaming the function to unpack_compressed_entry.
  No other changes were made.
  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Convert memcpy(a,b,20) to hashcpy(a,b). (Shawn Pearce, 2006-08-23, 1 file, -9/+9)
  This abstracts away the size of the hash values when copying them from one
  memory location to another, much as the introduction of hashcmp abstracted
  away hash value comparison. A few call sites were using char* rather than
  unsigned char*, so I added the cast rather than opening hashcpy up to take
  void*. This is a reasonable tradeoff, as most call sites already use
  unsigned char* and the existing hashcmp is also declared to take
  unsigned char*.
  [jc: Split the patch into the "master" part, to be followed by a patch for
  merge-recursive.c which is not in "master" yet. Fixed the cast in the
  latter hunk to combine-diff.c which was wrong in the original. Also
  converted ones left over in combine-diff.c, diff-lib.c and upload-pack.c]
  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
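  A minimal sketch of the abstraction being introduced, assuming the
  then-current 20-byte SHA-1 hash width:

      #include <string.h>

      static inline void hashcpy(unsigned char *sha_dst, const unsigned char *sha_src)
      {
              memcpy(sha_dst, sha_src, 20);   /* hash width hidden behind the helper */
      }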
* Remove unnecessary forward declaration of unpack_entry. (Shawn Pearce, 2006-08-21, 1 file, -3/+0)
  This declaration probably used to be necessary, but the code has since been
  refactored to use unpack_entry_gently instead.
  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Verify we know how to read a pack before trying to use it. (Shawn Pearce, 2006-08-21, 1 file, -0/+12)
  If the pack format were ever to change or be extended in the future, there
  is no assurance that, just because a pack file lives in objects/pack and
  doesn't end in .idx, we can read and decompress its contents properly. If
  we encounter what we think is a pack file and it isn't, or we don't
  recognize its version, then die and suggest to the user that they upgrade
  to a newer version of GIT which can handle that pack file.
  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
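  A hedged sketch of that kind of sanity check follows. The pack header
  layout ("PACK" signature, 4-byte version, 4-byte entry count, all in
  network byte order) is the documented one; accepting versions 2 and 3 is
  an assumption about what git of that era recognized:

      #include <arpa/inet.h>   /* ntohl */
      #include <stdint.h>

      struct pack_header {
              uint32_t hdr_signature;
              uint32_t hdr_version;
              uint32_t hdr_entries;
      };

      static int looks_like_readable_pack(const struct pack_header *hdr)
      {
              if (ntohl(hdr->hdr_signature) != 0x5041434b)   /* "PACK" */
                      return 0;
              if (ntohl(hdr->hdr_version) != 2 && ntohl(hdr->hdr_version) != 3)
                      return 0;   /* unknown version: suggest upgrading git */
              return 1;
      }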
* Do not use memcmp(sha1_1, sha1_2, 20) with hardcoded length. (David Rientjes, 2006-08-17, 1 file, -8/+8)
  Introduces a global inline:
    hashcmp(const unsigned char *sha1, const unsigned char *sha2)
  It uses memcmp for the comparison and returns the result based on the
  length of the hash name (a future runtime decision).
  Acked-by: Alex Riesen <raa.lkml@gmail.com>
  Signed-off-by: David Rientjes <rientjes@google.com>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
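  A minimal sketch of such a helper, assuming a fixed 20-byte hash width
  rather than the runtime-chosen length mentioned above:

      #include <string.h>

      static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
      {
              return memcmp(sha1, sha2, 20);  /* width lives in one place only */
      }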
* remove unnecessary initializations (David Rientjes, 2006-08-15, 1 file, -1/+1)
  [jc: I needed to hand-merge the changes to the updated codebase, so the
  result needs to be checked.]
  Signed-off-by: David Rientjes <rientjes@google.com>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Merge branch 'jc/pack-objects' (Junio C Hamano, 2006-08-12, 1 file, -13/+22)
* sha1_file.c: expose map_sha1_file() interface. (Junio C Hamano, 2006-07-25, 1 file, -13/+22)
  This exposes the map_sha1_file() interface to mmap a loose object file, and
  a legacy_loose_object() function, split from unpack_sha1_header(). They
  will be used in the next patch to reuse the deflated data from new-style
  loose object files when generating packs.
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* drop length argument of has_extension (Rene Scharfe, 2006-08-11, 1 file, -1/+1)
  As Fredrik points out, the current interface of has_extension() is
  potentially confusing: its parameters include both a NUL-terminated string
  and a length-limited string. This patch drops the length argument,
  requiring two NUL-terminated strings; all call sites are updated. I checked
  that all of them indeed provide NUL-terminated strings. Filenames need to
  be NUL-terminated anyway if they are to be passed to open() etc. The
  performance penalty of the additional strlen() is negligible compared to
  the system calls which inevitably surround has_extension() calls.
  Additionally, change has_extension() to use size_t internally instead of
  int, as that is the exact type strlen() returns and memcmp() expects.
  Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
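  A sketch of the resulting two-argument interface, assuming both strings
  are NUL-terminated; this mirrors the description above rather than quoting
  the exact implementation:

      #include <string.h>

      static inline int has_extension(const char *filename, const char *ext)
      {
              size_t len = strlen(filename);
              size_t extlen = strlen(ext);
              /* underrun check first, then compare the tail of filename */
              return len > extlen && !memcmp(filename + len - extlen, ext, extlen);
      }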
* Add has_extension() (Rene Scharfe, 2006-08-10, 1 file, -1/+1)
  The little helper has_extension() documents through its name what we are
  trying to do and makes sure we don't forget the underrun check.
  Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Merge branch 'lt/objformat' (Junio C Hamano, 2006-07-24, 1 file, -8/+98)
  * lt/objformat:
    sha1_file: add the ability to parse objects in "pack file format"
* sha1_file: add the ability to parse objects in "pack file format" (Linus Torvalds, 2006-07-13, 1 file, -8/+98)
  The pack-file format is slightly different from the traditional git object
  format, in that it has a much denser binary header encoding. The
  traditional format uses an ASCII string with type and length information,
  which is somewhat wasteful. A new object format starts with an uncompressed
  binary header followed by the compressed payload -- this will allow us
  later to copy the payload straight into packfiles. Obviously such objects
  cannot be read by older versions of git, so for now new object files are
  created in the traditional format. The core.legacyheaders configuration
  item, when set to false, makes the code write the new format for people to
  experiment with.
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
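  To make the contrast concrete, here is a hedged sketch of the two header
  styles described: the legacy ASCII header that is deflated together with
  the payload, and a pack-style binary type/size header written uncompressed
  in front of the deflated payload. Function names, the type_bits parameter
  and buffer handling are illustrative, not the verbatim sha1_file.c code:

      #include <stdio.h>

      /* legacy loose object: "<type> <size>\0", deflated with the payload */
      static int write_legacy_header(char *hdr, const char *type, unsigned long len)
      {
              return 1 + sprintf(hdr, "%s %lu", type, len);   /* include the NUL */
      }

      /* new style: pack-like binary header, low 4 size bits + type in byte 0,
       * remaining size bits in 7-bit groups with a continuation flag */
      static int write_binary_header(unsigned char *hdr, unsigned type_bits,
                                     unsigned long len)
      {
              int n = 1;
              unsigned char c = (unsigned char)((type_bits << 4) | (len & 15));
              len >>= 4;
              while (len) {
                      hdr[n - 1] = c | 0x80;          /* more size bytes follow */
                      c = len & 0x7f;
                      len >>= 7;
                      n++;
              }
              hdr[n - 1] = c;
              return n;                               /* header length in bytes */
      }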
* Make lazy mkdir more robust. (Shawn Pearce, 2006-07-12, 1 file, -14/+12)
  Linus Torvalds <torvalds@osdl.org> wrote:
    It's entirely possible that we should just make that whole
      if (ret == ENOENT)
    go away. Yes, it's the right error code if a subdirectory is missing, and
    yes, POSIX requires it, and yes, WXP is probably just a horrible piece of
    sh*t, but on the other hand, I don't think git really has any serious
    reason to even care.
* Make the unpacked object header functions static to sha1_file.c (Linus Torvalds, 2006-07-11, 1 file, -2/+2)
  Nobody else uses them, and I'm going to start changing them.
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Avoid C99 comments, use old-style C comments instead. (Pavel Roskin, 2006-07-10, 1 file, -2/+2)
  This doesn't make the code uglier or harder to read, yet it makes the code
  more portable. This also simplifies checking for other potential
  incompatibilities: "gcc -std=c89 -pedantic" can flag many incompatible
  constructs as warnings, but C99 comments will cause it to emit an error.
  Signed-off-by: Pavel Roskin <proski@gnu.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
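  A tiny illustration of the distinction:

      int legacy = 1;   /* old-style comment: fine under gcc -std=c89 -pedantic */
      // int modern = 1;   <- a C99-style comment line is an error under -std=c89 -pedantic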
* Fix more typos, primarily in the code (Pavel Roskin, 2006-07-10, 1 file, -1/+1)
  The only visible change is that git-blame doesn't understand
  "--compability" anymore, but it does accept "--compatibility" instead,
  which is already documented.
  Signed-off-by: Pavel Roskin <proski@gnu.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Make zlib compression level configurable, and change default. (Joachim B Haga, 2006-07-03, 1 file, -2/+2)
  With the change in default, "git add ." on a kernel dir is about twice as
  fast as before, with only a minimal (0.5%) change in object size. The speed
  difference is even more noticeable when committing large files, which is
  now up to 8 times faster. The configurability is through setting
  core.compression = [-1..9], which maps to the zlib constants; -1 is the
  default, 0 is no compression, and 1..9 are various speed/size tradeoffs,
  9 being slowest.
  Signed-off-by: Joachim B Haga (cjhaga@fys.uio.no)
  Acked-by: Linus Torvalds <torvalds@osdl.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
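  A hedged sketch of how such a knob can be wired into a config callback;
  the variable and function names below are illustrative and not necessarily
  those used by the actual patch:

      #include <stdlib.h>
      #include <string.h>

      static int zlib_compression_level = -1;   /* -1 maps to Z_DEFAULT_COMPRESSION */

      /* hypothetical callback of the shape git's config parser invokes */
      static int compression_config(const char *var, const char *value)
      {
              if (!strcmp(var, "core.compression")) {
                      int level = atoi(value);
                      if (level < -1 || level > 9)
                              return -1;        /* reject out-of-range levels */
                      zlib_compression_level = level;
              }
              return 0;
      }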
* Make some strings const (Timo Hirvonen, 2006-06-28, 1 file, -1/+1)
  Signed-off-by: Timo Hirvonen <tihirvon@gmail.com>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Remove all void-pointer arithmetic. (Florian Forster, 2006-06-20, 1 file, -14/+15)
  ANSI C99 doesn't allow void-pointer arithmetic. This patch fixes this in
  various ways. Usually the strategy that required the fewest changes was
  used.
  Signed-off-by: Florian Forster <octo@verplant.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
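  The typical shape of such a fix, shown as an illustrative helper: standard
  C does not define arithmetic on void *, so cast to a byte pointer first.

      static const unsigned char *at_offset(const void *map, unsigned long offset)
      {
              /* instead of the GNU extension "map + offset": */
              return (const unsigned char *)map + offset;
      }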
* shared repository - add a few missing calls to adjust_shared_perm(). (Junio C Hamano, 2006-06-09, 1 file, -23/+0)
  There were a few calls to adjust_shared_perm() that were missing:
  - init-db creates refs, refs/heads, and refs/tags before reading from
    templates that could specify sharedrepository in the config file;
  - updating the config file created it under the user's umask without
    adjusting;
  - updating refs created them under the user's umask without adjusting;
  - switching branches created .git/HEAD under the user's umask without
    adjusting.
  This moves adjust_shared_perm() from sha1_file.c to path.c, since a few
  SIMPLE_PROGRAMs need to call repository configuration functions which in
  turn need to call adjust_shared_perm(). sha1_file.c needs to link with the
  SHA1 computation library, which is usually not linked into a
  SIMPLE_PROGRAM.
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* sha1_file: avoid re-preparing duplicate packs (Jeff King, 2006-06-02, 1 file, -0/+6)
  When adding packs, skip the pack if we already have it in the packed_git
  list. This might happen if we are re-preparing our packs because of a
  missing object.
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* handle concurrent pruning of packed objects (Jeff King, 2006-06-02, 1 file, -6/+18)
  This patch causes read_sha1_file and sha1_object_info to re-examine the
  list of packs if an object cannot be found. It works by re-running
  prepare_packed_git() after an object fails to be found. It does not attempt
  to clean up the old pack list. Old packs which are in use can continue to
  be used (until unused by lru selection). New packs are placed at the front
  of the list and will thus be examined before old packs.
  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
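  A hedged sketch of that retry shape; the two helpers below are hypothetical
  stand-ins for the real pack-lookup and prepare_packed_git() machinery:

      /* hypothetical helpers standing in for the real sha1_file.c routines */
      int find_object_in_known_packs(const unsigned char *sha1,
                                     void **buf, unsigned long *size);
      void rescan_pack_directory(void);   /* i.e. re-run prepare_packed_git() */

      static int read_object_with_retry(const unsigned char *sha1,
                                        void **buf, unsigned long *size)
      {
              if (find_object_in_known_packs(sha1, buf, size))
                      return 0;
              rescan_pack_directory();    /* maybe a repack created new packs */
              if (find_object_in_known_packs(sha1, buf, size))
                      return 0;
              return -1;                  /* genuinely missing */
      }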
* Clean up sha1 file writing (Linus Torvalds, 2006-05-24, 1 file, -61/+78)
  This cleans up and future-proofs the sha1 file writing in sha1_file.c. In
  particular, instead of doing a simple "write()" call and just verifying
  that it succeeds (or - as in one place - just assuming it does), it uses
  "write_buffer()" to write data to the file descriptor while correctly
  checking for partial writes, EINTR etc. It also splits up
  write_sha1_to_fd() to be a lot more readable: if we need to re-create the
  compressed object, we do so in a separate helper function, making the
  logic a whole lot more modular and obvious.
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
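  A sketch of a write_buffer()-style loop along the lines described, retrying
  on EINTR/EAGAIN and handling partial writes; the details are assumptions,
  not the verbatim code:

      #include <errno.h>
      #include <unistd.h>

      static int write_buffer(int fd, const void *buf, size_t len)
      {
              const char *p = buf;

              while (len) {
                      ssize_t ret = write(fd, p, len);
                      if (ret < 0) {
                              if (errno == EINTR || errno == EAGAIN)
                                      continue;       /* interrupted: retry */
                              return -1;              /* real write error */
                      }
                      if (!ret)
                              return -1;              /* e.g. out of space */
                      p += ret;                       /* partial write: advance */
                      len -= ret;
              }
              return 0;
      }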
* remove the artificial restriction tagsize < 8kb (Björn Engelmann, 2006-05-23, 1 file, -10/+36)
  Signed-off-by: Björn Engelmann <BjEngelmann@gmx.de>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Merge branch 'fix' (Junio C Hamano, 2006-05-15, 1 file, -2/+1)
  * fix:
    Fix pack-index issue on 64-bit platforms a bit more portably.
    Install git-send-email by default
    Fix compilation on newer NetBSD systems
    git config syntax updates
    Another config file parsing fix.
    checkout: use --aggressive when running a 3-way merge (-m).
* Fix pack-index issue on 64-bit platforms a bit more portably. (tag: v1.3.3) (Junio C Hamano, 2006-05-15, 1 file, -2/+1)
  Apparently <stdint.h> is not enough for uint32_t on OpenBSD; use
  "unsigned int" -- hopefully that would stay 32-bit on every platform we
  care about, at least until we update the pack-index file format. Our sha1
  routines optimized for certain architectures use uint32_t and expect
  '#include <stdint.h>' to be enough, so OpenBSD on arm or ppc might have
  similar issues down the road, I dunno.
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Merge branch 'fix' (Junio C Hamano, 2006-05-14, 1 file, -0/+1)
  * fix:
    include header to define uint32_t, necessary on Mac OS X
* include header to define uint32_t, necessary on Mac OS X (Ben Clifford, 2006-05-14, 1 file, -0/+1)
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Merge branch 'fix' (Junio C Hamano, 2006-05-13, 1 file, -1/+1)
  * fix:
    Fix git-pack-objects for 64-bit platforms
* Fix git-pack-objects for 64-bit platforms (Dennis Stosberg, 2006-05-13, 1 file, -1/+1)
  The offset of an object in the pack is recorded as a 4-byte integer in the
  index file. When reading the offset from the mmap'ed index in
  prepare_pack_revindex(), the address is dereferenced as a long*. This works
  fine as long as the long type is four bytes wide. On NetBSD/sparc64,
  however, a long is 8 bytes wide and so dereferencing the offset produces
  garbage.
  [jc: taking suggestion by Linus to use uint32_t]
  Signed-off-by: Dennis Stosberg <dennis@stosberg.net>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
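  A hedged sketch of reading such an offset through a fixed-width type
  instead of dereferencing as long*. The 24-byte entry stride (4-byte offset
  plus 20-byte SHA-1) follows the version-1 index layout; the 256-entry
  fan-out table at the start of the index is ignored here for brevity:

      #include <arpa/inet.h>
      #include <stdint.h>
      #include <string.h>

      static unsigned long index_entry_offset(const void *entries, int nr)
      {
              const unsigned char *p = (const unsigned char *)entries + 24 * nr;
              uint32_t be;
              memcpy(&be, p, sizeof(be));     /* exactly 4 bytes, on every platform */
              return ntohl(be);               /* stored in network byte order */
      }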
* Transitively read alternatives (Martin Waitz, 2006-05-07, 1 file, -72/+106)
  When adding an alternate object store, also add entries from its
  info/alternates files. Relative entries are only allowed in the current
  repository. Loops and duplicate alternates through multiple repositories
  are ignored. Just to be sure that nothing breaks, it is not allowed to
  build deep nesting levels using info/alternates.
  Signed-off-by: Martin Waitz <tali@admingilde.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* sha1_to_hex() usage cleanup (Linus Torvalds, 2006-05-03, 1 file, -2/+3)
  Somebody on the #git channel complained that sha1_to_hex() uses a static
  buffer, which caused an error message to show the same hex output twice
  instead of showing two different ones. That's pretty easily rectified by
  making it use a simple LRU of a few buffers, which also allows some other
  users (that were aware of the buffer re-use) to be written in a more
  straightforward manner.
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
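  A sketch of the rotating-buffer idea: keep a small ring of static buffers
  so a few results can coexist within one printf() call. The ring size of
  four and the function name are assumptions here, not the exact code:

      static char *sha1_to_hex_ring(const unsigned char *sha1)
      {
              static char hexbuffer[4][50];
              static int bufno;
              static const char hex[] = "0123456789abcdef";
              char *buffer = hexbuffer[bufno = (bufno + 1) % 4];
              char *buf = buffer;
              int i;

              for (i = 0; i < 20; i++) {
                      unsigned int val = *sha1++;
                      *buf++ = hex[val >> 4];
                      *buf++ = hex[val & 0xf];
              }
              *buf = '\0';
              return buffer;   /* valid until three more calls overwrite it */
      }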
* packed_object_info_detail(): check for corrupt packfile. (Junio C Hamano, 2006-04-17, 1 file, -2/+4)
  Serge E. Hallyn noticed that we compute how many input bytes are still
  left, but did not use it for sanity checking.
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Merge branch 'maint' (Junio C Hamano, 2006-04-07, 1 file, -2/+4)
  * maint:
    count-delta: match get_delta_hdr_size() changes.
    check patch_delta bounds more carefully
* check patch_delta bounds more carefully (Nicolas Pitre, 2006-04-07, 1 file, -2/+4)
  Let's avoid going south with invalid delta data.
  Signed-off-by: Nicolas Pitre <nico@cam.org>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Use blob_, commit_, tag_, and tree_type throughout. (Peter Eriksen, 2006-04-04, 1 file, -18/+22)
  This replaces occurrences of "blob", "commit", "tag", and "tree", where
  they're really used as type specifiers, with the global constants we
  already have defined for them.
  Signed-off-by: Peter Eriksen <s022018@student.dtu.dk>
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* unpack_delta_entry(): reduce memory footprint. (Junio C Hamano, 2006-03-19, 1 file, -8/+10)
  Currently we unpack the delta data from the pack and then unpack the base
  object to apply that delta data to it. When getting an object that is
  deeply deltified, we can reduce the memory footprint by unpacking the base
  object first and then unpacking the delta data, because that way we will
  need to keep at most one delta data in memory.
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Merge fixes early for next maint series. (Junio C Hamano, 2006-02-23, 1 file, -3/+4)
* Give no terminating LF to error() function. (Junio C Hamano, 2006-02-22, 1 file, -3/+4)
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Merge fixes up to GIT 1.2.3 (Junio C Hamano, 2006-02-22, 1 file, -3/+16)
* pack-objects: reuse data from existing packs. (Junio C Hamano, 2006-02-22, 1 file, -0/+19)
  When generating a new pack, notice if we already have needed objects in
  existing packs. If an object is stored deltified, and its base object is
  also something we are going to pack, then reuse the existing deltified
  representation unconditionally, bypassing all the expensive find_deltas()
  and try_deltas() calls.
  Also, notice if what we are going to write out exactly matches what is
  already in an existing pack (either deltified or just compressed). In such
  a case we can just copy it instead of going through the usual uncompressing
  and recompressing cycle.
  Without this patch, in a linux-2.6 repository with about 1500 loose objects
  and a single mega pack:
      $ git-rev-list --objects v2.6.16-rc3 >RL
      $ wc -l RL
      184141 RL
      $ time git-pack-objects p <RL
      Generating pack...
      Done counting 184141 objects.
      Packing 184141 objects....................
      a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
      real    12m4.323s
      user    11m2.560s
      sys     0m55.950s
  With this patch, the same input:
      $ time ../git.junio/git-pack-objects q <RL
      Generating pack...
      Done counting 184141 objects.
      Packing 184141 objects.....................
      a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
      Total 184141, written 184141, reused 182441
      real    1m2.608s
      user    0m55.090s
      sys     0m1.830s
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* detect broken alternates. (Junio C Hamano, 2006-02-22, 1 file, -3/+16)
  The real problem that triggered an earlier fix was that an alternate entry
  was pointing at a removed directory. Complaining about an object/pack
  directory that cannot be opendir-ed produces noise in an ancient repository
  that does not have an object/pack directory and has never been packed.
  Detect the real user error and report it. Also, if opendir failed for other
  reasons (e.g. no read permission), report that as well.
  Spotted by Andrew Vasquez <andrew.vasquez@qlogic.com>.
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Merge branch 'jc/pack-reuse' (Junio C Hamano, 2006-02-21, 1 file, -0/+19)
  * jc/pack-reuse:
    pack-objects: avoid delta chains that are too long.
    git-repack: allow passing a couple of flags to pack-objects.
    pack-objects: finishing touches.
    pack-objects: reuse data from existing packs.
* pack-objects: reuse data from existing packs. (Junio C Hamano, 2006-02-17, 1 file, -0/+19)
  When generating a new pack, notice if we already have needed objects in
  existing packs. If an object is stored deltified, and its base object is
  also something we are going to pack, then reuse the existing deltified
  representation unconditionally, bypassing all the expensive find_deltas()
  and try_deltas() calls.
  Also, notice if what we are going to write out exactly matches what is
  already in an existing pack (either deltified or just compressed). In such
  a case we can just copy it instead of going through the usual uncompressing
  and recompressing cycle.
  Without this patch, in a linux-2.6 repository with about 1500 loose objects
  and a single mega pack:
      $ git-rev-list --objects v2.6.16-rc3 >RL
      $ wc -l RL
      184141 RL
      $ time git-pack-objects p <RL
      Generating pack...
      Done counting 184141 objects.
      Packing 184141 objects....................
      a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
      real    12m4.323s
      user    11m2.560s
      sys     0m55.950s
  With this patch, the same input:
      $ time ../git.junio/git-pack-objects q <RL
      Generating pack...
      Done counting 184141 objects.
      Packing 184141 objects.....................
      a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
      Total 184141, written 184141, reused 182441
      real    1m2.608s
      user    0m55.090s
      sys     0m1.830s
  Signed-off-by: Junio C Hamano <junkio@cox.net>
* Merge fixes up to GIT 1.2.2 (Junio C Hamano, 2006-02-18, 1 file, -1/+3)