author    Michael Haggerty <mhagger@alum.mit.edu>  2015-04-24 13:35:49 +0200
committer Junio C Hamano <gitster@pobox.com>       2015-05-12 21:28:03 -0700
commit    cf018ee0cd897150300a1d6431c07c840ab5b54e (patch)
tree      69ff3b34d668ac93ccdbd68ba73b73ff0229e125 /t/t1400-update-ref.sh
parent    cbf50f9e3d8e54f09ebbe6159a1bdd622c6c2f26 (diff)
download  git-cf018ee0cd897150300a1d6431c07c840ab5b54e.tar.gz
ref_transaction_commit(): fix atomicity and avoid fd exhaustion
The old code was roughly

    for update in updates:
        acquire locks and check old_sha
    for update in updates:
        if changing value:
            write_ref_to_lockfile()
            commit_ref_update()
    for update in updates:
        if deleting value:
            unlink()
    rewrite packed-refs file
    for update in updates:
        if reference still locked:
            unlock_ref()

This has two problems.

Non-atomic updates
==================

The atomicity of the reference transaction depends on all pre-checks
being done in the first loop, before any changes have started being
committed in the second loop. The problem is that
write_ref_to_lockfile() (previously part of write_ref_sha1()), which
is called from the second loop, contains two more checks:

* It verifies that new_sha1 is a valid object

* If the reference being updated is a branch, it verifies that
  new_sha1 points at a commit object (as opposed to a tag, tree, or
  blob).

If either of these checks fails, the "transaction" is aborted during
the second loop. But this might happen after some reference updates
have already been permanently committed. In other words, the
all-or-nothing promise of "git update-ref --stdin" could be violated.
So these checks have to be moved to the first loop.

File descriptor exhaustion
==========================

The old code locked all of the references in the first loop, leaving
all of the lockfiles open until later loops. Since we might be
updating a lot of references, this could result in file descriptor
exhaustion.

The solution
============

After this patch, the code looks like

    for update in updates:
        acquire locks and check old_sha
        if changing value:
            write_ref_to_lockfile()
        else:
            close_ref()
    for update in updates:
        if changing value:
            commit_ref_update()
    for update in updates:
        if deleting value:
            unlink()
    rewrite packed-refs file
    for update in updates:
        if reference still locked:
            unlock_ref()

This fixes both problems:

1. The pre-checks in write_ref_to_lockfile() are now done in the first
   loop, before any changes have been committed. If any of the checks
   fails, the whole transaction can now be rolled back correctly.

2. All lockfiles are closed in the first loop immediately after they
   are created (either by write_ref_to_lockfile() or by close_ref()).
   This means that there is never more than one open lockfile at a
   time, preventing file descriptor exhaustion.

To simplify the bookkeeping across loops, add a new REF_NEEDS_COMMIT
bit to update->flags, which keeps track of whether the corresponding
lockfile needs to be committed, as opposed to just unlocked. (Since
"struct ref_update" is internal to the refs module, this change is
not visible to external callers.)

This change fixes two tests in t1400.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
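The all-or-nothing promise mentioned above can be exercised directly with
"git update-ref --stdin". A minimal sketch in the style of the test suite
(the ref names are illustrative, and it assumes $bad resolves to a tree
rather than a commit so that the branch pre-check fails):

    good=$(git rev-parse HEAD) &&
    bad=$(git rev-parse HEAD^{tree}) &&    # a tree, not a commit
    {
    	echo "create refs/heads/demo-a $good" &&
    	echo "create refs/heads/demo-b $bad"
    } >input &&
    test_must_fail git update-ref --stdin <input &&
    test_must_fail git rev-parse --verify refs/heads/demo-a

If the second "create" fails its checks, the first one must not take
effect either; before this patch, refs/heads/demo-a could already have
been created by the time the failure was noticed.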
Diffstat (limited to 't/t1400-update-ref.sh')
-rwxr-xr-x  t/t1400-update-ref.sh | 4
1 file changed, 2 insertions(+), 2 deletions(-)
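Both tests flipped below feed a 33-reference transaction to
"git update-ref --stdin" while the command runs under a reduced open-file
limit; their bodies are truncated in the diff. A sketch of the pattern,
assuming the helper named in the hunk header simply lowers "ulimit -n" in
a subshell (the limit value, branch names, and file name here are
illustrative):

    # Assumed shape of the helper named in the hunk header below.
    run_with_limited_open_files () {
    	(ulimit -n 32 && "$@")
    }

    # Creating 33 branches in one transaction must not hold 33 lockfiles
    # open at the same time, or the lowered limit would be exceeded.
    for i in $(test_seq 33)
    do
    	echo "create refs/heads/$i HEAD"
    done >large_input &&
    run_with_limited_open_files git update-ref --stdin <large_input

Before this patch each locked ref kept its lockfile descriptor open until
the end of the transaction, which is exactly what these tests detect.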
diff --git a/t/t1400-update-ref.sh b/t/t1400-update-ref.sh
index 7a69f1a083..636d3a167c 100755
--- a/t/t1400-update-ref.sh
+++ b/t/t1400-update-ref.sh
@@ -1071,7 +1071,7 @@ run_with_limited_open_files () {
test_lazy_prereq ULIMIT_FILE_DESCRIPTORS 'run_with_limited_open_files true'
-test_expect_failure ULIMIT_FILE_DESCRIPTORS 'large transaction creating branches does not burst open file limit' '
+test_expect_success ULIMIT_FILE_DESCRIPTORS 'large transaction creating branches does not burst open file limit' '
(
for i in $(test_seq 33)
do
@@ -1082,7 +1082,7 @@ test_expect_failure ULIMIT_FILE_DESCRIPTORS 'large transaction creating branches
)
'
-test_expect_failure ULIMIT_FILE_DESCRIPTORS 'large transaction deleting branches does not burst open file limit' '
+test_expect_success ULIMIT_FILE_DESCRIPTORS 'large transaction deleting branches does not burst open file limit' '
(
for i in $(test_seq 33)
do