path: root/app/helpers/storage_health_helper.rb
author     Bob Van Landuyt <bob@vanlanduyt.co>  2017-11-13 16:52:07 +0100
committer  Bob Van Landuyt <bob@vanlanduyt.co>  2017-12-08 09:11:39 +0100
commit     f1ae1e39ce6b7578c5697c977bc3b52b119301ab (patch)
tree       1d01033287e4e15e505c7b8b3f69ced4e6cf21c8 /app/helpers/storage_health_helper.rb
parent     12d33b883adda7093f0f4b838532871036af3925 (diff)
download   gitlab-ce-f1ae1e39ce6b7578c5697c977bc3b52b119301ab.tar.gz
Move the circuitbreaker check out in a separate process (bvl-circuitbreaker-process)
Moving the check out of the general requests makes sure we don't have any slowdown in the regular requests.

To keep the process performing these checks small, the check is still performed inside a unicorn, but it is called from a process running on the same server.

Because the checks are now done outside of normal requests, we can have a simpler failure strategy: the check is performed in the background every `circuitbreaker_check_interval`. Failures are logged in Redis and are reset when a check succeeds. Per check we try `circuitbreaker_access_retries` times within `circuitbreaker_storage_timeout` seconds.

When the number of failures exceeds `circuitbreaker_failure_count_threshold`, we block access to the storage.

After `failure_reset_time` of no checks, we clear the stored failures. This could happen when the process that performs the checks is not running.
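As an illustration of the strategy described above, here is a minimal Ruby sketch of a Redis-backed failure counter. The class name, the Redis key format, and the `settings` keys (standing in for the `circuitbreaker_*` options) are hypothetical, and the accessibility check is a placeholder; this is not the actual GitLab implementation.

# Minimal sketch of the Redis-backed failure tracking described in the commit
# message. Class, method, and key names are hypothetical, and `settings`
# stands in for the circuitbreaker_* options.
require 'redis'
require 'timeout'

class StorageCircuitBreakerCheck
  def initialize(storage_path, settings, redis: Redis.new)
    @storage_path = storage_path
    @settings     = settings
    @redis        = redis
    @failure_key  = "storage_check:failures:#{storage_path}"
  end

  # Intended to run from a separate process every circuitbreaker_check_interval.
  def check!
    if reachable?
      @redis.del(@failure_key)                                    # success resets the stored failures
      false
    else
      failures = @redis.incr(@failure_key)                        # failures are logged in Redis
      @redis.expire(@failure_key, @settings[:failure_reset_time]) # stale failures eventually clear
      failures >= @settings[:failure_count_threshold]             # true => block access to the storage
    end
  end

  private

  # Try access_retries times within storage_timeout seconds, as described above.
  def reachable?
    Timeout.timeout(@settings[:storage_timeout]) do
      @settings[:access_retries].times do
        return true if File.directory?(@storage_path)             # placeholder accessibility check
      end
      false
    end
  rescue Timeout::Error
    false
  end
end

An external process on the same server would then call `check!` on a timer; the helper changed in the diff below only needs to read the resulting failure count.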
Diffstat (limited to 'app/helpers/storage_health_helper.rb')
-rw-r--r--  app/helpers/storage_health_helper.rb  6
1 file changed, 1 insertion, 5 deletions
diff --git a/app/helpers/storage_health_helper.rb b/app/helpers/storage_health_helper.rb
index 4d2180f7eee..b76c1228220 100644
--- a/app/helpers/storage_health_helper.rb
+++ b/app/helpers/storage_health_helper.rb
@@ -18,16 +18,12 @@ module StorageHealthHelper
     current_failures = circuit_breaker.failure_count
 
     translation_params = { number_of_failures: current_failures,
-                           maximum_failures: maximum_failures,
-                           number_of_seconds: circuit_breaker.failure_wait_time }
+                           maximum_failures: maximum_failures }
 
     if circuit_breaker.circuit_broken?
       s_("%{number_of_failures} of %{maximum_failures} failures. GitLab will not "\
         "retry automatically. Reset storage information when the problem is "\
         "resolved.") % translation_params
-    elsif circuit_breaker.backing_off?
-      _("%{number_of_failures} of %{maximum_failures} failures. GitLab will "\
-        "block access for %{number_of_seconds} seconds.") % translation_params
     else
       _("%{number_of_failures} of %{maximum_failures} failures. GitLab will "\
         "allow access on the next attempt.") % translation_params