- 02 Feb, 2018 1 commit
Mario de la Ossa authored
- 08 Dec, 2017 1 commit
Bob Van Landuyt authored
Moving the check out of the general requests makes sure we don't have any slowdown in the regular requests. To keep the process performing these checks small, the check is still performed inside a Unicorn, but it is called from a process running on the same server.

Because the checks are now done outside the normal requests, we can have a simpler failure strategy: the check is performed in the background every `circuitbreaker_check_interval`. Failures are logged in Redis and are reset when the check succeeds. Per check we will try `circuitbreaker_access_retries` times within `circuitbreaker_storage_timeout` seconds. When the number of failures exceeds `circuitbreaker_failure_count_threshold`, we will block access to the storage. After `failure_reset_time` of no checks, we will clear the stored failures; this could happen when the process that performs the checks is not running.
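A minimal sketch of how such a background circuit breaker could be wired up, using only the setting names from the commit message above; the class name, method names, Redis keys, placeholder storage check, and default values are hypothetical and are not GitLab's actual implementation:

```ruby
require 'redis'
require 'timeout'

# Illustrative sketch of the strategy described in the commit message.
# Only the setting names come from the source; everything else is assumed.
class StorageCheck
  def initialize(storage_path, redis: Redis.new,
                 check_interval: 1,            # circuitbreaker_check_interval (s)
                 access_retries: 3,            # circuitbreaker_access_retries
                 storage_timeout: 5,           # circuitbreaker_storage_timeout (s)
                 failure_count_threshold: 10,  # circuitbreaker_failure_count_threshold
                 failure_reset_time: 1800)     # failure_reset_time (s)
    @storage_path = storage_path
    @redis = redis
    @check_interval = check_interval
    @access_retries = access_retries
    @storage_timeout = storage_timeout
    @failure_count_threshold = failure_count_threshold
    @failure_reset_time = failure_reset_time
  end

  # Runs in its own small process, so regular requests never wait on it.
  def run
    loop do
      perform_check
      sleep @check_interval
    end
  end

  # Access to the storage is blocked once recorded failures exceed the threshold.
  def storage_blocked?
    @redis.get(failures_key).to_i > @failure_count_threshold
  end

  private

  def perform_check
    reset_failures_if_stale

    if storage_accessible?
      @redis.del(failures_key)   # a successful check clears the failures
    else
      @redis.incr(failures_key)  # failures are accumulated in Redis
    end
    @redis.set(last_check_key, Time.now.to_i)
  end

  # Try `access_retries` times within `storage_timeout` seconds overall.
  # A real check would do more than stat a directory.
  def storage_accessible?
    Timeout.timeout(@storage_timeout) do
      @access_retries.times do
        return true if File.directory?(@storage_path)
      end
      false
    end
  rescue Timeout::Error, SystemCallError
    false
  end

  # If no check ran for `failure_reset_time` seconds (e.g. the checking process
  # was down), discard the stored failures instead of acting on stale data.
  def reset_failures_if_stale
    last_check = @redis.get(last_check_key).to_i
    return unless last_check > 0

    @redis.del(failures_key) if Time.now.to_i - last_check > @failure_reset_time
  end

  def failures_key
    "circuitbreaker:#{@storage_path}:failures"
  end

  def last_check_key
    "circuitbreaker:#{@storage_path}:last_check"
  end
end
```

Keeping the failure state in Redis is what lets the web processes consult something like `storage_blocked?` cheaply, without ever touching the possibly hanging storage themselves.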
- 19 Sep, 2017 1 commit
Nick Thomas authored
- 11 Jul, 2017 1 commit
Paul Charlton authored
- 10 Jul, 2017 1 commit
Pawel Chojnacki authored
+ fix typos and capitalization
+ point configuration to `gitlab.rb` as well
- 06 Jul, 2017 1 commit
Pawel Chojnacki authored
- 04 Jul, 2017 2 commits
Pawel Chojnacki authored
+ fix wrong test setup
Pawel Chojnacki authored
in favor of a whitelist that will be used to control access to monitoring resources
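The whitelist referred to here controls which clients may reach monitoring resources. A hedged example of what configuring such a whitelist might look like in `gitlab.rb`, assuming the Omnibus key `gitlab_rails['monitoring_whitelist']` (an assumption, not stated in this log):

```ruby
# Assumed Omnibus setting name; shown only to illustrate the whitelist idea.
# Only clients from these networks would be allowed to reach monitoring
# resources such as health-check endpoints.
gitlab_rails['monitoring_whitelist'] = ['127.0.0.0/8', '::1/128']
```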
- 02 Jun, 2017 1 commit
Pawel Chojnacki authored
- 07 Apr, 2017 1 commit
Paweł Chojnacki authored