- 25 Aug, 2017 2 commits
Stan Hu authored
Customers often have Sidekiq jobs that fail without much context. Without Sentry, there's no way to tell where these exceptions were hit. Storing the additional lines adds a bit more Redis storage overhead. This commit adds backtrace logging for the workers that delete groups/projects and import/export projects. Closes #27626
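A minimal sketch of the idea, assuming a plain Sidekiq worker (the worker name and the body of perform are hypothetical, not GitLab's actual code): rescue the exception, log its backtrace, then re-raise so Sidekiq still records the failure.

```ruby
require 'sidekiq'

# Hypothetical worker name; illustrates the logging pattern only.
class GroupDestroyWorker
  include Sidekiq::Worker

  def perform(group_id, user_id)
    # ... destroy the group ...
  rescue => e
    # Log the class, message, and full backtrace so a failed job carries
    # enough context to locate the exception even without Sentry.
    Sidekiq.logger.error("#{e.class}: #{e.message}")
    Sidekiq.logger.error(e.backtrace.join("\n")) if e.backtrace
    raise # re-raise so Sidekiq still marks the job as failed and retries it
  end
end
```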
-
Robert Speicher authored
This reverts merge request !13813
-
- 24 Aug, 2017 1 commit
Stan Hu authored
Customers often have Sidekiq jobs that fail without much context. Without Sentry, there's no way to tell where these exceptions were hit. Storing the additional lines adds a bit more Redis storage overhead. This commit adds backtrace logging for the workers that delete groups/projects and import/export projects. Closes #27626
-
- 08 Feb, 2017 1 commit
dixpac authored
* Changed the name of delete_user_service and its worker to destroy
* Moved and changed delete_group_service to Groups::DestroyService
* Renamed Notes::DeleteService to Notes::DestroyService
-
- 21 Oct, 2016 1 commit
Yorick Peterse authored
Dumping too many jobs in the same queue (e.g. the "default" queue) is a dangerous setup. Given enough of them, jobs that take a long time to process can effectively block any other work from being performed. Furthermore, it becomes harder to monitor the jobs, as a single queue could contain jobs for different workers. In such a setup the only reliable way of getting counts per job is to iterate over all jobs in a queue, which is a rather time-consuming process.

By using separate queues for various workers we have better control over throughput, we can add weight to queues, and we can monitor queues better. Some workers still use the same queue whenever their work is related. For example, the various CI pipeline workers use the same "pipeline" queue.

This commit includes a Rails migration that moves Sidekiq jobs from the old queues to the new ones. This migration also takes care of doing the inverse if ever needed. This does require downtime, as otherwise new jobs could be scheduled in the old queues after the migration completes.

This commit also includes an RSpec test that blacklists the use of the "default" queue and ensures cron workers use the "cronjob" queue.

Fixes gitlab-org/gitlab-ce#23370
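As an illustration of the pattern (the worker and queue names below are hypothetical, not GitLab's actual classes), a Sidekiq worker can be pinned to a dedicated queue via sidekiq_options, after which queue depth can be read per queue directly instead of scanning one shared queue:

```ruby
require 'sidekiq'
require 'sidekiq/api'

# Hypothetical long-running worker routed to its own queue instead of "default".
class ProjectDestroyWorker
  include Sidekiq::Worker
  sidekiq_options queue: :project_destroy

  def perform(project_id)
    # ... slow work that no longer blocks unrelated jobs ...
  end
end

# Hypothetical scheduled worker routed to a shared "cronjob" queue.
class PruneOldEventsWorker
  include Sidekiq::Worker
  sidekiq_options queue: :cronjob

  def perform
    # ... periodic cleanup ...
  end
end

# Per-worker monitoring becomes a direct per-queue lookup rather than a scan
# of every job in one shared queue.
puts Sidekiq::Queue.new('project_destroy').size
puts Sidekiq::Queue.new('cronjob').size
```

Relative throughput is then controlled by listing the queues, optionally with weights, in the Sidekiq process configuration (e.g. sidekiq.yml).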
-
- 11 Aug, 2016 1 commit
Stan Hu authored
There is a race condition in DestroyGroupService now that projects are deleted asynchronously:

1. User attempts to delete a group.
2. DestroyGroupService iterates through all projects and schedules a Sidekiq job to delete each Project.
3. DestroyGroupService destroys the Group, leaving all its projects without a namespace.
4. Projects::DestroyService runs later, but the can?(current_user, :remove_project) check is `false` because the user no longer has permission to destroy projects with no namespace.
5. This leaves the project in pending_delete state with no namespace/group.

Projects without a namespace or group also add another problem: it's not possible to destroy the container registry tags, since container_registry_path_with_namespace is the wrong value.

The fix is to destroy the group asynchronously and to run execute directly on Projects::DestroyService.

Closes #17893
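A rough sketch of the ordering the fix describes, assuming a simplified, hypothetical group-destroy worker (only Projects::DestroyService and its execute call come from the commit message; the other names and the constructor arguments are illustrative):

```ruby
# Hypothetical, simplified worker showing the corrected ordering.
class GroupDestroyWorker
  include Sidekiq::Worker

  def perform(group_id, user_id)
    group = Group.find(group_id)
    user  = User.find(user_id)

    # Destroy each project synchronously (calling execute directly) while the
    # group still exists, so can?(user, :remove_project) passes and
    # container_registry_path_with_namespace resolves to the right value.
    group.projects.each do |project|
      Projects::DestroyService.new(project, user).execute
    end

    # Only once every project is gone is it safe to remove the group itself.
    group.destroy
  end
end
```

The group deletion itself stays asynchronous: the caller schedules this worker instead of destroying the group inline.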
-