- 28 Oct, 2016 1 commit
Frank Groeneveld authored
- 25 Oct, 2016 1 commit
Yorick Peterse authored
This changes ProjectCacheWorker.perform_async so it only schedules a job when no lease for the given project is present. This ensures we don't end up scheduling hundreds of jobs when they won't be executed anyway.
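A minimal sketch of the scheduling guard described above, assuming Gitlab::ExclusiveLease exposes an exists? check (the lease key format is illustrative):

```ruby
class ProjectCacheWorker
  include Sidekiq::Worker

  LEASE_TIMEOUT = 15.minutes

  # Skip scheduling entirely while a lease for this project is held;
  # the job would be a no-op anyway.
  def self.perform_async(project_id)
    lease = Gitlab::ExclusiveLease.new("project_cache_worker:#{project_id}",
                                       timeout: LEASE_TIMEOUT)

    super(project_id) unless lease.exists?
  end
end
```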
- 21 Oct, 2016 1 commit
Yorick Peterse authored
Dumping too many jobs in the same queue (e.g. the "default" queue) is a dangerous setup. Jobs that take a long time to process can effectively block any other work from being performed, given there are enough of these jobs. Furthermore, it becomes harder to monitor the jobs, as a single queue could contain jobs for different workers. In such a setup the only reliable way of getting counts per job is to iterate over all jobs in a queue, which is a rather time-consuming process.

By using separate queues for various workers we have better control over throughput, we can add weight to queues, and we can monitor queues better. Some workers still use the same queue whenever their work is related. For example, the various CI pipeline workers use the same "pipeline" queue.

This commit includes a Rails migration that moves Sidekiq jobs from the old queues to the new ones. This migration also takes care of doing the inverse if ever needed. This does require downtime, as otherwise new jobs could be scheduled in the old queues after this migration completes.

This commit also includes an RSpec test that blacklists the use of the "default" queue and ensures cron workers use the "cronjob" queue.

Fixes gitlab-org/gitlab-ce#23370
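A minimal sketch of the per-worker queue pattern described above; related workers share a descriptive queue name instead of "default" (the worker name is illustrative):

```ruby
class PipelineProcessWorker
  include Sidekiq::Worker

  # All CI pipeline workers share the "pipeline" queue.
  sidekiq_options queue: :pipeline

  def perform(pipeline_id)
    # ... look up and process the pipeline ...
  end
end
```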
- 20 Oct, 2016 1 commit
Yorick Peterse authored
This ensures ProjectCacheWorker jobs for a given project are performed at most once every 15 minutes. This should reduce disk load a bit in cases where multiple pushes happen in a short period of time (as each push schedules a ProjectCacheWorker job).
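A minimal sketch of the 15-minute guard on the performing side, assuming Gitlab::ExclusiveLease#try_obtain returns a token only when the lease could be taken:

```ruby
LEASE_TIMEOUT = 15.minutes

def perform(project_id)
  lease = Gitlab::ExclusiveLease.new("project_cache_worker:#{project_id}",
                                     timeout: LEASE_TIMEOUT)

  # Another job already refreshed the caches within the window.
  return unless lease.try_obtain

  # ... refresh the project's cached counters and sizes ...
end
```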
- 17 Oct, 2016 4 commits
Grzegorz Bizon authored
It may happen that a job meant to remove expired artifacts is executed asynchronously after the project associated with the given build has, in the meantime, been removed by another asynchronous job. In that case we should not remove the artifacts, because such a build will be removed anyway once the project removal is complete.
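A minimal sketch of the guard described above (the artifact-removal call is illustrative):

```ruby
def perform(build_id)
  build = Ci::Build.find_by(id: build_id)

  # If the build or its project is already gone, the artifacts will be
  # (or have been) removed together with the project; do nothing.
  return unless build && build.project

  build.remove_artifacts! # illustrative method name
end
```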
Nick Thomas authored
The amount of precision timestamps have in databases is variable, so we need tolerances when comparing them in specs. It's better to have the tolerance defined in one place than in several.
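A minimal sketch of a shared tolerance, using RSpec's built-in be_within matcher (the constant name is illustrative):

```ruby
# spec/support/time_tolerance.rb
TIMESTAMP_TOLERANCE = 1.second

# In a spec, compare times approximately rather than exactly:
expect(build.artifacts_expire_at)
  .to be_within(TIMESTAMP_TOLERANCE).of(1.day.from_now)
```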
Kamil Trzcinski authored
- 14 Oct, 2016 1 commit
Grzegorz Bizon authored
- 13 Oct, 2016 5 commits
Paco Guzman authored
Grzegorz Bizon authored
Grzegorz Bizon authored
Grzegorz Bizon authored
Grzegorz Bizon authored
- 12 Oct, 2016 1 commit
Grzegorz Bizon authored
- 11 Oct, 2016 1 commit
tiagonbotelho authored
- 10 Oct, 2016 1 commit
Yorick Peterse authored
This commit introduces a Sidekiq worker that precalculates the list of trending projects on a daily basis. The resulting set is stored in a database table that is then queried by Project.trending. This setup means that Unicorn workers no longer have to calculate the list of trending projects themselves. Furthermore, it supports filtering without any complex caching mechanisms.

The data in the "trending_projects" table is inserted in the same order as the project ranking. This means that getting the projects in the correct order is simply a matter of:

```sql
SELECT projects.*
FROM projects
INNER JOIN trending_projects
  ON trending_projects.project_id = projects.id
ORDER BY trending_projects.id ASC;
```

Such a query will only take a few milliseconds at most (as measured on GitLab.com), as opposed to a few seconds for the query used for calculating the project ranks.

The migration in this commit does not require downtime and takes care of populating an initial list of trending projects.
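A minimal sketch of how Project.trending can read from the precalculated table, mirroring the query above:

```ruby
class Project < ActiveRecord::Base
  def self.trending
    joins('INNER JOIN trending_projects ON trending_projects.project_id = projects.id')
      .reorder('trending_projects.id ASC')
  end
end
```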
- 07 Oct, 2016 2 commits
Paco Guzman authored
ExpireBuildArtifactsWorker queries the builds table without ordering, enqueuing one cleanup job per build. We use Sidekiq::Client.push_bulk to avoid Redis round trips.
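A minimal sketch of the bulk enqueue (the scope for finding builds with expired artifacts is illustrative):

```ruby
build_ids = Ci::Build.with_expired_artifacts.pluck(:id) # scope name illustrative

# A single Redis round trip for the whole batch, instead of one per job.
Sidekiq::Client.push_bulk(
  'class' => ExpireBuildArtifactsWorker,
  'args'  => build_ids.map { |id| [id] }
)
```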
Grzegorz Bizon authored
- 06 Oct, 2016 1 commit
Grzegorz Bizon authored
- 05 Oct, 2016 1 commit
Yorick Peterse authored
This refactors Gitlab::Identifier so it uses fewer queries and is actually tested. Queries are reduced by caching the output and by using one query (instead of two) to find a user by their SSH key.
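A minimal sketch of both ideas, memoization plus a single joined query (the method name is illustrative):

```ruby
def identify_using_ssh_key(key_id)
  @users_by_key ||= {}

  # One query instead of Key.find(key_id) followed by key.user.
  @users_by_key[key_id] ||= User.joins(:keys).find_by(keys: { id: key_id })
end
```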
- 04 Oct, 2016 1 commit
Grzegorz Bizon authored
- 07 Sep, 2016 1 commit
Olaf Tomalka authored
Since the contribution calendar shows only 12 months of activity, events older than that are not visible anywhere and can be safely pruned, saving a large amount of database storage. Fixes #21164
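A minimal sketch of such a pruning worker, using the 12-month cutoff from the description (batching and scheduling details omitted):

```ruby
class PruneOldEventsWorker
  include Sidekiq::Worker
  sidekiq_options queue: :cronjob

  def perform
    Event.where('created_at < ?', 12.months.ago).delete_all
  end
end
```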
- 01 Sep, 2016 1 commit
Felipe Artur authored
- 19 Aug, 2016 2 commits
Sean McGivern authored
Sean McGivern authored
`after_sha` maps to the source branch, as it's the head of our compare, so these were just the wrong way around.
- 18 Aug, 2016 2 commits
Sean McGivern authored
Sean McGivern authored
- 12 Aug, 2016 1 commit
- 11 Aug, 2016 2 commits
Stan Hu authored
There is a race condition in DestroyGroupService now that projects are deleted asynchronously:

1. A user attempts to delete a group.
2. DestroyGroupService iterates through all projects and schedules a Sidekiq job to delete each Project.
3. DestroyGroupService destroys the Group, leaving all of its projects without a namespace.
4. Projects::DestroyService runs later, but the `can?(current_user, :remove_project)` check returns `false` because the user no longer has permission to destroy projects with no namespace.
5. This leaves the project in the pending_delete state with no namespace/group.

Projects without a namespace or group also pose another problem: it's not possible to destroy their container registry tags, since container_registry_path_with_namespace is the wrong value.

The fix is to destroy the group asynchronously and to call execute directly on Projects::DestroyService, as sketched below.

Closes #17893
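A minimal sketch of the fix: projects are destroyed synchronously, while they still have a namespace, and the group removal as a whole moves to Sidekiq (the worker name is illustrative):

```ruby
class DestroyGroupService
  attr_reader :group, :current_user

  def initialize(group, user)
    @group = group
    @current_user = user
  end

  def async_execute
    GroupDestroyWorker.perform_async(group.id, current_user.id) # illustrative
  end

  def execute
    group.projects.each do |project|
      # Synchronous: the namespace still exists, so the permission
      # check inside the service can pass.
      ::Projects::DestroyService.new(project, current_user, {}).execute
    end

    group.destroy
  end
end
```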
Kamil Trzcinski authored
This change simplifies pipeline processing by introducing a new special status: created. This status is used for all builds that are created for a pipeline. We then process the next stages, queueing some of the builds (created -> pending) or skipping them (created -> skipped). This makes it possible to simplify and solve a few ordering problems with how builds were previously scheduled. It also allows us to visualise a full pipeline (including its created builds).

This change also removes an after_touch callback that was used for updating a pipeline's state parameters. Right now, in various places, we explicitly call reload_status! on the pipeline to force it to be updated and saved.
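A minimal sketch of the transitions described above, in the state_machine style used by GitLab's CI models (event names are illustrative):

```ruby
state_machine :status, initial: :created do
  event :enqueue do
    transition created: :pending
  end

  event :skip do
    transition created: :skipped
  end
end
```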
- 09 Aug, 2016 1 commit
tiagonbotelho authored
- 07 Aug, 2016 1 commit
Adam Niedzielski authored
- 04 Aug, 2016 3 commits
Adam Niedzielski authored
Stan Hu authored
When destroying a namespace, the `skip_repo` parameter is supposed to prevent the repository directory from being destroyed, allowing the namespace's after_destroy hook to run. If the namespace fails to be deleted for some reason, we could otherwise be left with repositories that have been deleted while their projects still exist.
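A minimal sketch of the guard (method and attribute names are illustrative):

```ruby
def remove_repository(project, skip_repo: false)
  # When the project is destroyed as part of a namespace removal,
  # skip_repo is true and the directory is left for the namespace's
  # after_destroy hook to handle.
  FileUtils.rm_rf(project.repository_path) unless skip_repo # illustrative path helper
end
```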
- 26 Jul, 2016 1 commit
Alejandro Rodríguez authored
- 21 Jul, 2016 1 commit
Sean McGivern authored
- 13 Jul, 2016 1 commit
Stan Hu authored
Possible workaround for #15392
- 12 Jul, 2016 1 commit
Stan Hu authored
Due to a stale NFS cache, it's possible that a branch lookup fails while `git gc` is running and causes missing branches in merge requests. Possible workaround for #15392