- 04 Dec, 2016 1 commit
Z.J. van de Weg authored
- 01 Dec, 2016 2 commits
Yorick Peterse authored
By passing commit data to this worker we remove the need to query the Git repository for every job, which in turn reduces the time spent processing each job. The included migration converts existing jobs from the old format to the new one. This requires downtime, as otherwise workers may start producing errors until they are running a newer version of the worker code.
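A sketch of what the worker side might look like once the commit attributes travel with the job (the names here, such as Commit.from_hash and process_commit_message, are illustrative rather than the exact GitLab API):

    class ProcessCommitWorker
      include Sidekiq::Worker

      # commit_hash is a plain Hash of commit attributes serialized into
      # the job payload by the producer, so no Git access is needed here.
      def perform(project_id, user_id, commit_hash, default = false)
        project = Project.find_by(id: project_id)
        user = User.find_by(id: user_id)
        return unless project && user

        # Rebuild a lightweight commit object from the serialized
        # attributes instead of querying the Git repository per job.
        commit = Commit.from_hash(commit_hash, project)

        process_commit_message(project, commit, user, default)
      end
    end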
Robert Speicher authored
- 25 Nov, 2016 1 commit
Yorick Peterse authored
When I proposed using serializable transactions I was hoping we would be able to refresh the data of individual users concurrently. Unfortunately, upon closer inspection it was revealed that this was not the case: given enough workers trying to update the target table, a lot of queries could fail due to serialization errors, overloading the database in the process.

To work around this we now use a Redis lease that is cancelled upon completion. This ensures we can update the data of different users concurrently without overloading the database. The code will try to obtain the lease until it succeeds, waiting at least 1 second between retries. This is necessary as we may otherwise end up _not_ updating the data, which is not an option.
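The retry loop might look roughly like this, assuming GitLab's Gitlab::ExclusiveLease (the key, timeout, and refresh method are illustrative):

    LEASE_KEY = 'refresh_authorized_projects:%d'.freeze

    def refresh_authorized_projects(user)
      key = LEASE_KEY % user.id
      lease = Gitlab::ExclusiveLease.new(key, timeout: 1.minute)

      # Keep trying until the lease is obtained; silently skipping the
      # refresh is not an option, so wait at least 1 second per attempt.
      until (uuid = lease.try_obtain)
        sleep(1)
      end

      begin
        update_authorizations_for(user) # the actual refresh (illustrative)
      ensure
        # Cancel the lease upon completion so other workers updating this
        # user don't have to wait for the timeout to expire.
        Gitlab::ExclusiveLease.cancel(key, uuid)
      end
    end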
- 21 Nov, 2016 2 commits
Yorick Peterse authored
This refactors repository caching so it's possible to selectively refresh certain caches, instead of expiring and refreshing everything. To allow this, the various methods that were cached (e.g. "tag_count" and "readme") use a similar pattern that makes expiring and refreshing their data much easier. In this new setup caches are refreshed as follows:

1. After a commit (but before running ProjectCacheWorker) we expire some basic caches, such as the commit count and repository size.
2. ProjectCacheWorker recalculates the commit count and repository size, then refreshes a specific set of caches based on the list of files changed in the push payload.

This requires a number of changes to the methods that may be cached. For one, data should not be cached if the branch used, or the entire repository, does not exist. To prevent all these methods from handling this manually, it is taken care of in Repository#cache_method_output. Some methods still manually check for the existence of the repository, but this result is also cached.

With selective flushing implemented, ProjectCacheWorker no longer uses an exclusive lease for all of its work. Instead, it only uses a lease to limit how often the repository size is updated, as this is a fairly expensive operation.
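A condensed sketch of the caching pattern (the helper mirrors the description above; the real implementation differs in details):

    # Caches a method's output, but only when the repository exists;
    # otherwise a safe fallback is returned and nothing is cached, so the
    # value is recalculated once content appears.
    def cache_method_output(key, fallback: nil, &block)
      if exists?
        cache.fetch(key, &block)
      else
        fallback
      end
    end

    def readme
      cache_method_output(:readme) { tree(:head).readme }
    end

    def tag_count
      cache_method_output(:tag_count, fallback: 0) { raw_repository.tags.size }
    end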
Grzegorz Bizon authored
- 18 Nov, 2016 1 commit
Ahmad Sherif authored
Closes #23150
- 17 Nov, 2016 1 commit
James Lopez authored
- 12 Nov, 2016 1 commit
Oswaldo Ferreira authored
- Also remove unnecessary param
- 09 Nov, 2016 1 commit
Toon Claes authored
It adds a button to the branches page that the user can use to delete all branches that have already been merged. This can be used to clean up branches that one forgot to delete while merging MRs. Fixes #21076.
- 07 Nov, 2016 1 commit
Yorick Peterse authored
This moves the code used for processing commits from GitPushService to its own Sidekiq worker: ProcessCommitWorker. Using a Sidekiq worker allows us to process multiple commits in parallel, which in turn leads to issues being closed and cross-references being created faster. Furthermore, isolating this code in a separate class makes it easier to test and maintain. The new worker also ensures it can efficiently check which issues can be closed, without having to run numerous SQL queries for every issue.
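The fan-out on the GitPushService side might look like this (a sketch; at this point the jobs still received a commit SHA, before the later change above moved the full commit data into the payload):

    # Inside GitPushService: enqueue one job per pushed commit so that
    # commits are processed in parallel rather than inline in the push.
    def process_commit_messages
      default = default_branch?

      push_commits.each do |commit|
        ProcessCommitWorker.perform_async(
          project.id, current_user.id, commit.id, default
        )
      end
    end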
- 04 Nov, 2016 2 commits
Jacob Vosmaer authored
Jacob Vosmaer authored
- 28 Oct, 2016 1 commit
Frank Groeneveld authored
- 25 Oct, 2016 1 commit
Yorick Peterse authored
This changes ProjectCacheWorker.perform_async so it only schedules a job when no lease for the given project is present. This ensures we don't end up scheduling hundreds of jobs when they won't be executed anyway.
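A sketch of the scheduling guard (this assumes the lease class exposes an exists?-style check; the names and key format are illustrative):

    class ProjectCacheWorker
      include Sidekiq::Worker

      LEASE_TIMEOUT = 15.minutes

      # Schedule a job only when no lease for the project is present, so
      # a burst of pushes results in at most one queued job.
      def self.perform_async(project_id)
        lease = Gitlab::ExclusiveLease.new("project_cache_worker:#{project_id}",
                                           timeout: LEASE_TIMEOUT)

        super(project_id) unless lease.exists?
      end
    end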
- 21 Oct, 2016 2 commits
Yorick Peterse authored
Dumping too many jobs in the same queue (e.g. the "default" queue) is a dangerous setup. Jobs that take a long time to process can effectively block any other work from being performed given there are enough of these jobs. Furthermore it becomes harder to monitor the jobs, as a single queue could contain jobs for different workers. In such a setup the only reliable way of getting counts per job is to iterate over all jobs in a queue, which is a rather time consuming process.

By using separate queues for various workers we have better control over throughput, we can add weight to queues, and we can monitor queues better. Some workers still use the same queue whenever their work is related. For example, the various CI pipeline workers use the same "pipeline" queue.

This commit includes a Rails migration that moves Sidekiq jobs from the old queues to the new ones. This migration also takes care of doing the inverse if ever needed. This does require downtime, as otherwise new jobs could be scheduled in the old queues after this migration completes.

This commit also includes an RSpec test that blacklists the use of the "default" queue and ensures cron workers use the "cronjob" queue.

Fixes gitlab-org/gitlab-ce#23370
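In Sidekiq terms the bulk of the change is each worker declaring its own queue (the queue names here are illustrative):

    class ProcessCommitWorker
      include Sidekiq::Worker

      # A dedicated queue per worker: easy to count, monitor, and weight.
      sidekiq_options queue: :process_commit
    end

    class PipelineProcessWorker
      include Sidekiq::Worker

      # Workers doing related work may still share a queue.
      sidekiq_options queue: :pipeline
    end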
- 20 Oct, 2016 1 commit
Yorick Peterse authored
This ensures ProjectCacheWorker jobs for a given project are performed at most once per 15 minutes. This should reduce disk load a bit in cases where there are multiple pushes happening (which should schedule multiple ProjectCacheWorker jobs).
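Unlike the user-refresh lease above, this one is deliberately not cancelled when the job finishes, which is what enforces the 15-minute window (a sketch; names are illustrative):

    def perform(project_id)
      project = Project.find_by(id: project_id)
      return unless project

      # try_obtain fails when another job took the lease within the last
      # 15 minutes, in which case we simply do nothing.
      lease = Gitlab::ExclusiveLease.new("project_cache_worker:#{project.id}",
                                         timeout: 15.minutes)
      return unless lease.try_obtain

      update_caches(project) # the actual cache refresh (illustrative)
    end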
- 18 Oct, 2016 1 commit
Lin Jen-Shin authored
We use bcc here because we don't want to generate these emails a thousand times over. Rendering them in a loop could be expensive, and the recipient list contains all project watchers, so it could be very large.
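In ActionMailer terms this amounts to a single message with all watchers in bcc (a sketch; the mailer and method names are illustrative):

    class PipelineMailer < ApplicationMailer
      # One rendered email, delivered once, instead of one email per
      # watcher; bcc also keeps recipients from seeing each other.
      def pipeline_failed_email(pipeline, recipients)
        @pipeline = pipeline

        mail(bcc: recipients,
             subject: "Pipeline ##{pipeline.id} has failed")
      end
    end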
- 17 Oct, 2016 9 commits
Grzegorz Bizon authored
It may happen that a job meant to remove expired artifacts is executed when, in the meantime, the project associated with the given build has been removed by another asynchronous job. In that case we should not remove the artifacts, because the build will be removed anyway once the project removal is complete.
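The fix comes down to a guard clause of roughly this shape (a sketch; pending_delete is the flag set while a project is being removed):

    def perform(build_id)
      build = Ci::Build.find_by(id: build_id)

      # The project may have been removed by another asynchronous job in
      # the meantime; the build will then be removed along with it, so
      # there is nothing to clean up here.
      return unless build && build.project && !build.project.pending_delete

      build.erase_artifacts!
    end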
Nick Thomas authored
The precision of times stored in databases varies, so we need tolerances when comparing them in specs. It's better to define the tolerance in one place than in several.
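In RSpec terms the pattern looks like this (a sketch; the constant name is illustrative):

    # Defined once, e.g. in a spec support file:
    MAX_TIME_DRIFT = 1.second

    # In a spec: compare timestamps with a tolerance instead of exact
    # equality, because database time precision varies.
    expect(record.reload.updated_at).to be_within(MAX_TIME_DRIFT).of(Time.current)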
Kamil Trzcinski authored
Lin Jen-Shin authored
Lin Jen-Shin authored
- 14 Oct, 2016 1 commit
Grzegorz Bizon authored
- 13 Oct, 2016 5 commits
Paco Guzman authored
Grzegorz Bizon authored
Grzegorz Bizon authored
Grzegorz Bizon authored
Grzegorz Bizon authored
- 12 Oct, 2016 1 commit
Grzegorz Bizon authored
- 11 Oct, 2016 1 commit
tiagonbotelho authored
- 10 Oct, 2016 1 commit
Yorick Peterse authored
This commit introduces a Sidekiq worker that precalculates the list of trending projects on a daily basis. The resulting set is stored in a database table that is then queried by Project.trending. This setup means that Unicorn workers no longer risk having to calculate the list of trending projects themselves. Furthermore, it supports filtering without any complex caching mechanisms.

The data in the "trending_projects" table is inserted in the same order as the project ranking. This means that getting the projects in the correct order is simply a matter of:

    SELECT projects.*
    FROM projects
    INNER JOIN trending_projects ON trending_projects.project_id = projects.id
    ORDER BY trending_projects.id ASC;

Such a query will only take a few milliseconds at most (as measured on GitLab.com), as opposed to a few seconds for the query used to calculate the project ranks. The migration in this commit does not require downtime and takes care of populating an initial list of trending projects.
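On the Rails side, Project.trending then reduces to a join in insertion order (a sketch):

    class Project < ActiveRecord::Base
      # trending_projects rows are inserted in rank order by the daily
      # worker, so sorting by their primary key preserves the ranking.
      def self.trending
        joins('INNER JOIN trending_projects ON trending_projects.project_id = projects.id')
          .reorder('trending_projects.id ASC')
      end
    end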
- 07 Oct, 2016 2 commits
Paco Guzman authored
ExpireBuildArtifactsWorker now queries the builds table without ordering, enqueuing one cleanup job per build. We use Sidekiq::Client.push_bulk to avoid Redis round trips.
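Sidekiq::Client.push_bulk enqueues the whole batch in a single Redis call (a sketch; the scope and the per-build worker name are illustrative):

    build_ids = Ci::Build.with_expired_artifacts.pluck(:id)

    # One Redis round trip for the entire batch, instead of one
    # perform_async call (and round trip) per build.
    Sidekiq::Client.push_bulk(
      'class' => ExpireBuildInstanceArtifactsWorker,
      'args'  => build_ids.map { |id| [id] }
    )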
Grzegorz Bizon authored
- 06 Oct, 2016 1 commit
Grzegorz Bizon authored