- 25 Nov, 2016 1 commit
-
-
Yorick Peterse authored
When I proposed using serializable transactions I was hoping we would be able to refresh the data of individual users concurrently. Unfortunately, upon closer inspection it turned out this was not the case: it could result in a lot of queries failing due to serialization errors, overloading the database in the process (given enough workers trying to update the target table).

To work around this we now use a Redis lease that is cancelled upon completion. This ensures we can update the data of different users concurrently without overloading the database. The code will try to obtain the lease until it succeeds, waiting at least 1 second between retries. This is necessary as we may otherwise end up _not_ updating the data, which is not an option.
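A minimal sketch of the retry-until-obtained lease pattern described here, built directly on the `redis` gem (this is not GitLab's actual lease implementation; the key name, timeout and helper are illustrative):

    # Illustrative per-user lease built on Redis SET NX/EX, retried until
    # obtained and cancelled once the work completes.
    require 'redis'
    require 'securerandom'

    LEASE_TIMEOUT = 10 * 60 # seconds; hypothetical value

    def with_lease(redis, key)
      uuid = SecureRandom.uuid
      # Keep retrying until the lease is obtained, waiting at least 1 second
      # between attempts so the update is never silently skipped.
      sleep(1) until redis.set(key, uuid, nx: true, ex: LEASE_TIMEOUT)
      begin
        yield
      ensure
        # Cancel the lease only if we still own it (check-then-delete is not
        # atomic; a real implementation would guard this more strictly).
        redis.del(key) if redis.get(key) == uuid
      end
    end

    redis = Redis.new
    with_lease(redis, 'user_refresh:42') do
      # refresh the data of user 42 here
    end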
-
- 21 Nov, 2016 1 commit
-
-
Yorick Peterse authored
This refactors repository caching so it's possible to selectively refresh certain caches, instead of just expiring and refreshing everything. To allow this, the various methods that were cached (e.g. "tag_count" and "readme") now use a similar pattern that makes expiring and refreshing their data much easier.

In this new setup caches are refreshed as follows:

1. After a commit (but before running ProjectCacheWorker) we expire some basic caches such as the commit count and repository size.
2. ProjectCacheWorker recalculates the commit count and repository size, then refreshes a specific set of caches based on the list of files changed in the push payload.

This requires a number of changes to the various methods that may be cached. For one, data should not be cached if the branch used, or the entire repository, does not exist. To prevent all these methods from handling this manually, this is taken care of in Repository#cache_method_output, as sketched below. Some methods still manually check for the existence of the repository, but that result is also cached.

With selective flushing implemented, ProjectCacheWorker no longer uses an exclusive lease for all of its work. Instead, this worker only uses a lease to limit the number of times the repository size is updated, as this is a fairly expensive operation.
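A toy sketch of the caching pattern described above: cache a method's output only when the repository exists, and allow a specific set of keys to be expired and recalculated. Class, key and return values are made up for illustration; this is not GitLab's Repository class:

    # Minimal sketch of the cache_method_output pattern.
    class Repo
      def initialize(exists)
        @exists = exists
        @cache  = {}
      end

      def exists?
        @exists
      end

      # Cache the output of a method, unless the repository does not exist.
      def cache_method_output(key)
        return yield unless exists?
        @cache.fetch(key) { @cache[key] = yield }
      end

      def expire_method_caches(keys)
        keys.each { |key| @cache.delete(key) }
      end

      # Expire and immediately recalculate a specific set of caches, e.g. only
      # those affected by the files changed in a push.
      def refresh_method_caches(keys)
        expire_method_caches(keys)
        keys.each { |key| send(key) }
      end

      def tag_count
        cache_method_output(:tag_count) { 42 } # stand-in for the real calculation
      end

      def readme
        cache_method_output(:readme) { 'README.md contents' }
      end
    end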
-
- 19 Nov, 2016 1 commit
-
-
Stan Hu authored
If there are any old jobs or retries in the Sidekiq queue, NewNoteWorker will fail with the error: wrong number of arguments (given 2, expected 1). This change allows an optional second argument to be passed to preserve backwards compatibility; it can be removed later. Closes #24678
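A sketch of the backwards-compatible signature this describes: jobs already sitting in the queue (or retry set) with two arguments keep working, while new jobs only pass the note ID. The parameter name is illustrative, not necessarily the exact code:

    require 'sidekiq'

    class NewNoteWorker
      include Sidekiq::Worker

      # The unused second parameter only exists so old jobs enqueued with two
      # arguments do not raise "wrong number of arguments (given 2, expected 1)".
      def perform(note_id, _unused_note_attributes = nil)
        # look up the note by ID and process it here
      end
    end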
-
- 18 Nov, 2016 1 commit
-
-
Ahmad Sherif authored
Closes #23150
-
- 17 Nov, 2016 3 commits
-
-
Kamil Trzcinski authored
-
James Lopez authored
-
James Lopez authored
Add the pipeline ID to the merge request metrics table. Also update the pipeline worker to populate this field.
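A hedged sketch of what such a schema change could look like; the migration class name and index are assumptions, only the table and column come from the message:

    require 'active_record'

    class AddPipelineIdToMergeRequestMetrics < ActiveRecord::Migration
      def change
        add_column :merge_request_metrics, :pipeline_id, :integer
        add_index  :merge_request_metrics, :pipeline_id
      end
    end

The pipeline worker would then write the current pipeline's ID into this column when it updates a merge request's metrics.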
-
- 16 Nov, 2016 1 commit
-
-
Kamil Trzcinski authored
-
- 12 Nov, 2016 1 commit
-
-
Oswaldo Ferreira authored
- Also remove unnecessary param
-
- 09 Nov, 2016 1 commit
-
-
Toon Claes authored
It adds a button to the branches page that the user can use to delete all the branches that have already been merged. This can be used to clean up all the branches one forgot to delete while merging MRs. Fixes #21076.
-
- 08 Nov, 2016 1 commit
-
-
Kamil Trzcinski authored
-
- 07 Nov, 2016 1 commit
-
-
Yorick Peterse authored
This moves the code used for processing commits from GitPushService to its own Sidekiq worker: ProcessCommitWorker. Using a Sidekiq worker allows us to process multiple commits in parallel. This in turn leads to issues being closed faster and cross references being created faster. Furthermore, isolating this code in a separate class makes it easier to test and maintain. The new worker also ensures it can efficiently check which issues can be closed, without having to run numerous SQL queries for every issue.
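An illustrative sketch of the fan-out described here: one job per commit, so commits are processed in parallel. The argument names are assumptions, not GitLab's exact signature:

    require 'sidekiq'

    class ProcessCommitWorker
      include Sidekiq::Worker

      def perform(project_id, user_id, commit_sha)
        # Load the project, user and commit, then close referenced issues and
        # create cross references for this single commit.
      end
    end

    # Instead of processing commits inline, the push service would enqueue one
    # job per commit, e.g.:
    #
    #   commits.each { |c| ProcessCommitWorker.perform_async(project.id, user.id, c.sha) }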
-
- 04 Nov, 2016 1 commit
-
-
Jacob Vosmaer authored
-
- 01 Nov, 2016 1 commit
-
-
Elan Ruusamäe authored
-
- 28 Oct, 2016 1 commit
-
-
Frank Groeneveld authored
-
- 25 Oct, 2016 1 commit
-
-
Yorick Peterse authored
This changes ProjectCacheWorker.perform_async so it only schedules a job when no lease for the given project is present. This ensures we don't end up scheduling hundreds of jobs when they won't be executed anyway.
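A sketch of the schedule-time guard described above: only enqueue a job when no lease exists for the project, so a burst of pushes does not pile up hundreds of identical jobs. The lease is shown with a plain Redis SET NX/EX and a hypothetical helper name; GitLab uses its own exclusive-lease implementation:

    require 'sidekiq'
    require 'redis'

    class ProjectCacheWorker
      include Sidekiq::Worker

      LEASE_TIMEOUT = 15 * 60 # seconds; illustrative value

      # Schedule a job only when no lease for this project is present.
      def self.perform_async_if_needed(project_id)
        key = "project_cache_worker:#{project_id}"
        perform_async(project_id) if Redis.new.set(key, '1', nx: true, ex: LEASE_TIMEOUT)
      end

      def perform(project_id)
        # recalculate commit count, repository size, refresh caches, ...
      end
    end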
-
- 24 Oct, 2016 1 commit
-
- 21 Oct, 2016 2 commits
-
-
Yorick Peterse authored
Dumping too many jobs in the same queue (e.g. the "default" queue) is a dangerous setup. Jobs that take a long time to process can effectively block any other work from being performed, given there are enough of these jobs. Furthermore, it becomes harder to monitor the jobs, as a single queue could contain jobs for different workers. In such a setup the only reliable way of getting counts per job is to iterate over all jobs in a queue, which is a rather time-consuming process.

By using separate queues for various workers we have better control over throughput, we can add weight to queues, and we can monitor queues better. Some workers still use the same queue whenever their work is related. For example, the various CI pipeline workers use the same "pipeline" queue.

This commit includes a Rails migration that moves Sidekiq jobs from the old queues to the new ones. This migration also takes care of doing the inverse if ever needed. This does require downtime, as otherwise new jobs could be scheduled in the old queues after the migration completes.

This commit also includes an RSpec test that blacklists the use of the "default" queue and ensures cron workers use the "cronjob" queue.

Fixes gitlab-org/gitlab-ce#23370
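A sketch of the two pieces described above: a worker pinned to a dedicated queue via `sidekiq_options`, and moving any jobs left in an old queue into a new one over Sidekiq's Redis connection. The worker class and the migration helper are illustrative, not the actual migration code:

    require 'sidekiq'

    class PipelineProcessWorker
      include Sidekiq::Worker
      sidekiq_options queue: :pipeline # related workers share the "pipeline" queue
    end

    # Sidekiq queues are Redis lists named "queue:<name>", so moving queued
    # jobs is a matter of popping from one list and pushing onto another.
    def migrate_queue(from, to)
      Sidekiq.redis do |redis|
        while redis.rpoplpush("queue:#{from}", "queue:#{to}")
          # each raw job payload is moved as-is to the new queue
        end
      end
    end

    migrate_queue('default', 'pipeline')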
-
Jose Torres authored
-
- 20 Oct, 2016 1 commit
-
-
Yorick Peterse authored
This ensures ProjectCacheWorker jobs for a given project are performed at most once per 15 minutes. This should reduce disk load a bit in cases where there are multiple pushes happening (which should schedule multiple ProjectCacheWorker jobs).
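In contrast to the later schedule-time guard, the limit described here sits inside the job itself. A sketch, again with a plain Redis key standing in for the real lease and illustrative names:

    require 'sidekiq'
    require 'redis'

    class ProjectCacheWorker
      include Sidekiq::Worker

      LEASE_TIMEOUT = 15 * 60 # 15 minutes

      def perform(project_id)
        # If another job updated this project within the last 15 minutes the
        # SET NX fails and we skip the expensive repository update.
        return unless Redis.new.set("project_cache:#{project_id}", '1',
                                    nx: true, ex: LEASE_TIMEOUT)
        # update repository size, commit count, caches ...
      end
    end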
-
- 17 Oct, 2016 6 commits
-
-
Grzegorz Bizon authored
It may happen that a job meant to remove expired artifacts is executed asynchronously when, in the meantime, the project associated with the given build has been removed by another asynchronous job. In that case we should not remove the artifacts, because such a build will be removed anyway once the project removal is complete.
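The guard amounts to an early return; a sketch, assuming the GitLab Rails models (names are illustrative rather than the exact code):

    require 'sidekiq'

    class ExpireBuildArtifactsWorker
      include Sidekiq::Worker

      def perform(build_id)
        build = Ci::Build.find_by(id: build_id)
        # Skip the work entirely when the build is gone or its project has
        # been removed by another asynchronous job in the meantime.
        return unless build && build.project
        # erase the expired artifacts of this build here
      end
    end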
-
Kamil Trzcinski authored
-
Kamil Trzcinski authored
-
- 14 Oct, 2016 3 commits
-
-
Kamil Trzcinski authored
-
Grzegorz Bizon authored
-
Grzegorz Bizon authored
-
- 13 Oct, 2016 6 commits
-
-
Paco Guzman authored
-
Grzegorz Bizon authored
-
Grzegorz Bizon authored
-
Grzegorz Bizon authored
-
Grzegorz Bizon authored
-
Grzegorz Bizon authored
-
- 12 Oct, 2016 1 commit
-
-
Grzegorz Bizon authored
-
- 10 Oct, 2016 1 commit
-
-
Yorick Peterse authored
This commit introduces a Sidekiq worker that precalculates the list of trending projects on a daily basis. The resulting set is stored in a database table that is then queried by Project.trending. This setup means that Unicorn workers no longer _may_ have to calculate the list of trending projects. Furthermore, it supports filtering without any complex caching mechanisms.

The data in the "trending_projects" table is inserted in the same order as the project ranking. This means that getting the projects in the correct order is simply a matter of:

    SELECT projects.*
    FROM projects
    INNER JOIN trending_projects ON trending_projects.project_id = projects.id
    ORDER BY trending_projects.id ASC;

Such a query will only take a few milliseconds at most (as measured on GitLab.com), as opposed to a few seconds for the query used to calculate the project ranks.

The migration in this commit does not require downtime and takes care of populating an initial list of trending projects.
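A hedged sketch of how the pieces fit together: a daily cron worker refills the table, and reading the list back is the join shown above. `TrendingProject.refresh!` is a stand-in for whatever recalculates and re-inserts the ranked project IDs; the queue name is also an assumption:

    require 'sidekiq'

    class TrendingProjectsWorker
      include Sidekiq::Worker
      sidekiq_options queue: :cronjob

      def perform
        TrendingProject.refresh! # hypothetical: rebuild trending_projects rows in ranked order
      end
    end

    # Reading the list back in ranked order then boils down to the join, e.g.:
    #
    #   Project.joins('INNER JOIN trending_projects ON trending_projects.project_id = projects.id')
    #          .reorder('trending_projects.id ASC')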
-
- 07 Oct, 2016 3 commits
-
-
Paco Guzman authored
ExpireBuildArtifactsWorker queries the builds table without ordering, enqueuing one job per build to clean up. We use Sidekiq::Client.push_bulk to avoid Redis round trips.
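A sketch of the bulk enqueueing this describes: push_bulk submits all jobs in a single Redis round trip instead of one perform_async call per build. The per-build worker name and the list of IDs are illustrative:

    require 'sidekiq'

    class ExpireBuildInstanceArtifactsWorker # hypothetical per-build worker
      include Sidekiq::Worker

      def perform(build_id)
        # erase the expired artifacts of a single build here
      end
    end

    build_ids = [101, 102, 103] # in GitLab this would come from the builds query

    Sidekiq::Client.push_bulk(
      'class' => ExpireBuildInstanceArtifactsWorker,
      'args'  => build_ids.map { |id| [id] } # one argument array per job
    )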
-
Grzegorz Bizon authored
-
Nick Thomas authored
This commit adds a number of _html columns and, with the exception of Note, starts updating them whenever the content of their partner fields changes. Note has a collision with the note_html attr_accessor; that will be fixed later. A background worker for clearing these cache columns is also introduced - use `rake cache:clear` to set it off. You can clear the database or Redis caches separately by running `rake cache:clear:db` or `rake cache:clear:redis`, respectively.
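A hypothetical sketch of the cached-HTML column pattern, using a description/description_html pair; the column, callback and rendering call are assumptions made for illustration:

    require 'active_record'

    class AddDescriptionHtmlToIssues < ActiveRecord::Migration
      def change
        add_column :issues, :description_html, :text
      end
    end

    # In the model, the cached column is refreshed whenever its partner field changes:
    #
    #   before_save :refresh_markdown_cache, if: :description_changed?
    #
    #   def refresh_markdown_cache
    #     self.description_html = render_markdown(description) # rendering call is illustrative
    #   end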
-