- 21 Dec, 2016 1 commit
-
Markus Koller authored
This adds counters for build artifacts and LFS objects, and moves the preexisting repository_size and commit_count from the projects table into a new project_statistics table. The counters are displayed in the administration area for projects and groups, and are also available through the API for admins (on */all) and normal users (on */owned). The statistics are updated through ProjectCacheWorker, which can now do more granular updates with the new :statistics argument.
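A rough sketch of what such a granular refresh could look like, assuming the GitLab Rails environment; the ProjectStatistics#refresh! helper and the exact argument shape are assumptions for illustration, not the code from this commit.

    require 'sidekiq'

    class ProjectCacheWorker
      include Sidekiq::Worker

      # statistics: names of project_statistics columns to recalculate
      def perform(project_id, changed_files = [], statistics = [])
        project = Project.find_by(id: project_id)
        return unless project

        project.statistics.refresh!(only: statistics.map(&:to_sym)) if statistics.any?
      end
    end

    # e.g. after uploading a new LFS object:
    # ProjectCacheWorker.perform_async(project.id, [], %w[lfs_objects_size])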
-
- 19 Dec, 2016 2 commits
-
Nick Thomas authored
-
Yorick Peterse authored
Prior to this commit the refreshing of authorized projects was done in two steps:

1. Remove existing authorizations
2. Insert a new list of all authorizations

This can lead to a large number of dead tuples, as all rows are replaced every time. For example, if a user with 100 authorizations is given access to a new project this would lead to:

* 100 rows being removed
* 101 new rows being inserted

This commit changes the way this system works so it only removes/inserts what is necessary. Using the above example, this would lead to only 1 new row being inserted, with the initial 100 left untouched. Fixes https://gitlab.com/gitlab-org/gitlab-ce/issues/25257
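A minimal sketch of the diff-based refresh, assuming a ProjectAuthorization model with project_id and access_level columns; the helper name and query shape are illustrative, not the actual code from this commit.

    # Compare current rows with the desired rows and only touch the difference.
    def refresh_authorized_projects(user, fresh_rows)
      current_rows = user.project_authorizations.pluck(:project_id, :access_level)

      rows_to_remove = current_rows - fresh_rows
      rows_to_add    = fresh_rows - current_rows

      user.project_authorizations
          .where(project_id: rows_to_remove.map(&:first))
          .delete_all

      rows_to_add.each do |project_id, access_level|
        user.project_authorizations.create!(project_id: project_id, access_level: access_level)
      end
    end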
-
- 01 Dec, 2016 1 commit
-
Yorick Peterse authored
By passing commit data to this worker we remove the need to query the Git repository for every job. This in turn reduces the time spent processing each job. The included migration moves jobs from the old format to the new format. For this to work properly it requires downtime, as otherwise workers may start producing errors until they're running a newer version of the worker code.
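Roughly, the job arguments change from a bare SHA to the serialised commit itself; the exact shape below is assumed for illustration and may differ from what the actual migration enqueues.

    # Before: every job had to look the commit up in Git again.
    ProcessCommitWorker.perform_async(project.id, user.id, commit.sha)

    # After: the job carries everything it needs.
    ProcessCommitWorker.perform_async(
      project.id,
      user.id,
      commit.to_hash, # id, message, authored_date, author_email, ...
      default_branch  # whether the push was to the default branch
    )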
-
- 25 Nov, 2016 1 commit
-
Yorick Peterse authored
When I proposed using serializable transactions I was hoping we would be able to refresh the data of individual users concurrently. Unfortunately, upon closer inspection it turned out this was not the case. This could result in a lot of queries failing due to serialization errors, overloading the database in the process (given enough workers trying to update the target table).

To work around this we're now using a Redis lease that is cancelled upon completion. This ensures we can update the data of different users concurrently without overloading the database. The code will try to obtain the lease until it succeeds, waiting at least 1 second between retries. This is necessary as we may otherwise end up _not_ updating the data, which is not an option.
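A minimal sketch of the retry loop described above, assuming Gitlab::ExclusiveLease; the key, timeout, and refresh helper are made up for the example.

    lease_key = "refresh_authorized_projects:#{user.id}"

    # Keep retrying until we hold the lease; giving up would mean the user's
    # authorizations are never refreshed, which is not an option.
    until (uuid = Gitlab::ExclusiveLease.new(lease_key, timeout: 1.minute).try_obtain)
      sleep(1) # wait at least 1 second between attempts
    end

    begin
      refresh_user_authorizations(user) # illustrative helper
    ensure
      # Cancel the lease on completion so other workers can proceed immediately.
      Gitlab::ExclusiveLease.cancel(lease_key, uuid)
    end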
-
- 21 Nov, 2016 2 commits
-
Yorick Peterse authored
This refactors repository caching so it's possible to selectively refresh certain caches, instead of just expiring and refreshing everything. To allow this, the various methods that were cached (e.g. "tag_count" and "readme") use a similar pattern that makes expiring and refreshing their data much easier.

In this new setup caches are refreshed as follows:

1. After a commit (but before running ProjectCacheWorker) we expire some basic caches such as the commit count and repository size.
2. ProjectCacheWorker will recalculate the commit count and repository size, then refresh a specific set of caches based on the list of files changed in a push payload.

This requires a bunch of changes to the various methods that may be cached. For one, data should not be cached if the branch used, or the entire repository, does not exist. To prevent all these methods from handling this manually, this is taken care of in Repository#cache_method_output. Some methods still manually check for the existence of a repository, but this result is also cached.

With selective flushing implemented, ProjectCacheWorker no longer uses an exclusive lease for all of its work. Instead this worker only uses a lease to limit the number of times the repository size is updated, as this is a fairly expensive operation.
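An illustrative reduction of the caching wrapper and selective refresh described above; the real Repository#cache_method_output and ProjectCacheWorker differ in detail.

    # Only cache when the repository actually exists, so we never persist
    # empty values for missing data.
    def cache_method_output(key, &block)
      return yield unless exists?

      cache.fetch(key, &block)
    end

    # A push payload touching only the README can then refresh just that cache:
    # repository.refresh_method_caches(%i[readme])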
-
Grzegorz Bizon authored
-
- 19 Nov, 2016 1 commit
-
Stan Hu authored
If there are any old jobs or retries in the Sidekiq queue, NewNoteWorker will fail with the error: wrong number of arguments (given 2, expected 1). This change allows an optional second argument to be passed to preserve backwards compatibility. It can be removed later. Closes #24678
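The backwards-compatible signature presumably looks something like the sketch below; the ignored parameter name is made up.

    require 'sidekiq'

    class NewNoteWorker
      include Sidekiq::Worker

      # Old jobs (and retries of them) were enqueued with two arguments, so keep
      # accepting an optional second argument until those jobs have drained.
      def perform(note_id, _unused_note_params = {})
        note = Note.find_by(id: note_id)
        return unless note

        # ... process the new note as before ...
      end
    end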
-
- 18 Nov, 2016 1 commit
-
Ahmad Sherif authored
Closes #23150
-
- 17 Nov, 2016 3 commits
-
Kamil Trzcinski authored
-
James Lopez authored
-
James Lopez authored
Add pipeline id to the merge request metrics table. Also update the pipeline worker to populate this field.
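The schema side of this presumably amounts to a migration along these lines; the migration class and index choice are assumptions, not shown in this log.

    class AddPipelineIdToMergeRequestMetrics < ActiveRecord::Migration
      def change
        add_column :merge_request_metrics, :pipeline_id, :integer
        add_index  :merge_request_metrics, :pipeline_id
      end
    end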
-
- 16 Nov, 2016 1 commit
-
Kamil Trzcinski authored
-
- 12 Nov, 2016 1 commit
-
Oswaldo Ferreira authored
- Also remove unnecessary param
-
- 09 Nov, 2016 1 commit
-
Toon Claes authored
It adds a button to the branches page that the user can use to delete all the branches that are already merged. This can be used to clean up all the branches that people forgot to delete while merging MRs. Fixes #21076.
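Conceptually, the cleanup behind that button boils down to something like the sketch below; the helper, service, and method names are assumptions for illustration.

    # Delete every branch already merged into the default branch, skipping
    # protected branches (simplified sketch).
    def delete_merged_branches(project, current_user)
      project.repository.branch_names.each do |name|
        next if name == project.default_branch
        next if project.protected_branch?(name)
        next unless project.repository.merged_to_root_ref?(name)

        DeleteBranchService.new(project, current_user).execute(name)
      end
    end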
-
- 08 Nov, 2016 1 commit
-
Kamil Trzcinski authored
-
- 07 Nov, 2016 1 commit
-
Yorick Peterse authored
This moves the code used for processing commits from GitPushService to its own Sidekiq worker: ProcessCommitWorker. Using a Sidekiq worker allows us to process multiple commits in parallel. This in turn will lead to issues being closed faster and cross references being created faster. Furthermore, by isolating this code into a separate class it's easier to test and maintain the code.

The new worker also ensures it can efficiently check which issues can be closed, without having to run numerous SQL queries for every issue.
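A rough sketch of the batched issue-closing check hinted at in the last sentence, assuming the GitLab Rails environment; the method names are illustrative rather than the exact worker code.

    def close_issues(project, user, author, commit)
      issue_ids = commit.closes_issues(user).map(&:id)
      return if issue_ids.empty?

      # One query to find which referenced issues are still open, instead of
      # loading and checking every issue separately.
      Issue.where(id: issue_ids).opened.each do |issue|
        Issues::CloseService.new(project, author).execute(issue, commit: commit)
      end
    end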
-
- 04 Nov, 2016 1 commit
-
Jacob Vosmaer authored
-
- 01 Nov, 2016 1 commit
-
Elan Ruusamäe authored
-
- 28 Oct, 2016 1 commit
-
Frank Groeneveld authored
-
- 25 Oct, 2016 1 commit
-
Yorick Peterse authored
This changes ProjectCacheWorker.perform_async so it only schedules a job when no lease for the given project is present. This ensures we don't end up scheduling hundreds of jobs when they won't be executed anyway.
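The schedule-time guard probably looks roughly like this sketch, assuming Gitlab::ExclusiveLease; the lease key and timeout are illustrative.

    class ProjectCacheWorker
      include Sidekiq::Worker

      LEASE_TIMEOUT = 15.minutes

      # Skip enqueueing entirely when a lease for this project already exists,
      # instead of queueing hundreds of jobs that would do nothing.
      def self.perform_async(project_id)
        super(project_id) if try_obtain_lease_for(project_id)
      end

      def self.try_obtain_lease_for(project_id)
        Gitlab::ExclusiveLease
          .new("project_cache_worker:#{project_id}", timeout: LEASE_TIMEOUT)
          .try_obtain
      end
    end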
-
- 24 Oct, 2016 1 commit
-
- 21 Oct, 2016 2 commits
-
Yorick Peterse authored
Dumping too many jobs in the same queue (e.g. the "default" queue) is a dangerous setup. Jobs that take a long time to process can effectively block any other work from being performed, given there are enough of these jobs. Furthermore it becomes harder to monitor the jobs, as a single queue could contain jobs for different workers. In such a setup the only reliable way of getting counts per job is to iterate over all jobs in a queue, which is a rather time consuming process.

By using separate queues for various workers we have better control over throughput, we can add weight to queues, and we can monitor queues better. Some workers still use the same queue whenever their work is related. For example, the various CI pipeline workers use the same "pipeline" queue.

This commit includes a Rails migration that moves Sidekiq jobs from the old queues to the new ones. This migration also takes care of doing the inverse if ever needed. This does require downtime, as otherwise new jobs could be scheduled in the old queues after this migration completes.

This commit also includes an RSpec test that blacklists the use of the "default" queue and ensures cron workers use the "cronjob" queue. Fixes gitlab-org/gitlab-ce#23370
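Declaring an explicit queue per worker is standard Sidekiq; a minimal sketch follows, where the worker names and queue assignments are examples rather than an exhaustive list.

    require 'sidekiq'

    class PipelineProcessWorker
      include Sidekiq::Worker
      sidekiq_options queue: :pipeline # CI pipeline workers share one queue

      def perform(pipeline_id)
        # ...
      end
    end

    class SomeScheduledWorker
      include Sidekiq::Worker
      sidekiq_options queue: :cronjob # all cron-triggered workers use :cronjob
    end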
-
Jose Torres authored
-
- 20 Oct, 2016 1 commit
-
Yorick Peterse authored
This ensures ProjectCacheWorker jobs for a given project are performed at most once per 15 minutes. This should reduce disk load a bit in cases where there are multiple pushes happening (which should schedule multiple ProjectCacheWorker jobs).
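Inside the worker, the 15 minute window is presumably enforced by taking (and not releasing) a lease, roughly as in this sketch; the key and helper are illustrative.

    LEASE_TIMEOUT = 15.minutes

    def perform(project_id)
      project = Project.find_by(id: project_id)
      return unless project

      # The lease is intentionally not cancelled, so at most one job per project
      # does the expensive work in any 15 minute window.
      return unless Gitlab::ExclusiveLease
        .new("project_cache_worker:#{project_id}", timeout: LEASE_TIMEOUT)
        .try_obtain

      update_caches(project) # illustrative helper
    end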
-
- 17 Oct, 2016 6 commits
-
Grzegorz Bizon authored
It may happen that a job meant to remove expired artifacts is executed asynchronously when, in the meantime, the project associated with the given build gets removed by another asynchronous job. In that case we should not remove the artifacts, because such a build will be removed anyway when the project removal is complete.
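A minimal sketch of that guard; the worker name, the pending_delete check, and the erase method are assumptions for illustration.

    require 'sidekiq'

    class ExpireBuildArtifactsWorker
      include Sidekiq::Worker

      def perform(build_id)
        build = Ci::Build.find_by(id: build_id)

        # The project (and with it the build) may already be scheduled for
        # removal by another asynchronous job; skip the erase in that case.
        return unless build && build.project && !build.project.pending_delete

        build.erase_artifacts!
      end
    end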
-
Kamil Trzcinski authored
-
Kamil Trzcinski authored
-
- 14 Oct, 2016 3 commits
-
Kamil Trzcinski authored
-
Grzegorz Bizon authored
-
Grzegorz Bizon authored
-
- 13 Oct, 2016 6 commits
-
Paco Guzman authored
-
Grzegorz Bizon authored
-
Grzegorz Bizon authored
-
Grzegorz Bizon authored
-
Grzegorz Bizon authored
-
Grzegorz Bizon authored
-