
  1. 21 Dec, 2016 1 commit
    • Add more storage statistics · 3ef4f74b
      Markus Koller authored
      This adds counters for build artifacts and LFS objects, and moves
      the preexisting repository_size and commit_count from the projects
      table into a new project_statistics table.
      
      The counters are displayed in the administration area for projects
      and groups, and are also available through the API for admins (on */all)
      and normal users (on */owned).
      
      The statistics are updated through ProjectCacheWorker, which can now
      do more granular updates with the new :statistics argument.
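
      As a rough illustration (not the actual GitLab migration; column names
      and types beyond the ones named above are assumptions), the new table
      can be pictured like this:

        class CreateProjectStatistics < ActiveRecord::Migration
          def change
            create_table :project_statistics do |t|
              t.references :project, null: false, index: true
              # bigint counters so large repositories don't overflow
              t.integer :commit_count,         limit: 8, null: false, default: 0
              t.integer :repository_size,      limit: 8, null: false, default: 0
              t.integer :lfs_objects_size,     limit: 8, null: false, default: 0
              t.integer :build_artifacts_size, limit: 8, null: false, default: 0
            end
          end
        end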
  2. 19 Dec, 2016 2 commits
  3. 01 Dec, 2016 1 commit
    • Pass commit data to ProcessCommitWorker · 6b4d3356
      Yorick Peterse authored
      By passing commit data to this worker we remove the need for querying
      the Git repository for every job. This in turn reduces the time spent
      processing each job.
      
      The migration included migrates jobs from the old format to the new
      format. For this to work properly it requires downtime as otherwise
      workers may start producing errors until they're using a newer version
      of the worker code.
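
      The idea can be sketched as follows (illustrative, not GitLab's actual
      worker or signature): the pushed commit is serialized into the job
      arguments, so perform never has to ask the Git repository for it.

        require 'sidekiq'
        require 'ostruct'

        class ProcessCommitWorker
          include Sidekiq::Worker

          # commit_hash is assumed to carry whatever fields the worker needs,
          # e.g. "id", "message" and "authored_date".
          def perform(project_id, user_id, commit_hash, default = false)
            commit = OpenStruct.new(commit_hash) # stand-in for a real commit object
            # ... close issues / create cross-references from commit.message ...
          end
        end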
  4. 25 Nov, 2016 1 commit
    • Refresh project authorizations using a Redis lease · 92b2c74c
      Yorick Peterse authored
      When I proposed using serializable transactions I was hoping we would be
      able to refresh data of individual users concurrently. Unfortunately
      upon closer inspection it was revealed this was not the case. This could
      result in a lot of queries failing due to serialization errors,
      overloading the database in the process (given enough workers trying to
      update the target table).
      
      To work around this we're now using a Redis lease that is cancelled upon
      completion. This ensures we can update the data of different users
      concurrently without overloading the database.
      
      The code will try to obtain the lease until it succeeds, waiting at
      least 1 second between retries. This is necessary as we may otherwise
      end up _not_ updating the data, which is not an option.
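
      The lease pattern described here can be sketched with plain redis-rb
      (GitLab wraps this in its own ExclusiveLease class; the key and the
      refresh step below are illustrative):

        require 'redis'
        require 'securerandom'

        def refresh_with_lease(redis, user_id)
          key   = "authorized_projects:#{user_id}"
          token = SecureRandom.uuid

          # Keep retrying until the lease is obtained, waiting at least a
          # second between attempts, so the refresh is never silently skipped.
          sleep(1) until redis.set(key, token, nx: true, ex: 60)

          begin
            refresh_authorizations_for(user_id) # hypothetical refresh step
          ensure
            # Cancel the lease on completion so other workers can proceed.
            redis.del(key) if redis.get(key) == token
          end
        end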
  5. 21 Nov, 2016 2 commits
    • Refactor cache refreshing/expiring · ffb9b3ef
      Yorick Peterse authored
      This refactors repository caching so it's possible to selectively
      refresh certain caches, instead of just expiring and refreshing
      everything.
      
      To allow this the various methods that were cached (e.g. "tag_count" and
      "readme") use a similar pattern that makes expiring and refreshing
      their data much easier.
      
      In this new setup caches are refreshed as follows:
      
      1. After a commit (but before running ProjectCacheWorker) we expire some
         basic caches such as the commit count and repository size.
      
      2. ProjectCacheWorker will recalculate the commit count, repository
         size, then refresh a specific set of caches based on the list of
         files changed in a push payload.
      
      This requires a bunch of changes to the various methods that may be
      cached. For one, data should not be cached if the branch being used or
      the entire repository does not exist. To prevent all these methods from
      handling this manually, this is taken care of in
      Repository#cache_method_output. Some methods still manually check for
      the existence of a repository, but this result is also cached.
      
      With selective flushing implemented ProjectCacheWorker no longer uses an
      exclusive lease for all of its work. Instead this worker only uses a
      lease to limit the number of times the repository size is updated as
      this is a fairly expensive operation.
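
      The pattern itself can be summarised in a simplified sketch (not the
      exact implementation; cache, exists? and tree stand in for the
      repository's cache, existence check and tree lookup):

        # Every cached method funnels through one helper, which refuses to
        # cache anything while the repository doesn't exist.
        def cache_method_output(key, fallback: nil)
          if exists? # repository existence check, itself cached
            cache.fetch(key) { yield }
          else
            fallback
          end
        end

        def readme
          cache_method_output(:readme) { tree(:head).readme }
        end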
  6. 19 Nov, 2016 1 commit
    • Preserve optional second parameter in NewNoteWorker jobs · 99432cbc
      Stan Hu authored
      If there are any old jobs or retries in the Sidekiq queue, NewNoteWorker
      will fail with the error:
      
      wrong number of arguments (given 2, expected 1)
      
      This change allows the optional second argument to be used
      to preserve backwards compatibility. It can be removed later.
      
      Closes #24678
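
      In sketch form the backwards-compatible signature looks something like
      this (simplified):

        require 'sidekiq'

        class NewNoteWorker
          include Sidekiq::Worker

          # The second argument is unused; it exists only so jobs already
          # sitting in the queue (or the retry set) with two arguments don't
          # raise ArgumentError. It can be dropped once those jobs have drained.
          def perform(note_id, _note_params = {})
            # ... process the note ...
          end
        end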
  7. 18 Nov, 2016 1 commit
  8. 17 Nov, 2016 3 commits
  9. 16 Nov, 2016 1 commit
  10. 12 Nov, 2016 1 commit
  11. 09 Nov, 2016 1 commit
    • Add button to delete all merged branches · 1afab9eb
      Toon Claes authored
      It adds a button to the branches page that the user can use to delete
      all the branches that are already merged. This can be used to clean up
      branches the user forgot to delete while merging MRs.
      
      Fixes #21076.
  12. 08 Nov, 2016 1 commit
  13. 07 Nov, 2016 1 commit
    • Process commits in a separate worker · 509910b8
      Yorick Peterse authored
      This moves the code used for processing commits from GitPushService to
      its own Sidekiq worker: ProcessCommitWorker.
      
      Using a Sidekiq worker allows us to process multiple commits in
      parallel. This in turn will lead to issues being closed faster and cross
      references being created faster. Furthermore by isolating this code into
      a separate class it's easier to test and maintain the code.
      
      The new worker also ensures it can efficiently check which issues can be
      closed, without having to run numerous SQL queries for every issue.
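
      The "one query instead of N" idea can be sketched roughly like this
      (closed_issue_ids_from and the opened scope are illustrative
      assumptions, not the worker's real helpers):

        def issues_to_close(project, commit_message)
          # Parse the issue ids referenced by the closing pattern
          # ("Closes #1, #2"), then keep only those still open with a
          # single query instead of one query per issue.
          mentioned = closed_issue_ids_from(commit_message)
          return [] if mentioned.empty?

          project.issues.opened.where(iid: mentioned)
        end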
  14. 04 Nov, 2016 1 commit
  15. 01 Nov, 2016 1 commit
  16. 28 Oct, 2016 1 commit
  17. 25 Oct, 2016 1 commit
    • Don't schedule ProjectCacheWorker unless needed · 3b4af59a
      Yorick Peterse authored
      This changes ProjectCacheWorker.perform_async so it only schedules a job
      when no lease for the given project is present. This ensures we don't
      end up scheduling hundreds of jobs when they won't be executed anyway.
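
      Sketched out (simplified; lease_taken_for? is a hypothetical check, not
      the real implementation):

        require 'sidekiq'

        class ProjectCacheWorker
          include Sidekiq::Worker

          # Only enqueue a job when no lease for this project is currently
          # held; a job scheduled while the lease exists would exit early
          # anyway, so there is no point in queueing it.
          def self.perform_async(project_id, *args)
            return if lease_taken_for?(project_id) # hypothetical lease check

            super
          end
        end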
  18. 24 Oct, 2016 1 commit
  19. 21 Oct, 2016 2 commits
    • Re-organize queues to use for Sidekiq · 97731760
      Yorick Peterse authored
      Dumping too many jobs in the same queue (e.g. the "default" queue) is a
      dangerous setup. Jobs that take a long time to process can effectively
      block any other work from being performed, given enough of these jobs.
      
      Furthermore it becomes harder to monitor the jobs as a single queue
      could contain jobs for different workers. In such a setup the only
      reliable way of getting counts per job is to iterate over all jobs in a
      queue, which is a rather time consuming process.
      
      By using separate queues for various workers we have better control over
      throughput, we can add weight to queues, and we can monitor queues
      better. Some workers still use the same queue whenever their work is
      related. For example, the various CI pipeline workers use the same
      "pipeline" queue.
      
      This commit includes a Rails migration that moves Sidekiq jobs from the
      old queues to the new ones. This migration also takes care of doing the
      inverse if ever needed. This does require downtime as otherwise new jobs
      could be scheduled in the old queues after this migration completes.
      
      This commit also includes an RSpec test that blacklists the use of the
      "default" queue and ensures cron workers use the "cronjob" queue.
      
      Fixes gitlab-org/gitlab-ce#23370
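
      In Sidekiq terms each worker simply declares its queue, and related
      workers can share one (worker names below are illustrative):

        require 'sidekiq'

        class PipelineProcessWorker
          include Sidekiq::Worker
          sidekiq_options queue: :pipeline
        end

        class PipelineSuccessWorker
          include Sidekiq::Worker
          sidekiq_options queue: :pipeline
        end

        # Cron-style workers get a dedicated queue instead of "default".
        class StuckCiBuildsWorker
          include Sidekiq::Worker
          sidekiq_options queue: :cronjob
        end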
    • Add retry limit and set it at four · decf0fef
      Jose Torres authored
  20. 20 Oct, 2016 1 commit
    • Restrict ProjectCacheWorker jobs to one per 15 min · bc31a489
      Yorick Peterse authored
      This ensures ProjectCacheWorker jobs for a given project are performed
      at most once per 15 minutes. This should reduce disk load a bit in cases
      where there are multiple pushes happening (which should schedule
      multiple ProjectCacheWorker jobs).
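
      Sketched with a plain Redis SET NX/EX (GitLab wraps this in its
      ExclusiveLease class; the key and helper names are illustrative):

        require 'redis'

        LEASE_TIMEOUT = 15 * 60 # seconds

        def update_caches(redis, project_id)
          # Only the first job in a 15-minute window obtains the lease; later
          # jobs for the same project return early instead of recomputing the
          # repository size again.
          return unless redis.set("project_cache:#{project_id}", 1,
                                  nx: true, ex: LEASE_TIMEOUT)

          update_repository_size(project_id) # hypothetical expensive step
        end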
  21. 17 Oct, 2016 6 commits
  22. 14 Oct, 2016 3 commits
  23. 13 Oct, 2016 6 commits