BigW Consortium Gitlab

  1. 23 Feb, 2017 6 commits
  2. 13 Jan, 2017 1 commit
  3. 21 Dec, 2016 1 commit
  4. 25 Aug, 2016 1 commit
  5. 17 Aug, 2016 1 commit
    • Tracking of custom events · d345591f
      Yorick Peterse authored
      GitLab Performance Monitoring is now able to track custom events not
      directly related to application performance. These events include the
      number of tags pushed, repositories created, builds registered, etc.
      
      These events are meant to give a better overview of how a GitLab
      instance is used and how that may affect performance. For example, a
      large number of Git pushes may have a negative impact on the underlying
      storage engine.
      
      Events are stored in the "events" measurement and are not prefixed with
      "rails_" or "sidekiq_"; this makes it easier to query events with the
      same name triggered from different parts of the application. Storing
      all events in the same measurement also makes it easier to downsample
      the data.
      
      Currently the following events are tracked:
      
      * Creating repositories
      * Removing repositories
      * Changing the default branch of a repository
      * Pushing a new tag
      * Removing an existing tag
      * Pushing a commit (along with the branch being pushed to)
      * Pushing a new branch
      * Removing an existing branch
      * Importing a repository (along with the URL we're importing)
      * Forking a repository (along with the source/target path)
      * CI builds registered (and when no build could be found)
      * CI builds being updated
      * Rails and Sidekiq exceptions
      
      Fixes gitlab-org/gitlab-ce#13720
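      
      As a rough sketch of what recording such an event involves (the helper
      and buffer names below are hypothetical, not the exact API added by
      this commit), an event boils down to a point written to the "events"
      measurement with the event name as a tag:
      
          # Hypothetical sketch: a custom event is a point in the "events"
          # measurement, tagged with the event name plus any extra tags.
          EVENT_SINK = Queue.new # stand-in for a buffer flushed to InfluxDB
      
          def track_event(name, tags = {})
            EVENT_SINK << {
              series:    'events',
              tags:      tags.merge(event: name),
              values:    { count: 1 },
              timestamp: Time.now.utc.to_i
            }
          end
      
          track_event(:push_branch, branch: 'master')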
  6. 28 Jul, 2016 1 commit
    • Reduce instrumentation overhead · 905f8d76
      Yorick Peterse authored
      This reduces the overhead of the method instrumentation code primarily
      by reducing the number of method calls. There are also some other small
      optimisations such as not casting timing values to Floats (there's no
      particular need for this), using Symbols for method call metric names,
      and reducing the number of Hash lookups for instrumented methods.
      
      The exact impact depends on the code being executed. For example, for a
      method that's only called once the difference won't be very noticeable.
      However, for methods that are called many times the difference can be
      more significant.
      
      For example, the loading time of a large commit
      (nrclark/dummy_project@81ebdea5df2fb42e59257cb3eaad671a5c53ca36)
      was reduced from around 19 seconds to around 15 seconds using these
      changes.
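      
      A minimal sketch of two of these optimisations (the class below is
      illustrative, not the actual instrumentation code): the metric name is
      built once and kept as a Symbol, and the timing values are used as
      returned by the clock rather than being cast:
      
          class InstrumentedMethod
            def initialize(mod, name)
              # Built once and stored as a Symbol instead of being rebuilt
              # (and looked up in a Hash) on every call.
              @metric_name = :"#{mod}##{name}"
            end
      
            def measure
              started = now_ms
              retval  = yield
      
              [@metric_name, now_ms - started, retval]
            end
      
            private
      
            # Integer milliseconds; no cast to Float needed.
            def now_ms
              Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
            end
          end
      
          InstrumentedMethod.new(String, :upcase).measure { 'foo'.upcase }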
  7. 05 Jul, 2016 1 commit
  8. 28 Jun, 2016 1 commit
    • Use clock_gettime for all performance timestamps · d7b4f36a
      Yorick Peterse authored
      Process.clock_gettime allows getting the real time in nanoseconds and
      also supports monotonic timestamps. This offers greater accuracy
      without the overhead of having to allocate a Time instance. In general,
      using Time.now/Time.new is about 2x slower than using
      Process.clock_gettime(). For example:
      
          require 'benchmark/ips'
      
          Benchmark.ips do |bench|
            bench.report 'Time.now' do
              Time.now.to_f
            end
      
            bench.report 'clock_gettime' do
              Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
            end
      
            bench.compare!
          end
      
      Running this benchmark gives:
      
          Calculating -------------------------------------
                      Time.now   108.052k i/100ms
                 clock_gettime   125.984k i/100ms
          -------------------------------------------------
                      Time.now      2.343M (± 7.1%) i/s -     11.670M
                 clock_gettime      4.979M (± 0.8%) i/s -     24.945M
      
          Comparison:
                 clock_gettime:  4979393.8 i/s
                      Time.now:  2342986.8 i/s - 2.13x slower
      
      Another benefit of using Process.clock_gettime() is that we can simplify
      the code a bit since it can give timestamps in nanoseconds out of the
      box.
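      
      For instrumentation purposes this means a duration can be measured with
      two monotonic clock reads and no Time allocations at all (a small
      illustration, not code from this commit):
      
          work = -> { 100_000.times { Math.sqrt(rand) } }
      
          started  = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
          work.call
          duration = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond) - started
      
          duration # => Integer number of milliseconds spent in work.call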
  9. 23 Jun, 2016 1 commit
  10. 17 Jun, 2016 2 commits
    • Track method call times/counts as a single metric · be3b8784
      Yorick Peterse authored
      Previously we'd create a separate Metric instance for every method call
      that would exceed the method call threshold. This is problematic because
      it doesn't provide us with information to accurately get the _total_
      execution time of a particular method. For example, if the method
      "Foo#bar" was called 4 times with a runtime of ~10 milliseconds we'd end
      up with 4 different Metric instances. If we were to then get the
      average/95th percentile/etc of the timings this would be roughly 10
      milliseconds. However, the _actual_ total time spent in this method
      would be around 40 milliseconds.
      
      To solve this problem we now create a single Metric instance per method.
      This Metric instance contains the _total_ real/CPU time and the call
      count for every instrumented method.
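      
      A stripped-down sketch of this aggregation (the class name below is
      made up; it is not the actual Metric implementation): one accumulator
      per method records the total real time and the number of calls:
      
          class MethodCallStats
            attr_reader :real_time, :call_count
      
            def initialize(name)
              @name       = name
              @real_time  = 0.0
              @call_count = 0
            end
      
            def measure
              started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
              retval  = yield
      
              @real_time  += Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
              @call_count += 1
      
              retval
            end
          end
      
          stats = MethodCallStats.new('Foo#bar')
          4.times { stats.measure { sleep(0.01) } }
      
          stats.call_count # => 4
          stats.real_time  # => ~0.04, the total rather than 4 separate samples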
  11. 16 Jun, 2016 2 commits
  12. 14 Jun, 2016 4 commits
    • Filter out classes without names in the sampler · ab91f122
      Yorick Peterse authored
      We can't do a lot with classes without names as we can't filter by them,
      have no idea where they come from, etc. As such it's best to just ignore
      these.
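      
      Anonymous classes (e.g. the result of Class.new) are the classes
      without names referred to here; a minimal sketch of the filter:
      
          Class.new.name # => nil
          String.name    # => "String"
      
          # Only keep classes the sampler can actually report on.
          named_classes = ObjectSpace.each_object(Class).reject do |klass|
            klass.name.nil?
          end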
    • Instrument private/protected methods · dadc5313
      Paco Guzman authored
      By default instrumentation will instrument public,
      protected and private methods, because heavy work
      is usually done in private methods, or at least
      that's what the data shows.
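      
      Gathering methods of every visibility comes down to combining the three
      reflection calls below (the helper and example class are illustrative):
      
          # Own (non-inherited) methods of all visibilities.
          def methods_to_instrument(mod)
            mod.public_instance_methods(false) +
              mod.protected_instance_methods(false) +
              mod.private_instance_methods(false)
          end
      
          class Example
            def pub; end
            protected def prot; end
            private def priv; end
          end
      
          methods_to_instrument(Example) # => [:pub, :prot, :priv]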
    • Instrument Grape Endpoint with Metrics::RackMiddleware · 509082ba
      Paco Guzman authored
      Generating the following tags
      
      Grape#GET /projects/:id/archive
      
      from Grape::Route objects like
      
      { :path => "/:version/projects/:id/archive(.:format)",
        :version => "v3",
        :method => "GET" }
      
      Use an instance variable to cache raw_path transformations.
      This cache only grows with the number of endpoints of the
      API, not with the number of distinct requests.
      
      We can store this cache as an instance variable because the
      middleware is initialised only once.
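      
      A rough sketch of the raw_path transformation and the per-path cache
      (illustrative, not the exact middleware code):
      
          # Strip the format suffix and the version segment, caching the
          # result per raw_path. The cache only grows with the number of
          # API endpoints because the middleware is initialised once.
          def endpoint_path(raw_path)
            @path_cache ||= {}
            @path_cache[raw_path] ||=
              raw_path.sub('(.:format)', '').sub('/:version', '')
          end
      
          "Grape#GET #{endpoint_path('/:version/projects/:id/archive(.:format)')}"
          # => "Grape#GET /projects/:id/archive"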
    • Measure CPU time for instrumented methods · 120fbbd4
      Paco Guzman authored
  13. 03 Jun, 2016 2 commits
  14. 24 May, 2016 1 commit
  15. 15 May, 2016 1 commit
  16. 12 May, 2016 1 commit
    • Removed tracking of total method execution times · 945c5b3f
      Yorick Peterse authored
      Because method call timings are inclusive (that is, they include the
      time of any sub method calls) this would lead to the total method
      execution time often being far greater than the total transaction time.
      Because this is incredibly confusing it's best to simply _not_ track
      the total method execution time; after all, it's not that useful to
      begin with.
      
      Fixes gitlab-org/gitlab-ce#17239
  17. 18 Apr, 2016 2 commits
  18. 11 Apr, 2016 1 commit
  19. 08 Apr, 2016 1 commit
  20. 25 Jan, 2016 1 commit
    • Correct arity for instrumented methods w/o args · b74308c0
      Yorick Peterse authored
      This ensures that an instrumented method that doesn't take arguments
      reports an arity of 0, instead of -1.
      
      If Ruby had a proper method for finding out the required arguments of a
      method (e.g. Method#required_arguments) this would not have been an
      issue. Sadly the only two methods we have are Method#parameters and
      Method#arity, and both are equally painful to use.
      
      Fixes gitlab-org/gitlab-ce#12450
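      
      The underlying Ruby behaviour is easy to demonstrate: a method that
      takes no arguments has an arity of 0, while a splat-based wrapper (as
      generated instrumentation code tends to use) reports -1, and
      Method#parameters is no friendlier:
      
          def without_args; end
          def with_splat(*args); end
      
          method(:without_args).arity      # => 0
          method(:with_splat).arity        # => -1
      
          method(:without_args).parameters # => []
          method(:with_splat).parameters   # => [[:rest, :args]]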
  21. 13 Jan, 2016 1 commit
    • Randomize metrics sample intervals · 057eb824
      Yorick Peterse authored
      Sampling data at a fixed interval means we can potentially miss data
      from events occurring between sampling intervals. For example, say we
      sample data every 15 seconds but Unicorn workers get killed after 10
      seconds. In this particular case it's possible to miss interesting data
      as the sampler will never get to actually submitting data.
      
      To work around this (at least for the most part) the sampling interval
      is randomized as follows:
      
      1. Take the user specified sampling interval (15 seconds by default)
      2. Divide it by 2 (referred to as "half" below)
      3. Generate a range (using a step of 0.1) from -"half" to "half"
      4. Every time the sampler goes to sleep we'll grab the user provided
         interval and add a randomly chosen "adjustment" to it while making
         sure we don't pick the same value twice in a row.
      
      For a specified interval of 15 seconds this means the actual sleep
      times can be anywhere between 7.5 and 22.5 seconds, but the same
      interval is never used twice in a row.
      
      The rationale behind this change is that on dev.gitlab.org I'm sometimes
      seeing certain Gitlab::Git/Rugged objects being retained, but only for a
      few minutes every 24 hours. Knowing the code of Gitlab and how much
      memory it uses/leaks I suspect we're missing data due to workers getting
      terminated before the sampler can write its data to InfluxDB.
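      
      A condensed sketch of that interval logic (illustrative, not the actual
      sampler code):
      
          class SamplingInterval
            def initialize(interval = 15)
              @interval    = interval
              half         = interval / 2.0
              @adjustments = (-half..half).step(0.1).to_a
              @last        = nil
            end
      
            # The next sleep time: the base interval plus a random
            # adjustment, never picking the same adjustment twice in a row.
            def next_interval
              adjustment = @adjustments.sample
              adjustment = @adjustments.sample while adjustment == @last
              @last      = adjustment
      
              @interval + adjustment
            end
          end
      
          SamplingInterval.new(15).next_interval # => between 7.5 and 22.5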
  22. 12 Jan, 2016 2 commits
    • Stop tracking call stacks for instrumented views · 355c341f
      Yorick Peterse authored
      Where a view is called from doesn't matter as much. We already know what
      action they belong to and this is more than enough information. By
      removing the file/line number from the list of tags we should also be
      able to reduce the number of series stored in InfluxDB.
    • Track memory allocated during a transaction · 5679ee01
      Yorick Peterse authored
      This gives a very rough estimate of how much memory is allocated during
      a transaction. This only works reliably when using a single-threaded
      application server and a Ruby implementation with a GIL as otherwise
      memory allocated by other threads might skew the statistics. Sadly
      there's no way around this as Ruby doesn't provide a reliable way of
      gathering accurate object sizes upon allocation on a per-thread basis.
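      
      One rough way to obtain such an estimate (assuming a Linux /proc
      filesystem; this is an illustration, not necessarily the exact
      implementation) is to diff the process RSS around the transaction:
      
          # Resident set size in bytes, read from /proc/self/status.
          def memory_usage
            File.read('/proc/self/status')[/VmRSS:\s+(\d+)/, 1].to_i * 1024
          end
      
          before    = memory_usage
          strings   = Array.new(10_000) { 'x' * 1_000 } # the "transaction"
          allocated = memory_usage - before # rough, process-wide estimate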
  23. 11 Jan, 2016 1 commit
    • Tag all transaction metrics with an "action" tag · 35b501f3
      Yorick Peterse authored
      Without this it's impossible to find out what methods/views/queries are
      executed by a certain controller or Sidekiq worker. While this will
      increase the total number of series it should stay within reasonable
      limits, as the number of "actions" is small enough.
  24. 07 Jan, 2016 3 commits
  25. 06 Jan, 2016 1 commit