
  1. 17 Aug, 2016 1 commit
    • Tracking of custom events · d345591f
      Yorick Peterse authored
      GitLab Performance Monitoring is now able to track custom events not
      directly related to application performance. These events include the
      number of tags pushed, repositories created, builds registered, etc.
      
      These events are tracked to get a better overview of how a GitLab
      instance is used and how that may affect performance. For example, a
      large number of Git pushes may have a negative impact on the underlying
      storage engine.
      
      Events are stored in the "events" measurement and are not prefixed with
      "rails_" or "sidekiq_". This makes it easier to query events with the
      same name triggered from different parts of the application. Storing all
      events in the same measurement also makes it easier to downsample the
      data.
      
      Currently the following events are tracked:
      
      * Creating repositories
      * Removing repositories
      * Changing the default branch of a repository
      * Pushing a new tag
      * Removing an existing tag
      * Pushing a commit (along with the branch being pushed to)
      * Pushing a new branch
      * Removing an existing branch
      * Importing a repository (along with the URL we're importing from)
      * Forking a repository (along with the source/target path)
      * CI builds registered (and when no build could be found)
      * CI builds being updated
      * Rails and Sidekiq exceptions
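
      A minimal sketch of how such an event might be recorded, assuming an
      add_event style helper (the helper name and tags below are illustrative,
      not verbatim from this commit):

          # Record a bare event in the shared "events" measurement.
          Gitlab::Metrics.add_event(:push_tag)

          # Events can carry extra details, e.g. when forking a repository.
          Gitlab::Metrics.add_event(:fork_repository,
                                    source_path: 'group/project',
                                    target_path: 'user/project')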
      
      Fixes gitlab-org/gitlab-ce#13720
  2. 28 Jul, 2016 1 commit
    • Reduce instrumentation overhead · 905f8d76
      Yorick Peterse authored
      This reduces the overhead of the method instrumentation code primarily
      by reducing the number of method calls. There are also some other small
      optimisations such as not casting timing values to Floats (there's no
      particular need for this), using Symbols for method call metric names,
      and reducing the number of Hash lookups for instrumented methods.
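
      As an illustration of the general idea (the names below are hypothetical,
      not the actual instrumentation code), per-call work such as building a
      String metric name can be moved to instrumentation time:

          module Instrumentation
            def self.instrument_method(mod, name)
              # Compute the metric name once, as a Symbol, instead of
              # allocating a new String on every call.
              label    = :"#{mod.name}.#{name}"
              original = mod.method(name)

              mod.define_singleton_method(name) do |*args, &block|
                start  = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
                retval = original.call(*args, &block)

                # Integer milliseconds; no need to cast to Float.
                duration = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond) - start
                # record(label, duration) would go here.

                retval
              end
            end
          end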
      
      The exact impact depends on the code being executed. For example, for a
      method that's only called once the difference won't be very noticeable.
      However, for methods that are called many times the difference can be
      more significant.
      
      For example, the loading time of a large commit
      (nrclark/dummy_project@81ebdea5df2fb42e59257cb3eaad671a5c53ca36)
      was reduced from around 19 seconds to around 15 seconds using these
      changes.
  3. 28 Jun, 2016 1 commit
    • Use clock_gettime for all performance timestamps · d7b4f36a
      Yorick Peterse authored
      Process.clock_gettime allows getting the real time in nanoseconds as
      well as a monotonic timestamp. This offers greater accuracy without the
      overhead of having to allocate a Time instance. In general, using
      Time.now/Time.new is about 2x slower than using Process.clock_gettime().
      For example:
      
          require 'benchmark/ips'
      
          Benchmark.ips do |bench|
            bench.report 'Time.now' do
              Time.now.to_f
            end
      
            bench.report 'clock_gettime' do
              Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
            end
      
            bench.compare!
          end
      
      Running this benchmark gives:
      
          Calculating -------------------------------------
                      Time.now   108.052k i/100ms
                 clock_gettime   125.984k i/100ms
          -------------------------------------------------
                      Time.now      2.343M (± 7.1%) i/s -     11.670M
                 clock_gettime      4.979M (± 0.8%) i/s -     24.945M
      
          Comparison:
                 clock_gettime:  4979393.8 i/s
                      Time.now:  2342986.8 i/s - 2.13x slower
      
      Another benefit of using Process.clock_gettime() is that we can simplify
      the code a bit since it can give timestamps in nanoseconds out of the
      box.
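
      Since the monotonic clock can be read directly in nanoseconds, measuring
      a duration becomes a simple subtraction, for example:

          start = Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond)

          do_work # placeholder for the code being timed

          elapsed_ns = Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond) - start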
  4. 17 Jun, 2016 1 commit
    • Track method call times/counts as a single metric · be3b8784
      Yorick Peterse authored
      Previously we'd create a separate Metric instance for every method call
      that would exceed the method call threshold. This is problematic because
      it doesn't provide us with information to accurately get the _total_
      execution time of a particular method. For example, if the method
      "Foo#bar" was called 4 times with a runtime of ~10 milliseconds we'd end
      up with 4 different Metric instances. If we were to then get the
      average/95th percentile/etc of the timings this would be roughly 10
      milliseconds. However, the _actual_ total time spent in this method
      would be around 40 milliseconds.
      
      To solve this problem we now create a single Metric instance per method.
      This Metric instance contains the _total_ real/CPU time and the call
      count for every instrumented method.
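
      A minimal sketch of this aggregation (class and method names here are
      illustrative, not the actual Metric implementation):

          # One aggregate per instrumented method: total real/CPU time plus
          # the number of calls, rather than one data point per call.
          class MethodCallStats
            attr_reader :call_count, :real_time, :cpu_time

            def initialize
              @call_count = 0
              @real_time  = 0
              @cpu_time   = 0
            end

            def measure
              real_start = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
              cpu_start  = Process.clock_gettime(Process::CLOCK_PROCESS_CPUTIME_ID, :millisecond)

              retval = yield

              @real_time  += Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond) - real_start
              @cpu_time   += Process.clock_gettime(Process::CLOCK_PROCESS_CPUTIME_ID, :millisecond) - cpu_start
              @call_count += 1

              retval
            end
          end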
  5. 12 Jan, 2016 1 commit
    • Track memory allocated during a transaction · 5679ee01
      Yorick Peterse authored
      This gives a very rough estimate of how much memory is allocated during
      a transaction. This only works reliably when using a single-threaded
      application server and a Ruby implementation with a GIL, as otherwise
      memory allocated by other threads might skew the statistics. Sadly,
      there's no way around this, as Ruby doesn't provide a reliable way of
      gathering accurate object sizes upon allocation on a per-thread basis.
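
      A rough sketch of the approach, assuming a hypothetical helper that
      reads the resident set size of the current process (Linux-specific and
      illustrative only):

          # Resident set size in bytes, read from /proc. Assumes 4 KB pages.
          def rss_in_bytes
            File.read('/proc/self/statm').split[1].to_i * 4096
          end

          before = rss_in_bytes

          run_transaction # placeholder for the actual Rails request/Sidekiq job

          allocated_estimate = rss_in_bytes - before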
  6. 11 Jan, 2016 1 commit
    • Tag all transaction metrics with an "action" tag · 35b501f3
      Yorick Peterse authored
      Without this it's impossible to find out what methods/views/queries are
      executed by a certain controller or Sidekiq worker. While this will
      increase the total number of series, it should stay within reasonable
      limits because the number of distinct "actions" is small.
  7. 07 Jan, 2016 2 commits
    • Store request methods/URIs as values · 7b10cb6f
      Yorick Peterse authored
      Since filtering by these values is very rare (they're mostly just
      displayed as-is) we don't need to waste any index space by saving them
      as tags. By storing them as values we also greatly reduce the number of
      series in InfluxDB.
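
      A sketch of what a single data point might look like under this scheme
      (series and field names are illustrative): the "action" tag remains
      filterable, while the request method and URI are stored as plain values.

          {
            series: 'rails_transactions',
            tags:   { action: 'ProjectsController#show' },
            values: { duration: 215, method: 'GET', path: '/group/project' }
          }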
    • Removed UUIDs from metrics transactions · 364b07cf
      Yorick Peterse authored
      While useful for finding out what methods/views belong to a transaction,
      this might result in too much data being stored in InfluxDB.
  8. 04 Jan, 2016 2 commits
  9. 31 Dec, 2015 1 commit
  10. 29 Dec, 2015 1 commit
    • Write to InfluxDB directly via UDP · 620e7bb3
      Yorick Peterse authored
      This removes the need for Sidekiq and any overhead/problems introduced
      by TCP. There are a few things to take into account:
      
      1. When writing data to InfluxDB you may still get an error if the
         server becomes unavailable during the write. Because of this we're
         catching all exceptions and simply ignoring them (for now).
      2. Writing via UDP apparently requires the timestamp to be in
         nanoseconds. Without this, data isn't written properly.
      3. Due to the restrictions on UDP buffer sizes we're writing metrics one
         by one, instead of writing all of them at once.
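
      A rough sketch of this write path (host, port, and the line format are
      assumptions, not the exact implementation):

          require 'socket'

          socket = UDPSocket.new

          metrics = [
            { series: 'events', value: 1,
              timestamp_ns: Process.clock_gettime(Process::CLOCK_REALTIME, :nanosecond) }
          ]

          metrics.each do |metric|
            # One datagram per metric to stay within UDP buffer limits, with
            # the timestamp expressed in nanoseconds.
            line = "#{metric[:series]} value=#{metric[:value]} #{metric[:timestamp_ns]}"

            begin
              socket.send(line, 0, 'localhost', 8089)
            rescue StandardError
              # InfluxDB may become unavailable mid-write; errors are ignored.
            end
          end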
  11. 17 Dec, 2015 1 commit
    • Storing of application metrics in InfluxDB · 141e946c
      Yorick Peterse authored
      This adds the ability to write application metrics (e.g. SQL timings) to
      InfluxDB. These metrics can in turn be visualized using Grafana, or
      really anything else that can read from InfluxDB. These metrics can be
      used to track application performance over time, between different Ruby
      versions, different GitLab versions, etc.
      
      == Transaction Metrics
      
      Currently the following is tracked on a per transaction basis (a
      transaction is a Rails request or a single Sidekiq job):
      
      * Timings per query along with the raw (obfuscated) SQL and information
        about what file the query originated from.
      * Timings per view along with the path of the view and information about
        what file triggered the rendering process.
      * The duration of a request itself along with the controller/worker
        class and method name.
      * The duration of any instrumented method calls (more below).
      
      == Sampled Metrics
      
      Certain metrics can't be directly associated with a transaction. For
      example, a process' total memory usage is unrelated to any running
      transactions. While a transaction can result in the memory usage going
      up, there's no accurate way to determine which transaction is to blame.
      This becomes especially problematic in multi-threaded environments.
      
      To solve this problem there's a separate thread that takes samples at a
      fixed interval. This thread (using the class Gitlab::Metrics::Sampler)
      currently tracks the following:
      
      * The process' total memory usage.
      * The number of file descriptors opened by the process.
      * The amount of Ruby objects (using ObjectSpace.count_objects).
      * GC statistics such as timings, heap slots, etc.
      
      The default/current interval is 15 seconds; any smaller interval might
      put too much pressure on InfluxDB (especially when running dozens of
      processes).
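
      A minimal sketch of such a sampler loop (simplified; not the actual
      Gitlab::Metrics::Sampler implementation, and the file descriptor count
      shown here is Linux-specific):

          Thread.new do
            loop do
              # Process-wide metrics that can't be tied to a single transaction.
              sample = {
                object_counts:    ObjectSpace.count_objects,
                gc_stats:         GC.stat,
                file_descriptors: Dir.glob('/proc/self/fd/*').length
              }

              # The process' memory usage would be sampled here as well, and
              # write_sample(sample) would submit the data point.

              sleep(15)
            end
          end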
      
      == Method Instrumentation
      
      While currently not yet used, methods can be instrumented to track how
      long they take to run. Unlike the likes of New Relic this doesn't
      require modifying the source code (e.g. including modules); it all
      happens from the outside. For example, to track `User.by_login` we'd add
      the following code somewhere in an initializer:
      
          Gitlab::Metrics::Instrumentation.
            instrument_method(User, :by_login)
      
      To instead instrument an instance method:
      
          Gitlab::Metrics::Instrumentation.
            instrument_instance_method(User, :save)
      
      Instrumentation for either all public model methods or a few crucial
      ones will be added in the near future; I simply haven't gotten to doing
      so just yet.
      
      == Configuration
      
      By default metrics are disabled. This means users don't have to bother
      setting anything up if they don't want to. Metrics can be enabled by
      editing one's gitlab.yml configuration file (see
      config/gitlab.yml.example for example settings).
      
      == Writing Data To InfluxDB
      
      Because InfluxDB is still a fairly young product I expect the worst:
      data loss, unexpected reboots, the database not responding, you name it.
      Because of this, data is _not_ written to InfluxDB directly; instead
      it's queued and processed by Sidekiq. This ensures that users won't
      notice anything when InfluxDB is giving trouble.
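
      A rough sketch of what such a worker might look like (only the
      MetricsWorker name and the "metrics" queue come from this commit; the
      write logic and connection details are illustrative):

          require 'sidekiq'
          require 'influxdb'

          class MetricsWorker
            include Sidekiq::Worker

            sidekiq_options queue: :metrics

            def perform(points)
              client = InfluxDB::Client.new('gitlab_metrics')

              points.each do |point|
                client.write_point(point['series'], values: point['values'],
                                                    tags:   point['tags'])
              end
            end
          end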
      
      The metrics worker can be started in a standalone manner as follows:
      
          bundle exec sidekiq -q metrics
      
      The corresponding class is called MetricsWorker.