- 25 Aug, 2016 1 commit

Paco Guzman authored

- 17 Aug, 2016 1 commit

Yorick Peterse authored
GitLab Performance Monitoring is now able to track custom events not directly related to application performance. These events include the number of tags pushed, repositories created, builds registered, etc. These events give a better overview of how a GitLab instance is used and how that may affect performance. For example, a large number of Git pushes may have a negative impact on the underlying storage engine.

Events are stored in the "events" measurement and are not prefixed with "rails_" or "sidekiq_". This makes it easier to query events with the same name triggered from different parts of the application. Storing all events in the same measurement also makes it easier to downsample data.

Currently the following events are tracked:

* Creating repositories
* Removing repositories
* Changing the default branch of a repository
* Pushing a new tag
* Removing an existing tag
* Pushing a commit (along with the branch being pushed to)
* Pushing a new branch
* Removing an existing branch
* Importing a repository (along with the URL we're importing)
* Forking a repository (along with the source/target path)
* CI builds registered (and when no build could be found)
* CI builds being updated
* Rails and Sidekiq exceptions

Fixes gitlab-org/gitlab-ce#13720
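A minimal sketch of what such an event counter could look like, assuming an in-memory stand-in for the InfluxDB backend; the module layout and `add_event` name are illustrative, not necessarily the exact GitLab API:

    # Counts named events in a single shared "measurement" so events
    # with the same name can be queried regardless of where they fired.
    module Metrics
      EVENTS = Hash.new(0)

      def self.add_event(name, tags = {})
        EVENTS[[name, tags]] += 1
      end
    end

    Metrics.add_event(:push_tag)
    Metrics.add_event(:import_repository, import_url: 'https://example.com/foo.git')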
- 28 Jul, 2016 1 commit

Yorick Peterse authored
This reduces the overhead of the method instrumentation code, primarily by reducing the number of method calls. There are also some other small optimisations: not casting timing values to Floats (there's no particular need for this), using Symbols for method call metric names, and reducing the number of Hash lookups for instrumented methods.

The exact impact depends on the code being executed. For example, for a method that's only called once the difference won't be very noticeable. However, for methods that are called many times the difference can be more significant. For example, the loading time of a large commit (nrclark/dummy_project@81ebdea5df2fb42e59257cb3eaad671a5c53ca36) was reduced from around 19 seconds to around 15 seconds using these changes.
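A hedged sketch of the kind of hot-path trimming described here; every name below is hypothetical, the point is Symbol metric names, integer millisecond timings (no Float casts), and a single Hash lookup per call:

    module Instrumentation
      # One lookup with memoised Symbol names instead of building a
      # String metric name on every call.
      METRIC_NAMES = Hash.new { |hash, key| hash[key] = :"#{key}_call_time" }

      def self.measure(method_key)
        start = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
        result = yield
        duration = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond) - start
        record(METRIC_NAMES[method_key], duration)
        result
      end

      def self.record(name, duration)
        # stand-in for writing the value to the current transaction
      end
    end

    Instrumentation.measure(:foo_bar) { sleep(0.01) }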
- 05 Jul, 2016 2 commits

Paco Guzman authored

Paco Guzman authored

- 28 Jun, 2016 1 commit

Yorick Peterse authored
Process.clock_gettime allows getting the real time in nanoseconds as well as allowing one to get a monotonic timestamp. This offers greater accuracy without the overhead of having to allocate a Time instance. In general using Time.now/Time.new is about 2x slower than using Process.clock_gettime(). For example:

    require 'benchmark/ips'

    Benchmark.ips do |bench|
      bench.report 'Time.now' do
        Time.now.to_f
      end

      bench.report 'clock_gettime' do
        Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
      end

      bench.compare!
    end

Running this benchmark gives:

    Calculating -------------------------------------
                Time.now   108.052k i/100ms
           clock_gettime   125.984k i/100ms
    -------------------------------------------------
                Time.now     2.343M (± 7.1%) i/s - 11.670M
           clock_gettime     4.979M (± 0.8%) i/s - 24.945M

    Comparison:
           clock_gettime:  4979393.8 i/s
                Time.now:  2342986.8 i/s - 2.13x slower

Another benefit of using Process.clock_gettime() is that we can simplify the code a bit since it can give timestamps in nanoseconds out of the box.
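In that spirit, a tiny helper (the name is illustrative) that returns monotonic integer milliseconds without allocating a Time instance:

    module Metrics
      def self.current_monotonic_ms
        Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
      end
    end

    started_at = Metrics.current_monotonic_ms
    sleep(0.05) # stand-in for real work
    puts Metrics.current_monotonic_ms - started_at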
- 23 Jun, 2016 1 commit

Paco Guzman authored

- 17 Jun, 2016 2 commits

Yorick Peterse authored
Previously we'd create a separate Metric instance for every method call that would exceed the method call threshold. This is problematic because it doesn't provide us with the information needed to accurately get the _total_ execution time of a particular method.

For example, if the method "Foo#bar" was called 4 times with a runtime of ~10 milliseconds we'd end up with 4 different Metric instances. If we were to then get the average/95th percentile/etc of the timings this would be roughly 10 milliseconds. However, the _actual_ total time spent in this method would be around 40 milliseconds.

To solve this problem we now create a single Metric instance per method. This Metric instance contains the _total_ real/CPU time and the call count for every instrumented method.
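A sketch of the new accumulation scheme; class and field names are illustrative:

    MethodMetric = Struct.new(:real_time, :cpu_time, :call_count)

    class Transaction
      def initialize
        @method_metrics = Hash.new { |hash, name| hash[name] = MethodMetric.new(0, 0, 0) }
      end

      # Called once per instrumented call; totals accumulate in a single
      # Metric per method, so the total time per method is exact.
      def add_method_call(name, real_ms, cpu_ms)
        metric = @method_metrics[name]
        metric.real_time  += real_ms
        metric.cpu_time   += cpu_ms
        metric.call_count += 1
      end
    end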
Paco Guzman authored

- 16 Jun, 2016 2 commits

James Lopez authored
This reverts commit 13e37a3e.
James Lopez authored

- 14 Jun, 2016 4 commits

Yorick Peterse authored
We can't do a lot with classes without names: we can't filter by them, and we have no idea where they come from. As such it's best to just ignore them.
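A sketch of the guard, with illustrative names:

    def instrument_class(klass)
      # Anonymous classes have no name to filter on, so skip them.
      return if klass.name.nil?

      # ... instrument the class's methods ...
    end

    instrument_class(Class.new) # ignored
    instrument_class(String)    # instrumented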
Paco Guzman authored
By default instrumentation will instrument public, protected and private methods, because heavy work is usually done in private methods, or at least that's what the data shows.
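A sketch of collecting methods across all three visibility levels; the helper name is illustrative:

    def methods_to_instrument(klass)
      klass.public_instance_methods(false) +
        klass.protected_instance_methods(false) +
        klass.private_instance_methods(false)
    end

    p methods_to_instrument(String).first(5)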
Paco Guzman authored
Generating tags such as

    Grape#GET /projects/:id/archive

from Grape::Route objects like

    { :path => "/:version/projects/:id/archive(.:format)",
      :version => "v3",
      :method => "GET" }

Use an instance variable to cache raw_path transformations. This variable only grows with the number of endpoints of the API, not with the number of distinct requests. We can store this cache in an instance variable because middleware is initialised only once.
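A sketch of the tag generation plus the instance-variable cache; the class name and exact path transformation are illustrative:

    class GrapeEndpointTagger
      def initialize
        # Grows with the number of API endpoints, not with the number of
        # distinct requests; safe because middleware is initialised once.
        @tag_cache = {}
      end

      def tag_for(method, raw_path)
        @tag_cache[raw_path] ||= begin
          path = raw_path.sub('(.:format)', '').sub('/:version', '')
          "Grape##{method} #{path}"
        end
      end
    end

    tagger = GrapeEndpointTagger.new
    p tagger.tag_for('GET', '/:version/projects/:id/archive(.:format)')
    # => "Grape#GET /projects/:id/archive"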
Paco Guzman authored

- 03 Jun, 2016 2 commits

James Lopez authored
This reverts commit 3e991230.
James Lopez authored
# Conflicts:
#   app/models/project.rb
- 15 May, 2016 1 commit

Pablo Carranza authored
- 12 May, 2016 1 commit

Yorick Peterse authored
Because method call timings are inclusive (that is, they include the time of any sub method calls) this would lead to the total method execution time often being far greater than the total transaction time. Because this is incredibly confusing it's best to simply _not_ track the total method execution time; after all, it's not that useful to begin with.

Fixes gitlab-org/gitlab-ce#17239
- 18 Apr, 2016 2 commits

Yorick Peterse authored
Fixes gitlab-org/gitlab-ce#15335
Yorick Peterse authored
By using Module#prepend we can define a Module containing all proxy methods. This removes the need for setting up crazy method alias chains and in turn prevents us from having to deal with all that madness (e.g. methods calling each other recursively).

Fixes gitlab-org/gitlab-ce#15281
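A self-contained sketch of the prepend mechanism; in the real code the proxy methods are generated dynamically, and the metric recording below is a stand-in:

    module InstrumentationProxy
      def expensive_work(*args)
        start = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
        super
      ensure
        elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond) - start
        puts "expensive_work took #{elapsed} ms" # stand-in for recording a metric
      end
    end

    class Worker
      # The proxy sits in front of Worker in the ancestor chain, so
      # `super` reaches the original method without alias chains.
      prepend InstrumentationProxy

      def expensive_work(milliseconds)
        sleep(milliseconds / 1000.0)
      end
    end

    Worker.new.expensive_work(50)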
- 11 Apr, 2016 1 commit

Yorick Peterse authored
This makes it easier to query, simplifies the code, and makes it possible to figure out what transaction the data belongs to (simply because it's now stored _in_ the transaction). This new setup keeps track of both the real/wall time _and_ CPU time spent in a block, both measured using milliseconds (to keep all units the same).
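A sketch of timing a block in both wall-clock and CPU milliseconds; the method name is illustrative:

    def measure_block
      real_start = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
      cpu_start  = Process.clock_gettime(Process::CLOCK_PROCESS_CPUTIME_ID, :millisecond)

      yield

      {
        duration:     Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond) - real_start,
        cpu_duration: Process.clock_gettime(Process::CLOCK_PROCESS_CPUTIME_ID, :millisecond) - cpu_start
      }
    end

    p measure_block { 100_000.times { 2**64 } }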
- 08 Apr, 2016 2 commits

Yorick Peterse authored
This allows us to track how much time of a transaction is spent in dealing with cached data.
Yorick Peterse authored
This changes the timestamp of metrics to be more accurate/unique by using Time#to_f combined with a small random jitter value. This combination hopefully reduces the number of collisions, though there's no way to fully prevent them from occurring.

Fixes gitlab-com/operations#175
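A sketch of such a jittered timestamp; the jitter width is an illustrative choice:

    def metric_timestamp
      # Time#to_f gives sub-second precision; up to a millisecond of
      # random noise makes identical timestamps for concurrent writes
      # unlikely (though not impossible).
      ((Time.now.to_f + rand * 0.001) * 1_000_000_000).to_i
    end

    p metric_timestamp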
- 25 Jan, 2016 1 commit

Yorick Peterse authored
This ensures that an instrumented method that doesn't take arguments reports an arity of 0, instead of -1. If Ruby had a proper method for finding out the required arguments of a method (e.g. Method#required_arguments) this would not have been an issue. Sadly the only two methods we have are Method#parameters and Method#arity, and both are equally painful to use.

Fixes gitlab-org/gitlab-ce#12450
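A sketch of counting required arguments with Method#parameters, which sidesteps Method#arity returning -1 for optional or splat arguments:

    def required_arguments(method)
      method.parameters.count { |type, _name| type == :req }
    end

    def no_args; end
    def two_args(a, b); end
    def splat_args(*args); end

    p required_arguments(method(:no_args))    # => 0
    p required_arguments(method(:two_args))   # => 2
    p required_arguments(method(:splat_args)) # => 0 (arity would be -1)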
- 13 Jan, 2016 2 commits

Yorick Peterse authored
Sampling data at a fixed interval means we can potentially miss data from events occurring between sampling intervals. For example, say we sample data every 15 seconds but Unicorn workers get killed after 10 seconds. In this particular case it's possible to miss interesting data as the sampler will never get to actually submitting data.

To work around this (at least for the most part) the sampling interval is randomized as follows (see the sketch below):

1. Take the user specified sampling interval (15 seconds by default)
2. Divide it by 2 (referred to as "half" below)
3. Generate a range (using a step of 0.1) from -"half" to "half"
4. Every time the sampler goes to sleep, grab the user provided interval and add a randomly chosen "adjustment" to it, while making sure we don't pick the same value twice in a row

For a specified timeout of 15 this means the actual intervals can be anywhere between 7.5 and 22.5, but the same interval can never be used twice in a row.

The rationale behind this change is that on dev.gitlab.org I'm sometimes seeing certain Gitlab::Git/Rugged objects being retained, but only for a few minutes every 24 hours. Knowing the code of GitLab and how much memory it uses/leaks, I suspect we're missing data due to workers getting terminated before the sampler can write its data to InfluxDB.
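A sketch of that randomized interval, with illustrative names:

    class Sampler
      def initialize(interval = 15)
        @interval = interval
        half = interval / 2.0
        # Possible adjustments from -half to +half, in steps of 0.1.
        @adjustments = (-half).step(half, 0.1).to_a
        @last_adjustment = nil
      end

      def sleep_interval
        loop do
          adjustment = @adjustments.sample

          # Never pick the same value twice in a row.
          next if adjustment == @last_adjustment

          @last_adjustment = adjustment
          return @interval + adjustment
        end
      end
    end

    sampler = Sampler.new
    p sampler.sleep_interval # somewhere between 7.5 and 22.5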
Yorick Peterse authored

- 12 Jan, 2016 2 commits

Yorick Peterse authored
Where a view is called from doesn't matter as much. We already know what action it belongs to, and this is more than enough information. By removing the file/line number from the list of tags we should also be able to reduce the number of series stored in InfluxDB.
Yorick Peterse authored
This gives a very rough estimate of how much memory is allocated during a transaction. This only works reliably when using a single-threaded application server and a Ruby implementation with a GIL as otherwise memory allocated by other threads might skew the statistics. Sadly there's no way around this as Ruby doesn't provide a reliable way of gathering accurate object sizes upon allocation on a per-thread basis.
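A crude sketch of the idea using process-wide GC counters (object counts as a proxy for allocated memory), which is exactly why it's only reliable on a single-threaded application server:

    class Transaction
      attr_reader :allocated_objects

      def run
        before = GC.stat(:total_allocated_objects)
        yield
      ensure
        # Process-wide counter: allocations made by other threads would
        # be attributed to this transaction too.
        @allocated_objects = GC.stat(:total_allocated_objects) - before
      end
    end

    tx = Transaction.new
    tx.run { Array.new(10_000) { 'x' } }
    p tx.allocated_objects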
- 11 Jan, 2016 1 commit

Yorick Peterse authored
Without this it's impossible to find out what methods/views/queries are executed by a certain controller or Sidekiq worker. While this will increase the total number of series, it should stay within reasonable limits because the number of "actions" is small enough.
- 07 Jan, 2016 3 commits

Yorick Peterse authored
Since filtering by these values is very rare (they're mostly just displayed as-is) we don't need to waste any index space by saving them as tags. By storing them as values we also greatly reduce the number of series in InfluxDB.
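An illustrative point layout under this scheme (the hash shape is an assumption, not the exact client API); tags stay indexed and create series, values don't:

    point = {
      series:    'rails_method_calls',
      tags:      { action: 'ProjectsController#show' },          # indexed, filterable
      values:    { method: 'Project#repository', duration: 12 }, # displayed as-is
      timestamp: (Time.now.to_f * 1_000_000_000).to_i
    }
    p point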
Yorick Peterse authored
While useful for finding out what methods/views belong to a transaction this might result in too much data being stored in InfluxDB.
Yorick Peterse authored
This reverts commit 7549102b.

Apparently I was wrong about ActiveSupport::Notifications::Event#duration returning the duration in seconds; it already returns it in milliseconds.
- 06 Jan, 2016 1 commit

Yorick Peterse authored
Transaction timings are also already stored in milliseconds; this keeps things consistent.
- 04 Jan, 2016 5 commits

Yorick Peterse authored
This ensures Rails and Sidekiq transactions are split into the series "rails_transactions" and "sidekiq_transactions" respectively.
Yorick Peterse authored
This removes the need for any tags to differentiate between Sidekiq and Rails statistics while still being able to separate the two.
Yorick Peterse authored
This makes it easier to see where time is spent without having to aggregate all the individual points in the method_calls series.
Yorick Peterse authored

Yorick Peterse authored
This will be used to store/increment the total query/view rendering timings on a per transaction basis. This in turn can greatly reduce the number of metrics stored.
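A sketch of such per-transaction counters, with illustrative names:

    class Transaction
      def initialize
        @values = Hash.new(0)
      end

      # e.g. increment(:sql_duration, 2.3) for every query executed;
      # one stored metric per transaction instead of one per query.
      def increment(name, value)
        @values[name] += value
      end
    end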
- 31 Dec, 2015 1 commit

Yorick Peterse authored
This isn't hugely useful and mostly wastes InfluxDB space. We can re-add this whenever needed (but only once we really need it).