Integration with Prometheus requires the following:
1. GitLab 9.0 or higher
1. The [Kubernetes integration must be enabled][kube] on your project
1. Your app must be deployed on [Kubernetes][]
1. Prometheus must be configured to collect Kubernetes metrics
1. Prometheus must be configured to collect one of the [supported metrics](prometheus_library/metrics.md)
1. Each metric must have a label to indicate the environment
1. GitLab must have network connectivity to the Prometheus server
There are a few steps necessary to set up integration between Prometheus and
GitLab.
## Getting started with Prometheus monitoring
Depending on your deployment and where you have located your GitLab server, there are a few options to get started with Prometheus monitoring.
* If both GitLab and your applications are installed in the same Kubernetes cluster, you can leverage the [bundled Prometheus server within GitLab](#configuring-omnibus-gitlab-prometheus-to-monitor-kubernetes).
* If your applications are deployed on Kubernetes, but GitLab is not in the same cluster, then you can [configure a Prometheus server in your Kubernetes cluster](#configuring-your-own-prometheus-server-within-kubernetes).
* If your applications are not running in Kubernetes, [get started with Prometheus](#getting-started-with-prometheus-outside-of-kubernetes).
Whichever option you choose, once a Prometheus server is available the remaining steps are the same:

1. Set up one of the [supported monitoring targets](prometheus_library/metrics.md)
1. Configure the Prometheus server to [collect their metrics](https://prometheus.io/docs/operating/configuration/#scrape_config), as sketched below
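Adding a monitoring target boils down to a `scrape_config` entry in the Prometheus server's `prometheus.yml`. The snippet below is a minimal sketch only; the job name and target address are hypothetical placeholders, and the scrape_config documentation linked above covers the full set of options.

```yaml
# Minimal sketch of a scrape job in prometheus.yml.
# The job name and target address are hypothetical placeholders.
scrape_configs:
  - job_name: 'my-application'
    scrape_interval: 15s
    static_configs:
      - targets: ['my-app.example.com:9090']   # exporter's host:port
```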
### Configuring Omnibus GitLab Prometheus to monitor Kubernetes deployments
With Omnibus GitLab running inside of Kubernetes, you can leverage the bundled
version of Prometheus to collect the supported metrics. Once enabled, Prometheus will automatically begin monitoring Kubernetes Nodes and any [annotated Pods](https://prometheus.io/docs/operating/configuration/#<kubernetes_sd_config>).
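Which annotations mark a Pod for scraping is determined by the scrape configuration in use; the sketch below assumes the common `prometheus.io/*` annotation convention, with a hypothetical Pod name, port, and image.

```yaml
# Illustration only: the prometheus.io/* annotations follow the common
# convention for opting Pods in to scraping; verify them against the
# scrape configuration actually deployed.
apiVersion: v1
kind: Pod
metadata:
  name: my-app                       # hypothetical Pod name
  annotations:
    prometheus.io/scrape: "true"     # opt this Pod in to scraping
    prometheus.io/port: "8080"       # port serving the metrics endpoint
    prometheus.io/path: "/metrics"   # metrics path, if not the default
spec:
  containers:
    - name: my-app
      image: example/my-app:latest   # hypothetical image
      ports:
        - containerPort: 8080
```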
Once deployed, you should see the Prometheus service, deployment, and
pod start within the `prometheus` namespace. The server will begin to collect
metrics from each Kubernetes Node in the cluster, based on the configuration
provided in the template. It will also attempt to collect metrics from any Kubernetes Pods that have been [annotated for Prometheus](https://prometheus.io/docs/operating/configuration/#pod).
Since GitLab is not running within Kubernetes, the template provides external
network access via a `NodePort` running on `30090`. This method allows access
Developers can view the performance impact of their changes within the merge
request workflow. When a source branch has been deployed to an environment, a sparkline and numeric comparison of the average memory consumption will appear. On the sparkline, a dot
indicates when the current changes were deployed, with up to 30 minutes of
performance data displayed before and after. The comparison shows the difference between the 30 minute average before and after the deployment. This information is updated after
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/12621) in GitLab 9.4
GitLab has support for automatically detecting and monitoring AWS resources, starting with the [Elastic Load Balancer](https://aws.amazon.com/elasticloadbalancing/). This is provided by leveraging the official [Cloudwatch exporter](https://github.com/prometheus/cloudwatch_exporter), which translates [Cloudwatch metrics](https://aws.amazon.com/cloudwatch/) into a Prometheus readable form.
## Configuring Prometheus to monitor for Cloudwatch metrics
To get started with Cloudwatch monitoring, you should install and configure the [Cloudwatch exporter](https://github.com/prometheus/cloudwatch_exporter), which retrieves and parses the specified Cloudwatch metrics and translates them into a Prometheus monitoring endpoint.
Right now, the only AWS resource supported is the Elastic Load Balancer, whose Cloudwatch metrics can be found [here](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-cloudwatch-metrics.html).
A sample Cloudwatch Exporter configuration file, configured for basic AWS ELB monitoring, is [available for download](../samples/cloudwatch.yml).
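To give a sense of what that file contains, the sketch below shows the general shape of a Cloudwatch exporter configuration for classic ELB metrics. The region, metric names, dimensions, and statistics are illustrative assumptions; use the downloadable sample and the exporter's README as the authoritative reference.

```yaml
# Illustrative sketch of a Cloudwatch exporter configuration.
# Region, metrics, dimensions, and statistics are assumptions; see the
# sample file and the exporter README for the authoritative schema.
region: us-east-1
metrics:
  - aws_namespace: AWS/ELB
    aws_metric_name: RequestCount
    aws_dimensions: [AvailabilityZone, LoadBalancerName]
    aws_statistics: [Sum]
  - aws_namespace: AWS/ELB
    aws_metric_name: Latency
    aws_dimensions: [AvailabilityZone, LoadBalancerName]
    aws_statistics: [Average]
```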
## Specifying the Environment label
In order to isolate and only display relevant metrics for a given environment, GitLab needs a method to detect which labels are associated. To do this, GitLab will [look for an `environment` label](metrics.md#identifying-environments).
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/8935) in GitLab 9.0
GitLab has support for automatically detecting and monitoring Kubernetes metrics. Kubernetes exposes Node level metrics out of the box via the built-in [Prometheus metrics support in cAdvisor](https://github.com/google/cadvisor). No additional services or exporters are needed.
| Name | Query |
| ---- | ----- |
| Average CPU Utilization (%) | sum(rate(container_cpu_usage_seconds_total{container_name!="POD",%{environment_filter}}[2m])) / count(container_cpu_usage_seconds_total{container_name!="POD",%{environment_filter}}) * 100 |
## Configuring Prometheus to monitor for Kubernetes node metrics
In order for Prometheus to collect Kubernetes metrics, you must first have a
Prometheus server up and running. You have two options here:
- If you have an Omnibus based GitLab installation within your Kubernetes cluster, you can leverage the bundled Prometheus server to [monitor Kubernetes](../../../../administration/monitoring/prometheus/index.md#configuring-prometheus-to-monitor-kubernetes).
- To configure your own Prometheus server, you can follow the [Prometheus documentation](https://prometheus.io/docs/introduction/overview/) or [our guide](../../../../administration/monitoring/prometheus/index.md#configuring-your-own-prometheus-server-within-kubernetes).
## Specifying the Environment label
In order to isolate and only display relevant metrics for a given environment, GitLab needs a method to detect which labels are associated. To do this, GitLab will [look for an `environment` label](metrics.md#identifying-environments).
If you are using [GitLab Auto-Deploy][autodeploy] and one of the two [provided Kubernetes monitoring solutions](../prometheus.md#getting-started-with-prometheus-monitoring), the `environment` label will be automatically added.
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/8935) in GitLab 9.0
GitLab offers automatic detection of select [Prometheus exporters](https://prometheus.io/docs/instrumenting/exporters/). Currently supported exporters are:
* [Kubernetes](kubernetes.md)
* [NGINX](nginx.md)
* [Amazon CloudWatch](cloudwatch.md)
We have tried to surface the most important metrics for each exporter, and will be continuing to add support for additional exporters in future releases. If you would like to add support for other official exporters, [contributions](#adding-to-the-library) are welcome.
## Identifying Environments
GitLab retrieves performance data from the configured Prometheus server, and attempts to identify the presence of known metrics. Once identified, GitLab then needs to be able to map the data to a particular environment.
In order to isolate and only display relevant metrics for a given environment, GitLab needs a method to detect which labels are associated. To do that,
GitLab will look for the required metrics which have a label that
matches the [$CI_ENVIRONMENT_SLUG][ci-environment-slug].
For example, if you are deploying to an environment named `production`, there must be a label for the metric with the value of `production`.
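If nothing in your deployment pipeline is adding that label for you, one place it can be attached is the Prometheus scrape configuration itself. The snippet below is a hedged sketch: it uses a hypothetical job and target, and statically stamps `environment: production` onto every metric scraped from that target.

```yaml
# Sketch only: statically attaches environment="production" to every
# metric scraped from this hypothetical target. The value must match
# the environment's $CI_ENVIRONMENT_SLUG.
scrape_configs:
  - job_name: 'production-app'
    static_configs:
      - targets: ['my-app.example.com:9090']
        labels:
          environment: production
```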
## Adding to the library
We strive to support the 2-4 most important metrics for each common system service that supports Prometheus. If you are looking for support for a particular exporter which has not yet been added to the library, additions can be made [to the `additional_metrics.yml`](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/config/prometheus/additional_metrics.yml) file.
> Note: The library is only for monitoring public, common, system services which all customers can benefit from. Support for monitoring [customer proprietary metrics](https://gitlab.com/gitlab-org/gitlab-ee/issues/2273) will be added in a subsequent release.
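For orientation, an entry in `additional_metrics.yml` looks roughly like the sketch below. The group name, metric title, and query are illustrative assumptions, and the exact keys may differ; copy the structure from the existing entries in the file rather than from this sketch.

```yaml
# Illustrative sketch only: mirror the structure of the existing
# entries in config/prometheus/additional_metrics.yml, not this one.
- group: My Service metrics              # hypothetical group name
  priority: 5
  metrics:
    - title: "Requests per second"       # hypothetical metric title
      y_label: "Requests / sec"
      required_metrics:
        - my_service_requests_total      # metric the exporter must expose
      weight: 1
      queries:
        - query_range: 'rate(my_service_requests_total{%{environment_filter}}[2m])'
          label: Total
          unit: req / sec
```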
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/12621) in GitLab 9.4
GitLab has support for automatically detecting and monitoring NGINX. This is provided by leveraging the [NGINX VTS exporter](https://github.com/hnlq715/nginx-vts-exporter), which translates [VTS statistics](https://github.com/vozlt/nginx-module-vts) into a Prometheus readable form.
## Configuring Prometheus to monitor for NGINX metrics
To get started with NGINX monitoring, you should first enable the [VTS statistics](https://github.com/vozlt/nginx-module-vts) module for your NGINX server. This will capture and display statistics in an HTML readable form. Next, you should install and configure the [NGINX VTS exporter](https://github.com/hnlq715/nginx-vts-exporter) which parses these statistics and translates them into a Prometheus monitoring endpoint.
If you are using NGINX as your Kubernetes ingress, there is [upcoming direct support](https://github.com/kubernetes/ingress/pull/423) for enabling Prometheus monitoring in the 0.9.0 release.
## Specifying the Environment label
In order to isolate and only display relevant metrics for a given environment, GitLab needs a method to detect which labels are associated. To do this, GitLab will [look for an `environment` label](metrics.md#identifying-environments).