Getting Started

Cortex can be run in two modes: as a single binary or as a set of independent microservices. This guide will help you get started with Cortex in single-binary mode using blocks storage.

Prerequisites

Cortex can be configured to use local storage or cloud storage (S3, GCS, and Azure). It can also utilize external Memcached and Redis instances for caching. This guide will focus on running Cortex as a single process with no dependencies.

Running Cortex as a Single Instance

For simplicity, we’ll start by running Cortex as a single process with no dependencies. This mode is not intended for production use.

This example uses Docker Compose to set up:

  1. An instance of SeaweedFS for S3-compatible object storage
  2. An instance of Cortex to receive metrics
  3. An instance of Prometheus to send metrics to Cortex
  4. An instance of Grafana to visualize the metrics

Instructions

Start the services

$ cd docs/getting-started
$ docker-compose up -d --wait

We can now access the following services:

  • Cortex at http://localhost:9009
  • Prometheus at http://localhost:9090
  • Grafana at http://localhost:3000
  • SeaweedFS (S3) at http://localhost:8333

If everything is working correctly, Prometheus should be scraping metrics and sending them to Cortex via remote_write. Check out the prometheus-config.yaml file to see how this is configured.
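
To give an idea of what that configuration involves, here is a minimal sketch of a remote_write section like the one in prometheus-config.yaml; the cortex hostname and tenant ID below are assumptions based on this compose setup, not copied from the file:

remote_write:
  - url: http://cortex:9009/api/v1/push   # Cortex push API endpoint
    headers:
      X-Scope-OrgID: cortex               # tenant ID used throughout this guide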

Configure SeaweedFS (S3)

# Create buckets in SeaweedFS
$ curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" -X PUT http://localhost:8333/cortex-blocks
$ curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" -X PUT http://localhost:8333/cortex-ruler
$ curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" -X PUT http://localhost:8333/cortex-alertmanager
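
These bucket names need to match the buckets referenced in the Cortex configuration. As a rough sketch (not the exact contents of the bundled config), the blocks storage section pointing at SeaweedFS might look like the following, assuming the compose service is named seaweedfs; the ruler and alertmanager storage sections would reference the other two buckets in the same way:

blocks_storage:
  backend: s3
  s3:
    endpoint: seaweedfs:8333        # S3-compatible SeaweedFS endpoint
    bucket_name: cortex-blocks      # bucket created above
    access_key_id: any
    secret_access_key: any
    insecure: true                  # plain HTTP, no TLS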

Configure Cortex Recording Rules and Alerting Rules

We can configure Cortex with cortextool to load recording rules and alerting rules. This is optional, but it is helpful to see how Cortex can be configured to manage rules and alerts.

# Configure recording rules for the cortex tenant (optional)
$ docker run --network host -v $(pwd):/workspace -w /workspace quay.io/cortexproject/cortex-tools:v0.17.0 rules sync rules.yaml alerts.yaml --id cortex --address http://localhost:9009
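
The rule files use standard Prometheus rule groups, optionally wrapped with a namespace that cortextool understands. A minimal hypothetical example of what an entry in rules.yaml could look like (the rule name and expression here are illustrative, not taken from the bundled file):

namespace: example
groups:
  - name: example-recording-rules
    rules:
      - record: node:up:count        # name of the recorded series
        expr: count(up == 1)         # number of targets currently up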

Configure Cortex Alertmanager

Cortex also comes with a multi-tenant Alertmanager. Let’s load a configuration for it so that alerts can be viewed in Grafana.

# Configure alertmanager for the cortex tenant
$ docker run --network host -v $(pwd):/workspace -w /workspace quay.io/cortexproject/cortex-tools:v0.17.0 alertmanager load alertmanager-config.yaml --id cortex --address http://localhost:9009
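
The Alertmanager configuration uses the standard upstream format. A minimal hypothetical example (not the contents of the bundled alertmanager-config.yaml) looks like this:

route:
  receiver: default        # all alerts fall through to the default receiver
receivers:
  - name: default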

You can configure Alertmanager in Grafana as well.

The recording rules and alerts that were loaded should now be visible in Grafana.

Explore

Grafana is configured to use Cortex as a data source. Grafana is also configured with Cortex dashboards for understanding the state of the Cortex instance. The dashboards are generated from the cortex-jsonnet repository. There is a Makefile in the repository that can be used to update the dashboards.
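
For reference, a Prometheus-type data source pointing at Cortex can be provisioned in Grafana roughly as follows; the service name, query path, and tenant header value are assumptions about this setup rather than the exact provisioning file:

apiVersion: 1
datasources:
  - name: Cortex
    type: prometheus
    url: http://cortex:9009/prometheus   # Cortex Prometheus-compatible query API
    jsonData:
      httpHeaderName1: X-Scope-OrgID     # send the tenant ID with every query
    secureJsonData:
      httpHeaderValue1: cortex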

# Update the dashboards (optional)
$ make

If everything is working correctly, then the metrics seen in Grafana were successfully sent from Prometheus to Cortex via remote_write!

Other things to explore:

  • Cortex - Administrative interface for Cortex
    • Try shutting down the ingester, and see how it affects metric ingestion.
    • Restart Cortex to bring the ingester back online, and see how Prometheus catches up.
    • Does it affect the querying of metrics in Grafana?
  • Prometheus - Prometheus instance that is sending metrics to Cortex
    • Try querying the metrics in Prometheus.
    • Are they the same as what you see in Cortex?
  • Grafana - Grafana instance that is visualizing the metrics.
    • Try creating a new dashboard and adding a new panel with a query to Cortex.

Clean up

$ docker-compose down

Running Cortex in microservice mode

Now that you have Cortex running as a single instance, let’s explore how to run Cortex in microservice mode.

Prerequisites

This example uses Kind to set up:

  1. A Kubernetes cluster
  2. An instance of SeaweedFS for S3-compatible object storage
  3. An instance of Cortex to receive metrics
  4. An instance of Prometheus to send metrics to Cortex
  5. An instance of Grafana to visualize the metrics

Setup Kind

$ kind create cluster

Configure Helm

$ helm repo add cortex-helm https://cortexproject.github.io/cortex-helm-chart
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

Instructions

$ cd docs/getting-started

Configure SeaweedFS (S3)

# Create a namespace
$ kubectl create namespace cortex
# We can emulate S3 with SeaweedFS
$ kubectl -n cortex apply -f seaweedfs.yaml
# Port-forward to SeaweedFS to create a bucket
$ kubectl -n cortex port-forward svc/seaweedfs 8333
# Create a bucket
$ curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" -X PUT http://localhost:8333/cortex-bucket

Setup Cortex

# Deploy Cortex using the provided values file which configures
# - blocks storage to use the seaweedfs service
$ helm upgrade --install --version=2.3.0 --namespace cortex cortex cortex-helm/cortex -f cortex-values.yaml
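
The chart accepts a config section that is passed through to Cortex, so cortex-values.yaml can point blocks storage at the in-cluster SeaweedFS service roughly like this (a sketch that assumes the SeaweedFS Service is named seaweedfs in the cortex namespace, not the exact contents of the file):

config:
  blocks_storage:
    backend: s3
    s3:
      endpoint: seaweedfs.cortex.svc.cluster.local:8333
      bucket_name: cortex-bucket     # bucket created earlier
      access_key_id: any
      secret_access_key: any
      insecure: true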

Setup Prometheus

# Deploy Prometheus to scrape metrics in the cluster and send them, via remote_write, to Cortex.
$ helm upgrade --install --version=25.20.1 --namespace cortex prometheus prometheus-community/prometheus -f prometheus-values.yaml
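
In this chart, remote_write settings live under the server values key. A hedged sketch of what prometheus-values.yaml might contain (the Cortex service name and port below are assumptions about what the Cortex chart creates):

server:
  remoteWrite:
    - url: http://cortex-distributor.cortex.svc.cluster.local:8080/api/v1/push
      headers:
        X-Scope-OrgID: cortex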

Setup Grafana

# Deploy Grafana to visualize the metrics that were sent to Cortex.
$ helm upgrade --install --version=7.3.9 --namespace cortex grafana grafana/grafana -f grafana-values.yaml

Explore

# Port-forward to Grafana to visualize
$ kubectl --namespace cortex port-forward deploy/grafana 3000

Grafana is configured to use Cortex as a data source. You can explore the data source in Grafana and query metrics. For example, the Explore page can show the rate of samples being sent to Cortex.

If everything is working correctly, then the metrics seen in Grafana were successfully sent from Prometheus to Cortex via remote_write!

Other things to explore:

# Port forward to the ingester to see the administrative interface for Cortex:
$ kubectl --namespace cortex port-forward deploy/cortex-ingester 8080
  • Cortex Ingester
    • Try shutting down the ingester and see how it affects metric ingestion.
    • Restart the ingester pod to bring the ingester back online, and see whether Prometheus is affected.
    • Does it affect the querying of metrics in Grafana? How many ingesters must be offline before it affects querying?
# Port forward to Prometheus to see the metrics that are being scraped:
$ kubectl --namespace cortex port-forward deploy/prometheus-server 9090
  • Prometheus - Prometheus instance that is sending metrics to Cortex
    • Try querying the metrics in Prometheus.
    • Are they the same as what you see in Cortex?
# Port forward to Grafana to visualize the metrics:
$ kubectl --namespace cortex port-forward deploy/grafana 3000
  • Grafana - Grafana instance that is visualizing the metrics.
    • Try creating a new dashboard and adding a new panel with a query to Cortex.

Clean up

$ kind delete cluster