Getting Started
Cortex can be run in two modes: as a single binary or as a set of independent microservices. This guide will help you get started with Cortex in single-binary mode using blocks storage.
Prerequisites
Cortex can be configured to use local storage or cloud storage (S3, GCS, and Azure). It can also use external Memcached and Redis instances for caching. This guide focuses on running Cortex as a single process with no external dependencies.
Running Cortex as a Single Instance
For simplicity, we’ll start by running Cortex as a single process with no dependencies. This mode is not intended for production use.
This example uses Docker Compose to set up:
- An instance of SeaweedFS for S3-compatible object storage
- An instance of Cortex to receive metrics
- An instance of Prometheus to send metrics to Cortex
- An instance of Grafana to visualize the metrics
Instructions
Start the services
$ cd docs/getting-started
$ docker-compose up -d --wait
We can now access the following services:
- Cortex at http://localhost:9009
- Prometheus at http://localhost:9090
- Grafana at http://localhost:3000
- SeaweedFS (S3) at http://localhost:8333
If everything is working correctly, Prometheus should be sending the metrics that it is scraping to Cortex. Prometheus is configured to send metrics to Cortex via remote_write. Check out the prometheus-config.yaml file to see how this is configured.
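As a quick sanity check, you can query Cortex’s Prometheus-compatible API directly for a series that Prometheus should have written. This sketch assumes the cortex tenant ID used throughout this guide and Cortex’s default /prometheus HTTP prefix:
# Ask Cortex for the `up` series it received via remote_write
$ curl -s -H "X-Scope-OrgID: cortex" "http://localhost:9009/prometheus/api/v1/query?query=up"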
Configure SeaweedFS (S3)
# Create buckets in SeaweedFS
$ curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" -X PUT http://localhost:8333/cortex-blocks
$ curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" -X PUT http://localhost:8333/cortex-ruler
$ curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" -X PUT http://localhost:8333/cortex-alertmanager
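To verify the buckets were created, you can list them; in the S3 API, a GET on the service root returns all buckets:
# List the buckets now present in SeaweedFS
$ curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" http://localhost:8333/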
Configure Cortex Recording Rules and Alerting Rules
We can use cortextool to load recording rules and alerting rules into Cortex. This is optional, but it is helpful for seeing how Cortex manages rules and alerts.
# Configure recording rules for the cortex tenant (optional)
$ docker run --network host -v $(pwd):/workspace -w /workspace quay.io/cortexproject/cortex-tools:v0.17.0 rules sync rules.yaml alerts.yaml --id cortex --address http://localhost:9009
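To verify the upload, you can list the rule groups stored for the tenant with cortextool’s rules list command:
# List the rule groups stored for the cortex tenant
$ docker run --network host -v $(pwd):/workspace -w /workspace quay.io/cortexproject/cortex-tools:v0.17.0 rules list --id cortex --address http://localhost:9009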
Configure Cortex Alertmanager
Cortex also comes with a multi-tenant Alertmanager. Let’s load a configuration for it so that we can view alerts in Grafana.
# Configure alertmanager for the cortex tenant
$ docker run --network host -v $(pwd):/workspace -w /workspace quay.io/cortexproject/cortex-tools:v0.17.0 alertmanager load alertmanager-config.yaml --id cortex --address http://localhost:9009
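To confirm the configuration was stored, you can fetch it back with cortextool’s alertmanager get command:
# Print the Alertmanager configuration stored for the cortex tenant
$ docker run --network host -v $(pwd):/workspace -w /workspace quay.io/cortexproject/cortex-tools:v0.17.0 alertmanager get --id cortex --address http://localhost:9009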
You can configure Alertmanager in Grafana as well.
The recording rules and alerts loaded above should now be visible in Grafana.
Explore
Grafana is configured to use Cortex as a data source and is provisioned with Cortex dashboards for understanding the state of the Cortex instance. The dashboards are generated from the cortex-jsonnet repository, and a Makefile is provided to update them.
# Update the dashboards (optional)
$ make
If everything is working correctly, then the metrics seen in Grafana were successfully sent from Prometheus to Cortex via remote_write!
Other things to explore:
- Cortex - Administrative interface for Cortex
  - Try shutting down the ingester and see how it affects metric ingestion (see the example after this list).
  - Restart Cortex to bring the ingester back online, and see how Prometheus catches up.
  - Does it affect the querying of metrics in Grafana?
- Prometheus - Prometheus instance that is sending metrics to Cortex
  - Try querying the metrics in Prometheus.
  - Are they the same as what you see in Cortex?
- Grafana - Grafana instance that is visualizing the metrics
  - Try creating a new dashboard and adding a new panel with a query to Cortex.
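For example, one way to simulate an ingester outage in this single-process setup, assuming the Compose service is named cortex as in this guide’s docker-compose file:
# Stop the single Cortex process (which includes the ingester)...
$ docker-compose stop cortex
# ...then bring it back and watch Prometheus catch up
$ docker-compose start cortex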
Clean up
$ docker-compose down
Running Cortex in Microservice Mode
Now that you have Cortex running as a single instance, let’s explore how to run Cortex in microservice mode.
Prerequisites
This example uses Kind to set up:
- A Kubernetes cluster
- An instance of SeaweedFS for S3-compatible object storage
- An instance of Cortex to receive metrics
- An instance of Prometheus to send metrics to Cortex
- An instance of Grafana to visualize the metrics
Set up Kind
$ kind create cluster
Configure Helm
$ helm repo add cortex-helm https://cortexproject.github.io/cortex-helm-chart
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
Instructions
$ cd docs/getting-started
Configure SeaweedFS (S3)
# Create a namespace
$ kubectl create namespace cortex
# We can emulate S3 with SeaweedFS
$ kubectl -n cortex apply -f seaweedfs.yaml
# Wait for SeaweedFS to be ready
$ kubectl -n cortex wait --for=condition=ready pod -l app=seaweedfs
# Port-forward to SeaweedFS to create a bucket
$ kubectl -n cortex port-forward svc/seaweedfs 8333
# Create buckets in SeaweedFS
$ curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" -X PUT http://localhost:8333/cortex-blocks
$ curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" -X PUT http://localhost:8333/cortex-ruler
$ curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" -X PUT http://localhost:8333/cortex-alertmanager
Set up Cortex
# Deploy Cortex using the provided values file which configures
# - blocks storage to use the seaweedfs service
$ helm upgrade --install --version=2.4.0 --namespace cortex cortex cortex-helm/cortex -f cortex-values.yaml
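You can watch the individual Cortex microservices start up with:
# Wait until all Cortex pods are running and ready
$ kubectl --namespace cortex get pods --watch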
Set up Prometheus
# Deploy Prometheus to scrape metrics in the cluster and send them, via remote_write, to Cortex.
$ helm upgrade --install --version=25.20.1 --namespace cortex prometheus prometheus-community/prometheus -f prometheus-values.yaml
If everything is working correctly, Prometheus should be sending the metrics that it is scraping to Cortex. Prometheus is configured to send metrics to Cortex via remote_write. Check out the prometheus-config.yaml file to see how this is configured.
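As in the single-binary example, you can sanity-check ingestion by querying Cortex’s Prometheus-compatible API through the nginx gateway installed by the chart. This sketch assumes the cortex tenant ID used throughout this guide and Cortex’s default /prometheus HTTP prefix:
# Port-forward the Cortex gateway in one terminal...
$ kubectl --namespace cortex port-forward svc/cortex-nginx 8080:80
# ...then query for the `up` series in another
$ curl -s -H "X-Scope-OrgID: cortex" "http://localhost:8080/prometheus/api/v1/query?query=up"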
Set up Grafana
# Deploy Grafana to visualize the metrics that were sent to Cortex.
$ helm upgrade --install --version=7.3.9 --namespace cortex grafana grafana/grafana -f grafana-values.yaml
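If the values file does not set an admin password, the Grafana chart generates one; assuming the release name grafana used above, you can retrieve it from the Secret the chart creates:
# Fetch the generated Grafana admin password
$ kubectl get secret --namespace cortex grafana -o jsonpath="{.data.admin-password}" | base64 --decode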
# Create dashboards for Cortex
$ for dashboard in $(ls dashboards); do
basename=$(basename -s .json $dashboard)
cmname=grafana-dashboard-$basename
# Store each dashboard JSON in its own ConfigMap
kubectl create -n cortex cm $cmname --from-file=$dashboard=dashboards/$dashboard --save-config=true -o yaml --dry-run=client | kubectl apply -f -
# Label the ConfigMap so Grafana's dashboard sidecar discovers and loads it
kubectl patch -n cortex cm $cmname -p '{"metadata":{"labels":{"grafana_dashboard":""}}}'
done
# Port-forward to Grafana to visualize
$ kubectl --namespace cortex port-forward deploy/grafana 3000
Configure Cortex Recording Rules and Alerting Rules (Optional)
We can use cortextool to load recording rules and alerting rules into Cortex. This is optional, but it is helpful for seeing how Cortex manages rules and alerts.
# Port-forward to the Cortex gateway (nginx) to configure recording rules and alerts
$ kubectl --namespace cortex port-forward svc/cortex-nginx 8080:80
# Configure recording rules for the cortex tenant
$ cortextool rules sync rules.yaml alerts.yaml --id cortex --address http://localhost:8080
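As before, you can verify the upload with cortextool’s rules list command:
# List the rule groups stored for the cortex tenant
$ cortextool rules list --id cortex --address http://localhost:8080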
Configure Cortex Alertmanager (Optional)
Cortex also comes with a multi-tenant Alertmanager. Let’s load a configuration for it so that we can view alerts in Grafana.
# Configure alertmanager for the cortex tenant
$ cortextool alertmanager load alertmanager-config.yaml --id cortex --address http://localhost:8080
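And fetch the stored configuration back to confirm it was loaded:
# Print the Alertmanager configuration stored for the cortex tenant
$ cortextool alertmanager get --id cortex --address http://localhost:8080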
You can configure Alertmanager in Grafana as well.
The recording rules and alerts loaded above should now be visible in Grafana.
Explore
Grafana is configured to use Cortex as a data source and is provisioned with Cortex dashboards for understanding the state of the Cortex instance. The dashboards are generated from the cortex-jsonnet repository, and a Makefile is provided to update them.
# Update the dashboards (optional)
$ make
If everything is working correctly, then the metrics seen in Grafana were successfully sent from Prometheus to Cortex via remote_write!
Other things to explore:
Cortex - Administrative interface for Cortex
# Port forward to the ingester to see the administrative interface for Cortex
$ kubectl --namespace cortex port-forward deploy/cortex-ingester 9009:8080
- Try shutting down the ingester and see how it affects metric ingestion (see the commands after this list).
- Restart Cortex to bring the ingester back online, and see how Prometheus catches up.
- Does it affect the querying of metrics in Grafana?
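One way to try this, using the ingester Deployment referenced above (scaling to zero simulates an outage):
# Scale the ingester down to simulate an outage...
$ kubectl --namespace cortex scale deploy/cortex-ingester --replicas=0
# ...then scale it back up and watch Prometheus catch up
$ kubectl --namespace cortex scale deploy/cortex-ingester --replicas=1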
Prometheus - Prometheus instance that is sending metrics to Cortex
# Port forward to Prometheus to see the metrics that are being scraped
$ kubectl --namespace cortex port-forward deploy/prometheus-server 9090
- Try querying the metrics in Prometheus.
- Are they the same as what you see in Cortex?
Grafana - Grafana instance that is visualizing the metrics.
# Port forward to Grafana to visualize
$ kubectl --namespace cortex port-forward deploy/grafana 3000
- Try creating a new dashboard and adding a new panel with a query to Cortex.
Clean up
$ kind delete cluster