Querier

The querier service handles queries using the PromQL query language. This document dives into the storage-specific details of the querier service. The general architecture documentation applies too.

The querier is stateless.

How it works

The querier needs an almost up-to-date view over the entire storage bucket, in order to find the right blocks to look up at query time. The querier can keep the bucket view updated in two different ways:

  1. Periodically scanning the bucket (default)
  2. Periodically downloading the bucket index

Bucket index disabled (default)

At startup, queriers iterate over the entire storage bucket to discover all tenants' blocks and download the meta.json for each block. During this initial bucket scanning phase, a querier is not yet ready to handle incoming queries and its /ready readiness probe endpoint will fail.

While running, queriers periodically iterate over the storage bucket to discover new tenants and recently uploaded blocks. Queriers do not download any content from blocks except a small meta.json file containing the block’s metadata (including the minimum and maximum timestamp of samples within the block).

Queriers use the metadata to compute the list of blocks that need to be queried at query time and fetch matching series from the store-gateway instances holding the required blocks.

Bucket index enabled

When the bucket index is enabled, queriers lazily download the bucket index upon the first query received for a given tenant, cache it in memory and periodically keep it updated. The bucket index contains the list of blocks and block deletion marks of a tenant, and is later used during query execution to find the set of blocks that need to be queried for the given query.
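
For reference, below is a minimal sketch of how the bucket index could be enabled in the blocks storage configuration. The bucket_index option shown here is an assumption based on the -blocks-storage.bucket-store.bucket-index.enabled CLI flag and may differ between Cortex versions.

blocks_storage:
  bucket_store:
    bucket_index:
      # Assumed option name; when true, queriers read the per-tenant
      # bucket-index.json.gz instead of scanning the bucket.
      enabled: true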

Since the bucket index removes the need to scan the bucket, it brings a few benefits:

  1. The querier is expected to be ready shortly after startup.
  2. Lower volume of API calls to object storage.

Anatomy of a query request

When a querier receives a query range request, the request contains the following parameters:

  • query: the PromQL query expression itself (e.g. rate(node_cpu_seconds_total[1m]))
  • start: the start time
  • end: the end time
  • step: the query resolution (e.g. 30 to have 1 resulting data point every 30s)

Given a query, the querier analyzes the start and end time range to compute the list of all known blocks containing at least 1 sample within this time range. It then computes the list of store-gateway instances holding these blocks and sends a request to each matching store-gateway instance, asking it to fetch all the samples for the series matching the query within the start and end time range.

The request sent to each store-gateway contains the list of block IDs that are expected to be queried, and the response sent back by the store-gateway to the querier contains the list of block IDs that were actually queried. This list may be a subset of the requested blocks, for example due to a recent blocks resharding event (i.e. within the last few seconds). The querier runs a consistency check on the responses received from the store-gateways to ensure all expected blocks have been queried; if not, the querier retries fetching samples for the missing blocks from different store-gateways (if -store-gateway.sharding-ring.replication-factor is greater than 1), and if the consistency check still fails after all retries, the query execution fails as well (correctness is always guaranteed).
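
As an illustration, a store-gateway replication factor greater than 1 is what gives the querier alternative instances to retry against when the consistency check detects missing blocks. The store_gateway block in this sketch is inferred from the -store-gateway.sharding-ring.replication-factor flag mentioned above and is an assumption, not a complete configuration.

store_gateway:
  sharding_ring:
    # Each block is replicated across 3 store-gateway instances, so the
    # querier can retry missing blocks on a different instance.
    replication_factor: 3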

If the query time range covers a period within the -querier.query-ingesters-within duration, the querier also sends the request to all ingesters, in order to fetch samples that have not been uploaded to the long-term storage yet.
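
For example, -querier.query-ingesters-within is typically configured together with -querier.query-store-after (both documented in the querier_config block below) so that every point in time is covered by ingesters, long-term storage, or both. The values in this sketch are illustrative, not a recommendation:

querier:
  # Query the long-term storage only for data older than 12h...
  query_store_after: 12h
  # ...and keep querying ingesters for anything more recent than 13h,
  # leaving a 1h overlap while blocks are shipped and discovered.
  query_ingesters_within: 13h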

Once all samples have been fetched from both store-gateways and ingesters, the querier runs the PromQL engine to execute the query and sends the result back to the client.

How queriers connect to store-gateway

Queriers need to discover store-gateways in order to connect to them at query time. The service discovery mechanism used depends on whether blocks sharding is enabled in the store-gateways.

When blocks sharding is enabled, queriers need access to the store-gateways' hash ring and thus need to be configured with the same -store-gateway.sharding-ring.* flags (or their respective YAML config options) that the store-gateways have been configured with.

When blocks sharding is disabled, queriers need the -querier.store-gateway-addresses CLI flag (or its respective YAML config option) set to a comma-separated list of store-gateway addresses in DNS Service Discovery format. Queriers evenly balance block query requests across the resolved addresses.
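
A sketch of the sharding-disabled case follows; the hostname is illustrative, and the dns+/dnssrv+/dnssrvnoa+ prefixes are the same DNS Service Discovery prefixes described for the cache addresses later in this document.

querier:
  # Only needed when store-gateway blocks sharding is disabled. The SRV
  # record below is an example; adjust it to your environment.
  store_gateway_addresses: "dnssrv+_grpc._tcp.store-gateway.cortex.svc.cluster.local"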

Caching

The querier supports the following caches:

  • Metadata cache

Caching is optional, but highly recommended in a production environment. Please also check out the production tips for more information about configuring the cache.

Metadata cache

Store-gateway and querier can use memcached or redis for caching bucket metadata:

  • List of tenants
  • List of blocks per tenant
  • Block’s meta.json content
  • Block’s deletion-mark.json existence and content
  • Tenant’s bucket-index.json.gz content

Using the metadata cache can significantly reduce the number of API calls to object storage and prevents these API calls from scaling linearly with the number of querier and store-gateway instances (because the bucket is periodically scanned and synced by each querier and store-gateway).

To enable the metadata cache, please set -blocks-storage.bucket-store.metadata-cache.backend. Both memcached and redis backends are supported (see the metadata_cache block below). The memcached client has additional configuration available via flags with the -blocks-storage.bucket-store.metadata-cache.memcached.* prefix.

Additional options for configuring the metadata cache use the -blocks-storage.bucket-store.metadata-cache.* prefix. Setting a TTL to zero or a negative value disables caching of the given item type.

The same memcached backend cluster should be shared between store-gateways and queriers.
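
A minimal sketch enabling the metadata cache with the memcached backend, using the options described above (the memcached address is illustrative):

blocks_storage:
  bucket_store:
    metadata_cache:
      backend: memcached
      memcached:
        # Example address in DNS Service Discovery format; point this at the
        # memcached cluster shared by store-gateways and queriers.
        addresses: "dns+memcached-metadata.cortex.svc.cluster.local:11211"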

Querier configuration

This section describes the querier configuration. For the general Cortex configuration and references to common config blocks, please refer to the configuration documentation.

querier_config

The querier_config configures the Cortex querier.

querier:
  # The maximum number of concurrent queries.
  # CLI flag: -querier.max-concurrent
  [max_concurrent: <int> | default = 20]

  # The timeout for a query.
  # CLI flag: -querier.timeout
  [timeout: <duration> | default = 2m]

  # Deprecated (This feature will always be on after v1.18): Use streaming RPCs
  # for metadata APIs from ingester.
  # CLI flag: -querier.ingester-metadata-streaming
  [ingester_metadata_streaming: <boolean> | default = true]

  # Use LabelNames ingester RPCs with match params.
  # CLI flag: -querier.ingester-label-names-with-matchers
  [ingester_label_names_with_matchers: <boolean> | default = false]

  # Maximum number of samples a single query can load into memory.
  # CLI flag: -querier.max-samples
  [max_samples: <int> | default = 50000000]

  # Maximum lookback beyond which queries are not sent to ingester. 0 means all
  # queries are sent to ingester.
  # CLI flag: -querier.query-ingesters-within
  [query_ingesters_within: <duration> | default = 0s]

  # Enable returning samples stats per steps in query response.
  # CLI flag: -querier.per-step-stats-enabled
  [per_step_stats_enabled: <boolean> | default = false]

  # Use compression for metrics query API or instant and range query APIs.
  # Supports 'gzip' and '' (disable compression)
  # CLI flag: -querier.response-compression
  [response_compression: <string> | default = "gzip"]

  # The time after which a metric should be queried from storage and not just
  # ingesters. 0 means all queries are sent to store. When running the blocks
  # storage, if this option is enabled, the time range of the query sent to the
  # store will be manipulated to ensure the query end is not more recent than
  # 'now - query-store-after'.
  # CLI flag: -querier.query-store-after
  [query_store_after: <duration> | default = 0s]

  # Maximum duration into the future you can query. 0 to disable.
  # CLI flag: -querier.max-query-into-future
  [max_query_into_future: <duration> | default = 10m]

  # The default evaluation interval or step size for subqueries.
  # CLI flag: -querier.default-evaluation-interval
  [default_evaluation_interval: <duration> | default = 1m]

  # Max number of steps allowed for every subquery expression in query. Number
  # of steps is calculated using subquery range / step. A value > 0 enables it.
  # CLI flag: -querier.max-subquery-steps
  [max_subquery_steps: <int> | default = 0]

  # The active query tracker monitors active queries and writes them to a file
  # in the given directory. If Cortex discovers any queries in this log during
  # startup, it will log them to the log file. Setting this to an empty value
  # disables the active query tracker, which also disables the
  # -querier.max-concurrent option.
  # CLI flag: -querier.active-query-tracker-dir
  [active_query_tracker_dir: <string> | default = "./active-query-tracker"]

  # Time since the last sample after which a time series is considered stale and
  # ignored by expression evaluations.
  # CLI flag: -querier.lookback-delta
  [lookback_delta: <duration> | default = 5m]

  # Comma separated list of store-gateway addresses in DNS Service Discovery
  # format. This option should be set when using the blocks storage and the
  # store-gateway sharding is disabled (when enabled, the store-gateway
  # instances form a ring and addresses are picked from the ring).
  # CLI flag: -querier.store-gateway-addresses
  [store_gateway_addresses: <string> | default = ""]

  store_gateway_client:
    # Enable TLS for gRPC client connecting to store-gateway.
    # CLI flag: -querier.store-gateway-client.tls-enabled
    [tls_enabled: <boolean> | default = false]

    # Path to the client certificate file, which will be used for authenticating
    # with the server. Also requires the key path to be configured.
    # CLI flag: -querier.store-gateway-client.tls-cert-path
    [tls_cert_path: <string> | default = ""]

    # Path to the key file for the client certificate. Also requires the client
    # certificate to be configured.
    # CLI flag: -querier.store-gateway-client.tls-key-path
    [tls_key_path: <string> | default = ""]

    # Path to the CA certificates file to validate server certificate against.
    # If not set, the host's root CA certificates are used.
    # CLI flag: -querier.store-gateway-client.tls-ca-path
    [tls_ca_path: <string> | default = ""]

    # Override the expected name on the server certificate.
    # CLI flag: -querier.store-gateway-client.tls-server-name
    [tls_server_name: <string> | default = ""]

    # Skip validating server certificate.
    # CLI flag: -querier.store-gateway-client.tls-insecure-skip-verify
    [tls_insecure_skip_verify: <boolean> | default = false]

    # Use compression when sending messages. Supported values are: 'gzip',
    # 'snappy' and '' (disable compression)
    # CLI flag: -querier.store-gateway-client.grpc-compression
    [grpc_compression: <string> | default = ""]

    # EXPERIMENTAL: If enabled, gRPC clients perform health checks for each
    # target and fail the request if the target is marked as unhealthy.
    healthcheck_config:
      # The number of consecutive failed health checks required before
      # considering a target unhealthy. 0 means disabled.
      # CLI flag: -querier.store-gateway-client.unhealthy-threshold
      [unhealthy_threshold: <int> | default = 0]

      # The approximate amount of time between health checks of an individual
      # target.
      # CLI flag: -querier.store-gateway-client.interval
      [interval: <duration> | default = 5s]

      # The amount of time during which no response from a target means a failed
      # health check.
      # CLI flag: -querier.store-gateway-client.timeout
      [timeout: <duration> | default = 1s]

  # If enabled, store gateway query stats will be logged using `info` log level.
  # CLI flag: -querier.store-gateway-query-stats-enabled
  [store_gateway_query_stats: <boolean> | default = true]

  # When distributor's sharding strategy is shuffle-sharding and this setting is
  # > 0, queriers fetch in-memory series from the minimum set of required
  # ingesters, selecting only ingesters which may have received series since
  # 'now - lookback period'. The lookback period should be greater than or equal
  # to the configured 'query store after' and 'query ingesters within'. If this
  # setting is 0, queriers always query all ingesters (ingesters shuffle
  # sharding on read path is disabled).
  # CLI flag: -querier.shuffle-sharding-ingesters-lookback-period
  [shuffle_sharding_ingesters_lookback_period: <duration> | default = 0s]

  # Experimental. Use Thanos promql engine
  # https://github.com/thanos-io/promql-engine rather than the Prometheus promql
  # engine.
  # CLI flag: -querier.thanos-engine
  [thanos_engine: <boolean> | default = false]

  # If enabled, ignore the max query length check at the Querier select method.
  # Users can choose to ignore it since the validation can be done before
  # Querier evaluation, e.g. at the Query Frontend or Ruler.
  # CLI flag: -querier.ignore-max-query-length
  [ignore_max_query_length: <boolean> | default = false]

blocks_storage_config

The blocks_storage_config configures the blocks storage.

blocks_storage:
  # Backend storage to use. Supported backends are: s3, gcs, azure, swift,
  # filesystem.
  # CLI flag: -blocks-storage.backend
  [backend: <string> | default = "s3"]

  s3:
    # The S3 bucket endpoint. It could be an AWS S3 endpoint listed at
    # https://docs.aws.amazon.com/general/latest/gr/s3.html or the address of an
    # S3-compatible service in hostname:port format.
    # CLI flag: -blocks-storage.s3.endpoint
    [endpoint: <string> | default = ""]

    # S3 region. If unset, the client will issue an S3 GetBucketLocation API
    # call to autodetect it.
    # CLI flag: -blocks-storage.s3.region
    [region: <string> | default = ""]

    # S3 bucket name
    # CLI flag: -blocks-storage.s3.bucket-name
    [bucket_name: <string> | default = ""]

    # S3 secret access key
    # CLI flag: -blocks-storage.s3.secret-access-key
    [secret_access_key: <string> | default = ""]

    # S3 access key ID
    # CLI flag: -blocks-storage.s3.access-key-id
    [access_key_id: <string> | default = ""]

    # If enabled, use http:// for the S3 endpoint instead of https://. This
    # could be useful in local dev/test environments while using an
    # S3-compatible backend storage, like Minio.
    # CLI flag: -blocks-storage.s3.insecure
    [insecure: <boolean> | default = false]

    # The signature version to use for authenticating against S3. Supported
    # values are: v4, v2.
    # CLI flag: -blocks-storage.s3.signature-version
    [signature_version: <string> | default = "v4"]

    # The s3 bucket lookup style. Supported values are: auto, virtual-hosted,
    # path.
    # CLI flag: -blocks-storage.s3.bucket-lookup-type
    [bucket_lookup_type: <string> | default = "auto"]

    # If true, attach an MD5 checksum when uploading objects, and S3 uses the
    # MD5 checksum algorithm to verify the provided digest. If false, the CRC32C
    # algorithm is used instead.
    # CLI flag: -blocks-storage.s3.send-content-md5
    [send_content_md5: <boolean> | default = true]

    # The s3_sse_config configures the S3 server-side encryption.
    # The CLI flags prefix for this block config is: blocks-storage
    [sse: <s3_sse_config>]

    http:
      # The time an idle connection will remain idle before closing.
      # CLI flag: -blocks-storage.s3.http.idle-conn-timeout
      [idle_conn_timeout: <duration> | default = 1m30s]

      # The amount of time the client will wait for a server's response headers.
      # CLI flag: -blocks-storage.s3.http.response-header-timeout
      [response_header_timeout: <duration> | default = 2m]

      # If the client connects via HTTPS and this option is enabled, the client
      # will accept any certificate and hostname.
      # CLI flag: -blocks-storage.s3.http.insecure-skip-verify
      [insecure_skip_verify: <boolean> | default = false]

      # Maximum time to wait for a TLS handshake. 0 means no limit.
      # CLI flag: -blocks-storage.s3.tls-handshake-timeout
      [tls_handshake_timeout: <duration> | default = 10s]

      # The time to wait for a server's first response headers after fully
      # writing the request headers if the request has an Expect header. 0 to
      # send the request body immediately.
      # CLI flag: -blocks-storage.s3.expect-continue-timeout
      [expect_continue_timeout: <duration> | default = 1s]

      # Maximum number of idle (keep-alive) connections across all hosts. 0
      # means no limit.
      # CLI flag: -blocks-storage.s3.max-idle-connections
      [max_idle_connections: <int> | default = 100]

      # Maximum number of idle (keep-alive) connections to keep per-host. If 0,
      # a built-in default value is used.
      # CLI flag: -blocks-storage.s3.max-idle-connections-per-host
      [max_idle_connections_per_host: <int> | default = 100]

      # Maximum number of connections per host. 0 means no limit.
      # CLI flag: -blocks-storage.s3.max-connections-per-host
      [max_connections_per_host: <int> | default = 0]

  gcs:
    # GCS bucket name
    # CLI flag: -blocks-storage.gcs.bucket-name
    [bucket_name: <string> | default = ""]

    # JSON representing either a Google Developers Console
    # client_credentials.json file or a Google Developers service account key
    # file. If empty, fallback to Google default logic.
    # CLI flag: -blocks-storage.gcs.service-account
    [service_account: <string> | default = ""]

  azure:
    # Azure storage account name
    # CLI flag: -blocks-storage.azure.account-name
    [account_name: <string> | default = ""]

    # Azure storage account key
    # CLI flag: -blocks-storage.azure.account-key
    [account_key: <string> | default = ""]

    # The values of `account-name` and `endpoint-suffix` will not be used if
    # `connection-string` is set. Use this method over `account-key` if you need
    # to authenticate via a SAS token or if you use the Azurite emulator.
    # CLI flag: -blocks-storage.azure.connection-string
    [connection_string: <string> | default = ""]

    # Azure storage container name
    # CLI flag: -blocks-storage.azure.container-name
    [container_name: <string> | default = ""]

    # Azure storage endpoint suffix without schema. The account name will be
    # prefixed to this value to create the FQDN
    # CLI flag: -blocks-storage.azure.endpoint-suffix
    [endpoint_suffix: <string> | default = ""]

    # Number of retries for recoverable errors
    # CLI flag: -blocks-storage.azure.max-retries
    [max_retries: <int> | default = 20]

    # Deprecated: Azure storage MSI resource. It will be set automatically by
    # Azure SDK.
    # CLI flag: -blocks-storage.azure.msi-resource
    [msi_resource: <string> | default = ""]

    # Azure storage MSI resource managed identity client ID. If not supplied,
    # the default Azure credential will be used. Set it to empty if you need to
    # authenticate via Azure Workload Identity.
    # CLI flag: -blocks-storage.azure.user-assigned-id
    [user_assigned_id: <string> | default = ""]

    http:
      # The time an idle connection will remain idle before closing.
      # CLI flag: -blocks-storage.azure.http.idle-conn-timeout
      [idle_conn_timeout: <duration> | default = 1m30s]

      # The amount of time the client will wait for a server's response headers.
      # CLI flag: -blocks-storage.azure.http.response-header-timeout
      [response_header_timeout: <duration> | default = 2m]

      # If the client connects via HTTPS and this option is enabled, the client
      # will accept any certificate and hostname.
      # CLI flag: -blocks-storage.azure.http.insecure-skip-verify
      [insecure_skip_verify: <boolean> | default = false]

      # Maximum time to wait for a TLS handshake. 0 means no limit.
      # CLI flag: -blocks-storage.azure.tls-handshake-timeout
      [tls_handshake_timeout: <duration> | default = 10s]

      # The time to wait for a server's first response headers after fully
      # writing the request headers if the request has an Expect header. 0 to
      # send the request body immediately.
      # CLI flag: -blocks-storage.azure.expect-continue-timeout
      [expect_continue_timeout: <duration> | default = 1s]

      # Maximum number of idle (keep-alive) connections across all hosts. 0
      # means no limit.
      # CLI flag: -blocks-storage.azure.max-idle-connections
      [max_idle_connections: <int> | default = 100]

      # Maximum number of idle (keep-alive) connections to keep per-host. If 0,
      # a built-in default value is used.
      # CLI flag: -blocks-storage.azure.max-idle-connections-per-host
      [max_idle_connections_per_host: <int> | default = 100]

      # Maximum number of connections per host. 0 means no limit.
      # CLI flag: -blocks-storage.azure.max-connections-per-host
      [max_connections_per_host: <int> | default = 0]

  swift:
    # OpenStack Swift authentication API version. 0 to autodetect.
    # CLI flag: -blocks-storage.swift.auth-version
    [auth_version: <int> | default = 0]

    # OpenStack Swift authentication URL
    # CLI flag: -blocks-storage.swift.auth-url
    [auth_url: <string> | default = ""]

    # OpenStack Swift username.
    # CLI flag: -blocks-storage.swift.username
    [username: <string> | default = ""]

    # OpenStack Swift user's domain name.
    # CLI flag: -blocks-storage.swift.user-domain-name
    [user_domain_name: <string> | default = ""]

    # OpenStack Swift user's domain ID.
    # CLI flag: -blocks-storage.swift.user-domain-id
    [user_domain_id: <string> | default = ""]

    # OpenStack Swift user ID.
    # CLI flag: -blocks-storage.swift.user-id
    [user_id: <string> | default = ""]

    # OpenStack Swift API key.
    # CLI flag: -blocks-storage.swift.password
    [password: <string> | default = ""]

    # OpenStack Swift user's domain ID.
    # CLI flag: -blocks-storage.swift.domain-id
    [domain_id: <string> | default = ""]

    # OpenStack Swift user's domain name.
    # CLI flag: -blocks-storage.swift.domain-name
    [domain_name: <string> | default = ""]

    # OpenStack Swift project ID (v2,v3 auth only).
    # CLI flag: -blocks-storage.swift.project-id
    [project_id: <string> | default = ""]

    # OpenStack Swift project name (v2,v3 auth only).
    # CLI flag: -blocks-storage.swift.project-name
    [project_name: <string> | default = ""]

    # ID of the OpenStack Swift project's domain (v3 auth only), only needed if
    # it differs from the user domain.
    # CLI flag: -blocks-storage.swift.project-domain-id
    [project_domain_id: <string> | default = ""]

    # Name of the OpenStack Swift project's domain (v3 auth only), only needed
    # if it differs from the user domain.
    # CLI flag: -blocks-storage.swift.project-domain-name
    [project_domain_name: <string> | default = ""]

    # OpenStack Swift Region to use (v2,v3 auth only).
    # CLI flag: -blocks-storage.swift.region-name
    [region_name: <string> | default = ""]

    # Name of the OpenStack Swift container to put chunks in.
    # CLI flag: -blocks-storage.swift.container-name
    [container_name: <string> | default = ""]

    # Max retries on requests error.
    # CLI flag: -blocks-storage.swift.max-retries
    [max_retries: <int> | default = 3]

    # Time after which a connection attempt is aborted.
    # CLI flag: -blocks-storage.swift.connect-timeout
    [connect_timeout: <duration> | default = 10s]

    # Time after which an idle request is aborted. The timeout watchdog is reset
    # each time some data is received, so the timeout only triggers when no data
    # has been received on a request for this duration.
    # CLI flag: -blocks-storage.swift.request-timeout
    [request_timeout: <duration> | default = 5s]

  filesystem:
    # Local filesystem storage directory.
    # CLI flag: -blocks-storage.filesystem.dir
    [dir: <string> | default = ""]

  # This configures how the querier and store-gateway discover and synchronize
  # blocks stored in the bucket.
  bucket_store:
    # Directory to store synchronized TSDB index headers.
    # CLI flag: -blocks-storage.bucket-store.sync-dir
    [sync_dir: <string> | default = "tsdb-sync"]

    # How frequently to scan the bucket, or to refresh the bucket index (if
    # enabled), in order to look for changes (new blocks shipped by ingesters
    # and blocks deleted by retention or compaction).
    # CLI flag: -blocks-storage.bucket-store.sync-interval
    [sync_interval: <duration> | default = 15m]

    # Max number of concurrent queries to execute against the long-term storage.
    # The limit is shared across all tenants.
    # CLI flag: -blocks-storage.bucket-store.max-concurrent
    [max_concurrent: <int> | default = 100]

    # Max number of inflight queries to execute against the long-term storage.
    # The limit is shared across all tenants. 0 to disable.
    # CLI flag: -blocks-storage.bucket-store.max-inflight-requests
    [max_inflight_requests: <int> | default = 0]

    # Maximum number of concurrent tenants syncing blocks.
    # CLI flag: -blocks-storage.bucket-store.tenant-sync-concurrency
    [tenant_sync_concurrency: <int> | default = 10]

    # Maximum number of concurrent blocks syncing per tenant.
    # CLI flag: -blocks-storage.bucket-store.block-sync-concurrency
    [block_sync_concurrency: <int> | default = 20]

    # Number of Go routines to use when syncing block meta files from object
    # storage per tenant.
    # CLI flag: -blocks-storage.bucket-store.meta-sync-concurrency
    [meta_sync_concurrency: <int> | default = 20]

    # Minimum age of a block before it is read. Set it to a safe value (e.g.
    # 30m) if your object storage is eventually consistent. GCS and S3 are
    # (roughly) strongly consistent.
    # CLI flag: -blocks-storage.bucket-store.consistency-delay
    [consistency_delay: <duration> | default = 0s]

    index_cache:
      # The index cache backend type. Multiple cache backends can be provided as
      # a comma-separated ordered list to enable a cache hierarchy. Supported
      # values: inmemory, memcached, redis.
      # CLI flag: -blocks-storage.bucket-store.index-cache.backend
      [backend: <string> | default = "inmemory"]

      inmemory:
        # Maximum size in bytes of in-memory index cache used to speed up blocks
        # index lookups (shared between all tenants).
        # CLI flag: -blocks-storage.bucket-store.index-cache.inmemory.max-size-bytes
        [max_size_bytes: <int> | default = 1073741824]

        # Selectively cache index item types. Supported values are Postings,
        # ExpandedPostings and Series
        # CLI flag: -blocks-storage.bucket-store.index-cache.inmemory.enabled-items
        [enabled_items: <list of string> | default = []]

      memcached:
        # Comma separated list of memcached addresses. Supported prefixes are:
        # dns+ (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV
        # query), dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup
        # made after that).
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.addresses
        [addresses: <string> | default = ""]

        # The socket read/write timeout.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.timeout
        [timeout: <duration> | default = 100ms]

        # The maximum number of idle connections that will be maintained per
        # address.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-idle-connections
        [max_idle_connections: <int> | default = 16]

        # The maximum number of concurrent asynchronous operations that can
        # occur.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # The maximum number of concurrent connections running get operations.
        # If set to 0, concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum number of keys a single underlying get operation should
        # run. If more keys are specified, internally keys are split into
        # multiple batches and fetched concurrently, honoring the max
        # concurrency. If set to 0, the max batch size is unlimited.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-get-multi-batch-size
        [max_get_multi_batch_size: <int> | default = 0]

        # The maximum size of an item stored in memcached. Bigger items are not
        # stored. If set to 0, no maximum size is enforced.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-item-size
        [max_item_size: <int> | default = 1048576]

        # Use the memcached auto-discovery mechanism provided by some cloud
        # providers like GCP and AWS.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.auto-discovery
        [auto_discovery: <boolean> | default = false]

        set_async_circuit_breaker_config:
          # If true, enable circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.set-async.circuit-breaker.enabled
          [enabled: <boolean> | default = false]

          # Maximum number of requests allowed to pass through when the circuit
          # breaker is half-open. If set to 0, by default it allows 1 request.
          # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.set-async.circuit-breaker.half-open-max-requests
          [half_open_max_requests: <int> | default = 10]

          # Period of the open state after which the state of the circuit
          # breaker becomes half-open. If set to 0, by default open duration is
          # 60 seconds.
          # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.set-async.circuit-breaker.open-duration
          [open_duration: <duration> | default = 5s]

          # Minimal requests to trigger the circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.set-async.circuit-breaker.min-requests
          [min_requests: <int> | default = 50]

          # Consecutive failures to determine if the circuit breaker should
          # open.
          # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.set-async.circuit-breaker.consecutive-failures
          [consecutive_failures: <int> | default = 5]

          # Failure percentage to determine if the circuit breaker should open.
          # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.set-async.circuit-breaker.failure-percent
          [failure_percent: <float> | default = 0.05]

        # Selectively cache index item types. Supported values are Postings,
        # ExpandedPostings and Series
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.enabled-items
        [enabled_items: <list of string> | default = []]

      redis:
        # Comma separated list of redis addresses. Supported prefixes are: dns+
        # (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV query),
        # dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup made after
        # that).
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.addresses
        [addresses: <string> | default = ""]

        # Redis username.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.username
        [username: <string> | default = ""]

        # Redis password.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.password
        [password: <string> | default = ""]

        # Database to be selected after connecting to the server.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.db
        [db: <int> | default = 0]

        # Specifies the master's name. Must not be empty for Redis Sentinel.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.master-name
        [master_name: <string> | default = ""]

        # The maximum number of concurrent GetMulti() operations. If set to 0,
        # concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum size per batch for mget.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.get-multi-batch-size
        [get_multi_batch_size: <int> | default = 100]

        # The maximum number of concurrent SetMulti() operations. If set to 0,
        # concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.max-set-multi-concurrency
        [max_set_multi_concurrency: <int> | default = 100]

        # The maximum size per batch for pipeline set.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-multi-batch-size
        [set_multi_batch_size: <int> | default = 100]

        # The maximum number of concurrent asynchronous operations that can
        # occur.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # Client dial timeout.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.dial-timeout
        [dial_timeout: <duration> | default = 5s]

        # Client read timeout.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.read-timeout
        [read_timeout: <duration> | default = 3s]

        # Client write timeout.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.write-timeout
        [write_timeout: <duration> | default = 3s]

        # Whether to enable tls for redis connection.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.tls-enabled
        [tls_enabled: <boolean> | default = false]

        # Path to the client certificate file, which will be used for
        # authenticating with the server. Also requires the key path to be
        # configured.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis..tls-cert-path
        [tls_cert_path: <string> | default = ""]

        # Path to the key file for the client certificate. Also requires the
        # client certificate to be configured.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis..tls-key-path
        [tls_key_path: <string> | default = ""]

        # Path to the CA certificates file to validate server certificate
        # against. If not set, the host's root CA certificates are used.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis..tls-ca-path
        [tls_ca_path: <string> | default = ""]

        # Override the expected name on the server certificate.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis..tls-server-name
        [tls_server_name: <string> | default = ""]

        # Skip validating server certificate.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis..tls-insecure-skip-verify
        [tls_insecure_skip_verify: <boolean> | default = false]

        # If not zero then client-side caching is enabled. Client-side caching
        # is when data is stored in memory instead of fetching data each time.
        # See https://redis.io/docs/manual/client-side-caching/ for more info.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.cache-size
        [cache_size: <int> | default = 0]

        set_async_circuit_breaker_config:
          # If true, enable circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-async.circuit-breaker.enabled
          [enabled: <boolean> | default = false]

          # Maximum number of requests allowed to pass through when the circuit
          # breaker is half-open. If set to 0, by default it allows 1 request.
          # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-async.circuit-breaker.half-open-max-requests
          [half_open_max_requests: <int> | default = 10]

          # Period of the open state after which the state of the circuit
          # breaker becomes half-open. If set to 0, by default open duration is
          # 60 seconds.
          # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-async.circuit-breaker.open-duration
          [open_duration: <duration> | default = 5s]

          # Minimal requests to trigger the circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-async.circuit-breaker.min-requests
          [min_requests: <int> | default = 50]

          # Consecutive failures to determine if the circuit breaker should
          # open.
          # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-async.circuit-breaker.consecutive-failures
          [consecutive_failures: <int> | default = 5]

          # Failure percentage to determine if the circuit breaker should open.
          # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-async.circuit-breaker.failure-percent
          [failure_percent: <float> | default = 0.05]

        # Selectively cache index item types. Supported values are Postings,
        # ExpandedPostings and Series
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.enabled-items
        [enabled_items: <list of string> | default = []]

      multilevel:
        # The maximum number of concurrent asynchronous operations that can
        # occur when backfilling cache items.
        # CLI flag: -blocks-storage.bucket-store.index-cache.multilevel.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed when
        # backfilling cache items.
        # CLI flag: -blocks-storage.bucket-store.index-cache.multilevel.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # The maximum number of items to backfill per asynchronous operation.
        # CLI flag: -blocks-storage.bucket-store.index-cache.multilevel.max-backfill-items
        [max_backfill_items: <int> | default = 10000]

    chunks_cache:
      # The chunks cache backend type. A single cache backend or multiple cache
      # backends can be provided. Supported values in single cache: memcached,
      # redis, inmemory, and '' (disable). Supported values in multi level
      # cache: a comma-separated list of (inmemory, memcached, redis)
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.backend
      [backend: <string> | default = ""]

      inmemory:
        # Maximum size in bytes of in-memory chunk cache used to speed up chunk
        # lookups (shared between all tenants).
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.inmemory.max-size-bytes
        [max_size_bytes: <int> | default = 1073741824]

      memcached:
        # Comma separated list of memcached addresses. Supported prefixes are:
        # dns+ (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV
        # query), dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup
        # made after that).
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.addresses
        [addresses: <string> | default = ""]

        # The socket read/write timeout.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.timeout
        [timeout: <duration> | default = 100ms]

        # The maximum number of idle connections that will be maintained per
        # address.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-idle-connections
        [max_idle_connections: <int> | default = 16]

        # The maximum number of concurrent asynchronous operations that can
        # occur.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # The maximum number of concurrent connections running get operations.
        # If set to 0, concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum number of keys a single underlying get operation should
        # run. If more keys are specified, internally keys are split into
        # multiple batches and fetched concurrently, honoring the max
        # concurrency. If set to 0, the max batch size is unlimited.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-get-multi-batch-size
        [max_get_multi_batch_size: <int> | default = 0]

        # The maximum size of an item stored in memcached. Bigger items are not
        # stored. If set to 0, no maximum size is enforced.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-item-size
        [max_item_size: <int> | default = 1048576]

        # Use the memcached auto-discovery mechanism provided by some cloud
        # providers like GCP and AWS.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.auto-discovery
        [auto_discovery: <boolean> | default = false]

        set_async_circuit_breaker_config:
          # If true, enable circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.set-async.circuit-breaker.enabled
          [enabled: <boolean> | default = false]

          # Maximum number of requests allowed to pass through when the circuit
          # breaker is half-open. If set to 0, by default it allows 1 request.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.set-async.circuit-breaker.half-open-max-requests
          [half_open_max_requests: <int> | default = 10]

          # Period of the open state after which the state of the circuit
          # breaker becomes half-open. If set to 0, by default open duration is
          # 60 seconds.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.set-async.circuit-breaker.open-duration
          [open_duration: <duration> | default = 5s]

          # Minimal requests to trigger the circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.set-async.circuit-breaker.min-requests
          [min_requests: <int> | default = 50]

          # Consecutive failures to determine if the circuit breaker should
          # open.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.set-async.circuit-breaker.consecutive-failures
          [consecutive_failures: <int> | default = 5]

          # Failure percentage to determine if the circuit breaker should open.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.set-async.circuit-breaker.failure-percent
          [failure_percent: <float> | default = 0.05]

      redis:
        # Comma separated list of redis addresses. Supported prefixes are: dns+
        # (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV query),
        # dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup made after
        # that).
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.addresses
        [addresses: <string> | default = ""]

        # Redis username.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.username
        [username: <string> | default = ""]

        # Redis password.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.password
        [password: <string> | default = ""]

        # Database to be selected after connecting to the server.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.db
        [db: <int> | default = 0]

        # Specifies the master's name. Must not be empty for Redis Sentinel.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.master-name
        [master_name: <string> | default = ""]

        # The maximum number of concurrent GetMulti() operations. If set to 0,
        # concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum size per batch for mget.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.get-multi-batch-size
        [get_multi_batch_size: <int> | default = 100]

        # The maximum number of concurrent SetMulti() operations. If set to 0,
        # concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.max-set-multi-concurrency
        [max_set_multi_concurrency: <int> | default = 100]

        # The maximum size per batch for pipeline set.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-multi-batch-size
        [set_multi_batch_size: <int> | default = 100]

        # The maximum number of concurrent asynchronous operations that can
        # occur.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # Client dial timeout.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.dial-timeout
        [dial_timeout: <duration> | default = 5s]

        # Client read timeout.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.read-timeout
        [read_timeout: <duration> | default = 3s]

        # Client write timeout.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.write-timeout
        [write_timeout: <duration> | default = 3s]

        # Whether to enable tls for redis connection.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.tls-enabled
        [tls_enabled: <boolean> | default = false]

        # Path to the client certificate file, which will be used for
        # authenticating with the server. Also requires the key path to be
        # configured.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis..tls-cert-path
        [tls_cert_path: <string> | default = ""]

        # Path to the key file for the client certificate. Also requires the
        # client certificate to be configured.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis..tls-key-path
        [tls_key_path: <string> | default = ""]

        # Path to the CA certificates file to validate server certificate
        # against. If not set, the host's root CA certificates are used.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis..tls-ca-path
        [tls_ca_path: <string> | default = ""]

        # Override the expected name on the server certificate.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis..tls-server-name
        [tls_server_name: <string> | default = ""]

        # Skip validating server certificate.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis..tls-insecure-skip-verify
        [tls_insecure_skip_verify: <boolean> | default = false]

        # If not zero then client-side caching is enabled. Client-side caching
        # is when data is stored in memory instead of fetching data each time.
        # See https://redis.io/docs/manual/client-side-caching/ for more info.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.cache-size
        [cache_size: <int> | default = 0]

        set_async_circuit_breaker_config:
          # If true, enable circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-async.circuit-breaker.enabled
          [enabled: <boolean> | default = false]

          # Maximum number of requests allowed to pass through when the circuit
          # breaker is half-open. If set to 0, by default it allows 1 request.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-async.circuit-breaker.half-open-max-requests
          [half_open_max_requests: <int> | default = 10]

          # Period of the open state after which the state of the circuit
          # breaker becomes half-open. If set to 0, by default open duration is
          # 60 seconds.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-async.circuit-breaker.open-duration
          [open_duration: <duration> | default = 5s]

          # Minimal requests to trigger the circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-async.circuit-breaker.min-requests
          [min_requests: <int> | default = 50]

          # Consecutive failures to determine if the circuit breaker should
          # open.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-async.circuit-breaker.consecutive-failures
          [consecutive_failures: <int> | default = 5]

          # Failure percentage to determine if the circuit breaker should open.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-async.circuit-breaker.failure-percent
          [failure_percent: <float> | default = 0.05]

      multilevel:
        # The maximum number of concurrent asynchronous operations that can
        # occur when backfilling cache items.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.multilevel.max-async-concurrency
        [max_async_concurrency: <int> | default = 3]

        # The maximum number of enqueued asynchronous operations allowed when
        # backfilling cache items.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.multilevel.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # The maximum number of items to backfill per asynchronous operation.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.multilevel.max-backfill-items
        [max_backfill_items: <int> | default = 10000]

      # Size of each subrange that bucket object is split into for better
      # caching.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.subrange-size
      [subrange_size: <int> | default = 16000]

      # Maximum number of sub-GetRange requests that a single GetRange request
      # can be split into when fetching chunks. Zero or negative value =
      # unlimited number of sub-requests.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.max-get-range-requests
      [max_get_range_requests: <int> | default = 3]

      # TTL for caching object attributes for chunks.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.attributes-ttl
      [attributes_ttl: <duration> | default = 168h]

      # TTL for caching individual chunks subranges.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.subrange-ttl
      [subrange_ttl: <duration> | default = 24h]

    metadata_cache:
      # Backend for metadata cache, if not empty. Supported values: memcached,
      # redis, and '' (disable).
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.backend
      [backend: <string> | default = ""]

      memcached:
        # Comma separated list of memcached addresses. Supported prefixes are:
        # dns+ (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV
        # query), dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup
        # made after that).
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.addresses
        [addresses: <string> | default = ""]

        # The socket read/write timeout.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.timeout
        [timeout: <duration> | default = 100ms]

        # The maximum number of idle connections that will be maintained per
        # address.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-idle-connections
        [max_idle_connections: <int> | default = 16]

        # The maximum number of concurrent asynchronous operations that can
        # occur.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # The maximum number of concurrent connections running get operations.
        # If set to 0, concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum number of keys a single underlying get operation should
        # run. If more keys are specified, internally keys are split into
        # multiple batches and fetched concurrently, honoring the max
        # concurrency. If set to 0, the max batch size is unlimited.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-get-multi-batch-size
        [max_get_multi_batch_size: <int> | default = 0]

        # The maximum size of an item stored in memcached. Bigger items are not
        # stored. If set to 0, no maximum size is enforced.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-item-size
        [max_item_size: <int> | default = 1048576]

        # Use the memcached auto-discovery mechanism provided by some cloud
        # providers like GCP and AWS.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.auto-discovery
        [auto_discovery: <boolean> | default = false]

        set_async_circuit_breaker_config:
          # If true, enable circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.set-async.circuit-breaker.enabled
          [enabled: <boolean> | default = false]

          # Maximum number of requests allowed to pass through when the circuit
          # breaker is half-open. If set to 0, by default it allows 1 request.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.set-async.circuit-breaker.half-open-max-requests
          [half_open_max_requests: <int> | default = 10]

          # Period of the open state after which the state of the circuit
          # breaker becomes half-open. If set to 0, by default open duration is
          # 60 seconds.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.set-async.circuit-breaker.open-duration
          [open_duration: <duration> | default = 5s]

          # Minimal requests to trigger the circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.set-async.circuit-breaker.min-requests
          [min_requests: <int> | default = 50]

          # Consecutive failures to determine if the circuit breaker should
          # open.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.set-async.circuit-breaker.consecutive-failures
          [consecutive_failures: <int> | default = 5]

          # Failure percentage to determine if the circuit breaker should open.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.set-async.circuit-breaker.failure-percent
          [failure_percent: <float> | default = 0.05]

      redis:
        # Comma separated list of redis addresses. Supported prefixes are: dns+
        # (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV query),
        # dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup made after
        # that).
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.addresses
        [addresses: <string> | default = ""]

        # Redis username.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.username
        [username: <string> | default = ""]

        # Redis password.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.password
        [password: <string> | default = ""]

        # Database to be selected after connecting to the server.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.db
        [db: <int> | default = 0]

        # Specifies the master's name. Must not be empty for Redis Sentinel.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.master-name
        [master_name: <string> | default = ""]

        # The maximum number of concurrent GetMulti() operations. If set to 0,
        # concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum size per batch for mget.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.get-multi-batch-size
        [get_multi_batch_size: <int> | default = 100]

        # The maximum number of concurrent SetMulti() operations. If set to 0,
        # concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.max-set-multi-concurrency
        [max_set_multi_concurrency: <int> | default = 100]

        # The maximum size per batch for pipeline set.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-multi-batch-size
        [set_multi_batch_size: <int> | default = 100]

        # The maximum number of concurrent asynchronous operations that can
        # occur.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # Client dial timeout.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.dial-timeout
        [dial_timeout: <duration> | default = 5s]

        # Client read timeout.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.read-timeout
        [read_timeout: <duration> | default = 3s]

        # Client write timeout.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.write-timeout
        [write_timeout: <duration> | default = 3s]

        # Whether to enable TLS for the redis connection.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.tls-enabled
        [tls_enabled: <boolean> | default = false]

        # Path to the client certificate file, which will be used for
        # authenticating with the server. Also requires the key path to be
        # configured.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis..tls-cert-path
        [tls_cert_path: <string> | default = ""]

        # Path to the key file for the client certificate. Also requires the
        # client certificate to be configured.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis..tls-key-path
        [tls_key_path: <string> | default = ""]

        # Path to the CA certificates file to validate server certificate
        # against. If not set, the host's root CA certificates are used.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis..tls-ca-path
        [tls_ca_path: <string> | default = ""]

        # Override the expected name on the server certificate.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis..tls-server-name
        [tls_server_name: <string> | default = ""]

        # Skip validating server certificate.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis..tls-insecure-skip-verify
        [tls_insecure_skip_verify: <boolean> | default = false]

        # If not zero, client-side caching is enabled. With client-side
        # caching, data is stored in the client's memory instead of being
        # fetched from the server on each access.
        # See https://redis.io/docs/manual/client-side-caching/ for more info.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.cache-size
        [cache_size: <int> | default = 0]

        set_async_circuit_breaker_config:
          # If true, enable circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-async.circuit-breaker.enabled
          [enabled: <boolean> | default = false]

          # Maximum number of requests allowed to pass through when the circuit
          # breaker is half-open. If set to 0, by default it allows 1 request.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-async.circuit-breaker.half-open-max-requests
          [half_open_max_requests: <int> | default = 10]

          # Period of the open state after which the state of the circuit
          # breaker becomes half-open. If set to 0, by default open duration is
          # 60 seconds.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-async.circuit-breaker.open-duration
          [open_duration: <duration> | default = 5s]

          # Minimum number of requests to trigger the circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-async.circuit-breaker.min-requests
          [min_requests: <int> | default = 50]

          # Consecutive failures to determine if the circuit breaker should
          # open.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-async.circuit-breaker.consecutive-failures
          [consecutive_failures: <int> | default = 5]

          # Failure percentage to determine if the circuit breaker should open.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-async.circuit-breaker.failure-percent
          [failure_percent: <float> | default = 0.05]
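
        # Illustrative sketch only (values are arbitrary examples): a redis
        # backend for this cache could combine the connection, TLS and circuit
        # breaker options documented above.
        #
        #   redis:
        #     addresses: "redis-1:6379,redis-2:6379"
        #     db: 0
        #     tls_enabled: true
        #     dial_timeout: 5s
        #     set_async_circuit_breaker_config:
        #       enabled: true
        #       half_open_max_requests: 10
        #       open_duration: 5s
        #       min_requests: 50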

      # How long to cache list of tenants in the bucket.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.tenants-list-ttl
      [tenants_list_ttl: <duration> | default = 15m]

      # How long to cache list of blocks for each tenant.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.tenant-blocks-list-ttl
      [tenant_blocks_list_ttl: <duration> | default = 5m]

      # How long to cache list of chunks for a block.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.chunks-list-ttl
      [chunks_list_ttl: <duration> | default = 24h]

      # How long to cache information that block metafile exists. Also used for
      # user deletion mark file.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-exists-ttl
      [metafile_exists_ttl: <duration> | default = 2h]

      # How long to cache information that block metafile doesn't exist. Also
      # used for user deletion mark file.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-doesnt-exist-ttl
      [metafile_doesnt_exist_ttl: <duration> | default = 5m]

      # How long to cache content of the metafile.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-content-ttl
      [metafile_content_ttl: <duration> | default = 24h]

      # Maximum size of metafile content to cache in bytes. Caching will be
      # skipped if the content exceeds this size. This is useful to avoid a
      # network round trip for large content if the configured caching backend
      # has a hard limit on the cached item size (in this case, you should set
      # this limit to match the limit in the caching backend).
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-max-size-bytes
      [metafile_max_size_bytes: <int> | default = 1048576]

      # How long to cache attributes of the block metafile.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-attributes-ttl
      [metafile_attributes_ttl: <duration> | default = 168h]

      # How long to cache attributes of the block index.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.block-index-attributes-ttl
      [block_index_attributes_ttl: <duration> | default = 168h]

      # How long to cache content of the bucket index.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.bucket-index-content-ttl
      [bucket_index_content_ttl: <duration> | default = 5m]

      # Maximum size of bucket index content to cache in bytes. Caching will be
      # skipped if the content exceeds this size. This is useful to avoid a
      # network round trip for large content if the configured caching backend
      # has a hard limit on the cached item size (in this case, you should set
      # this limit to match the limit in the caching backend).
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.bucket-index-max-size-bytes
      [bucket_index_max_size_bytes: <int> | default = 1048576]
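
      # Illustrative note: per the descriptions above, the *_max_size_bytes
      # settings are typically aligned with the hard item-size limit of the
      # caching backend, e.g. with the memcached default max_item_size of
      # 1048576 bytes (1MiB):
      #
      #   metafile_max_size_bytes: 1048576
      #   bucket_index_max_size_bytes: 1048576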

    # Duration after which the blocks marked for deletion will be filtered out
    # while fetching blocks. The idea of ignore-deletion-marks-delay is to
    # ignore blocks that are marked for deletion with some delay. This ensures
    # the store can still serve blocks that are meant to be deleted but do not
    # have a replacement yet. Default is 6h, half of the default value for
    # -compactor.deletion-delay.
    # CLI flag: -blocks-storage.bucket-store.ignore-deletion-marks-delay
    [ignore_deletion_mark_delay: <duration> | default = 6h]

    # The blocks created since `now() - ignore_blocks_within` will not be
    # synced. This should be used together with `-querier.query-store-after` to
    # filter out the blocks that are too new to be queried. A reasonable value
    # for this flag would be `-querier.query-store-after -
    # blocks-storage.bucket-store.bucket-index.max-stale-period` to give some
    # buffer. 0 to disable.
    # CLI flag: -blocks-storage.bucket-store.ignore-blocks-within
    [ignore_blocks_within: <duration> | default = 0s]
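
    # Worked example (illustrative values): with -querier.query-store-after=12h
    # and the default 1h value of
    # -blocks-storage.bucket-store.bucket-index.max-stale-period, the formula
    # above gives:
    #
    #   ignore_blocks_within: 11h   # 12h - 1h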

    bucket_index:
      # True to enable querier and store-gateway to discover blocks in the
      # storage via bucket index instead of bucket scanning.
      # CLI flag: -blocks-storage.bucket-store.bucket-index.enabled
      [enabled: <boolean> | default = false]

      # How frequently to retry loading a bucket index that previously failed
      # to load. This option is used only by the querier.
      # CLI flag: -blocks-storage.bucket-store.bucket-index.update-on-error-interval
      [update_on_error_interval: <duration> | default = 1m]

      # How long an unused bucket index should be cached. Once this timeout
      # expires, the unused bucket index is removed from the in-memory cache.
      # This option is used only by the querier.
      # CLI flag: -blocks-storage.bucket-store.bucket-index.idle-timeout
      [idle_timeout: <duration> | default = 1h]

      # The maximum allowed age of a bucket index (last updated) before queries
      # start failing because the bucket index is too old. The bucket index is
      # periodically updated by the compactor, while this check is enforced in
      # the querier (at query time).
      # CLI flag: -blocks-storage.bucket-store.bucket-index.max-stale-period
      [max_stale_period: <duration> | default = 1h]
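
      # Illustrative sketch: enabling bucket index based block discovery, with
      # the other fields left at their documented defaults. The compactor must
      # keep the bucket index updated (see max_stale_period above).
      #
      #   bucket_index:
      #     enabled: true
      #     update_on_error_interval: 1m
      #     idle_timeout: 1h
      #     max_stale_period: 1h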

    # One of concurrent, recursive, bucket_index. When set to concurrent, stores
    # will concurrently issue one call per directory to discover active blocks
    # in the bucket. The recursive strategy iterates through all objects in the
    # bucket, recursively traversing into each directory. This avoids N+1 calls
    # at the expense of slower bucket iterations. The bucket_index strategy can
    # be used by the compactor only and utilizes the existing bucket index to
    # fetch block IDs to sync. This avoids iterating the bucket but can be
    # impacted by delays in the cleaner creating the bucket index.
    # CLI flag: -blocks-storage.bucket-store.block-discovery-strategy
    [block_discovery_strategy: <string> | default = "concurrent"]

    # Max size - in bytes - of a chunks pool, used to reduce memory allocations.
    # The pool is shared across all tenants. 0 to disable the limit.
    # CLI flag: -blocks-storage.bucket-store.max-chunk-pool-bytes
    [max_chunk_pool_bytes: <int> | default = 2147483648]

    # If enabled, store-gateway will lazily memory-map an index-header only once
    # required by a query.
    # CLI flag: -blocks-storage.bucket-store.index-header-lazy-loading-enabled
    [index_header_lazy_loading_enabled: <boolean> | default = false]

    # If index-header lazy loading is enabled and this setting is > 0, the
    # store-gateway will release memory-mapped index-headers after 'idle
    # timeout' inactivity.
    # CLI flag: -blocks-storage.bucket-store.index-header-lazy-loading-idle-timeout
    [index_header_lazy_loading_idle_timeout: <duration> | default = 20m]

    # If true, the store-gateway will estimate the postings size and lazily
    # expand postings if doing so downloads less data than expanding all
    # postings.
    # CLI flag: -blocks-storage.bucket-store.lazy-expanded-postings-enabled
    [lazy_expanded_postings_enabled: <boolean> | default = false]

    # Controls how many series to fetch per batch in the store-gateway. Default
    # value is 10000.
    # CLI flag: -blocks-storage.bucket-store.series-batch-size
    [series_batch_size: <int> | default = 10000]
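
    # Illustrative sketch (arbitrary example values): lazy index-header loading
    # and lazy expanded postings can be opted into alongside the series batch
    # size documented above.
    #
    #   index_header_lazy_loading_enabled: true
    #   index_header_lazy_loading_idle_timeout: 20m
    #   lazy_expanded_postings_enabled: true
    #   series_batch_size: 10000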

    token_bucket_bytes_limiter:
      # Token bucket bytes limiter mode. Supported values are: disabled, dryrun,
      # enabled
      # CLI flag: -blocks-storage.bucket-store.token-bucket-bytes-limiter.mode
      [mode: <string> | default = "disabled"]

      # Instance token bucket size
      # CLI flag: -blocks-storage.bucket-store.token-bucket-bytes-limiter.instance-token-bucket-size
      [instance_token_bucket_size: <int> | default = 859832320]

      # User token bucket size
      # CLI flag: -blocks-storage.bucket-store.token-bucket-bytes-limiter.user-token-bucket-size
      [user_token_bucket_size: <int> | default = 644874240]

      # Request token bucket size
      # CLI flag: -blocks-storage.bucket-store.token-bucket-bytes-limiter.request-token-bucket-size
      [request_token_bucket_size: <int> | default = 4194304]
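
      # Illustrative sketch, assuming the "dryrun" mode evaluates the limiter
      # without enforcing it (the reference above only lists the supported
      # modes):
      #
      #   token_bucket_bytes_limiter:
      #     mode: "dryrun"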

  tsdb:
    # Local directory to store TSDBs in the ingesters.
    # CLI flag: -blocks-storage.tsdb.dir
    [dir: <string> | default = "tsdb"]

    # TSDB blocks range period.
    # CLI flag: -blocks-storage.tsdb.block-ranges-period
    [block_ranges_period: <list of duration> | default = 2h0m0s]

    # TSDB blocks retention in the ingester before a block is removed. This
    # should be larger than the block_ranges_period and large enough to give
    # store-gateways and queriers enough time to discover newly uploaded blocks.
    # CLI flag: -blocks-storage.tsdb.retention-period
    [retention_period: <duration> | default = 6h]
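
    # Illustrative note: with the default block_ranges_period of 2h, the
    # default retention_period of 6h satisfies the "larger than
    # block_ranges_period" requirement above with a 3x margin, leaving time for
    # store-gateways and queriers to discover newly uploaded blocks.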

    # How frequently the TSDB blocks are scanned and new ones are shipped to the
    # storage. 0 means shipping is disabled.
    # CLI flag: -blocks-storage.tsdb.ship-interval
    [ship_interval: <duration> | default = 1m]

    # Maximum number of tenants concurrently shipping blocks to the storage.
    # CLI flag: -blocks-storage.tsdb.ship-concurrency
    [ship_concurrency: <int> | default = 10]

    # How frequently Cortex tries to compact the TSDB head. A block is only
    # created if the data covers the smallest block range. Must be greater than
    # 0 and at most 30 minutes. Note that up to 50% jitter is added to the
    # value for the first compaction to avoid ingesters compacting concurrently.
    # CLI flag: -blocks-storage.tsdb.head-compaction-interval
    [head_compaction_interval: <duration> | default = 1m]

    # Maximum number of tenants concurrently compacting the TSDB head into a
    # new block.
    # CLI flag: -blocks-storage.tsdb.head-compaction-concurrency
    [head_compaction_concurrency: <int> | default = 5]

    # If TSDB head is idle for this duration, it is compacted. Note that up to
    # 25% jitter is added to the value to avoid ingesters compacting
    # concurrently. 0 means disabled.
    # CLI flag: -blocks-storage.tsdb.head-compaction-idle-timeout
    [head_compaction_idle_timeout: <duration> | default = 1h]

    # The write buffer size used by the head chunks mapper. Lower values reduce
    # memory utilisation on clusters with a large number of tenants at the cost
    # of increased disk I/O operations.
    # CLI flag: -blocks-storage.tsdb.head-chunks-write-buffer-size-bytes
    [head_chunks_write_buffer_size_bytes: <int> | default = 4194304]

    # The number of shards of series to use in TSDB (must be a power of 2).
    # Reducing this will decrease memory footprint, but can negatively impact
    # performance.
    # CLI flag: -blocks-storage.tsdb.stripe-size
    [stripe_size: <int> | default = 16384]

    # Deprecated (use blocks-storage.tsdb.wal-compression-type instead): True to
    # enable TSDB WAL compression.
    # CLI flag: -blocks-storage.tsdb.wal-compression-enabled
    [wal_compression_enabled: <boolean> | default = false]

    # TSDB WAL compression type. Supported values are: 'snappy', 'zstd' and ''
    # (disable compression).
    # CLI flag: -blocks-storage.tsdb.wal-compression-type
    [wal_compression_type: <string> | default = ""]
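
    # Illustrative sketch: instead of the deprecated boolean above, the
    # compression algorithm is selected via the type, e.g.
    #
    #   wal_compression_type: "snappy"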

    # TSDB WAL segments files max size (bytes).
    # CLI flag: -blocks-storage.tsdb.wal-segment-size-bytes
    [wal_segment_size_bytes: <int> | default = 134217728]

    # True to flush blocks to storage on shutdown. If false, incomplete blocks
    # will be reused after restart.
    # CLI flag: -blocks-storage.tsdb.flush-blocks-on-shutdown
    [flush_blocks_on_shutdown: <boolean> | default = false]

    # If TSDB has not received any data for this duration, and all blocks from
    # TSDB have been shipped, TSDB is closed and deleted from local disk. If
    # set to a positive value, this value should be equal to or higher than the
    # -querier.query-ingesters-within flag to make sure that TSDB is not closed
    # prematurely, which could cause partial query results. 0 or a negative
    # value disables closing of idle TSDB.
    # CLI flag: -blocks-storage.tsdb.close-idle-tsdb-timeout
    [close_idle_tsdb_timeout: <duration> | default = 0s]
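
    # Illustrative example (hypothetical value): if queriers run with
    # -querier.query-ingesters-within=13h, then per the note above idle TSDB
    # cleanup could be enabled with a value no lower than that, e.g.
    #
    #   close_idle_tsdb_timeout: 13h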

    # The size of the in-memory queue used before flushing chunks to the disk.
    # CLI flag: -blocks-storage.tsdb.head-chunks-write-queue-size
    [head_chunks_write_queue_size: <int> | default = 0]

    # Limit the number of TSDBs concurrently opening on startup.
    # CLI flag: -blocks-storage.tsdb.max-tsdb-opening-concurrency-on-startup
    [max_tsdb_opening_concurrency_on_startup: <int> | default = 10]

    # Deprecated: use maxExemplars in limits instead. If the MaxExemplars value
    # in limits is set to zero, Cortex will fall back to this value. This
    # setting enables support for exemplars in TSDB and sets the maximum number
    # that will be stored. 0 or less means disabled.
    # CLI flag: -blocks-storage.tsdb.max-exemplars
    [max_exemplars: <int> | default = 0]

    # True to enable snapshotting of in-memory TSDB data on disk when shutting
    # down.
    # CLI flag: -blocks-storage.tsdb.memory-snapshot-on-shutdown
    [memory_snapshot_on_shutdown: <boolean> | default = false]

    # [EXPERIMENTAL] Configures the maximum number of samples per chunk that can
    # be out-of-order.
    # CLI flag: -blocks-storage.tsdb.out-of-order-cap-max
    [out_of_order_cap_max: <int> | default = 32]

    # [EXPERIMENTAL] True to enable native histograms.
    # CLI flag: -blocks-storage.tsdb.enable-native-histograms
    [enable_native_histograms: <boolean> | default = false]