Store-gateway

The store-gateway is the Cortex service responsible for querying series from blocks. It is required when running the Cortex blocks storage.

The store-gateway is semi-stateful.

How it works

The store-gateway needs an almost up-to-date view over the storage bucket in order to discover the blocks belonging to its shard. The store-gateway can keep the bucket view updated in two different ways:

  1. Periodically scanning the bucket (default)
  2. Periodically downloading the bucket index

Bucket index disabled (default)

At startup, store-gateways iterate over the entire storage bucket to discover blocks for all tenants and download the meta.json and index-header for each block. During this initial bucket synchronization phase, the store-gateway /ready readiness probe endpoint will fail.

While running, store-gateways periodically rescan the storage bucket to discover new blocks (uploaded by the ingesters and compactor) and blocks marked for deletion or fully deleted since the last scan (as a result of compaction). The frequency at which this occurs is configured via -blocks-storage.bucket-store.sync-interval.
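
For example, a minimal sketch of tuning the rescan frequency in the YAML config (the 5m value is purely illustrative):

blocks_storage:
  bucket_store:
    # Rescan the bucket every 5 minutes instead of the default 15m.
    sync_interval: 5m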

The block chunks and the entire index are never fully downloaded by the store-gateway. The index-header is stored on local disk, in order to avoid re-downloading it on subsequent restarts of a store-gateway. For this reason, it’s recommended - but not required - to run the store-gateway with a persistent disk. For example, if you’re running the Cortex cluster in Kubernetes, you may use a StatefulSet with a persistent volume claim for the store-gateways.

For more information about the index-header, please refer to Binary index-header documentation.

Bucket index enabled

When the bucket index is enabled, the overall workflow is the same, but instead of iterating over the bucket objects, the store-gateway fetches the bucket index for each tenant belonging to its shard in order to discover each tenant’s blocks and block deletion marks.

For more information about the bucket index, please refer to bucket index documentation.
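
As a reference, and assuming the option name documented in the bucket index documentation, enabling this mode looks roughly like the following sketch:

blocks_storage:
  bucket_store:
    bucket_index:
      # Discover blocks and deletion marks by downloading the per-tenant
      # bucket index instead of scanning the bucket.
      enabled: true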

Blocks sharding and replication

The store-gateway optionally supports blocks sharding. Sharding can be used to horizontally scale blocks in a large cluster without hitting any vertical scalability limit.

When sharding is enabled, store-gateway instances build a hash ring, and blocks are sharded and replicated across the pool of store-gateway instances registered within the ring.

Store-gateways continuously monitor the ring state and, whenever the ring topology changes (e.g. a new instance has been added/removed or becomes healthy/unhealthy), each store-gateway instance resyncs the blocks assigned to its shard, based on the block ID hash matching the token ranges assigned to the instance itself within the ring.

For each block belonging to a store-gateway shard, the store-gateway loads its meta.json, the deletion-mark.json and the index-header. Once a block is loaded on the store-gateway, it’s ready to be queried by queriers. When the querier queries blocks through a store-gateway, the response will contain the list of actually queried block IDs. If a querier tries to query a block which has not been loaded by a store-gateway, the querier will either retry on a different store-gateway (if blocks replication is enabled) or fail the query.

Blocks can be replicated across multiple store-gateway instances based on a replication factor configured via -store-gateway.sharding-ring.replication-factor. The blocks replication is used to protect from query failures caused by some blocks not being loaded by any store-gateway instance at a given time, for example in the event of a store-gateway failure or while restarting a store-gateway instance (e.g. during a rolling update).

This feature can be enabled via -store-gateway.sharding-enabled=true and requires the backend hash ring to be configured via -store-gateway.sharding-ring.* flags (or their respective YAML config options).
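
For example, a minimal sketch enabling blocks sharding with a Consul-backed ring (when running Cortex in microservices mode, the same sharding options must also be set on the queriers):

store_gateway:
  sharding_enabled: true
  sharding_ring:
    kvstore:
      # Hash ring shared across store-gateway instances via Consul.
      store: consul
      prefix: collectors/
    # Each block is replicated to 3 store-gateways.
    replication_factor: 3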

Sharding strategies

The store-gateway supports the following sharding strategies:

  • default
  • shuffle-sharding
  • zone-stable-shuffle-sharding

The default sharding strategy spreads the blocks of each tenant across all store-gateway instances. It’s the easiest form of sharding supported, but doesn’t provide any workload isolation between different tenants.

The shuffle-sharding strategy spreads the blocks of a tenant across a subset of store-gateway instances. This way, the number of store-gateway instances loading blocks of a single tenant is limited and the blast radius of any issue that could be introduced by the tenant’s workload is limited to its shard instances.

The shuffle sharding strategy can be enabled via -store-gateway.sharding-strategy=shuffle-sharding and requires the -store-gateway.tenant-shard-size flag (or its respective YAML config option) to be set to the default shard size, which is the default number of store-gateway instances each tenant’s blocks should be sharded to. The shard size can then be overridden on a per-tenant basis by setting store_gateway_tenant_shard_size in the limits overrides.
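
A hedged sketch of what this could look like, assuming the usual limits and runtime overrides layout (the tenant name and shard sizes are illustrative):

store_gateway:
  sharding_enabled: true
  sharding_strategy: shuffle-sharding

limits:
  # Default shard size applied to all tenants.
  store_gateway_tenant_shard_size: 3

# In the runtime config (limits overrides) file:
overrides:
  tenant-a:
    store_gateway_tenant_shard_size: 6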

The zone-stable-shuffle-sharding strategy achieves the same result as the shuffle-sharding strategy, but using a different sharding algorithm. The new algorithm ensures that, when zone awareness is enabled and the shard size increases or decreases by one, the replicas for any block change by at most one instance. This is important when querying the store-gateway, because a block can be retried at most 3 times.

Zone-stable shuffle sharding can be enabled via the -store-gateway.sharding-ring.zone-stable-shuffle-sharding CLI flag.

It will become the default shuffle sharding strategy for the store-gateway in the v1.17.0 release, and the previous shuffle sharding algorithm will be removed in the v1.18.0 release.

Please check out the shuffle sharding documentation for more information about how it works.

Auto-forget

When a store-gateway instance shuts down cleanly, it automatically unregisters itself from the ring. However, in the event of a crash or node failure, the instance will not be unregistered from the ring, potentially leaving a spurious entry in the ring forever.

To protect from this, when a healthy store-gateway instance finds another instance in the ring which has been unhealthy for more than 10 times the configured -store-gateway.sharding-ring.heartbeat-timeout, the healthy instance forcibly removes the unhealthy one from the ring.

This feature is called auto-forget and is built into the store-gateway.

Zone-awareness

The store-gateway replication optionally supports zone-awareness. When zone-aware replication is enabled and the blocks replication factor is > 1, each block is guaranteed to be replicated across store-gateway instances running in different availability zones.

To enable the zone-aware replication for the store-gateways you should:

  1. Configure the availability zone for each store-gateway via the -store-gateway.sharding-ring.instance-availability-zone CLI flag (or its respective YAML config option)
  2. Enable blocks zone-aware replication via the -store-gateway.sharding-ring.zone-awareness-enabled CLI flag (or its respective YAML config option). Please be aware this configuration option should be set on store-gateways, queriers and rulers.
  3. Roll out store-gateways, queriers and rulers to apply the new configuration (a configuration sketch follows below)
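
A minimal sketch of the zone-aware settings on a single store-gateway (the zone name is illustrative; remember that zone_awareness_enabled must also be set on queriers and rulers):

store_gateway:
  sharding_enabled: true
  sharding_ring:
    replication_factor: 3
    # Replicate each block across store-gateways in different zones.
    zone_awareness_enabled: true
    # The zone this particular instance runs in.
    instance_availability_zone: us-east-1a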

Waiting for stable ring at startup

In the event of a cluster cold start, or a scale-up of 2+ store-gateway instances at the same time, we may end up in a situation where each new store-gateway instance starts at a slightly different time and thus runs the initial blocks sync based on a different state of the ring. For example, in case of a cold start, the first store-gateway joining the ring may load all blocks, since the sharding logic runs based on the current state of the ring, which contains a single store-gateway.

To reduce the likelihood of this happening, the store-gateway waits for a stable ring at startup. A ring is considered stable if no instance is added to or removed from the ring for at least -store-gateway.sharding-ring.wait-stability-min-duration. If the ring keeps changing after -store-gateway.sharding-ring.wait-stability-max-duration, the store-gateway stops waiting for a stable ring and proceeds with startup.

To disable this waiting logic, you can start the store-gateway with -store-gateway.sharding-ring.wait-stability-min-duration=0.
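
For example, a sketch that waits for the ring to be stable for at least 30 seconds but gives up after 2 minutes (values are illustrative):

store_gateway:
  sharding_ring:
    wait_stability_min_duration: 30s
    wait_stability_max_duration: 2m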

Blocks index-header

The index-header is a subset of the block index which the store-gateway downloads from the object storage and keeps on the local disk in order to speed up queries.

At startup, the store-gateway downloads the index-header of each block belonging to its shard. A store-gateway is not ready until this initial index-header download is completed. Moreover, while running, the store-gateway periodically looks for newly uploaded blocks in the storage and downloads the index-header for the blocks belonging to its shard.

Index-header lazy loading

By default, each index-header is memory mapped by the store-gateway right after downloading it. In a cluster with a large number of blocks, each store-gateway may end up with a large number of memory-mapped index-headers, regardless of how frequently they’re used at query time.

Cortex supports a configuration option -blocks-storage.bucket-store.index-header-lazy-loading-enabled=true to enable index-header lazy loading. When enabled, index-headers are memory mapped only when required by a query and are automatically released after -blocks-storage.bucket-store.index-header-lazy-loading-idle-timeout of inactivity.
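
A sketch of what this could look like in the YAML config, assuming the usual CLI-flag-to-YAML mapping of these two options (the 1h timeout is illustrative):

blocks_storage:
  bucket_store:
    # Memory map index-headers only when a query needs them.
    index_header_lazy_loading_enabled: true
    # Release memory-mapped index-headers after 1h of inactivity.
    index_header_lazy_loading_idle_timeout: 1h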

Caching

The store-gateway supports the following caches:

  • Index cache
  • Chunks cache
  • Metadata cache

Caching is optional, but highly recommended in a production environment. Please also check out the production tips for more information about configuring the cache.

Index cache

The store-gateway can use a cache to speed up lookups of postings and series from TSDB block indexes. The following backends are supported:

  • inmemory
  • memcached
  • redis

In-memory index cache

The inmemory index cache is enabled by default and its max size can be configured through the flag -blocks-storage.bucket-store.index-cache.inmemory.max-size-bytes (or config file), as shown in the sketch after the list. The trade-offs of using the in-memory index cache are:

  • Pros: zero latency
  • Cons: increased store-gateway memory usage, not shared across multiple store-gateway replicas (when sharding is disabled or replication factor > 1)
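
For example, a minimal sketch raising the in-memory index cache size to roughly 2GiB (the value is illustrative):

blocks_storage:
  bucket_store:
    index_cache:
      backend: inmemory
      inmemory:
        # ~2GiB shared between all tenants.
        max_size_bytes: 2147483648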

Memcached index cache

The memcached index cache uses Memcached as the cache backend. This cache backend is configured using -blocks-storage.bucket-store.index-cache.backend=memcached and requires the Memcached server(s) addresses via -blocks-storage.bucket-store.index-cache.memcached.addresses (or config file). The addresses are resolved using the DNS service provider.

The trade-offs of using the Memcached index cache are:

  • Pros: can scale beyond a single node memory (Memcached cluster), shared across multiple store-gateway instances
  • Cons: higher latency in the cache round trip compared to the in-memory one

The Memcached client uses a jump hash algorithm to shard cached entries across a cluster of Memcached servers. For this reason, you should make sure Memcached servers are not behind any kind of load balancer, and that their addresses are configured so that servers are added to or removed from the end of the list whenever a scale up/down occurs.

For example, if you’re running Memcached in Kubernetes, you may:

  1. Deploy your Memcached cluster using a StatefulSet
  2. Create a headless service for Memcached StatefulSet
  3. Configure Cortex’s Memcached client address using the dnssrvnoa+ service discovery (see the sketch below)
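
Putting these steps together, the resulting index cache configuration could look like the following sketch (the Memcached service name, namespace and port are illustrative):

blocks_storage:
  bucket_store:
    index_cache:
      backend: memcached
      memcached:
        # Headless service resolved via SRV records, without any load balancer
        # in between, so the client can shard entries with jump hashing.
        addresses: dnssrvnoa+memcached-index-queries.cortex.svc.cluster.local:11211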

Redis index cache

The redis index cache uses Redis as the cache backend. This cache backend is configured using -blocks-storage.bucket-store.index-cache.backend=redis and requires the Redis server(s) addresses via -blocks-storage.bucket-store.index-cache.redis.addresses (or config file).

Using Redis as the cache backend has trade-offs similar to the Memcached backend. However, the Redis backend supports client-side caching, which avoids the store-gateway fetching data from the cache on every request (see the Redis client-side caching documentation for more info). It can be enabled by setting the flag -blocks-storage.bucket-store.index-cache.redis.cache-size > 0.
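
A hedged sketch of the Redis index cache with client-side caching enabled (the address and cache size are illustrative):

blocks_storage:
  bucket_store:
    index_cache:
      backend: redis
      redis:
        addresses: redis.cortex.svc.cluster.local:6379
        # Non-zero enables client-side caching on the store-gateway.
        cache_size: 104857600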

Chunks cache

The store-gateway can also use a cache for storing chunks fetched from the storage. Chunks contain the actual samples, and can be reused if a user query hits the same series for the same time range.

To enable the chunks cache, please set -blocks-storage.bucket-store.chunks-cache.backend. Chunks can be stored in a Memcached or Redis cache. The Memcached client can be configured via flags with the -blocks-storage.bucket-store.chunks-cache.memcached.* prefix. The Redis client can be configured via flags with the -blocks-storage.bucket-store.chunks-cache.redis.* prefix.

There are additional low-level options for configuring the chunks cache. Please refer to the other flags with the -blocks-storage.bucket-store.chunks-cache.* prefix.
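
For example, a minimal sketch of the chunks cache backed by Memcached (the address is illustrative):

blocks_storage:
  bucket_store:
    chunks_cache:
      backend: memcached
      memcached:
        addresses: dnssrvnoa+memcached-chunks.cortex.svc.cluster.local:11211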

Metadata cache

The store-gateway and querier can use Memcached or Redis for caching bucket metadata:

  • List of tenants
  • List of blocks per tenant
  • Block’s meta.json content
  • Block’s deletion-mark.json existence and content
  • Tenant’s bucket-index.json.gz content

Using the metadata cache can significantly reduce the number of API calls to object storage and prevents these API calls from scaling linearly with the number of querier and store-gateway instances (because the bucket is periodically scanned and synced by each querier and store-gateway).

To enable the metadata cache, please set -blocks-storage.bucket-store.metadata-cache.backend. The memcached and redis backends are currently supported. The Memcached client has additional configuration available via flags with the -blocks-storage.bucket-store.metadata-cache.memcached.* prefix. The Redis client has additional configuration available via flags with the -blocks-storage.bucket-store.metadata-cache.redis.* prefix.

Additional options for configuring the metadata cache have the -blocks-storage.bucket-store.metadata-cache.* prefix. Setting a TTL to zero or a negative value disables caching of the given item type.

The same cache backend deployment should be shared between store-gateways and queriers.
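
For example, a minimal sketch of the metadata cache, assuming the same Memcached deployment is reachable from both store-gateways and queriers (the address is illustrative):

blocks_storage:
  bucket_store:
    metadata_cache:
      backend: memcached
      memcached:
        addresses: dnssrvnoa+memcached-metadata.cortex.svc.cluster.local:11211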

Store-gateway HTTP endpoints

  • GET /store-gateway/ring
    Displays the status of the store-gateways ring, including the tokens owned by each store-gateway and an option to remove (forget) instances from the ring.

Store-gateway configuration

This section describes the store-gateway configuration. For the general Cortex configuration and references to common config blocks, please refer to the configuration documentation.

store_gateway_config

The store_gateway_config configures the store-gateway service used by the blocks storage.

store_gateway:
  # Shard blocks across multiple store gateway instances. This option needs be
  # set both on the store-gateway and querier when running in microservices
  # mode.
  # CLI flag: -store-gateway.sharding-enabled
  [sharding_enabled: <boolean> | default = false]

  # The hash ring configuration. This option is required only if blocks sharding
  # is enabled.
  sharding_ring:
    # The key-value store used to share the hash ring across multiple instances.
    # This option needs be set both on the store-gateway and querier when
    # running in microservices mode.
    kvstore:
      # Backend storage to use for the ring. Supported values are: consul, etcd,
      # inmemory, memberlist, multi.
      # CLI flag: -store-gateway.sharding-ring.store
      [store: <string> | default = "consul"]

      # The prefix for the keys in the store. Should end with a /.
      # CLI flag: -store-gateway.sharding-ring.prefix
      [prefix: <string> | default = "collectors/"]

      dynamodb:
        # Region to access dynamodb.
        # CLI flag: -store-gateway.sharding-ring.dynamodb.region
        [region: <string> | default = ""]

        # Table name to use on dynamodb.
        # CLI flag: -store-gateway.sharding-ring.dynamodb.table-name
        [table_name: <string> | default = ""]

        # Time to expire items on dynamodb.
        # CLI flag: -store-gateway.sharding-ring.dynamodb.ttl-time
        [ttl: <duration> | default = 0s]

        # Time to refresh local ring with information on dynamodb.
        # CLI flag: -store-gateway.sharding-ring.dynamodb.puller-sync-time
        [puller_sync_time: <duration> | default = 1m]

        # Maximum number of retries for DDB KV CAS.
        # CLI flag: -store-gateway.sharding-ring.dynamodb.max-cas-retries
        [max_cas_retries: <int> | default = 10]

      # The consul_config configures the consul client.
      # The CLI flags prefix for this block config is:
      # store-gateway.sharding-ring
      [consul: <consul_config>]

      # The etcd_config configures the etcd client.
      # The CLI flags prefix for this block config is:
      # store-gateway.sharding-ring
      [etcd: <etcd_config>]

      multi:
        # Primary backend storage used by multi-client.
        # CLI flag: -store-gateway.sharding-ring.multi.primary
        [primary: <string> | default = ""]

        # Secondary backend storage used by multi-client.
        # CLI flag: -store-gateway.sharding-ring.multi.secondary
        [secondary: <string> | default = ""]

        # Mirror writes to secondary store.
        # CLI flag: -store-gateway.sharding-ring.multi.mirror-enabled
        [mirror_enabled: <boolean> | default = false]

        # Timeout for storing value to secondary store.
        # CLI flag: -store-gateway.sharding-ring.multi.mirror-timeout
        [mirror_timeout: <duration> | default = 2s]

    # Period at which to heartbeat to the ring. 0 = disabled.
    # CLI flag: -store-gateway.sharding-ring.heartbeat-period
    [heartbeat_period: <duration> | default = 15s]

    # The heartbeat timeout after which store gateways are considered unhealthy
    # within the ring. 0 = never (timeout disabled). This option needs be set
    # both on the store-gateway and querier when running in microservices mode.
    # CLI flag: -store-gateway.sharding-ring.heartbeat-timeout
    [heartbeat_timeout: <duration> | default = 1m]

    # The replication factor to use when sharding blocks. This option needs be
    # set both on the store-gateway and querier when running in microservices
    # mode.
    # CLI flag: -store-gateway.sharding-ring.replication-factor
    [replication_factor: <int> | default = 3]

    # File path where tokens are stored. If empty, tokens are not stored at
    # shutdown and restored at startup.
    # CLI flag: -store-gateway.sharding-ring.tokens-file-path
    [tokens_file_path: <string> | default = ""]

    # True to enable zone-awareness and replicate blocks across different
    # availability zones.
    # CLI flag: -store-gateway.sharding-ring.zone-awareness-enabled
    [zone_awareness_enabled: <boolean> | default = false]

    # True to keep the store gateway instance in the ring when it shuts down.
    # The instance will then be auto-forgotten from the ring after
    # 10*heartbeat_timeout.
    # CLI flag: -store-gateway.sharding-ring.keep-instance-in-the-ring-on-shutdown
    [keep_instance_in_the_ring_on_shutdown: <boolean> | default = false]

    # Minimum time to wait for ring stability at startup. 0 to disable.
    # CLI flag: -store-gateway.sharding-ring.wait-stability-min-duration
    [wait_stability_min_duration: <duration> | default = 1m]

    # Maximum time to wait for ring stability at startup. If the store-gateway
    # ring keeps changing after this period of time, the store-gateway will
    # start anyway.
    # CLI flag: -store-gateway.sharding-ring.wait-stability-max-duration
    [wait_stability_max_duration: <duration> | default = 5m]

    # Timeout for waiting on store-gateway to become desired state in the ring.
    # CLI flag: -store-gateway.sharding-ring.wait-instance-state-timeout
    [wait_instance_state_timeout: <duration> | default = 10m]

    # The sleep seconds when store-gateway is shutting down. Need to be close to
    # or larger than KV Store information propagation delay
    # CLI flag: -store-gateway.sharding-ring.final-sleep
    [final_sleep: <duration> | default = 0s]

    # Name of network interface to read address from.
    # CLI flag: -store-gateway.sharding-ring.instance-interface-names
    [instance_interface_names: <list of string> | default = [eth0 en0]]

    # The availability zone where this instance is running. Required if
    # zone-awareness is enabled.
    # CLI flag: -store-gateway.sharding-ring.instance-availability-zone
    [instance_availability_zone: <string> | default = ""]

  # The sharding strategy to use. Supported values are: default,
  # shuffle-sharding.
  # CLI flag: -store-gateway.sharding-strategy
  [sharding_strategy: <string> | default = "default"]

  # Comma separated list of tenants whose store metrics this storegateway can
  # process. If specified, only these tenants will be handled by storegateway,
  # otherwise this storegateway will be enabled for all the tenants in the
  # store-gateway cluster.
  # CLI flag: -store-gateway.enabled-tenants
  [enabled_tenants: <string> | default = ""]

  # Comma separated list of tenants whose store metrics this storegateway cannot
  # process. If specified, a storegateway that would normally pick the specified
  # tenant(s) for processing will ignore them instead.
  # CLI flag: -store-gateway.disabled-tenants
  [disabled_tenants: <string> | default = ""]

blocks_storage_config

The blocks_storage_config configures the blocks storage.

blocks_storage:
  # Backend storage to use. Supported backends are: s3, gcs, azure, swift,
  # filesystem.
  # CLI flag: -blocks-storage.backend
  [backend: <string> | default = "s3"]

  s3:
    # The S3 bucket endpoint. It could be an AWS S3 endpoint listed at
    # https://docs.aws.amazon.com/general/latest/gr/s3.html or the address of an
    # S3-compatible service in hostname:port format.
    # CLI flag: -blocks-storage.s3.endpoint
    [endpoint: <string> | default = ""]

    # S3 region. If unset, the client will issue a S3 GetBucketLocation API call
    # to autodetect it.
    # CLI flag: -blocks-storage.s3.region
    [region: <string> | default = ""]

    # S3 bucket name
    # CLI flag: -blocks-storage.s3.bucket-name
    [bucket_name: <string> | default = ""]

    # S3 secret access key
    # CLI flag: -blocks-storage.s3.secret-access-key
    [secret_access_key: <string> | default = ""]

    # S3 access key ID
    # CLI flag: -blocks-storage.s3.access-key-id
    [access_key_id: <string> | default = ""]

    # If enabled, use http:// for the S3 endpoint instead of https://. This
    # could be useful in local dev/test environments while using an
    # S3-compatible backend storage, like Minio.
    # CLI flag: -blocks-storage.s3.insecure
    [insecure: <boolean> | default = false]

    # The signature version to use for authenticating against S3. Supported
    # values are: v4, v2.
    # CLI flag: -blocks-storage.s3.signature-version
    [signature_version: <string> | default = "v4"]

    # The s3 bucket lookup style. Supported values are: auto, virtual-hosted,
    # path.
    # CLI flag: -blocks-storage.s3.bucket-lookup-type
    [bucket_lookup_type: <string> | default = "auto"]

    # The s3_sse_config configures the S3 server-side encryption.
    # The CLI flags prefix for this block config is: blocks-storage
    [sse: <s3_sse_config>]

    http:
      # The time an idle connection will remain idle before closing.
      # CLI flag: -blocks-storage.s3.http.idle-conn-timeout
      [idle_conn_timeout: <duration> | default = 1m30s]

      # The amount of time the client will wait for a servers response headers.
      # CLI flag: -blocks-storage.s3.http.response-header-timeout
      [response_header_timeout: <duration> | default = 2m]

      # If the client connects via HTTPS and this option is enabled, the client
      # will accept any certificate and hostname.
      # CLI flag: -blocks-storage.s3.http.insecure-skip-verify
      [insecure_skip_verify: <boolean> | default = false]

      # Maximum time to wait for a TLS handshake. 0 means no limit.
      # CLI flag: -blocks-storage.s3.tls-handshake-timeout
      [tls_handshake_timeout: <duration> | default = 10s]

      # The time to wait for a server's first response headers after fully
      # writing the request headers if the request has an Expect header. 0 to
      # send the request body immediately.
      # CLI flag: -blocks-storage.s3.expect-continue-timeout
      [expect_continue_timeout: <duration> | default = 1s]

      # Maximum number of idle (keep-alive) connections across all hosts. 0
      # means no limit.
      # CLI flag: -blocks-storage.s3.max-idle-connections
      [max_idle_connections: <int> | default = 100]

      # Maximum number of idle (keep-alive) connections to keep per-host. If 0,
      # a built-in default value is used.
      # CLI flag: -blocks-storage.s3.max-idle-connections-per-host
      [max_idle_connections_per_host: <int> | default = 100]

      # Maximum number of connections per host. 0 means no limit.
      # CLI flag: -blocks-storage.s3.max-connections-per-host
      [max_connections_per_host: <int> | default = 0]

  gcs:
    # GCS bucket name
    # CLI flag: -blocks-storage.gcs.bucket-name
    [bucket_name: <string> | default = ""]

    # JSON representing either a Google Developers Console
    # client_credentials.json file or a Google Developers service account key
    # file. If empty, fallback to Google default logic.
    # CLI flag: -blocks-storage.gcs.service-account
    [service_account: <string> | default = ""]

  azure:
    # Azure storage account name
    # CLI flag: -blocks-storage.azure.account-name
    [account_name: <string> | default = ""]

    # Azure storage account key
    # CLI flag: -blocks-storage.azure.account-key
    [account_key: <string> | default = ""]

    # The values of `account-name` and `endpoint-suffix` values will not be
    # ignored if `connection-string` is set. Use this method over `account-key`
    # if you need to authenticate via a SAS token or if you use the Azurite
    # emulator.
    # CLI flag: -blocks-storage.azure.connection-string
    [connection_string: <string> | default = ""]

    # Azure storage container name
    # CLI flag: -blocks-storage.azure.container-name
    [container_name: <string> | default = ""]

    # Azure storage endpoint suffix without schema. The account name will be
    # prefixed to this value to create the FQDN
    # CLI flag: -blocks-storage.azure.endpoint-suffix
    [endpoint_suffix: <string> | default = ""]

    # Number of retries for recoverable errors
    # CLI flag: -blocks-storage.azure.max-retries
    [max_retries: <int> | default = 20]

    # Deprecated: Azure storage MSI resource. It will be set automatically by
    # Azure SDK.
    # CLI flag: -blocks-storage.azure.msi-resource
    [msi_resource: <string> | default = ""]

    # Azure storage MSI resource managed identity client Id. If not supplied
    # default Azure credential will be used. Set it to empty if you need to
    # authenticate via Azure Workload Identity.
    # CLI flag: -blocks-storage.azure.user-assigned-id
    [user_assigned_id: <string> | default = ""]

    http:
      # The time an idle connection will remain idle before closing.
      # CLI flag: -blocks-storage.azure.http.idle-conn-timeout
      [idle_conn_timeout: <duration> | default = 1m30s]

      # The amount of time the client will wait for a servers response headers.
      # CLI flag: -blocks-storage.azure.http.response-header-timeout
      [response_header_timeout: <duration> | default = 2m]

      # If the client connects via HTTPS and this option is enabled, the client
      # will accept any certificate and hostname.
      # CLI flag: -blocks-storage.azure.http.insecure-skip-verify
      [insecure_skip_verify: <boolean> | default = false]

      # Maximum time to wait for a TLS handshake. 0 means no limit.
      # CLI flag: -blocks-storage.azure.tls-handshake-timeout
      [tls_handshake_timeout: <duration> | default = 10s]

      # The time to wait for a server's first response headers after fully
      # writing the request headers if the request has an Expect header. 0 to
      # send the request body immediately.
      # CLI flag: -blocks-storage.azure.expect-continue-timeout
      [expect_continue_timeout: <duration> | default = 1s]

      # Maximum number of idle (keep-alive) connections across all hosts. 0
      # means no limit.
      # CLI flag: -blocks-storage.azure.max-idle-connections
      [max_idle_connections: <int> | default = 100]

      # Maximum number of idle (keep-alive) connections to keep per-host. If 0,
      # a built-in default value is used.
      # CLI flag: -blocks-storage.azure.max-idle-connections-per-host
      [max_idle_connections_per_host: <int> | default = 100]

      # Maximum number of connections per host. 0 means no limit.
      # CLI flag: -blocks-storage.azure.max-connections-per-host
      [max_connections_per_host: <int> | default = 0]

  swift:
    # OpenStack Swift authentication API version. 0 to autodetect.
    # CLI flag: -blocks-storage.swift.auth-version
    [auth_version: <int> | default = 0]

    # OpenStack Swift authentication URL
    # CLI flag: -blocks-storage.swift.auth-url
    [auth_url: <string> | default = ""]

    # OpenStack Swift username.
    # CLI flag: -blocks-storage.swift.username
    [username: <string> | default = ""]

    # OpenStack Swift user's domain name.
    # CLI flag: -blocks-storage.swift.user-domain-name
    [user_domain_name: <string> | default = ""]

    # OpenStack Swift user's domain ID.
    # CLI flag: -blocks-storage.swift.user-domain-id
    [user_domain_id: <string> | default = ""]

    # OpenStack Swift user ID.
    # CLI flag: -blocks-storage.swift.user-id
    [user_id: <string> | default = ""]

    # OpenStack Swift API key.
    # CLI flag: -blocks-storage.swift.password
    [password: <string> | default = ""]

    # OpenStack Swift user's domain ID.
    # CLI flag: -blocks-storage.swift.domain-id
    [domain_id: <string> | default = ""]

    # OpenStack Swift user's domain name.
    # CLI flag: -blocks-storage.swift.domain-name
    [domain_name: <string> | default = ""]

    # OpenStack Swift project ID (v2,v3 auth only).
    # CLI flag: -blocks-storage.swift.project-id
    [project_id: <string> | default = ""]

    # OpenStack Swift project name (v2,v3 auth only).
    # CLI flag: -blocks-storage.swift.project-name
    [project_name: <string> | default = ""]

    # ID of the OpenStack Swift project's domain (v3 auth only), only needed if
    # it differs the from user domain.
    # CLI flag: -blocks-storage.swift.project-domain-id
    [project_domain_id: <string> | default = ""]

    # Name of the OpenStack Swift project's domain (v3 auth only), only needed
    # if it differs from the user domain.
    # CLI flag: -blocks-storage.swift.project-domain-name
    [project_domain_name: <string> | default = ""]

    # OpenStack Swift Region to use (v2,v3 auth only).
    # CLI flag: -blocks-storage.swift.region-name
    [region_name: <string> | default = ""]

    # Name of the OpenStack Swift container to put chunks in.
    # CLI flag: -blocks-storage.swift.container-name
    [container_name: <string> | default = ""]

    # Max retries on requests error.
    # CLI flag: -blocks-storage.swift.max-retries
    [max_retries: <int> | default = 3]

    # Time after which a connection attempt is aborted.
    # CLI flag: -blocks-storage.swift.connect-timeout
    [connect_timeout: <duration> | default = 10s]

    # Time after which an idle request is aborted. The timeout watchdog is reset
    # each time some data is received, so the timeout triggers after X time no
    # data is received on a request.
    # CLI flag: -blocks-storage.swift.request-timeout
    [request_timeout: <duration> | default = 5s]

  filesystem:
    # Local filesystem storage directory.
    # CLI flag: -blocks-storage.filesystem.dir
    [dir: <string> | default = ""]

  # This configures how the querier and store-gateway discover and synchronize
  # blocks stored in the bucket.
  bucket_store:
    # Directory to store synchronized TSDB index headers.
    # CLI flag: -blocks-storage.bucket-store.sync-dir
    [sync_dir: <string> | default = "tsdb-sync"]

    # How frequently to scan the bucket, or to refresh the bucket index (if
    # enabled), in order to look for changes (new blocks shipped by ingesters
    # and blocks deleted by retention or compaction).
    # CLI flag: -blocks-storage.bucket-store.sync-interval
    [sync_interval: <duration> | default = 15m]

    # Max number of concurrent queries to execute against the long-term storage.
    # The limit is shared across all tenants.
    # CLI flag: -blocks-storage.bucket-store.max-concurrent
    [max_concurrent: <int> | default = 100]

    # Max number of inflight queries to execute against the long-term storage.
    # The limit is shared across all tenants. 0 to disable.
    # CLI flag: -blocks-storage.bucket-store.max-inflight-requests
    [max_inflight_requests: <int> | default = 0]

    # Maximum number of concurrent tenants syncing blocks.
    # CLI flag: -blocks-storage.bucket-store.tenant-sync-concurrency
    [tenant_sync_concurrency: <int> | default = 10]

    # Maximum number of concurrent blocks syncing per tenant.
    # CLI flag: -blocks-storage.bucket-store.block-sync-concurrency
    [block_sync_concurrency: <int> | default = 20]

    # Number of Go routines to use when syncing block meta files from object
    # storage per tenant.
    # CLI flag: -blocks-storage.bucket-store.meta-sync-concurrency
    [meta_sync_concurrency: <int> | default = 20]

    # Minimum age of a block before it's being read. Set it to safe value (e.g
    # 30m) if your object storage is eventually consistent. GCS and S3 are
    # (roughly) strongly consistent.
    # CLI flag: -blocks-storage.bucket-store.consistency-delay
    [consistency_delay: <duration> | default = 0s]

    index_cache:
      # The index cache backend type. Multiple cache backend can be provided as
      # a comma-separated ordered list to enable the implementation of a cache
      # hierarchy. Supported values: inmemory, memcached, redis.
      # CLI flag: -blocks-storage.bucket-store.index-cache.backend
      [backend: <string> | default = "inmemory"]

      inmemory:
        # Maximum size in bytes of in-memory index cache used to speed up blocks
        # index lookups (shared between all tenants).
        # CLI flag: -blocks-storage.bucket-store.index-cache.inmemory.max-size-bytes
        [max_size_bytes: <int> | default = 1073741824]

        # Selectively cache index item types. Supported values are Postings,
        # ExpandedPostings and Series
        # CLI flag: -blocks-storage.bucket-store.index-cache.inmemory.enabled-items
        [enabled_items: <list of string> | default = []]

      memcached:
        # Comma separated list of memcached addresses. Supported prefixes are:
        # dns+ (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV
        # query, dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup
        # made after that).
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.addresses
        [addresses: <string> | default = ""]

        # The socket read/write timeout.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.timeout
        [timeout: <duration> | default = 100ms]

        # The maximum number of idle connections that will be maintained per
        # address.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-idle-connections
        [max_idle_connections: <int> | default = 16]

        # The maximum number of concurrent asynchronous operations can occur.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # The maximum number of concurrent connections running get operations.
        # If set to 0, concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum number of keys a single underlying get operation should
        # run. If more keys are specified, internally keys are split into
        # multiple batches and fetched concurrently, honoring the max
        # concurrency. If set to 0, the max batch size is unlimited.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-get-multi-batch-size
        [max_get_multi_batch_size: <int> | default = 0]

        # The maximum size of an item stored in memcached. Bigger items are not
        # stored. If set to 0, no maximum size is enforced.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-item-size
        [max_item_size: <int> | default = 1048576]

        # Use memcached auto-discovery mechanism provided by some cloud provider
        # like GCP and AWS
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.auto-discovery
        [auto_discovery: <boolean> | default = false]

        set_async_circuit_breaker_config:
          # If true, enable circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.set-async.circuit-breaker.enabled
          [enabled: <boolean> | default = false]

          # Maximum number of requests allowed to pass through when the circuit
          # breaker is half-open. If set to 0, by default it allows 1 request.
          # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.set-async.circuit-breaker.half-open-max-requests
          [half_open_max_requests: <int> | default = 10]

          # Period of the open state after which the state of the circuit
          # breaker becomes half-open. If set to 0, by default open duration is
          # 60 seconds.
          # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.set-async.circuit-breaker.open-duration
          [open_duration: <duration> | default = 5s]

          # Minimal requests to trigger the circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.set-async.circuit-breaker.min-requests
          [min_requests: <int> | default = 50]

          # Consecutive failures to determine if the circuit breaker should
          # open.
          # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.set-async.circuit-breaker.consecutive-failures
          [consecutive_failures: <int> | default = 5]

          # Failure percentage to determine if the circuit breaker should open.
          # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.set-async.circuit-breaker.failure-percent
          [failure_percent: <float> | default = 0.05]

        # Selectively cache index item types. Supported values are Postings,
        # ExpandedPostings and Series
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.enabled-items
        [enabled_items: <list of string> | default = []]

      redis:
        # Comma separated list of redis addresses. Supported prefixes are: dns+
        # (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV query,
        # dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup made after
        # that).
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.addresses
        [addresses: <string> | default = ""]

        # Redis username.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.username
        [username: <string> | default = ""]

        # Redis password.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.password
        [password: <string> | default = ""]

        # Database to be selected after connecting to the server.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.db
        [db: <int> | default = 0]

        # Specifies the master's name. Must be not empty for Redis Sentinel.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.master-name
        [master_name: <string> | default = ""]

        # The maximum number of concurrent GetMulti() operations. If set to 0,
        # concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum size per batch for mget.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.get-multi-batch-size
        [get_multi_batch_size: <int> | default = 100]

        # The maximum number of concurrent SetMulti() operations. If set to 0,
        # concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.max-set-multi-concurrency
        [max_set_multi_concurrency: <int> | default = 100]

        # The maximum size per batch for pipeline set.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-multi-batch-size
        [set_multi_batch_size: <int> | default = 100]

        # The maximum number of concurrent asynchronous operations can occur.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # Client dial timeout.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.dial-timeout
        [dial_timeout: <duration> | default = 5s]

        # Client read timeout.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.read-timeout
        [read_timeout: <duration> | default = 3s]

        # Client write timeout.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.write-timeout
        [write_timeout: <duration> | default = 3s]

        # Whether to enable tls for redis connection.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.tls-enabled
        [tls_enabled: <boolean> | default = false]

        # Path to the client certificate file, which will be used for
        # authenticating with the server. Also requires the key path to be
        # configured.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis..tls-cert-path
        [tls_cert_path: <string> | default = ""]

        # Path to the key file for the client certificate. Also requires the
        # client certificate to be configured.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis..tls-key-path
        [tls_key_path: <string> | default = ""]

        # Path to the CA certificates file to validate server certificate
        # against. If not set, the host's root CA certificates are used.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis..tls-ca-path
        [tls_ca_path: <string> | default = ""]

        # Override the expected name on the server certificate.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis..tls-server-name
        [tls_server_name: <string> | default = ""]

        # Skip validating server certificate.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis..tls-insecure-skip-verify
        [tls_insecure_skip_verify: <boolean> | default = false]

        # If not zero then client-side caching is enabled. Client-side caching
        # is when data is stored in memory instead of fetching data each time.
        # See https://redis.io/docs/manual/client-side-caching/ for more info.
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.cache-size
        [cache_size: <int> | default = 0]

        set_async_circuit_breaker_config:
          # If true, enable circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-async.circuit-breaker.enabled
          [enabled: <boolean> | default = false]

          # Maximum number of requests allowed to pass through when the circuit
          # breaker is half-open. If set to 0, by default it allows 1 request.
          # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-async.circuit-breaker.half-open-max-requests
          [half_open_max_requests: <int> | default = 10]

          # Period of the open state after which the state of the circuit
          # breaker becomes half-open. If set to 0, by default open duration is
          # 60 seconds.
          # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-async.circuit-breaker.open-duration
          [open_duration: <duration> | default = 5s]

          # Minimal requests to trigger the circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-async.circuit-breaker.min-requests
          [min_requests: <int> | default = 50]

          # Consecutive failures to determine if the circuit breaker should
          # open.
          # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-async.circuit-breaker.consecutive-failures
          [consecutive_failures: <int> | default = 5]

          # Failure percentage to determine if the circuit breaker should open.
          # CLI flag: -blocks-storage.bucket-store.index-cache.redis.set-async.circuit-breaker.failure-percent
          [failure_percent: <float> | default = 0.05]

        # Selectively cache index item types. Supported values are Postings,
        # ExpandedPostings and Series
        # CLI flag: -blocks-storage.bucket-store.index-cache.redis.enabled-items
        [enabled_items: <list of string> | default = []]

      multilevel:
        # The maximum number of concurrent asynchronous operations can occur
        # when backfilling cache items.
        # CLI flag: -blocks-storage.bucket-store.index-cache.multilevel.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed when
        # backfilling cache items.
        # CLI flag: -blocks-storage.bucket-store.index-cache.multilevel.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # The maximum number of items to backfill per asynchronous operation.
        # CLI flag: -blocks-storage.bucket-store.index-cache.multilevel.max-backfill-items
        [max_backfill_items: <int> | default = 10000]

    chunks_cache:
      # Backend for chunks cache, if not empty. Supported values: memcached.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.backend
      [backend: <string> | default = ""]

      memcached:
        # Comma separated list of memcached addresses. Supported prefixes are:
        # dns+ (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV
        # query, dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup
        # made after that).
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.addresses
        [addresses: <string> | default = ""]

        # The socket read/write timeout.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.timeout
        [timeout: <duration> | default = 100ms]

        # The maximum number of idle connections that will be maintained per
        # address.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-idle-connections
        [max_idle_connections: <int> | default = 16]

        # The maximum number of concurrent asynchronous operations can occur.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # The maximum number of concurrent connections running get operations.
        # If set to 0, concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum number of keys a single underlying get operation should
        # run. If more keys are specified, internally keys are split into
        # multiple batches and fetched concurrently, honoring the max
        # concurrency. If set to 0, the max batch size is unlimited.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-get-multi-batch-size
        [max_get_multi_batch_size: <int> | default = 0]

        # The maximum size of an item stored in memcached. Bigger items are not
        # stored. If set to 0, no maximum size is enforced.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-item-size
        [max_item_size: <int> | default = 1048576]

        # Use memcached auto-discovery mechanism provided by some cloud provider
        # like GCP and AWS
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.auto-discovery
        [auto_discovery: <boolean> | default = false]

        set_async_circuit_breaker_config:
          # If true, enable circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.set-async.circuit-breaker.enabled
          [enabled: <boolean> | default = false]

          # Maximum number of requests allowed to pass through when the circuit
          # breaker is half-open. If set to 0, by default it allows 1 request.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.set-async.circuit-breaker.half-open-max-requests
          [half_open_max_requests: <int> | default = 10]

          # Period of the open state after which the state of the circuit
          # breaker becomes half-open. If set to 0, by default open duration is
          # 60 seconds.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.set-async.circuit-breaker.open-duration
          [open_duration: <duration> | default = 5s]

          # Minimal requests to trigger the circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.set-async.circuit-breaker.min-requests
          [min_requests: <int> | default = 50]

          # Consecutive failures to determine if the circuit breaker should
          # open.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.set-async.circuit-breaker.consecutive-failures
          [consecutive_failures: <int> | default = 5]

          # Failure percentage to determine if the circuit breaker should open.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.set-async.circuit-breaker.failure-percent
          [failure_percent: <float> | default = 0.05]

      redis:
        # Comma separated list of redis addresses. Supported prefixes are: dns+
        # (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV query,
        # dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup made after
        # that).
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.addresses
        [addresses: <string> | default = ""]

        # Redis username.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.username
        [username: <string> | default = ""]

        # Redis password.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.password
        [password: <string> | default = ""]

        # Database to be selected after connecting to the server.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.db
        [db: <int> | default = 0]

        # Specifies the master's name. Must be not empty for Redis Sentinel.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.master-name
        [master_name: <string> | default = ""]

        # The maximum number of concurrent GetMulti() operations. If set to 0,
        # concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum size per batch for mget.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.get-multi-batch-size
        [get_multi_batch_size: <int> | default = 100]

        # The maximum number of concurrent SetMulti() operations. If set to 0,
        # concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.max-set-multi-concurrency
        [max_set_multi_concurrency: <int> | default = 100]

        # The maximum size per batch for pipeline set.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-multi-batch-size
        [set_multi_batch_size: <int> | default = 100]

        # The maximum number of concurrent asynchronous operations can occur.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # Client dial timeout.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.dial-timeout
        [dial_timeout: <duration> | default = 5s]

        # Client read timeout.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.read-timeout
        [read_timeout: <duration> | default = 3s]

        # Client write timeout.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.write-timeout
        [write_timeout: <duration> | default = 3s]

        # Whether to enable tls for redis connection.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.tls-enabled
        [tls_enabled: <boolean> | default = false]

        # Path to the client certificate file, which will be used for
        # authenticating with the server. Also requires the key path to be
        # configured.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis..tls-cert-path
        [tls_cert_path: <string> | default = ""]

        # Path to the key file for the client certificate. Also requires the
        # client certificate to be configured.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis..tls-key-path
        [tls_key_path: <string> | default = ""]

        # Path to the CA certificates file to validate server certificate
        # against. If not set, the host's root CA certificates are used.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis..tls-ca-path
        [tls_ca_path: <string> | default = ""]

        # Override the expected name on the server certificate.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis..tls-server-name
        [tls_server_name: <string> | default = ""]

        # Skip validating server certificate.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis..tls-insecure-skip-verify
        [tls_insecure_skip_verify: <boolean> | default = false]

        # If not zero then client-side caching is enabled. Client-side caching
        # is when data is stored in memory instead of fetching data each time.
        # See https://redis.io/docs/manual/client-side-caching/ for more info.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.cache-size
        [cache_size: <int> | default = 0]

        set_async_circuit_breaker_config:
          # If true, enable circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-async.circuit-breaker.enabled
          [enabled: <boolean> | default = false]

          # Maximum number of requests allowed to pass through when the circuit
          # breaker is half-open. If set to 0, by default it allows 1 request.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-async.circuit-breaker.half-open-max-requests
          [half_open_max_requests: <int> | default = 10]

          # Period of the open state after which the state of the circuit
          # breaker becomes half-open. If set to 0, by default open duration is
          # 60 seconds.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-async.circuit-breaker.open-duration
          [open_duration: <duration> | default = 5s]

          # Minimum number of requests required before the circuit breaker can
          # open.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-async.circuit-breaker.min-requests
          [min_requests: <int> | default = 50]

          # Consecutive failures to determine if the circuit breaker should
          # open.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-async.circuit-breaker.consecutive-failures
          [consecutive_failures: <int> | default = 5]

          # Failure percentage to determine if the circuit breaker should open.
          # CLI flag: -blocks-storage.bucket-store.chunks-cache.redis.set-async.circuit-breaker.failure-percent
          [failure_percent: <float> | default = 0.05]

      # Size of each subrange that a bucket object is split into for better
      # caching.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.subrange-size
      [subrange_size: <int> | default = 16000]

      # Maximum number of sub-GetRange requests that a single GetRange request
      # can be split into when fetching chunks. Zero or negative value =
      # unlimited number of sub-requests.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.max-get-range-requests
      [max_get_range_requests: <int> | default = 3]

      # TTL for caching object attributes for chunks.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.attributes-ttl
      [attributes_ttl: <duration> | default = 168h]

      # TTL for caching individual chunks subranges.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.subrange-ttl
      [subrange_ttl: <duration> | default = 24h]
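
      # Illustrative example, not part of the reference: with the defaults
      # above, a 96,000 byte GetRange read on a chunks file is split into 6
      # subranges of subrange_size (16,000) bytes, and each subrange is looked
      # up in the cache individually. Subranges missing from the cache are
      # fetched from object storage using at most max_get_range_requests (3)
      # sub-requests, then cached for subrange_ttl (24h).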

    metadata_cache:
      # Backend for metadata cache, if not empty. Supported values: memcached.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.backend
      [backend: <string> | default = ""]

      memcached:
        # Comma separated list of memcached addresses. Supported prefixes are:
        # dns+ (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV
        # query), dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup
        # made after that).
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.addresses
        [addresses: <string> | default = ""]

        # The socket read/write timeout.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.timeout
        [timeout: <duration> | default = 100ms]

        # The maximum number of idle connections that will be maintained per
        # address.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-idle-connections
        [max_idle_connections: <int> | default = 16]

        # The maximum number of concurrent asynchronous operations that can
        # occur.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # The maximum number of concurrent connections running get operations.
        # If set to 0, concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum number of keys a single underlying get operation should
        # run. If more keys are specified, internally keys are split into
        # multiple batches and fetched concurrently, honoring the max
        # concurrency. If set to 0, the max batch size is unlimited.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-get-multi-batch-size
        [max_get_multi_batch_size: <int> | default = 0]

        # The maximum size of an item stored in memcached. Bigger items are not
        # stored. If set to 0, no maximum size is enforced.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-item-size
        [max_item_size: <int> | default = 1048576]

        # Use the memcached auto-discovery mechanism provided by some cloud
        # providers, such as GCP and AWS.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.auto-discovery
        [auto_discovery: <boolean> | default = false]

        set_async_circuit_breaker_config:
          # If true, enable circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.set-async.circuit-breaker.enabled
          [enabled: <boolean> | default = false]

          # Maximum number of requests allowed to pass through when the circuit
          # breaker is half-open. If set to 0, by default it allows 1 request.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.set-async.circuit-breaker.half-open-max-requests
          [half_open_max_requests: <int> | default = 10]

          # Period of the open state after which the state of the circuit
          # breaker becomes half-open. If set to 0, by default open duration is
          # 60 seconds.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.set-async.circuit-breaker.open-duration
          [open_duration: <duration> | default = 5s]

          # Minimum number of requests required before the circuit breaker can
          # open.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.set-async.circuit-breaker.min-requests
          [min_requests: <int> | default = 50]

          # Consecutive failures to determine if the circuit breaker should
          # open.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.set-async.circuit-breaker.consecutive-failures
          [consecutive_failures: <int> | default = 5]

          # Failure percentage to determine if the circuit breaker should open.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.set-async.circuit-breaker.failure-percent
          [failure_percent: <float> | default = 0.05]

      redis:
        # Comma separated list of redis addresses. Supported prefixes are: dns+
        # (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV query),
        # dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup made after
        # that).
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.addresses
        [addresses: <string> | default = ""]

        # Redis username.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.username
        [username: <string> | default = ""]

        # Redis password.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.password
        [password: <string> | default = ""]

        # Database to be selected after connecting to the server.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.db
        [db: <int> | default = 0]

        # Specifies the master's name. Must not be empty when using Redis
        # Sentinel.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.master-name
        [master_name: <string> | default = ""]

        # The maximum number of concurrent GetMulti() operations. If set to 0,
        # concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum size per batch for mget.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.get-multi-batch-size
        [get_multi_batch_size: <int> | default = 100]

        # The maximum number of concurrent SetMulti() operations. If set to 0,
        # concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.max-set-multi-concurrency
        [max_set_multi_concurrency: <int> | default = 100]

        # The maximum size per batch for pipeline set.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-multi-batch-size
        [set_multi_batch_size: <int> | default = 100]

        # The maximum number of concurrent asynchronous operations that can
        # occur.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # Client dial timeout.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.dial-timeout
        [dial_timeout: <duration> | default = 5s]

        # Client read timeout.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.read-timeout
        [read_timeout: <duration> | default = 3s]

        # Client write timeout.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.write-timeout
        [write_timeout: <duration> | default = 3s]

        # Whether to enable TLS for the Redis connection.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.tls-enabled
        [tls_enabled: <boolean> | default = false]

        # Path to the client certificate file, which will be used for
        # authenticating with the server. Also requires the key path to be
        # configured.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis..tls-cert-path
        [tls_cert_path: <string> | default = ""]

        # Path to the key file for the client certificate. Also requires the
        # client certificate to be configured.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis..tls-key-path
        [tls_key_path: <string> | default = ""]

        # Path to the CA certificates file to validate server certificate
        # against. If not set, the host's root CA certificates are used.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis..tls-ca-path
        [tls_ca_path: <string> | default = ""]

        # Override the expected name on the server certificate.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis..tls-server-name
        [tls_server_name: <string> | default = ""]

        # Skip validating server certificate.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis..tls-insecure-skip-verify
        [tls_insecure_skip_verify: <boolean> | default = false]

        # If not zero, client-side caching is enabled: data is cached in the
        # client's local memory instead of being fetched from the server on
        # every access. See https://redis.io/docs/manual/client-side-caching/
        # for more info.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.cache-size
        [cache_size: <int> | default = 0]

        set_async_circuit_breaker_config:
          # If true, enable circuit breaker.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-async.circuit-breaker.enabled
          [enabled: <boolean> | default = false]

          # Maximum number of requests allowed to pass through when the circuit
          # breaker is half-open. If set to 0, by default it allows 1 request.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-async.circuit-breaker.half-open-max-requests
          [half_open_max_requests: <int> | default = 10]

          # Period of the open state after which the state of the circuit
          # breaker becomes half-open. If set to 0, by default open duration is
          # 60 seconds.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-async.circuit-breaker.open-duration
          [open_duration: <duration> | default = 5s]

          # Minimum number of requests required before the circuit breaker can
          # open.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-async.circuit-breaker.min-requests
          [min_requests: <int> | default = 50]

          # Consecutive failures to determine if the circuit breaker should
          # open.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-async.circuit-breaker.consecutive-failures
          [consecutive_failures: <int> | default = 5]

          # Failure percentage to determine if the circuit breaker should open.
          # CLI flag: -blocks-storage.bucket-store.metadata-cache.redis.set-async.circuit-breaker.failure-percent
          [failure_percent: <float> | default = 0.05]
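
          # Putting the settings above together (illustrative reading, values
          # are the defaults shown): the circuit breaker is evaluated only once
          # at least min_requests (50) async set operations have been observed;
          # it can then open based on the consecutive_failures (5) and
          # failure_percent (0.05, i.e. 5%) thresholds. While open, async sets
          # are rejected for open_duration (5s), after which the breaker turns
          # half-open and lets up to half_open_max_requests (10) requests
          # through to probe the backend.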

      # How long to cache list of tenants in the bucket.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.tenants-list-ttl
      [tenants_list_ttl: <duration> | default = 15m]

      # How long to cache list of blocks for each tenant.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.tenant-blocks-list-ttl
      [tenant_blocks_list_ttl: <duration> | default = 5m]

      # How long to cache list of chunks for a block.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.chunks-list-ttl
      [chunks_list_ttl: <duration> | default = 24h]

      # How long to cache the information that a block metafile exists. Also
      # used for the user deletion mark file.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-exists-ttl
      [metafile_exists_ttl: <duration> | default = 2h]

      # How long to cache the information that a block metafile doesn't exist.
      # Also used for the user deletion mark file.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-doesnt-exist-ttl
      [metafile_doesnt_exist_ttl: <duration> | default = 5m]

      # How long to cache content of the metafile.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-content-ttl
      [metafile_content_ttl: <duration> | default = 24h]

      # Maximum size of metafile content to cache in bytes. Caching will be
      # skipped if the content exceeds this size. This is useful to avoid a
      # network round trip for large content if the configured caching backend
      # has a hard limit on cached item size (in this case, you should set
      # this limit to the same value used by the caching backend).
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-max-size-bytes
      [metafile_max_size_bytes: <int> | default = 1048576]

      # How long to cache attributes of the block metafile.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-attributes-ttl
      [metafile_attributes_ttl: <duration> | default = 168h]

      # How long to cache attributes of the block index.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.block-index-attributes-ttl
      [block_index_attributes_ttl: <duration> | default = 168h]

      # How long to cache content of the bucket index.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.bucket-index-content-ttl
      [bucket_index_content_ttl: <duration> | default = 5m]

      # Maximum size of bucket index content to cache in bytes. Caching will be
      # skipped if the content exceeds this size. This is useful to avoid a
      # network round trip for large content if the configured caching backend
      # has a hard limit on cached item size (in this case, you should set
      # this limit to the same value used by the caching backend).
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.bucket-index-max-size-bytes
      [bucket_index_max_size_bytes: <int> | default = 1048576]
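
      # Illustrative example (the memcached address is a placeholder, not a
      # recommendation): enabling a memcached-backed metadata cache with
      # addresses discovered through a DNS SRV record:
      #
      #   metadata_cache:
      #     backend: "memcached"
      #     memcached:
      #       addresses: "dnssrv+_memcached._tcp.memcached-metadata.example.svc.cluster.local"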

    # Duration after which blocks marked for deletion are filtered out while
    # fetching blocks. The idea of ignore-deletion-marks-delay is to ignore
    # deletion marks for a while, so the store-gateway can still serve blocks
    # that are meant to be deleted but do not have a replacement yet. Default
    # is 6h, half of the default value for -compactor.deletion-delay.
    # CLI flag: -blocks-storage.bucket-store.ignore-deletion-marks-delay
    [ignore_deletion_mark_delay: <duration> | default = 6h]

    # The blocks created since `now() - ignore_blocks_within` will not be
    # synced. This should be used together with `-querier.query-store-after` to
    # filter out the blocks that are too new to be queried. A reasonable value
    # for this flag would be `-querier.query-store-after -
    # blocks-storage.bucket-store.bucket-index.max-stale-period` to give some
    # buffer. 0 to disable.
    # CLI flag: -blocks-storage.bucket-store.ignore-blocks-within
    [ignore_blocks_within: <duration> | default = 0s]
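
    # Illustrative example (the values are assumptions, not recommendations):
    # with -querier.query-store-after=12h and
    # -blocks-storage.bucket-store.bucket-index.max-stale-period=1h, the
    # suggestion above gives ignore_blocks_within = 12h - 1h = 11h.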

    bucket_index:
      # True to enable querier and store-gateway to discover blocks in the
      # storage via bucket index instead of bucket scanning.
      # CLI flag: -blocks-storage.bucket-store.bucket-index.enabled
      [enabled: <boolean> | default = false]

      # How frequently loading of a bucket index that previously failed should
      # be retried. This option is used only by the querier.
      # CLI flag: -blocks-storage.bucket-store.bucket-index.update-on-error-interval
      [update_on_error_interval: <duration> | default = 1m]

      # How long an unused bucket index should be cached. Once this timeout
      # expires, the unused bucket index is removed from the in-memory cache.
      # This option is used only by the querier.
      # CLI flag: -blocks-storage.bucket-store.bucket-index.idle-timeout
      [idle_timeout: <duration> | default = 1h]

      # The maximum allowed age of a bucket index (last updated) before queries
      # start failing because the bucket index is too old. The bucket index is
      # periodically updated by the compactor, while this check is enforced in
      # the querier (at query time).
      # CLI flag: -blocks-storage.bucket-store.bucket-index.max-stale-period
      [max_stale_period: <duration> | default = 1h]
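
      # Illustrative example: enabling bucket index based discovery, so the
      # store-gateway and querier read the per-tenant bucket index kept up to
      # date by the compactor instead of scanning the bucket:
      #
      #   bucket_index:
      #     enabled: true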

    # Max size - in bytes - of a chunks pool, used to reduce memory allocations.
    # The pool is shared across all tenants. 0 to disable the limit.
    # CLI flag: -blocks-storage.bucket-store.max-chunk-pool-bytes
    [max_chunk_pool_bytes: <int> | default = 2147483648]

    # If enabled, store-gateway will lazily memory-map an index-header only once
    # required by a query.
    # CLI flag: -blocks-storage.bucket-store.index-header-lazy-loading-enabled
    [index_header_lazy_loading_enabled: <boolean> | default = false]

    # If index-header lazy loading is enabled and this setting is > 0, the
    # store-gateway will release memory-mapped index-headers after 'idle
    # timeout' inactivity.
    # CLI flag: -blocks-storage.bucket-store.index-header-lazy-loading-idle-timeout
    [index_header_lazy_loading_idle_timeout: <duration> | default = 20m]
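
    # Illustrative example (the 30m timeout is an arbitrary value): lazily
    # memory-map index-headers on first use and release them after 30 minutes
    # of inactivity:
    #
    #   index_header_lazy_loading_enabled: true
    #   index_header_lazy_loading_idle_timeout: 30m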

    # If true, the store-gateway will estimate the postings size and lazily
    # expand postings if doing so is estimated to download less data than
    # expanding all postings.
    # CLI flag: -blocks-storage.bucket-store.lazy-expanded-postings-enabled
    [lazy_expanded_postings_enabled: <boolean> | default = false]

    # Controls how many series to fetch per batch in the store-gateway.
    # CLI flag: -blocks-storage.bucket-store.series-batch-size
    [series_batch_size: <int> | default = 10000]

  tsdb:
    # Local directory to store TSDBs in the ingesters.
    # CLI flag: -blocks-storage.tsdb.dir
    [dir: <string> | default = "tsdb"]

    # TSDB blocks range period.
    # CLI flag: -blocks-storage.tsdb.block-ranges-period
    [block_ranges_period: <list of duration> | default = 2h0m0s]

    # TSDB blocks retention in the ingester before a block is removed. This
    # should be larger than the block_ranges_period and large enough to give
    # store-gateways and queriers enough time to discover newly uploaded blocks.
    # CLI flag: -blocks-storage.tsdb.retention-period
    [retention_period: <duration> | default = 6h]

    # How frequently the TSDB blocks are scanned and new ones are shipped to the
    # storage. 0 means shipping is disabled.
    # CLI flag: -blocks-storage.tsdb.ship-interval
    [ship_interval: <duration> | default = 1m]
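
    # Illustrative timeline, assuming the default block_ranges_period (2h),
    # retention_period (6h) and ship_interval (1m): a block covering a 2h range
    # is cut once that range is complete, shipped to the bucket within roughly
    # one ship_interval, and kept on the ingester's local disk until the 6h
    # retention_period elapses, giving the compactor, store-gateways and
    # queriers time to discover the uploaded block before the local copy is
    # removed.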

    # Maximum number of tenants concurrently shipping blocks to the storage.
    # CLI flag: -blocks-storage.tsdb.ship-concurrency
    [ship_concurrency: <int> | default = 10]

    # How frequently Cortex tries to compact the TSDB head. A block is only
    # created if the data covers the smallest block range. Must be greater than
    # 0 and at most 5 minutes.
    # CLI flag: -blocks-storage.tsdb.head-compaction-interval
    [head_compaction_interval: <duration> | default = 1m]

    # Maximum number of tenants concurrently compacting the TSDB head into a
    # new block.
    # CLI flag: -blocks-storage.tsdb.head-compaction-concurrency
    [head_compaction_concurrency: <int> | default = 5]

    # If TSDB head is idle for this duration, it is compacted. Note that up to
    # 25% jitter is added to the value to avoid ingesters compacting
    # concurrently. 0 means disabled.
    # CLI flag: -blocks-storage.tsdb.head-compaction-idle-timeout
    [head_compaction_idle_timeout: <duration> | default = 1h]

    # The write buffer size used by the head chunks mapper. Lower values reduce
    # memory utilisation on clusters with a large number of tenants at the cost
    # of increased disk I/O operations.
    # CLI flag: -blocks-storage.tsdb.head-chunks-write-buffer-size-bytes
    [head_chunks_write_buffer_size_bytes: <int> | default = 4194304]

    # The number of shards of series to use in TSDB (must be a power of 2).
    # Reducing this will decrease memory footprint, but can negatively impact
    # performance.
    # CLI flag: -blocks-storage.tsdb.stripe-size
    [stripe_size: <int> | default = 16384]

    # True to enable TSDB WAL compression.
    # CLI flag: -blocks-storage.tsdb.wal-compression-enabled
    [wal_compression_enabled: <boolean> | default = false]

    # TSDB WAL segments files max size (bytes).
    # CLI flag: -blocks-storage.tsdb.wal-segment-size-bytes
    [wal_segment_size_bytes: <int> | default = 134217728]

    # True to flush blocks to storage on shutdown. If false, incomplete blocks
    # will be reused after restart.
    # CLI flag: -blocks-storage.tsdb.flush-blocks-on-shutdown
    [flush_blocks_on_shutdown: <boolean> | default = false]

    # If the TSDB has not received any data for this duration, and all its
    # blocks have been shipped, the TSDB is closed and deleted from local disk.
    # If set to a positive value, this value should be equal to or higher than
    # the -querier.query-ingesters-within flag to make sure the TSDB is not
    # closed prematurely, which could cause partial query results. 0 or a
    # negative value disables closing of idle TSDBs.
    # CLI flag: -blocks-storage.tsdb.close-idle-tsdb-timeout
    [close_idle_tsdb_timeout: <duration> | default = 0s]

    # The size of the in-memory queue used before flushing chunks to the disk.
    # CLI flag: -blocks-storage.tsdb.head-chunks-write-queue-size
    [head_chunks_write_queue_size: <int> | default = 0]

    # Limit the number of TSDBs that can be opened concurrently on startup.
    # CLI flag: -blocks-storage.tsdb.max-tsdb-opening-concurrency-on-startup
    [max_tsdb_opening_concurrency_on_startup: <int> | default = 10]

    # Deprecated: use maxExemplars in limits instead. If the MaxExemplars value
    # in limits is set to zero, Cortex will fall back to this value. This
    # setting enables support for exemplars in TSDB and sets the maximum number
    # that will be stored. 0 or less means disabled.
    # CLI flag: -blocks-storage.tsdb.max-exemplars
    [max_exemplars: <int> | default = 0]

    # True to enable snapshotting of in-memory TSDB data on disk when shutting
    # down.
    # CLI flag: -blocks-storage.tsdb.memory-snapshot-on-shutdown
    [memory_snapshot_on_shutdown: <boolean> | default = false]

    # [EXPERIMENTAL] Configures the maximum number of samples per chunk that can
    # be out-of-order.
    # CLI flag: -blocks-storage.tsdb.out-of-order-cap-max
    [out_of_order_cap_max: <int> | default = 32]