Querier

The querier service handles queries using the PromQL query language. The querier is used by both the chunks storage and the blocks storage, and the general architecture documentation applies to the blocks storage too, except for the differences described in this document.

The querier is stateless.

How it works

At startup, queriers iterate over the entire storage bucket to discover all tenants' blocks and download the meta.json for each block. During this initial bucket scanning phase, a querier is not yet ready to handle incoming queries and its /ready readiness probe endpoint will fail.

While running, queriers periodically iterate over the storage bucket to discover new tenants and recently uploaded blocks. Queriers do not download any content from blocks except a small meta.json file containing the block’s metadata (including the minimum and maximum timestamp of samples within the block).
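
For illustration only, an abridged sketch of the kind of content found in a block's meta.json (the ULID and timestamps below are placeholders; the full schema is defined by Prometheus TSDB and extended by Thanos):

{
  "ulid": "01ARZ3NDEKTSV4RRFFQ69G5FAV",
  "minTime": 1609459200000,
  "maxTime": 1609466400000,
  "version": 1
}

The minTime and maxTime fields are Unix timestamps in milliseconds and are what the querier uses to match blocks against a query's time range.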

Queriers use the metadata to compute the list of blocks that need to be queried at query time and fetch matching series from the store-gateway instances holding the required blocks.

Anatomy of a query request

A query range request received by the querier contains the following parameters (see the example request after this list):

  • query: the PromQL query expression itself (e.g. rate(node_cpu_seconds_total[1m]))
  • start: the start time
  • end: the end time
  • step: the query resolution (e.g. 30 to have 1 resulting data point every 30s)
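
For example, such a request issued against the Prometheus-compatible HTTP API exposed by Cortex might look as follows (the API path prefix depends on how the API is exposed in your deployment, the timestamps are illustrative, and the query parameter is shown unencoded for readability):

GET <api-prefix>/api/v1/query_range?query=rate(node_cpu_seconds_total[1m])&start=1609459200&end=1609462800&step=30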

Given a query, the querier analyzes the start and end time range to compute the list of all known blocks containing at least one sample within this time range. It then computes the list of store-gateway instances holding these blocks and sends a request to each matching store-gateway instance, asking it to fetch all the samples for the series matching the query within the start and end time range.

The request sent to each store-gateway contains the list of block IDs that are expected to be queried, and the response sent back by the store-gateway to the querier contains the list of block IDs that were actually queried. This list may be a subset of the requested blocks, for example due to a recent blocks resharding event (e.g. within the last few seconds). The querier runs a consistency check on the responses received from the store-gateways to ensure all expected blocks have been queried; if not, the querier retries fetching samples for the missing blocks from different store-gateways (if the -store-gateway.sharding-ring.replication-factor is greater than 1), and if the consistency check still fails after all retries, the query execution fails as well (correctness is always guaranteed).

If the query time range covers a period within the -querier.query-ingesters-within duration, the querier also sends the request to all ingesters, in order to fetch samples that have not been uploaded to the long-term storage yet.
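
For example, a minimal sketch of a querier configuration that queries ingesters only for recent data and skips the long-term storage for the most recent samples (the 13h and 12h values are purely illustrative and should be tuned to your ingester retention and block upload timing):

querier:
  # Query ingesters only for samples within the last 13h.
  query_ingesters_within: 13h
  # Don't query the long-term storage for samples newer than 12h.
  query_store_after: 12h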

Once all samples have been fetched from both store-gateways and ingesters, the querier runs the PromQL engine to execute the query and sends the result back to the client.

How queriers connect to store-gateway

Queriers need to discover store-gateways in order to connect to them at query time. The service discovery mechanism used depends on whether blocks sharding is enabled in the store-gateways.

When blocks sharding is enabled, queriers need access to the store-gateways' hash ring and thus need to be configured with the same -store-gateway.sharding-ring.* flags (or their respective YAML config options) that the store-gateways have been configured with.
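
For reference, a minimal sketch of such a shared configuration, assuming Consul is used as the ring KV store (the store_gateway block belongs to the store-gateway configuration and is shown here only to illustrate that queriers and store-gateways must share the same ring settings; the hostname is a placeholder):

store_gateway:
  sharding_enabled: true
  sharding_ring:
    kvstore:
      # The same KV store settings must be used by both store-gateways and queriers.
      store: consul
      consul:
        host: consul.example.svc.cluster.local:8500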

When blocks sharding is disabled, queriers need the -querier.store-gateway-addresses CLI flag (or its respective YAML config option) to be set to a comma separated list of store-gateway addresses in DNS Service Discovery format. Queriers will evenly balance the requests to query blocks across the resolved addresses.
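
For example, a minimal sketch assuming the store-gateways are exposed via a hypothetical Kubernetes headless service (the dnssrv+ prefix is part of the DNS Service Discovery format and resolves the gRPC port via a SRV lookup):

querier:
  # Resolve store-gateway addresses via a SRV DNS lookup.
  store_gateway_addresses: "dnssrv+_grpc._tcp.store-gateway.cortex.svc.cluster.local"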

Caching

The querier supports the following caches:

  • Metadata cache

Caching is optional, but highly recommended in a production environment. Please also check out the production tips for more information about configuring the cache.

Metadata cache

The store-gateway and querier can use memcached for caching bucket metadata:

  • List of tenants
  • List of blocks per tenant
  • Block’s meta.json content
  • Block’s deletion-mark.json existence and content

Using the metadata cache can significantly reduce the number of API calls to object storage and prevents the number of these API calls from scaling linearly with the number of querier and store-gateway instances (because the bucket is periodically scanned and synched by each querier and store-gateway).

To enable the metadata cache, set -blocks-storage.bucket-store.metadata-cache.backend. Currently, only the memcached backend is supported. The memcached client has additional configuration available via flags with the -blocks-storage.bucket-store.metadata-cache.memcached.* prefix.

Additional options for configuring the metadata cache have the -blocks-storage.bucket-store.metadata-cache.* prefix. Setting the TTL of a given item type to zero or a negative value disables caching for that item type.

The same memcached backend cluster should be shared between store-gateways and queriers.
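
For example, a minimal sketch enabling the metadata cache with a single memcached address (the address is a placeholder; the available options are documented in the blocks_storage_config reference below):

blocks_storage:
  bucket_store:
    metadata_cache:
      backend: memcached
      memcached:
        # Use the same memcached cluster for store-gateways and queriers.
        addresses: "dns+memcached-metadata.cortex.svc.cluster.local:11211"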

Querier configuration

This section describes the querier configuration. For the general Cortex configuration and references to common config blocks, please refer to the configuration documentation.

querier_config

The querier_config configures the Cortex querier.

querier:
  # The maximum number of concurrent queries.
  # CLI flag: -querier.max-concurrent
  [max_concurrent: <int> | default = 20]

  # The timeout for a query.
  # CLI flag: -querier.timeout
  [timeout: <duration> | default = 2m]

  # Use iterators to execute query, as opposed to fully materialising the series
  # in memory.
  # CLI flag: -querier.iterators
  [iterators: <boolean> | default = false]

  # Use batch iterators to execute query, as opposed to fully materialising the
  # series in memory.  Takes precedence over the -querier.iterators flag.
  # CLI flag: -querier.batch-iterators
  [batch_iterators: <boolean> | default = true]

  # Use streaming RPCs to query ingester.
  # CLI flag: -querier.ingester-streaming
  [ingester_streaming: <boolean> | default = true]

  # Maximum number of samples a single query can load into memory.
  # CLI flag: -querier.max-samples
  [max_samples: <int> | default = 50000000]

  # Maximum lookback beyond which queries are not sent to ingester. 0 means all
  # queries are sent to ingester.
  # CLI flag: -querier.query-ingesters-within
  [query_ingesters_within: <duration> | default = 0s]

  # The time after which a metric should only be queried from storage and not
  # just ingesters. 0 means all queries are sent to store. When running the
  # blocks storage, if this option is enabled, the time range of the query sent
  # to the store will be manipulated to ensure the query end is not more recent
  # than 'now - query-store-after'.
  # CLI flag: -querier.query-store-after
  [query_store_after: <duration> | default = 0s]

  # Maximum duration into the future you can query. 0 to disable.
  # CLI flag: -querier.max-query-into-future
  [max_query_into_future: <duration> | default = 10m]

  # The default evaluation interval or step size for subqueries.
  # CLI flag: -querier.default-evaluation-interval
  [default_evaluation_interval: <duration> | default = 1m]

  # Active query tracker monitors active queries, and writes them to the file in
  # given directory. If Cortex discovers any queries in this log during startup,
  # it will log them to the log file. Setting to empty value disables active
  # query tracker, which also disables -querier.max-concurrent option.
  # CLI flag: -querier.active-query-tracker-dir
  [active_query_tracker_dir: <string> | default = "./active-query-tracker"]

  # Time since the last sample after which a time series is considered stale and
  # ignored by expression evaluations.
  # CLI flag: -querier.lookback-delta
  [lookback_delta: <duration> | default = 5m]

  # Comma separated list of store-gateway addresses in DNS Service Discovery
  # format. This option should be set when using the blocks storage and the
  # store-gateway sharding is disabled (when enabled, the store-gateway
  # instances form a ring and addresses are picked from the ring).
  # CLI flag: -querier.store-gateway-addresses
  [store_gateway_addresses: <string> | default = ""]

  store_gateway_client:
    # Path to the client certificate file, which will be used for authenticating
    # with the server. Also requires the key path to be configured.
    # CLI flag: -querier.store-gateway-client.tls-cert-path
    [tls_cert_path: <string> | default = ""]

    # Path to the key file for the client certificate. Also requires the client
    # certificate to be configured.
    # CLI flag: -querier.store-gateway-client.tls-key-path
    [tls_key_path: <string> | default = ""]

    # Path to the CA certificates file to validate server certificate against.
    # If not set, the host's root CA certificates are used.
    # CLI flag: -querier.store-gateway-client.tls-ca-path
    [tls_ca_path: <string> | default = ""]

    # Skip validating server certificate.
    # CLI flag: -querier.store-gateway-client.tls-insecure-skip-verify
    [tls_insecure_skip_verify: <boolean> | default = false]

  # Second store engine to use for querying. Empty = disabled.
  # CLI flag: -querier.second-store-engine
  [second_store_engine: <string> | default = ""]

  # If specified, second store is only used for queries before this timestamp.
  # Default value 0 means secondary store is always queried.
  # CLI flag: -querier.use-second-store-before-time
  [use_second_store_before_time: <time> | default = 0]

  # When distributor's sharding strategy is shuffle-sharding and this setting is
  # > 0, queriers fetch in-memory series from the minimum set of required
  # ingesters, selecting only ingesters which may have received series since
  # 'now - lookback period'. The lookback period should be greater or equal than
  # the configured 'query store after'. If this setting is 0, queriers always
  # query all ingesters (ingesters shuffle sharding on read path is disabled).
  # CLI flag: -querier.shuffle-sharding-ingesters-lookback-period
  [shuffle_sharding_ingesters_lookback_period: <duration> | default = 0s]

blocks_storage_config

The blocks_storage_config configures the blocks storage.

blocks_storage:
  # Backend storage to use. Supported backends are: s3, gcs, azure, filesystem.
  # CLI flag: -blocks-storage.backend
  [backend: <string> | default = "s3"]

  s3:
    # The S3 bucket endpoint. It could be an AWS S3 endpoint listed at
    # https://docs.aws.amazon.com/general/latest/gr/s3.html or the address of an
    # S3-compatible service in hostname:port format.
    # CLI flag: -blocks-storage.s3.endpoint
    [endpoint: <string> | default = ""]

    # S3 bucket name
    # CLI flag: -blocks-storage.s3.bucket-name
    [bucket_name: <string> | default = ""]

    # S3 secret access key
    # CLI flag: -blocks-storage.s3.secret-access-key
    [secret_access_key: <string> | default = ""]

    # S3 access key ID
    # CLI flag: -blocks-storage.s3.access-key-id
    [access_key_id: <string> | default = ""]

    # If enabled, use http:// for the S3 endpoint instead of https://. This
    # could be useful in local dev/test environments while using an
    # S3-compatible backend storage, like Minio.
    # CLI flag: -blocks-storage.s3.insecure
    [insecure: <boolean> | default = false]

    http:
      # The time an idle connection will remain idle before closing.
      # CLI flag: -blocks-storage.s3.http.idle-conn-timeout
      [idle_conn_timeout: <duration> | default = 1m30s]

      # The amount of time the client will wait for a server's response headers.
      # CLI flag: -blocks-storage.s3.http.response-header-timeout
      [response_header_timeout: <duration> | default = 2m]

      # If the client connects to S3 via HTTPS and this option is enabled, the
      # client will accept any certificate and hostname.
      # CLI flag: -blocks-storage.s3.http.insecure-skip-verify
      [insecure_skip_verify: <boolean> | default = false]

  gcs:
    # GCS bucket name
    # CLI flag: -blocks-storage.gcs.bucket-name
    [bucket_name: <string> | default = ""]

    # JSON representing either a Google Developers Console
    # client_credentials.json file or a Google Developers service account key
    # file. If empty, fallback to Google default logic.
    # CLI flag: -blocks-storage.gcs.service-account
    [service_account: <string> | default = ""]

  azure:
    # Azure storage account name
    # CLI flag: -blocks-storage.azure.account-name
    [account_name: <string> | default = ""]

    # Azure storage account key
    # CLI flag: -blocks-storage.azure.account-key
    [account_key: <string> | default = ""]

    # Azure storage container name
    # CLI flag: -blocks-storage.azure.container-name
    [container_name: <string> | default = ""]

    # Azure storage endpoint suffix without schema. The account name will be
    # prefixed to this value to create the FQDN
    # CLI flag: -blocks-storage.azure.endpoint-suffix
    [endpoint_suffix: <string> | default = ""]

    # Number of retries for recoverable errors
    # CLI flag: -blocks-storage.azure.max-retries
    [max_retries: <int> | default = 20]

  filesystem:
    # Local filesystem storage directory.
    # CLI flag: -blocks-storage.filesystem.dir
    [dir: <string> | default = ""]

  # This configures how the store-gateway synchronizes blocks stored in the
  # bucket.
  bucket_store:
    # Directory to store synchronized TSDB index headers.
    # CLI flag: -blocks-storage.bucket-store.sync-dir
    [sync_dir: <string> | default = "tsdb-sync"]

    # How frequently to scan the bucket to look for changes (new blocks shipped by
    # ingesters and blocks removed by retention or compaction). 0 disables it.
    # CLI flag: -blocks-storage.bucket-store.sync-interval
    [sync_interval: <duration> | default = 5m]

    # Max size - in bytes - of a per-tenant chunk pool, used to reduce memory
    # allocations.
    # CLI flag: -blocks-storage.bucket-store.max-chunk-pool-bytes
    [max_chunk_pool_bytes: <int> | default = 2147483648]

    # Max number of concurrent queries to execute against the long-term storage.
    # The limit is shared across all tenants.
    # CLI flag: -blocks-storage.bucket-store.max-concurrent
    [max_concurrent: <int> | default = 100]

    # Maximum number of concurrent tenants synching blocks.
    # CLI flag: -blocks-storage.bucket-store.tenant-sync-concurrency
    [tenant_sync_concurrency: <int> | default = 10]

    # Maximum number of concurrent blocks synching per tenant.
    # CLI flag: -blocks-storage.bucket-store.block-sync-concurrency
    [block_sync_concurrency: <int> | default = 20]

    # Number of Go routines to use when syncing block meta files from object
    # storage per tenant.
    # CLI flag: -blocks-storage.bucket-store.meta-sync-concurrency
    [meta_sync_concurrency: <int> | default = 20]

    # Minimum age of a block before it's read. Set it to a safe value (e.g.
    # 30m) if your object storage is eventually consistent. GCS and S3 are
    # (roughly) strongly consistent.
    # CLI flag: -blocks-storage.bucket-store.consistency-delay
    [consistency_delay: <duration> | default = 0s]

    index_cache:
      # The index cache backend type. Supported values: inmemory, memcached.
      # CLI flag: -blocks-storage.bucket-store.index-cache.backend
      [backend: <string> | default = "inmemory"]

      inmemory:
        # Maximum size in bytes of in-memory index cache used to speed up blocks
        # index lookups (shared between all tenants).
        # CLI flag: -blocks-storage.bucket-store.index-cache.inmemory.max-size-bytes
        [max_size_bytes: <int> | default = 1073741824]

      memcached:
        # Comma separated list of memcached addresses. Supported prefixes are:
        # dns+ (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV
        # query), dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup
        # made after that).
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.addresses
        [addresses: <string> | default = ""]

        # The socket read/write timeout.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.timeout
        [timeout: <duration> | default = 100ms]

        # The maximum number of idle connections that will be maintained per
        # address.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-idle-connections
        [max_idle_connections: <int> | default = 16]

        # The maximum number of concurrent asynchronous operations that can occur.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # The maximum number of concurrent connections running get operations.
        # If set to 0, concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum number of keys a single underlying get operation should
        # run. If more keys are specified, internally keys are split into
        # multiple batches and fetched concurrently, honoring the max
        # concurrency. If set to 0, the max batch size is unlimited.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-get-multi-batch-size
        [max_get_multi_batch_size: <int> | default = 0]

        # The maximum size of an item stored in memcached. Bigger items are not
        # stored. If set to 0, no maximum size is enforced.
        # CLI flag: -blocks-storage.bucket-store.index-cache.memcached.max-item-size
        [max_item_size: <int> | default = 1048576]

      # Compress postings before storing them to postings cache.
      # CLI flag: -blocks-storage.bucket-store.index-cache.postings-compression-enabled
      [postings_compression_enabled: <boolean> | default = false]

    chunks_cache:
      # Backend for chunks cache, if not empty. Supported values: memcached.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.backend
      [backend: <string> | default = ""]

      memcached:
        # Comma separated list of memcached addresses. Supported prefixes are:
        # dns+ (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV
        # query), dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup
        # made after that).
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.addresses
        [addresses: <string> | default = ""]

        # The socket read/write timeout.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.timeout
        [timeout: <duration> | default = 100ms]

        # The maximum number of idle connections that will be maintained per
        # address.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-idle-connections
        [max_idle_connections: <int> | default = 16]

        # The maximum number of concurrent asynchronous operations that can occur.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # The maximum number of concurrent connections running get operations.
        # If set to 0, concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum number of keys a single underlying get operation should
        # run. If more keys are specified, internally keys are split into
        # multiple batches and fetched concurrently, honoring the max
        # concurrency. If set to 0, the max batch size is unlimited.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-get-multi-batch-size
        [max_get_multi_batch_size: <int> | default = 0]

        # The maximum size of an item stored in memcached. Bigger items are not
        # stored. If set to 0, no maximum size is enforced.
        # CLI flag: -blocks-storage.bucket-store.chunks-cache.memcached.max-item-size
        [max_item_size: <int> | default = 1048576]

      # Size of each subrange that bucket object is split into for better
      # caching.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.subrange-size
      [subrange_size: <int> | default = 16000]

      # Maximum number of sub-GetRange requests that a single GetRange request
      # can be split into when fetching chunks. Zero or negative value =
      # unlimited number of sub-requests.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.max-get-range-requests
      [max_get_range_requests: <int> | default = 3]

      # TTL for caching object attributes for chunks.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.attributes-ttl
      [attributes_ttl: <duration> | default = 24h]

      # TTL for caching individual chunks subranges.
      # CLI flag: -blocks-storage.bucket-store.chunks-cache.subrange-ttl
      [subrange_ttl: <duration> | default = 24h]

    metadata_cache:
      # Backend for metadata cache, if not empty. Supported values: memcached.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.backend
      [backend: <string> | default = ""]

      memcached:
        # Comma separated list of memcached addresses. Supported prefixes are:
        # dns+ (looked up as an A/AAAA query), dnssrv+ (looked up as a SRV
        # query), dnssrvnoa+ (looked up as a SRV query, with no A/AAAA lookup
        # made after that).
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.addresses
        [addresses: <string> | default = ""]

        # The socket read/write timeout.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.timeout
        [timeout: <duration> | default = 100ms]

        # The maximum number of idle connections that will be maintained per
        # address.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-idle-connections
        [max_idle_connections: <int> | default = 16]

        # The maximum number of concurrent asynchronous operations that can occur.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-async-concurrency
        [max_async_concurrency: <int> | default = 50]

        # The maximum number of enqueued asynchronous operations allowed.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-async-buffer-size
        [max_async_buffer_size: <int> | default = 10000]

        # The maximum number of concurrent connections running get operations.
        # If set to 0, concurrency is unlimited.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-get-multi-concurrency
        [max_get_multi_concurrency: <int> | default = 100]

        # The maximum number of keys a single underlying get operation should
        # run. If more keys are specified, internally keys are split into
        # multiple batches and fetched concurrently, honoring the max
        # concurrency. If set to 0, the max batch size is unlimited.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-get-multi-batch-size
        [max_get_multi_batch_size: <int> | default = 0]

        # The maximum size of an item stored in memcached. Bigger items are not
        # stored. If set to 0, no maximum size is enforced.
        # CLI flag: -blocks-storage.bucket-store.metadata-cache.memcached.max-item-size
        [max_item_size: <int> | default = 1048576]

      # How long to cache list of tenants in the bucket.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.tenants-list-ttl
      [tenants_list_ttl: <duration> | default = 15m]

      # How long to cache list of blocks for each tenant.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.tenant-blocks-list-ttl
      [tenant_blocks_list_ttl: <duration> | default = 5m]

      # How long to cache list of chunks for a block.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.chunks-list-ttl
      [chunks_list_ttl: <duration> | default = 24h]

      # How long to cache information that block metafile exists.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-exists-ttl
      [metafile_exists_ttl: <duration> | default = 2h]

      # How long to cache information that block metafile doesn't exist.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-doesnt-exist-ttl
      [metafile_doesnt_exist_ttl: <duration> | default = 5m]

      # How long to cache content of the metafile.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-content-ttl
      [metafile_content_ttl: <duration> | default = 24h]

      # Maximum size of metafile content to cache in bytes.
      # CLI flag: -blocks-storage.bucket-store.metadata-cache.metafile-max-size-bytes
      [metafile_max_size_bytes: <int> | default = 1048576]

    # Duration after which the blocks marked for deletion will be filtered out
    # while fetching blocks. The idea of ignore-deletion-marks-delay is to
    # ignore blocks that are marked for deletion with some delay. This ensures
    # store can still serve blocks that are meant to be deleted but do not have
    # a replacement yet. Default is 6h, half of the default value for
    # -compactor.deletion-delay.
    # CLI flag: -blocks-storage.bucket-store.ignore-deletion-marks-delay
    [ignore_deletion_mark_delay: <duration> | default = 6h]

  tsdb:
    # Local directory to store TSDBs in the ingesters.
    # CLI flag: -blocks-storage.tsdb.dir
    [dir: <string> | default = "tsdb"]

    # TSDB blocks range period.
    # CLI flag: -blocks-storage.tsdb.block-ranges-period
    [block_ranges_period: <list of duration> | default = 2h0m0s]

    # TSDB blocks retention in the ingester before a block is removed. This
    # should be larger than the block_ranges_period and large enough to give
    # store-gateways and queriers enough time to discover newly uploaded blocks.
    # CLI flag: -blocks-storage.tsdb.retention-period
    [retention_period: <duration> | default = 6h]

    # How frequently the TSDB blocks are scanned and new ones are shipped to the
    # storage. 0 means shipping is disabled.
    # CLI flag: -blocks-storage.tsdb.ship-interval
    [ship_interval: <duration> | default = 1m]

    # Maximum number of tenants concurrently shipping blocks to the storage.
    # CLI flag: -blocks-storage.tsdb.ship-concurrency
    [ship_concurrency: <int> | default = 10]

    # How frequently Cortex tries to compact the TSDB head. A block is only
    # created if data covers the smallest block range. Must be greater than 0
    # and at most 5 minutes.
    # CLI flag: -blocks-storage.tsdb.head-compaction-interval
    [head_compaction_interval: <duration> | default = 1m]

    # Maximum number of tenants concurrently compacting TSDB head into a new
    # block
    # CLI flag: -blocks-storage.tsdb.head-compaction-concurrency
    [head_compaction_concurrency: <int> | default = 5]

    # If TSDB head is idle for this duration, it is compacted. 0 means disabled.
    # CLI flag: -blocks-storage.tsdb.head-compaction-idle-timeout
    [head_compaction_idle_timeout: <duration> | default = 1h]

    # The number of shards of series to use in TSDB (must be a power of 2).
    # Reducing this will decrease memory footprint, but can negatively impact
    # performance.
    # CLI flag: -blocks-storage.tsdb.stripe-size
    [stripe_size: <int> | default = 16384]

    # True to enable TSDB WAL compression.
    # CLI flag: -blocks-storage.tsdb.wal-compression-enabled
    [wal_compression_enabled: <boolean> | default = false]

    # True to flush blocks to storage on shutdown. If false, incomplete blocks
    # will be reused after restart.
    # CLI flag: -blocks-storage.tsdb.flush-blocks-on-shutdown
    [flush_blocks_on_shutdown: <boolean> | default = false]

    # Limit the number of TSDBs opened concurrently on startup.
    # CLI flag: -blocks-storage.tsdb.max-tsdb-opening-concurrency-on-startup
    [max_tsdb_opening_concurrency_on_startup: <int> | default = 10]