Prometheus configuration details

1. Configuration file

https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#configuring-rules
To specify the configuration file to load, use the --config.file flag.

The file is written in YAML format, defined by the scheme described below. Brackets indicate that a parameter is optional. For non-list parameters, the value is set to the specified default.

Common placeholders are defined as follows:

<boolean>: a boolean that can take the values true or false
<duration>: a duration matching the regular expression [0-9]+(ms|[smhdwy])
<labelname>: a string matching the regular expression [a-zA-Z_][a-zA-Z0-9_]*
<labelvalue>: a string of unicode characters
<filename>: a valid path in the current working directory
<host>: a valid string consisting of a hostname or IP followed by an optional port number
<path>: a valid URL path
<scheme>: a string that can take the values http or https
<string>: a regular string
<secret>: a regular string that is a secret, such as a password
<tmpl_string>: a string that is template-expanded before usage
Other placeholders are specified separately.

A valid example file can be found here.

The global configuration specifies parameters that are valid in all other configuration contexts. They also serve as defaults for other configuration sections.

global:
  # How frequently to scrape targets by default.
  [ scrape_interval: <duration> | default = 1m ]

  # How long until a scrape request times out.
  [ scrape_timeout: <duration> | default = 10s ]

  # How frequently to evaluate rules.
  [ evaluation_interval: <duration> | default = 1m ]

  # Labels to add to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    [ <labelname>: <labelvalue> ... ]

# The rule files specify a list of globs.
# Rules and alerts are read from all matching files.
rule_files:
  [ - <filepath_glob> ... ]

# A list of scrape configurations.
scrape_configs:
  [ - <scrape_config> ... ]

# Alerting specifies settings related to the Alertmanager.
alerting:
  alert_relabel_configs:
    [ - <relabel_config> ... ]
  alertmanagers:
    [ - <alertmanager_config> ... ]

# Settings related to the remote write feature.
remote_write:
  [ - <remote_write> ... ]

# Settings related to the remote read feature.
remote_read:
  [ - <remote_read> ... ]
  • global: this section specifies Prometheus' global configuration, such as the scrape interval and scrape timeout.
  • rule_files: this section specifies the rule files; based on these rules, Prometheus pushes alerts to the Alertmanager.
  • scrape_configs: this section specifies the scrape configurations; Prometheus' data collection is configured here.
  • alerting: this section specifies the alerting configuration, chiefly the Alertmanager instance addresses to which Prometheus pushes alerts.
  • remote_write: specifies the write API address of the back-end storage.
  • remote_read: specifies the read API address of the back-end storage.
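
Putting the top-level sections together, a minimal prometheus.yml might look like the following sketch (the job name, rule file glob, and target addresses are illustrative):

global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    cluster: demo              # illustrative external label

rule_files:
  - "rules/*.yml"              # illustrative glob

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['localhost:9093']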

1.1 scrape_config

The <scrape_config> section specifies a set of targets and parameters describing how to scrape them. In the general case, one scrape configuration specifies a single job. In advanced configurations, this may change.

Targets may be statically configured via the <static_configs> parameter or dynamically discovered using one of the supported service discovery mechanisms.

Additionally, <relabel_configs> allow advanced modifications to any target and its labels before scraping.

The <job_name> must be unique across all scrape configurations.

# The job name assigned to scraped metrics by default.
job_name: <job_name>

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Per-scrape timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

# honor_labels controls how Prometheus handles conflicts between labels that
# are already present in scraped data and labels that Prometheus would attach
# server-side ("job" and "instance" labels, manually configured target labels,
# and labels generated by service discovery implementations).
#
# If honor_labels is set to "true", label conflicts are resolved by keeping
# label values from the scraped data and ignoring the conflicting
# server-side labels.
#
# If honor_labels is set to "false", label conflicts are resolved by renaming
# conflicting labels in the scraped data to "exported_<original-label>" (for
# example "exported_instance", "exported_job") and then attaching server-side
# labels. This is useful for use cases such as federation, where all labels
# specified in the target should be preserved.
#
# Note that any globally configured "external_labels" are unaffected by this
# setting. In communication with external systems, they are always applied
# only when a time series does not have a given label yet and are ignored
# otherwise.
# 
[ honor_labels: <boolean> | default = false ]

# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]

# Optional HTTP URL parameters
params:
  [ <string>: [<string>, ...] ]

# Sets the 'Authorization' header on every scrape request with the configured username and password. password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Sets the 'Authorization' header on every scrape request with the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <secret> ]

# Sets the 'Authorization' header on every scrape request with the configured bearer token file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]

# Configures the scrape request's TLS settings.
tls_config:
  [ <tls_config> ]

# Optional proxy URL
[ proxy_url: <string> ]

# Azure service discovery configuration list
azure_sd_configs:
  [ - <azure_sd_config> ... ]

# Consul service discovery configuration list
consul_sd_configs:
  [ - <consul_sd_config> ... ]

# DNS service discovery configuration list.
dns_sd_configs:
  [ - <dns_sd_config> ... ]

# EC2 service discovery configuration list.
ec2_sd_configs:
  [ - <ec2_sd_config> ... ]

# OpenStack service discovery configuration list.
openstack_sd_configs:
  [ - <openstack_sd_config> ... ]

# File service discovery configuration list.
file_sd_configs:
  [ - <file_sd_config> ... ]

# GCE service discovery configuration list.
gce_sd_configs:
  [ - <gce_sd_config> ... ]

# Kubernetes service discovery configuration list.
kubernetes_sd_configs:
  [ - <kubernetes_sd_config> ... ]

# Marathon service discovery configuration list.
marathon_sd_configs:
  [ - <marathon_sd_config> ... ]

# List of AirBnB's Nerve service discovery configurations.
nerve_sd_configs:
  [ - <nerve_sd_config> ... ]

# Zookeeper Serverset service discovery configuration list.
serverset_sd_configs:
  [ - <serverset_sd_config> ... ]

# Triton service discovery configuration list.
triton_sd_configs:
  [ - <triton_sd_config> ... ]

# List of labeled statically configured targets for this job.
static_configs:
  [ - <static_config> ... ]

# Target relabel configuration list.
relabel_configs:
  [ - <relabel_config> ... ]

# List of metric relabel configurations.
metric_relabel_configs:
  [ - <relabel_config> ... ]

# Per-scrape limit on the number of scraped samples that will be accepted.
# If more than this number of samples are present after metric relabeling, the entire scrape will be treated as failed. 0 means no limit.
[ sample_limit: <int> | default = 0 ]
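
As a concrete illustration, here is a sketch of a scrape_config for a hypothetical node exporter job scraped over HTTPS with basic auth (host names, credentials, and paths are placeholders):

scrape_configs:
  - job_name: node
    scrape_interval: 30s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: https
    basic_auth:
      username: prometheus
      password_file: /etc/prometheus/scrape_password   # placeholder path
    static_configs:
      - targets: ['node1.example.com:9100', 'node2.example.com:9100']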

1.2 tls_config

A <tls_config> allows configuring TLS connections.

# The CA certificate used to validate the API server certificate.
[ ca_file: <filename> ]

# The certificate and key files for client certificate authentication to the server.
[ cert_file: <filename> ]
[ key_file: <filename> ]

# ServerName extension that indicates the name of the server.
# https://tools.ietf.org/html/rfc4366#section-3.1
[ server_name: <string> ]

# Disable validation of the server certificate.
[ insecure_skip_verify: <boolean> ]
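
A sketch of how these options fit into a scrape job (certificate paths and host names are placeholders):

scrape_configs:
  - job_name: secure-app
    scheme: https
    tls_config:
      ca_file: /etc/prometheus/ca.crt          # placeholder path
      cert_file: /etc/prometheus/client.crt    # placeholder path
      key_file: /etc/prometheus/client.key     # placeholder path
      server_name: app.example.com             # placeholder server name
    static_configs:
      - targets: ['app.example.com:8443']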

1.3 file_sd_config

File-based service discovery provides a more generic way to configure static targets and serves as an interface to plug in custom service discovery mechanisms.

It reads a set of files containing a list of zero or more <static_config>s. Changes to all defined files are detected via disk watches and applied immediately. Files may be provided in YAML or JSON format. Only changes resulting in well-formed target groups are applied.

The file must contain a static configuration list in the following format:

JSON:

[
  {
    "targets": [ "<host>", ... ],
    "labels": {
      "<labelname>": "<labelvalue>", ...
    }
  },
  ...
]

YAML:

- targets:
  [ - '<host>' ]
  labels:
    [ <labelname>: <labelvalue> ... ]

As a fallback, the file contents are also re-read periodically at the specified refresh interval.

Each target has a meta label __meta_filepath during the relabeling phase. Its value is set to the filepath from which the target was extracted.

There is a list of integrations with this discovery mechanism.

# Patterns for files from which target groups are extracted.
files:
  [ - <filename_pattern> ... ]

# Refresh interval to re-read the files.
[ refresh_interval: <duration> | default = 5m ]

Where <filename_pattern> may be a path ending in .json, .yml or .yaml. The last path segment may contain a single * that matches any character sequence, e.g. my/path/tg_*.json.
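
For example, a job could pick up targets from JSON files that an external system writes into a directory (paths and labels below are illustrative):

scrape_configs:
  - job_name: file-discovered
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/*.json   # placeholder glob
        refresh_interval: 10m

# Illustrative contents of /etc/prometheus/targets/web.json:
# [
#   {
#     "targets": ["10.0.0.5:8080"],
#     "labels": { "env": "prod" }
#   }
# ]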

1.4 dns_sd_config

DNS-based service discovery configuration allows specifying a set of DNS domain names which are periodically queried to discover a list of targets. The DNS servers to contact are read from /etc/resolv.conf.

This service discovery method only supports basic DNS A, AAAA and SRV record queries, but not the advanced DNS-SD approach specified in RFC 6763.

During the relabeling phase, the meta label __meta_dns_name is available on each target and is set to the record name that produced the discovered target.

# List of DNS domain names to query.
names:
  [ - <domain_name> ]

# The type of DNS query to perform.
[ type: <query_type> | default = 'SRV' ]

# The port number used when the query type is not SRV.
[ port: <number>]

# The time after which the provided names are refreshed.
[ refresh_interval: <duration> | default = 30s ]

Where <domain_name> is a valid DNS domain name and <query_type> is SRV, A, or AAAA.
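
A sketch of DNS-based discovery against a hypothetical SRV record:

scrape_configs:
  - job_name: dns-discovered
    dns_sd_configs:
      - names:
          - _prometheus._tcp.example.com   # placeholder SRV record name
        type: SRV
        refresh_interval: 30s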

1.5 kubernetes_sd_config

Kubernetes SD configuration allows retrieving scrape targets from Kubernetes' REST API, staying synchronized with the cluster state at all times.

You can configure one of the following role types to discover targets:

node
The node role discovers one target per cluster node, with the address defaulting to the Kubelet's HTTP port. The target address defaults to the first existing address of the Kubernetes node object in the address type order of NodeInternalIP, NodeExternalIP, NodeLegacyHostIP and NodeHostName.

Available meta labels:

  • __meta_kubernetes_node_name: the name of the node object.
  • __meta_kubernetes_node_label_<labelname>: each label from the node object.
  • __meta_kubernetes_node_annotation_<annotationname>: each annotation from the node object.
  • __meta_kubernetes_node_address_<address_type>: the first address for each node address type, if it exists.

In addition, the instance label for the node will be set to the node name as retrieved from the API server.

service
The service role discovers a target for each service port of each service. This is generally useful for blackbox monitoring of a service. The address will be set to the Kubernetes DNS name of the service and the respective service port.
Available meta labels:

  • __meta_kubernetes_namespace: the namespace of the service object.
  • __meta_kubernetes_service_annotation_<annotationname>: each annotation from the service object.
  • __meta_kubernetes_service_cluster_ip: the cluster IP address of the service. (Does not apply to services of type ExternalName.)
  • __meta_kubernetes_service_external_name: the DNS name of the service. (Applies to services of type ExternalName.)
  • __meta_kubernetes_service_label_<labelname>: each label from the service object.
  • __meta_kubernetes_service_name: the name of the service object.
  • __meta_kubernetes_service_port_name: the name of the service port for the target.
  • __meta_kubernetes_service_port_number: the service port number for the target.
  • __meta_kubernetes_service_port_protocol: the protocol of the service port for the target.

pod
The pod role discovers all pods and exposes their containers as targets. For each declared port of a container, a single target is generated. If a container has no specified ports, a port-free target per container is created for manually adding a port via relabeling.
Available meta labels:

  • __meta_kubernetes_namespace: the namespace of the pod object.
  • __meta_kubernetes_pod_name: the name of the pod object.
  • __meta_kubernetes_pod_ip: the pod IP of the pod object.
  • __meta_kubernetes_pod_label_<labelname>: each label from the pod object.
  • __meta_kubernetes_pod_annotation_<annotationname>: each annotation from the pod object.
  • __meta_kubernetes_pod_container_name: the name of the container the target address points to.
  • __meta_kubernetes_pod_container_port_name: the name of the container port.
  • __meta_kubernetes_pod_container_port_number: the number of the container port.
  • __meta_kubernetes_pod_container_port_protocol: the protocol of the container port.
  • __meta_kubernetes_pod_ready: set to true or false for the pod's ready state.
  • __meta_kubernetes_pod_phase: set to Pending, Running, Succeeded, Failed or Unknown in the lifecycle.
  • __meta_kubernetes_pod_node_name: the name of the node the pod is scheduled onto.
  • __meta_kubernetes_pod_host_ip: the current host IP of the pod object.
  • __meta_kubernetes_pod_uid: the UID of the pod object.
  • __meta_kubernetes_pod_controller_kind: object kind of the pod's controller.
  • __meta_kubernetes_pod_controller_name: the name of the pod's controller.

endpoints
The endpoints role discovers targets from the listed endpoints of a service. For each endpoint address, one target is discovered per port. If the endpoint is backed by a pod, all additional container ports of the pod (not bound to an endpoint port) are discovered as targets as well.
Available meta labels:

  • __meta_kubernetes_namespace: the namespace of the endpoints object.
  • __meta_kubernetes_endpoints_name: the name of the endpoints object.
  • For all targets discovered directly from the endpoints list (those not additionally inferred from underlying pods), the following labels are attached:
    • __meta_kubernetes_endpoint_ready: set to true or false for the endpoint's ready state.
    • __meta_kubernetes_endpoint_port_name: the name of the endpoint port.
    • __meta_kubernetes_endpoint_port_protocol: the protocol of the endpoint port.
    • __meta_kubernetes_endpoint_address_target_kind: the kind of the endpoint address target.
    • __meta_kubernetes_endpoint_address_target_name: the name of the endpoint address target.

If the endpoints belong to a service, all labels of the service role discovery are attached.
For all targets backed by a pod, all labels of the pod role discovery are attached.

ingress
The ingress role discovers a target for each path of each ingress. This is generally useful for blackbox monitoring of an ingress. The address will be set to the host specified in the ingress spec.
Available meta labels:

  • __meta_kubernetes_namespace: the namespace of the ingress object.
  • __meta_kubernetes_ingress_name: the name of the ingress object.
  • __meta_kubernetes_ingress_label_<labelname>: each label from the ingress object.
  • __meta_kubernetes_ingress_annotation_<annotationname>: each annotation from the ingress object.
  • __meta_kubernetes_ingress_scheme: the protocol scheme of the ingress; https if the TLS config is set. Defaults to http.
  • __meta_kubernetes_ingress_path: the path from the ingress spec. Defaults to /.

See below for the configuration options for Kubernetes discovery:
# Access information about the Kubernetes API.

# The API server address. If left empty, Prometheus is assumed to run inside the cluster and will discover the API server automatically, using the CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/ from the pod.
[ api_server: <host> ]

# The Kubernetes role of the entity that should be discovered.
role: <role>

# Optional authentication information used to authenticate to the API server. Note that `basic_auth`, `bearer_token` and `bearer_token_file` options are mutually exclusive. password and password_file are mutually exclusive.

# Optional HTTP basic authentication information.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional bearer token authentication information.
[ bearer_token: <secret> ]

# Optional bearer token file authentication information.
[ bearer_token_file: <filename> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

# Optional namespace discovery. If omitted, all namespaces are used.
namespaces:
  names:
    [ - <string> ]

Where <role> must be endpoints, service, pod, node, or ingress.
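
A common pattern is to discover pods and keep only the ones opting in via an annotation; the sketch below assumes the widely used (but purely conventional) prometheus.io/scrape annotation:

scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated prometheus.io/scrape=true (a convention, not a built-in).
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Copy namespace and pod name into plain labels.
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod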

For a detailed example of configuring Prometheus for Kubernetes, see this sample Prometheus configuration file.

You may also wish to check out the third-party Prometheus Operator, which automates the Prometheus setup on top of Kubernetes.

1.6 static_config

A <static_config> allows specifying a list of targets and a common label set for them. It is the canonical way to specify static targets in a scrape configuration.

# The targets specified by this static config.
targets:
  [ - '<host>' ]

# Labels assigned to all metrics scraped from the targets.
labels:
  [ <labelname>: <labelvalue> ... ]
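
For instance (targets and labels are illustrative):

static_configs:
  - targets: ['10.0.0.1:9100', '10.0.0.2:9100']
    labels:
      env: prod
      team: infra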

1.7 relabel_config

Relabeling is a powerful tool to dynamically rewrite a target's label set before it gets scraped. Multiple relabeling steps can be configured per scrape configuration. They are applied to the label set of each target in order of their appearance in the configuration file.

Initially, aside from the configured per-target labels, a target's job label is set to the job_name value of the respective scrape configuration. The __address__ label is set to the <host>:<port> address of the target. After relabeling, the instance label is set to the value of __address__ by default if it was not set during relabeling. The __scheme__ and __metrics_path__ labels are set to the scheme and metrics path of the target respectively. The __param_<name> label is set to the value of the first passed URL parameter called <name>.

During the relabeling phase, additional labels prefixed with __meta_ may be available. They are set by the service discovery mechanism that provided the target and vary between mechanisms.

Labels starting with __ will be removed from the label set after target relabeling is completed.

If a relabeling step needs to store a label value only temporarily (as the input to a subsequent relabeling step), use the __tmp label name prefix. This prefix is guaranteed to never be used by Prometheus itself.

# The source labels select values from existing labels. Their content is concatenated using the configured separator and matched against the configured regular expression for the replace, keep, and drop actions.
[ source_labels: '[' <labelname> [, ...] ']' ]

# Separator placed between concatenated source label values.
[ separator: <string> | default = ; ]

# Label to which the resulting value is written in a replace action.
# It is mandatory for replace actions. Regex capture groups are available.
[ target_label: <labelname> ]

# Regular expression against which the extracted value is matched.
[ regex: <regex> | default = (.*) ]

# Modulus to take of the hash of the concatenated source label values.
[ modulus: <uint64> ]

# Replacement value against which a regex replace is performed if the regular expression matches. Regex capture groups are available.
[ replacement: <string> | default = $1 ]

# Action to perform based on regex matching.
[ action: <relabel_action> | default = replace ]

<regex> is any valid RE2 regular expression. It is required for the replace, keep, drop, labelmap, labeldrop and labelkeep actions. The regex is anchored on both ends. To un-anchor the regex, use .*<regex>.*.

<relabel_action> determines the relabeling action to take:

replace: match regex against the concatenated source_labels. Then, set target_label to replacement, with match group references (${1}, ${2}, ...) in replacement substituted by their value. If regex does not match, no replacement takes place.
keep: drop targets for which regex does not match the concatenated source_labels.
drop: drop targets for which regex matches the concatenated source_labels.
hashmod: set target_label to the modulus of a hash of the concatenated source_labels.
labelmap: match regex against all label names. Then copy the values of the matching labels to label names given by replacement, with match group references (${1}, ${2}, ...) in replacement substituted by their value.
labeldrop: match regex against all label names. Any label that matches will be removed from the set of labels.
labelkeep: match regex against all label names. Any label that does not match will be removed from the set of labels.
Care must be taken with labeldrop and labelkeep to ensure that metrics are still uniquely labeled once the labels are removed.
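
Two of these actions in a sketch (the host and tier labels are illustrative): a replace that extracts the host part of __address__, and a drop that discards debug targets.

relabel_configs:
  # replace: capture the host part of "<host>:<port>" into a new "host" label.
  - source_labels: [__address__]
    regex: '([^:]+):\d+'
    target_label: host
    replacement: '$1'
    action: replace
  # drop: discard any target whose (illustrative) "tier" label is "debug".
  - source_labels: [tier]
    regex: debug
    action: drop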

1.8 metric_relabel_configs

Metric relabeling is applied to samples as the last step before ingestion. It has the same configuration format and actions as target relabeling. Metric relabeling does not apply to automatically generated time series such as up.

One use for this is to blacklist time series that are too expensive to ingest.
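
For example, a sketch that drops a hypothetical high-cardinality histogram metric before ingestion:

metric_relabel_configs:
  # Drop all series of the (hypothetical) expensive metric.
  - source_labels: [__name__]
    regex: 'http_request_duration_seconds_bucket'
    action: drop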

1.9 alert_relabel_configs

Alert relabeling is applied to alerts before they are sent to the Alertmanager. It has the same configuration format and actions as target relabeling. Alert relabeling is applied after external labels.

One use for this is to ensure that a pair of HA Prometheus servers with different external labels send identical alerts.
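
A typical sketch drops the per-replica external label so that both replicas of an HA pair emit identical alerts (the replica label name is an assumed convention):

alerting:
  alert_relabel_configs:
    # Remove the per-replica label (assumed to be set via external_labels).
    - regex: replica
      action: labeldrop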

1.10 alertmanager_config

The alertmanager_config section specifies the Alertmanager instances the Prometheus server sends alerts to. It also provides parameters to configure how to communicate with these Alertmanagers.

Alertmanagers may be statically configured via the static_configs parameter or dynamically discovered using one of the supported service discovery mechanisms.

In addition, relabel_configs allow selecting Alertmanagers from the discovered entities and provide advanced modifications to the API path used, which is exposed through the __alerts_path__ label.

# Per-target Alertmanager timeout when pushing alerts.
[ timeout: <duration> | default = 10s ]

# Prefix for the HTTP path alerts are pushed to.
[ path_prefix: <path> | default = / ]

# Configure the protocol scheme used for the request.
[ scheme: <scheme> | default = http ]

# Sets the 'Authorization' header on every request with the configured username and password. password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <string> ]
  [ password_file: <string> ]

# Sets the 'Authorization' header on every request with the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <string> ]

# Sets the 'Authorization' header on every request with the configured bearer token file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]

# Configures the TLS settings for requests to the Alertmanager.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Azure service discovery configuration list.
azure_sd_configs:
  [ - <azure_sd_config> ... ]

# Consul service discovery configuration list.
consul_sd_configs:
  [ - <consul_sd_config> ... ]

# DNS service discovery configuration list.
dns_sd_configs:
  [ - <dns_sd_config> ... ]

# EC2 service discovery configuration list.
ec2_sd_configs:
  [ - <ec2_sd_config> ... ]

# File service discovery configuration list.
file_sd_configs:
  [ - <file_sd_config> ... ]

# GCE service discovery configuration list.
gce_sd_configs:
  [ - <gce_sd_config> ... ]

# Kubernetes service discovery configuration list.
kubernetes_sd_configs:
  [ - <kubernetes_sd_config> ... ]

# Marathon service discovery configuration list.
marathon_sd_configs:
  [ - <marathon_sd_config> ... ]

# AirBnB's Nerve service discovery configuration list.
nerve_sd_configs:
  [ - <nerve_sd_config> ... ]

# Zookeeper Serverset service discovery configuration list.
serverset_sd_configs:
  [ - <serverset_sd_config> ... ]

# Triton service discovery configuration list.
triton_sd_configs:
  [ - <triton_sd_config> ... ]

# List of statically configured Alertmanagers.
static_configs:
  [ - <static_config> ... ]

# List of Alertmanager relabel configurations.
relabel_configs:
  [ - <relabel_config> ... ]
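
A minimal sketch that points Prometheus at two statically configured Alertmanagers (addresses are placeholders):

alerting:
  alertmanagers:
    - scheme: http
      timeout: 10s
      static_configs:
        - targets: ['alertmanager-1:9093', 'alertmanager-2:9093']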

1.11 remote_write

<remote_write>

write_relabel_configs is relabeling applied to samples before sending them to the remote endpoint. Write relabeling is applied after external labels. This can be used to limit which samples are sent.

There is a small demonstration of how to use this feature.

# The URL of the endpoint to send samples to.
url: <string>

# Timeout for requests to the remote write endpoint.
[ remote_timeout: <duration> | default = 30s ]

# Remote write relabel configuration list.
write_relabel_configs:
  [ - <relabel_config> ... ]

# Sets the 'Authorization' header on every remote write request with the configured username and password. password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <string> ]
  [ password_file: <string> ]

# Sets the 'Authorization' header on every remote write request with the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <string> ]

# Sets the 'Authorization' header on every remote write request with the configured bearer token file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]

# Configure TLS settings for remote write requests.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure queues for writing to remote storage.
queue_config:
  # Number of samples to buffer per shard before we start dropping them.
  [ capacity: <int> | default = 10000 ]
  # Maximum number of shards, i.e. the amount of concurrency.
  [ max_shards: <int> | default = 1000 ]
  # Minimum number of shards, i.e. the amount of concurrency.
  [ min_shards: <int> | default = 1 ]
  # Maximum number of samples per send.
  [ max_samples_per_send: <int> | default = 100]
  # Maximum time a sample will wait in the buffer.
  [ batch_send_deadline: <duration> | default = 5s ]
  # The maximum number of times a batch is retried on a recoverable error.
  [ max_retries: <int> | default = 3 ]
  # Initial retry delay. Doubled for every retry.
  [ min_backoff: <duration> | default = 30ms ]
  # Maximum retry delay.
  [ max_backoff: <duration> | default = 100ms ]

There is a list of integrations with this feature.
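
A sketch of a remote_write block that forwards only one job's samples to a hypothetical endpoint:

remote_write:
  - url: https://storage.example.com/api/v1/write   # placeholder endpoint
    remote_timeout: 30s
    write_relabel_configs:
      # Forward only samples whose job label is "node" (illustrative filter).
      - source_labels: [job]
        regex: node
        action: keep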

1.12 remote_read

<remote_read>

# The URL of the endpoint to query from.
url: <string>

# An optional list of equality matchers which must be present in a selector to query the remote read endpoint.
required_matchers:
  [ <labelname>: <labelvalue> ... ]

# Timeout for requests to the remote read endpoint.
[ remote_timeout: <duration> | default = 1m ]

# Whether reads should be made for queries for time ranges that the local storage should have complete data for.
[ read_recent: <boolean> | default = false ]

# Sets the 'Authorization' header on every remote read request with the configured username and password. password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <string> ]
  [ password_file: <string> ]

# Sets the 'Authorization' header on every remote read request with the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <string> ]

# Sets the 'Authorization' header on every remote read request with the configured bearer token file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]

# Configures TLS settings for remote read requests.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

There is a list of integrations with this feature.
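
And a matching remote_read sketch (the endpoint and matcher are placeholders):

remote_read:
  - url: https://storage.example.com/api/v1/read   # placeholder endpoint
    read_recent: false
    required_matchers:
      archive: 'true'   # illustrative equality matcher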
