Warning
As of version 3.157.0, the Datadog Operator is enabled by default to collect chart metadata for display in Fleet Automation. We are aware of issues affecting some environments and are actively working on fixes. We apologize for the inconvenience and appreciate your patience while we address these issues.
Datadog is a hosted infrastructure monitoring platform. This chart adds the Datadog Agent to all nodes in your cluster via a DaemonSet. It also optionally depends on the kube-state-metrics chart. For more information about monitoring Kubernetes with Datadog, please refer to the Datadog documentation website.
Datadog offers three build variants: switch to a -jmx tag if you need to run JMX/Java integrations, or set the useFIPSAgent: true value to use the -fips tags if you require FIPS-compliant cryptography modules. The chart also supports running the standalone dogstatsd image.
See the Datadog JMX integration to learn more.
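For example, assuming you select the build variant through the agents.image.tagSuffix value (listed in the configuration table below), a values sketch switching to the -jmx image could look like:

```yaml
# Sketch of a values.yaml fragment: append the -jmx suffix to the Agent image tag
# so the Agent image ships with a JVM for JMX-based integrations.
agents:
  image:
    tagSuffix: "jmx"
```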
You need to add this repository to your Helm repositories:
```bash
helm repo add datadog https://helm.datadoghq.com
helm repo update
```
Kubernetes 1.10+ or OpenShift 3.10+, note that:
- the Datadog Agent supports Kubernetes 1.4+
- the Datadog chart's defaults are tailored to Kubernetes 1.10+; see the Datadog Agent legacy Kubernetes versions documentation for adjustments you might need to make for older versions
| Repository | Name | Version |
|---|---|---|
| https://helm.datadoghq.com | datadog-crds | 2.18.0 |
| https://helm.datadoghq.com | datadog-csi-driver | 0.10.0 |
| https://helm.datadoghq.com | operator(datadog-operator) | 2.21.0 |
| https://prometheus-community.github.io/helm-charts | kube-state-metrics | 2.13.2 |
By default, the Datadog Agent runs as a DaemonSet to ensure it runs on every node in your cluster. For alternative deployment patterns, consider using the Datadog Operator. Support for running the Agent as a Deployment was removed in version 2.0.0 of our Helm chart.
To install the chart with the release name <RELEASE_NAME>, retrieve your Datadog API key from your Agent Installation Instructions and run:
```bash
helm install <RELEASE_NAME> \
  --set datadog.apiKey=<DATADOG_API_KEY> datadog/datadog
```

By default, this chart creates a Secret and puts an API key in that Secret.
However, you can use manually created secrets by setting the datadog.apiKeyExistingSecret and/or datadog.appKeyExistingSecret values (see Creating a Secret, below).
Note: When creating the secret(s), be sure to name the key fields api-key and app-key.
After a few minutes, you should see hosts and metrics being reported in Datadog.
Note: You can set your Datadog site using the datadog.site field.
```bash
helm install <RELEASE_NAME> \
  --set datadog.apiKey=<DATADOG_API_KEY> \
  --set datadog.site=<DATADOG_SITE> \
  datadog/datadog
```

To create a secret that contains your Datadog API key, replace the <DATADOG_API_KEY> below with the API key for your organization. This secret is used in the manifest to deploy the Datadog Agent.
```bash
DATADOG_API_SECRET_NAME=datadog-api-secret
kubectl create secret generic $DATADOG_API_SECRET_NAME --from-literal api-key="<DATADOG_API_KEY>"
```

Note: This creates a secret in the default namespace. If you are working in a custom namespace, add the --namespace parameter to the command before running it.
Now the installation command references the secret:
```bash
helm install <RELEASE_NAME> \
  --set datadog.apiKeyExistingSecret=$DATADOG_API_SECRET_NAME datadog/datadog
```

The Datadog Cluster Agent is now enabled by default.
Read about the Datadog Cluster Agent in the official documentation.
If you plan to use the Custom Metrics Server feature, provide a secret for the application key (AppKey) using the datadog.appKeyExistingSecret chart variable.
```bash
DATADOG_APP_SECRET_NAME=datadog-app-secret
kubectl create secret generic $DATADOG_APP_SECRET_NAME --from-literal app-key="<DATADOG_APP_KEY>"
```

Note: the same secret can store both the API and app keys:

```bash
DATADOG_SECRET_NAME=datadog-secret
kubectl create secret generic $DATADOG_SECRET_NAME --from-literal api-key="<DATADOG_API_KEY>" --from-literal app-key="<DATADOG_APP_KEY>"
```

Run the following if you want to deploy the chart with the Custom Metrics Server enabled in the Cluster Agent:
```bash
helm install datadog-monitoring \
  --set datadog.apiKeyExistingSecret=$DATADOG_API_SECRET_NAME \
  --set datadog.appKeyExistingSecret=$DATADOG_APP_SECRET_NAME \
  --set clusterAgent.enabled=true \
  --set clusterAgent.metricsProvider.enabled=true \
  datadog/datadog
```

If you want to learn to use this feature, check out this Datadog Cluster Agent walkthrough.
Leader election is enabled by default in the chart for the Cluster Agent. Only the Cluster Agent(s) participate in the election, in case you have several replicas configured (using clusterAgent.replicas).
You can specify the Datadog Cluster Agent token used to secure the communication between the Cluster Agent(s) and the Agents with clusterAgent.token.
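A minimal sketch combining the two settings mentioned above (the token value is a placeholder; supply your own random string of at least 32 characters):

```yaml
clusterAgent:
  replicas: 2                              # multiple Cluster Agent replicas take part in leader election
  token: "<32+ character random string>"   # secures communication between the Cluster Agent(s) and the Agents
```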
The migration from 2.x to 3.x does not require manual action. As noted in the Changelog, we no longer guarantee support for Helm 2 moving forward. If you already have the legacy Kubernetes State Metrics check enabled, migrating will only show you the deprecation notice.
The datadog chart has been refactored to regroup the values.yaml parameters in a more logical way.
Please follow the migration guide to update your values.yaml file.
Version 1.19.0 introduces the use of the release name as the full name if it contains the chart name (datadog in this case).
E.g. with a release name of datadog, this renames the DaemonSet from datadog-datadog to datadog.
The suggested approach is to delete the release and reinstall it.
Starting with version 1.0.0, this chart does not support deploying Agent 5.x anymore. If you cannot upgrade to Agent 6.x or later, you can use a previous version of the chart by calling helm install with --version 0.18.0.
See 0.18.1's README to see which options were supported at the time.
To uninstall/delete the <RELEASE_NAME> deployment:
```bash
helm uninstall <RELEASE_NAME>
```

The command removes all the Kubernetes components associated with the chart and deletes the release.
As a best practice, a YAML file that specifies the values for the chart parameters should be used to configure the chart. Any parameters not specified in this file will default to those set in values.yaml.
- Create an empty datadog-values.yaml file.
- Create a Kubernetes secret to store your Datadog API key and App key:

```bash
kubectl create secret generic datadog-secret --from-literal api-key=$DD_API_KEY --from-literal app-key=$DD_APP_KEY
```

- Set the following parameters in your datadog-values.yaml file to reference the secret:

```yaml
datadog:
  apiKeyExistingSecret: datadog-secret
  appKeyExistingSecret: datadog-secret
```

- Install or upgrade the Datadog Helm chart with the new datadog-values.yaml file:

```bash
helm install -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

OR

```bash
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

See the All configuration options section to discover all configuration possibilities in the Datadog chart.
The Agent starts a DogStatsD server to process custom metrics sent from your applications. Check out the official DogStatsD documentation for more details.
By default, the Agent creates a Unix domain socket to process the datagrams (not supported on Windows; see below).
To disable the socket in favor of the hostPort, use the following configuration:
```yaml
datadog:
  # (...)
  dogstatsd:
    useSocketVolume: false
    useHostPort: true
```

APM is enabled by default using a socket for communication in the out-of-the-box values.yaml file; more details about application configuration are available on the official documentation.
Update your datadog-values.yaml file with the following configuration to enable TCP communication using a hostPort:
```yaml
datadog:
  # (...)
  apm:
    portEnabled: true
```

To disable APM, set socketEnabled to false in your datadog-values.yaml file (portEnabled is false by default):
```yaml
datadog:
  # (...)
  apm:
    socketEnabled: false
```

APM tracing libraries and configurations can be automatically injected into your application pods, cluster-wide or in specific namespaces, using Single Step Instrumentation.
Update your datadog-values.yaml file with the following configuration to enable Single Step Instrumentation in the whole cluster:
```yaml
datadog:
  # (...)
  apm:
    instrumentation:
      enabled: true
```

Single Step Instrumentation can be disabled in specific namespaces using the disabledNamespaces configuration option:
```yaml
datadog:
  # (...)
  apm:
    instrumentation:
      enabled: true
      disabledNamespaces:
        - namespaceA
        - namespaceB
```

Single Step Instrumentation can be enabled in specific namespaces using the enabledNamespaces configuration option:
```yaml
datadog:
  # (...)
  apm:
    instrumentation:
      enabled: true
      enabledNamespaces:
        - namespaceC
```

To configure the tracing library versions that Single Step Instrumentation instruments applications with, set the libVersions configuration option:
```yaml
datadog:
  # (...)
  apm:
    instrumentation:
      enabled: true
      libVersions:
        java: v1.18.0
        python: v1.20.0
```

then upgrade your Datadog Helm chart:
```bash
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

Update your datadog-values.yaml file with the following log collection configuration:
```yaml
datadog:
  # (...)
  logs:
    enabled: true
    containerCollectAll: true
```

then upgrade your Datadog Helm chart:
```bash
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

Update your datadog-values.yaml file with the process collection configuration:
```yaml
datadog:
  # (...)
  processAgent:
    enabled: true
    processCollection: true
```

then upgrade your Datadog Helm chart:
```bash
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

The system-probe agent only runs in dedicated container environments. Update your datadog-values.yaml file with the NPM collection configuration:
```yaml
datadog:
  # (...)
  networkMonitoring:
    # (...)
    enabled: true
    # (...)
```

then upgrade your Datadog Helm chart:
```bash
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

Use the Datadog Cluster Agent to collect Kubernetes events. Please read the official documentation for more context.
Alternatively set the datadog.leaderElection, datadog.collectEvents and rbac.create options to true in order to enable Kubernetes event collection.
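If you use the Agent-based approach instead, a values sketch enabling the three options named above might look like:

```yaml
datadog:
  leaderElection: true   # ensure a single Agent collects events at a time
  collectEvents: true    # turn on Kubernetes event collection
rbac:
  create: true           # create the RBAC resources the Agent needs to read events
```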
The Datadog entrypoint copies files with a .yaml extension found in /conf.d and files with a .py extension found in /checks.d to /etc/datadog-agent/conf.d and /etc/datadog-agent/checks.d respectively.
The keys for datadog.confd and datadog.checksd should mirror the content found in their respective ConfigMaps. Update your datadog-values.yaml file with the check configurations:
```yaml
datadog:
  confd:
    redisdb.yaml: |-
      ad_identifiers:
        - redis
        - bitnami/redis
      init_config:
      instances:
        - host: "%%host%%"
          port: "%%port%%"
    jmx.yaml: |-
      ad_identifiers:
        - openjdk
      instance_config:
      instances:
        - host: "%%host%%"
          port: "%%port_0%%"
    redisdb.yaml: |-
      init_config:
      instances:
        - host: "outside-k8s.example.com"
          port: 6379
```

then upgrade your Datadog Helm chart:
```bash
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

For more details, please refer to the documentation.
To map Kubernetes node labels and pod labels and annotations to Datadog tags, provide a dictionary with Kubernetes labels/annotations as keys and Datadog tag keys as values in your datadog-values.yaml file:
```yaml
nodeLabelsAsTags:
  beta.kubernetes.io/instance-type: aws_instance_type
  kubernetes.io/role: kube_role
podAnnotationsAsTags:
  iam.amazonaws.com/role: kube_iamrole
podLabelsAsTags:
  app: kube_app
  release: helm_release
```

then upgrade your Datadog Helm chart:
```bash
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

As of version 6.6.0, the Datadog Agent supports collecting metrics from any container runtime interface (CRI) used in your cluster. Configure the socket location with datadog.criSocketPath; the default is the Docker container runtime socket. To deactivate this support, unset the datadog.criSocketPath setting.
Standard paths are:

- Docker socket: /var/run/docker.sock
- containerd socket: /var/run/containerd/containerd.sock
- CRI-O socket: /var/run/crio/crio.sock
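For example, on a containerd-based cluster you might point the Agent at the containerd socket listed above (a sketch, not a definitive configuration):

```yaml
datadog:
  criSocketPath: /var/run/containerd/containerd.sock
```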
Amazon Linux 2 does not support apparmor profile enforcement.
Amazon Linux 2 is the default operating system for AWS Elastic Kubernetes Service (EKS) based clusters.
Update your datadog-values.yaml file to disable apparmor enforcement:
```yaml
agents:
  # (...)
  podSecurity:
    # (...)
    apparmor:
      # (...)
      enabled: false
    # (...)
```

You can set environment variables using Helm's --set flag thanks to the datadog.envDict field.
For example, to set the DD_ENV environment variable:
```bash
helm install --set datadog.envDict.DD_ENV=prod <RELEASE_NAME> datadog/datadog
```

The following table lists the configurable parameters of the Datadog chart and their default values. Specify each parameter using the --set key=value[,key=value] argument to helm install. For example:
```bash
helm install <RELEASE_NAME> \
  --set datadog.apiKey=<DATADOG_API_KEY>,datadog.logLevel=DEBUG \
  datadog/datadog
```

| Key | Type | Default | Description |
|---|---|---|---|
| agents.additionalLabels | object | {} |
Adds labels to the Agent daemonset and pods |
| agents.affinity | object | {} |
Allow the DaemonSet to schedule using affinity rules |
| agents.containers.agent.env | list | [] |
Additional environment variables for the agent container |
| agents.containers.agent.envDict | object | {} |
Set environment variables specific to agent container defined in a dict |
| agents.containers.agent.envFrom | list | [] |
Set environment variables specific to agent container from configMaps and/or secrets |
| agents.containers.agent.healthPort | int | 5555 |
Port number to use in the node agent for the healthz endpoint |
| agents.containers.agent.livenessProbe | object | Every 15s / 6 KO / 1 OK | Override default agent liveness probe settings |
| agents.containers.agent.logLevel | string | nil |
Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off. If not set, fall back to the value of datadog.logLevel. |
| agents.containers.agent.ports | list | [] |
Allows to specify extra ports (hostPorts for instance) for this container |
| agents.containers.agent.readinessProbe | object | Every 15s / 6 KO / 1 OK | Override default agent readiness probe settings |
| agents.containers.agent.resources | object | {} |
Resource requests and limits for the agent container. |
| agents.containers.agent.securityContext | object | {"readOnlyRootFilesystem":true} |
Allows you to overwrite the default container SecurityContext for the agent container. |
| agents.containers.agent.startupProbe | object | Every 15s / 6 KO / 1 OK | Override default agent startup probe settings |
| agents.containers.agentDataPlane.env | list | [] |
Additional environment variables for the agent-data-plane container |
| agents.containers.agentDataPlane.envDict | object | {} |
Set environment variables specific to agent-data-plane container defined in a dict |
| agents.containers.agentDataPlane.envFrom | list | [] |
Set environment variables specific to agent-data-plane container from configMaps and/or secrets |
| agents.containers.agentDataPlane.livenessProbe | object | Every 5s / 12 KO / 1 OK | Override default agent-data-plane liveness probe settings |
| agents.containers.agentDataPlane.logLevel | string | nil |
Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off. If not set, fall back to the value of datadog.logLevel. |
| agents.containers.agentDataPlane.ports | list | [] |
Allows to specify extra ports (hostPorts for instance) for this container |
| agents.containers.agentDataPlane.privilegedApiPort | int | 5101 |
Port for privileged API server, used for lower-level operations that can alter the state of the ADP process or expose internal information |
| agents.containers.agentDataPlane.readinessProbe | object | Every 5s / 12 KO / 1 OK | Override default agent-data-plane readiness probe settings |
| agents.containers.agentDataPlane.resources | object | {} |
Resource requests and limits for the agent-data-plane container |
| agents.containers.agentDataPlane.securityContext | object | {"readOnlyRootFilesystem":true} |
Allows you to overwrite the default container SecurityContext for the agent-data-plane container. |
| agents.containers.agentDataPlane.telemetryApiPort | int | 5102 |
Port for telemetry API server, used for exposing internal telemetry to be scraped by the Agent |
| agents.containers.agentDataPlane.unprivilegedApiPort | int | 5100 |
Port for unprivileged API server, used primarily for health checks |
| agents.containers.hostProfiler.env | list | [] |
Additional environment variables for the host-profiler container |
| agents.containers.hostProfiler.envDict | object | {} |
Set environment variables specific to host-profiler defined in a dict |
| agents.containers.hostProfiler.envFrom | list | [] |
Set environment variables specific to host-profiler from configMaps and/or secrets |
| agents.containers.hostProfiler.resources | object | {} |
Resource requests and limits for the host-profiler container |
| agents.containers.hostProfiler.securityContext | object | {"privileged":true,"readOnlyRootFilesystem":true} |
Allows you to overwrite the default container SecurityContext for the host-profiler container. |
| agents.containers.hostProfiler.volumeMounts | list | [] |
Specify additional volumes to mount in the host-profiler container |
| agents.containers.initContainers.resources | object | {} |
Resource requests and limits for the init containers |
| agents.containers.initContainers.securityContext | object | {} |
Allows you to overwrite the default container SecurityContext for the init containers. |
| agents.containers.initContainers.volumeMounts | list | [] |
Specify additional volumes to mount for the init containers |
| agents.containers.otelAgent.env | list | [] |
Additional environment variables for the otel-agent container |
| agents.containers.otelAgent.envDict | object | {} |
Set environment variables specific to otel-agent defined in a dict |
| agents.containers.otelAgent.envFrom | list | [] |
Set environment variables specific to otel-agent from configMaps and/or secrets |
| agents.containers.otelAgent.ports | list | [] |
Allows to specify extra ports (hostPorts for instance) for this container |
| agents.containers.otelAgent.resources | object | {} |
Resource requests and limits for the otel-agent container |
| agents.containers.otelAgent.securityContext | object | {"readOnlyRootFilesystem":true} |
Allows you to overwrite the default container SecurityContext for the otel-agent container. |
| agents.containers.otelAgent.volumeMounts | list | [] |
Specify additional volumes to mount in the otel-agent container |
| agents.containers.privateActionRunner.env | list | [] |
Additional environment variables for the private-action-runner container |
| agents.containers.privateActionRunner.envDict | object | {} |
Set environment variables specific to private-action-runner defined in a dict |
| agents.containers.privateActionRunner.envFrom | list | [] |
Set environment variables specific to private-action-runner from configMaps and/or secrets |
| agents.containers.privateActionRunner.logLevel | string | nil |
Set logging verbosity for the private-action-runner container |
| agents.containers.privateActionRunner.resources | object | {} |
Resource requests and limits for the private-action-runner container. |
| agents.containers.privateActionRunner.securityContext | object | {"capabilities":{"add":["NET_RAW"]},"readOnlyRootFilesystem":true} |
Specify securityContext on the private-action-runner container. |
| agents.containers.processAgent.env | list | [] |
Additional environment variables for the process-agent container |
| agents.containers.processAgent.envDict | object | {} |
Set environment variables specific to process-agent defined in a dict |
| agents.containers.processAgent.envFrom | list | [] |
Set environment variables specific to process-agent from configMaps and/or secrets |
| agents.containers.processAgent.logLevel | string | nil |
Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off. If not set, fall back to the value of datadog.logLevel. |
| agents.containers.processAgent.ports | list | [] |
Allows to specify extra ports (hostPorts for instance) for this container |
| agents.containers.processAgent.resources | object | {} |
Resource requests and limits for the process-agent container |
| agents.containers.processAgent.securityContext | object | {"readOnlyRootFilesystem":true} |
Allows you to overwrite the default container SecurityContext for the process-agent container. |
| agents.containers.securityAgent.env | list | [] |
Additional environment variables for the security-agent container |
| agents.containers.securityAgent.envDict | object | {} |
Set environment variables specific to security-agent defined in a dict |
| agents.containers.securityAgent.envFrom | list | [] |
Set environment variables specific to security-agent from configMaps and/or secrets |
| agents.containers.securityAgent.logLevel | string | nil |
Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off. If not set, fall back to the value of datadog.logLevel. |
| agents.containers.securityAgent.ports | list | [] |
Allows to specify extra ports (hostPorts for instance) for this container |
| agents.containers.securityAgent.resources | object | {} |
Resource requests and limits for the security-agent container |
| agents.containers.securityAgent.securityContext | object | {"readOnlyRootFilesystem":true} |
Allows you to overwrite the default container SecurityContext for the security-agent container. |
| agents.containers.systemProbe.env | list | [] |
Additional environment variables for the system-probe container |
| agents.containers.systemProbe.envDict | object | {} |
Set environment variables specific to system-probe defined in a dict |
| agents.containers.systemProbe.envFrom | list | [] |
Set environment variables specific to system-probe from configMaps and/or secrets |
| agents.containers.systemProbe.logLevel | string | nil |
Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off. If not set, fall back to the value of datadog.logLevel. |
| agents.containers.systemProbe.ports | list | [] |
Allows to specify extra ports (hostPorts for instance) for this container |
| agents.containers.systemProbe.resources | object | {} |
Resource requests and limits for the system-probe container |
| agents.containers.systemProbe.securityContext | object | {"capabilities":{"add":["SYS_ADMIN","SYS_RESOURCE","SYS_PTRACE","NET_ADMIN","NET_BROADCAST","NET_RAW","IPC_LOCK","CHOWN","DAC_READ_SEARCH"]},"privileged":false,"readOnlyRootFilesystem":true} |
Allows you to overwrite the default container SecurityContext for the system-probe container. |
| agents.containers.traceAgent.env | list | [] |
Additional environment variables for the trace-agent container |
| agents.containers.traceAgent.envDict | object | {} |
Set environment variables specific to trace-agent defined in a dict |
| agents.containers.traceAgent.envFrom | list | [] |
Set environment variables specific to trace-agent from configMaps and/or secrets |
| agents.containers.traceAgent.livenessProbe | object | Every 15s | Override default agent liveness probe settings |
| agents.containers.traceAgent.logLevel | string | nil |
Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off |
| agents.containers.traceAgent.ports | list | [] |
Allows to specify extra ports (hostPorts for instance) for this container |
| agents.containers.traceAgent.resources | object | {} |
Resource requests and limits for the trace-agent container |
| agents.containers.traceAgent.securityContext | object | {"readOnlyRootFilesystem":true} |
Allows you to overwrite the default container SecurityContext for the trace-agent container. |
| agents.customAgentConfig | object | {} |
Specify custom contents for the datadog agent config (datadog.yaml) |
| agents.daemonsetAnnotations | object | {} |
Annotations to add to the DaemonSet |
| agents.dnsConfig | object | {} |
specify dns configuration options for datadog cluster agent containers e.g ndots |
| agents.enabled | bool | true |
You should keep Datadog DaemonSet enabled! |
| agents.image.digest | string | "" |
Define Agent image digest to use, takes precedence over tag if specified |
| agents.image.doNotCheckTag | string | nil |
Skip the version and chart compatibility check |
| agents.image.name | string | "agent" |
Datadog Agent image name to use (relative to registry) |
| agents.image.pullPolicy | string | "IfNotPresent" |
Datadog Agent image pull policy |
| agents.image.pullSecrets | list | [] |
Datadog Agent repository pullSecret (ex: specify docker registry credentials) |
| agents.image.repository | string | nil |
Override default registry + image.name for Agent |
| agents.image.tag | string | "7.78.0" |
Define the Agent version to use |
| agents.image.tagSuffix | string | "" |
Suffix to append to Agent tag |
| agents.lifecycle | object | {} |
Configure the lifecycle of the Agent. Note: The exec lifecycle handler is not supported in GKE Autopilot. |
| agents.localService.forceLocalServiceEnabled | bool | false |
Force the creation of the internal traffic policy service to target the agent running on the local node. By default, the internal traffic service is created only on Kubernetes 1.22+ where the feature became beta and enabled by default. This option allows to force the creation of the internal traffic service on kubernetes 1.21 where the feature was alpha and required a feature gate to be explicitly enabled. |
| agents.localService.overrideName | string | "" |
Name of the internal traffic service to target the agent running on the local node |
| agents.networkPolicy.create | bool | false |
If true, create a NetworkPolicy for the agents. DEPRECATED. Use datadog.networkPolicy.create instead |
| agents.nodeSelector | object | {} |
Allow the DaemonSet to schedule on selected nodes |
| agents.podAnnotations | object | {} |
Annotations to add to the DaemonSet's Pods |
| agents.podLabels | object | {} |
Sets podLabels if defined |
| agents.podSecurity.allowedUnsafeSysctls | list | [] |
Allowed unsafe sysclts |
| agents.podSecurity.apparmor.enabled | bool | true |
If true, enable apparmor enforcement |
| agents.podSecurity.apparmorProfiles | list | ["runtime/default","unconfined"] |
Allowed apparmor profiles |
| agents.podSecurity.capabilities | list | ["SYS_ADMIN","SYS_RESOURCE","SYS_PTRACE","NET_ADMIN","NET_BROADCAST","NET_RAW","IPC_LOCK","CHOWN","AUDIT_CONTROL","AUDIT_READ","DAC_READ_SEARCH","MKNOD"] |
Allowed capabilities |
| agents.podSecurity.defaultApparmor | string | "runtime/default" |
Default AppArmor profile for all containers but system-probe |
| agents.podSecurity.podSecurityPolicy.create | bool | false |
If true, create a PodSecurityPolicy resource for Agent pods |
| agents.podSecurity.privileged | bool | false |
If true, Allow to run privileged containers |
| agents.podSecurity.seLinuxContext | object | Must run as spc_t | Provide seLinuxContext configuration for PSP/SCC |
| agents.podSecurity.seccompProfiles | list | ["runtime/default","localhost/system-probe"] |
Allowed seccomp profiles |
| agents.podSecurity.securityContextConstraints.create | bool | false |
If true, create a SecurityContextConstraints resource for Agent pods |
| agents.podSecurity.volumes | list | ["configMap","downwardAPI","emptyDir","hostPath","secret"] |
Allowed volumes types |
| agents.priorityClassCreate | bool | false |
Creates a priorityClass for the Datadog Agent's Daemonset pods. |
| agents.priorityClassName | string | nil |
Sets PriorityClassName if defined |
| agents.priorityClassValue | int | 1000000000 |
Value used to specify the priority of the scheduling of Datadog Agent's Daemonset pods. |
| agents.priorityPreemptionPolicyValue | string | "PreemptLowerPriority" |
Set to "Never" to change the PriorityClass to non-preempting |
| agents.rbac.automountServiceAccountToken | bool | true |
If true, automatically mount the ServiceAccount's API credentials if agents.rbac.create is true |
| agents.rbac.create | bool | true |
If true, create & use RBAC resources |
| agents.rbac.serviceAccountAdditionalLabels | object | {} |
Labels to add to the ServiceAccount if agents.rbac.create is true |
| agents.rbac.serviceAccountAnnotations | object | {} |
Annotations to add to the ServiceAccount if agents.rbac.create is true |
| agents.rbac.serviceAccountName | string | "default" |
Specify a preexisting ServiceAccount to use if agents.rbac.create is false |
| agents.revisionHistoryLimit | int | 10 |
The number of ControllerRevision to keep in this DaemonSet. |
| agents.shareProcessNamespace | bool | false |
Set the process namespace sharing on the Datadog Daemonset |
| agents.terminationGracePeriodSeconds | int | nil |
Configure the termination grace period for the Agent |
| agents.tolerations | list | [] |
Allow the DaemonSet to schedule on tainted nodes (requires Kubernetes >= 1.6) |
| agents.updateStrategy | object | {"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"} |
Allow the DaemonSet to perform a rolling update on helm update |
| agents.useConfigMap | string | nil |
Configures a configmap to provide the agent configuration. Use this in combination with the agents.customAgentConfig parameter. |
| agents.useHostNetwork | bool | false |
Bind ports on the hostNetwork |
| agents.volumeMounts | list | [] |
Specify additional volumes to mount in all containers of the agent pod |
| agents.volumes | list | [] |
Specify additional volumes to mount in the dd-agent container |
| clusterAgent.additionalLabels | object | {} |
Adds labels to the Cluster Agent deployment and pods |
| clusterAgent.admissionController.agentSidecarInjection.clusterAgentCommunicationEnabled | bool | true |
Enable communication between Agent sidecars and the Cluster Agent. |
| clusterAgent.admissionController.agentSidecarInjection.clusterAgentTlsVerification | object | {"copyCaConfigMap":false,"enabled":false} |
TLS verification configuration for sidecar-to-cluster-agent communication. |
| clusterAgent.admissionController.agentSidecarInjection.clusterAgentTlsVerification.copyCaConfigMap | bool | false |
Enable automatic creation of a ConfigMap containing the Cluster Agent's CA certificate in namespaces where sidecar injection occurs. |
| clusterAgent.admissionController.agentSidecarInjection.clusterAgentTlsVerification.enabled | bool | false |
Enable TLS verification for Agent sidecars communicating with the Cluster Agent. |
| clusterAgent.admissionController.agentSidecarInjection.containerRegistry | string | nil |
Override the default registry for the sidecar Agent. |
| clusterAgent.admissionController.agentSidecarInjection.enabled | bool | false |
Enables Datadog Agent sidecar injection. |
| clusterAgent.admissionController.agentSidecarInjection.imageName | string | nil |
|
| clusterAgent.admissionController.agentSidecarInjection.imageTag | string | nil |
|
| clusterAgent.admissionController.agentSidecarInjection.profiles | list | [] |
Defines the sidecar configuration override, currently only one profile is supported. |
| clusterAgent.admissionController.agentSidecarInjection.provider | string | nil |
Used by the admission controller to add infrastructure provider-specific configurations to the Agent sidecar. |
| clusterAgent.admissionController.agentSidecarInjection.selectors | list | [] |
Defines the pod selector for sidecar injection, currently only one rule is supported. |
| clusterAgent.admissionController.configMode | string | nil |
The kind of configuration to be injected, it can be "hostip", "service", "socket" or "csi". |
| clusterAgent.admissionController.containerRegistry | string | nil |
Override the default registry for the admission controller. |
| clusterAgent.admissionController.cwsInstrumentation.enabled | bool | false |
Enable the CWS Instrumentation admission controller endpoint. |
| clusterAgent.admissionController.cwsInstrumentation.mode | string | "remote_copy" |
Mode defines how the CWS Instrumentation should behave. Options are "remote_copy" or "init_container" |
| clusterAgent.admissionController.enabled | bool | true |
Enable the Admission Controller to automatically inject APM/DogStatsD configuration and standard tags (env, service, version) into your pods |
| clusterAgent.admissionController.failurePolicy | string | "Ignore" |
Set the failure policy for dynamic admission control. |
| clusterAgent.admissionController.kubernetesAdmissionEvents.enabled | bool | false |
Enable the Kubernetes Admission Events feature. |
| clusterAgent.admissionController.mutateUnlabelled | bool | false |
Enable injecting config into pods that do not have the label 'admission.datadoghq.com/enabled="true"' |
| clusterAgent.admissionController.mutation | object | {"enabled":true} |
Mutation Webhook configuration options |
| clusterAgent.admissionController.mutation.enabled | bool | true |
Enabled enables the Admission Controller mutation webhook. Default: true. (Requires Agent 7.59.0+). |
| clusterAgent.admissionController.port | int | 8000 |
Set port of cluster-agent admission controller service |
| clusterAgent.admissionController.probe.enabled | bool | false |
Enable the admission controller connectivity probe. The probe periodically sends dry-run ConfigMap creation requests to verify the webhook is reachable from the API server. (Requires Cluster Agent 7.78.0+). |
| clusterAgent.admissionController.probe.gracePeriod | int | 60 |
Seconds to wait at startup before the first probe. |
| clusterAgent.admissionController.probe.interval | int | 60 |
Seconds between probe executions. |
| clusterAgent.admissionController.remoteInstrumentation.enabled | bool | false |
Enable polling and applying library injection using Remote Config. This feature is in beta, and enables Remote Config in the Cluster Agent. It also requires Cluster Agent version 7.43+. Enabling this feature grants the Cluster Agent the permissions to patch Deployment objects in the cluster. |
| clusterAgent.admissionController.validation | object | {"enabled":true} |
Validation Webhook configuration options |
| clusterAgent.admissionController.validation.enabled | bool | true |
Enabled enables the Admission Controller validation webhook. Default: true. (Requires Agent 7.59.0+). |
| clusterAgent.admissionController.webhookName | string | "datadog-webhook" |
Name of the validatingwebhookconfiguration and mutatingwebhookconfiguration created by the cluster-agent |
| clusterAgent.advancedConfd | object | {} |
Provide additional cluster check configurations. Each key is an integration containing several config files. |
| clusterAgent.affinity | object | {} |
Allow the Cluster Agent Deployment to schedule using affinity rules |
| clusterAgent.celWorkloadExclude | string | nil |
Exclude workloads using a CEL-based definition in the Cluster Agent. (Requires Agent 7.73.0+) ref: https://docs.datadoghq.com/containers/guide/container-discovery-management/ |
| clusterAgent.command | list | [] |
Command to run in the Cluster Agent container as entrypoint |
| clusterAgent.confd | object | {} |
Provide additional cluster check configurations. Each key will become a file in /conf.d. |
| clusterAgent.containerExclude | string | nil |
Exclude containers from the Cluster Agent Autodiscovery, as a space-separated list. (Requires Agent/Cluster Agent 7.50.0+) |
| clusterAgent.containerInclude | string | nil |
Include containers in the Cluster Agent Autodiscovery, as a space-separated list. If a container matches an include rule, it’s always included in the Autodiscovery. (Requires Agent/Cluster Agent 7.50.0+) |
| clusterAgent.containers.clusterAgent.securityContext | object | {"allowPrivilegeEscalation":false,"readOnlyRootFilesystem":true} |
Specify securityContext on the cluster-agent container. |
| clusterAgent.containers.initContainers.resources | object | {} |
Resource requests and limits for the Cluster Agent init containers |
| clusterAgent.containers.initContainers.securityContext | object | {} |
Specify securityContext on the initContainers. |
| clusterAgent.createPodDisruptionBudget | bool | false |
Create pod disruption budget for Cluster Agent deployments. DEPRECATED. Use clusterAgent.pdb.create instead |
| clusterAgent.datadog_cluster_yaml | object | {} |
Specify custom contents for the datadog cluster agent config (datadog-cluster.yaml) |
| clusterAgent.deploymentAnnotations | object | {} |
Annotations to add to the cluster-agent's Deployment |
| clusterAgent.dnsConfig | object | {} |
Specify DNS configuration options for the Datadog Cluster Agent containers, e.g. ndots |
| clusterAgent.enabled | bool | true |
Set this to false to disable Datadog Cluster Agent |
| clusterAgent.env | list | [] |
Set environment variables specific to Cluster Agent |
| clusterAgent.envDict | object | {} |
Set environment variables specific to Cluster Agent defined in a dict |
| clusterAgent.envFrom | list | [] |
Set environment variables specific to Cluster Agent from configMaps and/or secrets |
| clusterAgent.healthPort | int | 5556 |
Port number to use in the Cluster Agent for the healthz endpoint |
| clusterAgent.image.digest | string | "" |
Cluster Agent image digest to use, takes precedence over tag if specified |
| clusterAgent.image.doNotCheckTag | string | nil |
Skip the version and chart compatibility check |
| clusterAgent.image.name | string | "cluster-agent" |
Cluster Agent image name to use (relative to registry) |
| clusterAgent.image.pullPolicy | string | "IfNotPresent" |
Cluster Agent image pullPolicy |
| clusterAgent.image.pullSecrets | list | [] |
Cluster Agent repository pullSecret (ex: specify docker registry credentials) |
| clusterAgent.image.repository | string | nil |
Override default registry + image.name for Cluster Agent |
| clusterAgent.image.tag | string | "7.78.0" |
Cluster Agent image tag to use |
| clusterAgent.kubernetesApiserverCheck.disableUseComponentStatus | bool | false |
Set this to true to disable use_component_status for the kube_apiserver integration. |
| clusterAgent.livenessProbe | object | Every 15s / 6 KO / 1 OK | Override default Cluster Agent liveness probe settings |
| clusterAgent.metricsProvider.aggregator | string | "avg" |
Define the aggregator the cluster agent will use to process the metrics. The options are (avg, min, max, sum) |
| clusterAgent.metricsProvider.createReaderRbac | bool | true |
Create external-metrics-reader RBAC automatically (to allow HPA to read data from Cluster Agent) |
| clusterAgent.metricsProvider.enabled | bool | false |
Set this to true to enable Metrics Provider |
| clusterAgent.metricsProvider.endpoint | string | nil |
Override the external metrics provider endpoint. If not set, the cluster-agent defaults to datadog.site |
| clusterAgent.metricsProvider.registerAPIService | bool | true |
Set this to false to disable external metrics registration as an APIService |
| clusterAgent.metricsProvider.service.port | int | 8443 |
Set port of cluster-agent metrics server service (Kubernetes >= 1.15) |
| clusterAgent.metricsProvider.service.type | string | "ClusterIP" |
Set type of cluster-agent metrics server service |
| clusterAgent.metricsProvider.useDatadogMetrics | bool | false |
Enable usage of DatadogMetric CRD to autoscale on arbitrary Datadog queries |
| clusterAgent.metricsProvider.wpaController | bool | false |
Enable informer and controller of the watermark pod autoscaler |
| clusterAgent.networkPolicy.create | bool | false |
If true, create a NetworkPolicy for the cluster agent. DEPRECATED. Use datadog.networkPolicy.create instead |
| clusterAgent.nodeSelector | object | {} |
Allow the Cluster Agent Deployment to be scheduled on selected nodes |
| clusterAgent.pdb.create | bool | false |
Enable pod disruption budget for Cluster Agent deployments. |
| clusterAgent.pdb.maxUnavailable | string | nil |
Maximum number of pods that can be unavailable during a disruption |
| clusterAgent.pdb.minAvailable | string | nil |
Minimum number of pods that must remain available during a disruption |
| clusterAgent.podAnnotations | object | {} |
Annotations to add to the cluster-agent's pod(s) |
| clusterAgent.podSecurity.podSecurityPolicy.create | bool | false |
If true, create a PodSecurityPolicy resource for Cluster Agent pods |
| clusterAgent.podSecurity.securityContextConstraints.create | bool | false |
If true, create a SCC resource for Cluster Agent pods |
| clusterAgent.priorityClassName | string | nil |
Name of the priorityClass to apply to the Cluster Agent |
| clusterAgent.privateActionRunner.actionsAllowlist | list | [] |
List of actions executable by the Private Action Runner |
| clusterAgent.privateActionRunner.enabled | bool | false |
Enable the Private Action Runner to execute workflow actions |
| clusterAgent.privateActionRunner.identityFromExistingSecret | string | nil |
Use an existing Secret that stores the Private Action Runner URN and private key. The Secret should contain 'urn' and 'private_key' keys. If set, this parameter takes precedence over "urn" and "privateKey" |
| clusterAgent.privateActionRunner.identitySecretName | string | "datadog-private-action-runner-identity" |
Name of the Kubernetes Secret used to store the PAR identity when self-enrollment is enabled. The Cluster Agent creates and manages this Secret for storing the enrolled runner's URN and private key. RBAC permissions are granted specifically for this Secret name |
| clusterAgent.privateActionRunner.privateKey | string | nil |
Private key for the Private Action Runner (required if selfEnroll is false). This key is used to authenticate the runner with Datadog |
| clusterAgent.privateActionRunner.selfEnroll | bool | true |
Enable self-enrollment for the Private Action Runner. When enabled, the runner automatically registers itself with Datadog using the provided API/APP keys and stores its identity in a Kubernetes Secret. Requires leader election to be enabled. |
| clusterAgent.privateActionRunner.urn | string | nil |
URN of the Private Action Runner (required if selfEnroll is false). Format: urn:datadog:private-action-runner:organization:<org_id>:runner:<runner_id> |
| clusterAgent.rbac.automountServiceAccountToken | bool | true |
If true, automatically mount the ServiceAccount's API credentials if clusterAgent.rbac.create is true |
| clusterAgent.rbac.create | bool | true |
If true, create & use RBAC resources |
| clusterAgent.rbac.flareAdditionalPermissions | bool | true |
If true, add Secrets and Configmaps get/list permissions to retrieve user Datadog Helm values from Cluster Agent namespace |
| clusterAgent.rbac.serviceAccountAdditionalLabels | object | {} |
Labels to add to the ServiceAccount if clusterAgent.rbac.create is true |
| clusterAgent.rbac.serviceAccountAnnotations | object | {} |
Annotations to add to the ServiceAccount if clusterAgent.rbac.create is true |
| clusterAgent.rbac.serviceAccountName | string | "default" |
Specify a preexisting ServiceAccount to use if clusterAgent.rbac.create is false |
| clusterAgent.readinessProbe | object | Every 15s / 6 KO / 1 OK | Override default Cluster Agent readiness probe settings |
| clusterAgent.replicas | int | 1 |
Specify the number of Cluster Agent replicas; if > 1, the Cluster Agent runs in high-availability (HA) mode. |
| clusterAgent.resources | object | {} |
Datadog cluster-agent resource requests and limits. |
| clusterAgent.revisionHistoryLimit | int | 10 |
The number of old ReplicaSets to keep in this Deployment. |
| clusterAgent.securityContext | object | {} |
Allows you to overwrite the default PodSecurityContext on the cluster-agent pods. |
| clusterAgent.shareProcessNamespace | bool | false |
Set the process namespace sharing on the Datadog Cluster Agent |
| clusterAgent.startupProbe | object | Every 15s / 6 KO / 1 OK | Override default Cluster Agent startup probe settings |
| clusterAgent.strategy | object | {"rollingUpdate":{"maxSurge":1,"maxUnavailable":0},"type":"RollingUpdate"} |
Allow the Cluster Agent deployment to perform a rolling update on helm update |
| clusterAgent.token | string | "" |
Cluster Agent token is a preshared key between node Agents and the Cluster Agent (autogenerated if empty; must be at least 32 characters, a-zA-Z) |
| clusterAgent.tokenExistingSecret | string | "" |
Existing secret name to use for Cluster Agent token. Put the Cluster Agent token in a key named token inside the Secret |
| clusterAgent.tolerations | list | [] |
Allow the Cluster Agent Deployment to schedule on tainted nodes (requires Kubernetes >= 1.6) |
| clusterAgent.topologySpreadConstraints | list | [] |
Allow the Cluster Agent Deployment to schedule using pod topology spreading |
| clusterAgent.useHostNetwork | bool | false |
Bind ports on the hostNetwork |
| clusterAgent.volumeMounts | list | [] |
Specify additional volumes to mount in the cluster-agent container |
| clusterAgent.volumes | list | [] |
Specify additional volumes to mount in the cluster-agent container |
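To tie several of the `clusterAgent.*` options above together, a minimal `values.yaml` sketch might look like the following. The keys come from the table above, but the specific values shown are illustrative assumptions to adapt to your environment:

```yaml
# Illustrative values.yaml fragment for the Cluster Agent (keys from the table above)
clusterAgent:
  enabled: true
  replicas: 2                  # > 1 runs the Cluster Agent in HA mode
  metricsProvider:
    enabled: true              # requires datadog.appKey to be set
    useDatadogMetrics: true    # autoscale on arbitrary Datadog queries via DatadogMetric CRDs
  admissionController:
    enabled: true
    mutateUnlabelled: false    # only mutate pods labeled admission.datadoghq.com/enabled="true"
```

Apply it with `helm upgrade --install <RELEASE_NAME> datadog/datadog -f values.yaml`.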
| clusterChecksRunner.additionalLabels | object | {} |
Adds labels to the cluster checks runner deployment and pods |
| clusterChecksRunner.affinity | object | {} |
Allow the ClusterChecks Deployment to schedule using affinity rules. |
| clusterChecksRunner.containers.agent.securityContext | object | {"readOnlyRootFilesystem":true} |
Specify securityContext on the agent container |
| clusterChecksRunner.containers.initContainers.securityContext | object | {} |
Specify securityContext on the init containers |
| clusterChecksRunner.createPodDisruptionBudget | bool | false |
Create the pod disruption budget to apply to the cluster checks agents. DEPRECATED. Use clusterChecksRunner.pdb.create instead |
| clusterChecksRunner.deploymentAnnotations | object | {} |
Annotations to add to the cluster-checks-runner's Deployment |
| clusterChecksRunner.dnsConfig | object | {} |
Specify DNS configuration options for the cluster checks runner containers, e.g. ndots |
| clusterChecksRunner.enabled | bool | false |
If true, deploys a dedicated Agent Deployment for running the Cluster Checks instead of running them in the DaemonSet's Agents. |
| clusterChecksRunner.env | list | [] |
Environment variables specific to Cluster Checks Runner |
| clusterChecksRunner.envDict | object | {} |
Set environment variables specific to Cluster Checks Runner defined in a dict |
| clusterChecksRunner.envFrom | list | [] |
Set environment variables specific to Cluster Checks Runner from configMaps and/or secrets |
| clusterChecksRunner.healthPort | int | 5557 |
Port number to use in the Cluster Checks Runner for the healthz endpoint |
| clusterChecksRunner.image.digest | string | "" |
Define Agent image digest to use, takes precedence over tag if specified |
| clusterChecksRunner.image.name | string | "agent" |
Datadog Agent image name to use (relative to registry) |
| clusterChecksRunner.image.pullPolicy | string | "IfNotPresent" |
Datadog Agent image pull policy |
| clusterChecksRunner.image.pullSecrets | list | [] |
Datadog Agent repository pullSecret (ex: specify docker registry credentials) |
| clusterChecksRunner.image.repository | string | nil |
Override default registry + image.name for Cluster Check Runners |
| clusterChecksRunner.image.tag | string | "7.78.0" |
Define the Agent version to use |
| clusterChecksRunner.image.tagSuffix | string | "" |
Suffix to append to Agent tag |
| clusterChecksRunner.livenessProbe | object | Every 15s / 6 KO / 1 OK | Override default agent liveness probe settings |
| clusterChecksRunner.networkPolicy.create | bool | false |
If true, create a NetworkPolicy for the cluster checks runners. DEPRECATED. Use datadog.networkPolicy.create instead |
| clusterChecksRunner.nodeSelector | object | {} |
Allow the ClusterChecks Deployment to schedule on selected nodes |
| clusterChecksRunner.pdb.create | bool | false |
Enable pod disruption budget for Cluster Checks Runner deployments. |
| clusterChecksRunner.pdb.maxUnavailable | string | nil |
Maximum number of pods that can be unavailable during a disruption |
| clusterChecksRunner.pdb.minAvailable | string | nil |
Minimum number of pods that must remain available during a disruption |
| clusterChecksRunner.podAnnotations | object | {} |
Annotations to add to the cluster-checks-runner's pod(s) |
| clusterChecksRunner.ports | list | [] |
Allows specifying extra ports (hostPorts for instance) for this container |
| clusterChecksRunner.priorityClassName | string | nil |
Name of the priorityClass to apply to the Cluster checks runners |
| clusterChecksRunner.rbac.automountServiceAccountToken | bool | true |
If true, automatically mount the ServiceAccount's API credentials if clusterChecksRunner.rbac.create is true |
| clusterChecksRunner.rbac.create | bool | true |
If true, create & use RBAC resources |
| clusterChecksRunner.rbac.dedicated | bool | false |
If true, use a dedicated RBAC resource for the cluster checks agent(s) |
| clusterChecksRunner.rbac.serviceAccountAdditionalLabels | object | {} |
Labels to add to the ServiceAccount if clusterChecksRunner.rbac.dedicated is true |
| clusterChecksRunner.rbac.serviceAccountAnnotations | object | {} |
Annotations to add to the ServiceAccount if clusterChecksRunner.rbac.dedicated is true |
| clusterChecksRunner.rbac.serviceAccountName | string | "default" |
Specify a preexisting ServiceAccount to use if clusterChecksRunner.rbac.create is false |
| clusterChecksRunner.readinessProbe | object | Every 15s / 6 KO / 1 OK | Override default agent readiness probe settings |
| clusterChecksRunner.remoteConfiguration.enabled | bool | false |
Set to true to enable remote configuration on the Cluster Checks Runner. |
| clusterChecksRunner.replicas | int | 2 |
Number of Cluster Checks Runner instances |
| clusterChecksRunner.resources | object | {} |
Datadog clusterchecks-agent resource requests and limits. |
| clusterChecksRunner.revisionHistoryLimit | int | 10 |
The number of old ReplicaSets to keep in this Deployment. |
| clusterChecksRunner.securityContext | object | {} |
Allows you to overwrite the default PodSecurityContext on the clusterchecks pods. |
| clusterChecksRunner.startupProbe | object | Every 15s / 6 KO / 1 OK | Override default agent startup probe settings |
| clusterChecksRunner.strategy | object | {"rollingUpdate":{"maxSurge":1,"maxUnavailable":0},"type":"RollingUpdate"} |
Allow the ClusterChecks deployment to perform a rolling update on helm update |
| clusterChecksRunner.tolerations | list | [] |
Tolerations for pod assignment |
| clusterChecksRunner.topologySpreadConstraints | list | [] |
Allow the ClusterChecks Deployment to schedule using pod topology spreading |
| clusterChecksRunner.volumeMounts | list | [] |
Specify additional volumes to mount in the cluster checks container |
| clusterChecksRunner.volumes | list | [] |
Specify additional volumes for the cluster checks runner pod(s) |
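Similarly, the `clusterChecksRunner.*` options can be combined to move cluster checks off the node Agents and into a dedicated Deployment. A sketch, with illustrative values:

```yaml
# Illustrative values.yaml fragment for dedicated cluster check runners
datadog:
  clusterChecks:
    enabled: true      # dispatch cluster checks via the Cluster Agent
clusterChecksRunner:
  enabled: true        # run cluster checks in a dedicated Deployment instead of the DaemonSet's Agents
  replicas: 2
  pdb:
    create: true
    minAvailable: 1    # keep at least one runner available during disruptions
```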
| commonLabels | object | {} |
Labels to apply to all resources |
| datadog-crds.crds.datadogMetrics | bool | true |
Set to true to deploy the DatadogMetrics CRD |
| datadog-crds.crds.datadogPodAutoscalers | bool | true |
Set to true to deploy the DatadogPodAutoscalers CRD |
| datadog.apiKey | string | nil |
Your Datadog API key |
| datadog.apiKeyExistingSecret | string | nil |
Use existing Secret which stores API key instead of creating a new one. The value should be set with the api-key key inside the secret. |
| datadog.apm.enabled | bool | false |
Enable APM and tracing, on port 8126. DEPRECATED. Use datadog.apm.portEnabled instead |
| datadog.apm.errorTrackingStandalone.enabled | bool | false |
Enables Error Tracking for backend services. |
| datadog.apm.hostSocketPath | string | "/var/run/datadog" |
Host path to the trace-agent socket |
| datadog.apm.instrumentation.disabledNamespaces | list | [] |
Disable injecting the Datadog APM libraries into pods in specific namespaces. |
| datadog.apm.instrumentation.enabled | bool | false |
Enable injecting the Datadog APM libraries into all pods in the cluster. |
| datadog.apm.instrumentation.enabledNamespaces | list | [] |
Enable injecting the Datadog APM libraries into pods in specific namespaces. |
| datadog.apm.instrumentation.injectionMode | string | "" |
The injection mode to use for libraries injection. Valid values are: "auto", "init_container", "csi" (experimental, requires Cluster Agent 7.76.0+ and Datadog CSI Driver), "image_volume" (experimental, requires Cluster Agent 7.77.0+) Empty by default so the Cluster Agent can apply its own defaults. |
| datadog.apm.instrumentation.injector.imageTag | string | "" |
The image tag to use for the APM Injector (preview). |
| datadog.apm.instrumentation.language_detection.enabled | bool | true |
Run language detection to automatically detect languages of user workloads (preview). |
| datadog.apm.instrumentation.libVersions | object | {} |
Inject specific version of tracing libraries with Single Step Instrumentation. |
| datadog.apm.instrumentation.skipKPITelemetry | bool | false |
Disable generating Configmap for APM Instrumentation KPIs |
| datadog.apm.instrumentation.targets | list | [] |
Enable target-based workload selection. Requires Cluster Agent 7.64.0+. Using ddTraceConfigs[].valueFrom requires Cluster Agent 7.66.0+. |
| datadog.apm.port | int | 8126 |
Override the trace Agent port |
| datadog.apm.portEnabled | bool | false |
Enable APM over TCP communication (hostPort 8126 by default) |
| datadog.apm.socketEnabled | bool | true |
Enable APM over Socket (Unix Socket or windows named pipe) |
| datadog.apm.socketPath | string | "/var/run/datadog/apm.socket" |
Path to the trace-agent socket |
| datadog.apm.useLocalService | bool | false |
Enable APM over TCP communication to use the local service only (requires Kubernetes v1.22+) Note: The hostPort 8126 is disabled when this is enabled. |
| datadog.apm.useSocketVolume | bool | false |
Enable APM over Unix Domain Socket DEPRECATED. Use datadog.apm.socketEnabled instead |
| datadog.appKey | string | nil |
Datadog APP key required to use metricsProvider |
| datadog.appKeyExistingSecret | string | nil |
Use existing Secret which stores APP key instead of creating a new one. The value should be set with the app-key key inside the secret. |
| datadog.appsec.injector.autoDetect | bool | true |
Automatically detect and inject supported proxies in the cluster (Envoy Gateway, Istio Gateway API, native Istio Gateway) |
| datadog.appsec.injector.enabled | bool | false |
Enable App & API Protection on your cluster ingresses, across the whole cluster at once |
| datadog.appsec.injector.mode | string | "" |
Deployment mode for the AppSec processor. Valid values: "sidecar", "external". Leave empty to use the agent default (sidecar). Upgrading users who rely on the external-processor flow (processor.address / processor.service.*) should set this to "external" explicitly. |
| datadog.appsec.injector.processor.address | string | "" |
Address of the AppSec processor service Defaults to {service.name}.{service.namespace}.svc |
| datadog.appsec.injector.processor.port | int | 443 |
Port of the AppSec processor service (defaults to 443) |
| datadog.appsec.injector.processor.service.name | string | "" |
Name of the AppSec processor service |
| datadog.appsec.injector.processor.service.namespace | string | "" |
Namespace where the AppSec processor service is deployed |
| datadog.appsec.injector.proxies | list | [] |
Manually specify which proxy types to inject. Valid values: "envoy-gateway", "istio", "istio-gateway". When autoDetect is true, detected proxies are added to this list; when autoDetect is false, only proxies in this list are enabled |
| datadog.appsec.injector.sidecar.bodyParsingSizeLimit | int | 0 |
Request body parsing size limit in bytes for the AppSec sidecar processor. Set to 0 to leave it unset (default agent behavior). Set to a negative value (e.g. -1) to disable body parsing entirely. |
| datadog.appsec.injector.sidecar.healthPort | int | 8081 |
Health check port for the AppSec sidecar processor |
| datadog.appsec.injector.sidecar.image | string | "ghcr.io/datadog/dd-trace-go/service-extensions-callout" |
Container image for the AppSec sidecar processor |
| datadog.appsec.injector.sidecar.imageTag | string | "v2.6.0" |
Image tag for the AppSec sidecar processor |
| datadog.appsec.injector.sidecar.port | int | 8080 |
Listening port for the AppSec sidecar processor |
| datadog.appsec.injector.sidecar.resources.limits.cpu | string | "" |
Optional CPU limit for the AppSec sidecar processor |
| datadog.appsec.injector.sidecar.resources.limits.memory | string | "" |
Optional memory limit for the AppSec sidecar processor |
| datadog.appsec.injector.sidecar.resources.requests.cpu | string | "10m" |
CPU request for the AppSec sidecar processor |
| datadog.appsec.injector.sidecar.resources.requests.memory | string | "128Mi" |
Memory request for the AppSec sidecar processor |
| datadog.asm.iast.enabled | bool | false |
Enable Application Security Management Interactive Application Security Testing (IAST) by injecting the DD_IAST_ENABLED=true environment variable into all pods in the cluster |
| datadog.asm.sca.enabled | bool | false |
Enable Application Security Management Software Composition Analysis (SCA) by injecting the DD_APPSEC_SCA_ENABLED=true environment variable into all pods in the cluster |
| datadog.asm.threats.enabled | bool | false |
Enable Application Security Management Threats (App & API Protection) by injecting the DD_APPSEC_ENABLED=true environment variable into all pods in the cluster |
| datadog.autoscaling.workload.enabled | string | nil |
Enable Workload Autoscaling. |
| datadog.celWorkloadExclude | string | nil |
Exclude workloads using a CEL-based definition in the Agent. (Requires Agent 7.73.0+) ref: https://docs.datadoghq.com/containers/guide/container-discovery-management/ |
| datadog.checksCardinality | string | nil |
Sets the tag cardinality for the checks run by the Agent. |
| datadog.checksd | object | {} |
Provide additional custom checks as python code |
| datadog.clusterChecks.enabled | bool | true |
Enable the Cluster Checks feature on both the cluster-agents and the daemonset |
| datadog.clusterChecks.shareProcessNamespace | bool | false |
Set the process namespace sharing on the cluster checks agent |
| datadog.clusterName | string | nil |
Set a unique cluster name to allow scoping hosts and Cluster Checks easily |
| datadog.clusterTagger.collectKubernetesTags | bool | false |
Enables collection of Kubernetes resource tags. |
| datadog.collectEvents | bool | true |
Enable this to start event collection from the Kubernetes API |
| datadog.confd | object | {} |
Provide additional check configurations (static and Autodiscovery) |
| datadog.containerExclude | string | nil |
Exclude containers from Agent Autodiscovery, as a space-separated list |
| datadog.containerExcludeLogs | string | nil |
Exclude logs from Agent Autodiscovery, as a space-separated list |
| datadog.containerExcludeMetrics | string | nil |
Exclude metrics from Agent Autodiscovery, as a space-separated list |
| datadog.containerImageCollection.enabled | bool | true |
Enable collection of container image metadata |
| datadog.containerInclude | string | nil |
Include containers in Agent Autodiscovery, as a space-separated list. If a container matches an include rule, it’s always included in Autodiscovery |
| datadog.containerIncludeLogs | string | nil |
Include logs in Agent Autodiscovery, as a space-separated list |
| datadog.containerIncludeMetrics | string | nil |
Include metrics in Agent Autodiscovery, as a space-separated list |
| datadog.containerLifecycle.enabled | bool | true |
Enable container lifecycle events collection |
| datadog.containerRuntimeSupport.enabled | bool | true |
Set this to false to disable agent access to container runtime. |
| datadog.criSocketPath | string | nil |
Path to the container runtime socket (if different from Docker) |
| datadog.csi.enabled | bool | false |
Enable the Datadog CSI driver. Requires Cluster Agent 7.67 or later. Note: when set to true, the CSI driver subchart is installed automatically; do not install the CSI driver separately if this is enabled, or you may hit conflicts. |
| datadog.dataPlane.dogstatsd.enabled | bool | false |
Whether or not DogStatsD is enabled in the data plane |
| datadog.dataPlane.enabled | bool | false |
Whether or not the data plane is enabled. Requires Datadog Agent 7.74 or later. The data plane feature is currently in preview; please reach out to your Datadog representative for more information. |
| datadog.dataPlane.image.digest | string | "" |
Define the data plane image digest to use, takes precedence over tag if specified |
| datadog.dataPlane.image.name | string | "agent-data-plane" |
Data plane image name to use (relative to registry) |
| datadog.dataPlane.image.pullPolicy | string | "IfNotPresent" |
Data plane image pull policy |
| datadog.dataPlane.image.repository | string | nil |
Override default registry + image.name for data plane |
| datadog.dataPlane.image.tag | string | "0.1.30" |
Define the data plane version to use |
| datadog.dd_url | string | nil |
The host of the Datadog intake server to send Agent data to, only set this option if you need the Agent to send data to a custom URL |
| datadog.disableDefaultOsReleasePaths | bool | false |
Set this to true to disable mounting datadog.osReleasePath in all containers |
| datadog.disablePasswdMount | bool | false |
Set this to true to disable mounting /etc/passwd in all containers |
| datadog.discovery.enabled | bool | nil |
Enable Service Discovery |
| datadog.discovery.networkStats.enabled | bool | true |
Enable Service Discovery Network Stats |
| datadog.dockerSocketPath | string | nil |
Path to the docker socket |
| datadog.dogstatsd.hostSocketPath | string | "/var/run/datadog" |
Host path to the DogStatsD socket |
| datadog.dogstatsd.nonLocalTraffic | bool | true |
Enable this to make each node accept non-local statsd traffic (from outside of the pod) |
| datadog.dogstatsd.originDetection | bool | false |
Enable origin detection for container tagging |
| datadog.dogstatsd.port | int | 8125 |
Override the Agent DogStatsD port |
| datadog.dogstatsd.socketPath | string | "/var/run/datadog/dsd.socket" |
Path to the DogStatsD socket |
| datadog.dogstatsd.tagCardinality | string | "low" |
Sets the tag cardinality relative to the origin detection |
| datadog.dogstatsd.tags | list | [] |
List of static tags to attach to every custom metric, event and service check collected by Dogstatsd. |
| datadog.dogstatsd.useHostPID | bool | false |
Run the Agent in the host's PID namespace. DEPRECATED: use datadog.useHostPID instead. |
| datadog.dogstatsd.useHostPort | bool | false |
Sets the hostPort to the same value of the container port |
| datadog.dogstatsd.useSocketVolume | bool | true |
Enable DogStatsD over Unix Domain Socket with a hostPath volume |
| datadog.dynamicInstrumentationGo.enabled | bool | false |
Enable Dynamic Instrumentation and Live Debugger for Go services. |
| datadog.env | list | [] |
Set environment variables for all Agents |
| datadog.envDict | object | {} |
Set environment variables for all Agents defined in a dict |
| datadog.envFrom | list | [] |
Set environment variables for all Agents directly from configMaps and/or secrets |
| datadog.excludePauseContainer | bool | true |
Exclude pause containers from Agent Autodiscovery. |
| datadog.expvarPort | int | 6000 |
Specify the port to expose pprof and expvar on, so it does not interfere with the Cluster Agent metrics port, which defaults to 5000 |
| datadog.gpuMonitoring.configureCgroupPerms | bool | false |
Configure cgroup permissions for GPU monitoring |
| datadog.gpuMonitoring.enabled | bool | false |
Enable GPU monitoring core check |
| datadog.gpuMonitoring.privilegedMode | bool | false |
Enable advanced GPU metrics and monitoring via system-probe. Note: the system-probe component of the Agent runs with elevated privileges |
| datadog.gpuMonitoring.runtimeClassName | string | "nvidia" |
Runtime class name for the agent pods to get access to NVIDIA resources. Can be left empty to use the default runtime class. |
| datadog.helmCheck.collectEvents | bool | false |
Set this to true to enable event collection in the Helm Check (Requires Agent 7.36.0+ and Cluster Agent 1.20.0+) This requires datadog.HelmCheck.enabled to be set to true |
| datadog.helmCheck.enabled | bool | false |
Set this to true to enable the Helm check (Requires Agent 7.35.0+ and Cluster Agent 1.19.0+) This requires clusterAgent.enabled to be set to true |
| datadog.helmCheck.valuesAsTags | object | {} |
Collects Helm values from a release and uses them as tags (Requires Agent and Cluster Agent 7.40.0+). This requires datadog.HelmCheck.enabled to be set to true |
| datadog.hostProfiler.enabled | bool | false |
Enable the Host Profiler. This feature is experimental and subject to change. |
| datadog.hostProfiler.image | string | "" |
Image the Host Profiler. This parameter is experimental and will be removed once official image is available. |
| datadog.hostVolumeMountPropagation | string | "None" |
Allow to specify the mountPropagation value on all volumeMounts using HostPath |
| datadog.ignoreAutoConfig | list | [] |
List of integration to ignore auto_conf.yaml. |
| datadog.kubeStateMetricsCore.annotationsAsTags | object | {} | Extra annotations to collect from resources and turn into Datadog tags. |
| datadog.kubeStateMetricsCore.collectApiServicesMetrics | bool | false | Enable watching apiservices objects and collecting their corresponding metrics kubernetes_state.apiservice.* (Requires Cluster Agent 7.45.0+) |
| datadog.kubeStateMetricsCore.collectConfigMaps | bool | true | Enable watching configmap objects and collecting their corresponding metrics kubernetes_state.configmap.* |
| datadog.kubeStateMetricsCore.collectCrMetrics | list | [] | Enable watching CustomResource objects and collecting their corresponding metrics kubernetes_state_customresource.* (Requires Cluster Agent 7.63.0+) |
| datadog.kubeStateMetricsCore.collectCrdMetrics | bool | false | Enable watching CRD objects and collecting their corresponding metrics kubernetes_state.crd.* |
| datadog.kubeStateMetricsCore.collectSecretMetrics | bool | true | Enable watching secret objects and collecting their corresponding metrics kubernetes_state.secret.* |
| datadog.kubeStateMetricsCore.collectVpaMetrics | bool | false | Enable watching VPA objects and collecting their corresponding metrics kubernetes_state.vpa.* |
| datadog.kubeStateMetricsCore.enabled | bool | true | Enable the kubernetes_state_core check in the Cluster Agent (Requires Cluster Agent 1.12.0+) |
| datadog.kubeStateMetricsCore.ignoreLegacyKSMCheck | bool | true | Disable the auto-configuration of the legacy kubernetes_state check (only taken into account when datadog.kubeStateMetricsCore.enabled is true) |
| datadog.kubeStateMetricsCore.labelsAsTags | object | {} | Extra labels to collect from resources and turn into Datadog tags. |
| datadog.kubeStateMetricsCore.namespaces | list | [] | Restrict the kubernetes_state_core check to collect metrics only from the specified namespaces. When set, namespace-scoped RBAC is created as a Role+RoleBinding per listed namespace instead of a cluster-wide ClusterRole. Cluster-scoped resources (nodes, persistentvolumes, storageclasses, etc.) are still collected via a ClusterRole. |
| datadog.kubeStateMetricsCore.rbac.create | bool | true | If true, create & use RBAC resources |
| datadog.kubeStateMetricsCore.tags | list | [] | List of static tags to attach to all KSM metrics |
| datadog.kubeStateMetricsCore.useClusterCheckRunners | bool | false | For large clusters where the Kubernetes State Metrics Core check needs to be distributed on dedicated workers. |
| datadog.kubeStateMetricsEnabled | bool | false | If true, deploys the kube-state-metrics deployment |
| datadog.kubeStateMetricsNetworkPolicy.create | bool | false | If true, create a NetworkPolicy for kube-state-metrics |
| datadog.kubelet.agentCAPath | string | /var/run/host-kubelet-ca.crt if hostCAPath else /var/run/secrets/kubernetes.io/serviceaccount/ca.crt | Path (inside Agent containers) where the Kubelet CA certificate is stored |
| datadog.kubelet.coreCheckEnabled | bool | true | Toggle if kubelet core check should be used instead of Python check. (Requires Agent/Cluster Agent 7.53.0+) |
| datadog.kubelet.fineGrainedAuthorization | bool | false | Enable fine-grained authorization for the kubelet (requires Kubernetes 1.32+) |
| datadog.kubelet.host | object | {"valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}} | Override the kubelet IP |
| datadog.kubelet.hostCAPath | string | None (no mount from host) | Path (on host) where the Kubelet CA certificate is stored |
| datadog.kubelet.podLogsPath | string | /var/log/pods on Linux, C:\var\log\pods on Windows | Path (on host) where the PODs logs are located |
| datadog.kubelet.podResourcesSocketDir | string | /var/lib/kubelet/pod-resources | Path (on host) where the kubelet.sock socket for the PodResources API is located |
| datadog.kubelet.tlsVerify | string | true | Toggle kubelet TLS verification |
| datadog.kubelet.useApiServer | bool | false | Enable this to query the pod list from the API Server instead of the Kubelet. (Requires Agent 7.65.0+) |
| datadog.kubernetesEvents.collectedEventTypes | list | [{"kind":"Pod","reasons":["Failed","BackOff","Unhealthy","FailedScheduling","FailedMount","FailedAttachVolume"]},{"kind":"Node","reasons":["TerminatingEvictedPod","NodeNotReady","Rebooted","HostPortConflict"]},{"kind":"CronJob","reasons":["SawCompletedJob"]}] | Event types to be collected. This requires datadog.kubernetesEvents.unbundleEvents to be set to true. |
| datadog.kubernetesEvents.filteringEnabled | bool | false | Enable this to only include events that match the pre-defined allowed events. (Requires Cluster Agent 7.57.0+.) |
| datadog.kubernetesEvents.kubernetesEventResyncPeriodS | string | nil | Specify the frequency in seconds at which the Agent should list all events to re-sync, following the informer pattern |
| datadog.kubernetesEvents.maxEventsPerRun | string | nil | Maximum number of events to collect per check run. |
| datadog.kubernetesEvents.sourceDetectionEnabled | bool | false | Enable this to map Kubernetes events to integration sources based on controller names. (Requires Cluster Agent 7.56.0+.) |
| datadog.kubernetesEvents.unbundleEvents | bool | false | Allow unbundling of Kubernetes events, with a 1:1 mapping between Kubernetes and Datadog events. (Requires Cluster Agent 7.42.0+.) |
| datadog.kubernetesKubeServiceIgnoreReadiness | bool | false | Enable this to attach the kube_service tag unconditionally. (Requires Cluster Agent 7.76.0+.) |
| datadog.kubernetesResourcesAnnotationsAsTags | object | {} | Provide a mapping of Kubernetes resource annotations to Datadog tags |
| datadog.kubernetesResourcesLabelsAsTags | object | {} | Provide a mapping of Kubernetes resource labels to Datadog tags |
| datadog.kubernetesUseEndpointSlices | bool | true | Enable this to map Kubernetes services to EndpointSlices instead of Endpoints. (Requires Cluster Agent 7.62.0+.) |
| datadog.leaderElection | bool | true | Enables the leader election mechanism for event collection |
| datadog.leaderElectionResource | string | "configmap" | Selects the default resource to use for leader election. Can be "lease" / "leases" (only supported in Agent 7.47+), "configmap" / "configmaps", or "" to automatically detect which one to use. |
| datadog.leaderLeaseDuration | string | nil | Set the lease time for leader election, in seconds |
| datadog.logLevel | string | "INFO" | Set logging verbosity; valid log levels are: trace, debug, info, warn, error, critical, off |
| datadog.logs.autoMultiLineDetection | bool | false | Allows the Agent to detect common multi-line patterns automatically. |
| datadog.logs.containerCollectAll | bool | false | Enable this to allow log collection for all containers |
| datadog.logs.containerCollectUsingFiles | bool | true | Collect logs from files in /var/log/pods instead of using the container runtime API |
| datadog.logs.enabled | bool | false | Enable this to activate Datadog Agent log collection |
| datadog.namespaceAnnotationsAsTags | object | {} | Provide a mapping of Kubernetes namespace annotations to Datadog tags |
| datadog.namespaceLabelsAsTags | object | {} | Provide a mapping of Kubernetes namespace labels to Datadog tags |
| datadog.networkMonitoring.dnsMonitoringPorts | list | [53] (set by agent) | List of ports to monitor for DNS traffic |
| datadog.networkMonitoring.enabled | bool | false | Enable Cloud Network Monitoring |
| datadog.networkPath.collector.pathtestContextsLimit | string | nil | Override the maximum number of pathtests stored to run |
| datadog.networkPath.collector.pathtestInterval | string | nil | Override the time interval between pathtest runs |
| datadog.networkPath.collector.pathtestMaxPerMinute | string | nil | Override the limit for total pathtests run per minute |
| datadog.networkPath.collector.pathtestTTL | string | nil | Override the TTL in minutes for pathtests |
| datadog.networkPath.collector.workers | string | nil | Override the number of workers |
| datadog.networkPath.connectionsMonitoring.enabled | bool | false | Enable Network Path's "Network traffic paths" feature. Requires the traceroute system-probe module to be enabled. |
| datadog.networkPolicy.cilium.dnsSelector | object | kube-dns in namespace kube-system | Cilium selector of the DNS server entity |
| datadog.networkPolicy.create | bool | false | If true, create a NetworkPolicy for all the components |
| datadog.networkPolicy.flavor | string | "kubernetes" | Flavor of the network policy to use. Can be: kubernetes for networking.k8s.io/v1/NetworkPolicy, or cilium for cilium.io/v2/CiliumNetworkPolicy |
| datadog.nodeLabelsAsTags | object | {} | Provide a mapping of Kubernetes node labels to Datadog tags |
| datadog.operator.enabled | bool | true | Enable the Datadog Operator. |
| datadog.operator.migration.enabled | bool | false | Enable migration of Agent workloads to be managed by the Datadog Operator. Creates a DatadogAgent manifest based on the current release's values.yaml. |
| datadog.operator.migration.preview | bool | false | Set to true to preview the DatadogAgent manifest mapped from the Helm release's values.yaml. The mapped DatadogAgent manifest can be viewed by checking the dda-mapper container logs in the migration job. |
| datadog.operator.migration.userValues | string | "" | Provide datadog chart values as a YAML string to be mapped to the DatadogAgent manifest. Use --set-file to pass the file contents: helm install datadog ./charts/datadog --set-file datadog.operator.migration.userValues=myValues.yaml -f myValues.yaml |
| datadog.orchestratorExplorer.container_scrubbing | object | {"enabled":true} | Enable the scrubbing of sensitive information from containers in the Kubernetes resource YAML |
| datadog.orchestratorExplorer.customResources | list | [] | Defines custom resources for the Orchestrator Explorer to collect |
| datadog.orchestratorExplorer.enabled | bool | true | Set this to false to disable the Orchestrator Explorer |
| datadog.orchestratorExplorer.kubelet_configuration_check.enabled | bool | true | Enable the orchestrator kubelet configuration check |
| datadog.originDetectionUnified.enabled | bool | false | Enable the unified mechanism for origin detection. Default: false. (Requires Agent 7.54.0+.) |
| datadog.osReleasePath | string | "/etc/os-release" | Specify the path to your os-release file |
| datadog.otelCollector.config | string | nil | OTel Collector configuration |
| datadog.otelCollector.configMap | object | {"items":null,"key":"otel-config.yaml","name":null} | Use an existing ConfigMap for the DDOT Collector configuration |
| datadog.otelCollector.configMap.items | string | nil | Items within the ConfigMap that contain the DDOT Collector configuration |
| datadog.otelCollector.configMap.key | string | "otel-config.yaml" | Key within the ConfigMap that contains the DDOT Collector configuration |
| datadog.otelCollector.configMap.name | string | nil | Name of the existing ConfigMap that contains the DDOT Collector configuration |
| datadog.otelCollector.enabled | bool | false | Enable the OTel Collector |
| datadog.otelCollector.featureGates | string | nil | Feature gates to pass to the OTel Collector, as a comma-separated list |
| datadog.otelCollector.logs.enabled | bool | false | Enable logs support in the OTel Collector. If true, checks the OTel Collector config for a filelog receiver and mounts additional volumes to collect container and pod logs. |
| datadog.otelCollector.ports | list | [{"containerPort":"4317","name":"otel-grpc","protocol":"TCP"},{"containerPort":"4318","name":"otel-http","protocol":"TCP"}] | Ports that the OTel Collector is listening on |
| datadog.otelCollector.rbac.create | bool | true | If true, check the OTel Collector config for a k8sattributes processor and create the ClusterRole required to access the Kubernetes API |
| datadog.otelCollector.rbac.rules | list | [] | A set of additional RBAC rules to apply to the OTel Collector's ClusterRole |
| datadog.otelCollector.useStandaloneImage | bool | true | If true, the OTel Collector will use the ddot-collector image instead of the agent image. The tag is retrieved from the agents.image.tag value. This is only supported for agent versions 7.67.0+. If set to false, you will need to set agents.image.tagSuffix to full |
| datadog.otlp.logs.enabled | bool | false | Enable logs support in the OTLP ingest endpoint |
| datadog.otlp.receiver.protocols.grpc.enabled | bool | false | Enable the OTLP/gRPC endpoint |
| datadog.otlp.receiver.protocols.grpc.endpoint | string | "0.0.0.0:4317" | OTLP/gRPC endpoint |
| datadog.otlp.receiver.protocols.grpc.useHostPort | bool | true | Enable the host port for the OTLP/gRPC endpoint |
| datadog.otlp.receiver.protocols.http.enabled | bool | false | Enable the OTLP/HTTP endpoint |
| datadog.otlp.receiver.protocols.http.endpoint | string | "0.0.0.0:4318" | OTLP/HTTP endpoint |
| datadog.otlp.receiver.protocols.http.useHostPort | bool | true | Enable the host port for the OTLP/HTTP endpoint |
| datadog.podAnnotationsAsTags | object | {} | Provide a mapping of Kubernetes pod annotations to Datadog tags |
| datadog.podLabelsAsTags | object | {} | Provide a mapping of Kubernetes pod labels to Datadog tags |
| datadog.privateActionRunner.actionsAllowlist | list | [] | List of actions executable by the Private Action Runner |
| datadog.privateActionRunner.enabled | bool | false | Enable the Private Action Runner on the node Agent to execute workflow actions |
| datadog.privateActionRunner.identityFromExistingSecret | string | nil | Use an existing Secret that stores the Private Action Runner URN and private key. The Secret should contain 'urn' and 'private_key' keys. If set, this parameter takes precedence over "urn" and "privateKey" |
| datadog.privateActionRunner.privateKey | string | nil | Private key for the Private Action Runner (required if selfEnroll is false). This key is used to authenticate the runner with Datadog |
| datadog.privateActionRunner.selfEnroll | bool | true | Enable self-enrollment for the Private Action Runner. When enabled, the runner automatically registers itself with Datadog using the provided API/APP keys and stores its identity in a local file. Requires leader election to be enabled. |
| datadog.privateActionRunner.urn | string | nil | URN of the Private Action Runner (required if selfEnroll is false). Format: urn:datadog:private-action-runner:organization:<org_id>:runner:<runner_id> |
| datadog.processAgent.containerCollection | bool | true | Set this to true to enable container collection. ref: https://docs.datadoghq.com/infrastructure/containers/?tab=helm |
| datadog.processAgent.enabled | bool | true | Set this to true to enable the live process monitoring agent. DEPRECATED: set datadog.processAgent.processCollection or datadog.processAgent.containerCollection instead. Note: /etc/passwd is automatically mounted when processCollection, processDiscovery, or containerCollection is enabled. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset |
| datadog.processAgent.processCollection | bool | false | Set this to true to enable process collection |
| datadog.processAgent.processDiscovery | bool | true | Enables or disables autodiscovery of integrations |
| datadog.processAgent.runInCoreAgent | bool | true | Set this to true to run the following features in the core agent: Live Processes, Live Containers, Process Discovery. This requires Agent 7.60.0+ and Linux. DEPRECATED: this behavior is enabled by default for installations that meet the requirements. For Agent 7.78.0+, this setting is ignored; process checks always run in the core agent on Linux. |
| datadog.processAgent.stripProcessArguments | bool | false | Set this to scrub all arguments from collected processes. Requires datadog.processAgent.processCollection to be set to true to have any effect. ref: https://docs.datadoghq.com/infrastructure/process/?tab=linuxwindows#process-arguments-scrubbing |
| datadog.profiling.enabled | string | nil | Enable the Continuous Profiler by injecting the DD_PROFILING_ENABLED environment variable with the same value into all pods in the cluster. Valid values are: false (the profiler is turned off and cannot be turned on by other means), null (the profiler is turned off, but can be turned on by other means), auto (the profiler is turned off, but the library will turn it on if the application is a good candidate for profiling), true (the profiler is turned on). |
| datadog.prometheusScrape.additionalConfigs | list | [] | Allows adding advanced openmetrics check configurations with custom discovery rules. (Requires Agent version 7.27+) |
| datadog.prometheusScrape.enabled | bool | false | Enable autodiscovery of pods and services exposing Prometheus metrics. |
| datadog.prometheusScrape.serviceEndpoints | bool | false | Enable generating dedicated checks for service endpoints. |
| datadog.prometheusScrape.version | int | 2 | Version of the openmetrics check to schedule by default. |
| datadog.remoteConfiguration.enabled | bool | true | Set to true to enable remote configuration. DEPRECATED: consider using remoteConfiguration.enabled instead |
| datadog.sbom.containerImage.analyzers | list | ["os"] | List of analyzers to use for container image SBOM generation |
| datadog.sbom.containerImage.containerExclude | string | nil | Exclude containers from SBOM generation, as a space-separated list |
| datadog.sbom.containerImage.containerInclude | string | nil | Include containers in SBOM generation, as a space-separated list. If a container matches an include rule, it's always included in SBOM generation |
| datadog.sbom.containerImage.enabled | bool | false | Enable SBOM collection for container images |
| datadog.sbom.containerImage.overlayFSDirectScan | bool | false | Use the experimental overlayFS direct scan |
| datadog.sbom.containerImage.uncompressedLayersSupport | bool | true | Use the container runtime snapshotter. This should be set to true when using EKS, GKE, or if containerd is configured to discard uncompressed layers. This feature causes the SYS_ADMIN capability to be added to the Agent container. Setting this to false could cause a high error rate when generating SBOMs due to missing uncompressed layers. See https://docs.datadoghq.com/security/cloud_security_management/troubleshooting/vulnerabilities/#uncompressed-container-image-layers |
| datadog.sbom.host.analyzers | list | ["os"] | List of analyzers to use for host SBOM generation |
| datadog.sbom.host.enabled | bool | false | Enable SBOM collection for host filesystems |
| datadog.secretAnnotations | object | {} |  |
| datadog.secretBackend.arguments | string | nil | Configure the secret backend command arguments (space-separated strings). |
| datadog.secretBackend.command | string | nil | Configure the secret backend command: the path to the secret backend binary. |
| datadog.secretBackend.config | object | {} | Additional configuration for the secret backend type. |
| datadog.secretBackend.enableGlobalPermissions | bool | true | Whether to create a global permission allowing Datadog agents to read all secrets when datadog.secretBackend.command is set to "/readsecret_multiple_providers.sh" or datadog.secretBackend.type is set. |
| datadog.secretBackend.refreshInterval | string | nil | [PREVIEW] Configure the secret backend command refresh interval in seconds. |
| datadog.secretBackend.roles | list | [] | Creates roles for Datadog to read the specified secrets, replacing datadog.secretBackend.enableGlobalPermissions. |
| datadog.secretBackend.timeout | string | nil | Configure the secret backend command timeout in seconds. |
| datadog.secretBackend.type | string | nil | Configure the built-in secret backend type. Alternative to command; when set, the Agent uses the built-in backend to resolve secrets. Requires Agent 7.70+. |
| datadog.securityAgent.compliance.checkInterval | string | "20m" | Compliance check run interval |
| datadog.securityAgent.compliance.configMap | string | nil | Contains CSPM compliance benchmarks that will be used |
| datadog.securityAgent.compliance.containerInclude | string | nil | Include containers in CSPM monitoring, as a space-separated list. If a container matches an include rule, it's always included |
| datadog.securityAgent.compliance.enabled | bool | false | Set to true to enable Cloud Security Posture Management (CSPM) |
| datadog.securityAgent.compliance.host_benchmarks.enabled | bool | true | Set to false to disable host benchmarks. If enabled, this feature requires 160 MB extra memory for the security-agent container. (Requires Agent 7.47.0+) |
| datadog.securityAgent.compliance.xccdf.enabled | bool | false |  |
| datadog.securityAgent.runtime.activityDump.cgroupDumpTimeout | int | 20 | Set to the desired duration of a single container tracing session (in minutes) |
| datadog.securityAgent.runtime.activityDump.cgroupWaitListSize | int | 0 | Set to the size of the wait list for already-traced containers |
| datadog.securityAgent.runtime.activityDump.enabled | bool | true | Set to true to enable the collection of CWS activity dumps |
| datadog.securityAgent.runtime.activityDump.pathMerge.enabled | bool | false | Set to true to enable the merging of similar paths |
| datadog.securityAgent.runtime.activityDump.tracedCgroupsCount | int | 3 | Set to the number of containers that should be traced concurrently |
| datadog.securityAgent.runtime.containerExclude | string | nil |  |
| datadog.securityAgent.runtime.containerInclude | string | nil | Include containers in runtime security monitoring, as a space-separated list. If a container matches an include rule, it's always included |
| datadog.securityAgent.runtime.directSendFromSystemProbe | bool | false | Set to true to enable direct sending of CWS events from system-probe to Datadog, bypassing security-agent. When enabled, the security-agent container will not be created for CWS functionality (it may still be created if compliance features are enabled). |
| datadog.securityAgent.runtime.enabled | bool | false | Set to true to enable Cloud Workload Security (CWS) |
| datadog.securityAgent.runtime.enforcement.enabled | bool | true | Set to false to disable CWS runtime enforcement |
| datadog.securityAgent.runtime.fimEnabled | bool | false | Set to true to enable Cloud Workload Security (CWS) File Integrity Monitoring. DEPRECATED: this option has no effect. Cloud Workload Security is now only controlled by datadog.securityAgent.runtime.enabled. |
| datadog.securityAgent.runtime.network.enabled | bool | true | Set to true to enable the collection of CWS network events |
| datadog.securityAgent.runtime.policies.configMap | string | nil | Contains CWS policies that will be used |
| datadog.securityAgent.runtime.securityProfile.anomalyDetection.enabled | bool | true | Set to true to enable CWS runtime drift events |
| datadog.securityAgent.runtime.securityProfile.autoSuppression.enabled | bool | true | Set to true to enable CWS runtime auto suppression |
| datadog.securityAgent.runtime.securityProfile.enabled | bool | true | Set to true to enable CWS runtime security profiles |
| datadog.securityAgent.runtime.syscallMonitor.enabled | bool | false | Set to true to enable syscall monitoring (recommended for troubleshooting only) |
| datadog.securityAgent.runtime.useSecruntimeTrack | bool | true | Set to true to send Cloud Workload Security (CWS) events directly to the Agent events explorer. This value shouldn't be changed unless advised by Datadog support. |
| datadog.securityContext | object | {"runAsUser":0} | Allows you to overwrite the default PodSecurityContext on the DaemonSet or Deployment |
| datadog.serviceMonitoring.enabled | bool | false | Enable Universal Service Monitoring |
| datadog.serviceMonitoring.http2MonitoringEnabled | string | nil | Enable HTTP/2 & gRPC monitoring for Universal Service Monitoring (Requires Agent 7.53.0+ and kernel 5.2 or later). Empty values use the default setting in the Datadog Agent. |
| datadog.serviceMonitoring.httpMonitoringEnabled | string | nil | Enable HTTP monitoring for Universal Service Monitoring (Requires Agent 7.40.0+). Empty values use the default setting in the Datadog Agent. |
| datadog.serviceMonitoring.tls.go.enabled | bool | nil | Enable TLS monitoring for Go services (Requires Agent 7.51.0+). Empty values use the default setting in the Datadog Agent. |
| datadog.serviceMonitoring.tls.istio.enabled | bool | nil | Enable TLS monitoring for Istio services (Requires Agent 7.50.0+). Empty values use the default setting in the Datadog Agent. |
| datadog.serviceMonitoring.tls.native.enabled | bool | nil | Enable TLS monitoring for native (openssl, libssl, gnutls) services (Requires Agent 7.51.0+). Empty values use the default setting in the Datadog Agent. |
| datadog.serviceMonitoring.tls.nodejs.enabled | bool | nil | Enable TLS monitoring for Node.js services (Requires Agent 7.54.0+). Empty values use the default setting in the Datadog Agent. |
| datadog.site | string | nil | The site of the Datadog intake to send Agent data to. (documentation: https://docs.datadoghq.com/getting_started/site/) |
| datadog.systemProbe.apparmor | string | "unconfined" | Specify an AppArmor profile for system-probe |
| datadog.systemProbe.bpfDebug | bool | false | Enable kernel debug logging |
| datadog.systemProbe.btfPath | string | "" | Specify the path to a BTF file for your kernel |
| datadog.systemProbe.collectDNSStats | bool | true | Enable DNS stat collection |
| datadog.systemProbe.conntrackInitTimeout | string | "10s" | The time to wait for conntrack to initialize before failing |
| datadog.systemProbe.conntrackMaxStateSize | int | 131072 | The maximum size of the userspace conntrack cache |
| datadog.systemProbe.debugPort | int | 0 | Specify the port to expose pprof and expvar for the system-probe agent |
| datadog.systemProbe.enableConntrack | bool | true | Enable the system-probe agent to connect to the netlink/conntrack subsystem to add NAT information to connection data |
| datadog.systemProbe.enableDefaultKernelHeadersPaths | bool | true | Enable mounting of the default paths where kernel headers are stored |
| datadog.systemProbe.enableDefaultOsReleasePaths | bool | true | Enable default os-release file mounts |
| datadog.systemProbe.enableOOMKill | bool | false | Enable the OOM kill eBPF-based check |
| datadog.systemProbe.enableTCPQueueLength | bool | false | Enable the TCP queue length eBPF-based check |
| datadog.systemProbe.maxConnectionStateBuffered | string | nil | Maximum number of concurrent connections for Cloud Network Monitoring |
| datadog.systemProbe.maxTrackedConnections | int | 131072 | The maximum number of tracked connections |
| datadog.systemProbe.mountPackageManagementDirs | list | [] | Enables mounting of specific package management directories when runtime compilation is enabled |
| datadog.systemProbe.runtimeCompilationAssetDir | string | "/var/tmp/datadog-agent/system-probe" | Specify a directory for runtime compilation assets to live in |
| datadog.systemProbe.seccomp | string | "localhost/system-probe" | Apply an ad-hoc seccomp profile to the system-probe agent to restrict its privileges |
| datadog.systemProbe.seccompRoot | string | "/var/lib/kubelet/seccomp" | Specify the seccomp profile root directory |
| datadog.tags | list | [] | List of static tags to attach to every metric, event, and service check collected by this Agent. |
| datadog.traceroute.enabled | bool | false | Enable traceroutes in system-probe for Network Path |
| datadog.useHostPID | bool | true | Run the agent in the host's PID namespace, required for origin detection / unified service tagging |
| existingClusterAgent.clusterchecksEnabled | bool | true | Set this to false if you don't want the agents to run the cluster checks of the joined external Cluster Agent |
| existingClusterAgent.join | bool | false | Set this to true if you want the agents deployed by this chart to connect to a Cluster Agent deployed independently |
| existingClusterAgent.serviceName | string | nil | Existing service name to use for reaching the external Cluster Agent |
| existingClusterAgent.tokenSecretName | string | nil | Existing secret name to use for the external Cluster Agent token |
| fips.customFipsConfig | object | {} | Configure a custom ConfigMap providing the FIPS configuration. Specify custom contents for the FIPS proxy sidecar container config (/etc/datadog-fips-proxy/datadog-fips-proxy.cfg). If empty, the default FIPS proxy sidecar container config is used. |
| fips.enabled | bool | false | Enable the FIPS proxy sidecar. The fips-proxy method is being phased out in favor of FIPS-compliant images (refer to the useFIPSAgent setting). |
| fips.image.digest | string | "" | Define the FIPS sidecar image digest to use; takes precedence over fips.image.tag if specified. |
| fips.image.name | string | "fips-proxy" |  |
| fips.image.pullPolicy | string | "IfNotPresent" | The Datadog FIPS sidecar image pull policy |
| fips.image.repository | string | nil | Override the default registry + image.name for the FIPS sidecar container. |
| fips.image.tag | string | "1.1.22" | Define the FIPS sidecar container version to use. |
| fips.local_address | string | "127.0.0.1" | Set the local IP address. This setting is only used for the fips-proxy sidecar. |
| fips.port | int | 9803 | Specifies which port is used by the containers to communicate with the FIPS sidecar. This setting is only used for the fips-proxy sidecar. |
| fips.portRange | int | 15 | Specifies the number of ports used (https://github.com/DataDog/datadog-agent/blob/7.44.x/pkg/config/config.go#L1564-L1577). This setting is only used for the fips-proxy sidecar. |
| fips.resources | object | {} | Resource requests and limits for the FIPS sidecar container. This setting is only used for the fips-proxy sidecar. |
| fips.use_https | bool | false | Option to enable HTTPS. This setting is only used for the fips-proxy sidecar. |
| fullnameOverride | string | nil | Override the fully qualified app name |
| kube-state-metrics.image.repository | string | "registry.k8s.io/kube-state-metrics/kube-state-metrics" | Default kube-state-metrics image repository. |
| kube-state-metrics.nodeSelector | object | {"kubernetes.io/os":"linux"} | Node selector for KSM. KSM only supports Linux. |
| kube-state-metrics.rbac.create | bool | true | If true, create & use RBAC resources |
| kube-state-metrics.resources | object | {} | Resource requests and limits for the kube-state-metrics container. |
| kube-state-metrics.serviceAccount.create | bool | true | If true, create a ServiceAccount; requires kube-state-metrics.rbac.create to be true |
| kube-state-metrics.serviceAccount.name | string | nil | The name of the ServiceAccount to use. |
| kubeVersionOverride | string | nil | Override Kubernetes version detection. Useful for GitOps tools like FluxCD that don't expose the real cluster version to Helm |
| nameOverride | string | nil | Override the name of the app |
| operator.datadogAgent.enabled | bool | true | Enables Datadog Agent controller |
| operator.datadogAgentInternal.enabled | bool | false | Enables the Datadog Agent Internal controller |
| operator.datadogCRDs.crds.datadogAgentInternals | bool | false | Set to true to deploy the DatadogAgentInternals CRD |
| operator.datadogCRDs.crds.datadogAgents | bool | true | Set to true to deploy the DatadogAgents CRD |
| operator.datadogCRDs.crds.datadogDashboards | bool | true | Set to true to deploy the DatadogDashboard CRD |
| operator.datadogCRDs.crds.datadogGenericResources | bool | true | Set to true to deploy the DatadogGenericResource CRD |
| operator.datadogCRDs.crds.datadogMetrics | bool | false | Set to true to deploy the DatadogMetrics CRD |
| operator.datadogCRDs.crds.datadogMonitors | bool | true | Set to true to deploy the DatadogMonitors CRD |
| operator.datadogCRDs.crds.datadogPodAutoscalers | bool | false | Set to true to deploy the DatadogPodAutoscalers CRD |
| operator.datadogCRDs.crds.datadogSLOs | bool | true | Set to true to deploy the DatadogSLO CRD |
| operator.datadogCRDs.keepCrds | bool | false | Set to true to keep the CRDs when the Helm chart is uninstalled. This must be set to true if datadog.operator.migration.enabled is set to true. |
| operator.datadogDashboard.enabled | bool | false | Enables the Datadog Dashboard controller |
| operator.datadogGenericResource.enabled | bool | false | Enables the Datadog Generic Resource controller |
| operator.datadogMonitor.enabled | bool | false | Enables the Datadog Monitor controller |
| operator.datadogSLO.enabled | bool | false | Enables the Datadog SLO controller |
| operator.image.tag | string | "1.25.0" | Define the Datadog Operator version to use |
| otelAgentGateway.additionalLabels | object | {} | Adds labels to the Agent Gateway Deployment and pods |
| otelAgentGateway.affinity | object | {} | Allow the Gateway Deployment to schedule using affinity rules |
| otelAgentGateway.autoscaling.annotations | object | {} | Annotations for the OTel Agent Gateway HPA |
| otelAgentGateway.autoscaling.behavior | object | {"scaleDown":{},"scaleUp":{}} | Defines the scaling behavior of the OTel Agent Gateway HPA |
| otelAgentGateway.autoscaling.behavior.scaleDown | object | {} | Defines the scale-down behavior of the OTel Agent Gateway HPA |
| otelAgentGateway.autoscaling.behavior.scaleUp | object | {} | Defines the scale-up behavior of the OTel Agent Gateway HPA |
| otelAgentGateway.autoscaling.enabled | bool | false | Enable autoscaling using the Horizontal Pod Autoscaler (HPA); requires Kubernetes 1.23.0 and above. Overrides otelAgentGateway.replicas. |
| otelAgentGateway.autoscaling.maxReplicas | int | 0 | Maximum number of replicas for the OTel Agent Gateway HPA |
| otelAgentGateway.autoscaling.metrics | list | [] | The metrics used by the OTel Agent Gateway HPA |
| otelAgentGateway.autoscaling.minReplicas | int | 0 | Minimum number of replicas for the OTel Agent Gateway HPA |
| otelAgentGateway.config | string | nil | Gateway OTel Agent configuration |
| otelAgentGateway.configMap.checksum | string | nil | Checksum of the existing ConfigMap that contains the Gateway OTel Agent configuration |
| otelAgentGateway.configMap.items | string | nil | Items within the ConfigMap that contain the Gateway OTel Agent configuration |
| otelAgentGateway.configMap.key | string | "otel-gateway-config.yaml" | Key within the ConfigMap that contains the Gateway OTel Agent configuration |
| otelAgentGateway.configMap.name | string | nil | Name of the existing ConfigMap that contains the Gateway OTel Agent configuration |
| otelAgentGateway.containers.otelAgent.env | list | [] | Additional environment variables for the otel-agent container |
| otelAgentGateway.containers.otelAgent.envDict | object | {} | Set environment variables specific to otel-agent, defined in a dict |
| otelAgentGateway.containers.otelAgent.envFrom | list | [] | Set environment variables specific to otel-agent from ConfigMaps and/or Secrets |
| otelAgentGateway.containers.otelAgent.healthPort | int | 13133 | Port number to use for the otel-agent-gateway health check endpoint (OTel health_check extension) |
| otelAgentGateway.containers.otelAgent.livenessProbe | object | {"enabled":false,"failureThreshold":6,"initialDelaySeconds":15,"periodSeconds":15,"successThreshold":1,"timeoutSeconds":5} | otel-agent-gateway liveness probe settings. Set enabled to true to activate. The OTel config must expose the health_check extension on healthPort (default 13133); the generated default config does this automatically. |
| otelAgentGateway.containers.otelAgent.logLevel | string | nil | Set logging verbosity; valid log levels are: trace, debug, info, warn, error, critical, and off. If not set, falls back to the value of datadog.logLevel. |
| otelAgentGateway.containers.otelAgent.readinessProbe | object | {"enabled":false,"failureThreshold":6,"initialDelaySeconds":15,"periodSeconds":15,"successThreshold":1,"timeoutSeconds":5} | otel-agent-gateway readiness probe settings. Set enabled to true to activate. The OTel config must expose the health_check extension on healthPort (default 13133); the generated default config does this automatically. |
| otelAgentGateway.containers.otelAgent.resources | object | {} | Resource requests and limits for the otel-agent container |
| otelAgentGateway.containers.otelAgent.securityContext | object | {} | Allows you to overwrite the default container SecurityContext for the otel-agent container. |
| otelAgentGateway.deploymentAnnotations | object | {} | Annotations to add to the otel-agent Gateway Deployment |
| otelAgentGateway.dnsConfig | object | {} | Specify DNS configuration options for otel-agent containers, e.g. ndots |
| otelAgentGateway.enabled | bool | false | Enable the otel-agent Gateway |
| otelAgentGateway.featureGates | string | nil | Feature gates to pass to the OTel Collector, as a comma-separated list |
| otelAgentGateway.image.digest | string | "" | Override the image digest of the otel-agent; takes precedence over tag if specified |
| otelAgentGateway.image.doNotCheckTag | string | nil | Skip the version and chart compatibility check |
| otelAgentGateway.image.name | string | "ddot-collector" | otel-agent image name to use (relative to registry) |
| otelAgentGateway.image.pullPolicy | string | "IfNotPresent" | otel-agent image pullPolicy |
| otelAgentGateway.image.pullSecrets | list | [] | otel-agent repository pullSecret (e.g. specify Docker registry credentials) |
| otelAgentGateway.image.repository | string | nil | Override the image repository to override the default registry |
| otelAgentGateway.image.tag | string | "" | Override the image tag of the otel-agent |
| otelAgentGateway.image.tagSuffix | string | "" | Suffix to append to the image tag of the otel-agent |
| otelAgentGateway.initContainers.resources | string | nil | Resource requests and limits for init containers |
| otelAgentGateway.initContainers.securityContext | string | nil | Allows you to overwrite the default container SecurityContext for init containers |
| otelAgentGateway.lifecycle | object | {} | Configure the lifecycle of the otel-agent |
| otelAgentGateway.logs.enabled | bool | false | Enable logs support in the OTel Collector. If true, checks the OTel Collector config for the filelog receiver and mounts additional volumes to collect container and pod logs. |
| otelAgentGateway.nodeSelector | object | {} | Allow the Gateway Deployment to schedule on selected nodes |
| otelAgentGateway.podAnnotations | object | {} | Annotations to add to the Gateway Deployment's Pods |
| otelAgentGateway.podLabels | object | {} | Sets podLabels if defined |
| otelAgentGateway.ports | list | [{"containerPort":"4317","name":"otel-grpc","protocol":"TCP"},{"containerPort":"4318","name":"otel-http","protocol":"TCP"}] | Ports that the OTel Collector is listening on |
| otelAgentGateway.priorityClassCreate | bool | false | Creates a PriorityClass for the otel-agent Gateway Deployment pods. |
| otelAgentGateway.priorityClassName | string | nil | Sets priorityClassName if defined |
| otelAgentGateway.priorityClassValue | int | 1000000000 | Value used to specify the scheduling priority of otel-agent Gateway Deployment pods. |
| otelAgentGateway.priorityPreemptionPolicyValue | string | "PreemptLowerPriority" | Set to "Never" to change the PriorityClass to non-preempting |
| otelAgentGateway.rbac.create | bool | true | If true, check the OTel Collector config for the k8sattributes processor and create the ClusterRole required to access the Kubernetes API |
| otelAgentGateway.rbac.rules | list | [] | A set of additional RBAC rules to apply to the OTel Collector's ClusterRole |
| otelAgentGateway.replicas | int | 1 | Number of otel-agent instances in the Gateway Deployment |
| otelAgentGateway.revisionHistoryLimit | int | 10 | The number of old ReplicaSets to keep in this Deployment. |
| otelAgentGateway.service.type | string | "ClusterIP" | Set the type of the otel-agent-gateway Service |
| otelAgentGateway.shareProcessNamespace | bool | false | Set process namespace sharing on the otel-agent |
| otelAgentGateway.strategy | object | {"rollingUpdate":{"maxSurge":1,"maxUnavailable":0},"type":"RollingUpdate"} | Allow the otel-agent Gateway Deployment to perform a rolling update on helm update |
| otelAgentGateway.terminationGracePeriodSeconds | int | nil | Configure the termination grace period for the otel-agent |
| otelAgentGateway.tolerations | list | [] | Allow the Gateway Deployment to schedule on tainted nodes (requires Kubernetes >= 1.6) |
| otelAgentGateway.topologySpreadConstraints | list | [] | Allow the otel-agent Gateway Deployment to schedule using pod topology spreading |
| otelAgentGateway.useHostNetwork | bool | false | Bind ports on the hostNetwork |
| otelAgentGateway.volumeMounts | list | [] | Specify additional volume mounts for the otel-agent container |
| otelAgentGateway.volumes | list | [] | Specify additional volumes for the otel-agent container |
| providers.aks.enabled | bool | false | Activate all specificities related to AKS configuration. Required because the chart currently cannot auto-detect AKS. |
| providers.eks.controlPlaneMonitoring | bool | false | Enable control plane monitoring checks in the EKS cluster. |
| providers.eks.ec2.useHostnameFromFile | bool | false | Use the hostname from the EC2 filesystem instead of fetching it from the metadata endpoint. |
| providers.gke.autopilot | bool | false | Enables Datadog Agent deployment on GKE Autopilot |
| providers.gke.cos | bool | false | Enables Datadog Agent deployment on GKE with Container-Optimized OS (COS) |
| providers.gke.gdc | bool | false | Enables Datadog Agent deployment on GKE on Google Distributed Cloud (GDC) |
| providers.openshift.controlPlaneMonitoring | bool | false | Enable control plane monitoring checks in the OpenShift cluster. Certificates are needed to communicate with the etcd service; they can be found in the secret etcd-metric-client in the openshift-etcd-operator namespace. To give the Datadog Agent access to these certificates, copy them into the same namespace the Datadog Agent is running in: `oc get secret etcd-metric-client -n openshift-etcd-operator -o yaml \|` |
| providers.talos.enabled | bool | false | Activate all required specificities related to Talos.dev configuration, since the chart currently cannot auto-detect a Talos.dev cluster. Note: the Agent deployment requires additional privileges that are not permitted by the default pod security policy. The annotation pod-security.kubernetes.io/enforce=privileged must be applied to the Kubernetes namespace where Datadog is installed. For more information on pod security policies in Talos.dev clusters, see: https://www.talos.dev/v1.8/kubernetes-guides/configuration/pod-security/ |
| registry | string | nil | Registry to use for all Agent images (default depends on the datadog.site and registryMigrationMode values) |
| registryMigrationMode | string | "auto" | Controls gradual migration of the default image registry to registry.datadoghq.com, replacing site-specific regional mirrors (GCR, ACR). This setting has no effect when registry is explicitly set. GKE Autopilot and GKE GDC clusters are excluded and always use their site-specific gcr.io variant. US1-FED (ddog-gov.com) is excluded and always uses public.ecr.aws/datadog. US3 (us3.datadoghq.com) is excluded and always uses datadoghq.azurecr.io. |
| remoteConfiguration.enabled | bool | true | Set to true to enable Remote Configuration on the Cluster Agent (if deployed) and the node Agent. Can be overridden by datadog.remoteConfiguration.enabled, which is the preferred way to enable Remote Configuration. |
| targetSystem | string | "linux" | Target OS for this deployment (possible values: linux, windows) |
| useFIPSAgent | bool | false | Setting useFIPSAgent to true makes the Helm chart use Agent images that are FIPS-compliant for use in GovCloud environments. Setting this to true disables the fips-proxy sidecar and is the recommended method for enabling FIPS compliance. |
Some options above do not work or are not available on Windows. The unsupported options are listed below:
| Parameter | Reason |
|---|---|
| datadog.dogstatsd.useHostPID | Host PID not supported by Windows Containers |
| datadog.useHostPID | Host PID not supported by Windows Containers |
| datadog.dogstatsd.useSocketVolume | Unix sockets not supported on Windows |
| datadog.dogstatsd.socketPath | Unix sockets not supported on Windows |
| datadog.processAgent.processCollection | Unable to access host/other containers' processes |
| datadog.systemProbe.seccomp | System probe is not available for Windows |
| datadog.systemProbe.seccompRoot | System probe is not available for Windows |
| datadog.systemProbe.debugPort | System probe is not available for Windows |
| datadog.systemProbe.enableConntrack | System probe is not available for Windows |
| datadog.systemProbe.bpfDebug | System probe is not available for Windows |
| datadog.systemProbe.apparmor | System probe is not available for Windows |
| agents.useHostNetwork | Host network not supported by Windows Containers |
Because the Cluster Agent can only be deployed on Linux nodes, communication between the Agents deployed on Windows nodes and the Cluster Agent needs to be configured.
The following datadog-values.yaml file contains all the parameters needed to configure this communication.
```yaml
targetSystem: windows

existingClusterAgent:
  join: true
  serviceName: "<EXISTING_DCA_SERVICE_NAME>" # from the other datadog helm chart release
  tokenSecretName: "<EXISTING_DCA_SECRET_NAME>" # from the other datadog helm chart release

# Disable the datadogMetrics deployment since it should have been already deployed with the other chart release.
datadog-crds:
  crds:
    datadogMetrics: false

# Disable kube-state-metrics deployment
datadog:
  kubeStateMetricsEnabled: false
```
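Assuming the values above are saved as datadog-values.yaml, the Windows release could then be installed alongside the existing Linux release with a command along these lines (the release and namespace names are placeholders):

```
helm install <WINDOWS_RELEASE_NAME> datadog/datadog \
  --namespace <NAMESPACE> \
  --set datadog.apiKey=<DATADOG_API_KEY> \
  -f datadog-values.yaml
```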