TL;DR
Datadog bills across independent dimensions: per-host for infrastructure ($15-$23/month) and APM ($31-$40/month), per-GB for log ingestion ($0.10), per-million for log indexing ($1.70 at 15-day retention), and per-metric for custom metrics beyond your per-host allocation. The high-water mark billing model charges your entire month based on the peak of the lower 99% of hourly host counts, not an average. Most teams find their actual bill two to three times higher than their initial estimate once logs, APM, and custom metrics compound. Controlling costs requires query-level visibility, automated guardrails, and shared accountability between engineering and finance.
Datadog's usage-based pricing gets unpredictable fast. A team starts with a few hosts and a dashboard. Then auto-scaling kicks in, engineers add custom metrics to debug a production issue, log ingestion grows as applications get chatty, and someone enables synthetic tests across five regions without a cleanup plan. Each module bills independently, and the math compounds across five or more dimensions at once.
The observability market is projected to grow at a 12% compound annual rate through 2027, according to Gartner. The FinOps Foundation's State of FinOps 2026 report identifies observability and security tooling as top SaaS categories actively managed by FinOps teams, with 90% of practitioners now managing SaaS spend (up from 65% a year earlier). Understanding Datadog's cost structure isn't just budgeting. It's a FinOps discipline.
What goes into Datadog's pricing plans and cost structure?
Datadog prices each product independently. Infrastructure monitoring, APM, log management, synthetic monitoring, RUM, database monitoring, and security each carry separate meters. That modular design lets teams adopt only what they need, but it means costs accumulate across billing dimensions that most native dashboards don't surface until the invoice arrives.
How do infrastructure monitoring and APM pricing work?
Infrastructure monitoring charges per host per month. The Pro plan costs $15/host/month billed annually ($18 on-demand). Enterprise costs $23/host/month annually ($27 on-demand). Each Pro host includes 100 custom metrics and monitoring for 5 containers; Enterprise includes 200 custom metrics and 10 containers. Additional containers cost $0.002 per container per hour.
Datadog uses a high-water mark billing model. Per the official billing documentation, Datadog meters host count hourly, drops the top 1% of hours (~7 hours in a 720-hour month), and bills the entire month at the peak of the remaining 99%. A five-day traffic spike that doubles your host count sets your bill for the full month. In Kubernetes, the billing unit is the node, not the pod. One misconfigured agent running as a sidecar instead of a DaemonSet can count every pod as a separate host.
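The high-water mark rule can be sketched in a few lines of Python. This is an illustrative model of the metering described above, not Datadog's actual billing code; the host counts and spike patterns are hypothetical.

```python
# Sketch of the high-water mark rule: meter hosts hourly, drop the top 1%
# of hours, bill the whole month at the peak of the remaining 99%.

def billable_hosts(hourly_counts: list[int]) -> int:
    """Billable host count for the month: max of the lower 99% of hours."""
    ranked = sorted(hourly_counts)
    drop = max(1, round(len(ranked) * 0.01))  # ~7 hours in a 720-hour month
    return ranked[-drop - 1]

# Steady 50 hosts with a 5-day (120-hour) spike to 100: the spike far
# exceeds the ~7 dropped hours, so the whole month bills at 100 hosts.
print(billable_hosts([50] * 600 + [100] * 120))  # -> 100

# A 3-hour spike falls entirely inside the dropped top 1%.
print(billable_hosts([50] * 717 + [100] * 3))  # -> 50
```

The contrast is the point: only spikes shorter than roughly seven hours escape the month's bill.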
APM adds another per-host layer on top of infrastructure. APM costs $31/host/month (annual), APM Pro costs $35, and APM Enterprise costs $40. Each APM host includes 150 GB of ingested spans and 1 million indexed spans per month at 15-day retention. Overages on indexed spans cost $1.70 per million events. High-throughput microservices can exhaust those limits within the first week.
Pricing current as of May 2026. Verify rates at datadoghq.com/pricing.
What do log management and data retention actually cost?
Log management uses a two-part pricing model that catches most teams off guard. Ingestion costs $0.10 per GB for every byte sent to Datadog, whether you index it or not. Indexing costs $1.70 per million log events per month with 15-day standard retention ($2.55 on-demand). You pay to collect the data, then pay again at a much higher rate to make it searchable.
A team ingesting 100 GB of logs per day spends roughly $300/month on ingestion alone. If they index everything at 15-day retention, indexing adds thousands more depending on event density. Many teams respond by indexing only 10-20% of logs, which cuts costs but leaves most data invisible during incidents.
Flex Logs offer a middle ground for historical analysis at $0.05 per million events per month (30-day minimum retention). Forwarding to S3, GCS, or Azure Blob archives costs nothing beyond the $0.10/GB ingestion fee. Forwarding to external SIEMs or BI tools adds $0.25/GB per destination. The optimization pattern: ingest everything, exclude noisy logs from indexing, archive everything, rehydrate selectively when investigations demand it.
How do you calculate and predict your Datadog monthly bill?
What determines host-based versus container-based pricing impact?
Count every entity Datadog monitors: VMs, Kubernetes nodes, Azure App Service Plan instances, and Fargate tasks all qualify as billable hosts. Fargate tasks use a different model from hosts. Instead of high-water mark billing, Fargate tasks are sampled at 5-minute intervals and billed at average concurrency across the month at $1/task/month (infrastructure) or $2.60/task/month (APM).
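Fargate's average-concurrency billing can be contrasted with host billing in a short sketch. The task profile below is hypothetical; the rates are the ones quoted above.

```python
# Fargate tasks bill at average concurrency across 5-minute samples,
# not at a high-water mark. Task counts below are illustrative.

FARGATE_INFRA_RATE = 1.00   # $/task/month (infrastructure)
FARGATE_APM_RATE = 2.60     # $/task/month (APM)

def fargate_monthly_cost(samples: list[int], apm: bool = False) -> float:
    """Bill at the mean concurrent task count across all samples."""
    avg_tasks = sum(samples) / len(samples)
    rate = FARGATE_APM_RATE if apm else FARGATE_INFRA_RATE
    return round(avg_tasks * rate, 2)

# A day split between 10 tasks and 50 tasks (144 samples each)
# averages to 30 billable tasks, spikes notwithstanding.
samples = [10] * 144 + [50] * 144
print(fargate_monthly_cost(samples))             # -> 30.0
print(fargate_monthly_cost(samples, apm=True))   # -> 78.0
```

Because averaging absorbs spikes, bursty workloads are often cheaper to monitor on Fargate-style metering than under the host high-water mark.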
The disconnect between engineering and finance decisions shows up here. An engineer spinning up an auto-scaling group doesn't think about monitoring costs. A FinOps team reviewing the monthly bill can't trace which scaling event caused the spike. Closing that gap requires cost attribution by team and service that connects infrastructure decisions to financial outcomes in near-real time.
How do data ingestion volume and retention policies affect costs?
Model costs across three categories: infrastructure (hosts times per-host rate times modules enabled), data (log GB/day times 30 times ingestion and indexing rates), and custom metrics (total unique tag-value combinations times overage rate).
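Those three buckets can be combined into a rough estimator. The host and data rates come from figures quoted earlier in this article; the custom-metric overage rate is contract-specific, so it is a parameter here rather than a published figure, and the workload inputs are hypothetical.

```python
# Rough three-bucket monthly estimate: infrastructure, data, custom metrics.
# Rates are the annual-commit figures quoted in this article; the metric
# overage rate is an assumed per-metric monthly price from your contract.

LOG_INGEST_PER_GB = 0.10
LOG_INDEX_PER_MILLION = 1.70

def estimate_bill(hosts: int, infra_rate: float, apm_rate: float,
                  log_gb_per_day: float, indexed_events_millions: float,
                  metric_overage_count: int, metric_overage_rate: float) -> float:
    infra = hosts * (infra_rate + apm_rate)                   # infrastructure
    data = (log_gb_per_day * 30 * LOG_INGEST_PER_GB
            + indexed_events_millions * LOG_INDEX_PER_MILLION)  # data
    metrics = metric_overage_count * metric_overage_rate       # custom metrics
    return round(infra + data + metrics, 2)

# 50 hosts on Infra Pro + APM, 100 GB/day of logs, 10M indexed events,
# 20,000 custom metrics over allocation at an assumed $0.05/metric/month:
print(estimate_bill(50, 15.0, 31.0, 100, 10, 20_000, 0.05))  # -> 3617.0
```

Note how the assumed custom-metric overage ($1,000) rivals half the data bucket; this is the line item that grows silently.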
Custom metrics deserve special attention. Per Datadog's AWS integration billing docs, enabling the AWS integration auto-collects CloudWatch metrics, and custom CloudWatch metrics count toward your Datadog custom metric allocation. OpenTelemetry metrics also count as custom metrics since they fall outside Datadog's official integration list. A single Prometheus-style metric tagged by user ID across a million users creates a million billable custom metrics from one metric name. Teams that don't audit cardinality quarterly find this line item growing faster than any other.
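Cardinality math is multiplicative, which is why one unbounded tag dominates everything else. A sketch with hypothetical tag sets:

```python
# A custom metric bills per unique tag-value combination, so the billable
# series count for one metric name is the product of each tag's distinct
# values. Tag sets below are illustrative.

from math import prod

def metric_cardinality(tag_values: dict[str, int]) -> int:
    """Billable series for one metric name given distinct values per tag."""
    return prod(tag_values.values())

# Bounded tags stay manageable:
print(metric_cardinality({"env": 3, "service": 40, "region": 5}))  # -> 600

# One unbounded tag (user_id over a million users) creates a million
# billable series from a single metric name:
print(metric_cardinality({"user_id": 1_000_000}))  # -> 1000000
```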
What Datadog cost optimization strategies actually work?
How should you right-size your monitoring coverage?
Start by auditing what you monitor versus what you use. Many teams deploy Datadog's agent across every environment without differentiating monitoring depth. Production needs full APM and log indexing. Staging can run infrastructure monitoring alone. Dev environments often need nothing beyond basic health checks.
For Kubernetes, Datadog's own docs explicitly warn that installing the agent directly in each container counts each container as a host from a billing perspective. Run the agent as a DaemonSet (one per node) via the Datadog Operator or Helm. Review auto-scaling behavior and set monitors on estimated usage metrics to alert before hosts cross billing thresholds. These estimated usage metrics carry a 10-20% margin of error versus final billable usage, so build in buffer. Sustainable cost control requires automation and guardrails that enforce best practices in real time, not humans chasing alerts after the invoice arrives.
How can you manage data ingestion to control spend?
Logs drive the largest cost overruns. Use exclusion filters at the index level to drop health checks, heartbeat messages, and debug-level logs before they get indexed. Excluded logs still flow to Live Tail, archives, and log-to-metric generation, so you don't lose the data. Set per-index daily quotas to hard-cap indexed events per day.
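As a sketch, here is roughly what such a configuration looks like as a payload for Datadog's v1 Logs Indexes API; verify field names and semantics against the current API reference before use. Note that in exclusion filters, `sample_rate` is the fraction of matching logs *excluded*, so 0.9 keeps 10%. The queries and quota values are examples.

```python
# Example index configuration payload (fields follow Datadog's v1 Logs
# Indexes API; confirm against current docs). Would be sent via
# PUT /api/v1/logs/config/indexes/{index_name}.

import json

payload = {
    "filter": {"query": "*"},       # which logs route to this index
    "daily_limit": 50_000_000,      # hard cap on indexed events per day
    "exclusion_filters": [
        {   # drop health-check requests entirely before indexing
            "name": "drop-health-checks",
            "is_enabled": True,
            "filter": {"query": "@http.url_details.path:/healthz",
                       "sample_rate": 1.0},
        },
        {   # exclude 90% of debug-level logs, keeping a 10% sample
            "name": "sample-debug",
            "is_enabled": True,
            "filter": {"query": "status:debug", "sample_rate": 0.9},
        },
    ],
}
print(json.dumps(payload, indent=2))
```

The actual request needs `DD-API-KEY` and `DD-APPLICATION-KEY` headers; excluded logs still reach Live Tail and archives, as noted above.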
For custom metrics, Datadog's governance guide recommends using the Top Custom Metrics table in Plan and Usage to identify cost drivers, then applying Metrics Without Limits to allowlist only queried tag combinations. Disable the default "collect everything" behavior on cloud integrations and explicitly allowlist only the metrics your dashboards and alerts consume.
How does Datadog compare to competitors on pricing?
Datadog, New Relic, Splunk, and Elastic each use fundamentally different billing models, which makes list-price comparisons misleading without normalizing for workload and team size.
Datadog charges per host plus per-GB/per-event for logs, traces, and custom metrics. New Relic charges per user ($99-$349/month for full platform access) plus per-GB of data ingested ($0.40/GB Original Data, $0.60/GB Data Plus) after a free 100 GB/month tier. Splunk Observability Cloud bundles infrastructure monitoring at $15/host/month (Starter) up to $75/host/month (Enterprise) with APM, RUM, and synthetics included at higher tiers. Elastic offers self-hosted open-source components at infrastructure cost only, or Serverless Observability starting at $0.07/GB ingested plus retention fees.
Observability pricing model comparison. Pricing current as of May 2026.
| Platform | Billing model | Entry cost | Cost driver to watch |
|---|---|---|---|
| Datadog | Per host + per GB/event | $15/host/month (Infra Pro) | Custom metrics, log indexing |
| New Relic | Per user + per GB ingested | Free (100 GB + 1 user) | Full platform user seats |
| Splunk Observability | Per host (bundled tiers) | $15/host/month (Starter) | Tier selection, 9% annual uplift |
| Elastic | Resource-based or self-hosted | Free (self-hosted OSS) | Infrastructure ops overhead |
Total cost of ownership depends on team size, data volume, and retention needs. Datadog tends to cost less for small deployments with few users but scales aggressively as host counts and data volumes grow. New Relic penalizes large teams through per-user pricing but absorbs data volume with a generous free tier. DoiT helps organizations evaluate monitoring tools within the broader context of cloud spend optimization, connecting observability costs to overall infrastructure efficiency.
How do you get started with Datadog pricing and free trial options?
Datadog's free tier covers up to 5 hosts with 1-day metric retention, core dashboards, unlimited alerts, and unlimited users. That works for initial evaluation, but the short retention window limits production use. The 14-day Pro trial gives a more realistic picture of costs at scale.
During the trial, track three numbers daily: host count (including containers and Fargate tasks), log ingestion volume in GB, and custom metric count. These three inputs drive the majority of most Datadog bills. Project 90-day costs at current growth rates before committing to an annual contract. Annual billing saves roughly 17-20% compared to on-demand rates, but lock-in means overprovisioning hurts for the full term.
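A quick way to do that projection, with a hypothetical current bill and growth rate:

```python
# Compound the current monthly spend forward to a 90-day (3-month) run
# rate before signing an annual commit. Inputs are hypothetical.

def project_cost(monthly_cost: float, monthly_growth: float,
                 months: int = 3) -> float:
    """Projected monthly run rate after compounding growth."""
    return round(monthly_cost * (1 + monthly_growth) ** months, 2)

# $4,000/month today, growing 15% month over month:
print(project_cost(4000, 0.15))  # -> 6083.5
```

If the projected run rate already exceeds the annual-commit breakeven, size the commitment to the projection, not to today's usage.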
Frequently asked questions about Datadog pricing
Does Datadog offer annual discounts or enterprise pricing?
Yes. Annual billing drops Infrastructure Pro from $18 to $15/host/month and Enterprise from $27 to $23/host/month. APM drops from $48 to $31/host/month on annual contracts. Larger deployments qualify for volume discounts through custom enterprise agreements, typically involving 1-3 year commitments. Negotiate based on total committed spend across all Datadog products, not individual SKUs. Datadog also announced new LLM Observability pricing effective May 1, 2026, so confirm current rates before signing.
What happens if I exceed my Datadog usage limits?
Datadog doesn't cut off service. Overages get billed at on-demand rates, which run 30-50% higher than annual committed rates depending on the product. Custom metrics beyond your per-host allocation, indexed spans above the 1 million per-host cap, and log events indexed beyond your contracted volume all trigger automatic overage charges. Set budget monitors on estimated usage metrics to catch overruns before they compound across a full billing cycle.
Can I monitor Kubernetes clusters without paying per container?
Each host license includes 5 containers (Pro) or 10 containers (Enterprise) at no extra charge. Beyond that, additional containers cost $0.002/hour or $1/month prepaid. The configuration that matters most: run the Datadog agent as a Kubernetes DaemonSet (one agent per node), not as a sidecar in every pod. Datadog's own documentation warns that per-container agent deployment counts each container as a separate billable host, which can multiply your bill by the ratio of pods to nodes in your cluster.
How do you make Datadog costs predictable and defensible?
Monitoring costs should support business outcomes, not create budget surprises. McKinsey's June 2025 research found that 28% of cloud spend goes to waste at many organizations. Observability tooling contributes to that waste when teams deploy broad monitoring defaults without matching them to the value they deliver.
The path to predictability starts with three practices: tag every resource so costs trace back to teams and services, automate enforcement so guardrails prevent cost spikes before they hit, and review usage monthly so optimization stays continuous rather than reactive. Datadog Intelligence turns cost visibility into automated optimization, helping teams maintain full observability while keeping spend predictable and aligned with growth.
Talk to DoiT to turn Datadog cost visibility into automated, enforceable optimization that keeps observability powerful and cloud spend predictable.