Datadog for DevOps: Metrics, Dashboards and Logs

Published: Jun 29, 2020 by Isaac Johnson

Last time I talked about Datadog, the focus was on how it helps us track deployments into AKS and tie them to Azure DevOps build data.  Today I want to dig into how Datadog can help track and expose metrics and logs from builds as a complementary tool to Azure DevOps.  To make it more interesting, I’ll be using just the free tier of Datadog to show how easy it is to get started.

Setting up WebHooks for Events

This is a bit of a recap of what I already covered, but the first piece of information you’ll need after signing up for Datadog is the API key.  This is the unique identifier you use for all your agents and integrations, and it’s easy to revoke if needed.  You’ll find it under “Integrations/APIs”.

/content/images/2020/06/image-54.png
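
As a quick sanity check, you can confirm a key is valid before wiring anything up by hitting Datadog’s validate endpoint with curl (a minimal sketch; substitute your own key):

# Validate an API key; a good key returns {"valid":true}
curl -s -H "DD-API-KEY: <your_api_key>" \
     "https://api.datadoghq.com/api/v1/validate"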

We can now add service hooks for any events we wish to track natively in Azure DevOps.  You’ll find those under Project/General/Service Hooks:

/content/images/2020/06/image-55.png

Adding a new event just requires hitting the “+” to add an event and selecting Datadog:

/content/images/2020/06/image-56.png

Then choose the event type and any filters (perhaps you only wish to monitor one pipeline):

/content/images/2020/06/image-57.png

You can then set the key and click Test to verify:

/content/images/2020/06/image-58.png

Just click Finish to add the hook:

/content/images/2020/06/image-59.png

But what about custom events?

Sometimes, as a release engineer, I want to control my notifications more tightly.  This can be useful in a multi-tenant organization (where my project team has a Datadog instance that isn’t necessarily for use by others).

In this case I can send data several ways.

Datadog Events AzDO plugin

Here we can use the “Datadog Events” plugin from William Matheus Batista.  It’s a perfectly functional off-the-shelf utility task:

/content/images/2020/06/image-60.png

You can then add it to your pipeline and expose any message output captured in a variable:

/content/images/2020/06/image-61.png

In YAML syntax:

steps:
- task: WilliamMatheusBatista.vsts-datadogevents.vsts-datadogevents.DatadogEvents@0
  displayName: 'Datadog Event'
  inputs:
    ddApiKey: asdfasdfasdfasdf
    ddEventTitle: MyEvent2
    ddEventText: 'Hello There'
    ddEventTags: 'ghost-blog'

Here we can see the event pushed to Datadog:

/content/images/2020/06/image-62.png

This can be useful for simple one-line outputs.

For example, perhaps we want to capture some basics about the agent in use so later I could check event logs for problems associated with a particular agent or OS type:

#!/bin/bash
set +x
# Write the ##vso logging command to a file; when it is echoed to stdout below,
# the agent sets AGENTDETAILS as a pipeline variable for later steps.
echo "##vso[task.setvariable variable=AGENTDETAILS]"$(Agent.Id) - $(Agent.OS) - $(Agent.OSArchitecture) for $(Build.BuildId) > t.o
set -x

cat t.o

Which, in YAML syntax, looks like this:

steps:
- bash: |
   #!/bin/bash
   set +x
   echo "##vso[task.setvariable variable=AGENTDETAILS]"$(Agent.Id) - $(Agent.OS) - $(Agent.OSArchitecture) for $(Build.BuildId) > t.o
   set -x
   
   cat t.o
  displayName: 'sample output'

- task: WilliamMatheusBatista.vsts-datadogevents.vsts-datadogevents.DatadogEvents@0
  displayName: 'Datadog Event'
  inputs:
    ddApiKey: asdfasdfasdfasdfasdasdfasdf
    ddEventTitle: MyEvent2
    ddEventText: '$(AGENTDETAILS)'
    ddEventTags: 'ghost-blog'

The results of which:

/content/images/2020/06/image-63.png
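
Under the hood, this amounts to posting to Datadog’s Events API, so if installing a marketplace extension isn’t an option, a plain bash step can send the same kind of event with curl (a minimal sketch; the key is a placeholder):

# Post an event directly to the Datadog Events API
curl -X POST "https://api.datadoghq.com/api/v1/events" \
     -H "Content-Type: application/json" \
     -H "DD-API-KEY: <your_api_key>" \
     -d '{
           "title": "MyEvent2",
           "text": "Hello There",
           "tags": ["ghost-blog"]
         }'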

But what about multi-line outputs?

We can actually use the catch-all log intake POST URL for things like this.  That is also useful for integrating Datadog into a containerized service.  As long as the ‘thing’ running can reach the internet and make an HTTP POST, you can get log data into Datadog.

Here, we’ll capture all the running processes on the agent (which serves as some good multiline output) and push it to Datadog:

steps:
- bash: |
   #!/bin/bash -x
   
   ps -ef > ps.log
   
   curl -X POST https://http-intake.logs.datadoghq.com/v1/input \
        -H "Content-Type: text/plain" \
        -H "DD-API-KEY: 08ce42ae942840eaa60ab9a893964229" \
        --data-binary @./ps.log
   
  displayName: 'DD Build Log Upload'
  condition: succeededOrFailed()

Once run, we can see the details in the Logs view.  With the free tier of Datadog they won’t persist for long, but that’s okay - for now we are mostly interested in showing the capability, and in practice we might use this for debugging build output on remote unmanaged agents.

/content/images/2020/06/image-64.png

We can also include things like host, service and tags in the URL (while I didn’t find this in the documentation, this article inspired me):

$ curl -X POST 'https://http-intake.logs.datadoghq.com/v1/input?host=host13&service=AzDO&ddtags=env:dev' -H "Content-Type: text/plain" -H "DD-API-KEY: 08ce42ae942840eaa60ab9a893964229" --data-binary @./LICENSE
$

/content/images/2020/06/image-66.png

Dashboards

We can create Dashboards to share data.  You’ll find them, of course, under Dashboards:

/content/images/2020/06/image-67.png

Here I’ll just add a few metrics: CPU percentage on the agents, build duration and build events:

/content/images/2020/06/image-68.png
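
If you would rather keep dashboards in source control, the same kind of dashboard can also be created through the Dashboards API instead of the UI.  A rough sketch (unlike the event and log endpoints, this one also requires an application key, and the metric query is just an example):

# Create a simple timeseries dashboard via the Dashboards API
curl -X POST "https://api.datadoghq.com/api/v1/dashboard" \
     -H "Content-Type: application/json" \
     -H "DD-API-KEY: <your_api_key>" \
     -H "DD-APPLICATION-KEY: <your_app_key>" \
     -d '{
           "title": "AzDO Build Metrics",
           "layout_type": "ordered",
           "widgets": [
             { "definition": {
                 "type": "timeseries",
                 "title": "Agent CPU",
                 "requests": [ { "q": "avg:system.cpu.user{*}" } ]
             } }
           ]
         }'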

Next, under the gear in the upper right, I can get a public URL for this dashboard as well as set the widget size. This can be useful for Confluence/Wiki pages that expose build details:

/content/images/2020/06/image-69.png

You can decide on whether to let the users change time frames - in this case I’m fine with that:

/content/images/2020/06/image-70.png

And you should be able to view this live dashboard right now:

https://p.datadoghq.com/sb/ah2hpiy5bf53u16p-0593d81c4f0bf2aa8766533b72209153

Another example: say we wish to add a graph of “passing builds”.  We could add that metric as another graph:

/content/images/2020/06/image-73.png

and then see it immediately added to the dashboard:

/content/images/2020/06/image-74.png

One of the really nice features is “TV Mode”, which is great for second screens and public displays:

/content/images/2020/06/image-71.png

There is a dark mode there as well.

An example might look like this (the latter on a display my eldest daughter fished out of a nearby small lake)

and the “pond” monitor:

Monitoring and Alerts

So once we’ve got some metrics, a dashboard showing passed/failed builds is nice, but can we also get alerts directly from Datadog?

For instance, say we have a graph of Succeeding and Failing builds:

/content/images/2020/06/image-75.png

And the only difference in the Metric query is the “status” parameter:

/content/images/2020/06/image-76.png

Now we can create a monitor that triggers on any build, or on just the failed ones:

/content/images/2020/06/image-77.png
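
If you prefer to manage monitors as code, the Monitors API accepts the same kind of query the graph uses.  A sketch, with a hypothetical metric name and status tag - substitute whatever the failed-builds graph above actually queries:

# Create a metric monitor that fires when any failed builds are reported
# (metric name and tag below are hypothetical - use the graph's real query)
curl -X POST "https://api.datadoghq.com/api/v1/monitor" \
     -H "Content-Type: application/json" \
     -H "DD-API-KEY: <your_api_key>" \
     -H "DD-APPLICATION-KEY: <your_app_key>" \
     -d '{
           "name": "Failed AzDO builds",
           "type": "metric alert",
           "query": "sum(last_15m):sum:azure.devops.build{status:failed}.as_count() > 0",
           "message": "A build failed - check Azure DevOps. @your-email@example.com"
         }'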

For instance, say I wish to determine why we had failed builds. I can mouse over the graph to see the time window:

/content/images/2020/06/image-78.png

And checking my inbox, indeed I see an alert from AzDO and one from Datadog:

/content/images/2020/06/image-79.png

And indeed I can look up build 643 to see that it failed:

/content/images/2020/06/image-80.png

To be fair, the link in the Datadog alert is bad (it must use some deprecated _build route and has URL-encoding issues): it listed https://princessking.visualstudio.com/_build?amp%3Bbuilduri=vstfs%3A%2F%2F%2FBuild%2FBuild%2F643&_a=completed instead of https://princessking.visualstudio.com/ghost-blog/_build/results?buildId=643&view=results, likely based on the older VSTS URL pattern.

Alerting through PagerDuty

To really understand the power of what we have here, let’s set up PagerDuty with Slack integration.

/content/images/2020/06/image-83.png

And here is an example of a triggered alert:

/content/images/2020/06/image-82.png

In Integrations, add PagerDuty:

/content/images/2020/06/image-81.png

We now have an event rule we can trigger a monitor on:

/content/images/2020/06/image-86.png
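
Routing a particular monitor to PagerDuty is then just a matter of mentioning the service in the monitor’s notification message with an @pagerduty-<service_name> handle.  A sketch that updates an existing monitor’s message (the monitor ID and service name are placeholders; depending on API version you may need to pass the other monitor fields as well):

# Point an existing monitor's notifications at a PagerDuty service
curl -X PUT "https://api.datadoghq.com/api/v1/monitor/<monitor_id>" \
     -H "Content-Type: application/json" \
     -H "DD-API-KEY: <your_api_key>" \
     -H "DD-APPLICATION-KEY: <your_app_key>" \
     -d '{"message": "A build failed - check Azure DevOps. @pagerduty-AzDO-Builds"}'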

We can now test it:

/content/images/2020/06/image-85.png

Not only did this call me (PD pages the on-call resource), it also posted to Slack, where I could acknowledge or resolve the incident:

/content/images/2020/06/image-87.png

And texted and emailed me as well:

/content/images/2020/06/image-90.png

Since the routing goes through PD via its APIs, the alerts from Datadog are routed to the appropriate on-call technician:

/content/images/2020/06/image-91.png

Summary

Datadog as an APM and logging tool is great. As I blogged about before, it’s one of the few that can give us end-to-end traceability from code through to the performance of running services.  However, many don’t realize that it’s also a solid metrics, alerting and dashboarding tool that complements Azure DevOps.

While I demoed this with AzDO, as it’s my primary pipelining tool, the custom-event examples could be tied into any CI/CD tool. For smaller shops and individual developers, everything demoed above was done with the free tier, so there is really no reason not to try it out.  And while PagerDuty isn’t exactly free (it has a free tier without alerting, and a cheap $10 starter plan covering six users that includes phone and SMS), it does illustrate a great low-cost solution for SLA enforcement and response escalation.

Leveraging Datadog as a metrics, logging and alerting hub, we can have SLA-enforced CI/CD, ensuring any build issues are tracked and escalated.

azure-devops tutorial datadog vsts


Isaac Johnson

Cloud Solutions Architect

Isaac is a CSA and DevOps engineer who focuses on cloud migrations and devops processes. He also is a dad to three wonderful daughters (hence the references to Princess King sprinkled throughout the blog).
