Bespoken DataDog Plugin

This plugin makes it easy to send your voice app's end-to-end test results to a DataDog instance for reporting and monitoring.

It leverages Bespoken's filters to report test results to DataDog.

Getting Started

Installation and Usage

To use the Bespoken DataDog Plugin, add it to your test project's dependencies with:

npm install bespoken-datadog-plugin --save

Then set it as the filter in your testing.json, like this:

  "filter": "./node_modules/bespoken-datadog-plugin/index.js"

Alternatively, you can call it from an existing filter like this:

const DatadogPlugin = require('bespoken-datadog-plugin')

module.exports = {
  onTestEnd: async (test, testResult) => {
    await DatadogPlugin.sendToDataDog(test, testResult)
  }
}

Environment Variables

The DATADOG_API_KEY must be set as an environment variable.

For local runs, we typically recommend using the dotenv package and setting the variable in a file named .env. We have provided a file, example.env, that you can use as a template (just copy it and rename it to .env).
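Assuming the variable name above, a minimal .env would contain a single line (the value shown is a placeholder):

```
DATADOG_API_KEY=your-api-key-here
```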

It can also be set manually like so:

  export DATADOG_API_KEY=<your-api-key>
(Use set instead of export if using Windows Command Prompt).

DataDog Configuration

  • Create a DataDog account.
  • Take the API key from the Integrations -> API section.
  • Add it to the .env file.

DataDog Metrics

DataDog captures metrics on how all of the tests have performed. Each time the tests run, the result of each test is pushed to DataDog.

We use the following metrics:

  • utterance.success
  • utterance.failure
  • test.success
  • test.failure

The metrics can be easily reported on through a DataDog Dashboard. They also can be used to set up notifications when certain conditions are triggered.
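As an example of such a notification, a DataDog monitor that alerts whenever any test has failed in the past hour could use a query like the one below (the metric name comes from the list above; the time window and threshold are illustrative):

```
sum(last_1h):sum:test.failure{*}.as_count() > 0
```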

Read more about configuring DataDog in our walkthrough.

DataDog Tags

By default we report the results at utterance and test level:

Reporting Element | Tag           | Description
----------------- | ------------- | -----------
Utterance & Test  | jobName       | The name of the testing job for the current execution (defaults to EndToEndTests). You can define it here.
Utterance & Test  | runName       | The name of the test execution. By default, it is the timestamp when the test was executed.
Utterance & Test  | testName      | The test description, usually taken from the test: keyword of the test script.
Utterance & Test  | testSuiteName | The name of the test suite being executed. In general, it is the name of the YAML file containing the test cases.
Utterance         | customer      | The name of the customer running the test scripts. You can change it here.
Utterance         | utterance     | The interaction sent to the voice app.
Utterance         | voiceId       | The voice used for TTS when sending the text utterance to the voice service. Amazon Polly or Google Wavenet voices can be used.
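To illustrate how these tags travel with each metric, here is a small sketch of the payload shape used by DataDog's public v1 series endpoint. The buildSeries helper and the tag values are hypothetical; the plugin's actual internals may differ.

```javascript
// Sketch: build a DataDog v1 series payload carrying the tags above.
// buildSeries is a hypothetical helper, not part of the plugin's API.
function buildSeries(metric, value, tags) {
  return {
    series: [{
      metric,                                         // e.g. 'test.success'
      points: [[Math.floor(Date.now() / 1000), value]], // [timestamp, value]
      type: 'count',
      tags: Object.entries(tags).map(([k, v]) => `${k}:${v}`)
    }]
  }
}

const payload = buildSeries('test.success', 1, {
  jobName: 'EndToEndTests',
  testSuiteName: 'my-skill-tests.yml',
  testName: 'Launch and play'
})
console.log(JSON.stringify(payload))
```

The tags are flattened into DataDog's `key:value` string format, which is what lets you slice dashboards and monitors by jobName, testSuiteName, and so on.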