@collegedunia/newman-mocha

A wrapper around newman to generate mocha tests

Newman API Testing via Mocha


This module provides a set of helper utilities to set up your Postman Collections with scripts and to parse a set of data files/directories to generate mocha test suites.

You can also run newman for individual cases with a promise-based API.

Install

Install using npm:

npm i --save-dev @collegedunia/newman-mocha mocha

Or Yarn:

yarn add --dev @collegedunia/newman-mocha mocha

Note: Mocha is a peer dependency

Usage

In your test folder, create a file api.test.js and an api folder to keep all your test data files:

  1. Import NewmanMocha in the file

    import path from 'path';
    import { NewmanMocha } from '@collegedunia/newman-mocha';
    

    Or with CommonJS format:

    const path = require('path');
    const { NewmanMocha } = require('@collegedunia/newman-mocha');
    
  2. Initialize the class with the absolute path to your collection (in this case the same directory)

    const newmanMocha = new NewmanMocha({
      collection: path.join(__dirname, 'Test.postman_collection.json'),
    });
    

    You can also provide an environment file with variables along with it:

    const newmanMocha = new NewmanMocha({
      collection: path.join(__dirname, 'Test.postman_collection.json'),
      environment: path.join(__dirname, 'Test.postman_environment.json'),
    });
    
  3. Run the test cases from your data directory, api

    newmanMocha.runDirectory(path.join(__dirname, 'api'));
    
  4. And in your api directory, add data files named after the requests in your "Test Collection". These files contain the different test cases you wish to write for that particular request, e.g. Request A.json:

    [
      {
        "test_name": "Should return 200 on success",
        "validate": {
          "statusCode": 200
        }
      },
      {
        "test_name": "Invalid body should return 400",
        "request_body": {
          "invalid_key": "123456"
        },
        "validate": {
          "statusCode": 400
        }
      }
    ]
    

NOTE: For all available options, see Iteration Data File Format. If your test cases require conditional logic, you can use JavaScript files too: just default-export the array of test cases, as sketched below.
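
A minimal sketch of such a JavaScript data file, assuming the CommonJS form shown earlier (with ES modules you would default-export instead); the CI condition and case names are illustrative:

// test/api/Request C.js
const cases = [
  {
    test_name: 'Should return 200 on success',
    validate: { statusCode: 200 },
  },
];

// Hypothetical condition: only add the extra case outside CI
if (!process.env.CI) {
  cases.push({
    test_name: 'Large limit should still return 200',
    request_params: { limit: 100 },
    validate: { statusCode: 200 },
  });
}

module.exports = cases;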

Run

What runDirectory actually does is create a mocha test suite for each request. So to run them, we need to run mocha (which you should already have installed alongside @collegedunia/newman-mocha):

Add it to your package.json file:

"scripts": {
  "test": "mocha"
}

And run

npm run test

Or

yarn test
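
API tests can be slow, so you may want to raise mocha's default timeout and point it at your test files with a .mocharc.json (the values here are illustrative, adjust for your layout):

{
  "spec": "test/**/*.test.js",
  "timeout": 10000
}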

Directory Structure

This is how your test directory might look in the end:

.
├── src
├── test
|   ├── api
|   |   ├── Request A.json
|   |   ├── Request B.json
|   |   ├── Request C.js
|   |   └── Request D.json
|   ├── flow                # Read more below (#Flow-Tests)
|   |   ├── Request A.json
|   |   ├── Request B.json
|   |   ├── Request C.js
|   |   └── Request D.json
|   ├── api.test.js
|   ├── init.test.js        # Start your server in this file in a "before" block (if applicable)
|   ├── Test.postman_collection.json
|   └── Test.postman_environment.json
└── package.json

You don't have to stick to this layout, though: you can put all of it in a subfolder or store the test cases separately, whatever works for you.

Example

For a complete example project with multiple different types of tests and some advanced usage have a look at the examples directory.

Flow Tests

The basic runDirectory allows testing a single request with multiple cases, but it offers no way to share the environment between requests or to test multiple requests in sequence.

Flow tests allow you to test a sequence of requests and share environment and results between them.

For writing flow tests:

  1. Create a new directory (e.g. flow) in your test folder; this will hold your test cases.

  2. Create a new .json file, named after the flow you are testing (e.g. login-flow.json, item-crud-flow.json).

    This file follows the same format as before, but a special variable, "results", is injected during its run and is available for you to use in most of the values:

    [
      {
        "request_name": "Create Item",
        "request_params": {
          "value": 123456
        },
        "validate": {
          "statusCode": 200
        }
      },
      {
        "request_name": "Get Item",
        "request_params": {
          "itemId": "{{results.0.id}}"
        },
        "validate": {
          "statusCode": 200,
          "expect": {
            "value": 123456
          }
        }
      }
    ]
    

    Notice how the id from the result of the first request was used in the query of the second request via the variable "results.0.id". Here, results is an array of the results of all requests in the flow test; it gets populated as the tests run.

  3. Run the flow tests by calling runDirectory with the flow option set as true:

    newmanMocha.runDirectory(path.join(__dirname, 'flow'), { flowTest: true });
    

    By default, the environment is shared between all requests in a single flow test, but you can disable this if you want:

    newmanMocha.runDirectory(path.join(__dirname, 'flow'), { flowTest: true, forwardEnv: false });
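
Putting it all together, your api.test.js might look like this (the paths follow the directory structure above):

const path = require('path');
const { NewmanMocha } = require('@collegedunia/newman-mocha');

const newmanMocha = new NewmanMocha({
  collection: path.join(__dirname, 'Test.postman_collection.json'),
  environment: path.join(__dirname, 'Test.postman_environment.json'),
});

// Single-request suites, one per data file in test/api
newmanMocha.runDirectory(path.join(__dirname, 'api'));

// Sequential flow suites from test/flow, sharing the environment between requests
newmanMocha.runDirectory(path.join(__dirname, 'flow'), { flowTest: true });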
    

API

For API discovery, .d.ts files (TypeScript types) are provided with the project to enable IntelliSense (autocomplete) in editors.

Iteration Data File Format

The data file should be a JSON array. Each item in the array should follow this format:

{
  "test_name": "A description of the test case",

  // Request name is only considered in "flow" tests, read more below (#Flow-Tests)
  "request_name": "Request Name",

  // Body/Params/Variables support postman dynamic variables: https://learning.getpostman.com/docs/postman/scripts/postman-sandbox-api-reference/#dynamic-variables

  // Replaces full body in non-GET requests with the value of this key
  "request_body": {
    "name": "something",
    "company": "{{$randomCompanyName}}"
  },

  // Existing params in request are not removed
  "request_params": {
    "limit": 2,

    // Use `null` to clear default query params from request
    "search": null,

    // Will automatically stringify objects
    "filters": {
      "partnerId": "1001"
    }
  },

  // Add/Update headers for request
  "request_headers": {
    "Content-Type": "application/json"
  },

  // Used for validating response
  "validate": {
    // Check status code of response
    "statusCode": 200,

    // Check the response to have these items (can be any object, will check deeply using [chai-subset](https://github.com/debitoor/chai-subset))
    "expect": {
      "test": "body"
    },
    // Check the response to not have these items, useful for negative tests like checking filters' results
    "not_expect": {
      "errors": null
    },

    // And some often used things:

    // Check for message in the response body (default in `message` key)
    "message": "Created Successfully",
    // Check item count in the response of list endpoints (default in `items` key)
    "itemCount": 20,
    // Check that errors should exist, 3 possible formats:
    // 1. String : Check that `body.error` matches string
    "errors": "Could not add Advertiser",
    // 2. Array of strings: Check that `body.errors` has these keys
    "errors": ["address"],
    // 3. Object: Check that `body.errors` matches the object
    "errors": {
      "address": "Path `address` is required."
    }
  },
  // Used to set any extra variables for this case only to anything you want
  "variables": {
    "userId": "123",

    // Special variable that works with `itemCount` in validate to specify object key for items
    "itemsKey": "payload"
  },
  // Boolean flags to only run a single test or skip the test for now (default: false)
  "only": false,
  "skip": true
}

The JSON schema is defined in iterationDataSchema.json. You can add this snippet to your .vscode/settings.json to get autocompletion in VSCode:

  "json.schemas": [
    {
      "fileMatch": ["test/data/**/*.json"], // Or whatever directory you end up storing your data files
      "url": "./node_modules/@collegedunia/newman-mocha/iterationDataSchema.json"
    }
  ]

NewmanMocha

Class that combines all the utilities. Initialize with path to collection and environment.

const newmanMocha = new NewmanMocha({
  collection: path.join(__dirname, 'Test.postman_collection.json'),
  environment: path.join(__dirname, 'Test.postman_environment.json'),
});

newmanMocha.run

Runs a single request with some cases; it will throw if any test case fails:

await newmanMocha.run('GET Request', [
  {
    request_params: {
      test: 'val',
    },
    validate: {
      statusCode: 200,
      expect: {
        args: {
          test: 'val',
        },
      },
    },
  },
]);
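
Since run is promise-based, it also fits naturally in mocha hooks, e.g. to seed state before a suite. The request name and body here are illustrative, and reading accessToken assumes your collection's own scripts store it in the environment:

let accessToken;

before(async () => {
  // Rejects (failing the hook) if any validation fails
  await newmanMocha.run('Login', [
    {
      request_body: { email: 'user@example.com', password: 'secret' },
      validate: { statusCode: 200 },
    },
  ]);

  // Read a value the collection's scripts saved during the run
  accessToken = newmanMocha.getEnvVal('accessToken');
});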

newmanMocha.runIterations

Same as run, but this generates a mocha test suite (using describe and it) for the request, with each iteration data item becoming a test case.

NOTE: Only run this in files being run by mocha.

await newmanMocha.runIterations('GET Request', [
  {
    request_params: {
      test: 'val',
    },
    validate: {
      statusCode: 200,
      expect: {
        args: {
          test: 'val',
        },
      },
    },
  },
]);

Or, if running from a JSON file:

const testFileData = fs.readFileSync(`./api/GET Request.json`).toString();

await newmanMocha.runIterations('GET Request', JSON.parse(testFileData));

newmanMocha.runDirectory

This parses a whole directory for .json and .js files and generates test suites for them using runIterations above.

newmanMocha.runDirectory(path.join(__dirname, 'api'));

newmanMocha.getEnvVal

Get an env value from the currently loaded environment in the instance.

const accessToken = newmanMocha.getEnvVal('accessToken');

newmanMocha.setEnvVal

Set an env value in the currently loaded environment in the instance.

newmanMocha.setEnvVal('accessToken', accessToken);

Or reset env values:

newmanMocha.setEnvVal('accessToken', null);

Helper Methods

Many helper methods are also exported from the module, but they aren't guaranteed to be stable during v0.x. Use them carefully.

Motivation

We were already using mocha for our existing test suite (non-api tests) and had all the API routes already defined in Postman Collections, so it made sense to re-use those instead of redeclaring them in our code and having to maintain multiple versions.

After experimenting with newman and trying to use it directly, we felt that:

  • It was too much work to use variables for body values and query params in the Collection; we wanted to provide the body and params directly in the iteration data, along with what was expected in the response

  • Similarly, adding postman.setNextRequest() to each request's test scripts and adding a request to multiple workflows was not very intuitive (solved by flow tests)

We started by adding common "prerequest" and "test" scripts to most requests to parse specific variables from the iteration data and use them to set the body and what to expect from the response.

After realizing that most of it could be generalized and wasn't request specific, we moved them to Collection Level Scripts. When a similar requirement was needed in other projects, it made sense to move the scripts out of the collection and into a module. We use the postman-collection SDK to insert our custom "prerequest" and "test" scripts into the collection at runtime.

How

1. Collection Modification

This is done using the postman-collection SDK. The collection is parsed and the custom scripts are injected.
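
For illustration, injecting a collection-level script with the SDK looks roughly like this; this is a minimal sketch of the idea, not this module's actual scripts:

const fs = require('fs');
const { Collection, Event } = require('postman-collection');

// Load the collection JSON and wrap it in the SDK model
const collection = new Collection(
  JSON.parse(fs.readFileSync('Test.postman_collection.json').toString())
);

// Add a collection-level "test" script; every request in the collection inherits it
collection.events.add(
  new Event({
    listen: 'test',
    script: {
      exec: [
        "// Hypothetical: read the 'validate' value set from iteration data",
        "const validate = pm.variables.get('validate') || {};",
        "if (validate.statusCode) {",
        "  pm.test('status code', () => pm.response.to.have.status(validate.statusCode));",
        "}",
      ],
    },
  })
);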

2. Postman Collection Pre-Request Scripts/Tests

The tests are written in collection-level test scripts that parse the "validate" variable set by the iteration data.

The validate.expect key is compared to the response using a modified chai-subset (modified to stringify basic values before comparing, to handle "{{results.0.val}}" cases).
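
For intuition, subset matching means the response only needs to contain the expected keys (illustrative):

// Response body: { "id": 7, "value": "123456", "createdAt": "..." }
// A validate.expect of { "value": 123456 } passes: extra keys are ignored,
// and 123456 is stringified to "123456" before the subset comparison.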

3. Mocha Test Suites Generation

A custom script parses all the data files and generates mocha test suites from them, grouping tests by request.

4. Newman Result Parsing

In the test suite, each test is run through Newman with the environment provided and extra variables for tests and results.

The result of the Postman request is available in a Newman summary object, with all tests and failures. We parse this summary object in JavaScript and throw errors if any of the Postman tests failed. The error contains the test that failed, the request params/body, and the response received.
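
For reference, running newman programmatically and checking the summary looks roughly like this (a sketch, not this module's exact code):

const newman = require('newman');

newman.run(
  {
    collection: 'Test.postman_collection.json',
    environment: 'Test.postman_environment.json',
    iterationData: [{ validate: { statusCode: 200 } }],
  },
  (err, summary) => {
    if (err) throw err;

    // summary.run.failures lists every failed Postman test with its error
    if (summary.run.failures.length > 0) {
      const messages = summary.run.failures.map((f) => f.error.message);
      throw new Error(messages.join('\n'));
    }
  }
);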

5. Test Results

Mocha collects all the errors (if any) and presents us with a test report. If all the tests are passing it exits successfully without an error.

License

MIT License

Copyright (c) 2020 Collegedunia Web Pvt. Ltd.