# autobench

[![NPM version](https://img.shields.io/npm/v/autobench.svg?style=flat)](https://www.npmjs.com/package/autobench) [![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg?style=flat)](https://standardjs.com/)

Automated benchmarks to avoid performance regressions in HTTP applications.

Wraps autocannon and autocannon-compare in a single tool to automate benchmarking and monitoring of HTTP routes.

## Installation

This is a Node.js module available through the npm registry. It can be installed using the npm or yarn command line tools.

```sh
npm i autobench
```

or globally:

```sh
npm i -g autobench
```

## Usage

```sh
autobench
# or directly
npx autobench
```

Set the environment variable `DEBUG=autobench:*` to see the application logs. Examples:

```sh
DEBUG=autobench:debug autobench compare
DEBUG=autobench:info autobench compare
DEBUG=autobench:* autobench compare
```

## Config file

To use autobench, the project must have an `autobench.yml` config file.

The config file parameters are described below:

```yaml
# Name of the project. [OPTIONAL]
name: 'Autobench Example'
# Folder to store and retrieve benchmarks. [REQUIRED]
benchFolder: 'bench'
# Root URL to benchmark. [REQUIRED] It can also be set via the `AUTOBENCH_URL` environment variable.
url: 'http://localhost:3000'
# Number of connections. See https://github.com/mcollina/autocannon for further explanation. [OPTIONAL]
connections: 10
# Amount of pipelining. See https://github.com/mcollina/autocannon for further explanation. [OPTIONAL]
pipelining: 1
# Duration of the benchmark. See https://github.com/mcollina/autocannon for further explanation. [OPTIONAL]
duration: 30
# Group of routes to benchmark. [REQUIRED]
benchmarks:
    # Benchmark route name. [REQUIRED]
  - name: 'request 1'
    # Route path. [REQUIRED]
    path: '/'
    # HTTP method. [OPTIONAL] - Default `GET`
    method: 'POST'
    # Request headers. [OPTIONAL]
    headers:
      Content-type: 'application/json'
    # Request body. [OPTIONAL] - It's automatically parsed to a JSON object.
    body:
      example: 'true'
      email: 'hey-[<id>]@example.com'
    # [OPTIONAL] When this field is set to `true`, each `[<id>]` token is replaced with a generated HyperID at runtime.
    idReplacement: true

  - name: 'request 2'
    path: '/slow'
```

See the autobench.yml file for examples.

## Compare

Runs a benchmark and compares it to the stored one. A previous benchmark must already exist in the `benchFolder`; see the Create command to generate it.

Options:

| Option | Description | Full command |
| - | - | - |
| `-s` | When a performance regression is identified, an `autobench-review.md` file is created with the summary | `autobench compare -s` |

```sh
autobench compare [-s]
```

The generated `autobench-review.md` looks like this:

```md
## Performance Regression ⚠️

---
The previous benchmark for request-1 was significantly performatic than from this PR.

- **Router**: request-1
- **Requests Diff**: 10%
- **Throughput Diff**: 10%
- **Latency Diff**: 10%

---
The previous benchmark for request-2 was significantly performatic than from this PR.

- **Router**: request-2
- **Requests Diff**: 20%
- **Throughput Diff**: 20%
- **Latency Diff**: 20%
```

## Create

Stores or overrides the benchmark results in the `benchFolder`. Usually it should be run to update the stored baseline to the latest benchmarking result, for instance after each merged PR.

```sh
autobench create
```
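The create/compare pair fits naturally into CI: refresh the baseline on the main branch and compare on pull requests. A hypothetical GitHub Actions sketch follows; the workflow layout, `server.js` entry point, and Node version are assumptions about your project, not part of autobench.

```yaml
# Hypothetical CI sketch: store a baseline on main, compare on pull requests.
name: autobench

on:
  push:
    branches: [main]
  pull_request:

jobs:
  bench:
    runs-on: ubuntu-latest
    env:
      AUTOBENCH_URL: 'http://localhost:3000'
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Start the HTTP app under test in the background (assumed entry point).
      - run: node server.js &
      # On main: refresh the stored baseline.
      - run: npx autobench create
        if: github.ref == 'refs/heads/main'
      # On PRs: compare against the baseline and emit autobench-review.md on regression.
      - run: npx autobench compare -s
        if: github.event_name == 'pull_request'
```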

## Examples

See autobench-example for further details.