solidity-benchmark

Benchmarking tool for truffle-based projects

Benchmarking Solidity Smart Contracts

Features

This simple tool (WIP) is meant to benchmark EVM-based smart contract calls. It is designed to plug in as seamlessly as possible to a Truffle-based project. This guide will show you how: just by adding a few lines of code to your Truffle tests, for each smart contract call you will see in an output file:

  • Transaction Hash
  • Gas Consumed
  • Time elapsed until the transaction is included in a block

Furthermore, by harnessing the go-ethereum debug_traceTransaction RPC, which is called over the transactions produced during the tests (including the ones that deploy contracts), for each smart contract call it can also append to the above benchmark file:

  • The occurrences of each OPCODE called within the transaction, or just of the ones you want to monitor (more details below)
  • Information about the EVM stack, storage and memory during the execution (to be implemented)

The tool also allows you to compare two different benchmark files, produced by the above steps, and will output the percentage improvement/deterioration between the two solutions. This can speed up the smart contract optimization process, since a new solution can be compared immediately against a previous one, or it can be used to benchmark the same solution over different networks.

Finally, it converts the report files to Markdown format.

Truffle Integration Guide

Follow these steps to integrate the tool into a Truffle-based project using the Mocha test framework (although the rationale behind it may be applied to different SDKs). The aim here is to enhance the tests so that the benchmarking script can be run automatically over the contract calls you want to measure. This guide shows one way to achieve that, but the package is meant to be as abstracted away from the environment as possible. Also, improvements and proposals are very welcome, so feel free to open issues!

Configuration

The configuration object used by the package, with its corresponding default values, is:

{
    BENCHMARK_FILEPATH : "./benchmark/stats/test.json",
    WATCHED_OPCODES: ["MLOAD", "MSTORE", "MSTORE8", "SLOAD", "SSTORE", "CALLDATACOPY", "CALLDATASIZE", "CALLDATALOAD"],
    WEB3_PROVIDER_URL: "http://localhost:8545",
    MD_OUTPUT_FILEPATH : "./benchmark/stats/test.md"
}

As you will see in the steps below, some functions may take an optional customConfig object, in case you want to change one or more of these values only within the current call. If provided, it will overwrite the values for the keys that are actually present in the default object, ignoring the others.

The config object is treated as a singleton within the package. This means that once you have passed a customConfig object to override some of the default values, it will be reused by all subsequent function calls as long as the same process is running. In other words, if for example you run all the test suites with the truffle test command, you only have to set the custom settings in the first call to the package (possibly in the migration file, if you also intend to benchmark deployment costs).
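
For example, a partial override that only changes the output paths and the provider URL (the values below are purely illustrative) could be defined once and passed as the customConfigs argument used in the steps below:

// the remaining keys (e.g. WATCHED_OPCODES) keep their default values,
// and keys that do not exist in the default object are ignored
const customConfigs = {
    BENCHMARK_FILEPATH: "./benchmark/stats/erc20.json",
    MD_OUTPUT_FILEPATH: "./benchmark/stats/erc20.md",
    WEB3_PROVIDER_URL: "http://localhost:7545"
}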

Step By Step

  1. Install the package through npm

npm install --save-dev solidity-benchmark

  2. Import it in one or more of your test files:

const benchmark = require("solidity-benchmark")

  3. Now, in the beforeEach hook of Mocha we need to set some kind of condition that we'll check in the afterEach hook to determine whether we want to benchmark this specific test, for example:
beforeEach('setup for each test', async function () {
    //...other setup
    
    //unset variables to check for benchmark after each test 
    txToBenchmark = undefined
    duration = 0
    
    //set current test name to use in afterEach hook
    currentTestName = this.currentTest.title

    // metadata 
    metadata = { network: "test" }
})

Here this.currentTest.title is part of the Mocha framework (note that the hook is declared with a regular function rather than an arrow function, so that this is bound to the Mocha context); we need to store the title now because it won't be available in the afterEach hook.
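
Note that txToBenchmark, duration, currentTestName and metadata (and customConfigs, if you use one) are assumed to be declared in a scope shared by the hooks and the tests, for example at the top of the contract block. A minimal sketch (the contract name and variable placement are illustrative):

contract("SomeContract", (accounts) => {
    // shared between beforeEach, the tests and afterEach
    let txToBenchmark, duration, currentTestName, metadata

    // ...the beforeEach, tests and afterEach shown in these steps go here
})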

  4. We can set those variables inside the tests we want to monitor:
it('Shows how to trigger the benchmark', async () => {
    let start = Date.now()
    txToBenchmark = await someContract.someMethod(someParams);
    duration = Number( (Date.now() - start) / 1000 ).toFixed(3)
})
  5. In the afterEach hook we check whether the variables have been set and, if so, we use them:
  afterEach("save to file tx hash and benchmark time", async () => {

    // if variables txToBenchmark has been set inside the current test
      if(txToBenchmark){
        duration = duration ? duration : "Not estimated"
        await benchmark.saveStatsToFile(txToBenchmark.tx, currentTestName, txToBenchmark.receipt.gasUsed.toString(), duration, metadata, customConfigs)
      }

  })

benchmark.saveStatsToFile(<txHash>, <function_name>, <gas>, <seconds>, <metadata>, <customConfigs>)

This will store in a JSON file, for each function_name, the gas consumption, the time elapsed, the transaction hash and any arbitrary metadata that you might want in the final report.
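
As a rough illustration only (the exact keys written by saveStatsToFile are an assumption here, not documented output), an entry keyed by the function name might look like:

{
    "someMethod": {
        "tx": "0x...",
        "gasUsed": "84213",
        "timeElapsed": "0.124",
        "metadata": { "network": "test" }
    }
}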

In some cases these pieces of information may already be enough, so why would you need this package to do it? Clearly, you don't.

The package becomes useful if you want to trace the transactions you have just executed, counting the opcodes called and possibly aggregating data about the EVM stack, storage and memory in some meaningful way (step 6), compare two different implementations (step 7), and convert the report to Markdown format (step 8).

  6. You can do that simply by adding to the Mocha after hook:
  after("Trace the transactions benchmarked in this test suite", async () => {
    benchmark.trace(customConfigs)
  })

benchmark.trace(<customConfigs>, <aggregation_function>)

This will read the transaction hashes written to the benchmark file in step 5, perform a debug_traceTransaction RPC call to the client for each of them, run the aggregation_function (if provided) over the returned object, and finally count the occurrences of the opcodes called, either all of them or just the ones specified in the configuration object. It will update the benchmark file, appending these further results to the transaction objects, and convert it to Markdown format.
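
If you want to pass an aggregation_function, a minimal sketch could look like the one below; the shape of the object handed to the callback is an assumption (geth's debug_traceTransaction returns a result with a structLogs array of execution steps), so adapt it to what your client actually returns:

after("Trace with a custom aggregation function", async () => {
    benchmark.trace(customConfigs, (trace) => {
        // e.g. keep only the maximum call depth reached during execution
        // (assumes trace.structLogs is the raw structLogs array from the client)
        return Math.max(...trace.structLogs.map((step) => step.depth))
    })
})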

  7. You can finally compare two different implementations with:

benchmark.compare(<benchmark_file_before>, <benchmark_file_after>, <output_md_file>)

where the two benchmark files are the ones produced in step 5, while <output_md_file> is where the comparison will be reported in Markdown style. All file paths here are expected to be absolute or relative to your project root. The function will simply calculate the percentage difference (which can also be negative) for each matching contract call (the keys indicating the method name must be identical in both files), both for gas consumption and for time elapsed.
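
For example, from a small Node script run at the project root (the file names are illustrative):

// compare.js
const benchmark = require("solidity-benchmark")

benchmark.compare(
    "./benchmark/stats/before.json",
    "./benchmark/stats/after.json",
    "./benchmark/stats/comparison.md"
)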

  8. The package also provides a tool to convert the JSON-formatted benchmark file into a Markdown table, writing it to an output file:

benchmark.convertToMD(<json_input_path>, <md_output_path>)

  • json_input_path: path of the benchmark file (produced in step 5) to convert, absolute or relative to your project root
  • md_output_path: path of the .md output file, absolute or relative to your project root
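
For example, using the default paths from the configuration object above:

const benchmark = require("solidity-benchmark")

benchmark.convertToMD("./benchmark/stats/test.json", "./benchmark/stats/test.md")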

That's all! Maybe a working example explains this better than words: check out here how the package has been integrated into one of our projects, and how all the above steps are automated in a script here.