
# deepspeech-node-wrapper

_TBC: work in progress._

A Node.js module that wraps the Mozilla DeepSpeech Node.js bindings, to make them easier to use and to get transcripts with word-level timings.

## Setup

`git clone` the repository, `cd` into it, and run `npm install`.

## Usage

Install from npm:

```bash
npm install deepspeech-node-wrapper
```

### Downloading speech models

You need to download the DeepSpeech model separately (1.8 GB). Optionally, this module provides a `downloadDeepSpeechModel` helper function to download the model, unzip it, delete the tar file, and return the path to the model so it can be passed to the `deepSpeechSttWrapper` function. This is meant to make integration with a host application easier.

```js
const { downloadDeepSpeechModel } = require("deepspeech-node-wrapper");

const outputPath = "./models.tar.gz";

downloadDeepSpeechModel(outputPath)
  .then((res) => {
    console.log("res", res);
  })
  .catch((error) => {
    console.error("error", error);
  });
```

If you don't specify a version of the model, it defaults to `0.6.0`; otherwise you can pass an optional second parameter to download a different version.

```js
const { downloadDeepSpeechModel } = require("deepspeech-node-wrapper");

const outputPath = "./models.tar.gz";

downloadDeepSpeechModel(outputPath, "0.6.0")
  .then((res) => {
    console.log("res", res);
  })
  .catch((error) => {
    console.error("error", error);
  });
```
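The downloaded model path can then be passed straight to `deepSpeechSttWrapper` (described in the next section). A minimal end-to-end sketch, assuming the value resolved by `downloadDeepSpeechModel` is the path to the unzipped model folder, as described above:

```js
// Sketch only: download the model once, then transcribe an audio file with it.
// Assumes the promise resolves with the path to the unzipped model folder.
const deepSpeechSttWrapper = require("deepspeech-node-wrapper");
const { downloadDeepSpeechModel } = deepSpeechSttWrapper;

async function downloadAndTranscribe(audioFile) {
  // Downloads the tar, unzips it, deletes the archive, and returns the model path.
  const modelPath = await downloadDeepSpeechModel("./models.tar.gz");
  // Run speech-to-text against the downloaded model.
  const { dpeResult, result, audioLength } = await deepSpeechSttWrapper(audioFile, modelPath);
  console.log(result, audioLength);
  return dpeResult;
}

downloadAndTranscribe("./audio/2830-3980-0043.wav").catch(console.error);
```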

### STT

Note that the WAV audio file needs to be 16 kHz and mono.
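If your audio is in another format or at a different sample rate, you can convert it first. The helper below is not part of this module; it is a minimal sketch assuming `ffmpeg` is installed and available on your `PATH`:

```js
// Hypothetical helper (not part of deepspeech-node-wrapper): convert any
// input audio to a 16 kHz mono WAV file using ffmpeg via child_process.
const { execFile } = require("child_process");

function convertTo16kHzMonoWav(inputFile, outputFile) {
  return new Promise((resolve, reject) => {
    // -ar 16000 sets the sample rate, -ac 1 downmixes to a single channel.
    execFile(
      "ffmpeg",
      ["-y", "-i", inputFile, "-ar", "16000", "-ac", "1", outputFile],
      (error) => (error ? reject(error) : resolve(outputFile))
    );
  });
}
```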

#### Promises

```js
const path = require("path");
const deepSpeechSttWrapper = require("deepspeech-node-wrapper");

// path to the 16 kHz mono wav audio file
const audioFile = "./audio/2830-3980-0043.wav";
// path to the folder containing the DeepSpeech model
const modelPath = path.join(__dirname, "./models");

deepSpeechSttWrapper(audioFile, modelPath)
  .then((res) => {
    console.log(JSON.stringify(res, null, 2));
    const { dpeResult, result, audioLength } = res;
    // Do something with the result
  })
  .catch((err) => {
    console.error(err);
  });
```

#### async/await

```js
const fs = require("fs");
const path = require("path");
const deepSpeechSttWrapper = require("deepspeech-node-wrapper");

// path to the 16 kHz mono wav audio file
const audioFile = "./audio/2830-3980-0043.wav";
// path to the folder containing the DeepSpeech model
const modelPath = path.join(__dirname, "./models");

async function main(audioFile, modelPath) {
  try {
    const { dpeResult, result, audioLength } = await deepSpeechSttWrapper(audioFile, modelPath);
    console.log(dpeResult);
    fs.writeFileSync(
      "./example-output/example-output-dpe.json",
      JSON.stringify({ ...dpeResult, audioLength }, null, 2)
    );
  } catch (e) {
    console.error(e);
  }
}

main(audioFile, modelPath);
```

`modelPath` is the folder for the DeepSpeech model, and it is expected to contain the following files (an optional check is sketched after this list):

- `output_graph.pbmm`
- `lm.binary`
- `trie`
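If you want to fail early with a clearer error before calling `deepSpeechSttWrapper`, a small illustrative check (not part of the module's API) could look like this:

```js
// Illustrative only: verify the model folder contains the expected files.
const fs = require("fs");
const path = require("path");

function assertModelFolder(modelPath) {
  const expectedFiles = ["output_graph.pbmm", "lm.binary", "trie"];
  const missing = expectedFiles.filter(
    (file) => !fs.existsSync(path.join(modelPath, file))
  );
  if (missing.length > 0) {
    throw new Error(`Model folder ${modelPath} is missing: ${missing.join(", ")}`);
  }
}
```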

For more, see the example usage in the `src` folder.

## System Architecture

## Development env

## Build

NA

## Tests

NA

## Deployment

```bash
npm run publish:public
```