This package is a fork of @tensorflow-models/universal-sentence-encoder, with some package upgrades.

Usage: no npm install needed!

<script type="module">
  import tfjsModelsUseEmbedding from '';
</script>


Universal Sentence Encoder lite

The Universal Sentence Encoder (Cer et al., 2018) (USE) is a model that encodes text into 512-dimensional embeddings. These embeddings can then be used as inputs to natural language processing tasks such as sentiment classification and textual similarity analysis.

This module is a TensorFlow.js FrozenModel converted from the USE lite (module on TF Hub), a lightweight version of the original. The lite model is based on the Transformer (Vaswani et al., 2017) architecture, and uses an 8k word piece vocabulary.

In this demo we embed six sentences with the USE, and render their self-similarity scores in a matrix (redder means more similar):


The matrix shows that USE embeddings can be used to cluster sentences by similarity.

The sentences (taken from the TensorFlow Hub USE lite colab):

  1. I like my phone.
  2. Your cellphone looks great.
  3. How old are you?
  4. What is your age?
  5. An apple a day, keeps the doctors away.
  6. Eating strawberries is healthy.
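The self-similarity scores rendered in the matrix are computed from pairs of sentence embeddings; a common choice is cosine similarity. Here is a minimal sketch in plain JavaScript, assuming the embeddings have already been extracted as plain number arrays (the `cosineSimilarity` helper is illustrative, not part of this package):

```javascript
// Cosine similarity between two embedding vectors,
// represented as plain number arrays of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical vectors score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Applying this function to every pair of the six sentence embeddings yields the 6×6 matrix shown above, with sentences 1–2, 3–4, and 5–6 scoring highest against each other.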


Using yarn:

$ yarn add @tensorflow/tfjs@1.0.0-alpha3 @tensorflow-models/universal-sentence-encoder

Using npm:

$ npm install @tensorflow/tfjs@1.0.0-alpha3 @tensorflow-models/universal-sentence-encoder


To import it after installing via npm:

import * as use from '@tensorflow-models/universal-sentence-encoder';

or via standalone script tags:

<script src=""></script>
<script src=""></script>


// Load the model.
use.load().then(model => {
  // Embed an array of sentences.
  const sentences = [
    'Hello.',
    'How are you?'
  ];
  model.embed(sentences).then(embeddings => {
    // `embeddings` is a 2D tensor consisting of the 512-dimensional embeddings for each sentence.
    // So in this example `embeddings` has the shape [2, 512].
    embeddings.print(true /* verbose */);
  });
});

To use the Tokenizer separately:

use.loadTokenizer().then(tokenizer => {
  tokenizer.encode('Hello, how are you?'); // [341, 4125, 8, 140, 31, 19, 54]
});