NaiveBayesClassifier

NaiveBayesClassifier is an implementation of a Multinomial Naive-Bayes Classifier that uses Laplace Smoothing. It takes in a piece of text and tells you which category it most likely belongs to.
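
Concretely, categorization follows the standard multinomial Naive Bayes decision rule with add-one (Laplace) smoothing. The notation below is a sketch for reference and is not taken from the library's source:

\hat{c} = \arg\max_{c \in C} \Big[ \log P(c) + \sum_{i=1}^{n} \log P(w_i \mid c) \Big],
\qquad
P(w_i \mid c) = \frac{\mathrm{count}(w_i, c) + 1}{\sum_{w \in V} \mathrm{count}(w, c) + |V|}

where P(c) is the relative frequency of category c in the training data, count(w, c) is how often token w was learned under category c, and |V| is the vocabulary size.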

What is this good for?

"In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features." - Wikipedia: Naive Bayes classifier.

You can use this implementation for categorizing any text content into any arbitrary set of categories. For example:

  • is an email spam, or not spam?
  • is a news article about technology, politics, or sports?
  • is the author of some piece of text male, or female?
  • is a tweet expressing positive sentiment or negative sentiment?

Depending on your specific attributes and sample size, there may be other algorithms that are better suited: Comparison of Classification Methods Based on the Type of Attributes and Sample Size.

Try it now

You can experiment, test and play with NaiveBayesClassifier in your browser at http://jsbin.com/xixuga/1/edit?html,js,console

If you would like to try NaiveBayesClassifier as a web-service, you can use: http://nbcaas.herokuapp.com/

Installing NaiveBayesClassifier

NaiveBayesClassifier is shipped in UMD format, meaning it is available as a CommonJS/AMD module or as a browser global. You can install it using npm:

$ npm install naivebayesclassifier

OR using bower:

$ bower install naivebayesclassifier
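
If you are not using a module loader, the UMD build can also be loaded as a browser global. A minimal sketch, assuming the built file is served locally and the global is named NaiveBayesClassifier (both the file name and the global name are assumptions, not taken from this README):

<script src="naivebayesclassifier.js"></script>
<script>
  // Assumes the UMD build attaches NaiveBayesClassifier to window
  var classifier = new NaiveBayesClassifier();
</script>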

Basic Usage

new NaiveBayesClassifier([options])

Using the default tokenization function, which splits on spaces:

var NaiveBayesClassifier = require('NaiveBayesClassifier'),
    classifier = new NaiveBayesClassifier();

Or with an optional custom tokenization function that you specify:

var NaiveBayesClassifier = require('NaiveBayesClassifier');
var splitOnChar = function(text) { 
    return text.split('');
};
var classifier = new NaiveBayesClassifier({ tokenizer: splitOnChar });
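
As another sketch, here is a tokenizer that lowercases the input and splits on runs of non-word characters (this particular tokenizer is illustrative and not part of the library):

var tokenizeWords = function(text) {
    // lowercase, split on any run of non-word characters,
    // and drop the empty strings left by leading/trailing punctuation
    return text.toLowerCase().split(/\W+/).filter(function(token) {
        return token.length > 0;
    });
};
var wordClassifier = new NaiveBayesClassifier({ tokenizer: tokenizeWords });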

.withClassifier(classifier)

Recover an existing classifier, which you may have retrieved from a database or localStorage:

var NaiveBayesClassifier = require('NaiveBayesClassifier'),
    classifier = NaiveBayesClassifier.withClassifier(existingClassifier);
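
For example, a round trip through JSON (a sketch that assumes the classifier's state survives JSON serialization; the localStorage key is illustrative):

// Persist the trained classifier as plain JSON, e.g. in localStorage
localStorage.setItem('myClassifier', JSON.stringify(classifier));

// Later, parse it back and hand the plain object to .withClassifier()
var existingClassifier = JSON.parse(localStorage.getItem('myClassifier'));
var restoredClassifier = NaiveBayesClassifier.withClassifier(existingClassifier);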

.learn(text, category)

Teach your classifier what category the text belongs to. The more you teach your classifier, the more reliable it becomes. It will use what it has learned to identify new documents that it hasn't seen before.

classifier.learn('amazing, awesome movie!! Yeah!!', 'positive');
classifier.learn('terrible, shitty thing. Damn. Sucks!!', 'negative');
classifier.learn('I dont really know what to make of this.', 'neutral');
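
In practice you will typically loop over a labelled training set (the trainingSet array below is illustrative):

var trainingSet = [
    { text: 'amazing, awesome movie!! Yeah!!', category: 'positive' },
    { text: 'terrible, shitty thing. Damn. Sucks!!', category: 'negative' },
    { text: 'I dont really know what to make of this.', category: 'neutral' }
];

trainingSet.forEach(function(doc) {
    // Each call updates the classifier's word counts for that category
    classifier.learn(doc.text, doc.category);
});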

.categorize(text)

classifier.categorize('awesome, cool, amazing!! Yay.');

This will return the category that the text most likely belongs to, together with its probability. Its judgement is based on what you have taught it with .learn(text, category).

{ 
  "category": "positive",
  "probability": 0.7687012152002337,
  "categories":
   { 
     "positive": 0.7687012152002337,
     "negative": 0.15669449587155299,
     "neutral": 0.07460428892821332
   } 
}
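
You can then act on the returned object, for example by falling back to the full distribution when the top probability is low (the 0.5 threshold is illustrative):

var result = classifier.categorize('awesome, cool, amazing!! Yay.');

if (result.probability > 0.5) {
    console.log('Most likely category:', result.category);
} else {
    console.log('Not confident enough, full distribution:', result.categories);
}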

Complete API Documentation

If you would like to explore the full API, you can find auto-generated documentation at: https://hadi.io/NaiveBayesClassifier.

Acknowledgements

This implementation is based on the Stanford NLP video series by Professor Dan Jurafsky & Chris Manning. This library modifies and extends work first investigated by Tolga Tezel.

License

Copyright (C) 2015, Hadi Michael. All rights reserved.

Licensed under BSD-3-Clause