winkNLP is a JavaScript library for Natural Language Processing (NLP). Designed specifically to make development of NLP solutions easier and faster, winkNLP is optimized for the right balance of performance and accuracy. It can process large amounts of raw text at speeds over 525,000 tokens/second, and with test coverage of ~100%, it is a tool for building production-grade systems with confidence.
WinkNLP has a comprehensive natural language processing (NLP) pipeline covering tokenization, sentence boundary detection (sbd), negation handling, sentiment analysis, part-of-speech (pos) tagging, named entity recognition (ner), and custom entity recognition (cer).
At every stage a range of properties become accessible for tokens, sentences, and entities. Read more about the processing pipeline and how to configure it in the winkNLP documentation.
It packs a rich feature set into a small-footprint codebase of under 1500 lines:
- Fast, lossless & multilingual tokenizer
- Developer friendly and intuitive API
- Built-in API to aid text visualization
- Extensive text processing features such as bag-of-words, frequency table, stop word removal, readability statistics computation and many more
- Pre-trained language models with sizes starting under 3MB
- Multiple similarity methods
- Word vector integration
- No external dependencies
Use npm install:
npm install wink-nlp --save
After installing winkNLP, you also need to install a language model suited to the Node.js version in use. The following table outlines the version-specific installation command:
| Node.js Version | Installation |
| --- | --- |
| 16 or 18 | npm install wink-eng-lite-web-model --save |
| 14 or 12 | node -e "require('wink-nlp/models/install')" |
The wink-eng-lite-web-model is designed to work with Node.js version 16 or 18. It can also work on browsers as described in the next section.
The second command installs the wink-eng-lite-model, which works with Node.js version 14 or 12.
If you’re using winkNLP in the browser, use the wink-eng-lite-web-model. Learn about its installation and usage in our guide to using winkNLP in the browser. Explore winkNLP recipes on Observable for live browser-based examples.
The "Hello World!" in winkNLP is given below:
// Load wink-nlp package.
const winkNLP = require( 'wink-nlp' );
// Load English language model.
const model = require( 'wink-eng-lite-web-model' );
// Instantiate winkNLP.
const nlp = winkNLP( model );
// Obtain "its" helper to extract item properties.
const its = nlp.its;
// Obtain "as" reducer helper to reduce a collection.
const as = nlp.as;
// NLP Code.
const text = 'Hello World🌎! How are you?';
const doc = nlp.readDoc( text );
console.log( doc.out() );
// -> Hello World🌎! How are you?
console.log( doc.sentences().out() );
// -> [ 'Hello World🌎!', 'How are you?' ]
console.log( doc.entities().out( its.detail ) );
// -> [ { value: '🌎', type: 'EMOJI' } ]
console.log( doc.tokens().out() );
// -> [ 'Hello', 'World', '🌎', '!', 'How', 'are', 'you', '?' ]
console.log( doc.tokens().out( its.type, as.freqTable ) );
// -> [ [ 'word', 5 ], [ 'punctuation', 2 ], [ 'emoji', 1 ] ]
Experiment with the above code on RunKit.
Dive into winkNLP's concepts or head to winkNLP recipes for common NLP tasks or just explore live showcases to learn:
Reads any wikipedia article and generates a visual timeline of all its events.
Performs tokenization, sentence boundary detection, pos tagging, named entity detection and sentiment analysis of user input text in real time.
Links entities such as famous persons, locations or objects to the relevant Wikipedia pages.
winkNLP processes raw text at ~525,000 tokens per second with its default language model — wink-eng-lite-model, when benchmarked using "Ch 13 of Ulysses by James Joyce" on a 2.2 GHz Intel Core i7 machine with 16GB RAM. The processing included the entire NLP pipeline — tokenization, sentence boundary detection, negation handling, sentiment analysis, part-of-speech tagging, and named entity extraction. This speed is well ahead of prevailing speed benchmarks.
The benchmark was conducted on Node.js versions 14.8.0 and 12.18.3. It delivered similar or better performance on Node.js versions 16 and 18.
winkNLP delivers similar performance on browsers; its performance on a specific machine/browser combination can be measured using the Observable notebook — How to measure winkNLP's speed on browsers?.
It pos tags a subset of the WSJ corpus with an accuracy of ~94.7% — this includes tokenization of raw text prior to pos tagging. The current state of the art is ~97% accuracy, but at lower speeds, and is generally computed using gold-standard pre-tokenized corpora.
Its general-purpose sentiment analysis delivers an f-score of ~84.5%, when validated using the Amazon Product Review Sentiment Labelled Sentences Data Set at the UCI Machine Learning Repository. The current benchmark accuracy for specifically trained models ranges around 95%.
Wink NLP delivers this performance with minimal load on RAM. For example, it processes the entire History of India Volume I with a total peak memory requirement of under 80MB. The book has around 350 pages, which translates to over 125,000 tokens.
- Concepts — everything you need to know to get started.
- API Reference — explains usage of APIs with examples.
- Change log — version history along with the details of breaking changes, if any.
- Showcases — live examples with code to give you a head start.
Please ask at Stack Overflow or discuss at Wink JS GitHub Discussions or chat with us at Wink JS Gitter Lobby.
If you spot a bug and the same has not yet been reported, raise a new issue or consider fixing it and sending a PR.
Looking for a new feature, request it via the new features & ideas discussion forum or consider becoming a contributor.
Wink is a family of open source packages for Natural Language Processing, Machine Learning, and Statistical Analysis in Node.js. The code is thoroughly documented for easy human comprehension and has a test coverage of ~100% for reliability to build production-grade solutions.
Wink NLP is copyright 2017-22 GRAYPE Systems Private Limited.
It is licensed under the terms of the MIT License.