spencermountain / compromise: modest natural-language processing

9.4K Stars 602 Forks Last release: 3 days ago (13.3.2) MIT License 3.8K Commits 193 Releases


compromise modest natural language processing

npm install compromise

by Spencer Kelly and many contributors ![](https://user-images.githubusercontent.com/399657/68221862-17ceb980-ffb8-11e9-87d4-7b30b6488f16.png)

compromise tries its best.

it is small, quick, and usually good-enough.

.match():

compromise makes it simple to interpret and match text:

```js
let doc = nlp(entireNovel)
doc.if('the #Adjective of times').text()
// "it was the blurst of times??"

if (doc.has('simon says #Verb')) {
  return true
}
```

match docs

.verbs():

conjugate and negate verbs in any tense:

```js
let doc = nlp('she sells seashells by the seashore.')
doc.verbs().toPastTense()
doc.text()
// 'she sold seashells by the seashore.'
```

verb docs

.nouns():

transform nouns to plural and possessive forms:

```js
let doc = nlp('the purple dinosaur')
doc.nouns().toPlural()
doc.text()
// 'the purple dinosaurs'
```

noun docs

.numbers():

interpret plaintext numbers

```js
nlp.extend(require('compromise-numbers'))

let doc = nlp('ninety five thousand and fifty two')
doc.numbers().add(2)
doc.text()
// 'ninety five thousand and fifty four'
```

number docs

.topics():

grab subjects in a text:

```js
let doc = nlp(buddyHolly)
doc.people().if('mary').json()
// [{ text: 'Mary Tyler Moore' }]

doc = nlp(freshPrince)
doc.places().first().text()
// 'West Philadelphia'

doc = nlp('the opera about richard nixon visiting china')
doc.topics().json()
// [
//   { text: 'richard nixon' },
//   { text: 'china' },
// ]
```

topics docs

.contractions():

work with contracted and implicit words:

```js
let doc = nlp("we're not gonna take it, no we ain't gonna take it.")

// match an implicit term
doc.has('going') // true

// transform
doc.contractions().expand()
doc.text()
// 'we are not going to take it, no we are not going to take it.'
```

contraction docs

Use it on the client-side:

```html
<script src="https://unpkg.com/compromise"></script>
<script src="https://unpkg.com/compromise-numbers"></script>
<script>
  nlp.extend(compromiseNumbers)

  var doc = nlp('two bottles of beer')
  doc.numbers().minus(1)
  document.body.innerHTML = doc.text()
  // 'one bottle of beer'
</script>
```

or as an es-module:

```js
import nlp from 'compromise'

var doc = nlp('London is calling')
doc.verbs().toNegative()
// 'London is not calling'
```

or if you don't care about POS-tagging, you can use the tokenize-only build: (90kb!)

```html
<script src="https://unpkg.com/compromise/builds/compromise-tokenize.js"></script>
<script>
  var doc = nlp('No, my son is also named Bort.')

  // you can see the text has no tags
  console.log(doc.has('#Noun')) // false

  // but the whole api still works
  console.log(doc.has('my .* is .? named /^b[oa]rt/')) // true
</script>
```
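To give a rough flavour of what the tokenize-only build can do (this is a toy sketch in plain js, not compromise's internals), a regex term in the match-syntax is simply tested against individual tokens, with no POS tags involved:

```js
// toy sketch (not compromise's internals): match a regex term
// against naively-split tokens, with no POS tags involved
const tokens = 'No, my son is also named Bort.'
  .toLowerCase()
  .replace(/[.,!?]/g, '')
  .split(/\s+/)

// the /^b[oa]rt/ piece of the match above is a plain regex on one token
const hasBort = tokens.some(t => /^b[oa]rt/.test(t))
console.log(hasBort) // true
```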

compromise is 170kb (minified):

[![](https://user-images.githubusercontent.com/399657/68234819-14dfc300-ffd0-11e9-8b30-cb8545707b29.png)](https://bundlephobia.com/result?p=compromise)

it's pretty fast. It can run on keypress.

it works mainly by conjugating many forms of a basic word list.

The final lexicon is ~14,000 words.

you can read more about how it works, here.
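To make the conjugation idea concrete, here's a toy sketch (simplified, assumed logic, not compromise's actual code) of how generating regular forms multiplies a small base list into a larger lexicon:

```js
// toy sketch: expand a small base word list into a lexicon by
// generating regular conjugations (real code also handles irregulars)
const baseVerbs = ['walk', 'sell']

function conjugate(verb) {
  return {
    Infinitive: verb,
    PastTense: verb + 'ed',
    Gerund: verb + 'ing',
    PresentTense: verb + 's',
  }
}

const lexicon = {}
for (const verb of baseVerbs) {
  for (const [tag, form] of Object.entries(conjugate(verb))) {
    lexicon[form] = { root: verb, tag: tag }
  }
}

console.log(lexicon['walked'])  // { root: 'walk', tag: 'PastTense' }
console.log(lexicon['sold'])    // undefined - irregular forms need their own entries
```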


.extend():

set a custom interpretation of your own words:

```js
let myWords = {
  kermit: 'FirstName',
  fozzie: 'FirstName',
}
let doc = nlp(muppetText, myWords)
```

or make more changes with a compromise-plugin.

```js
const nlp = require('compromise')

nlp.extend((Doc, world) => {
  // add new tags
  world.addTags({
    Character: {
      isA: 'Person',
      notA: 'Adjective',
    },
  })
  // add or change words in the lexicon
  world.addWords({
    kermit: 'Character',
    gonzo: 'Character',
  })
  // add methods to run after the tagger
  world.postProcess(doc => {
    doc.match('light the lights').tag('#Verb . #Plural')
  })
  // add a whole new method
  Doc.prototype.kermitVoice = function () {
    this.sentences().prepend('well,')
    this.match('i [(am|was)]').prepend('um,')
    return this
  }
})
```

.extend() docs

API:

Constructor

_(these methods are on the `nlp` object)_

  • .tokenize() - parse text without running POS-tagging
  • .extend() - mix in a compromise-plugin
  • .fromJSON() - load a compromise object from
    .json()
    result
  • .verbose() - log our decision-making for debugging
  • .version() - current semver version of the library
  • .world() - grab all current linguistic data
Utils
  • .all() - return the whole original document ('zoom out')
  • .found [getter] - is this document empty?
  • .parent() - return the previous result
  • .parents() - return all of the previous results
  • .tagger() - (re-)run the part-of-speech tagger on this document
  • .wordCount() - count the # of terms in the document
  • .length [getter] - count the # of characters in the document (string length)
  • .clone() - deep-copy the document, so that no references remain
  • .cache({}) - freeze the current state of the document, for speed-purposes
  • .uncache() - un-freezes the current state of the document, so it may be transformed
Accessors
Match

(all match methods use the match-syntax.)

  • .match('') - return a new Doc, with this one as a parent
  • .not('') - return all results except for this
  • .matchOne('') - return only the first match
  • .if('') - return each current phrase, only if it contains this match ('only')
  • .ifNo('') - Filter-out any current phrases that have this match ('notIf')
  • .has('') - Return a boolean if this match exists
  • .lookBehind('') - search through earlier terms, in the sentence
  • .lookAhead('') - search through following terms, in the sentence
  • .before('') - return all terms before a match, in each phrase
  • .after('') - return all terms after a match, in each phrase
  • .lookup([]) - quick find for an array of string matches
Case
Whitespace
  • .pre('') - add this punctuation or whitespace before each match
  • .post('') - add this punctuation or whitespace after each match
  • .trim() - remove start and end whitespace
  • .hyphenate() - connect words with hyphen, and remove whitespace
  • .dehyphenate() - remove hyphens between words, and set whitespace
  • .toQuotations() - add quotation marks around these matches
  • .toParentheses() - add brackets around these matches
Tag
  • .tag('') - Give all terms the given tag
  • .tagSafe('') - Only apply tag to terms if it is consistent with current tags
  • .unTag('') - Remove this tag from the given terms
  • .canBe('') - return only the terms that can be this tag
Loops
  • .map(fn) - run each phrase through a function, and create a new document
  • .forEach(fn) - run a function on each phrase, as an individual document
  • .filter(fn) - return only the phrases that return true
  • .find(fn) - return a document with only the first phrase that matches
  • .some(fn) - return true or false if there is one matching phrase
  • .random(fn) - sample a subset of the results
Insert
Transform
Output
Selections
Subsets

Plugins:

These are some helpful extensions:

Adjectives
npm install compromise-adjectives
Dates
npm install compromise-dates
Numbers
npm install compromise-numbers
Export
npm install compromise-export
  • .export() - store a parsed document for later use
  • nlp.load() - re-generate a Doc object from .export() results
Html
npm install compromise-html
  • .html({}) - generate sanitized html from the document
Hash
npm install compromise-hash
  • .hash() - generate an md5 hash from the document+tags
  • .isEqual(doc) - compare the hash of two documents for semantic-equality
Keypress
npm install compromise-keypress
Ngrams
npm install compromise-ngrams
Paragraphs
npm install compromise-paragraphs

this plugin creates a wrapper around the default sentence objects.

Sentences
npm install compromise-sentences
Syllables
npm install compromise-syllables
  • .syllables() - split each term by its typical pronunciation
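As a flavour of what syllable-splitting involves (a naive sketch, not the plugin's algorithm), counting vowel groups, with a special case for a silent trailing 'e', gets you surprisingly far:

```js
// naive sketch: count syllables as vowel groups, with one special
// case for a silent trailing 'e' (the real plugin handles many more)
function countSyllables(word) {
  const w = word.toLowerCase()
  const groups = w.match(/[aeiouy]+/g) || []
  let n = groups.length
  if (n > 1 && /e$/.test(w) && !/le$/.test(w)) {
    n -= 1 // silent 'e', as in 'seashore'
  }
  return Math.max(n, 1)
}

console.log(countSyllables('dinosaur')) // 3
console.log(countSyllables('seashore')) // 2
```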

Typescript

we're committed to typescript/deno support, both in main and in the official-plugins:

```ts
import nlp from 'compromise'
import ngrams from 'compromise-ngrams'
import numbers from 'compromise-numbers'

const nlpEx = nlp.extend(ngrams).extend(numbers)

nlpEx('This is type safe!').ngrams({ min: 1 })
nlpEx('This is type safe!').numbers()
```

typescript docs

Docs:

Tutorials:
3rd party:
Talks:
Some fun Applications:

Limitations:

  • slash-support: we currently split slashes up as different words, like we do for hyphens. so things like this don't work: `nlp('the koala eats/shoots/leaves').has('koala leaves') // false`

  • inter-sentence match: by default, sentences are the top-level abstraction. inter-sentence, or multi-sentence matches aren't supported: `nlp("that's it. Back to Winnipeg!").has('it back') // false`

  • nested match syntax: the danger and beauty of regex is that you can recurse indefinitely. our match syntax is much weaker. things like this are not (yet) possible: `doc.match('(modern (major|minor))? general')`. complex matches must be achieved with successive .match() statements.

  • dependency parsing:Proper sentence transformation requires understanding the syntax tree of a sentence, which we don't currently do. We should! Help wanted with this.
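The inter-sentence limitation above falls straight out of the design. A toy illustration in plain js (not compromise's actual code) of why sentence-first splitting makes cross-boundary matches impossible:

```js
// toy illustration: once text is split into sentences up-front,
// no later match can span a sentence boundary
const text = "that's it. Back to Winnipeg!"
const sentences = text.split(/(?<=[.!?])\s+/)
// -> ["that's it.", 'Back to Winnipeg!']

const found = sentences.some(s => /\bit back\b/i.test(s))
console.log(found) // false - 'it' and 'Back' live in different sentences
```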

FAQ
β˜‚οΈ Isn't javascript too...

    yeah it is!
    it wasn't built to compete with NLTK, and may not fit every project.
    string processing is synchronous too, and parallelizing node processes is weird.
    See here for information about speed & performance, and here for project motivations

πŸ’ƒ Can it run on my arduino-watch?

    Only if it's water-proof!
    Read quick start for running compromise in workers, mobile apps, and all sorts of funny environments.

🌎 Compromise in other Languages?

    we've got work-in-progress forks for German and French, in the same philosophy, and need some help.

✨ Partial builds?

    we do offer a [compromise-tokenize](./builds/compromise-tokenize.js) build, which has the POS-tagger pulled-out.
    but otherwise, compromise isn't easily tree-shaken.
    the tagging methods are competitive, and greedy, so it's not recommended to pull things out.
    Note that without full POS-tagging, the contraction-parser won't work perfectly. ((spencer's cool) vs. (spencer's house))
    It's recommended to run the library fully.
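The (spencer's cool) vs. (spencer's house) ambiguity can be sketched like this (a hypothetical toy resolver, not the library's parser), showing why a POS tag on the following word is needed to expand the contraction safely:

```js
// toy sketch: "spencer's" contracts 'is' before an adjective,
// but is a possessive before a noun - a POS tag decides which
function expand(phrase, posOfNextWord) {
  if (posOfNextWord === 'Adjective') {
    return phrase.replace(/'s\b/, ' is') // spencer's cool -> spencer is cool
  }
  return phrase // spencer's house stays possessive
}

console.log(expand("spencer's cool", 'Adjective')) // 'spencer is cool'
console.log(expand("spencer's house", 'Noun'))     // "spencer's house"
```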

See Also:

  •   naturalNode - fancier statistical nlp in javascript
  •   superScript - clever conversation engine in js
  •   nodeBox linguistics - conjugation, inflection in javascript
  •   reText - very impressive text utilities in javascript
  •   jsPos - javascript build of the time-tested Brill-tagger
  •   spaCy - speedy, multilingual tagger in C/python
  •   Prose - quick tagger in Go by Joseph Kato

MIT
