flair

A very simple framework for state-of-the-art NLP. Developed by Humboldt University of Berlin and friends.


Flair is:

  • A powerful NLP library. Flair allows you to apply our state-of-the-art natural language processing (NLP) models to your text, such as named entity recognition (NER), part-of-speech tagging (PoS), sense disambiguation and classification.

  • Multilingual. Thanks to the Flair community, we support a rapidly growing number of languages. We also now include 'one model, many languages' taggers, i.e. single models that predict PoS or NER tags for input text in various languages.

  • A text embedding library. Flair has simple interfaces that allow you to use and combine different word and document embeddings, including our proposed Flair embeddings, BERT embeddings and ELMo embeddings.

  • A PyTorch NLP framework. Our framework builds directly on PyTorch, making it easy to train your own models and experiment with new approaches using Flair embeddings and classes.
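Conceptually, combining embeddings amounts to concatenating the per-token vectors each embedder produces; in Flair itself this is what the `StackedEmbeddings` class does. A toy illustration of the stacking idea (pure Python for illustration, not Flair's own implementation):

```python
def stack_embeddings(*vectors):
    """Stacking simply concatenates the per-token vectors from each embedder."""
    return [x for vec in vectors for x in vec]

word_vec = [0.1, 0.2, 0.3]  # e.g. a 3-dim word embedding for one token
char_vec = [0.4, 0.5]       # e.g. a 2-dim character-level embedding for the same token
stacked = stack_embeddings(word_vec, char_vec)
print(len(stacked))  # 5
```

The stacked vector then feeds into downstream models exactly like a single embedding, just with a larger dimensionality.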

Now at version 0.5.1!

Comparison with State-of-the-Art

Flair outperforms the previous best methods on a range of NLP tasks:

| Task | Language | Dataset | Flair | Previous best |
| --- | --- | --- | --- | --- |
| Named Entity Recognition | English | CoNLL-03 | 93.18 (F1) | 92.22 (Peters et al., 2018) |
| Named Entity Recognition | English | OntoNotes | 89.3 (F1) | 86.28 (Chiu et al., 2016) |
| Emerging Entity Detection | English | WNUT-17 | 49.49 (F1) | 45.55 (Aguilar et al., 2018) |
| Part-of-Speech tagging | English | WSJ | 97.85 | 97.64 (Choi, 2016) |
| Chunking | English | CoNLL-2000 | 96.72 (F1) | 96.36 (Peters et al., 2017) |
| Named Entity Recognition | German | CoNLL-03 | 88.27 (F1) | 78.76 (Lample et al., 2016) |
| Named Entity Recognition | German | GermEval | 84.65 (F1) | 79.08 (Hänig et al., 2014) |
| Named Entity Recognition | Dutch | CoNLL-02 | 92.38 (F1) | 81.74 (Lample et al., 2016) |
| Named Entity Recognition | Polish | PolEval-2018 | 86.6 (F1) (Borchmann et al., 2018) | 85.1 (PolDeepNer) |
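The F1 numbers above are span-level scores: a predicted entity counts as correct only if both its boundaries and its type exactly match the gold annotation. A minimal sketch of that metric (a hypothetical helper for illustration, not Flair's own evaluation code):

```python
def span_f1(gold, pred):
    """Span-level F1: a prediction counts only on an exact (start, end, type) match."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# gold and predicted entities as (start_token, end_token, label) tuples
gold = [(0, 1, 'PER'), (4, 5, 'LOC')]
pred = [(0, 1, 'PER'), (4, 6, 'LOC')]  # second span has a boundary error, so it does not count
print(span_f1(gold, pred))  # 0.5
```

This strictness is why boundary errors hurt as much as label errors on these benchmarks.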

Here's how to reproduce these numbers using Flair. You can also find detailed evaluations and discussions in our papers.

Quick Start

Requirements and Installation

The project is based on PyTorch 1.1+ and Python 3.6+, because method signatures and type hints are beautiful. If you do not have Python 3.6, install it first. Here is how for Ubuntu 16.04. Then, in your favorite virtual environment, simply do:

```
pip install flair
```

Example Usage

Let's run named entity recognition (NER) over an example sentence. All you need to do is make a `Sentence`, load a pre-trained model and use it to predict tags for the sentence:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# make a sentence
sentence = Sentence('I love Berlin .')

# load the NER tagger
tagger = SequenceTagger.load('ner')

# run NER over sentence
tagger.predict(sentence)
```

Done! The `Sentence` now has entity annotations. Print the sentence to see what the tagger found.

```python
print(sentence)
print('The following NER tags are found:')

# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)
```

This should print:

```
Sentence: "I love Berlin ." - 4 Tokens
The following NER tags are found:
Span [3]: "Berlin" [− Labels: LOC (0.9992)]
```
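Under the hood, the tagger predicts one BIO tag per token (e.g. `B-LOC`, `I-LOC`, `O`), and `get_spans` merges consecutive tags into entity spans. Conceptually, the decoding step works like this (a simplified sketch, not Flair's actual implementation):

```python
def bio_to_spans(tokens, tags):
    """Merge per-token BIO tags (e.g. 'B-LOC', 'I-LOC', 'O') into (text, label) spans."""
    spans, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith('B-'):
            # a B- tag starts a new span, closing any open one
            if current:
                spans.append((' '.join(current), label))
            current, label = [token], tag[2:]
        elif tag.startswith('I-') and current and tag[2:] == label:
            # an I- tag with a matching label continues the open span
            current.append(token)
        else:
            # 'O' (or an inconsistent I- tag) closes the open span
            if current:
                spans.append((' '.join(current), label))
            current, label = [], None
    if current:
        spans.append((' '.join(current), label))
    return spans

print(bio_to_spans(['I', 'love', 'Berlin', '.'], ['O', 'O', 'B-LOC', 'O']))
# [('Berlin', 'LOC')]
```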

Tutorials

We provide a set of quick tutorials to get you started with the library.

The tutorials explain how the base NLP classes work, how you can load pre-trained models to tag your text, how you can embed your text with different word or document embeddings, and how you can train your own language models, sequence labeling models, and text classification models. Let us know if anything is unclear.

There are also good third-party articles and posts that illustrate how to use Flair:

- How to build a text classifier with Flair
- How to build a microservice with Flair and Flask
- A docker image for Flair
- Great overview of Flair functionality and how to use it in Colab
- Visualisation tool for highlighting the extracted entities
- Practical approach of State-of-the-Art Flair in Named Entity Recognition
- Benchmarking NER algorithms

Citing Flair

Please cite the following paper when using Flair:

```
@inproceedings{akbik2018coling,
  title     = {Contextual String Embeddings for Sequence Labeling},
  author    = {Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
  booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
  pages     = {1638--1649},
  year      = {2018}
}
```

If you use the pooled version of the Flair embeddings (PooledFlairEmbeddings), please cite:

```
@inproceedings{akbik2019naacl,
  title     = {Pooled Contextualized Embeddings for Named Entity Recognition},
  author    = {Akbik, Alan and Bergmann, Tanja and Vollgraf, Roland},
  booktitle = {{NAACL} 2019, 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics},
  pages     = {724--728},
  year      = {2019}
}
```

Contact

Please email your questions or comments to Alan Akbik.

Contributing

Thanks for your interest in contributing! There are many ways to get involved; start with our contributor guidelines and then check these open issues for specific tasks.

For contributors looking to get deeper into the API we suggest cloning the repository and checking out the unit tests for examples of how to call methods. Nearly all classes and methods are documented, so finding your way around the code should hopefully be easy.

Running unit tests locally

You need Pipenv for this:

```
pipenv install --dev && pipenv shell
pytest tests/
```

To run integration tests execute:

```
pytest --runintegration tests/
```

The integration tests train small models, which are then loaded and used for prediction.

To also run slow tests, such as loading and using the embeddings provided by flair, you should execute:

```
pytest --runslow tests/
```

License

Flair is licensed under the MIT license:

The MIT License (MIT)

Copyright © 2018 Zalando SE, https://tech.zalando.com

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
