Next-Generation full text search library for Browser and Node.js


FlexSearch v0.7.0 (Update)

The current state:

Read the documentation of new language encoding features:

Source Code v0.7.0-pre-alpha available here:

Web's fastest and most memory-flexible full-text search library with zero dependencies.

When it comes to raw search speed, FlexSearch outperforms every other search library and also provides flexible search capabilities such as multi-field search, phonetic transformations and partial matching. Depending on the options used, it also provides the most memory-efficient index. FlexSearch introduces a new scoring algorithm called "contextual index", based on a pre-scored lexical dictionary architecture, which actually performs queries up to 1,000,000 times faster compared to other libraries. FlexSearch also provides a non-blocking asynchronous processing model as well as web workers to perform any updates or queries on the index in parallel through dedicated balanced threads.

FlexSearch Server is available here:

Installation Guide  •  API Reference  •  Custom Builds  •  Flexsearch Server  •  Changelog

Supported Platforms:
- Browser
- Node.js

- Auto-Complete

Library Comparison:
- Benchmark "Gulliver's Travels"
- Relevance Scoring
- Memory Consumption

Plugins (external):

Get Latest (Stable Release):

| Build | File | CDN |
|---|---|---|
| flexsearch.min.js | Download | |
| flexsearch.light.js | Download | |
| flexsearch.compact.js | Download | |
| flexsearch.custom.js | Custom Build | |

All Features:

| Feature | flexsearch.min.js | flexsearch.compact.js | flexsearch.light.js |
|---|---|---|---|
| Presets | ✓ | ✓ | - |
| Async Search | ✓ | ✓ | - |
| Web-Workers (not available in Node.js) | ✓ | - | - |
| Contextual Indexes | ✓ | ✓ | ✓ |
| Index Documents (Field-Search) | ✓ | ✓ | - |
| Logical Operators | ✓ | ✓ | - |
| Where / Find / Tags | ✓ | - | - |
| Partial Matching | ✓ | ✓ | ✓ |
| Relevance Scoring | ✓ | ✓ | ✓ |
| Auto-Balanced Cache by Popularity | ✓ | - | - |
| Pagination | ✓ | - | - |
| Suggestions | ✓ | - | - |
| Phonetic Matching | ✓ | ✓ | - |
| Customizable: Matcher, Encoder, Tokenizer, Stemmer, Filter | ✓ | ✓ | ✓ |
| File Size (gzip) | 6.6 kb | 4.7 kb | 2.7 kb |

It is also pretty simple to make Custom Builds

Benchmark Ranking

Comparison: Benchmark "Gulliver's Travels"

Query Test: "Gulliver's Travels"
| Rank | Library Name | Library Version | Single Phrase (op/s) | Multi Phrase (op/s) | Not Found (op/s) |
|---|---|---|---|---|---|
| 1 | FlexSearch * | 0.3.6 | 363757 | 182603 | 1627219 |
| 2 | Wade | 0.3.3 | 899 | 6098 | 214286 |
| 3 | JS Search | 1.4.2 | 735 | 8889 | 800000 |
| 4 | JSii | 1.0 | 551 | 9970 | 75000 |
| 5 | Lunr.js | 2.3.5 | 355 | 1051 | 25000 |
| 6 | Elasticlunr.js | 0.9.6 | 327 | 781 | 6667 |
| 7 | BulkSearch | 0.1.3 | 265 | 535 | 2778 |
| 8 | bm25 | 0.2 | 71 | 116 | 2065 |
| 9 | Fuse | 3.3.0 | 0.5 | 0.4 | 0.7 |

* The preset "fast" was used for this test
<!-- * Index Size: The size of memory the index requires
** Memory Allocation: The amount of memory which was additionally allocated during a row of 10 queries
*** The preset "fast" was used for this test
**** The preset "memory" was used for this test -->

Run Comparison: Benchmark "Gulliver's Travels"

Contextual Search

Note: This feature is not enabled by default. Read here how to enable it.

FlexSearch introduces a new scoring mechanism called Contextual Search, which was invented by Thomas Wilkerling, the author of this library. A contextual search boosts queries to a completely new level but also requires some additional memory (depending on depth). The basic idea of this concept is to limit relevance by context instead of calculating relevance over the whole (unlimited) distance.<!--Imagine you add a text block of some sentences to an index ID. Assuming the query includes a combination of first and last word from this text block, are they really relevant to each other?--> In this way, contextual search also improves the results of relevance-based queries on large amounts of text data.

Lexical Pre-Scored Dictionary / Context-based Map

The index consists of an in-memory pre-scored dictionary as its base. The biggest complexity of this algorithm occurs during the calculation of intersections. As a consequence, each additional term in a query has significant potential to increase complexity. A contextual map comes into play when the query contains more than one term, and its effect increases with each additional term by cutting down the complexity of the intersection calculations. Instead of increasing, the complexity is lowered for each additional term. The contextual index itself is also based on a pre-scored dictionary and follows a memory-friendly strategy.

| Type | Complexity |
|---|---|
| Each single term query | 1 |
| Lexical Pre-Scored Dictionary (Solo) | TERM_COUNT * TERM_MATCHES |
| Lexical Pre-Scored Dictionary + Context-based Map | TERM_MATCHES / TERM_COUNT |

The complexity for one single term is always 1.
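The two structures described above can be pictured roughly as follows. This is a simplified sketch for illustration only; the object shapes are assumptions, not FlexSearch's actual internal layout:

```javascript
// Illustrative only: conceptual shapes of a pre-scored dictionary
// and a context-based map (not FlexSearch's real internals).

// Lexical pre-scored dictionary: term -> relevance buckets of ids.
var lexicalDictionary = {
    "gulliver": [[1, 4], [7]] // bucket 0 (most relevant): ids 1 and 4
};

// Context-based map: term -> neighboring term (within depth) -> ids.
var contextualMap = {
    "gulliver": { "travels": [[1]] }
};

// A two-term query can intersect a much smaller candidate set:
var candidates = contextualMap["gulliver"]["travels"][0];
console.log(candidates); // [1]
```

The point of the sketch: for a multi-term query, the contextual map already narrows the candidates to ids that contain both terms near each other, so the intersection work shrinks with each added term.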

Compare BulkSearch vs. FlexSearch

| | BulkSearch | FlexSearch |
|---|---|---|
| Access | Read-Write optimized index | Read-Memory optimized index |
| Memory | Large: ~ 1 Mb per 100,000 words | Tiny: ~ 100 Kb per 100,000 words |
| Index Type | Bulk of encoded string data divided into chunks | 1. Lexical pre-scored dictionary<br>2. Context-based map |
| Strengths | • fast adds<br>• fast updates<br>• fast removals | • fast queries<br>• memory-efficient index |
| Weaknesses | • less powerful contextual search<br>• less memory efficient (has to be defragmented from time to time) | • updating existing / deleting items from index is slow<br>• adding items to the index optimized for partial matching (tokenize: "forward" / "reverse" / "full") is slow |
| Pagination | Yes | No |
| Wildcards | Yes | No |

Keep in mind that updating or removing existing items from the index has a significant cost. When existing items of your index need to be updated or removed continuously, then BulkSearch may be a better choice.


HTML / Javascript

Use flexsearch.min.js for production and flexsearch.js for development.

```html
<script src="js/flexsearch.min.js"></script>
```


Use latest from CDN:


Or a specific version:



```js
var FlexSearch = require("./flexsearch.js");
```


npm install flexsearch

In your code include as follows:

var FlexSearch = require("flexsearch");

Or pass in options when requiring:

var index = require("flexsearch").create({/* options */});

API Overview

Global methods:
- FlexSearch.create(&lt;options&gt;)
- FlexSearch.registerMatcher({KEY: VALUE})
- FlexSearch.registerEncoder(name, encoder)
- FlexSearch.registerLanguage(lang, {stemmer:{}, filter:[]})
- FlexSearch.encode(name, string)

Index methods:
- Index.add(id, string)
-, &lt;limit&gt;, &lt;callback&gt;)
-;options&gt;)
- Index.update(id, string)
- Index.remove(id)
- Index.clear()
- Index.destroy()
- Index.init(&lt;options&gt;)
- Index.find()
- Index.where()
- Index.addMatcher({KEY: VALUE})
- Index.encode(string)
- Index.export()
- Index.import(string)

Index properties:
- Index.length
- Index.index


Create a new index


var index = new FlexSearch();

Alternatively, you can also use:

var index = FlexSearch.create();

Create a new index choosing one of the presets:

var index = new FlexSearch("speed");

Create a new index with custom options:

```js
var index = new FlexSearch({

    // default values:

    encode: "balance",
    tokenize: "forward",
    threshold: 0,
    async: false,
    worker: false,
    cache: false
});
```

Create a new index and extend a preset with custom options:

```js
var index = new FlexSearch("memory", {
    encode: "balance",
    tokenize: "forward",
    threshold: 0
});
```

See all available custom options.

Add text item to an index

Index.add(id, string)

index.add(10025, "John Doe");

Search items, &lt;limit&gt;, &lt;callback&gt;)

```js"John");
```

Limit the result:

```js"John", 10);
```

Async Search

Perform queries asynchronously:

```js"John", function(result){
    // array of results
});
```

Passing a callback always performs asynchronously, even if the "async" option was not set.

Perform queries asynchronously (Promise-based):

Make sure the option "async" is enabled on this instance to receive promises.

```js"John").then(function(result){
    // array of results
});
```

Alternatively ES6:

```js
async function search(query){

    const result = await;

    // ...
}
```

Custom Search

Pass custom options for each query:

```js{
    query: "John",
    limit: 1000,
    threshold: 5, // >= threshold
    depth: 3,     // <= depth
    callback: function(results){
        // ...
    }
});
```

The same as above could also be written as:

```js"John", {
    limit: 1000,
    threshold: 5,
    depth: 3
}, function(results){
    // ...
});
```


See all available custom search options.


Pagination

FlexSearch provides cursor-based pagination which is able to inject into the innermost search process. This enables many performance improvements.

The cursor implementation may change often. Just take the cursor as it is and do not expect any specific value or format.

To enable pagination, you have to pass a page field within the custom search object (optionally also a limit as the maximum items per page).

Get the first page of results:

```js
var response ="John Doe", {
    limit: 5,
    page: true
});
```

Whenever a page is passed within a custom search, the response has this format:

```js
{
    "page": "xxx:xxx",
    "next": "xxx:xxx",
    "result": []
}
```

  • page is the pointer to the current page
  • next is the pointer to the next page, or null when no pages are left
  • result yields the search results

Get the second (next) page of results:

```js"John Doe", {
    limit: 10,
    page: "xxx:xxx" // pass the "next" pointer from the previous response
});
```

The limit can be modified for each query.


Suggestions

Get suggestions for a query:

```js{
    query: "John Doe",
    suggest: true
});
```


When suggestion is enabled, all results will be filled up (until the limit, default 1000) with similar matches, ordered by relevance.

Phonetic suggestions are currently not supported; for that purpose use an encoder and tokenizer which provide similar functionality. Suggestions come into play when a query has multiple words/phrases. Assume a query contains 3 words. When the index matches just 2 of the 3 words, you would normally get no results, but with suggestions enabled you will also get results where 2 of 3 words matched, as well as where 1 of 3 words matched (depending on the limit), also sorted by relevance.

Note: It is planned to improve this feature and provide more flexibility.

Update item from an index

Index.update(id, string)

index.update(10025, "Road Runner");

Remove item from an index

Index.remove(id)

```js
index.remove(10025);
```

Reset index

```js
index.clear();
```

Destroy the index

```js
index.destroy();
```

Re-Initialize the index

Initialize (with the same options):

```js
index.init();
```

Initialize with new options:

```js
index.init({
    /* options */
});
```

Re-initialization will also destroy the old index.

Get Length

Get the length of an index:

var length = index.length;

Get Register

Get the index (register) of an instance:

var index = index.index;

The register has the format "@" + id.

Important: Do not modify it manually; treat it as read-only.

Add custom matcher

FlexSearch.registerMatcher({REGEX: REPLACE})

Add global matchers for all instances:

```js
FlexSearch.registerMatcher({
    'ä': 'a', // replaces all 'ä' to 'a'
    'ó': 'o',
    '[ûúù]': 'u' // replaces multiple
});
```

Add private matchers for a specific instance:

```js
index.addMatcher({
    'ä': 'a', // replaces all 'ä' to 'a'
    'ó': 'o',
    '[ûúù]': 'u' // replaces multiple
});
```
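The effect of such a matcher table can be reproduced in isolation. This is a standalone sketch of the regex-replacement idea, not FlexSearch's internal code:

```javascript
// Apply a matcher table {REGEX: REPLACE} to a string, mirroring the
// effect of the tables registered above (illustration only).
function applyMatchers(str, matchers){
    for(var key in matchers){
        str = str.replace(new RegExp(key, "g"), matchers[key]);
    }
    return str;
}

console.log(applyMatchers("tûrbó", { 'ä': 'a', 'ó': 'o', '[ûúù]': 'u' })); // "turbo"
```

Each key is compiled as a global regular expression, so a character-class key like `'[ûúù]'` normalizes several characters at once.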

Add custom encoder

Assign a custom encoder by passing a function during index creation/initialization:

```js
var index = new FlexSearch({

    encode: function(str){

        // do something with str ...

        return str;
    }
});
```

The encoder function gets a string as a parameter and has to return the modified string.

Call a custom encoder directly:

var encoded = index.encode("sample text");

Register a global encoder

FlexSearch.registerEncoder(name, encoder)

Global encoders can be shared/used by all instances.

```js
FlexSearch.registerEncoder("whitespace", function(str){

    return str.replace(/\s/g, "");
});
```

Initialize index and assign a global encoder:

var index = new FlexSearch({ encode: "whitespace" });

Call a global encoder directly:

var encoded = FlexSearch.encode("whitespace", "sample text");

Mix/Extend multiple encoders

```js
FlexSearch.registerEncoder('mixed', function(str){

    str = this.encode("icase", str); // built-in
    str = this.encode("whitespace", str); // custom

    // do something additional with str ...

    return str;
});
```

Add custom tokenizer

A tokenizer splits words into components or chunks.

Define a private custom tokenizer during creation/initialization:

```js
var index = new FlexSearch({

    tokenize: function(str){

        return str.split(/\s-\//g);
    }
});
```

The tokenizer function gets a string as a parameter and has to return an array of strings (parts).

Add language-specific stemmer and/or filter

Stemmer: several linguistic mutations of the same word (e.g. "run" and "running")

Filter: a blacklist of words to be filtered out from indexing at all (e.g. "and", "to" or "be")

Assign a private custom stemmer or filter during creation/initialization:

```js
var index = new FlexSearch({

    stemmer: {

        // object {key: replacement}
        "ational": "ate",
        "tional": "tion",
        "enci": "ence",
        "ing": ""
    },
    filter: [

        // array blacklist
    ]
});
```

Using a custom filter, e.g.:

```js
var index = new FlexSearch({

    filter: function(value){

        // just add values with length > 1 to the index

        return value.length > 1;
    }
});
```

Or assign stemmers/filters globally to a language:

Stemmers are passed as an object (key-value pairs), filters as an array.

```js
FlexSearch.registerLanguage("us", {

    stemmer: { /* ... */ },
    filter:  [ /* ... */ ]
});
```

Or use some pre-defined stemmer or filter of your preferred languages:


```html
<script src="js/flexsearch.min.js"></script>
<script src="js/lang/en.min.js"></script>
<script src="js/lang/de.min.js"></script>
```


Now you can assign built-in stemmers during creation/initialization:

```js
var index_en = new FlexSearch({ stemmer: "en", filter: "en" });

var index_de = new FlexSearch({ stemmer: "de", filter: [ /* custom */ ] });
```

In Node.js you just have to require the language pack files to make them available:


It is also possible to compile language packs into the build as follows:

node compile SUPPORT_LANG_EN=true SUPPORT_LANG_DE=true

Right-To-Left Support

Set the tokenizer at least to "reverse" or "full" when using RTL.

Just set the field "rtl" to true and use a compatible tokenizer:

```js
var index = FlexSearch.create({
    encode: "icase",
    tokenize: "reverse",
    rtl: true
});
```

CJK Word Break (Chinese, Japanese, Korean)

Set a custom tokenizer which fits your needs, e.g.:

```js
var index = FlexSearch.create({
    encode: false,
    tokenize: function(str){
        return str.replace(/[\x00-\x7F]/g, "").split("");
    }
});
```

You can also pass a custom encoder function to apply some linguistic transformations.

```js
index.add(0, "一个单词");
var results ="单词");
```
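The tokenizer from this example can be checked in isolation: it strips all ASCII characters and splits the remainder into single characters, which is what makes per-character CJK matching work:

```javascript
// Same logic as the custom CJK tokenizer above, standalone for illustration:
function cjkTokenize(str){
    return str.replace(/[\x00-\x7F]/g, "").split("");
}

console.log(cjkTokenize("abc一个单词")); // ["一", "个", "单", "词"]
```

Note that mixed content loses its ASCII part with this tokenizer; if you need both scripts indexed, a custom tokenizer would have to keep the ASCII words as separate tokens.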

Get info about an index

This feature is available in DEBUG mode.

```js;
```

Returns information, e.g.:

```js
{
    "id": 0,
    "memory": 10000,
    "items": 500,
    "sequences": 3000,
    "matchers": 0,
    "chars": 3500,
    "cache": false,
    "matcher": 0,
    "worker": false,
    "threshold": 7,
    "depth": 3,
    "contextual": true
}
```

Index Documents (Field-Search)

The Document Descriptor

Assume the document is an array of data like:

```js
var docs = [{
    id: 0,
    title: "Title",
    cat: "Category",
    content: "Body"
},{
    id: 1,
    title: "Title",
    cat: "Category",
    content: "Body"
}];
```

Provide a document descriptor doc when initializing a new index, e.g. related to the example above:

```js
var index = new FlexSearch({
    tokenize: "strict",
    depth: 3,
    doc: {
        id: "id",
        field: "content"
    }
});
```

The above example will just index the field "content"; to index multiple fields, pass an array:

```js
var index = new FlexSearch({
    doc: {
        id: "id",
        field: [
            "title",
            "cat",
            "content"
        ]
    }
});
```

You are also able to provide custom presets for each field separately:

```js
var index = new FlexSearch({
    doc: {
        id: "id",
        field: {
            title: {
                encode: "extra",
                tokenize: "reverse",
                threshold: 7
            },
            cat: {
                encode: false,
                tokenize: function(val){
                    return [val];
                }
            },
            content: "memory"
        }
    }
});
```

Complex Objects

Assume the document array looks more complex (has nested branches etc.), e.g.:

```js
var docs = [{
    id: 0,
    title: "Foo",
    body: {
        content: "Foobar"
    }
},{
    id: 1,
    title: "Bar",
    body: {
        content: "Foobar"
    }
}];
```

Then use the colon-separated notation "root:child:child" to define hierarchy within the document descriptor:

```js
var index = new FlexSearch({
    doc: {
        id: "data:id",
        field: [
            "data:body:content"
        ]
    }
});
```

Hint: This is an alternative for indexing documents which have nested arrays.

Add/Update/Remove Documents to/from the Index

Just pass the document array (or a single object) to the index:

```js
index.add(docs);
```
Update the index with a single object or an array of objects:

```js
index.update({
    id: 0,
    title: "Foo",
    body: {
        content: "Bar"
    }
});
```

Remove a single object or an array of objects from the index:

```js
index.remove(docs);
```

When the id is known, you can also simply remove by id (faster):

```js
index.remove(0);
```

The search gives you several options when using documents.

The colon notation also has to be applied when searching, respectively.

This will search through all indexed fields:

```js
var results ="body");
```

This will search on a specific field:

```js
var results ={
    field: "title",
    query: "foobar"
});

var results ={
    field: "data:body:content",
    query: "foobar"
});
```

This could also be written as:

```js
var results ="foobar", {
    field: "data:body:content"
});
```

Search the same query on multiple fields:

Use bool as a logical operator when searching through multiple fields. The default operator when not set is "or".

```js
var results ={
    query: "foobar",
    field: ["title", "body"],
    bool: "or"
});
```

Could also be written as:

```js
var results ="foobar", {
    field: ["title", "body"],
    bool: "or"
});
```

Search different queries on multiple fields:

```js
var results =[{
    field: "title",
    query: "foo"
},{
    field: "body",
    query: "bar"
}]);
```

Sort by a custom function:

```js
var results ="John", {

    limit: 100,
    sort: function(a, b){
        return ( < ? -1 :
               ( > ? 1 : 0;
    }
});
```


Chaining

Simply chain methods like:

var index = FlexSearch.create()
                      .addMatcher({'â': 'a'})
                      .add(0, 'foo')
                      .add(1, 'bar');
index.remove(0).update(1, 'foo').add(2, 'foobar');

Enable Contextual Scoring

Create an index and just set the limit of relevance as "depth":

```js
var index = new FlexSearch({

    encode: "icase",
    tokenize: "strict",
    threshold: 7,
    depth: 3
});
```

Only the tokenizer "strict" is currently supported by the contextual index.

The contextual index requires an additional amount of memory depending on depth.

Try to use the lowest depth and highest threshold which fits your needs.

It is possible to modify the values for threshold and depth during search (see custom search). The restriction is that the threshold can only be raised, while the depth can only be lowered.

Enable Auto-Balanced Cache

Create an index and just set a limit of cache entries:

```js
var index = new FlexSearch({

    profile: "score",
    cache: 1000
});
```

When passing a number as a limit, the cache automatically balances stored entries according to their popularity.

When just using "true", the cache is unbounded and actually performs 2-3 times faster (because the balancer does not have to run).
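The idea of balancing by popularity can be sketched as follows. This is a hypothetical minimal cache for illustration; it is not FlexSearch's implementation, just the general technique of evicting the least popular entry when the limit is reached:

```javascript
// Minimal popularity-balanced cache sketch: when full, evict the
// entry with the fewest hits (illustration only, not FlexSearch code).
function BalancedCache(limit){
    this.limit = limit;
    this.entries = new Map(); // key -> { value, hits }
}

BalancedCache.prototype.get = function(key){
    var entry = this.entries.get(key);
    if(entry){
        entry.hits++; // every read raises popularity
        return entry.value;
    }
};

BalancedCache.prototype.set = function(key, value){
    if(!this.entries.has(key) && this.entries.size >= this.limit){
        // find and evict the least popular entry
        var minKey, minHits = Infinity;
        this.entries.forEach(function(entry, k){
            if(entry.hits < minHits){
                minHits = entry.hits;
                minKey = k;
            }
        });
        this.entries.delete(minKey);
    }
    this.entries.set(key, { value: value, hits: 0 });
};
```

The balancing work in `set` is what an unbounded (`cache: true`) configuration avoids, which matches the note above about the unbounded cache being faster.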

Web-Worker (Browser only)

Workers get their own dedicated memory and run in their own dedicated threads without blocking the UI while processing. Especially for larger indexes, web workers improve speed and available memory a lot. The FlexSearch index was tested with a 250 Mb text file including 10 million words. <!--The indexing was done silently in background by multiple parallel running workers in about 7 minutes. The final index reserves ~ 8.2 Mb memory/space. The search result took ~ 0.25 ms.-->

When the index isn't big enough, it is faster to use no web worker.

Create an index and just set the count of parallel threads:

```js
var index = new FlexSearch({

    encode: "icase",
    tokenize: "full",
    async: true,
    worker: 4
});
```

Add items to the worker index as usual (async enabled):

```js
index.add(10025, "John Doe");
```

Perform a search and simply pass in a callback:

```js"John Doe", function(results){

    // do something with array of results
});
```

Or use promises accordingly:

```js"John Doe").then(function(results){

    // do something with array of results
});
```


FlexSearch is highly customizable. Making use of the right options can really improve your results as well as memory economy and query time.

Initialize Index

| Option | Values | Description |
|---|---|---|
| profile | "memory", "speed", "match", "score", "balance", "fast" | The configuration profile. Choose your preferred preset. |
| tokenize | "strict", "forward", "reverse", "full", function | The indexing mode (tokenizer). Choose one of the built-ins or pass a custom tokenizer function. |
| split | string, RegExp | The rule to split words when using a non-custom tokenizer (built-ins, e.g. "forward"). Use a string/char or a regular expression (default: /\W+/). |
| encode | false, "icase", "simple", "advanced", "extra", "balance", function | The encoding type. Choose one of the built-ins or pass a custom encoding function. |
| cache | true, false, number | Enable/disable and/or set the capacity of cached entries. When passing a number as a limit, the cache automatically balances stored entries according to their popularity. Note: When just using "true", the cache has no limits and is actually 2-3 times faster (because the balancer does not have to run). |
| async | true, false | Enable/disable asynchronous processing. Each job will be queued for non-blocking processing. Recommended when using web workers. |
| worker | false, number | Enable/disable and set the count of running worker threads. |
| depth | false, number | Enable/disable contextual indexing and set the contextual distance of relevance. Depth is the maximum number of words/tokens away a term may be to still be considered relevant. |
| threshold | false, number | Enable/disable the threshold of minimum relevance all results should have. Note: It is also possible to set a lower threshold for indexing and pass a higher value when calling |
| resolution | number | Sets the scoring resolution (default: 9). |
| stemmer | false, string, object | Disable, or pass in a language shorthand flag (ISO-3166) or a custom object. |
| filter | false, string, Array | Disable, or pass in a language shorthand flag (ISO-3166) or a custom array. |
| rtl | true, false | Enables Right-To-Left encoding. |

Custom Search

| Option | Values | Description |
|---|---|---|
| limit | number | Sets the limit of results. |
| suggest | true, false | Enables suggestions in results. |
| where | object | Use a where-clause for non-indexed fields. |
| field | string, Array&lt;string&gt; | Sets the document fields which should be searched. When no field is set, all fields will be searched. Custom options per field are also supported. |
| bool | "and", "or" | Sets the logical operator used when searching through multiple fields. |
| page | true, false, cursor | Enables paginated results. |

You can also override the following index settings via custom search (v0.7.0):

  • encode
  • split
  • tokenize
  • threshold
  • cache
  • async

Custom-Search options will override index options.

Field-Search (v0.7.0)

| Option | Values | Description |
|---|---|---|
| limit | number | Sets the limit of results per field. |
| suggest | true, false | Enables suggestions in results per field. |
| bool | "and", "or", "not" | Sets the logical operator used when searching through multiple fields. |
| boost | number | Enables boosting of fields. |

You can also override the following index settings per field via custom field-search:

  • encode
  • split
  • tokenize
  • threshold

Field-Search options will override custom-search options and index options.

Depth, Threshold, Resolution?

Whereas depth is the minimum relevance for the contextual index, threshold is the minimum relevance for the lexical index. The threshold score is an enhanced variation of a conventional scoring calculation; it is based on document distance and partial distance instead of TF-IDF. The final scoring value is based on 3 kinds of distance.

Resolution, on the other hand, specifies the maximum scoring value. The final score is an integer value, so the resolution affects how many segments the score may have. When the resolution is 1, there exists just one scoring level for all matched terms. To get more differentiated results you need to raise the resolution.

The difference between the two affects performance at higher values (complexity = resolution - threshold).

The combination of resolution and threshold gives you good control over your matches as well as performance. E.g., when the resolution is 25 and the threshold is 22, the result only contains matches which are super relevant. The goal should always be to have only items in the result which are really needed. On top, that also improves performance a lot.


Tokenizer

The tokenizer affects the required memory as well as query time and the flexibility of partial matches. Try to choose the uppermost of these tokenizers which fits your needs:

| Option | Description | Example | Memory Factor (n = length of word) |
|---|---|---|---|
| "strict" | index whole words | foobar | * 1 |
| "forward" | incrementally index words in forward direction | foobar | * n |
| "reverse" | incrementally index words in both directions | foobar | * 2n - 1 |
| "full" | index every possible combination | foobar | * n * (n - 1) |
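The memory factors above follow directly from how many partials each mode stores per word. A standalone sketch of the "forward" expansion (illustrative only, not FlexSearch's internal code):

```javascript
// Conceptually, "forward" stores every prefix of a word, giving
// n entries for a word of length n (hence the * n memory factor).
function forwardPartials(word){
    var parts = [];
    for(var i = 1; i <= word.length; i++){
        parts.push(word.substring(0, i)); // "f", "fo", "foo", ...
    }
    return parts;
}

console.log(forwardPartials("foo")); // ["f", "fo", "foo"]
```

"reverse" additionally stores the suffixes (2n - 1 entries in total), and "full" stores every substring, which is where the roughly quadratic factor comes from.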


Encoders

Encoding affects the required memory as well as query time and phonetic matches. Try to choose the uppermost of these encoders which fits your needs, or pass in a custom encoder:

| Option | Description | False-Positives | Compression |
|---|---|---|---|
| false | Turn off encoding | no | no |
| "icase" (default) | Case-insensitive encoding | no | no |
| "simple" | Phonetic normalizations | no | ~ 7% |
| "advanced" | Phonetic normalizations + literal transformations | no | ~ 35% |
| "extra" | Phonetic normalizations + Soundex transformations | yes | ~ 60% |
| function() | Pass a custom encoder via function(string):string | | |

Encoder Matching Comparison

Reference string: "Björn-Phillipp Mayer"

| Query | icase | simple | advanced | extra |
|---|---|---|---|---|
| björn | yes | yes | yes | yes |
| björ | yes | yes | yes | yes |
| bjorn | no | yes | yes | yes |
| bjoern | no | no | yes | yes |
| philipp | no | no | yes | yes |
| filip | no | no | yes | yes |
| björnphillip | no | yes | yes | yes |
| meier | no | no | yes | yes |
| björn meier | no | no | yes | yes |
| meier fhilip | no | no | yes | yes |
| byorn mair | no | no | no | yes |
| (false positives) | no | no | no | yes |

Memory Usage

The required memory for the index depends on several options:

| Encoding | Memory usage per ~ 100,000 indexed words |
|---|---|
| false | 260 kb |
| "icase" (default) | 210 kb |
| "simple" | 190 kb |
| "advanced" | 150 kb |
| "extra" | 90 kb |

| Mode | Multiplied with (n = average length of indexed words) |
|---|---|
| "strict" | * 1 |
| "forward" | * n |
| "reverse" | * 2n - 1 |
| "full" | * n * (n - 1) |

| Contextual Index | Multiply the sum above with |
|---|---|
| | * (depth * 2 + 1) |

Adding, removing or updating existing items has a similar complexity. The contextual index grows exponentially, which is why it is currently just supported for the tokenizer "strict".
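Following the tables above, a rough size estimate can be computed. This is plain arithmetic based on the documented factors; real numbers vary with content:

```javascript
// Rough index-size estimate in kb from the documented factors:
// (words / 100,000) * base kb * tokenizer factor, then multiplied
// by (depth * 2 + 1) when a contextual index is enabled.
function estimateKb(words, kbPer100k, tokenizeFactor, depth){
    var base = (words / 100000) * kbPer100k * tokenizeFactor;
    return depth ? base * (depth * 2 + 1) : base;
}

// e.g. 200,000 words, "icase" (210 kb), "strict" (* 1), depth 3:
console.log(estimateKb(200000, 210, 1, 3)); // 2940
```

The example makes the exponential-growth warning concrete: the same index without a contextual index would be 420 kb, while depth 3 multiplies it by 7.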

Compare Memory Consumption

The book "Gulliver's Travels" (Swift Jonathan 1726) was used for this test.


Presets

You can pass a preset during creation/initialization. They represent the following settings:

"default": Standard profile

```js
{
    encode: "icase",
    tokenize: "forward",
    resolution: 9
}
```

"memory": Memory-optimized profile

```js
{
    encode: "extra",
    tokenize: "strict",
    threshold: 0,
    resolution: 1
}
```

"speed": Speed-optimized profile

```js
{
    encode: "icase",
    tokenize: "strict",
    threshold: 1,
    resolution: 3,
    depth: 2
}
```

"match": Matching-tolerant profile

```js
{
    encode: "extra",
    tokenize: "full",
    threshold: 1,
    resolution: 3
}
```

"score": Relevance-optimized profile

```js
{
    encode: "extra",
    tokenize: "strict",
    threshold: 1,
    resolution: 9,
    depth: 4
}
```

"balance": Most-balanced profile

```js
{
    encode: "balance",
    tokenize: "strict",
    threshold: 0,
    resolution: 3,
    depth: 3
}
```

"fast": Absolute fastest profile

```js
{
    encode: "icase",
    threshold: 8,
    resolution: 9,
    depth: 1
}
```

Compare these presets:
- Relevance Scoring
- Benchmarks

Performance Guide

Methods to retrieve results, sorted from fastest to slowest:

  1. index.find(id) -> doc
  2. index.where({field: string}) -> Array (with a tag on the same field)
  3. -> Array (when just adding id and content to the index, no documents)
  4. -> Array (when using documents)
  5., { where }) -> Array (when using documents and a where-clause)
  6. index.where({field: [string, string]}) -> Array (when a tag was set to one of two fields)
  7. index.where({field: string}) -> Array (when no tag was set on this field)

Methods to change index from fastest to slowest:

  1. index.add(id, string)
  2. index.add(docs)
  3. index.delete(id, string)
  4. index.delete(docs)
  5. index.update(id, string)
  6. index.update(docs)

Performance Checklist:

  • Using just id-content pairs for the index performs notably faster than using docs
  • An additional where-clause in, { where }) has a significant cost
  • When adding multiple fields of documents to the index, try to set the lowest possible preset for each field separately
  • Make sure the auto-balanced cache is enabled and has a meaningful value
  • Using index.where() to find documents is very slow when not using a tagged field
  • Getting a document by ID via index.find(id) is extremely fast
  • Do not enable async or worker when the index does not require it
  • Use numeric IDs (the datatype length of IDs influences memory consumption significantly)
  • Try to enable the contextual index by setting the depth to a minimum meaningful value and the tokenizer to "strict"
  • Pass a limit when searching (lower values perform better)
  • Pass a minimum threshold when searching (higher values perform better)
  • Try to minify the content size of indexed documents by adding just the attributes you really need to get back from results

Best Practices

Split Complexity

Whenever you can, try to divide content by categories and add each to its own index, e.g.:

var action = new FlexSearch();
var adventure = new FlexSearch();
var comedy = new FlexSearch();

This way you can also provide different settings for each category. This is actually the fastest way to perform a fuzzy search.

To make this workaround more extendable you can use a short helper:

```js
var index = {};

function add(id, cat, content){
    (index[cat] || (index[cat] = new FlexSearch())).add(id, content);
}

function search(cat, query){
    return index[cat] ? index[cat].search(query) : [];
}
```

Add content to the index:

add(1, "action", "Movie Title");
add(2, "adventure", "Movie Title");
add(3, "comedy", "Movie Title");

Perform queries:

var results = search("action", "movie title"); // --> [1]

Splitting indexes by category improves performance significantly.

Use numeric IDs

It is recommended to use numeric id values as references when adding content to the index. The byte length of the passed ids influences memory consumption significantly. If this is not possible, you should consider using an index table to map ids to indexes; this becomes especially important when using contextual indexes on a large amount of content.
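One way to follow this advice when your natural keys are strings is a small id table. This is a hypothetical helper for illustration, not part of FlexSearch:

```javascript
// Map arbitrary string keys to compact numeric ids before indexing.
var keys = [];

function numericId(key){
    var i = keys.indexOf(key);
    return i >= 0 ? i : keys.push(key) - 1; // push returns the new length
}

console.log(numericId("user:abc")); // 0
console.log(numericId("user:def")); // 1
console.log(numericId("user:abc")); // 0 (same key -> same id)
```

You would then call something like `index.add(numericId(key), content)`, and `keys[id]` recovers the original key from a search result. For large key sets, a Map lookup would be preferable to `indexOf`.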

Export/Import Index

index.export() returns a serialized dump as a string.

index.import(string) takes a serialized dump as a string and loads it into the index.

Assuming you have one or several indexes:

var feeds_2017 = new FlexSearch();
var feeds_2018 = new FlexSearch();
var feeds_2019 = new FlexSearch();

Export indexes, e.g. to the local storage:

localStorage.setItem("feeds_2017", feeds_2017.export());
localStorage.setItem("feeds_2018", feeds_2018.export());
localStorage.setItem("feeds_2019", feeds_2019.export());

Import indexes, e.g. from the local storage:

```js
feeds_2017.import(localStorage.getItem("feeds_2017"));
feeds_2018.import(localStorage.getItem("feeds_2018"));
feeds_2019.import(localStorage.getItem("feeds_2019"));
```
Disable Serialization

Pass a config flag to control serialization:

var data = index.export({serialize: false});

Use the same config accordingly when importing data:

index.import(data, {serialize: false});

Separate Exports (Documents)

Pass a config flag to separate index and documents from the export:

var idx = index.export({index: true, doc: false});
var docs = index.export({index: false, doc: true});

Use the same config accordingly when importing data:

index.import(idx, {index: true, doc: false});
index.import(docs, {index: false, doc: true});

You can also pass documents as external data. Assuming data is an array of documents:

```js
index.import(idx, {index: true, doc: data});
```


Debug

Do not use DEBUG in production builds.

If you run into issues, you can temporarily set the DEBUG flag to true at the top of flexsearch.js:

```js
DEBUG = true;
```

This enables console logging of several processes. Just open the browser's console to make this information visible.

Profiler Stats

Do not use PROFILER in production builds.

To collect some performance statistics of your indexes, you need to temporarily set the PROFILER flag to true at the top of flexsearch.js:

```js
PROFILER = true;
```

This enables profiling of several processes.

An array of all profiles is available on:

You can also just open the browser's console and enter this line to get stats.

The index of the array corresponds to the

Get stats from a specific index:

The returned stats payload is divided into several categories. Each category provides its own statistic values.

Profiler Stats Properties

| Property | Description |
|---|---|
| time | The sum of time (ms) the process takes (lower is better) |
| count | How often the process was called |
| ops | Average operations per second (higher is better) |
| nano | Average cost (ns) per operation/call (lower is better) |

Custom Builds

Full Build:

npm run build

Compact Build:

npm run build-compact

Light Build:

npm run build-light

Build Language Packs:

npm run build-lang

Custom Build:

npm run build-custom SUPPORT_WORKER=true SUPPORT_ASYNC=true

On custom builds each build flag will be set to false by default.

Alternatively you can also use:

node compile SUPPORT_WORKER=true

The custom build will be saved to flexsearch.custom.xxxxx.js (the "xxxxx" is a hash based on the used build flags).

Supported Build Flags

| Flag | Values |
|---|---|
| DEBUG | true, false |
| PROFILER | true, false |
| SUPPORT_WHERE | true, false |
| SUPPORT_WORKER | true, false |
| SUPPORT_CACHE | true, false |
| SUPPORT_ASYNC | true, false |
| SUPPORT_PRESET | true, false |
| SUPPORT_INFO | true, false |

Language Packs

| Flag | Values |
|---|---|
| SUPPORT_LANG_EN | true, false |
| SUPPORT_LANG_DE | true, false |

Compiler Flags



Feel free to contribute to this project, and also feel free to contact me when you have any questions.


Copyright 2019 Nextapps GmbH
Released under the Apache 2.0 License
