A Rust client for the ElasticSearch REST API



An ElasticSearch client for Rust via the REST API. Targeting ElasticSearch 2.0 and higher.

Other clients

For later versions of ElasticSearch you probably want the official client.


Full documentation for rs-es is available.


Building and installation


Recent releases have been tested with the prevailing "stable", "beta" and "nightly" versions of rustc at the time of their release. Earlier versions of rustc may also work, however some dependencies may use or require language features that are only available in recent versions of rustc.

Available from crates.io.

ElasticSearch compatibility

The default version of ElasticSearch supported is 2.0. Higher versions will also work as long as the particular part of the ES API being used is compatible with the version 2 spec.

Newer versions of ElasticSearch have introduced incompatibilities in some areas; those areas are not supported by this library.

However, later versions add experimental support for ES 5 behind a feature flag. The intention is that this support will become more complete over time and will become the new baseline supported version.

Design goals

There are two primary goals: 1) to be a full implementation of the ElasticSearch REST API, and 2) to be idiomatic with both ElasticSearch and Rust conventions.

The second goal is more difficult to achieve than the first, as the two sets of conventions sometimes conflict. A small example of this is the word `type`: it refers to the type of an ElasticSearch document, but it is also a reserved word for defining types in Rust. This means we cannot name a field `type`, for instance, so in this library the document type is always referred to as `doc_type`.

Usage guide

The client


The `Client` wraps a single HTTP connection to a specified ElasticSearch host/port.

(At present there is no connection pooling; each client has one connection. If you need multiple connections you will need multiple clients. This may change in the future.)

use rs_es::Client;

let mut client = Client::init("http://localhost:9200").expect("connection failed");



The `Client` provides various operations, which are analogous to the various ElasticSearch APIs.

In each case the `Client` has a function which returns a builder-pattern object that allows additional options to be set. The function itself takes the mandatory parameters; everything else is set on the builder (e.g. operations that require an index to be specified take the index as a parameter on the function itself).
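The shape of these builders can be sketched in plain Rust. This is illustrative only: the `GetOperation` name, fields, and `send` behaviour below are stand-ins, not the library's actual types.

```rust
// Illustrative sketch of the builder pattern described above: mandatory
// parameters (index, id) are taken by the constructor, optional ones
// (doc_type, routing) are set through `with_` methods on the builder.
struct GetOperation {
    index: String,
    id: String,
    doc_type: Option<String>,
    routing: Option<String>,
}

impl GetOperation {
    fn new(index: &str, id: &str) -> GetOperation {
        GetOperation {
            index: index.to_owned(),
            id: id.to_owned(),
            doc_type: None,
            routing: None,
        }
    }

    fn with_doc_type(mut self, doc_type: &str) -> Self {
        self.doc_type = Some(doc_type.to_owned());
        self
    }

    fn with_routing(mut self, routing: &str) -> Self {
        self.routing = Some(routing.to_owned());
        self
    }

    // In the real library `send` performs the HTTP request; here it just
    // renders the URL that would be requested.
    fn send(&self) -> String {
        let doc_type = self.doc_type.as_deref().unwrap_or("_all");
        let mut url = format!("/{}/{}/{}", self.index, doc_type, self.id);
        if let Some(ref routing) = self.routing {
            url.push_str(&format!("?routing={}", routing));
        }
        url
    }
}

fn main() {
    let url = GetOperation::new("index_name", "1")
        .with_doc_type("type_name")
        .with_routing("user123")
        .send();
    println!("{}", url); // /index_name/type_name/1?routing=user123
}
```

The pay-off of this style is that each operation's mandatory arguments are enforced at compile time, while the long tail of optional ES parameters stays out of the function signature.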

An example of an optional parameter is routing, which can be set on operations that support it.


See the ElasticSearch guide for the full set of options and what they mean.


index

An implementation of the Index API.

let index_op = client.index("index_name", "type_name");

Returned is a builder object that can be used to add additional options, for example to set an ID or a TTL.

The document to be indexed has to implement the `Serialize` trait from the `serde` library. This can be achieved by either implementing or deriving that trait on a custom type, or by manually creating a `serde_json::Value`.


Calling `send` submits the index operation and returns the result.


get

An implementation of the Get API.

Index and ID are mandatory, but type is optional. Some examples:

// Finds a document of any type with the given ID
let result_1 = client.get("index_name", "ID_VALUE").send();

// Finds a document of a specific type with the given ID
let result_2 = client.get("index_name", "ID_VALUE").with_doc_type("type_name").send();


delete

An implementation of the Delete API.

Index, type and ID are mandatory.

let result = client.delete("index_name", "type_name", "ID_VALUE").send();


refresh

Sends a refresh request.

use rs_es::Client;

let mut client = Client::init("http://localhost:9200").expect("connection failed");

// To everything
let result = client.refresh().send();

// To specific indexes
let result = client.refresh().with_indexes(&["index_name", "other_index_name"]).send();


search_uri

An implementation of the Search API using query strings.


use rs_es::Client;

let mut client = Client::init("http://localhost:9200").expect("connection failed");
let result = client.search_uri()
                   .with_indexes(&["index_name"])
                   .with_query("field:value")
                   .send::();


search_query

An implementation of the Search API using the Query DSL.

use rs_es::Client;
use rs_es::query::Query;

let mut client = Client::init("http://localhost:9200").expect("connection failed");
let result = client.search_query()
                   .with_indexes(&["index_name"])
                   .with_query(&Query::build_match("field", "value").build())
                   .send::();

A search query also supports scan and scroll, sorting, and aggregations.


count_uri

An implementation of the Count API using query strings.


use rs_es::Client;

let mut client = Client::init("http://localhost:9200").expect("connection failed");
let result = client.count_uri()
                   .with_indexes(&["index_name"])
                   .with_query("field:value")
                   .send();


count_query

An implementation of the Count API using the Query DSL.

use rs_es::Client;
use rs_es::query::Query;

let mut client = Client::init("http://localhost:9200").expect("connection failed");
let result = client.count_query()
                   .with_indexes(&["index_name"])
                   .with_query(&Query::build_match("field", "value").build())
                   .send();


bulk

An implementation of the Bulk API. This is the preferred way of indexing (or, once Delete-by-Query is removed, deleting) many documents.

use rs_es::operations::bulk::Action;

let result = client.bulk(&vec![Action::index(document1),
                               Action::index(document2).with_id("id")]);

In this case the document can be anything that implements `Serialize`.

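For background, the Bulk REST API itself takes a newline-delimited body: an action/metadata JSON line, followed (for index actions) by the document source on the next line. The sketch below illustrates that wire format only; `bulk_body` is not part of rs-es, and the JSON strings are hand-built for brevity where the real library serializes documents for you.

```rust
// Sketch of the newline-delimited body the ElasticSearch Bulk API expects:
// an action line, then (for `index`) the document source on the next line.
fn bulk_body(docs: &[(&str, Option<&str>)]) -> String {
    let mut body = String::new();
    for &(source, id) in docs {
        match id {
            // Action line carrying an explicit document ID.
            Some(id) => body.push_str(&format!(
                "{{\"index\":{{\"_id\":\"{}\"}}}}\n", id)),
            // No ID: ElasticSearch will generate one.
            None => body.push_str("{\"index\":{}}\n"),
        }
        body.push_str(source);
        body.push('\n');
    }
    body
}

fn main() {
    let body = bulk_body(&[
        ("{\"str_field\":\"a\"}", None),
        ("{\"str_field\":\"b\"}", Some("id")),
    ]);
    print!("{}", body);
}
```

Batching many documents into one such body is what makes bulk indexing much cheaper than issuing one HTTP request per document.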


Sorting

Sorting is supported on all forms of search (by query or by URI), and related operations (e.g. scan and scroll).

use rs_es::Client;
use rs_es::query::Query;
use rs_es::operations::search::{Order, Sort, SortBy, SortField};

let mut client = Client::init("http://localhost:9200").expect("connection failed");
let result = client.search_query()
                   .with_query(&Query::build_match_all().build())
                   .with_sort(&Sort::new(vec![
                       SortBy::Field(SortField::new("fieldname", Some(Order::Desc)))
                   ]))
                   .send::();

This is quite unwieldy for simple cases, although it does support the more exotic combinations that ElasticSearch allows; so there are also a number of convenience functions for the simpler cases, e.g. sorting by a field in ascending order:

// Omitted the rest of the query


Results

Each of the operations defined above returns a result; specifically, a struct that is a direct mapping to the JSON that ElasticSearch returns.

One of the most common return types is that from the search operations; this too mirrors the JSON that ElasticSearch returns. The top-level contains two fields: `shards`, which returns counts of successful/failed operations per shard, and `hits`, which contains the search results. These results are in the form of another struct with two fields: `total`, the total number of matching results; and `hits`, a vector of the individual results.

The individual results contain meta-data for each hit (such as the score) as well as the source document (unless the query set the various options which would disable or alter this).

The type of the source document can be anything that implements `Deserialize`. An ElasticSearch search may return many different types of document, and it also doesn't (by default) enforce any schema; together this means the structure of a returned document may need to be validated before being deserialised. In such cases a search can return a raw `serde_json::Value` from which data can be extracted and/or converted to other structures.
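The described shape can be sketched as plain Rust structs. This is illustrative only: the real library's types carry more fields (shard counts, scores, metadata) and derive their deserialization via serde.

```rust
// Illustrative sketch of the search-result shape described above:
// a top level with `shards` and `hits`, where `hits` in turn holds
// `total` and a vector of individual hits.
#[derive(Debug)]
struct Shards {
    successful: u32,
    failed: u32,
}

#[derive(Debug)]
struct Hit<S> {
    score: Option<f64>,
    source: Option<S>, // absent if the query disabled source retrieval
}

#[derive(Debug)]
struct Hits<S> {
    total: u64,        // total number of matching results
    hits: Vec<Hit<S>>, // individual results
}

#[derive(Debug)]
struct SearchResult<S> {
    shards: Shards,
    hits: Hits<S>,
}

fn main() {
    // A hand-built result standing in for a deserialized response.
    let result: SearchResult<String> = SearchResult {
        shards: Shards { successful: 5, failed: 0 },
        hits: Hits {
            total: 1,
            hits: vec![Hit { score: Some(1.0), source: Some("doc".to_owned()) }],
        },
    };
    println!("total hits: {}", result.hits.total); // total hits: 1
}
```

Making the structs generic over the source type `S` is what lets the same result machinery serve both strongly-typed documents and raw JSON values.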

The Query DSL

ElasticSearch offers a rich DSL for searches. It is JSON based, and therefore very easy to use and compose from a dynamic language (e.g. Ruby); but in Rust, a statically-typed language, things are different. The `query` module defines a set of builder objects which can be similarly composed to the same ends.

For example:

use rs_es::query::Query;

let query = Query::build_bool()
                .with_must(vec![Query::build_term("field_a", "value").build(),
                                Query::build_range("field_b")
                                    .with_gte(5)
                                    .with_lt(10)
                                    .build()])
                .build();

The resulting `Query` value can be used in the various search/query functions exposed by the client.

The implementation makes much use of conversion traits, which are used to keep a lid on the verbosity of such a builder pattern.
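The general idea can be illustrated in plain Rust. The `JsonVal` and `RangeQuery` types below are stand-ins, not the library's actual types: a builder method accepts anything implementing `Into<JsonVal>`, so callers can pass a `&str` or an `i64` directly instead of wrapping values themselves.

```rust
// Illustration of conversion traits keeping a builder terse: `with_gte`
// accepts any `V: Into<JsonVal>`, so `.with_gte(5i64)` and
// `.with_lt("ten")` both work without manual wrapping.
#[derive(Debug, PartialEq)]
enum JsonVal {
    Str(String),
    Int(i64),
}

impl From<&str> for JsonVal {
    fn from(s: &str) -> JsonVal { JsonVal::Str(s.to_owned()) }
}

impl From<i64> for JsonVal {
    fn from(i: i64) -> JsonVal { JsonVal::Int(i) }
}

#[derive(Debug, Default)]
struct RangeQuery {
    gte: Option<JsonVal>,
    lt: Option<JsonVal>,
}

impl RangeQuery {
    fn with_gte<V: Into<JsonVal>>(mut self, v: V) -> Self {
        self.gte = Some(v.into());
        self
    }

    fn with_lt<V: Into<JsonVal>>(mut self, v: V) -> Self {
        self.lt = Some(v.into());
        self
    }
}

fn main() {
    // Integers and string slices both convert via `Into`.
    let q = RangeQuery::default().with_gte(5i64).with_lt("ten");
    println!("{:?}", q);
}
```

Without the `Into` bound, every call site would have to spell out the wrapping (`with_gte(JsonVal::Int(5))`), which is exactly the verbosity the conversion traits avoid.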

Scan and scroll

When loading a large result set from an ElasticSearch query, the most efficient way is to use scan and scroll. This is preferable to simple pagination (e.g. setting the `from` option in a search) because it keeps resources open server-side, allowing the next page to carry on literally from where the last one finished, rather than having to execute additional queries. The downside is that this requires more memory and open file-handles on the server, which could go wrong if there were many unfinished scrolls; for this reason ElasticSearch recommends a short time-out for such operations, after which it will close all resources whether the client has finished or not. The client is responsible for fetching the next page within the time-out.

To use scan and scroll, begin with a search query request, but instead of calling `send`, call `scan`:
let scan = client.search_query()
                 .with_query(Query::build_match("field", "value").build())

(Disclaimer: any use of `unwrap` in this or other examples is for the purposes of brevity; obviously real code should handle errors in accordance with the needs of the application.)


`scroll` can be called multiple times to fetch each page. Finally, `close` will tell ElasticSearch the scan has finished and it can close any open resources.
let first_page = scan.scroll(&mut client);
// omitted - calls of subsequent pages
scan.close(&mut client).unwrap();

The result of the call to `scan` does not include a reference to the client, hence the need to pass in a reference to the client in subsequent calls. The advantage of this is that the same client can be used for other actions based on each page of results.

Scan and scroll with an iterator

Also supported is an iterator which will scroll through a scan.

let scan_iter = scan.iter(&mut client);

The iterator holds a mutable reference to the client, so the same client cannot be used concurrently. However, the iterator will automatically call `close` when it is dropped, so the consumer of such an iterator can use standard iterator functions without having to decide when to call `close` themselves.
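Closing on drop is the standard RAII pattern; a toy version, with a stand-in `ScanIter` type that is not part of rs-es, looks like this:

```rust
use std::cell::Cell;
use std::rc::Rc;

// Toy illustration of close-on-drop: when the iterator goes out of
// scope, `Drop` releases the "server-side" resources without the caller
// having to call `close` explicitly. `ScanIter` is a stand-in type.
struct ScanIter {
    pages_left: u32,
    closed: Rc<Cell<bool>>,
}

impl Iterator for ScanIter {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.pages_left == 0 {
            None
        } else {
            self.pages_left -= 1;
            Some(self.pages_left)
        }
    }
}

impl Drop for ScanIter {
    fn drop(&mut self) {
        // In the real library this is where ElasticSearch would be told
        // to free the scroll's server-side resources.
        self.closed.set(true);
    }
}

fn main() {
    let closed = Rc::new(Cell::new(false));
    {
        let iter = ScanIter { pages_left: 3, closed: Rc::clone(&closed) };
        // Consumers can use ordinary iterator adapters...
        let pages: Vec<u32> = iter.take(2).collect();
        println!("{:?}", pages); // [2, 1]
    } // ...and the scroll is "closed" automatically here.
    assert!(closed.get());
}
```

This is why the consumer can freely use adapters like `take` or `collect`: cleanup is tied to the iterator's lifetime, not to reaching its end.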

Each value returned from the iterator is a `Result`; if an error is returned then it must be assumed that the iterator is closed. The success type is the same as that returned in a normal search (its verbose name is intended to mirror the structure of the JSON returned by ElasticSearch).


Aggregations

Experimental support for aggregations is included.



`Aggregations` is a collection of named aggregations; for convenience's sake, conversion traits are implemented for common patterns: specifically the tuple `(&str, Aggregation)` for a single aggregation, and vectors of such tuples for multiple aggregations.

Bucket aggregations (i.e. those that define a bucket that can contain sub-aggregations) can also be specified as a tuple `(Aggregation, Aggregations)`:

use rs_es::operations::search::aggregations::Aggregations;
use rs_es::operations::search::aggregations::bucket::{Order, OrderKey, Terms};
use rs_es::operations::search::aggregations::metrics::Min;

let aggs = Aggregations::from(("str",
                               (Terms::field("str_field")
                                    .with_order(Order::asc(OrderKey::Term)),
                                Aggregations::from(("int",
                                                    Min::field("int_field"))))));

The above would, when used within a search operation, generate a JSON fragment within the search request:
"str": {
    "terms": {
        "field": "str_field",
        "order": {"_term": "asc"}
    },
    "aggs": {
        "int": {
            "min": {
                "field": "int_field"
            }
        }
    }
}
The majority of aggregations, though not yet all, are currently supported. See the documentation of the aggregations package for details.

For example, to get a reference to the result of the Terms aggregation called "str" (see above):
let terms_result = result.aggs_ref()

EXPERIMENTAL: the structure of results may change as it currently feels quite cumbersome.

Unimplemented features

The ElasticSearch API is made up of a large number of smaller APIs, the vast majority of which are not yet implemented, although the most frequently used ones (searching, indexing, etc.) are.

Some specific, non-exhaustive TODOs:

  1. Add a
  2. Handling API calls that don't deal with JSON objects.
  3. Documentation.
  4. Potentially: Concrete (de)serialization for aggregations and aggregation results
  5. Metric aggregations can have an empty body (check: all or some of them?) when used as a sub-aggregation underneath certain other aggregations.
  6. Performance (ensure use of persistent HTTP connections, etc.).
  7. All URI options are just String (or things that implement ToString), sometimes the values will be arrays that should be coerced into various formats.
  8. Check type of "timeout" option on Search...


   Copyright 2015-2017 Ben Ashford

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
