KafkaEx

Kafka client library for Elixir

KafkaEx is an Elixir client for Apache Kafka with support for Kafka versions 0.8.0 and newer. KafkaEx requires Elixir 1.5+ and Erlang OTP 19+.

See https://hexdocs.pm/kafka_ex for documentation, https://github.com/kafkaex/kafka_ex for code.

KafkaEx supports the following Kafka features:

  • Broker and Topic Metadata
  • Produce Messages
  • Fetch Messages
  • Message Compression with Snappy and gzip
  • Offset Management (fetch / commit / autocommit)
  • Consumer Groups
  • Topics Management (create / delete)

See Kafka Protocol Documentation and A Guide to the Kafka Protocol for details of these features.

IMPORTANT - Kayrock and The Future of KafkaEx


  • This is a new implementation and we need people to test it!
  • Set kafka_version: "kayrock" in your configuration to use the new client implementation.
  • The new client should be compatible with existing code when used this way.
  • Many functions now support an api_version parameter; see below for details, e.g., how to store offsets in Kafka instead of Zookeeper. A brief sketch follows this list.
  • Version 1.0 of KafkaEx will be based on Kayrock and have a cleaner API - you can start testing this API by using modules from the KafkaEx.New namespace. See below for details.
  • Version 0.11.0+ of KafkaEx is required to use Kayrock.
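
As an illustration of the api_version option, a hedged sketch (the option name and the version number are assumptions based on the Kayrock-based client; consult the repository docs for the exact versions each function supports):

# assumes kafka_version: "kayrock" is set in config
# api_version: 3 is an illustrative value, not taken from the original text
KafkaEx.fetch("some_topic", 0, offset: 0, auto_commit: false, api_version: 3)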

To support some oft-requested features (offset storage in Kafka, message timestamps), we have integrated KafkaEx with Kayrock, a library that handles serialization and deserialization of the Kafka message protocol in a way that can grow as Kafka does.

Unfortunately, the existing KafkaEx API is built in such a way that it doesn't easily support this growth. This, combined with a number of other existing warts in the current API, has led us to the conclusion that v1.0 of KafkaEx should have a new and cleaner API.

The path we have planned to get to v1.0 is:

  1. Add a Kayrock compatibility layer for the existing KafkaEx API (DONE, not released).
  2. Expose Kayrock's API versioning through a select handful of KafkaEx API functions so that users can get access to the most-requested features (e.g., offset storage in Kafka and message timestamps) (DONE, not released).
  3. Begin designing and implementing the new API in parallel in the KafkaEx.New namespace (EARLY PROGRESS).
  4. Incrementally release the new API alongside the legacy API so that early adopters can test it.
  5. Once the new API is complete and stable, move it to the KafkaEx namespace (i.e., drop the New part) and it will replace the legacy API. This will be released as v1.0.

Users of KafkaEx can help a lot by testing the new code. At first, we need people to test the Kayrock-based client using compatibility mode. You can do this by simply setting kafka_version: "kayrock" in your configuration, as sketched below; that should be all you need to change. If you want to test new features enabled by api_version options, that is also very valuable to us (see below for links to details). Then, as work on the new API ramps up, users can contribute feedback to pull requests (or even contribute pull requests!) and test out the new API as it becomes available.
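
A minimal configuration sketch for compatibility mode (the broker address is a placeholder for your own cluster):

# config/config.exs
use Mix.Config  # import Config on Elixir >= 1.9

config :kafka_ex,
  brokers: [{"localhost", 9092}],
  kafka_version: "kayrock"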

For more information on using the Kayrock-based client, see kayrock.md in the repository.

For more information on the v1.0 API, see new_api.md in the repository.

Using KafkaEx in an Elixir project

The standard approach for adding dependencies to an Elixir application applies: add KafkaEx to the deps list in your project's mix.exs file. You may also optionally add snappyer (required only if you want to use snappy compression).

# mix.exs
defmodule MyApp.Mixfile do
  # ...

  defp deps do
    [
      # add to your existing deps
      {:kafka_ex, "~> 0.11"},
      # if using snappy-erlang-nif (snappy) compression
      {:snappy, git: "https://github.com/fdmanana/snappy-erlang-nif"},
      # if using snappyer (snappy) compression
      {:snappyer, "~> 1.2"}
    ]
  end
end

Then run mix deps.get to fetch dependencies.


Configuration

See config/config.exs or KafkaEx.Config for a description of configuration variables, including the Kafka broker list and default consumer group.

You can also override options when creating a worker, see below.
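
For example, a minimal sketch of such a configuration (broker list and group name are placeholders for your environment):

# config/config.exs
use Mix.Config

config :kafka_ex,
  brokers: [{"localhost", 9092}, {"localhost", 9093}, {"localhost", 9094}],
  consumer_group: "kafka_ex"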

Timeouts with SSL

When using certain versions of OTP, random timeouts can occur if using SSL.

Impacted versions:

  • OTP 21.3.8.1 -> 21.3.8.14
  • OTP 22.1 -> 22.3.1

Upgrade respectively to OTP 21.3.8.15 or 22.3.2 to solve this.

Usage Examples

Consumer Groups

To use a consumer group, first implement a handler module using KafkaEx.GenConsumer:

defmodule ExampleGenConsumer do
  use KafkaEx.GenConsumer

  alias KafkaEx.Protocol.Fetch.Message

  require Logger

  # note - messages are delivered in batches
  def handle_message_set(message_set, state) do
    for %Message{value: message} <- message_set do
      Logger.debug(fn -> "message: " <> inspect(message) end)
    end

    {:async_commit, state}
  end
end

Then add a KafkaEx.ConsumerGroup to your application's supervision tree and configure it to use the implementation module, as sketched below.
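
A hedged sketch of wiring this up (the group name, topic, and child-spec shape are illustrative assumptions, not taken from the original text):

# e.g., in your Application.start/2 callback
children = [
  # KafkaEx.ConsumerGroup.start_link(consumer_module, group_name, topics, opts)
  %{
    id: KafkaEx.ConsumerGroup,
    start: {KafkaEx.ConsumerGroup, :start_link,
            [ExampleGenConsumer, "example_group", ["example_topic"], []]}
  }
]

Supervisor.start_link(children, strategy: :one_for_one)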

See the KafkaEx.GenConsumer and KafkaEx.ConsumerGroup documentation for details.

Create a KafkaEx Worker

KafkaEx worker processes manage the state of the connection to the Kafka broker.

iex> KafkaEx.create_worker(:pr) # where :pr is the process name of the created worker
{:ok, #PID<0.171.0>}

With custom options:

iex> uris = [{"localhost", 9092}, {"localhost", 9093}, {"localhost", 9094}]
[{"localhost", 9092}, {"localhost", 9093}, {"localhost", 9094}]
iex> KafkaEx.create_worker(:pr, [uris: uris, consumer_group: "kafka_ex", consumer_group_update_interval: 100])
{:ok, #PID<0.172.0>}
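
Most API calls can then be directed through a named worker via the worker_name option; a brief sketch using the :pr worker created above:

iex> KafkaEx.produce("foo", 0, "hey", worker_name: :pr)
iex> KafkaEx.metadata(worker_name: :pr, topic: "foo")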

Create an unnamed KafkaEx worker

You may find you want to create many workers, say in conjunction with a poolboy pool. In this scenario you usually won't want to name these worker processes.

To create an unnamed worker, pass :no_name:

iex> KafkaEx.create_worker(:no_name) # indicates to the server process not to name the process
{:ok, #PID<0.171.0>}

Use KafkaEx with a pooling library

Note that KafkaEx has a supervisor to manage its workers. If you are using Poolboy or a similar library, you will want to manually create a worker so that it is not supervised by KafkaEx.Supervisor. To do this, you will need to call:

GenServer.start_link(KafkaEx.Config.server_impl,
  [
    [uris: KafkaEx.Config.brokers(),
     consumer_group: Application.get_env(:kafka_ex, :consumer_group)],
    :no_name
  ])

Alternatively, you can call

KafkaEx.start_link_worker(:no_name)

Retrieve kafka metadata

For all metadata

iex> KafkaEx.metadata
%KafkaEx.Protocol.Metadata.Response{brokers: [%KafkaEx.Protocol.Metadata.Broker{host: "",
   node_id: 49162, port: 49162, socket: nil}],
 topic_metadatas: [%KafkaEx.Protocol.Metadata.TopicMetadata{error_code: :no_error,
   partition_metadatas: [%KafkaEx.Protocol.Metadata.PartitionMetadata{error_code: :no_error,
     isrs: [49162], leader: 49162, partition_id: 0, replicas: [49162]}],
   topic: ...},
  ...]}

For a specific topic

iex> KafkaEx.metadata(topic: "foo")
%KafkaEx.Protocol.Metadata.Response{brokers: [%KafkaEx.Protocol.Metadata.Broker{host: "",
   node_id: 49162, port: 49162, socket: nil}],
 topic_metadatas: [%KafkaEx.Protocol.Metadata.TopicMetadata{error_code: :no_error,
   partition_metadatas: [%KafkaEx.Protocol.Metadata.PartitionMetadata{error_code: :no_error,
     isrs: [49162], leader: 49162, partition_id: 0, replicas: [49162]}],
   topic: "foo"}]}

Retrieve offset from a particular time

Kafka returns the starting offset of the log segment created no later than the given timestamp. Because the offset request is served only at segment granularity, the result is less accurate for larger segment sizes.

iex> KafkaEx.offset("foo", 0, {{2015, 3, 29}, {23, 56, 40}}) # the time specified should match or be ahead of the clock on the server running Kafka
[%KafkaEx.Protocol.Offset.Response{partition_offsets: [%{error_code: :no_error, offset: [256], partition: 0}], topic: "foo"}]

Retrieve the latest offset

iex> KafkaEx.latest_offset("foo", 0) # where 0 is the partition
[%KafkaEx.Protocol.Offset.Response{partition_offsets: [%{error_code: :no_error, offset: [16], partition: 0}], topic: "foo"}]

Retrieve the earliest offset

iex> KafkaEx.earliest_offset("foo", 0) # where 0 is the partition
[%KafkaEx.Protocol.Offset.Response{partition_offsets: [%{error_code: :no_error, offset: [0], partition: 0}], topic: "foo"}]

Fetch kafka logs

NOTE You must pass auto_commit: false in the options for fetch/3 when using Kafka < 0.8.2 or when using :no_consumer_group.

iex> KafkaEx.fetch("foo", 0, offset: 5) # where 0 is the partition and 5 is the offset we want to start fetching from
[%KafkaEx.Protocol.Fetch.Response{partitions: [%{error_code: :no_error,
     hw_mark_offset: 115,
     message_set: [
      %KafkaEx.Protocol.Fetch.Message{attributes: 0, crc: 4264455069, key: nil, offset: 5, value: "hey"},
      %KafkaEx.Protocol.Fetch.Message{attributes: 0, crc: 4264455069, key: nil, offset: 6, value: "hey"},
      %KafkaEx.Protocol.Fetch.Message{attributes: 0, crc: 4264455069, key: nil, offset: 7, value: "hey"},
      %KafkaEx.Protocol.Fetch.Message{attributes: 0, crc: 4264455069, key: nil, offset: 8, value: "hey"},
      %KafkaEx.Protocol.Fetch.Message{attributes: 0, crc: 4264455069, key: nil, offset: 9, value: "hey"}
...], partition: 0}], topic: "foo"}]

Produce kafka logs

iex> KafkaEx.produce("foo", 0, "hey") # where "foo" is the topic and "hey" is the message
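
When required_acks is at least 1, produce returns the offset assigned to the message; a brief sketch (the returned offset value is illustrative):

iex> KafkaEx.produce("foo", 0, "hey", required_acks: 1)
{:ok, 9772}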

Stream kafka logs

See the KafkaEx.stream/3 documentation for details on streaming.
iex> KafkaEx.produce("foo", 0, "hey")
iex> KafkaEx.produce("foo", 0, "hi")
iex>"foo", 0, offset: 0) |> Enum.take(2)
[%{attributes: 0, crc: 4264455069, key: nil, offset: 0, value: "hey"},
 %{attributes: 0, crc: 4251893211, key: nil, offset: 1, value: "hi"}]

For Kafka < 0.8.2 you must also pass auto_commit: false:

iex> KafkaEx.stream("foo", 0, offset: 0, auto_commit: false) |> Enum.take(2)


Compression

Snappy and gzip compression are supported. Example usage for producing compressed messages:

message1 = %KafkaEx.Protocol.Produce.Message{value: "value 1"}
message2 = %KafkaEx.Protocol.Produce.Message{key: "key 2", value: "value 2"}
messages = [message1, message2]

# snappy
produce_request = %KafkaEx.Protocol.Produce.Request{
  topic: "test_topic",
  partition: 0,
  required_acks: 1,
  compression: :snappy,
  messages: messages}
KafkaEx.produce(produce_request)

# gzip
produce_request = %KafkaEx.Protocol.Produce.Request{
  topic: "test_topic",
  partition: 0,
  required_acks: 1,
  compression: :gzip,
  messages: messages}
KafkaEx.produce(produce_request)

Compression is handled automatically on the consuming/fetching end.
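
A short sketch of reading the compressed messages back, following the fetch response shape shown above (decompression is transparent):

iex> [resp] = KafkaEx.fetch("test_topic", 0, offset: 0, auto_commit: false)
iex> [%{message_set: messages}] = resp.partitions
iex> Enum.map(messages, & &1.value)
["value 1", "value 2"]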


Testing

It is strongly recommended to test using the Dockerized test cluster described below. This is required for contributions to KafkaEx.

NOTE You may have to run the test suite twice to get tests to pass. Due to asynchronous issues, the test suite sometimes fails on the first try.

Dockerized Test Cluster

Testing KafkaEx requires a local SSL-enabled Kafka cluster with 3 nodes: one node listening on each of ports 9092, 9093, and 9094. The easiest way to do this is to use the scripts in this repository that utilize Docker and Docker Compose (both of which are freely available). This is the method we use for our CI testing of KafkaEx.

To launch the included test cluster, run

./scripts/docker_up.sh

The script will attempt to determine an IP address for your computer on an active network interface. The test cluster runs the Kafka version targeted by the project's CI setup.

Running the KafkaEx Tests

The KafkaEx tests are split up using tags to handle testing multiple scenarios and Kafka versions.

Unit tests

These tests do not require a Kafka cluster to be running (see test/test_helper.exs:3 for the tags excluded when running this).

mix test --no-start

Integration tests

If you are not using the Docker test cluster, you may need to modify config/config.exs for your setup.

The full test suite requires a running Kafka cluster; include the tags below that match your cluster's version.

Kafka >= 0.9.0

The 0.9 client includes functionality that cannot be tested with older clusters.

mix test --include integration --include consumer_group --include server_0_p_9_p_0
Kafka >= 0.8.2 and < 0.9.0

Kafka 0.8.2 introduced the consumer group API.

mix test --include consumer_group --include integration
Kafka < 0.8.2

If your test cluster is older, the consumer group tests must be omitted.

mix test --include integration --include server_0_p_8_p_0

Static analysis

mix dialyzer


Contributing

All contributions are managed through the kafkaex github repo.

If you find a bug or would like to contribute, please open an issue or submit a pull request. Please refer to CONTRIBUTING.md for our contribution process.

KafkaEx has a Slack channel: #kafkaex on the elixir-lang Slack, where you can request an invite. The Slack channel is appropriate for quick questions or general design discussions, and its discussion history is archived.

The default snappy implementation is the snappy-erlang-nif package. It can be changed to snappyer with:

config :kafka_ex, snappy_module: :snappyer

snappy-erlang-nif is deprecated; the default will change to :snappyer in the 1.0.0 release.
