redis-collectd-plugin

A Redis plugin for collectd.
:boom: Project status: I am no longer working with Redis + collectd so am not actively maintaining this project. High quality PRs are welcome. Please contact me if you're interested in helping to maintain the project.

A Redis plugin for collectd using collectd's Python plugin.

You can capture many kinds of Redis metrics, such as:

  • Memory used
  • Commands processed per second
  • Number of connected clients and slaves
  • Number of blocked clients
  • Number of keys stored (per database)
  • Uptime
  • Changes since last save
  • Replication delay (per slave)


Install

  1. Place redis_info.py in /opt/collectd/lib/collectd/plugins/python/ (assuming you have collectd installed to /opt/collectd).
  2. Configure the plugin (see below).
  3. Restart collectd.


Add the following to your collectd config, or use the included redis.conf for a full example. Note that you will have to adjust the cmdstat entries depending on your Redis version; see below.

    # Configure the redis_info-collectd-plugin
    <LoadPlugin python>
      Globals true
    </LoadPlugin>

    <Plugin python>
      ModulePath "/opt/collectd/lib/collectd/plugins/python"
      Import "redis_info"

      <Module redis_info>
        Host "localhost"
        Port 6379
        # Un-comment to use AUTH
        #Auth "1234"
        # Cluster mode expected by default
        #Cluster false
        Verbose false
        #Instance "instance_1"
        # Redis metrics to collect (prefix with Redis_)
        Redis_db0_keys "gauge"
        Redis_uptime_in_seconds "gauge"
        Redis_uptime_in_days "gauge"
        Redis_lru_clock "counter"
        Redis_connected_clients "gauge"
        Redis_connected_slaves "gauge"
        Redis_blocked_clients "gauge"
        Redis_evicted_keys "gauge"
        Redis_used_memory "bytes"
        Redis_used_memory_peak "bytes"
        Redis_changes_since_last_save "gauge"
        Redis_instantaneous_ops_per_sec "gauge"
        Redis_rdb_bgsave_in_progress "gauge"
        Redis_total_connections_received "counter"
        Redis_total_commands_processed "counter"
        Redis_keyspace_hits "derive"
        Redis_keyspace_misses "derive"
        #Redis_master_repl_offset "gauge"
        #Redis_master_last_io_seconds_ago "gauge"
        #Redis_slave_repl_offset "gauge"
        Redis_cmdstat_info_calls "counter"
        Redis_cmdstat_info_usec "counter"
        Redis_cmdstat_info_usec_per_call "gauge"
      </Module>
    </Plugin>
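As the comment in the config above notes, each collected key is the corresponding Redis INFO field name prefixed with Redis_. A minimal Python sketch of that mapping (an illustration based on the config comment, not the plugin's actual code):

```python
# Illustration only (not the plugin's actual code): configured keys are
# prefixed with "Redis_"; stripping the prefix yields the INFO field name.
def info_key(config_key):
    prefix = "Redis_"
    if config_key.startswith(prefix):
        return config_key[len(prefix):]
    return config_key
```

For example, info_key("Redis_used_memory") yields "used_memory", which is then looked up in the server's INFO response.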

Use the command below to see which keys are present or missing:

    redis-cli -h redis-host info commandstats

For example, certain cmdstat entries will not show up because the corresponding commands were never used. If you enable verbose logging and see:

    ... collectd[6139]: redis_info plugin: Info key not found: cmdstat_del_calls, Instance:

it means the given Redis server does not return that value, and you should comment it out of the config to avoid filling the logs with not-so-useful data, not to mention possibly triggering log-line dropping.
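You can also check this without collectd by parsing the INFO output yourself and comparing it against the keys you configured. A rough sketch (my own helper functions, not part of the plugin):

```python
def parse_info(info_text):
    """Parse `redis-cli info` style output into a flat dict.

    Plain lines look like `connected_clients:1`; commandstats lines look like
    `cmdstat_info:calls=5,usec=217,usec_per_call=43.40` and are flattened to
    cmdstat_info_calls, cmdstat_info_usec, cmdstat_info_usec_per_call.
    """
    result = {}
    for line in info_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue  # skip blanks and section headers like "# Commandstats"
        key, _, value = line.partition(":")
        if "=" in value and "," in value:
            # multi-field value: flatten each sub-field under key_sub
            for field in value.split(","):
                sub, _, v = field.partition("=")
                result["{}_{}".format(key, sub)] = v
        else:
            result[key] = value
    return result


def missing_keys(info_text, configured_keys):
    """Return the configured keys the server did not report."""
    info = parse_info(info_text)
    return [k for k in configured_keys if k not in info]
```

For instance, if the server has never executed DEL, cmdstat_del_calls will be absent from commandstats and show up in the missing list.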

Multiple Redis instances

You can monitor multiple Redis instances with different configurations from the same machine by repeating the Module section, such as:

    <Plugin python>
      ModulePath "/opt/collectd_plugins"
      Import "redis_info"

      <Module redis_info>
        Host ""
        Port 9100
        Verbose true
        Instance "instance_9100"
        Redis_uptime_in_seconds "gauge"
        Redis_used_memory "bytes"
        Redis_used_memory_peak "bytes"
      </Module>

      <Module redis_info>
        Host ""
        Port 9101
        Verbose true
        Instance "instance_9101"
        Redis_uptime_in_seconds "gauge"
        Redis_used_memory "bytes"
        Redis_used_memory_peak "bytes"
        Redis_master_repl_offset "gauge"
      </Module>

      <Module redis_info>
        Host ""
        Port 9102
        Verbose true
        Instance "instance_9102"
        Redis_uptime_in_seconds "gauge"
        Redis_used_memory "bytes"
        Redis_used_memory_peak "bytes"
        Redis_slave_repl_offset "gauge"
        # notice, this is not added in the above sections
        Redis_cmdstat_info_calls "counter"
        Redis_cmdstat_info_usec "counter"
        Redis_cmdstat_info_usec_per_call "gauge"
      </Module>
    </Plugin>

These three Redis instances listen on different ports, so each gets a different plugin_instance value, built from Host and Port:

    "plugin_instance" => "",
    "plugin_instance" => "",
    "plugin_instance" => "",

These values become part of the metric name emitted by collectd.
If you want to set a static value for the plugin instance, use the Instance configuration option:

    Host ""
    Port 9102
    Instance "redis-prod"

This will result in metric names that use "redis-prod" as the plugin instance.

Instance can also be set to an empty string; in that case the metric name will not contain any reference to the host/port. If Instance is omitted entirely, the host:port value is added to the metric name.
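The naming rules above can be sketched as a small function (my interpretation of this README's description, not the plugin's verbatim logic):

```python
# Hypothetical helper mirroring the plugin_instance rules described above.
def plugin_instance(host, port, instance=None):
    # Instance omitted entirely -> host:port is used in the metric name.
    if instance is None:
        return "{}:{}".format(host, port)
    # Instance set (possibly to "") -> used as-is; an empty string means
    # no host/port reference appears in the metric name at all.
    return instance
```

So plugin_instance("localhost", 9102) gives "localhost:9102", passing instance="redis-prod" gives "redis-prod", and passing instance="" gives an empty instance part.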

Multiple Data source types

You can send multiple data source types from the same key by listing the key more than once in the Module section:

    <Module redis_info>
      Host "localhost"
      Port 6379

      Redis_total_net_input_bytes "bytes"
      Redis_total_net_output_bytes "bytes"
      Redis_total_net_input_bytes "derive"
      Redis_total_net_output_bytes "derive"
    </Module>
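The practical difference: a "bytes" data source records the absolute counter value, while a "derive" source is turned into a per-second rate from two successive readings. Roughly (a sketch of how a DERIVE rate is computed, not collectd's actual code):

```python
# Sketch: a DERIVE data source becomes the delta between two successive
# readings divided by the elapsed time in seconds.
def derive_rate(prev_value, prev_time, cur_value, cur_time):
    return (cur_value - prev_value) / float(cur_time - prev_time)
```

For example, if total_net_input_bytes goes from 1000 to 4000 over 10 seconds, derive_rate(1000, 0, 4000, 10) yields 300.0 bytes/sec, while the "bytes" source would simply record 4000.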

Graph examples

These graphs were created using collectd's rrdtool plugin, drraw, and Graphite with Grafana (images not reproduced here):

  • Clients connected
  • Commands/sec
  • db0 keys
  • Memory used
  • Command stats in Grafana 1
  • Command stats in Grafana 2


Requirements

  • collectd 4.9+

Devel workflow with Docker & Docker Compose

You can start hacking right away by using the provided Docker Compose manifest. No devel packages or libs need to be installed on the development host; just Docker and Docker Compose and you are good to go!

The Compose manifest launches a Redis server container (Redis 4.x as of Dec '17) and a collectd + Python runtime container. Both containers share the same network interface so they can reach each other (not a best practice in production, but fair enough for development). The collectd container also mounts the following from the Docker host's git repo directory:

  • The Python plugin file.
  • The plugin config file.
  • An additional collectd conf file that makes all collectd readings go to stdout (using the CSV plugin).

Just hack, change confs, and test by doing:

    $ docker-compose up

Stop by Ctrl-C'ing. Rinse and repeat.
