Listen to your PostgreSQL database in realtime via websockets. Built with Elixir.

1.7K Stars 70 Forks Last release: 7 days ago (v0.7.9) Apache License 2.0 174 Commits 10 Releases

Supabase Realtime

Listens to changes in a PostgreSQL Database and broadcasts them over websockets.

Status

  • [x] Alpha: Under heavy development
  • [ ] Beta: Ready for use. But go easy on us, there may be a few kinks.
  • [ ] 1.0: Use in production!

This repo is still under heavy development and the documentation is evolving. You're welcome to try it, but expect some breaking changes. Watch the releases of this repo to receive a notification when we are ready for Beta. And give us a star if you like it!

Example

import { Socket } from '@supabase/realtime-js'

var socket = new Socket(process.env.REALTIME_URL)
socket.connect()

// Listen to all changes to user ID 99
var userChanges = socket.channel('realtime:public:users:id.eq.99')
  .join()
  .on('*', payload => { console.log('Update received!', payload) })

// Listen to only INSERTS on the 'users' table in the 'public' schema
var userInserts = socket.channel('realtime:public:users')
  .join()
  .on('INSERT', payload => { console.log('Update received!', payload) })

// Listen to all updates from the 'public' schema
var publicUpdates = socket.channel('realtime:public')
  .join()
  .on('UPDATE', payload => { console.log('Update received!', payload) })

// Listen to all changes in the database
var allChanges = socket.channel('realtime:*')
  .join()
  .on('*', payload => { console.log('Update received!', payload) })

Introduction

What is this?

This is an Elixir server (Phoenix) that allows you to listen to changes in your database via websockets.

It works like this:

  1. the Phoenix server listens to PostgreSQL's replication stream (using Postgres's logical decoding)
  2. it converts the byte stream into JSON
  3. it then broadcasts the JSON over websockets
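From the client's point of view, the result of those steps is a stream of JSON change events routed by channel topic. The payload shape and the matchesTopic helper below are illustrative assumptions to show the idea, not the server's exact wire format:

```javascript
// Illustrative sketch only - field names are assumptions, not the exact wire format.
// A decoded change event, roughly what a client callback would receive.
const payload = {
  schema: 'public',
  table: 'users',
  type: 'INSERT',
  record: { id: 99, name: 'Ada' },
}

// A tiny dispatcher showing how topics like 'realtime:public:users'
// narrow events by schema and table.
function matchesTopic(event, topic) {
  const [, schema, table] = topic.split(':')
  if (schema && event.schema !== schema) return false
  if (table && event.table !== table) return false
  return true
}

console.log(matchesTopic(payload, 'realtime:public:users'))  // true
console.log(matchesTopic(payload, 'realtime:public:orders')) // false
console.log(matchesTopic(payload, 'realtime:public'))        // true (schema-level topic)
```

The more segments a topic has, the narrower the subscription, which mirrors the channel examples above.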

Cool, but why not just use Postgres' NOTIFY?

A few reasons:

  1. You don't have to set up triggers on every table
  2. NOTIFY has a payload limit of 8000 bytes and will fail for anything larger. The usual solution is to send an ID then fetch the record, but that's heavy on the database
  3. This server consumes one connection to the database, then you can connect many clients to this server. Easier on your database, and to scale up you just add realtime servers
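The payload ceiling in point 2 is easy to hit with JSON rows. As a quick sketch of why the "send an ID, then fetch the record" workaround becomes necessary, fitsInNotify is a hypothetical helper built around the 8000-byte figure above:

```javascript
// NOTIFY payloads are capped at 8000 bytes; anything larger fails.
const NOTIFY_LIMIT = 8000

// Hypothetical helper: would this record survive a NOTIFY as JSON?
function fitsInNotify(record) {
  return Buffer.byteLength(JSON.stringify(record), 'utf8') <= NOTIFY_LIMIT
}

const small = { id: 1, name: 'Ada' }
const large = { id: 2, bio: 'x'.repeat(10000) } // e.g. a long text column

console.log(fitsInNotify(small)) // true
console.log(fitsInNotify(large)) // false -> fall back to "send the ID, fetch the row"
```

Reading from the replication stream sidesteps this limit entirely, because the full record arrives with the change event.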

What are the benefits?

  1. The beauty of listening to the replication functionality is that you can make changes to your database from anywhere - your API, directly in the DB, via a console, etc. - and you will still receive the changes via websockets.
  2. Decoupling. For example, if you want to send a new Slack message every time someone makes a new purchase, you might otherwise build that functionality directly into your API. This server allows you to decouple that async functionality from your API.
  3. This is built with Phoenix, an extremely scalable Elixir framework.

Quick start

We have set up some simple examples that show how to use this server:

Client libraries

Server

Database setup

There are some requirements for your database:

  1. It must be Postgres 10+, as this server uses logical replication.
  2. Set up your database for replication:
    1. It must have wal_level set to logical. You can check this by running SHOW wal_level;. To set the wal_level, you can call ALTER SYSTEM SET wal_level = logical;
    2. You must set max_replication_slots to at least 1: ALTER SYSTEM SET max_replication_slots = 5;
  3. Create a PUBLICATION for this server to listen to: CREATE PUBLICATION supabase_realtime FOR ALL TABLES;
  4. [OPTIONAL] If you want to receive the old record (previous values) on UPDATE and DELETE, you can set the REPLICA IDENTITY to FULL like this: ALTER TABLE your_table REPLICA IDENTITY FULL;. Unfortunately, this has to be set for each table.
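Collected into a single psql session, the steps above look roughly like this sketch. The connection string and your_table are placeholders, and changing wal_level only takes effect after a Postgres restart:

```shell
# Run against your database as a superuser; the connection string is a placeholder.
psql "postgres://postgres:postgres@localhost:5432/postgres" <<'SQL'
-- 1. Check the current replication level, then set it (restart required to apply)
SHOW wal_level;
ALTER SYSTEM SET wal_level = logical;

-- 2. Allow at least one replication slot
ALTER SYSTEM SET max_replication_slots = 5;

-- 3. Create the publication for this server to listen to
CREATE PUBLICATION supabase_realtime FOR ALL TABLES;

-- 4. [OPTIONAL] Include old records on UPDATE/DELETE (repeat per table)
ALTER TABLE your_table REPLICA IDENTITY FULL;
SQL
```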

Server setup

The easiest way to get started is to use our Docker image. We will add more deployment methods soon.

# Update the environment variables to point to your own database
docker run \
  -e DB_HOST='docker.for.mac.host.internal' \
  -e DB_NAME='postgres' \
  -e DB_USER='postgres' \
  -e DB_PASSWORD='postgres' \
  -e DB_PORT=5432 \
  -e PORT=4000 \
  -e HOSTNAME='localhost' \
  -e SECRET_KEY_BASE='SOMETHING_SUPER_SECRET' \
  -p 4000:4000 \
  supabase/realtime

OPTIONS

DB_HOST          # {string} Database host URL
DB_NAME          # {string} Postgres database name
DB_USER          # {string} Database user
DB_PASSWORD      # {string} Database password
DB_PORT          # {number} Database port
SLOT_NAME        # {string} A unique name for Postgres to track where this server has "listened until". If the server dies, it can pick up from the last position. This should be lowercase.
PORT             # {number} Port which your clients/listeners connect to
HOSTNAME         # {string} Host that the server runs on
SECRET_KEY_BASE  # {string} Secret key base for the Phoenix server; set it to a long random string

Contributing

  • Fork the repo on GitHub
  • Clone the project to your own machine
  • Commit changes to your own branch
  • Push your work back up to your fork
  • Submit a pull request so that we can review your changes and merge

Releasing

  • Make a commit to bump the version in
    mix.exs
  • Tag the commit

To trigger a release you must tag the commit, then push to origin.

git tag -a 0.x.x -m "Some release details / link to release notes"
git push origin 0.x.x

License

This repo is licensed under Apache 2.0.

Credits

Sponsors

We are building the features of Firebase using enterprise-grade, open source products. We support existing communities wherever possible, and if the products don’t exist we build them and open source them ourselves. Thanks to these sponsors who are making the OSS ecosystem better for everyone.

Worklife VC