
message-io

message-io is an asynchronous message library for building network applications easily and fast. The library manages and processes the socket data streams in order to offer a simple event-message API to the user.

It can also be understood as a generic network manager. This means that you can implement your own protocol following a few rules, and message-io will manage the tedious asynchrony and thread management for you. See more here.

Any contribution is welcome!

Who is this project for?

  • People who don't want to deal with concurrency or socket connection problems.
  • People who want to focus their effort on the messages their applications exchange, not on how to transport them.
  • People who want to make a multiplayer game (server and/or client).
  • People who want to make an application that needs to communicate over the TCP / UDP protocols.

Features

  • Asynchronous: internal poll event with non-blocking sockets using mio.
  • Multiplatform: see mio platform support.
  • TCP and UDP (with multicast option) protocols.
  • Internal encoding layer: handle messages, not data streams.
  • FIFO events with timers and priority.
  • Easy, intuitive and consistent API:
    • Follows the KISS principle.
    • Abstraction from the transport layer: do not think about sockets, think about data messages.
    • Only two main entities: an extensible event queue to manage all events, and a network manager to manage all connections (connect, listen, remove, send, receive).
    • Forget concurrency problems: handle thousands of active connections and listeners without any effort, "one thread to rule them all".
    • Easy error handling: no need to deal with internal std::io::Error values when sending/receiving from the network.
  • High performance:
    • One thread manages all internal connections over the fastest OS poll available.
    • Binary serialization.
    • Small runtime overhead over OS sockets.
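The "handle messages, not data streams" idea can be illustrated with a minimal length-prefixed framing sketch in plain Rust (std only; the names and the framing scheme here are illustrative, not message-io's actual internal encoding):

```rust
use std::convert::TryInto;

// Encode a message as [u32 little-endian length][payload bytes].
fn encode(payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(4 + payload.len());
    frame.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    frame.extend_from_slice(payload);
    frame
}

// Decode one message from the front of a buffer, returning the message
// and the remaining bytes, or None if a whole frame has not arrived yet.
fn decode(buffer: &[u8]) -> Option<(&[u8], &[u8])> {
    if buffer.len() < 4 {
        return None;
    }
    let len = u32::from_le_bytes(buffer[..4].try_into().unwrap()) as usize;
    if buffer.len() < 4 + len {
        return None; // Partial frame: wait for more data from the socket.
    }
    Some((&buffer[4..4 + len], &buffer[4 + len..]))
}

fn main() {
    let frame = encode(b"hello");
    let (msg, rest) = decode(&frame).unwrap();
    assert_eq!(msg, b"hello");
    assert!(rest.is_empty());
    println!("decoded {} bytes", msg.len());
}
```

An encoding layer like this is what lets the user receive whole messages instead of arbitrary chunks of a TCP stream.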

Getting started

Add to your Cargo.toml:

message-io = "0.6"

Documentation

TCP & UDP echo server

The following example shows the simplest server that reads messages from clients and responds to them. It is able to manage several client connections and listen on two different protocols at the same time.

use message_io::events::{EventQueue};
use message_io::network::{Network, NetEvent, Transport};

use serde::{Serialize, Deserialize};

#[derive(Deserialize)]
enum InputMessage {
    HelloServer(String),
    // Other input messages here
}

#[derive(Serialize)]
enum OutputMessage {
    HelloClient(String),
    // Other output messages here
}

enum Event {
    Network(NetEvent<InputMessage>),
    // Other user events here
}

fn main() {
    let mut event_queue = EventQueue::new();

    // Create the Network; the callback pushes the network events into the event queue.
    let sender = event_queue.sender().clone();
    let mut network = Network::new(move |net_event| sender.send(Event::Network(net_event)));

    // Listen for TCP and UDP messages on port 3005.
    let addr = "0.0.0.0:3005";
    network.listen(Transport::Tcp, addr).unwrap();
    network.listen(Transport::Udp, addr).unwrap();

    loop {
        match event_queue.receive() { // Read the next event or wait until one arrives.
            Event::Network(net_event) => match net_event {
                NetEvent::Message(endpoint, message) => match message {
                    InputMessage::HelloServer(msg) => {
                        println!("Received: {}", msg);
                        network.send(endpoint, OutputMessage::HelloClient(msg));
                    },
                    // Other input messages here
                },
                NetEvent::AddedEndpoint(_endpoint) => println!("TCP Client connected"),
                NetEvent::RemovedEndpoint(_endpoint) => println!("TCP Client disconnected"),
                NetEvent::DeserializationError(_) => (),
            },
            // Other events here
        }
    }
}

Test yourself!

Clone the repository and try the TCP example that you can find in examples/tcp:

Run the server:

cargo run --example tcp server

In other terminals, run one or more clients:

cargo run --example tcp client

Do you need a transport protocol that message-io doesn't have? Add it!

If the protocol can be built on top of mio (most existing protocol libraries can), then you can add it to message-io quite easily:

  1. Add your adapter file in src/adapters/ implementing the traits that you can find in src/adapter.rs.
  2. Add a new variant to the Transport enum found in src/network.rs to register your new adapter.

That's all! You can use your new transport in the message-io API like any other.

Oops, one more step: make a pull request so everyone can use it :)

Basic concepts

The library has two main pieces:

  • EventQueue: a generic, synchronized queue where all the system events are sent. The user must read these events in its main thread in order to dispatch actions.

  • Network: an abstraction layer over the transport protocols that works with non-blocking sockets. It allows creating/removing connections and sending/receiving messages (defined by the user).
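As a mental model (not message-io's actual implementation), the EventQueue behaves like a synchronized channel whose cloneable sender can be handed to other threads or callbacks, while the main thread reads and dispatches events. A std-only sketch of that pattern, with a hypothetical Event type:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical user event type; message-io's real NetEvent also carries
// endpoints and deserialized message data.
enum Event {
    Message(String),
    Disconnected,
}

fn main() {
    // The sender side is cloneable and can be moved into other threads,
    // much like the sender obtained from an EventQueue.
    let (sender, receiver) = mpsc::channel();

    let net_sender = sender.clone();
    let producer = thread::spawn(move || {
        net_sender.send(Event::Message("hello".into())).unwrap();
        net_sender.send(Event::Disconnected).unwrap();
    });

    // The main thread dispatches events one by one.
    loop {
        match receiver.recv().unwrap() {
            Event::Message(msg) => println!("Received: {}", msg),
            Event::Disconnected => break,
        }
    }
    producer.join().unwrap();
}
```

This is why a single main-thread loop is enough to react to everything the network produces.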

To manage the connections, the Network offers an Endpoint, a unique identifier of a connection that can be used to remove the connection, send to it, or identify incoming messages. It can be understood as the sender/recipient of a message.
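For example, a server typically keeps per-connection state keyed by such an identifier. A std-only sketch with a hypothetical EndpointId standing in for message-io's Endpoint:

```rust
use std::collections::HashMap;

// Hypothetical connection identifier; message-io's Endpoint plays this role.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct EndpointId(u64);

fn main() {
    let mut names: HashMap<EndpointId, String> = HashMap::new();

    // On a connection event: register the new endpoint.
    names.insert(EndpointId(1), "alice".to_string());
    names.insert(EndpointId(2), "bob".to_string());

    // On a message event: identify the sender by its endpoint.
    let from = EndpointId(2);
    println!("message from {}", names[&from]);

    // On a disconnection event: drop the endpoint's state.
    names.remove(&from);
    assert_eq!(names.len(), 1);
}
```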

The power comes when both pieces are joined, allowing all actions to be processed from one thread. To achieve this, the user connects the Network to the EventQueue by sending into the queue each NetEvent the Network produces.
