Firefly's gevent version, with better performance than its previous Twisted version
Improvements in alpha 0.1.5:
1. A single node now automatically reconnects to the root node after losing its connection.
2. gfirefly's underlying library gtwisted was modified so that socket sends are handled in a dedicated greenlet, fixing:
AssertionError: This socket is already used by another greenlet
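The fix in item 2 is the classic single-writer pattern: every sender pushes onto a queue, and one dedicated task drains it, so the socket is only ever written from one place. A minimal sketch using stdlib asyncio as a stand-in for gevent greenlets (the names here are illustrative, not gfirefly's actual API):

```python
import asyncio

sent = []  # stands in for the real socket


async def writer(queue: asyncio.Queue) -> None:
    # The only task that ever touches the socket, serializing all sends.
    while True:
        data = await queue.get()
        if data is None:        # shutdown sentinel
            return
        sent.append(data)       # real code would do sock.sendall(data)


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    w = asyncio.create_task(writer(queue))
    # Many producers can enqueue concurrently without ever sharing the
    # socket itself, which is exactly what the "already used by another
    # greenlet" assertion was complaining about.
    for i in range(3):
        await queue.put(f"msg{i}".encode())
    await queue.put(None)
    await w


asyncio.run(main())
print(sent)
```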
Firefly-gevent is the gevent version of Firefly, and is considerably simpler than the current Twisted-based version.
Gevent is a coroutine-based networking library written in Python. A coroutine is a concurrency model, but unlike threads and callbacks, all tasks run within a single thread and switch to another task by explicitly yielding control. Scheduling happens at the program level rather than at the operating-system level, as with threads.
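The cooperative model can be sketched with the standard library's asyncio (gevent's own API differs, spawning greenlets via gevent.spawn, but the scheduling idea is the same; the worker names below are illustrative):

```python
import asyncio

log = []


async def worker(name: str) -> None:
    # Each await is an explicit switch point: the coroutine voluntarily
    # hands control back to the event loop running in this one thread.
    for step in range(2):
        log.append(f"{name}{step}")
        await asyncio.sleep(0)


async def main() -> None:
    # Both tasks interleave inside a single OS thread; the event loop,
    # not the kernel, decides who runs next (program-level scheduling).
    await asyncio.gather(worker("a"), worker("b"))


asyncio.run(main())
print(log)  # the two tasks' steps interleave: a0, b0, a1, b1
```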
Gevent's most striking feature is its performance, especially compared with a traditional thread-based solution. It is by now almost common knowledge that beyond a certain load, asynchronous I/O significantly outperforms synchronous I/O with one thread per connection. At the same time, gevent exposes an interface that looks much like traditional thread-style programming, while underneath it is asynchronous I/O. Better still, it makes all of this transparent: you do not need to care how the switching works, as gevent handles it for you.
Other factors aside, gevent's performance is about four times that of the thread solution (the comparison here used Paste, another thread-based Python web library, as the baseline). Compared with the single-process multi-threaded model, multiple processes plus coroutines is a more scalable model. Under high concurrency, a program written with the multi-process model is easier to scale out, while the coroutine model can raise single-machine concurrency dramatically, achieving scale-up. The standard server-side concurrency model of the future is therefore likely to be: one process per core, with each process running micro-threads implemented as coroutines.
Coding-wise, the locking and unlocking of shared resources forced by the multi-threaded model has always been a nightmare for programmers. The multi-process model, by contrast, encourages programs that avoid shared state altogether, which improves system robustness. Furthermore, all current Python coroutine implementations use non-preemptive scheduling, so programmers control exactly when tasks switch and can thereby avoid most of the troublesome locking problems. All of this helps programmers write more solid code.
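The locking point can be illustrated with stdlib asyncio (the same property holds for gevent greenlets): because switches only happen at explicit yield points, a read-modify-write sequence with no await in it needs no lock, whereas the threaded equivalent would.

```python
import asyncio

counter = 0


async def bump(times: int) -> None:
    global counter
    for _ in range(times):
        # Read-modify-write with no await in between: no other coroutine
        # can run here, so no lock is needed (non-preemptive scheduling).
        counter += 1
        # A switch can only happen at an explicit point like this one.
        await asyncio.sleep(0)


async def main() -> None:
    await asyncio.gather(*(bump(1000) for _ in range(4)))


asyncio.run(main())
print(counter)  # always exactly 4000: no torn updates, no locks
```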
In addition, compared with the event-driven model, which also offers excellent concurrency, micro-threads built on coroutines express logic in a friendly, straightforward way, freeing programmers from wrestling with unpredictable events and deeply nested callbacks (as Twisted tends to require). The coroutine micro-thread model lets writers of multi-threaded programs gain a concurrency upgrade almost painlessly.
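The readability difference shows up even in a toy fetch-then-process flow: the callback chain below mimics the shape that deferred-style code (as in Twisted) tends to take, while the coroutine version reads top to bottom. All function names here are illustrative:

```python
import asyncio


# Callback style: each step names the next one, so control flow is
# scattered across the chain instead of reading top to bottom.
def fetch_cb(done):
    done("raw")


def parse_cb(raw, done):
    done(raw.upper())


def run_callbacks() -> str:
    out = []
    fetch_cb(lambda raw: parse_cb(raw, out.append))
    return out[0]


# Coroutine style: the same flow reads as ordinary sequential code.
async def fetch() -> str:
    await asyncio.sleep(0)
    return "raw"


async def run_coroutines() -> str:
    raw = await fetch()
    return raw.upper()


print(run_callbacks())
print(asyncio.run(run_coroutines()))
```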
Firefly-gevent, with gevent's performance, encapsulates network I/O handling, database read/write caching, and interface calls among distributed processes, making game server-side development easier and simpler so that developers can focus on gameplay logic without being burdened by these technical problems.