Planning-based Poetry Generation

A classical Chinese quatrain generator based on the RNN encoder-decoder framework.

Here I tried to implement the planning-based architecture proposed in Wang et al. 2016, though the technical details may differ from the original paper. My goal was not to refine the neural network model and produce better results myself. Rather, I wish to provide the simple framework described in the paper, along with convenient data-processing toolkits, for anyone who wants to experiment with their own ideas on this interesting task.

As of June 2018, this project has been refactored into Python 3 using TensorFlow 1.8.

Code Organization

Structure of Code

The diagram above illustrates the major dependencies in this codebase, in terms of both data and functionality. I tried to organize the code around data and make every data-processing module a singleton at runtime. Batch processing is done only when the produced result is missing or outdated.
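The "missing or outdated" rule can be sketched with file modification times. This is an illustrative, standard-library-only sketch, not the project's actual API; the function names `is_outdated` and `ensure_product` are my own:

```python
import os

def is_outdated(product_path, source_paths):
    """A processed file is stale if it is missing or older than any of
    the source files it was derived from."""
    if not os.path.exists(product_path):
        return True
    product_mtime = os.path.getmtime(product_path)
    return any(os.path.getmtime(src) > product_mtime for src in source_paths)

def ensure_product(product_path, source_paths, build_fn):
    """Run the (expensive) batch job only when the product is stale."""
    if is_outdated(product_path, source_paths):
        build_fn(product_path, source_paths)
```

Keying regeneration off timestamps keeps repeated runs cheap: once the processed data exists and is newer than its sources, the batch step is skipped entirely.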


Data Processing

Run the following command to generate training data from source text data:


Depending on your hardware, this can take anywhere from a few minutes to well over an hour. The keyword extraction is based on the TextRank algorithm, which can take a long time to converge.
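TextRank scores candidate keywords by running PageRank-style power iteration over a word co-occurrence graph, which is why convergence can be slow on a large corpus. Below is a minimal, dependency-free sketch of the idea (simplified to an unweighted co-occurrence window; the actual extraction in this repo may differ in graph construction and weighting):

```python
from collections import defaultdict

def textrank(words, window=2, damping=0.85, iters=50, tol=1e-6):
    """Rank words by power iteration over their co-occurrence graph."""
    # Build an undirected co-occurrence graph over a sliding window.
    neighbors = defaultdict(set)
    for i in range(len(words)):
        for j in range(i + 1, min(i + window + 1, len(words))):
            if words[i] != words[j]:
                neighbors[words[i]].add(words[j])
                neighbors[words[j]].add(words[i])
    if not neighbors:
        return []
    scores = {w: 1.0 for w in neighbors}
    for _ in range(iters):
        # Each word receives rank from its neighbors, normalized by degree.
        new = {w: (1 - damping) + damping * sum(
                   scores[u] / len(neighbors[u]) for u in neighbors[w])
               for w in neighbors}
        converged = max(abs(new[w] - scores[w]) for w in neighbors) < tol
        scores = new
        if converged:
            break
    return sorted(scores, key=scores.get, reverse=True)
```

The `iters`/`tol` loop is where the time goes: more vocabulary and denser co-occurrence mean more iterations before the scores settle.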


The poem planner is based on Gensim's Word2Vec module. To train it, simply run:

./ -p
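The planner's job is to expand the user's hint into one keyword per line of the quatrain, choosing words close to the hint in embedding space. Here is a toy sketch of that selection step; the vocabulary and 2-D vectors are made up purely for illustration, whereas the real planner uses Word2Vec vectors trained on the poetry corpus:

```python
import math

# Toy 2-D "embeddings" -- stand-ins for trained Word2Vec vectors.
EMBEDDINGS = {
    "spring": (0.9, 0.1),
    "flower": (0.8, 0.2),
    "rain":   (0.7, 0.4),
    "moon":   (0.1, 0.9),
    "wine":   (0.2, 0.8),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def plan_keywords(hint, n_lines=4):
    """Pick one keyword per poem line: the hint itself, then its
    nearest neighbors in embedding space."""
    ranked = sorted(
        (w for w in EMBEDDINGS if w != hint),
        key=lambda w: cosine(EMBEDDINGS[hint], EMBEDDINGS[w]),
        reverse=True,
    )
    return [hint] + ranked[: n_lines - 1]
```

With real Word2Vec vectors the same nearest-neighbor query gives the generator four thematically coherent anchors, one per line of the quatrain.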

The poem generator is implemented as an encoder-decoder model with an attention mechanism. To train it, type the following command:

./ -g
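At each decoding step, attention computes a softmax over similarity scores between the current decoder state and all encoder outputs, then feeds the weighted context back into the decoder. A dependency-free sketch of dot-product attention follows; the actual model here is built in TensorFlow 1.8 and may use a different score function:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(decoder_state, encoder_states):
    """Dot-product attention: weight each encoder state by its
    similarity to the decoder state; return context vector and weights."""
    scores = [sum(d * e for d, e in zip(decoder_state, enc))
              for enc in encoder_states]
    weights = softmax(scores)
    dim = len(decoder_state)
    context = [sum(w * enc[i] for w, enc in zip(weights, encoder_states))
               for i in range(dim)]
    return context, weights
```

The context vector lets the decoder focus on different source positions (here, different planned keywords and previously generated lines) at each output step.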

You can also train both models together by running:

./ -a

To erase all trained models, run:

./ --clean

As it turned out, the refactored attention-based generator is hard to train well. In my experiments, the average loss typically gets stuck at ~5.6 and does not decrease further. There should be considerable room for improvement.

Run Tests

Type the following command:


Then, each time you type in a hint text in Chinese, it should return a somewhat gibberish poem. It's up to you to decide how to improve the models and training methods to make them work better.

Improve It

  • To add a data-processing tool, consider adding its dependency config to __dependency_dict, which helps automatically regenerate processed data when it goes stale.

  • To improve the planning model, please refine the planner class.

  • To improve the generation model, please refine the generator class.
