Requires `Numpy>=1.14.5` and `gensim>=3.5.0`.

```
pip install FastTextRank==1.1
```




If you spot anything that could be optimized, pull requests are welcome.


Extracts abstracts and keywords from Chinese text, uses an optimized iterative algorithm to improve running speed, and can optionally use word vectors to improve accuracy.


PageRank is Google's algorithm for ranking website pages.
It was originally used to calculate the importance of web pages: the entire web can be seen as a directed graph in which each node is a page, and the algorithm calculates every node's importance from its connections.
* My algorithm changes the iterative step to make it much faster: it costs about 10 ms per article, while TextRank4ZH costs about 80 ms on the same data.
* My algorithm can also use word2vec to make the abstract more accurate, at the cost of extra running time: with word2vec it costs about 40 ms per article on the same training data.
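The classic PageRank update that TextRank builds on can be sketched as follows. This is a generic power-iteration version for illustration, not the package's optimized variant; the function name and defaults are illustrative:

```python
import numpy as np

def pagerank(adj, d=0.85, max_iter=100, tol=1e-6):
    """Power-iteration PageRank on a weighted adjacency matrix.

    adj[i, j] is the weight of the edge from node j to node i.
    """
    n = adj.shape[0]
    # Normalize each column so every node distributes its score over its out-edges.
    out = adj.sum(axis=0)
    out[out == 0] = 1.0  # avoid division by zero for dangling nodes
    M = adj / out
    scores = np.ones(n) / n
    for _ in range(max_iter):
        new = (1 - d) / n + d * M.dot(scores)
        if np.abs(new - scores).sum() < tol:  # converged within tolerance
            return new
        scores = new
    return scores
```

The iteration stops as soon as successive score vectors differ by less than `tol`, which is the same kind of stopping rule the `max_iter` and `tol` parameters below control.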



  1. Cut the article into sentences
  2. Calculate the similarity between sentences:
    • using the cosine similarity of their word vectors, or
    • using the words the two sentences have in common
  3. Build a graph from the sentence similarities
  4. Calculate the importance of each sentence with the improved iterative algorithm
  5. Get the abstract

### API

  • use_stopword: boolean, default True
  • stopwordsfile: str, default None. The stop-words file to use; if None, the package's built-in stop words are used.
  • usew2v: boolean, default False. If True, you must also pass the dict_path parameter.
  • dict_path: str, default None. The word-vector file used when usew2v is True.
  • max_iter: maximum number of iterations
  • tol: maximum tolerated error (convergence threshold)
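For the word-overlap option in step 2, the standard TextRank sentence similarity divides the number of shared words by the log of each sentence's length. A minimal sketch; the package's exact weighting may differ, and `sentence_similarity` is an illustrative name:

```python
import math

def sentence_similarity(words1, words2):
    """Overlap similarity between two tokenized sentences."""
    # Count distinct words the two sentences share.
    common = len(set(words1) & set(words2))
    # Normalize by sentence lengths so long sentences are not favored.
    denom = math.log(len(words1)) + math.log(len(words2))
    if denom == 0.0:  # both sentences have a single word
        return 0.0
    return common / denom
```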



  1. Cut the article into words
  2. Calculate the similarity between words: if two words appear within the window distance of each other, add 1.0 to the weight of the edge between them. The window size is set by the user.
  3. Build a graph from the word similarities
  4. Calculate the importance of each word with the improved iterative algorithm
  5. Get the keywords
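Step 2's window-based edge weighting can be sketched like this (illustrative names; the real implementation lives inside the package):

```python
from collections import defaultdict

def cooccurrence_graph(words, window=2):
    """Add 1.0 to the edge weight of every pair of distinct words
    that appear within `window` positions of each other."""
    graph = defaultdict(float)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window, len(words))):
            pair = tuple(sorted((w, words[j])))
            if pair[0] != pair[1]:  # skip self-loops
                graph[pair] += 1.0
    return dict(graph)
```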


  • use_stopword: boolean, default True
  • stopwordsfile: str, default None. The stop-words file to use; if None, the package's built-in stop words are used.
  • max_iter: maximum number of iterations
  • tol: maximum tolerated error (convergence threshold)
  • window: int, default 2. The window used to decide whether two words are related.
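Putting the keyword steps together, a toy end-to-end ranker over the co-occurrence graph might look like this. It uses a plain TextRank loop rather than the package's optimized iteration, and all names are illustrative:

```python
def textrank_keywords(words, window=2, d=0.85, max_iter=100, tol=1e-6, topk=3):
    """Rank words by a TextRank-style score over a co-occurrence graph."""
    # Build undirected, weighted co-occurrence edges within the window.
    neighbors = {}
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window, len(words))):
            u, v = w, words[j]
            if u == v:
                continue
            neighbors.setdefault(u, {}).setdefault(v, 0.0)
            neighbors[u][v] += 1.0
            neighbors.setdefault(v, {}).setdefault(u, 0.0)
            neighbors[v][u] += 1.0
    # Iterate scores until they change by less than tol.
    scores = {w: 1.0 for w in neighbors}
    for _ in range(max_iter):
        new = {}
        for w in neighbors:
            s = 0.0
            for u, wt in neighbors[w].items():
                # Each neighbor u passes on a share of its score
                # proportional to the edge weight u->w.
                s += wt / sum(neighbors[u].values()) * scores[u]
            new[w] = (1 - d) + d * s
        converged = max(abs(new[w] - scores[w]) for w in scores) < tol
        scores = new
        if converged:
            break
    return sorted(scores, key=scores.get, reverse=True)[:topk]
```

In a real run the input would first be segmented and stop-word filtered, which is what the `use_stopword` and `stopwordsfile` parameters above control.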
