
NATS toolkit


Our new implementation is available at https://github.com/tshi04/LeafNATS. Please check it out.

  • Check the Python 2.7 version of NATS here.
  • This repository is a PyTorch implementation of seq2seq models for the following survey:

Neural Abstractive Text Summarization with Sequence-to-Sequence Models

Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, Chandan K. Reddy

Requirements and Installation

  • Python 3.5.2
  • glob
  • argparse
  • shutil
  • PyTorch 1.0
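
Note that glob, argparse, and shutil ship with the Python standard library, so only PyTorch needs to be installed separately. A minimal setup, assuming pip is available (the exact version pin is an assumption):

    pip install torch==1.0.0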

Dataset

In this survey, we ran an extensive set of experiments with NATS on the following datasets. Here, we provide a link to the CNN/Daily Mail dataset, along with data-processing code for the Newsroom and Bytecup2018 datasets:

  • CNN/Daily Mail
  • Newsroom
  • Bytecup2018

In the dataset, <s> and </s> are used to separate sentences, and <sec> is used to separate summaries from articles. We did not use the JSON format because it takes more space and is more difficult to transfer.
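
A minimal sketch of reading this format, assuming one summary-article pair per line with the summary before the <sec> separator (the field order and the function name are assumptions, not part of the toolkit):

    # Sketch: parse NATS-style plain-text data.
    # Assumed layout per line: summary <sec> article, with <s>...</s> around sentences.
    def load_pairs(path):
        pairs = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                summary, article = line.strip().split("<sec>")
                # Recover individual summary sentences from the <s> ... </s> markers.
                sentences = [s.replace("</s>", "").strip()
                             for s in summary.split("<s>") if s.strip()]
                pairs.append((sentences, article.strip()))
        return pairs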

Usage

  • Training:
    python main.py
  • Validation:
    python main.py --task validate
  • Test (beam search decoding):
    python main.py --task beam
  • ROUGE evaluation:
    python main.py --task rouge

Features

NATS is equipped with the following features:

  • Attention-based seq2seq framework.
    The encoder and decoder can be either an LSTM or a GRU, and the attention scores can be calculated with three different alignment methods (see the first sketch after this list).
  • Pointer-generator network.
  • Intra-temporal attention mechanism and intra-decoder attention mechanism.
  • Coverage mechanism.
  • Weight sharing mechanism.
    Weight sharing can boost performance with significantly fewer parameters.
  • Beam search algorithm.
    We implemented an efficient beam search algorithm that also handles batches with batch_size > 1 (see the second sketch after this list).
  • Unknown words replacement.
    This meta-algorithm can be used with any attention-based seq2seq model. OOV words in generated summaries are replaced with words from the source article, selected via the attention weights (see the third sketch after this list).
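
As a concrete illustration of the three alignment methods, here is a minimal PyTorch module in the style of Luong et al.'s dot, general, and concat scores; the class name, method labels, and tensor shapes are illustrative assumptions, not the toolkit's actual API:

    import torch
    import torch.nn as nn

    class AttentionScore(nn.Module):
        # Sketch of three alignment methods for attention scores.
        # dec_h: (batch, hidden) decoder state; enc_hs: (batch, src_len, hidden) encoder states.
        def __init__(self, hidden, method="concat"):
            super().__init__()
            self.method = method
            if method == "general":
                self.W = nn.Linear(hidden, hidden, bias=False)
            elif method == "concat":
                self.W = nn.Linear(2 * hidden, hidden, bias=False)
                self.v = nn.Linear(hidden, 1, bias=False)

        def forward(self, dec_h, enc_hs):
            if self.method == "dot":
                scores = torch.bmm(enc_hs, dec_h.unsqueeze(2)).squeeze(2)
            elif self.method == "general":
                scores = torch.bmm(enc_hs, self.W(dec_h).unsqueeze(2)).squeeze(2)
            else:  # concat
                dec = dec_h.unsqueeze(1).expand_as(enc_hs)
                scores = self.v(torch.tanh(self.W(torch.cat([enc_hs, dec], dim=2)))).squeeze(2)
            return torch.softmax(scores, dim=1)  # attention weights over source positions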
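
The batched beam search can be summarized by its core selection step. This second sketch assumes next-token log-probabilities are already computed per beam; it is not the toolkit's actual implementation:

    import torch

    def beam_step(log_probs, beam_scores, beam_width):
        # One step of batched beam search.
        # log_probs: (batch, beam, vocab) next-token log-probabilities.
        # beam_scores: (batch, beam) cumulative scores of the current hypotheses.
        batch, beam, vocab = log_probs.size()
        total = beam_scores.unsqueeze(2) + log_probs        # (batch, beam, vocab)
        # Flatten beams and vocabulary together, then keep the top candidates per batch element.
        top_scores, top_idx = total.view(batch, -1).topk(beam_width, dim=1)
        beam_idx = top_idx // vocab    # which hypothesis each candidate extends
        token_idx = top_idx % vocab    # which token extends it
        return top_scores, beam_idx, token_idx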
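
Finally, the unknown-words replacement step can be expressed as a small post-processing function. In this third sketch, the <unk> token string and the variable names are assumptions:

    # Sketch: replace <unk> tokens in a decoded summary with the source word
    # that received the highest attention weight at that decoding step.
    def replace_unk(summary_tokens, source_tokens, attn):
        # attn: (tgt_len, src_len) tensor of attention weights.
        out = []
        for t, tok in enumerate(summary_tokens):
            if tok == "<unk>":
                out.append(source_tokens[int(attn[t].argmax())])
            else:
                out.append(tok)
        return out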

Experimental results can be found in our survey paper.

Problems and Todos

  • Some models fail during training after several epochs. For example, on the CNN/Daily Mail dataset:
    concat + temporal
    concat + temporal + attn_decoder
    
  • We have tried to optimize memory usage, but we are still not quite happy with it.
  • Merge the LSTM and GRU decoders.
