pointer_summarizer

by atulkum

pytorch implementation of "Get To The Point: Summarization with Pointer-Generator Networks"



  1. Training with pointer generation and coverage loss enabled
  2. Training with pointer generation enabled
  3. How to run training
  4. Papers using this code

Training with pointer generation and coverage loss enabled

After training for 100k iterations with coverage loss enabled (batch size 8):

ROUGE-1:
rouge_1_f_score: 0.3907 with confidence interval (0.3885, 0.3928)
rouge_1_recall: 0.4434 with confidence interval (0.4410, 0.4460)
rouge_1_precision: 0.3698 with confidence interval (0.3672, 0.3721)

ROUGE-2:
rouge_2_f_score: 0.1697 with confidence interval (0.1674, 0.1720)
rouge_2_recall: 0.1920 with confidence interval (0.1894, 0.1945)
rouge_2_precision: 0.1614 with confidence interval (0.1590, 0.1636)

ROUGE-L:
rouge_l_f_score: 0.3587 with confidence interval (0.3565, 0.3608)
rouge_l_recall: 0.4067 with confidence interval (0.4042, 0.4092)
rouge_l_precision: 0.3397 with confidence interval (0.3371, 0.3420)
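The coverage loss here is the mechanism from See et al. (2017): it penalizes attention that repeatedly lands on the same source positions. As a rough illustration, one decoder step looks like the following minimal PyTorch sketch (variable names are hypothetical and not taken from this repository's code):

```python
import torch

def coverage_loss_step(attn_dist, coverage):
    """One decoder step of the coverage mechanism (illustrative sketch).

    attn_dist: (batch, src_len) attention distribution at the current step
    coverage:  (batch, src_len) running sum of attention from previous steps
    """
    # Penalize re-attending to source positions that are already covered.
    step_loss = torch.sum(torch.min(attn_dist, coverage), dim=1)  # (batch,)
    coverage = coverage + attn_dist                               # update coverage
    return step_loss, coverage
```

The total training loss is then the usual negative log-likelihood plus this term scaled by a coverage-loss weight.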


Training with pointer generation enabled

After training for 500k iterations (batch size 8):

ROUGE-1:
rouge_1_f_score: 0.3500 with confidence interval (0.3477, 0.3523)
rouge_1_recall: 0.3718 with confidence interval (0.3693, 0.3745)
rouge_1_precision: 0.3529 with confidence interval (0.3501, 0.3555)

ROUGE-2:
rouge_2_f_score: 0.1486 with confidence interval (0.1465, 0.1508)
rouge_2_recall: 0.1573 with confidence interval (0.1551, 0.1597)
rouge_2_precision: 0.1506 with confidence interval (0.1483, 0.1529)

ROUGE-L:
rouge_l_f_score: 0.3202 with confidence interval (0.3179, 0.3225)
rouge_l_recall: 0.3399 with confidence interval (0.3374, 0.3426)
rouge_l_precision: 0.3231 with confidence interval (0.3205, 0.3256)
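Pointer generation means the decoder mixes a generation distribution over the fixed vocabulary with a copy distribution over source tokens, weighted by a learned probability p_gen. A self-contained PyTorch sketch of that mixing step (hypothetical names, not this repository's exact code):

```python
import torch

def final_distribution(vocab_dist, attn_dist, p_gen, enc_input_ext, extra_zeros):
    """Combine generating from the vocabulary with copying from the source.

    vocab_dist:    (batch, vocab_size) softmax over the fixed vocabulary
    attn_dist:     (batch, src_len) attention over source token positions
    p_gen:         (batch, 1) generation probability in [0, 1]
    enc_input_ext: (batch, src_len) LongTensor of source ids in the extended
                   vocabulary (in-article OOV words get ids >= vocab_size)
    extra_zeros:   (batch, max_article_oovs) zeros giving OOV ids a slot
    """
    gen_dist = p_gen * vocab_dist
    copy_dist = (1.0 - p_gen) * attn_dist
    # Extend the vocabulary distribution so copied OOV words have probability mass.
    extended = torch.cat([gen_dist, extra_zeros], dim=1)
    # Add copy probabilities onto the ids of the tokens appearing in the source.
    return extended.scatter_add(1, enc_input_ext, copy_dist)
```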


How to run training:

1) Follow the data generation instructions at https://github.com/abisee/cnn-dailymail
2) Adjust the paths and parameters in data_util/config.py as needed
3) Run start_train.sh for training, start_decode.sh for decoding, and run_eval.sh for evaluation
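For orientation, the values you will most likely need to edit in data_util/config.py are the data paths and a few training switches. The excerpt below is illustrative only; check field names and defaults against the actual file:

```python
# Illustrative excerpt in the spirit of data_util/config.py -- verify against the repo.
import os

root_dir = os.path.expanduser("~")

# Chunked binary files produced by the abisee/cnn-dailymail preprocessing step.
train_data_path = os.path.join(root_dir, "data/finished_files/chunked/train_*")
eval_data_path = os.path.join(root_dir, "data/finished_files/chunked/val_*")
decode_data_path = os.path.join(root_dir, "data/finished_files/chunked/test_*")
vocab_path = os.path.join(root_dir, "data/finished_files/vocab")
log_root = os.path.join(root_dir, "pointer_summarizer/log")

batch_size = 8        # matches the experiments reported above
pointer_gen = True    # enable copying from the source article
is_coverage = False   # set True for the coverage-loss experiment
```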

Note:

  • In decode mode, the beam search batch should contain only one example, replicated to the batch size (see the sketch after this list): https://github.com/atulkum/pointer_summarizer/blob/master/training_ptr_gen/decode.py#L109 and https://github.com/atulkum/pointer_summarizer/blob/master/data_util/batcher.py#L226

  • It is tested with PyTorch 0.4 and Python 2.7

  • You need to set up pyrouge to compute the ROUGE scores
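Regarding the first note above: in decode mode the batcher fills a batch with beam_size copies of a single article, so that every row can carry one beam hypothesis during search. A schematic of that replication (hypothetical helper, not the repository's Batcher API):

```python
import torch

def replicate_for_beam_search(enc_input, beam_size):
    """Build a decode-mode 'batch' from a single example.

    enc_input: (1, src_len) LongTensor holding one article's token ids
    Returns a (beam_size, src_len) tensor whose rows are identical copies,
    one per beam hypothesis.
    """
    return enc_input.expand(beam_size, -1).contiguous()
```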

Papers using this code:

1) Automatic Program Synthesis of Long Programs with a Learned Garbage Collector: https://github.com/amitz25/PCCoder
2) Automatic Fact-guided Sentence Modification: https://github.com/darsh10/split_encoder_pointer_summarizer
3) Resurrecting Submodularity in Neural Abstractive Summarization
4) StructSum: Incorporating Latent and Explicit Sentence Dependencies for Single Document Summarization
5) Concept Pointer Network for Abstractive Summarization: https://github.com/wprojectsn/codes
6) VAE-PGN based Abstractive Model in Multi-stage Architecture for Text Summarization
