Structured Self-Attentive Sentence Embeddings

Implementation of the paper A Structured Self-Attentive Sentence Embedding, published at ICLR 2017: https://arxiv.org/abs/1703.03130.
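The core of the model is the structured self-attention block from the paper: an attention matrix A = softmax(W_s2 tanh(W_s1 H^T)) is computed over the BiLSTM hidden states H, and the sentence embedding is M = AH. Below is a minimal PyTorch sketch of that block, not the repo's exact code; d_a = 350 and r = 30 are the paper's defaults.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """A = softmax(W_s2 tanh(W_s1 H^T)); M = A H (Lin et al., 2017)."""
    def __init__(self, hidden_dim, d_a=350, r=30):
        super().__init__()
        self.w_s1 = nn.Linear(hidden_dim, d_a, bias=False)  # W_s1: d_a x 2u
        self.w_s2 = nn.Linear(d_a, r, bias=False)           # W_s2: r x d_a

    def forward(self, h):
        # h: (batch, seq_len, hidden_dim) -- BiLSTM outputs
        scores = self.w_s2(torch.tanh(self.w_s1(h)))      # (batch, seq_len, r)
        a = F.softmax(scores, dim=1).transpose(1, 2)      # (batch, r, seq_len): r hops
        m = torch.bmm(a, h)                               # (batch, r, hidden_dim)
        return m, a
```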

USAGE:

For binary sentiment classification on the IMDB dataset, run:

```
python classification.py "binary"
```

For multiclass classification on the Reuters dataset, run:

```
python classification.py "multiclass"
```

Model parameters can be changed in the `model_params.json` file. Other training parameters, such as the number of attention hops, can be configured in the `config.json` file.
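A rough sketch of how the two files might be consumed is shown below; the key names are illustrative assumptions, not necessarily the repo's exact schema.

```python
import json

# Key names below are illustrative assumptions, not the repo's exact schema.
with open("model_params.json") as f:
    model_params = json.load(f)        # e.g. hidden size, d_a, output classes

with open("config.json") as f:
    config = json.load(f)              # e.g. attention hops, epochs, clip value

r = config.get("attention_hops", 30)   # hypothetical key; 30 is the paper's default
```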

To use pretrained GloVe embeddings, set the `use_embeddings` parameter to `"True"` (the default is `False`). Do not forget to download `glove.6B.50d.txt` and place it in the `glove` folder.
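Loading the GloVe file into an embedding matrix could look like the sketch below; `word2idx` is an assumed vocabulary mapping produced by the data pipeline, and words missing from GloVe keep a random initialization.

```python
import numpy as np

def load_glove(word2idx, path="glove/glove.6B.50d.txt", dim=50):
    """Build an embedding matrix from GloVe vectors (sketch).
    word2idx: assumed {token: row index} vocabulary mapping."""
    weights = np.random.uniform(-0.25, 0.25, (len(word2idx), dim)).astype("float32")
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word = parts[0]
            if word in word2idx:
                weights[word2idx[word]] = np.asarray(parts[1:], dtype="float32")
    return weights
```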

Implemented:

  • Classification using self-attention
  • Regularization using Frobenius norm
  • Gradient clipping
  • Visualizing the attention weights

Instead of pruning, averaging over the attention hops of the sentence embedding is used (see the sketch below).
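The sketch below covers the regularizer and the two training details from the list above: the penalty is the paper's ||A A^T − I||_F^2 term, averaging replaces pruning, and `clip_grad_norm_` is the standard PyTorch utility (the clip value shown is an assumption, not necessarily the repo's setting).

```python
import torch

def frobenius_penalty(a):
    """||A A^T - I||_F^2, averaged over the batch.
    a: (batch, r, seq_len) attention matrix from the block above."""
    identity = torch.eye(a.size(1), device=a.device).unsqueeze(0)
    aat = torch.bmm(a, a.transpose(1, 2))                 # (batch, r, r)
    return ((aat - identity) ** 2).sum(dim=(1, 2)).mean()

# Averaging over the r attention hops instead of pruning:
#   m: (batch, r, hidden_dim)  ->  sentence vector: (batch, hidden_dim)
#   sentence_vec = m.mean(dim=1)

# Gradient clipping at the update step (0.5 is an assumed value):
#   torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.5)
```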

Visualization:

After training, the model is tested on 100 test points. Attention weights for these 100 points are retrieved and used to visualize the attention over the text as heatmaps. A file `visualization.html` is saved in the `visualization/` folder after successful training. The visualization code was provided by Zhouhan Lin (@hantek). Many thanks.
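The shipped visualization code is Zhouhan Lin's; purely as an illustration of the idea, a heatmap over tokens can be rendered by scaling a background colour with each token's attention weight, as in this sketch.

```python
def render_heatmap(tokens_list, weights_list, path="visualization/visualization.html"):
    """Illustrative sketch only, not the repo's visualization code.
    tokens_list: list of token lists; weights_list: matching attention weights in [0, 1]."""
    paragraphs = []
    for tokens, weights in zip(tokens_list, weights_list):
        line = "".join(
            f'<span style="background: rgba(255,0,0,{w:.2f})">{t} </span>'
            for t, w in zip(tokens, weights)
        )
        paragraphs.append(f"<p>{line}</p>")
    with open(path, "w", encoding="utf-8") as f:  # assumes visualization/ exists
        f.write("<html><body>" + "\n".join(paragraphs) + "</body></html>")
```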

Below is a screenshot of the visualization on a few data points.

Training accuracy: 93.4%. Test accuracy on 1000 points: 90.2%.

