neural-combinatorial-optimization-rl-tensorflow

by MichelDeudon

Neural Combinatorial Optimization with Reinforcement Learning

145 Stars 50 Forks Last release: Not found MIT License 94 Commits 0 Releases


Neural Combinatorial Optimization with RL

TensorFlow implementation of:

  • Neural Combinatorial Optimization with Reinforcement Learning (Bello I., Pham H., Le Q. V., Norouzi M., Bengio S.), for the TSP with Time Windows (TSP-TW);

  • Learning Heuristics for the TSP by Policy Gradient (Deudon M., Cournut P., Lacoste A., Adulyasak Y., Rousseau L.-M.), for the Traveling Salesman Problem (TSP) (final release here).

(figure: model architecture)

The neural network consists of an RNN or self-attentive encoder-decoder, with an attention module connecting the decoder to the encoder (via a "pointer"). The model is trained by policy gradient (REINFORCE, Williams, 1992).
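As a minimal illustration of the REINFORCE idea (a toy sketch, not this repository's code), the snippet below trains a softmax policy over three fixed candidate tours to prefer the shortest one; the expected tour length under the current policy serves as the variance-reducing baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy policy: softmax logits over 3 fixed candidate tours.
theta = np.zeros(3)
tour_lengths = np.array([3.0, 1.0, 2.0])  # tour 1 is the shortest

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(3, p=p)                  # sample a tour from the policy
    baseline = p @ tour_lengths             # expected length (variance reduction)
    advantage = tour_lengths[a] - baseline  # < 0 for better-than-average tours
    grad_logp = -p                          # gradient of log p(a) w.r.t. theta...
    grad_logp[a] += 1.0                     # ...for a softmax policy
    theta -= 0.1 * advantage * grad_logp    # descend: minimize expected length
```

After training, the policy concentrates its probability mass on the shortest tour; the actual model applies the same update to the pointer decoder's sequence log-probabilities.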

Requirements

Architecture

(work in progress)

Usage

TSP

  • To train a (2D TSP20) model from scratch (data is generated on the fly):

    ```
    python main.py --max_length=20 --inference_mode=False --restore_model=False --save_to=20/model --log_dir=summary/20/repo
    ```

NB: Just make sure `./save/20/model` exists (otherwise, create the folder).
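"Data is generated on the fly" means each training batch is simply a fresh set of random 2D instances. A hedged sketch (hypothetical helper names, mirroring the papers' setup of cities drawn uniformly from the unit square):

```python
import numpy as np

def random_tsp_batch(batch_size=128, n_cities=20, seed=0):
    """Cities drawn uniformly from the unit square, one instance per batch row."""
    rng = np.random.default_rng(seed)
    return rng.random((batch_size, n_cities, 2))

def tour_length(coords, tour):
    """Length of the closed tour visiting coords in the given order."""
    ordered = np.asarray(coords)[np.asarray(tour)]
    return np.linalg.norm(ordered - np.roll(ordered, -1, axis=0), axis=1).sum()

batch = random_tsp_batch()                       # shape (128, 20, 2)
length = tour_length(batch[0], np.arange(20))    # length of the identity tour
```

The negative tour length is the reward signal fed to the policy-gradient update.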

  • To visualize training on tensorboard:

    ```
    tensorboard --logdir=summary/20/repo
    ```

  • To test a trained model:

    ```
    python main.py --max_length=20 --inference_mode=True --restore_model=True --restore_from=20/model
    ```

TSP-TW

  • To pretrain a (2D TSPTW20) model with infinite travel speed from scratch:

    ```
    python main.py --inference_mode=False --pretrain=True --restore_model=False --speed=1000. --beta=3 --save_to=speed1000/n20w100 --log_dir=summary/speed1000/n20w100
    ```

  • To fine-tune a (2D TSPTW20) model with finite travel speed:

    ```
    python main.py --inference_mode=False --pretrain=False --kNN=5 --restore_model=True --restore_from=speed1000/n20w100 --speed=10.0 --beta=3 --save_to=speed10/s10k5_n20w100 --log_dir=summary/speed10/s10k5_n20w100
    ```

NB: Just make sure the `save_to` folders exist (otherwise, create them).

  • To visualize training on tensorboard:

    ```
    tensorboard --logdir=summary/speed1000/n20w100
    tensorboard --logdir=summary/speed10/s10k5_n20w100
    ```
    
  • To test a trained model with finite travel speed on Dumas instances (in the benchmark folder):

    ```
    python main.py --inference_mode=True --restore_model=True --restore_from=speed10/s10k5_n20w100 --speed=10.0
    ```

Results

TSP

Sampling 128 permutations with the Self-Attentive Encoder + Pointer Decoder:

  • Comparison to Google OR-Tools on 1000 TSP20 instances: (predicted tour length) = 0.9983 * (target tour length)

(figure: Self_Net_TSP20, a sampled TSP20 tour)
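The best-of-128 evaluation can be sketched as follows; random permutations stand in for samples from the trained decoder (a hypothetical sketch, not the repository's code):

```python
import numpy as np

rng = np.random.default_rng(1)
coords = rng.random((20, 2))  # one random TSP20 instance

def tour_length(coords, tour):
    """Length of the closed tour visiting coords in the given order."""
    ordered = np.asarray(coords)[np.asarray(tour)]
    return np.linalg.norm(ordered - np.roll(ordered, -1, axis=0), axis=1).sum()

# Stand-in for the decoder: draw 128 candidate permutations, keep the shortest.
samples = [rng.permutation(20) for _ in range(128)]
lengths = [tour_length(coords, p) for p in samples]
best = samples[int(np.argmin(lengths))]
best_length = min(lengths)
```

In the real model the 128 permutations come from the stochastic pointer decoder, so they concentrate near good tours rather than being uniform.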

TSP-TW

Sampling 256 permutations with the RNN Encoder + Pointer Decoder, followed by a 2-opt post-processing on the best tour:

  • Dumas instance n20w100.001 (figure: tsptw1)

  • Dumas instance n20w100.003 (figure: tsptw2)
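The 2-opt post-processing mentioned above can be sketched in plain Python (a minimal sketch, not the repository's implementation): repeatedly reverse a segment of the tour whenever doing so shortens it, until no improving reversal remains.

```python
import numpy as np

def tour_length(coords, tour):
    """Length of the closed tour visiting coords in the given order."""
    ordered = np.asarray(coords)[np.asarray(tour)]
    return np.linalg.norm(ordered - np.roll(ordered, -1, axis=0), axis=1).sum()

def two_opt(coords, tour):
    """Greedy 2-opt: reverse segments while that strictly shortens the tour.

    The city at position 0 stays fixed, which is fine since tours are cyclic.
    """
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(coords, candidate) + 1e-12 < tour_length(coords, tour):
                    tour, improved = candidate, True
    return tour

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
untangled = two_opt(square, [0, 2, 1, 3])  # crossing tour gets uncrossed
```

On the unit square, the crossing tour 0-2-1-3 is replaced by the perimeter tour of length 4, the optimum for this instance.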

Authors

Michel Deudon / @mdeudon

Pierre Cournut / @pcournut

References

Bello, I., Pham, H., Le, Q. V., Norouzi, M., & Bengio, S. (2016). Neural combinatorial optimization with reinforcement learning. arXiv preprint arXiv:1611.09940.
