DanceNet - Dance generator using Variational Autoencoder, LSTM and Mixture Density Network. (Keras)

License: MIT | Run on FloydHub | DOI

Main components:

  • Variational autoencoder
  • LSTM + Mixture Density Layer
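
The LSTM + mixture-density pairing is the generative core: at each step the LSTM predicts the parameters of a Gaussian mixture over the next VAE latent vector. Below is a minimal Keras sketch of such a head with an isotropic-Gaussian negative log-likelihood loss; the latent size, sequence length, mixture count, and layer widths are illustrative assumptions, not values taken from this repository.

```python
import numpy as np
from keras import backend as K
from keras.layers import Input, LSTM, Dense, concatenate
from keras.models import Model

LATENT_DIM = 128  # size of the VAE latent vector (assumed)
N_MIXES = 24      # number of Gaussian mixture components (assumed)
SEQ_LEN = 30      # length of the input latent sequence (assumed)

inputs = Input(shape=(SEQ_LEN, LATENT_DIM))
h = LSTM(512, return_sequences=True)(inputs)
h = LSTM(512)(h)

# Mixture density head: per-component means, scales, and mixing weights.
mu = Dense(N_MIXES * LATENT_DIM)(h)
sigma = Dense(N_MIXES, activation=K.exp)(h)   # exp keeps scales positive
pi = Dense(N_MIXES, activation='softmax')(h)  # weights sum to 1
model = Model(inputs, concatenate([mu, sigma, pi]))

def mdn_loss(y_true, y_pred):
    """Negative log-likelihood of y_true under the predicted mixture."""
    mu = K.reshape(y_pred[:, :N_MIXES * LATENT_DIM], (-1, N_MIXES, LATENT_DIM))
    sigma = y_pred[:, N_MIXES * LATENT_DIM:N_MIXES * (LATENT_DIM + 1)]
    pi = y_pred[:, -N_MIXES:]
    # Log-density of each isotropic Gaussian component.
    sq_dist = K.sum(K.square(K.expand_dims(y_true, 1) - mu), axis=2)
    log_comp = -0.5 * (sq_dist / K.square(sigma)
                       + LATENT_DIM * K.log(2 * np.pi * K.square(sigma)))
    # Log-sum-exp over components, weighted by the mixing coefficients.
    return -K.mean(K.logsumexp(K.log(pi + 1e-8) + log_comp, axis=1))

model.compile(optimizer='adam', loss=mdn_loss)
```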

Requirements:

  • Python version = 3.5.2

Packages:

  • keras==2.2.0
  • sklearn==0.19.1
  • numpy==1.14.3
  • opencv-python==3.4.1

Dataset

This is the video used for training: https://www.youtube.com/watch?v=NdSqAAT28v0
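
To turn the video into training data, frames must be extracted into the imgs/ folder used by the training steps below. A minimal sketch with OpenCV follows; the file name dance_video.mp4 and the frame size are assumptions, so match them to your download and to whatever model.py expects.

```python
import os
import cv2

cap = cv2.VideoCapture('dance_video.mp4')  # downloaded training video (assumed name)
os.makedirs('imgs', exist_ok=True)

count = 1
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    # Resize to the autoencoder's input resolution (208x120 is an
    # assumption -- use whatever model.py expects).
    frame = cv2.resize(frame, (208, 120))
    cv2.imwrite('imgs/{}.jpg'.format(count), frame)
    count += 1
cap.release()
```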

How to run locally

  • Download the trained weights from here and extract them to the dancenet directory.
  • Run dancegen.ipynb

How to run in your browser

Run on FloydHub

  • Click the button above to open this code in a FloydHub workspace (the trained weights dataset will be automatically attached to the environment)
  • Run dancegen.ipynb

Training from scratch

  • Fill the imgs/ folder with dance sequence images labeled as 1.jpg, 2.jpg, ... (a frame-extraction sketch is given under Dataset above).
  • Run model.py.
  • Run gen_lv.py to encode the images.
  • Run video_from_lv.py to test the decoded video.
  • Run the jupyter notebook dancegen.ipynb to train dancenet and generate a new video (see the sampling sketch after this list).
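
Generation in dancegen.ipynb amounts to repeatedly sampling the next latent vector from the predicted mixture, feeding it back into the LSTM, and decoding each latent with the VAE decoder. Here is a minimal sampling sketch matching the parameter layout of the MDN sketch above (an assumption, not the notebook's exact code):

```python
import numpy as np

def sample_from_mdn(params, n_mixes, latent_dim):
    """Draw one latent vector from a flat (mu | sigma | pi) parameter vector."""
    mu = params[:n_mixes * latent_dim].reshape(n_mixes, latent_dim)
    sigma = params[n_mixes * latent_dim:n_mixes * (latent_dim + 1)]
    pi = params[-n_mixes:]
    pi = pi / pi.sum()                   # renormalize against float round-off
    k = np.random.choice(n_mixes, p=pi)  # pick a mixture component
    return mu[k] + sigma[k] * np.random.randn(latent_dim)

# Autoregressive generation loop (decoder usage assumed, names hypothetical):
# seq = seed latent sequence of shape (1, SEQ_LEN, LATENT_DIM)
# for _ in range(n_frames):
#     z = sample_from_mdn(model.predict(seq)[0], N_MIXES, LATENT_DIM)
#     frame = vae_decoder.predict(z[None, :])
#     seq = np.concatenate([seq[:, 1:], z[None, None, :]], axis=1)
```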
