A tensorflow implementation of deep clustering for speech separation

This is a tensorflow implementation of the deep clustering paper "Deep clustering: Discriminative embeddings for segmentation and separation". A few examples from the test set can be viewed in visualizationsamples/ and speechsamples/


Requirements

Python 2 and its packages:

  • tensorflow r0.11
  • numpy
  • scikit-learn
  • matplotlib
  • librosa

File documentation

  • Global constants.
  • Transform separate speech files in a directory into a .pkl data set.
  • A class that reads the .pkl data set and generates batches of data for training the net.
  • A class defining the net structure.
  • Train the DC model.
  • Mix two speech signals for testing.
  • Transform a .wav file into chunks of frames to be fed to the model at test time.
  • Visualize the active embedding points using PCA.
  • Take in a two-speaker mixture and separate the speakers.
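The data-preparation step above (speech files into a .pkl data set of frames) can be sketched as follows. This is a minimal illustration, not the repo's actual code: the frame size, spectral features, and data layout here are assumptions, and the real values live in the repo's global constants.

```python
import pickle
import numpy as np

FRAME_SIZE = 256  # hypothetical; the repo's global constants define the real value

def wav_to_frames(signal, frame_size=FRAME_SIZE):
    """Split a 1-D signal into non-overlapping frames and take log magnitude spectra."""
    n_frames = len(signal) // frame_size
    frames = signal[:n_frames * frame_size].reshape(n_frames, frame_size)
    # magnitude spectrum of each frame (rfft keeps only non-negative frequencies)
    return np.log10(np.abs(np.fft.rfft(frames, axis=1)) + 1e-7)

def build_dataset(signals, out_path):
    """Pack per-speaker frame matrices into a pickled list of (speaker_id, frames)."""
    data = [(spk, wav_to_frames(sig)) for spk, sig in signals]
    with open(out_path, "wb") as f:
        pickle.dump(data, f)
    return data
```

The batch-generator class then shuffles and slices these pickled frame matrices into fixed-size chunks for training.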

Training procedure

  1. Organize your speech data files in the following layout: rootdir/speakerid/speech_files.wav
  2. Update the data directory in the data-preparation script and run it; you may want to rename the resulting .pkl file appropriately.
  3. Make directories for writing summaries and checkpoints, and update those paths in the training script. The .pkl file lists for training and validation also need to be updated.
  4. Train the model.
  5. Generate some test mixtures with the mixing script, and update the checkpoint paths in the separation script.
  6. Enjoy yourself!
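The mixture-generation step can be sketched as below: two signals are truncated to a common length and summed after scaling one of them to a target SNR. This is an illustrative sketch, not the repo's mixing script; the function name and the SNR convention (first signal relative to second) are assumptions.

```python
import numpy as np

def mix_speech(s1, s2, snr_db=0.0):
    """Mix two signals at a target SNR (dB) of s1 relative to s2."""
    n = min(len(s1), len(s2))
    s1, s2 = s1[:n].astype(float), s2[:n].astype(float)
    # scale s2 so the power ratio s1/s2 matches the requested SNR
    p1 = np.mean(s1 ** 2)
    p2 = np.mean(s2 ** 2)
    s2 = s2 * np.sqrt(p1 / (p2 * 10 ** (snr_db / 10.0)))
    mix = s1 + s2
    return mix, s1, s2
```

Keeping the scaled sources alongside the mixture is convenient for computing separation metrics on the test set.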

Some other things

The optimizer is not the same as in the original paper, and no 3-speaker mixture generator is provided; we are moving on to the next stage of our work and will not implement it. If you are interested and implement it, we would be glad to merge your branch.
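For readers new to the method: at test time, deep clustering separates speakers by running k-means on the per-time-frequency-bin embeddings produced by the network, then using the cluster assignments as binary masks on the mixture spectrogram. A minimal sketch using scikit-learn (function name and array shapes are illustrative, not the repo's actual API):

```python
import numpy as np
from sklearn.cluster import KMeans

def separate_by_clustering(mix_spec, embeddings, n_speakers=2):
    """Cluster per-bin embeddings and mask the mixture spectrogram.

    mix_spec:    (T, F) magnitude spectrogram of the mixture
    embeddings:  (T, F, D) embedding vectors produced by the network
    """
    T, F, D = embeddings.shape
    labels = KMeans(n_clusters=n_speakers, n_init=10, random_state=0).fit_predict(
        embeddings.reshape(T * F, D))
    # each cluster yields a binary mask; the masks partition the T-F bins
    masks = [(labels == k).reshape(T, F) for k in range(n_speakers)]
    return [mix_spec * m for m in masks]
```

Because the binary masks partition the time-frequency plane, the masked spectrograms sum back to the mixture; the separated signals are then resynthesized with the mixture phase.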
