
A Wavenet For Speech Denoising

A neural network for end-to-end speech denoising, as described in: "A Wavenet For Speech Denoising"

Listen to denoised samples under varying noise conditions and SNRs here

Installation

It is recommended to use a virtual environment.

  1. git clone https://github.com/drethage/speech-denoising-wavenet.git
  2. pip install -r requirements.txt
  3. Install pygpu

Currently the project requires **Keras 1.2** and **Theano 0.9.0**. The large dilations present in the architecture are not supported by the current version of TensorFlow (1.2.0).
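
If the environment is set up correctly, the pinned framework versions can be verified from a Python shell. This is just a quick sanity check, not part of the project's scripts:

```python
# Quick sanity check that the required framework versions are installed.
# Version expectations are taken from the note above (Keras 1.2, Theano 0.9.0).
import keras
import theano

print('Keras:', keras.__version__)    # expected: 1.2.x
print('Theano:', theano.__version__)  # expected: 0.9.0
```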

Usage

A pre-trained model (best-performing model described in the paper) can be found in sessions/001/models and is ready to be used out-of-the-box. The parameterization of this model is specified in sessions/001/config.json.
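
To see exactly how the pre-trained model is parameterized, the config file can be inspected with the standard library. A minimal sketch; the specific keys it contains depend on config.json and are documented in config.md:

```python
# Print the parameterization of the pre-trained model shipped with the repository.
import json

with open('sessions/001/config.json') as f:
    config = json.load(f)

print(json.dumps(config, indent=2, sort_keys=True))
```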

Download the dataset as described below.

Denoising:

Example:

THEANO_FLAGS=optimizer=fast_compile,device=gpu python main.py --mode inference --config sessions/001/config.json --noisy_input_path data/NSDTSEA/noisy_testset_wav --clean_input_path data/NSDTSEA/clean_testset_wav
Speedup

To achieve faster denoising, one can increase the target-field length via the optional --target_field_length argument. This defines the number of samples that are denoised in a single forward propagation, saving redundant calculations. In the following example, it is increased to 10x the value used when the model was trained, and the batch_size is reduced to 4.

Faster Example:

THEANO_FLAGS=device=gpu python main.py --mode inference --target_field_length 16001 --batch_size 4 --config sessions/001/config.json --noisy_input_path data/NSDTSEA/noisy_testset_wav --clean_input_path data/NSDTSEA/clean_testset_wav
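
A rough back-of-envelope illustration of why this helps (hypothetical numbers; only the 10x relationship comes from the note above): with a larger target field, each forward propagation denoises more samples, so fewer propagations are needed per clip.

```python
# Illustrative arithmetic only -- not code from the repository.
# Assumes 16 kHz audio; the 10x relationship comes from the note above.
import math

def windows_needed(clip_samples, target_field_length):
    # Each window of target_field_length samples is denoised in one forward propagation.
    return math.ceil(clip_samples / target_field_length)

ten_second_clip = 10 * 16000                   # 160,000 samples
print(windows_needed(ten_second_clip, 1600))   # hypothetical training-time target field (~16001 / 10)
print(windows_needed(ten_second_clip, 16001))  # the faster example above: ~10x fewer forward propagations
```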

Training:

THEANO_FLAGS=device=gpu python main.py --mode training --config config.json

Configuration

A detailed description of all configurable parameters can be found in config.md.

Optional command-line arguments:

| Argument | Valid Inputs | Default | Description |
| --- | --- | --- | --- |
| mode | [training, inference] | training | |
| config | string | config.json | Path to JSON-formatted config file |
| print_model_summary | bool | False | Prints verbose summary of the model |
| load_checkpoint | string | None | Path to hdf5 file containing a snapshot of model weights |

Additional arguments during inference:

| Argument | Valid Inputs | Default | Description |
| --- | --- | --- | --- |
| one_shot | bool | False | Denoises each audio file in a single forward propagation |
| target_field_length | int | as defined in config.json | Overrides parameter in config.json for denoising with different target-field lengths than used in training |
| batch_size | int | as defined in config.json | # of samples per batch |
| condition_value | int | 1 | Corresponds to speaker identity |
| clean_input_path | string | None | If supplied, SNRs of denoised samples are computed |
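
For reference, an SNR between a clean reference and a denoised output can be computed along these lines. This is a generic textbook definition shown for illustration; the exact computation performed by main.py may differ:

```python
# Generic signal-to-noise ratio in dB between a clean reference and a denoised estimate.
# Illustrative only; not taken from main.py.
import numpy as np

def snr_db(clean, denoised):
    clean = np.asarray(clean, dtype=np.float64)
    denoised = np.asarray(denoised, dtype=np.float64)
    residual = clean - denoised                      # noise remaining after denoising
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(residual ** 2))
```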

Dataset

The "Noisy speech database for training speech enhancement algorithms and TTS models" (NSDTSEA) is used for training the model. It is provided by the University of Edinburgh, School of Informatics, Centre for Speech Technology Research (CSTR).

  1. Download here
  2. Extract to data/NSDTSEA
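
After extraction, the directories referenced by the inference commands above should be in place. A quick check (illustrative, not part of the repository):

```python
# Verify that the test-set directories used by the inference examples exist.
import os

for subdir in ['clean_testset_wav', 'noisy_testset_wav']:
    path = os.path.join('data', 'NSDTSEA', subdir)
    print(path, '->', 'found' if os.path.isdir(path) else 'MISSING')
```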
