Deep learning toolbox based on PyTorch for hyperspectral data classification.


A Python tool to perform deep learning experiments on various hyperspectral datasets.


This toolbox was used for our review paper in the IEEE Geoscience and Remote Sensing Magazine:

N. Audebert, B. Le Saux and S. Lefevre, "Deep Learning for Classification of Hyperspectral Data: A Comparative Review," in IEEE Geoscience and Remote Sensing Magazine, vol. 7, no. 2, pp. 159-173, June 2019.

BibTeX format:

@article{8738045,
  author={N. {Audebert} and B. {Le Saux} and S. {Lefèvre}},
  journal={IEEE Geoscience and Remote Sensing Magazine},
  title={Deep Learning for Classification of Hyperspectral Data: A Comparative Review},
  year={2019},
  volume={7},
  number={2},
  pages={159-173},
  doi={10.1109/MGRS.2019.2912563},
  ISSN={2373-7468},
  month={June},
}


This tool is compatible with Python 2.7 and Python 3.5+.

It is based on the PyTorch deep learning and GPU computing framework and uses the Visdom visualization server.


The easiest way to install this code is to create a Python virtual environment and to install dependencies using:

pip install -r requirements.txt

(On Windows, you should use:

pip install -r requirements.txt -f <PyTorch wheel index URL>

passing the URL of the PyTorch wheel index for your platform.)

Alternatively, it is possible to run the Docker image.

Grab the image (replace <image> with the published image name) using:

docker pull <image>

And then run the image using:

docker run -p 9999:8097 -ti --rm -v `pwd`:/workspace/DeepHyperX/ <image>

This command:
* starts a Docker container using the image
* starts an interactive shell session
* mounts the current folder in the /workspace/DeepHyperX/ path of the container
* binds the local port 9999 to the container port 8097 (for Visdom)
* removes the container when the user has finished.

All data and products are stored in the current folder.

Users can also build the Docker image locally using the command:

docker build .

Hyperspectral datasets

Several public hyperspectral datasets are available on the UPV/EHU wiki. Users can download them beforehand or let the tool download them automatically. The tool looks for datasets in a default folder, which can be overridden at runtime with a command-line flag.
At this time, the tool automatically downloads the following public datasets:
* Pavia University
* Pavia Center
* Kennedy Space Center
* Indian Pines
* Botswana

The Data Fusion Contest 2018 hyperspectral dataset is also preconfigured, although users need to download it from the DASE website and store it in the dataset folder under DFC2018_HSI.


An example dataset folder has the following structure:

├── Botswana
│   ├── Botswana_gt.mat
│   └── Botswana.mat
├── DFC2018_HSI
│   ├── 2018_IEEE_GRSS_DFC_GT_TR.tif
│   ├── 2018_IEEE_GRSS_DFC_HSI_TR
│   ├── 2018_IEEE_GRSS_DFC_HSI_TR.aux.xml
├── IndianPines
│   ├── Indian_pines_corrected.mat
│   ├── Indian_pines_gt.mat
├── KSC
│   ├── KSC_gt.mat
│   └── KSC.mat
├── PaviaC
│   ├── Pavia_gt.mat
│   └── Pavia.mat
└── PaviaU
    ├── PaviaU_gt.mat
    └── PaviaU.mat
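Each of these .mat archives is a MATLAB file holding the hyperspectral cube (or its ground truth) under a dataset-specific key. The sketch below illustrates the layout by writing and reading a tiny stand-in for PaviaU.mat with scipy; the key name "paviaU" and the 103-band depth match the Pavia University dataset, but the cube here is fabricated for illustration only.

```python
import numpy as np
from scipy.io import loadmat, savemat

# Fabricate a tiny stand-in for PaviaU.mat (the real file stores a
# 610 x 340 x 103 cube; we use a 10 x 8 spatial extent to keep it small).
cube = np.random.rand(10, 8, 103).astype(np.float32)
savemat("PaviaU.mat", {"paviaU": cube})

# Load it back the way a dataset loader would.
data = loadmat("PaviaU.mat")
img = data["paviaU"]   # hyperspectral cube: (rows, cols, bands)
print(img.shape)       # (10, 8, 103)
```

The ground-truth archives (e.g. PaviaU_gt.mat) follow the same pattern, storing a 2D array of per-pixel class labels.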

Adding a new dataset

Adding a custom dataset can be done by modifying the dataset definition file. Developers should add a new entry to the dataset configuration variable and define a specific data loader for their use case.
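Schematically, such an entry pairs a dataset name with a loader that returns the image cube, the ground truth, the class names, and the labels to ignore. The sketch below is illustrative only: the registry name CUSTOM_DATASETS_CONFIG, the loader signature, and the returned tuple are assumptions, not the toolbox's actual API, and the data is randomly generated.

```python
import numpy as np

def load_my_dataset(folder):
    """Hypothetical loader; replace the random arrays with real file loading
    (e.g. scipy.io.loadmat on files stored under `folder`)."""
    img = np.random.rand(64, 64, 100).astype(np.float32)  # (rows, cols, bands)
    gt = np.random.randint(0, 5, size=(64, 64))           # per-pixel class labels
    label_values = ["undefined", "water", "soil", "vegetation", "road"]
    ignored_labels = [0]  # class 0 marks unlabeled pixels
    return img, gt, label_values, ignored_labels

# Illustrative registry entry keyed by dataset name.
CUSTOM_DATASETS_CONFIG = {
    "MyDataset": {
        "folder": "MyDataset/",
        "loader": load_my_dataset,
    }
}

img, gt, labels, ignored = CUSTOM_DATASETS_CONFIG["MyDataset"]["loader"]("MyDataset/")
```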


Currently, this tool implements several SVM variants from the scikit-learn library and many state-of-the-art deep networks implemented in PyTorch:
* SVM (linear, RBF and poly kernels with grid search)
* SGD (linear SVM using stochastic gradient descent for fast optimization)
* baseline neural network (4 fully connected layers with dropout)
* 1D CNN (Deep Convolutional Neural Networks for Hyperspectral Image Classification, Hu et al., Journal of Sensors 2015)
* Semi-supervised 1D CNN (Autoencodeurs pour la visualisation d'images hyperspectrales, Boulch et al., GRETSI 2017)
* 2D CNN (Hyperspectral CNN for Image Classification & Band Selection, with Application to Face Recognition, Sharma et al., technical report 2018)
* Semi-supervised 2D CNN (A semi-supervised Convolutional Neural Network for Hyperspectral Image Classification, Liu et al., Remote Sensing Letters 2017)
* 3D CNN (3-D Deep Learning Approach for Remote Sensing Image Classification, Hamida et al., TGRS 2018)
* 3D FCN (Contextual Deep CNN Based Hyperspectral Classification, Lee and Kwon, IGARSS 2016)
* 3D CNN (Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks, Chen et al., TGRS 2016)
* 3D CNN (Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network, Li et al., Remote Sensing 2017)
* 3D CNN (HSI-CNN: A Novel Convolution Neural Network for Hyperspectral Image, Luo et al., ICPR 2018)
* Multi-scale 3D CNN (Multi-scale 3D Deep Convolutional Neural Network for Hyperspectral Image Classification, He et al., ICIP 2017)
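To give a flavor of the spectral 1D CNN family (Hu et al. 2015), here is a minimal numpy sketch of the core operation: a 1D convolution applied along the spectral axis of a single pixel. This is purely illustrative; the actual models are implemented in PyTorch with learned kernels.

```python
import numpy as np

def spectral_conv1d(spectrum, kernel):
    """Valid 1D convolution along the spectral axis (no padding)."""
    k = len(kernel)
    return np.array([
        np.dot(spectrum[i:i + k], kernel)
        for i in range(len(spectrum) - k + 1)
    ])

spectrum = np.linspace(0.0, 1.0, 103)  # one pixel's 103-band spectrum
kernel = np.array([0.25, 0.5, 0.25])   # a small (hand-picked) smoothing filter
features = spectral_conv1d(spectrum, kernel)
print(features.shape)                  # (101,)
```

A 1D CNN stacks several such filter banks with nonlinearities and pooling before a classifier, which is what lets it exploit the spectral signature of each pixel.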

Adding a new model

Adding a custom deep network can be done by modifying the models file. This implies creating a new class for the custom deep network and adding it to the model selection logic.
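Schematically, such a dispatch might look like the following sketch. All names here (MODELS, get_model, MyCustomNet) are hypothetical placeholders, not the toolbox's real identifiers, and the class is a stand-in for what would normally be a torch.nn.Module.

```python
class MyCustomNet:
    """Stand-in for a new deep network class (normally a torch.nn.Module)."""
    def __init__(self, n_bands, n_classes):
        self.n_bands = n_bands
        self.n_classes = n_classes

# Illustrative name-to-constructor registry.
MODELS = {
    "mycustomnet": MyCustomNet,
}

def get_model(name, n_bands, n_classes):
    # Look the model up by name and instantiate it.
    return MODELS[name.lower()](n_bands, n_classes)

net = get_model("mycustomnet", n_bands=103, n_classes=9)
```

Registering the class under a string key is what makes it reachable from the command line by name.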


Start a Visdom server:

python -m visdom.server
and go to http://localhost:8097 to see the visualizations (or http://localhost:9999 if you use Docker).

Then, run the main.py script.

The most useful arguments are:
* --model to specify the model (e.g. 'svm', 'nn', 'hamida', 'lee', 'chen', 'li'),
* --dataset to specify which dataset to use (e.g. 'PaviaC', 'PaviaU', 'IndianPines', 'KSC', 'Botswana'),
* the --cuda switch to run the neural nets on GPU. The tool falls back to CPU if this switch is not specified.

There are more parameters that can be used to control the behaviour of the tool more finely. See

python main.py -h

for more information.

Examples:
* python main.py --model SVM --dataset IndianPines --training_sample 0.3
  This runs a grid search on SVM on the Indian Pines dataset, using 30% of the samples for training and the rest for testing. Results are displayed in the Visdom panel.
* python main.py --model nn --dataset PaviaU --training_sample 0.1 --cuda
  This runs on GPU a basic 4-layer fully connected neural network on the Pavia University dataset, using 10% of the samples for training.
* python main.py --model hamida --dataset PaviaU --training_sample 0.5 --patch_size 7 --epoch 50 --cuda
  This runs on GPU the 3D CNN from Hamida et al. on the Pavia University dataset with a patch size of 7, using 50% of the samples for training and optimizing for 50 epochs.
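The --training_sample fraction controls how the labeled pixels are split between training and testing. The numpy sketch below shows what such a random split amounts to; it is illustrative only, not the toolbox's exact sampling code, and uses a fabricated ground-truth map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake ground-truth map: 0 = unlabeled, 1..4 = classes.
gt = rng.integers(0, 5, size=(50, 50))

# Randomly pick 30% of the labeled pixels for training
# (cf. --training_sample 0.3); the rest go to the test set.
labeled = np.flatnonzero(gt.ravel() > 0)
n_train = int(0.3 * len(labeled))
train_idx = rng.choice(labeled, size=n_train, replace=False)
test_idx = np.setdiff1d(labeled, train_idx)

print(len(train_idx), len(test_idx))
```

Unlabeled pixels (class 0 here) are excluded from both sets, so reported accuracies are computed on labeled pixels only.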
