# Download celebA-HQ

Python script to download (from Google Drive) and create the celebA-HQ dataset.
**WARNING from the author.** I believe this script has been broken for a few months (I have not tried it for a while). I am really sorry about that. If you fix it, please share your solution in a PR so that everyone can benefit from it.
To get the celebA-HQ dataset, you need to:
a) download the celebA dataset with `download_celebA.py`,
b) download some extra files with `download_celebA_HQ.py`,
c) do some processing to get the HQ images.
The size of the final dataset is 89 GB. However, you will need a bit more free storage than that to be able to run the scripts.
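Since the scripts run for hours, it can be worth checking up front that the target drive actually has room. The helper below is a hypothetical pre-flight check (not part of this repo); the 100 GB margin is an assumption on top of the 89 GB dataset.

```python
import shutil

REQUIRED_GB = 100  # assumption: safety margin on top of the 89 GB dataset

def enough_space(path=".", required_gb=REQUIRED_GB):
    """Return True if the filesystem holding `path` has at least `required_gb` free."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= required_gb
```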
1) Clone the repository
```
git clone https://github.com/nperraud/download-celebA-HQ.git
cd download-celebA-HQ
```
2) Install the necessary packages (because specific versions are required, Conda is recommended)
* Install miniconda: https://conda.io/miniconda.html
* Create a new environment:
```
conda create -n celebaHQ python=3
source activate celebaHQ
```
* Install the packages:
```
conda install jpeg=8d tqdm requests pillow==3.1.1 urllib3 numpy cryptography scipy
pip install opencv-python==220.127.116.11 cryptography==2.1.4
```
* Install 7zip (on Ubuntu):
```
sudo apt-get install p7zip-full
```
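The processing step shells out to 7zip, so a quick check that the binary is actually on the `PATH` can save an aborted run. This is a hypothetical sanity check, not part of the repo's scripts:

```python
import shutil

def has_7zip():
    """Return True if the `7z` executable is available on the PATH."""
    return shutil.which("7z") is not None
```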
3) Run the scripts
```
python download_celebA.py ./
python download_celebA_HQ.py ./
python make_HQ_images.py ./
```
where `./` is the directory where you wish the data to be saved.
- Go watch a movie: these scripts will take a few hours to run, depending on your internet connection and your CPU power. The final HQ images will be saved as `.npy` files in the specified directory.
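The `.npy` outputs can be read back with `np.load`. The snippet below is a hedged sketch using a synthetic array in place of a real output file; the channel-first `(3, 1024, 1024)` layout is an assumption about the saved format, not a guarantee.

```python
import os
import tempfile
import numpy as np

# Stand-in for one of the generated HQ images (layout is an assumption).
path = os.path.join(tempfile.gettempdir(), "example_HQ.npy")
fake = np.zeros((3, 1024, 1024), dtype=np.uint8)
np.save(path, fake)

# Reading a saved image back is a single np.load call.
img = np.load(path)
```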
The script may work on Windows, though I have not tested this solution personally.

Step 2 becomes
```
conda create -n celebaHQ python=3
activate celebaHQ
```
The rest should be unchanged.
If you have Docker installed, skip the previous installation steps and run the following command from the root directory of this project:
```
docker build -t celeba . && docker run -it -v $(pwd):/data celeba
```
By default, this will create the dataset in the same directory. To put it elsewhere, replace `$(pwd)` with the absolute path to the desired output directory.
It seems that the dataset has a few outliers. A list of problematic images is stored in `bad_images.txt`. Please report if you find other outliers.
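One way to use that list is to filter the outliers out of your file list before training. This is a hedged sketch, assuming `bad_images.txt` contains one image filename per line:

```python
def load_bad_images(path="bad_images.txt"):
    """Read the outlier list, skipping blank lines (one filename per line assumed)."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def filter_images(filenames, bad):
    """Keep only the filenames that are not listed as outliers."""
    return [f for f in filenames if f not in bad]
```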
This script is likely to break somewhere, but if it executes until the end, you should obtain the correct dataset.
This code is inspired by these files:
* https://github.com/tkarras/progressive_growing_of_gans/blob/master/dataset_tool.py
* https://github.com/andersbll/deeppy/blob/master/deeppy/dataset/celeba.py
* https://github.com/andersbll/deeppy/blob/master/deeppy/dataset/util.py
You probably want to cite the paper "Progressive Growing of GANs for Improved Quality, Stability, and Variation" that was submitted to ICLR 2018 by Tero Karras (NVIDIA), Timo Aila (NVIDIA), Samuli Laine (NVIDIA), Jaakko Lehtinen (NVIDIA and Aalto University).