by MingSun-Tse

MingSun-Tse/EfficientDNNs

Collection of recent methods on DNN compression and acceleration

493 Stars · 77 Forks · 134 Commits · MIT License


A collection of recent methods on DNN compression and acceleration. There are mainly five kinds of methods for efficient DNNs:

- neural architecture re-designing or searching
  - maintain accuracy at less cost (e.g., #Params, #FLOPs): MobileNet, ShuffleNet, etc.
  - maintain cost with more accuracy: Inception, ResNeXt, Xception, etc.
- pruning (both structured and unstructured)
- quantization
- matrix decomposition
- knowledge distillation
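As a quick illustration of the pruning category above, here is a minimal sketch of unstructured magnitude pruning in NumPy. The function name and interface are hypothetical (for illustration only), not taken from any paper listed in this collection.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Illustrative sketch (hypothetical API): zero out the fraction
    `sparsity` of weights with the smallest absolute value."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest |w|
    mask = np.abs(weights) > threshold            # keep only larger weights
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
pruned, mask = magnitude_prune(w, sparsity=0.5)
# half of the 16 entries are now exactly zero; the rest are untouched
```

Structured pruning works the same way in spirit, but removes whole filters or channels (so the resulting network is smaller without sparse-kernel support), whereas the element-wise masking above needs sparse execution to yield real speedups.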

About abbreviations: in the list below, icons mark papers presented as oral, workshop, spotlight, or best paper.


NAS (Neural Architecture Search)


Papers-Knowledge Distillation

People (in alphabetical order)

People in NAS (in alphabetical order)


Lightweight DNN Engines/APIs

Related Repos and Websites
