General-purpose (CPU, GPU) and specialized (systolic, many-core, etc.) hardware architectures for deep learning (DL). Calculating the energy consumption of a DL model at different abstraction levels. Optimizing models for implementation: quantization, compression, pruning, and design space exploration. Interactions between DL models and hardware architectures; optimization of hardware accelerator designs. Experimental project focusing on minimizing the energy cost of a deep learning task.
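To make the quantization topic concrete, below is a minimal NumPy sketch of symmetric, per-tensor uniform post-training quantization. The function name, the 8-bit setting, and the random weights are illustrative assumptions, not material taken from the course.

```python
import numpy as np

def quantize_uniform(w, n_bits=8):
    """Symmetric per-tensor uniform quantization (illustrative, assumes n_bits <= 8).

    Returns integer codes and the scale factor needed to dequantize.
    """
    qmax = 2 ** (n_bits - 1) - 1            # e.g. 127 for 8-bit signed integers
    scale = np.max(np.abs(w)) / qmax        # map the largest magnitude to qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

# Example: quantize random "weights" and measure the reconstruction error.
w = np.random.randn(1024).astype(np.float32)
q, scale = quantize_uniform(w, n_bits=8)
w_hat = q.astype(np.float32) * scale        # dequantized approximation
print("max abs error:", np.max(np.abs(w - w_hat)))
```

Storing weights as 8-bit integers instead of 32-bit floats shrinks the model roughly 4x and enables cheaper integer arithmetic on hardware, one of the levers covered under quantization and compression.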
- Site manager: François Leduc-Primeau
- Teacher (editor): Reda Bensaid
- Teacher (editor): Kamran Chitsaz Zade Allaf