Enrolment options

General-purpose (CPU, GPU) and specialized (systolic, many-core, etc.) hardware architectures for deep learning (DL). Energy consumption calculation of a DL model at different abstraction levels. Optimizing models for implementation: quantization, compression, pruning, and design space exploration. Interactions between DL models and hardware architectures and optimization of hardware accelerator designs. Experimental project focusing on minimizing the energy cost of a deep learning task.