Introduction to High Performance Deep Learning
This talk presents examples of and insights into large-scale model training with PyTorch. The material is largely adapted from the official PyTorch tutorials and from examples and exercises by other authors.
Topics covered in this PyTorch multi-GPU approach to deep learning models include Data and Model Parallelism, Message Passing, Distributed Training with Horovod, Mixed Precision and Memory Formats, and Pipeline Parallelism.
About the speakers
Giuseppe Fiameni is a Data Scientist at NVIDIA, where he oversees the NVIDIA AI Technology Centre in Italy, a collaboration among NVIDIA, CINI, and CINECA to accelerate academic research in the field of AI. He worked for more than 14 years as an HPC specialist at CINECA, the largest HPC facility in Italy, providing support for large-scale data analytics workloads. His research interests include large-scale deep learning models, system architectures, massive data engineering, and video action detection.
Andrea Pilzer is a Solutions Architect at NVIDIA with the NVIDIA AI Technology Centre in Italy. He received his Ph.D. under Profs. Nicu Sebe and Elisa Ricci at the University of Trento (Italy). He worked as a researcher at Huawei Ireland and as a postdoc in the groups of Profs. Arno Solin and Juho Kannala at Aalto University (Finland). His research interests are 3D scene understanding, domain adaptation, and uncertainty estimation for deep learning. He is a co-organizer of the T-CAP workshop series (ICIAP21, ICPR22, ECCV22) and of the "Uncertainty Quantification for Computer Vision" workshop (ECCV22).
This seminar is mainly addressed to PhD students enrolled in the IECS Doctoral School; however, it is also open to anyone interested in the topic.