Overview
Learn how to accelerate your applications with OpenACC and CUDA, and how to train and deploy a neural network to solve real-world problems.
The online workshop combines lectures about Accelerated Computing with OpenACC and CUDA on single and multiple GPUs with lectures about Fundamentals of Deep Learning.
The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on a fully configured GPU-accelerated workstation in the cloud.
The workshop is co-organised by LRZ and NVIDIA Deep Learning Institute (DLI). NVIDIA DLI offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning.
All instructors are NVIDIA certified University Ambassadors.
Agenda
1st day: Fundamentals of Accelerated Computing with OpenACC
This lecture covers the fundamental tools and techniques needed for accelerating C/C++ or Fortran applications to run on massively parallel GPUs with OpenACC. You will learn how to write code, configure code parallelisation with OpenACC, optimise memory movements between the CPU and GPU accelerator, and implement the workflow that you have learned on a new task—accelerating a fully functional, but CPU-only, Laplace Heat Equation code for massive performance gains. At the end of the lecture, you will be able to create new GPU-accelerated applications on your own.
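As a flavour of these hands-on exercises, the sketch below shows the kind of OpenACC directives covered on this day: a Jacobi sweep for the 2D Laplace heat equation with a data region that keeps both grids resident on the GPU between iterations. It is an illustrative sketch only; the file name, grid size and function name are assumptions and are not taken from the course notebooks.

/* Illustrative sketch: an OpenACC-accelerated Jacobi solver for the 2D
 * Laplace equation. Compile e.g. with the NVIDIA HPC compiler:
 *   nvc -acc -Minfo=accel jacobi.c
 */
#include <math.h>

#define N 1024

void jacobi(double A[N][N], double Anew[N][N], int iter_max, double tol)
{
    double error = tol + 1.0;

    /* Keep both grids on the GPU for the whole iteration loop instead of
     * copying them between host and device in every sweep. */
    #pragma acc data copy(A[0:N][0:N]) create(Anew[0:N][0:N])
    for (int iter = 0; iter < iter_max && error > tol; ++iter) {
        error = 0.0;

        /* Every interior point is updated independently, so the two loops
         * can be collapsed and run in parallel on the GPU. */
        #pragma acc parallel loop collapse(2) reduction(max:error)
        for (int i = 1; i < N - 1; ++i)
            for (int j = 1; j < N - 1; ++j) {
                Anew[i][j] = 0.25 * (A[i][j+1] + A[i][j-1]
                                   + A[i+1][j] + A[i-1][j]);
                error = fmax(error, fabs(Anew[i][j] - A[i][j]));
            }

        /* Copy the updated grid back into A, still on the device. */
        #pragma acc parallel loop collapse(2)
        for (int i = 1; i < N - 1; ++i)
            for (int j = 1; j < N - 1; ++j)
                A[i][j] = Anew[i][j];
    }
}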
2nd day: Fundamentals of Accelerated Computing with CUDA C/C++
This lecture teaches the fundamental tools and techniques for accelerating C/C++ applications to run on massively parallel GPUs with CUDA. You’ll learn how to write code, configure code parallelisation with CUDA, optimise memory migration between the CPU and GPU accelerator, and implement the workflow that you’ve learned on a new task—accelerating a fully functional, but CPU-only, particle simulator for observable massive performance gains. At the end of the lecture, you will be able to create new GPU-accelerated applications on your own.
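To give an impression of the CUDA workflow taught on this day, the sketch below shows the basic pattern: allocate unified memory, launch a kernel over a grid of thread blocks, and synchronise before using the result on the host. It is an illustrative sketch only, using SAXPY rather than the particle simulator from the course; the file name is an assumption.

// Illustrative sketch: SAXPY (y = a*x + y) in CUDA C++.
// Compile e.g. with: nvcc saxpy.cu -o saxpy
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    // Global thread index: one thread per vector element.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;

    // Unified memory is accessible from both CPU and GPU; the runtime
    // migrates pages between them on demand.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);

    // Kernel launches are asynchronous; wait before reading y on the host.
    cudaDeviceSynchronize();
    printf("y[0] = %f (expected 4.0)\n", y[0]);

    cudaFree(x);
    cudaFree(y);
    return 0;
}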
3rd day: Accelerating CUDA C++ Applications with Multiple GPUs
Computationally intensive CUDA C++ applications in high-performance computing, data science, bioinformatics, and deep learning can be accelerated by using multiple GPUs, which can increase throughput and/or decrease your total runtime. When combined with the concurrent overlap of computation and memory transfers, computation can be scaled across multiple GPUs without increasing the cost of memory transfers. For organisations with multi-GPU servers, whether in the cloud or on NVIDIA DGX systems, these techniques enable you to achieve peak performance from GPU-accelerated applications. And it’s important to implement these single-node, multi-GPU techniques before scaling your applications across multiple nodes.
This lecture covers how to write CUDA C++ applications that efficiently and correctly utilise all available GPUs in a single node, dramatically improving the performance of your applications and making the most cost-effective use of systems with multiple GPUs.
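The sketch below illustrates the single-node multi-GPU pattern described above: query the available devices, give each GPU its own slice of the data and its own stream, and use asynchronous copies so that transfers and kernels on different GPUs overlap. It is an illustrative sketch only; the trivial scaling kernel and the chunking scheme are assumptions, not taken from the course material.

// Illustrative sketch: split a vector-scaling kernel across all GPUs in one
// node, with one stream per device and asynchronous transfers.
// Compile e.g. with: nvcc multi_gpu.cu -o multi_gpu
#include <algorithm>
#include <vector>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 24;

    // Pinned (page-locked) host memory is needed for truly asynchronous
    // host<->device copies that overlap with work on other GPUs.
    float *host = nullptr;
    cudaMallocHost(&host, n * sizeof(float));
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    int num_gpus = 0;
    cudaGetDeviceCount(&num_gpus);
    if (num_gpus == 0) return 1;

    int chunk = (n + num_gpus - 1) / num_gpus;
    std::vector<float*> dev(num_gpus, nullptr);
    std::vector<cudaStream_t> stream(num_gpus);

    for (int g = 0; g < num_gpus; ++g) {
        int offset = g * chunk;
        int count  = std::min(chunk, n - offset);
        if (count <= 0) break;

        cudaSetDevice(g);                 // subsequent calls target GPU g
        cudaStreamCreate(&stream[g]);
        cudaMalloc(&dev[g], count * sizeof(float));

        // Copy-in, kernel and copy-out are queued in this GPU's stream, so
        // the chunks on different GPUs are processed concurrently.
        cudaMemcpyAsync(dev[g], host + offset, count * sizeof(float),
                        cudaMemcpyHostToDevice, stream[g]);
        scale<<<(count + 255) / 256, 256, 0, stream[g]>>>(dev[g], count, 2.0f);
        cudaMemcpyAsync(host + offset, dev[g], count * sizeof(float),
                        cudaMemcpyDeviceToHost, stream[g]);
    }

    // Wait for every device to finish its chunk, then clean up.
    for (int g = 0; g < num_gpus; ++g) {
        if (dev[g] == nullptr) continue;
        cudaSetDevice(g);
        cudaStreamSynchronize(stream[g]);
        cudaFree(dev[g]);
        cudaStreamDestroy(stream[g]);
    }
    cudaFreeHost(host);
    return 0;
}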
4th day: Fundamentals of Deep Learning
Businesses worldwide are using artificial intelligence to solve their greatest challenges. Healthcare professionals use AI to enable more accurate, faster diagnoses in patients. Retail businesses use it to offer personalised customer shopping experiences. Automakers use it to make personal vehicles, shared mobility, and delivery services safer and more efficient. Deep learning is a powerful AI approach that uses multi-layered artificial neural networks to deliver state-of-the-art accuracy in tasks such as object detection, speech recognition, and language translation. Using deep learning, computers can learn and recognise patterns from data that are considered too complex or subtle for expert-written software.
In this lecture, you’ll learn how deep learning works through hands-on exercises in computer vision and natural language processing. You’ll train deep learning models from scratch, learning tools and tricks to achieve highly accurate results. You’ll also learn to leverage freely available, state-of-the-art pre-trained models to save time and get your deep learning application up and running quickly.
Important information
After you are accepted, please create an account at courses.nvidia.com/join.
Ensure that your laptop or PC will run smoothly by going to http://websocketstest.com/ and checking that WebSockets work for you: under Environment, "WebSockets" should be listed as supported, and under WebSockets (Port 80), the "Data Receive", "Send" and "Echo Test" checks should all show "Yes". If there are issues with WebSockets, try updating your browser.
NVIDIA Deep Learning Institute
The NVIDIA Deep Learning Institute delivers hands-on training for developers, data scientists, and engineers. The program is designed to help you get started with training, optimizing, and deploying neural networks to solve real-world problems across diverse industries such as self-driving cars, healthcare, online services, and robotics.
Prerequisites
A technical background, a basic understanding of machine learning concepts, and basic C/C++ programming skills.
For the 4th day, basic Python skills (see https://www.python.org/about/gettingstarted/) will be useful.
Hands-On
The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on a fully configured GPU-accelerated workstation in the cloud.
Language
English
Lecturers
Dr. Momme Allalen, PD Dr. Juan Durillo Barrionuevo, Dr. Volker Weinberg (LRZ and NVIDIA University Ambassadors)
Prices and Eligibility
The course is open and free of charge for academic participants from the Member States of the European Union (EU) and Associated Countries to the Horizon 2020 programme.
Registration
Please register with your official e-mail address to prove your affiliation.
Withdrawal Policy
See Withdrawal
Legal Notices
For registration for LRZ courses and workshops we use the service edoobox from Etzensperger Informatik AG (www.edoobox.com). Etzensperger Informatik AG acts as processor and we have concluded a Data Processing Agreement with them.