1. Disaster Risk Monitoring Using Satellite Imagery:
Deploy a deep learning model to automate disaster management use cases.
About this Course
Learn how to build and deploy a deep learning model to automate the detection of flood events using satellite imagery. This workflow can be applied to lower the cost, improve the efficiency, and significantly enhance the effectiveness of various natural disaster management use cases.
Learning Objectives
By participating in this course, you will learn how to:
- Implement a machine learning workflow for disaster management solutions
- Use hardware accelerated tools to process large satellite imagery data
- Apply transfer-learning to cost-efficiently build deep learning segmentation models
- Deploy deep learning models for near real-time analysis
- Utilize deep learning-based model inference to detect and respond to flood events
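The last objective, turning model inference into a flood response signal, can be sketched as a simple post-processing step. This is an illustrative example only, not course code: the per-pixel flood probabilities are invented here, whereas in the course they would come from a TAO-trained segmentation model served through Triton.

```python
# Hypothetical post-processing for a flood-segmentation model: turn
# per-pixel flood probabilities into a binary mask and estimate the
# flooded fraction of a satellite tile. The probability grid below is
# made up for illustration.

THRESHOLD = 0.5  # probability above which a pixel counts as flooded

def flood_mask(probs):
    """Binarize a 2D grid of per-pixel flood probabilities."""
    return [[1 if p >= THRESHOLD else 0 for p in row] for row in probs]

def flooded_fraction(mask):
    """Fraction of pixels marked as flooded."""
    total = sum(len(row) for row in mask)
    flooded = sum(sum(row) for row in mask)
    return flooded / total

probs = [
    [0.9, 0.8, 0.2],
    [0.7, 0.4, 0.1],
    [0.6, 0.3, 0.0],
]
mask = flood_mask(probs)
print(flooded_fraction(mask))  # 4 of 9 pixels exceed the threshold
```

A downstream alerting system could compare this fraction against a per-region threshold to decide when to notify responders.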
Topics Covered
Tools, libraries, frameworks used: NVIDIA DALI, NVIDIA TAO Toolkit, NVIDIA TensorRT, and NVIDIA Triton Inference Server
Related Training
This course was developed jointly with UNOSAT, the United Nations Satellite Centre.
Course Details
Duration: 08:00
Price: Free
Level: Technical - Beginner
Subject: Deep Learning
Language: English
Course Prerequisites:
- Competency in the Python 3 programming language
- Basic understanding of Machine Learning and Deep Learning concepts (specifically variations of CNNs) and pipelines
- Interest in understanding how to manipulate satellite imagery using modern methods
2. Getting Started with AI on Jetson Nano
Build and train a classification data set and model with the NVIDIA Jetson Nano.
About this Course
The power of AI is now in the hands of makers, self-taught developers, and embedded technology enthusiasts everywhere with the NVIDIA Jetson developer kits. This easy-to-use, powerful computer lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing. In this course, you'll use Jupyter iPython notebooks on your own Jetson to build a deep learning classification project with computer vision models.
Required Hardware
Supported Jetson Developer Kit:
Additional peripherals for Orin kits (power supply comes with kits):
- High-performance microSD card for Orin Nano, 64GB minimum
- 2-pin jumper to put the Orin Nano kit into Force Recovery Mode if flashing with SDK Manager
- Logitech C270 USB Webcam or similar
- Optional USB cable: USB-C To USB-A/USB-C with DATA enabled
- Optional keyboard, mouse, monitor
Additional peripherals for original 2GB and 4GB kits:
- High-performance microSD card, 32GB minimum
- 5V 4A power supply with 2.1mm DC barrel connector
- 2-pin jumper: must be added to the Jetson Nano Developer Kit board to enable power from the barrel jack power supply
- Logitech C270 USB Webcam
- USB cable: Micro-B to Type-A with DATA enabled
Additional Computer Requirements:
- A computer with an internet connection and the ability to flash your microSD card
- An available USB-A port on your computer (you may need an adapter or different cable if you only have USB-C ports)
Learning Objectives
You'll learn how to:
- Set up your NVIDIA Jetson Nano and camera
- Collect image data for classification models
- Annotate image data for regression models
- Train a neural network on your data to create your own models
- Run inference on the NVIDIA Jetson Nano with the models you create
Upon completion, you'll be able to create your own deep learning classification and regression models with the Jetson Nano.
Topics Covered
Tools and frameworks used in this course include PyTorch and NVIDIA Jetson Nano.
Course Outline
1. Setting up your Jetson Nano
Step-by-step guide to set up your hardware and software for the course projects
- Introduction and Setup
Video walk-through and instructions for setting up JetPack and what items you need to get started
- Cameras
Details on how to connect your camera to the Jetson Nano Developer Kit
- Headless Device Mode
Video walk-through and instructions for running the Docker container for the course using headless device mode (remotely from your computer).
- Hello Camera
How to test your camera with an interactive Jupyter notebook on the Jetson Nano Developer Kit
- JupyterLab
A brief introduction to the JupyterLab interface and notebooks
2. Image Classification
Background information and instructions to create projects that classify images using Deep Learning
- AI and Deep Learning
A brief overview of Deep Learning and how it relates to Artificial Intelligence (AI)
- Convolutional Neural Networks (CNNs)
An introduction to the dominant class of artificial neural networks for computer vision tasks
- ResNet-18
Specifics on the ResNet-18 network architecture used in the class projects
- Thumbs Project
Video walk-through and instructions to work with the interactive image classification notebook to create your first project
- Emotions Project
Build a new project with the same classification notebook to detect emotions from facial expressions
- Quiz Questions
Answer questions about what you've learned to reinforce your knowledge
3. Image Regression
Instructions to create projects that can localize and track image features in a live camera image
- Classification vs. Regression
With a few changes, the Classification model can be converted to a Regression model
- Face XY Project
Video walk-through and instructions to build a project that finds the coordinates of facial features
- Quiz Questions
Answer questions about what you've learned to reinforce your knowledge
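The "Classification vs. Regression" step in the outline above can be illustrated with a toy example. This is not course code: it shows, with arbitrary made-up weights, how the same feature vector can feed either a softmax head that scores classes or a linear head whose two raw outputs are read as (x, y) coordinates, which is the essential change between the two project types.

```python
# Illustrative sketch of swapping a classification head for a
# regression head over the same features. All weights and inputs are
# arbitrary toy values, not values from the course projects.
import math

def linear(features, weights, biases):
    """Apply one fully connected layer: one output per weight column."""
    return [sum(f * w for f, w in zip(features, col)) + b
            for col, b in zip(weights, biases)]

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

features = [0.2, -0.1, 0.4]

# Classification head: 2 classes (e.g. "thumbs up" vs. "thumbs down").
cls_scores = linear(features, [[1.0, 0.5, -0.2], [-0.3, 0.8, 0.1]], [0.0, 0.0])
probs = softmax(cls_scores)  # probabilities, sum to 1.0

# Regression head: same features, two raw outputs read as (x, y).
xy = linear(features, [[0.6, 0.1, 0.3], [0.2, -0.4, 0.5]], [0.1, 0.0])
print(probs, xy)
```

In the actual projects the features come from a pretrained ResNet-18 backbone; only the final layer and the loss function change between the Thumbs (classification) and Face XY (regression) notebooks.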
Course Details
Duration: 08:00
Price: Free
Level: Technical - Beginner
Subject: Deep Learning
Language: English
Course Prerequisites: Basic familiarity with Python (helpful, not required)
Related Training: You may be interested in the following free self-paced training on Jetson:
3. Develop, Customize, and Publish in Omniverse With Extensions
About this Course
Want to change the functionality and User Interface (UI) of Omniverse more to your liking? Customize the Omniverse experience with Extensions using Python code. Extensions can be used for a wide variety of modifications from spawning objects with a button press to applying custom physics on selected objects. Optimize a workflow by copying commonly repeated operations into an Extension or add a new way to manipulate objects in the UI.
Learning Objectives
In this course, participants will learn how to:
- Make a button to spawn a cube
- Alter the Omniverse UI with custom windows
- Debug and publish Extensions
Topics Covered
Upon completion, you will be able to create and share your own custom Extension with the Omniverse community.
Course Outline
1. Develop, Customize, and Publish in Omniverse With Extensions
- Installing Omniverse
- An Introductory Extension: Spawn an Object
- Extension Debugging
2. Intermediate Extension Concepts
- Intermediate Extension Concepts
- How to Build a Command
- Extension Publishing
3. Advanced Example: The Scatter Tool
- Model-View-Controller
- Advanced Windows
- Scatter with Style
Course Details
Duration: 08:00
Price: Free
Subject: Graphics and Simulation
Language: English
Course Prerequisites:
- Basic familiarity with Python (helpful, not required). Suggested materials to satisfy prerequisites: The Python Tutorial
Tools, libraries, frameworks used: Omniverse Code, Visual Studio Code, Python, and the Python Extension
Hardware (minimum suggested requirements): Desktop or laptop computer with an Intel i7 (Gen 5) or AMD Ryzen CPU and an NVIDIA RTX-enabled GPU with 16GB
Related Training: For additional hands-on training through the NVIDIA Deep Learning Institute, visit:
Getting Started with USD for Collaborative 3D Workflows
A self-paced course on the Universal Scene Description (USD) framework developed by Pixar to foster collaboration between technical and creative world builders, moving from a sequential workflow to a collaborative one.
Easily Develop Advanced 3D Layout Tools on NVIDIA Omniverse
Get hands-on experience with NVIDIA Omniverse - the platform for connecting and creating physically accurate, 3D virtual worlds. See how easy it is to create your own custom scene layout tools in Omniverse Code with a few lines of Python script.
Build Beautiful, Custom UI for 3D Tools on NVIDIA Omniverse
Experience the NVIDIA Omniverse development platform for builders and creators of virtual worlds. Become a master in UI with a deep dive into NVIDIA Omniverse Kit’s powerful omni.ui suite of tools and frameworks.
How to Build Custom 3D Scene Manipulator Tools on NVIDIA Omniverse
See how you can build advanced tools on the modular, easily extensible Omniverse platform. You’ll learn from the Omniverse developer ecosystem team how you can extend and enhance the 3D tools you know and love today.
4. Building RAG Agents with LLMs
[FREE FOR LIMITED TIME] Agents powered by large language models (LLMs) have shown strong capabilities for using tools, retrieving from documents, and planning their approaches. This course shows you how to deploy an agent system in practice and scale it to meet the demands of users and customers.
About this Course
The evolution and adoption of large language models (LLMs) have been nothing short of revolutionary, with retrieval-based systems at the forefront of this technological leap. These models are not just tools for automation; they are partners in enhancing productivity, capable of holding informed conversations by interacting with a vast array of tools and documents. This course is designed for those eager to explore the potential of these systems, focusing on practical deployment and the efficient implementation required to manage the considerable demands of both users and deep learning models. As we delve into the intricacies of LLMs, participants will gain insights into advanced orchestration techniques that include internal reasoning, dialog management, and effective tooling strategies.
Learning Objectives
The goal of the course is to teach participants how to:
- Compose an LLM system that can interact predictably with a user by leveraging internal and external reasoning components.
- Design a dialog management and document reasoning system that maintains state and coerces information into structured formats.
- Leverage embedding models for efficient similarity queries for content retrieval and dialog guardrailing.
- Implement, modularize, and evaluate a RAG agent that can answer questions about the research papers in its dataset without any fine-tuning.
By the end of this workshop, participants will have a solid understanding of RAG agents and the tools necessary to develop their own LLM applications.
Topics Covered
The workshop includes topics such as LLM Inference Interfaces, Pipeline Design with LangChain, Gradio, and LangServe, Dialog Management with Running States, Working with Documents, Embeddings for Semantic Similarity and Guardrailing, and Vector Stores for RAG Agents. Each of these sections is designed to equip participants with the knowledge and skills necessary to develop and deploy advanced LLM systems effectively.
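The retrieval step behind "Embeddings for Semantic Similarity" and "Vector Stores for RAG Agents" can be shown with a toy in-memory store. This is a minimal sketch, not the workshop's implementation: the 3-dimensional "embeddings" are invented for illustration, whereas a real system would use an embedding model and a production vector store.

```python
# Toy in-memory vector store illustrating the retrieval step of a RAG
# agent: store (embedding, text) pairs, then return the texts most
# similar to a query embedding by cosine similarity. Embedding values
# below are invented for illustration.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class ToyVectorStore:
    def __init__(self):
        self.items = []  # list of (embedding, text) pairs

    def add(self, embedding, text):
        self.items.append((embedding, text))

    def query(self, embedding, k=1):
        """Return the k stored texts most similar to the query."""
        ranked = sorted(self.items,
                        key=lambda item: cosine(embedding, item[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = ToyVectorStore()
store.add([1.0, 0.0, 0.1], "Paper A: attention mechanisms")
store.add([0.0, 1.0, 0.2], "Paper B: retrieval augmentation")
print(store.query([0.1, 0.9, 0.1]))  # nearest neighbor is Paper B
```

A RAG agent would pass the retrieved texts into the LLM prompt as context before answering the user's question.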
Course Outline
- Introduction to the workshop and setting up the environment.
- Exploration of LLM inference interfaces and microservices.
- Designing LLM pipelines using LangChain, Gradio, and LangServe.
- Managing dialog states and integrating knowledge extraction.
- Strategies for working with long-form documents.
- Utilizing embeddings for semantic similarity and guardrailing.
- Implementing vector stores for efficient document retrieval.
- Evaluation, assessment, and certification.
5. Getting Started: Simulating Your First Robot in Isaac Sim
Build foundational skills in robotics simulation and control with Isaac Sim, the first step in the Isaac Sim Learning Path.
About this Course
In Getting Started: Simulating Your First Robot in Isaac Sim, you'll take the first step in a comprehensive learning path for robotics simulation with Isaac Sim. This beginner-friendly course guides you through building a simple robot by assembling key components such as the chassis, wheels, and joints. You'll then apply physics properties to simulate realistic interactions between the robot and its environment.
As you progress, you'll design control mechanisms using ROS 2 and OmniGraph to enable dynamic movement of the robot. You'll integrate advanced sensors like RGB and 2D Lidar to enhance the robot's environmental awareness. Next, you'll transmit sensor data from Isaac Sim to ROS 2 for real-time visualization and analysis in RViz. Finally, you'll learn to diagnose and troubleshoot common issues related to simulation, control, and data streaming to ensure smooth operation of the robot.
This course equips you with essential skills for creating dynamic simulations and lays the groundwork for future robotics projects. By the end of this course, you'll have a solid foundation in using Isaac Sim to simulate and control robots, preparing you for more advanced robotics projects.
Check out the other courses in the Getting Started With Isaac Sim learning path.
Course 1
Getting Started: Simulating Your First Robot in Isaac Sim
Course 2
Ingesting Robot Assets and Simulating Your Robot in Isaac Sim
Course 3
Synthetic Data Generation for Perception Model Training in Isaac Sim
Course 4
Developing Robots With Software-in-the-Loop (SIL) In Isaac Sim
Course 5
Leveraging ROS 2 and Hardware-in-the-Loop (HIL) in Isaac Sim
Learning Objectives
- Assemble a simple robot model in Isaac Sim using basic prims as key components such as the chassis, wheels, and joints.
- Apply physics to simulate realistic interactions between the robot and its environment.
- Design control mechanisms using ROS 2 and OmniGraph to enable dynamic movement of the robot.
- Integrate advanced sensors like RGB and 2D lidar to enhance the robot's environmental awareness.
- Transmit sensor data from Isaac Sim to ROS 2 for real-time visualization and analysis in RViz.
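The objectives above center on a wheeled robot with a chassis and two driven wheels. As a back-of-the-envelope illustration of the control side, the sketch below computes differential-drive kinematics: how left and right wheel speeds map to the chassis' linear and angular velocity. The wheel radius and track width are made-up example values, not course specifications.

```python
# Differential-drive kinematics for a simple two-wheeled robot like
# the one assembled in this course. Dimensions are hypothetical
# example values.
WHEEL_RADIUS = 0.05  # wheel radius in meters (hypothetical)
TRACK_WIDTH = 0.30   # distance between the wheels in meters (hypothetical)

def chassis_velocity(omega_left, omega_right):
    """Map wheel angular speeds (rad/s) to (linear m/s, angular rad/s)."""
    v_left = omega_left * WHEEL_RADIUS
    v_right = omega_right * WHEEL_RADIUS
    linear = (v_left + v_right) / 2.0
    angular = (v_right - v_left) / TRACK_WIDTH
    return linear, angular

# Equal wheel speeds: straight-line motion, zero rotation.
print(chassis_velocity(10.0, 10.0))
# Opposite speeds: the robot turns in place.
print(chassis_velocity(-10.0, 10.0))
```

In the course itself, commands like these would be published over ROS 2 and routed to the robot's wheel joints through OmniGraph rather than computed by hand.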
Course Details
Duration: 01:30
Price: Free
Level: Technical - Beginner
Subject: Robotics
Language: English
Course Prerequisites:
Basic Python knowledge and familiarity with robotics concepts.
A Linux machine meeting Isaac Sim's system requirements is necessary for this course and for properly running simulations.
6. Transferring Robot Learning Policies From Simulation to Reality
Learn the principles of developing effective robot learning policies.
About this Course
Welcome to our course on Transferring Robot Learning Policies From Simulation to Reality, which we often refer to as sim-to-real. While this course won't focus specifically on Isaac Lab, the principles we'll cover are important to understand before you start developing robot learning policies in Isaac Lab. Our goal in this course is to teach you about sim-to-real, including best practices and pitfalls to avoid.
Learning Objectives
- Understand the challenges of reinforcement learning in robotics.
- Explain the concept of the “reality gap” and its characteristics.
- Describe various techniques for bridging the reality gap.
- Evaluate strategies for leveraging privileged information in robot learning.
- Implement best practices for successful sim-to-real transfer in robotics applications.
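One widely used technique for bridging the reality gap, relevant to the objectives above, is domain randomization: varying simulator physics each training episode so a policy cannot overfit to one exact model of the world. The sketch below is illustrative only; the parameter names and ranges are invented, not values from any particular simulator.

```python
# Minimal sketch of domain randomization for sim-to-real transfer:
# draw a fresh set of physics parameters at the start of each episode.
# Parameter names and ranges below are illustrative assumptions.
import random

PARAM_RANGES = {
    "friction": (0.4, 1.0),
    "mass_kg": (0.8, 1.2),
    "motor_gain": (0.9, 1.1),
}

def sample_physics(rng):
    """Draw one randomized physics configuration for an episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

rng = random.Random(42)  # seeded so runs are reproducible
for episode in range(3):
    physics = sample_physics(rng)
    # In a real pipeline, the simulator would be reconfigured with
    # `physics` here before rolling out the policy for this episode.
    print(episode, {k: round(v, 3) for k, v in physics.items()})
```

Because the policy sees many slightly different "worlds" during training, the real robot becomes just one more sample from the distribution it has already learned to handle.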
Course Details
Duration: 01:00
Price: Free
Level: General Interest
Subject: Graphics and Simulation
Language: English
Course Prerequisites:
This course does not have a hands-on component. No prerequisites are required to take this course.