NVIDIA Isaac Sim. Total Immersion
Curriculum: 30 sections, 284 lessons, lifetime access.
1. Introduction to Robotic Simulations (6 lessons)
1.1 What Are Robotic Simulations and Why They Are Needed
1.2 Overview of Existing Simulators — Gazebo, Webots, PyBullet, Isaac Sim
1.3 Advantages of NVIDIA Isaac Sim — Photorealism, PhysX, GPU Acceleration
1.4 NVIDIA Ecosystem — Isaac Sim, Isaac Lab, GROOT, ROS 2
1.5 Applications of Simulation — Robot Training, Testing, Prototyping
1.6 Sim-to-Real Transfer — Transitioning from Simulation to Reality
2. Installation and Setup (10 lessons)
2.1 System Requirements — Hardware, Drivers, Operating Systems
2.2 Installing NVIDIA Drivers and CUDA Toolkit
2.3 Installing Omniverse Launcher — Interface Overview
2.4 Installing Isaac Sim via Omniverse Launcher
2.5 Alternative Installation Methods — Docker, Native, Cloud
2.6 Installing Omniverse Nucleus for Local Asset Storage
2.7 Installing Omniverse Cache for Asset Acceleration
2.8 First Isaac Sim Launch — Startup Process
2.9 Setting Up Python Environment for the API
2.10 Troubleshooting Common Installation Issues
3. Interface and Basics (12 lessons)
3.1 Main Window Overview — Viewport, Panels, Menus, Toolbars
3.2 Navigating 3D Space — Camera Movement, Zoom, Rotation
3.3 Stage Concept — Scene and Object Hierarchy
3.4 Creating Your First Scene — Ground Plane and Lighting
3.5 Adding Simple Primitives — Cubes, Spheres, Cylinders
3.6 Object Transformations — Position, Rotation, Scale
3.7 Property Panel — Exploring Object Properties
3.8 Materials and Textures — Visual Diversity
3.9 Lighting System — Light Types and Configuration
3.10 First Simulation Run — Using the Play Button
3.11 Simulation Time Control — Timestep, Speed, Pause
3.12 Saving and Loading Scenes — USD Format
4. USD Format Essentials (7 lessons)
4.1 What Is USD and Why It Matters for Robotics
4.2 USD File Structure — Prims, Attributes, Relationships
4.3 USD Layers — Composing Scenes from Multiple Files
4.4 References and Payloads — Asset Reuse Patterns
4.5 Variants — Creating Efficient Object Variations
4.6 Working with USD via the Python API (see the example sketch after this section)
4.7 Exporting and Importing USD Files
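
For lesson 4.6, a minimal authoring sketch using the pxr Python bindings bundled with Isaac Sim; the file name and prim paths are placeholders chosen for illustration.

    # Create a stage, define a small prim hierarchy, author attributes, and save.
    from pxr import Usd, UsdGeom, Gf

    stage = Usd.Stage.CreateNew("example_scene.usda")   # placeholder file name
    UsdGeom.Xform.Define(stage, "/World")
    cube = UsdGeom.Cube.Define(stage, "/World/Cube")

    # Author a size attribute and a translation above the ground plane.
    cube.GetSizeAttr().Set(0.5)
    UsdGeom.XformCommonAPI(cube).SetTranslate(Gf.Vec3d(0.0, 0.0, 0.25))

    # Traverse the stage to inspect the prim hierarchy.
    for prim in stage.Traverse():
        print(prim.GetPath(), prim.GetTypeName())

    stage.GetRootLayer().Save()

The same Usd.Stage API also opens existing files (Usd.Stage.Open), which is the entry point for the export/import workflows in lesson 4.7.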
5. Physics Engine PhysX 5 (10 lessons)
5.1 Introduction to the PhysX 5 Physics Engine
5.2 Rigid Bodies — Mass, Inertia, Center of Mass
5.3 Collision Shapes — Geometry for Collision Detection
5.4 Visual vs. Collision Meshes — Differences
5.5 Physics Materials — Friction, Elasticity, Damping
5.6 Gravity and Global Forces in Simulation
5.7 Contacts and Collision Handling — Detection and Response
5.8 PhysX Scene Parameters — Accuracy vs. Performance
5.9 Deformable Bodies — Soft Objects and Fluids
5.10 Particle Systems — Granular Materials Simulation
6. Joints, Articulations, and Mechanisms (10 lessons)
6.1 Joint Concepts in Robotics — Fundamentals
6.2 Joint Types — Revolute, Prismatic, Fixed, Spherical
6.3 Creating a First Joint — Connecting Two Bodies
6.4 Degrees of Freedom (DoF) Explained
6.5 Articulation Concept — Linked Body Systems
6.6 Kinematic Chains — Mechanics
6.7 Joint Limits — Angle and Position Constraints
6.8 Joint Drives and Actuators — Position, Velocity, Effort
6.9 Stiffness and Damping — Joint Properties
6.10 Creating a Simple Two-Link Manipulator (see the forward-kinematics sketch after this section)
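
As a companion to lesson 6.10, a small worked example: forward kinematics of a planar two-link arm in plain Python/NumPy, independent of any simulator API. The link lengths and joint angles are illustrative.

    import numpy as np

    def two_link_fk(q1, q2, l1=0.4, l2=0.3):
        """End-effector position of a planar two-link arm (angles in radians)."""
        x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
        y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
        return np.array([x, y])

    # Shoulder at 30 degrees, elbow bent 90 degrees.
    print(two_link_fk(np.deg2rad(30), np.deg2rad(90)))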
7. Importing and Configuring Robots (9 lessons)
7.1 Isaac Sim Asset Library — Finding Ready-Made Robots
7.2 Importing URDF Files (Unified Robot Description Format, Used by ROS)
7.3 Importing MJCF Files (MuJoCo Format)
7.4 Converting Models to USD Format
7.5 Importing Popular Robots — Franka Panda, UR5, Fetch
7.6 Verifying Import Correctness — Joints, Links, Meshes
7.7 Fixing Issues with Imported Models
7.8 Configuring Robot Visualization — Materials and Colors
7.9 Configuring Physical Properties and Parameters
8. Controllers and Robot Control (11 lessons)
8.1 Controller Types — Position, Velocity, Effort
8.2 Creating a Simple Position Controller for a Joint
8.3 PID Controllers — Theory and Practical Tuning (see the example sketch after this section)
8.4 Velocity Control Implementation
8.5 Torque and Force Control Systems
8.6 Impedance Control — Compliant Collaboration
8.7 Inverse Kinematics (IK) — Solving for Joint Configurations
8.8 Forward Kinematics (FK) — Computing End-Effector Pose
8.9 Configuring an IK Controller for a Manipulator
8.10 Trajectory Planning — Path Generation
8.11 Motion Planning Algorithms — RRT and PRM Basics
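
For lessons 8.2 and 8.3, a textbook PID position controller sketched in plain Python. The gains, timestep, and joint interface are illustrative and not tied to any specific Isaac Sim controller class.

    class PID:
        """PID controller that turns a position error into an effort command."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, target, measured):
            error = target - measured
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = PID(kp=50.0, ki=0.5, kd=2.0, dt=1.0 / 60.0)
    # Inside the simulation loop: effort = pid.step(target_angle, measured_angle)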
9. Sensors and Environmental Perception (12 lessons)
9.1 Sensor Overview — Vision, Range, Proprioception
9.2 RGB Cameras — Resolution, FOV, Frequency
9.3 Depth Cameras — Obtaining Depth Maps
9.4 Stereo Cameras — 3D Perception
9.5 LiDAR — 2D and 3D Laser Rangefinders
9.6 IMU Sensors — Acceleration and Angular Velocity
9.7 Force/Torque Sensors — Measuring Joint Forces
9.8 Contact Sensors — Detecting Touch and Interaction
9.9 Encoders — Reading Joint Positions
9.10 Getting Sensor Data via the Python API
9.11 Visualizing Sensor Data in Real Time
9.12 Sensor Noise and Domain Randomization for Robustness (see the example sketch after this section)
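
For lesson 9.12, a sketch of injecting a random bias and additive Gaussian noise into ideal sensor readings with NumPy; the noise magnitudes and the sample depth values are illustrative.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def noisy_reading(true_value, sigma=0.01, bias_range=0.005):
        """Corrupt an ideal sensor value with a random bias and white Gaussian noise."""
        bias = rng.uniform(-bias_range, bias_range)
        return true_value + bias + rng.normal(0.0, sigma, size=np.shape(true_value))

    clean_depth = np.array([1.20, 1.21, 3.50])   # meters, illustrative
    print(noisy_reading(clean_depth))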
10. Robot Types and Capabilities (9 lessons)
10.1 Manipulators — Kinematics, Workspace, Singularities
10.2 Configuring Franka Panda for Pick-and-Place
10.3 Mobile Robots — Differential Drive and Holonomic
10.4 Creating a Wheeled Mobile Robot Simulation
10.5 Humanoid Robots — Balance, Locomotion, Control Complexity
10.6 Quadcopters and Drones — Flight Dynamics and Stabilization
10.7 Collaborative Robots (Cobots) — Safety and Compliance
10.8 Hybrid Robots — Mobile Manipulators (Fetch, TIAGo)
10.9 Underwater and Aerial Robots — Special Physics
11. Python API and Simulation Programming (11 lessons)
11.1 Introduction to the Isaac Sim Python API
11.2 Script Structure — Setup, Step Loop, Cleanup
11.3 Creating Scenes Programmatically — Adding Objects
11.4 Controlling Robots via Python — Sending Commands
11.5 Reading Sensor Data Programmatically
11.6 Standalone Scripts vs. Extension API
11.7 Creating the First Standalone Script (see the skeleton sketch after this section)
11.8 Working with SimulationContext for Control
11.9 Callbacks and Events — Reacting to Simulation Events
11.10 Debugging Python Code in Isaac Sim
11.11 Best Practices for Efficient Robotics Code
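
For lessons 11.2 and 11.7, a skeleton of a standalone script in the setup / step loop / cleanup shape. The module paths follow the omni.isaac.core namespace used by earlier Isaac Sim releases; newer releases renamed these packages, so treat the imports and prim path as assumptions to adapt to your installed version.

    # SimulationApp must be created before any other omni.* imports.
    from omni.isaac.kit import SimulationApp
    simulation_app = SimulationApp({"headless": True})

    import numpy as np
    from omni.isaac.core import World
    from omni.isaac.core.objects import DynamicCuboid

    # Setup: a world, a ground plane, and one dynamic object.
    world = World()
    world.scene.add_default_ground_plane()
    cube = world.scene.add(
        DynamicCuboid(prim_path="/World/Cube", name="cube",
                      position=np.array([0.0, 0.0, 1.0]))
    )

    # Step loop: advance physics without rendering.
    world.reset()
    for _ in range(240):
        world.step(render=False)
    print("final cube position:", cube.get_world_pose()[0])

    # Cleanup.
    simulation_app.close()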
12. Isaac Lab RL Framework (13 lessons)
12.1 What Is Isaac Lab and How It Differs from Isaac Sim
12.2 Installing Isaac Lab — Environment Setup and Configuration
12.3 Isaac Lab Architecture — Managers, Environments, Wrappers
12.4 Prebuilt Environments — Overview of Available Tasks
12.5 Running the First RL Environment — CartPole Example
12.6 RL Task Structure — Observations, Actions, Rewards
12.7 Creating a Custom Environment from Scratch
12.8 Observation Manager — Configuring Agent Perception
12.9 Action Manager — Defining the Action Space
12.10 Reward Manager — Reward Function Design
12.11 Termination Conditions — Episode End Criteria
12.12 Parallel Environments — Running Thousands of Robots
12.13 Domain Randomization — Variability for Robustness
13. Reinforcement Learning Training (11 lessons)
13.1 RL Basics — Agent, Environment, Policy
13.2 Popular RL Algorithms — PPO, SAC, DQN
13.3 Integration with RL Libraries — SB3 and RL Games
13.4 Hyperparameter Tuning for Training
13.5 Starting the Training Loop (see the example sketch after this section)
13.6 Monitoring with TensorBoard and Weights & Biases
13.7 Policy Evaluation and Performance Testing
13.8 Saving and Loading Trained Policies (Checkpoints)
13.9 Fine-Tuning Pretrained Policies
13.10 Sim-to-Real Transfer of Policies
13.11 Troubleshooting Training Instability and Low Reward
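
For lessons 13.3 and 13.5 through 13.7, a minimal Stable-Baselines3 training and evaluation loop on a standard Gymnasium task. Isaac Lab ships its own wrappers and launch scripts for its vectorized environments; this sketch only shows the generic SB3 pattern, with the environment id, timestep budget, and file names as placeholders.

    import gymnasium as gym
    from stable_baselines3 import PPO

    env = gym.make("CartPole-v1")                                   # placeholder task
    model = PPO("MlpPolicy", env, verbose=1, tensorboard_log="./tb_logs")
    model.learn(total_timesteps=100_000)   # monitor with: tensorboard --logdir ./tb_logs
    model.save("ppo_cartpole")             # checkpoint for later evaluation or fine-tuning

    # Evaluation: roll out the trained policy deterministically.
    obs, _ = env.reset()
    done = False
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated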
14. Imitation Learning and Demonstrations (10 lessons)
14.1 What Is Imitation Learning and When to Use It
14.2 Teleoperation — Human-in-the-Loop Robot Control
14.3 Recording Demonstrations in Isaac Sim
14.4 Demonstration Data Format and HDF5 Structure
14.5 Isaac Lab Mimic — Automatic Demonstration Generation
14.6 Annotating Subtasks in Demonstrations
14.7 Behavioral Cloning from Imitation Data (see the example sketch after this section)
14.8 Training Policies on Collected Datasets
14.9 Dataset Aggregation (DAgger) — Iterative Improvement
14.10 Comparing IL and RL — When to Use Which
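
For lesson 14.7, a compact behavioral-cloning sketch in PyTorch: supervised regression from observations to actions over a demonstration dataset. The array shapes, network size, and the randomly generated stand-in data are illustrative; in practice the tensors would be loaded from the recorded HDF5 demonstrations.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in demonstration data: (N, obs_dim) observations, (N, act_dim) actions.
    obs = torch.randn(5000, 24)
    actions = torch.randn(5000, 7)
    loader = DataLoader(TensorDataset(obs, actions), batch_size=256, shuffle=True)

    policy = nn.Sequential(nn.Linear(24, 128), nn.ReLU(),
                           nn.Linear(128, 128), nn.ReLU(),
                           nn.Linear(128, 7))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(10):
        for batch_obs, batch_act in loader:
            loss = loss_fn(policy(batch_obs), batch_act)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")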
15. Vision-Language Models (VLM) (8 lessons)
15.1 Introduction to Vision-Language Model Capabilities
15.2 Popular VLMs — CLIP, BLIP, LLaVA, PaliGemma
15.3 Integrating a VLM with Isaac Sim via Python
15.4 Visual Grounding — Linking Language to Objects
15.5 Processing Camera Images for VLM Input (see the example sketch after this section)
15.6 Zero-Shot Object Detection with VLMs
15.7 Natural Language Command Systems for Robots
15.8 Using a VLM as a High-Level Planner
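
For lesson 15.5, a zero-shot labeling sketch that feeds a camera frame to CLIP through the Hugging Face transformers interface. The checkpoint name is a common public one; the image path and label set are placeholders for a rendered Isaac Sim frame and the objects in your scene.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("camera_frame.png")            # placeholder: a rendered camera frame
    labels = ["a red cube on a table", "a blue sphere", "an empty table"]

    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image     # shape: (1, num_labels)
    probs = logits.softmax(dim=-1).squeeze(0)
    for label, p in zip(labels, probs.tolist()):
        print(f"{p:.2f}  {label}")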
16. Vision-Language-Action Models (VLA) (10 lessons)
16.1 What Are VLA Models — From Perception to Actions
16.2 VLA Architecture — Vision Encoder, Language Module, Action Head
16.3 Popular VLA Models — RT-2, OpenVLA, SmolVLA, Octo
16.4 Preparing Data for VLA Training — Datasets
16.5 Action Tokenization — Representing Actions
16.6 Fine-Tuning a VLA Model on a Custom Task
16.7 VLA Inference in Isaac Sim — Real-Time Actions
16.8 Action Chunking — Predicting Sequences
16.9 Asynchronous Inference for Better Reactivity
16.10 Comparing VLA with Classic RL Approaches
17. Large Language Models as Planners (8 lessons)
17.1 LLMs in Robotics — High-Level Planning
17.2 Integrating OpenAI APIs and Local LLMs with Isaac Sim
17.3 Prompt Engineering for Robotics Tasks
17.4 Decomposing Complex Tasks into Subtasks with LLMs
17.5 Function Calling — Executing Robot Commands via LLM (see the example sketch after this section)
17.6 Reasoning for Complex Manipulation
17.7 Multimodal LLMs — Processing Images and Text
17.8 Error Recovery and Safety Mechanisms with LLMs
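
For lesson 17.5, a sketch of exposing a robot command as a callable tool through a chat-completions style interface, shown here with the openai Python SDK. The model name, the pick_object tool schema, and the dispatch step are assumptions to adapt to whatever LLM backend you use.

    import json
    from openai import OpenAI

    client = OpenAI()
    tools = [{
        "type": "function",
        "function": {
            "name": "pick_object",                  # hypothetical robot command
            "description": "Pick up a named object with the manipulator.",
            "parameters": {
                "type": "object",
                "properties": {"object_name": {"type": "string"}},
                "required": ["object_name"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o-mini",                        # placeholder model name
        messages=[{"role": "user", "content": "Put the red cube in the bin."}],
        tools=tools,
    )
    for call in response.choices[0].message.tool_calls or []:
        if call.function.name == "pick_object":
            args = json.loads(call.function.arguments)
            print("robot command: pick", args["object_name"])   # dispatch to the simulator here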
18. Isaac GROOT Generalist Policies (10 lessons)
18.1 Introduction to Isaac GROOT Generalist Robot Technology
18.2 GROOT Architecture — Foundation Model for Robots
18.3 GR00T-N1 Model — Capabilities and Limitations
18.4 Integrating GROOT with Isaac Lab
18.5 Loading Pretrained GROOT Policies
18.6 Evaluating GROOT in Simulation
18.7 Fine-Tuning GROOT for Specific Tasks
18.8 Data Collection for GROOT — Dataset Requirements
18.9 GROOT for Humanoid Locomotion Control
18.10 Multi-Embodiment Learning — Training Across Robots
19. Integration with the ROS 2 Ecosystem (11 lessons)
19.1 Introduction to ROS 2 — Core Concepts
19.2 Installing ROS 2 and Setting It Up with Isaac Sim
19.3 Isaac Sim ROS 2 Bridge — Installation and Configuration
19.4 Publishing Sensor Data to ROS Topics
19.5 Subscribing to Control Commands from ROS Topics (see the example sketch after this section)
19.6 TF Transforms — Building the Transform Tree
19.7 Visualizing Isaac Sim Data in RViz
19.8 Using Navigation2 with Isaac Sim
19.9 MoveIt2 for Manipulator Motion Planning
19.10 Creating Custom ROS 2 Nodes for Control
19.11 ROS 2 Actions for Long-Running Tasks
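
For lessons 19.5 and 19.10, a minimal rclpy node that subscribes to velocity commands on /cmd_vel. The topic name follows the common ROS convention; the callback body is where the commands would be forwarded to the simulated robot's controller.

    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import Twist

    class CmdVelListener(Node):
        """Receives Twist commands that a simulated mobile base would execute."""
        def __init__(self):
            super().__init__("cmd_vel_listener")
            self.create_subscription(Twist, "/cmd_vel", self.on_cmd_vel, 10)

        def on_cmd_vel(self, msg: Twist):
            self.get_logger().info(f"linear.x={msg.linear.x:.2f} angular.z={msg.angular.z:.2f}")
            # Forward msg to the Isaac Sim robot controller here.

    def main():
        rclpy.init()
        node = CmdVelListener()
        rclpy.spin(node)
        node.destroy_node()
        rclpy.shutdown()

    if __name__ == "__main__":
        main()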
20. Navigation and SLAM (10 lessons)
20.1 Autonomous Navigation Basics
20.2 Configuring a Mobile Robot with LiDAR
20.3 Obstacle Avoidance and Collision Prevention
20.4 Path Planning — A* and RRT (see the A* sketch after this section)
20.5 SLAM — Simultaneous Localization and Mapping Basics
20.6 Creating an Environment Map in Simulation
20.7 Robot Localization on a Map
20.8 Navigation Stack — Autonomous Driving in Isaac Sim
20.9 Waypoint Navigation Through Multiple Goals
20.10 Dynamic Obstacles — Responding to Moving Objects
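
For lesson 20.4, a simulator-independent A* sketch on a 4-connected occupancy grid; the grid, start, and goal are illustrative stand-ins for a map produced by the SLAM pipeline.

    import heapq

    def astar(grid, start, goal):
        """A* on a 4-connected grid; cells with value 1 are obstacles."""
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
        open_set = [(h(start), 0, start, None)]
        came_from, g_cost = {}, {start: 0}
        while open_set:
            _, g, node, parent = heapq.heappop(open_set)
            if node in came_from:
                continue                       # already expanded with a better cost
            came_from[node] = parent
            if node == goal:                   # reconstruct the path back to the start
                path = []
                while node is not None:
                    path.append(node)
                    node = came_from[node]
                return path[::-1]
            r, c = node
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    ng = g + 1
                    if ng < g_cost.get((nr, nc), float("inf")):
                        g_cost[(nr, nc)] = ng
                        heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
        return None

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))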
21. Manipulation and Pick-and-Place (12 lessons)
21.1 Robotic Manipulation Basics
21.2 Gripper Types — Parallel, Vacuum, Dexterous
21.3 Grasp Planning — Approach Strategies
21.4 Configuring Controllers for Gripper Control
21.5 Building a Pick-and-Place Scene
21.6 Object Pose Estimation
21.7 Approach Trajectory Design
21.8 Implementing Grasp Mechanics
21.9 Lifting and Transport Operations
21.10 Precise Placement and Alignment
21.11 Error Handling and Recovery
21.12 Multi-Object Manipulation
22. Advanced Simulation Techniques (10 lessons)
22.1 Multi-Robot Systems — Simulating Multiple Robots
22.2 Robot Coordination and Collaborative Tasks
22.3 Distributed Simulation Across Multiple Machines
22.4 GPU-Accelerated Simulation for Maximum Performance
22.5 Real-Time Factor — Measuring Simulation Speed
22.6 Headless Mode — Running Without GUI
22.7 Scripting and Automation for Experiments
22.8 Batch Processing of Multiple Simulations
22.9 Checkpointing — Saving Simulation State
22.10 Deterministic Simulation for Reproducible Results
23. Designing Realistic Environments (10 lessons)
23.1 Scene Composition — Building Complex Scenes
23.2 Importing 3D Models — Formats and Optimization
23.3 Materials and PBR — Physically Based Rendering
23.4 Lighting Setup — Realistic Illumination
23.5 Backgrounds and Skyboxes
23.6 Procedural Scene Generation
23.7 Warehouse Environments — Industrial Design
23.8 Kitchen and Household Environments
23.9 Factory and Industrial Environments
23.10 Outdoor Environments — Terrain and Weather
24. Performance Optimization (9 lessons)
24.1 Profiling — Measuring Performance
24.2 Identifying Bottlenecks
24.3 Level of Detail (LOD) Strategies
24.4 Collision Mesh Optimization
24.5 Physics Substeps — Balancing Accuracy and Speed
24.6 GPU Memory Management
24.7 Batching Strategies for Efficiency
24.8 Optimizing Parallel Environments
24.9 Network Latency in Distributed Simulation
25. Extensions and Customization (7 lessons)
25.1 What Are Omniverse Extensions
25.2 Creating Your First Extension
25.3 UI Customization — Adding Custom Panels
25.4 Custom Physics — Extending Physical Components
25.5 Custom Sensors — Developing New Sensor Types
25.6 Packaging and Distributing Extensions
25.7 Using Third-Party Community Extensions
26. Computer Vision Processing (8 lessons)
26.1 Synthetic Data Generation for Training
26.2 Semantic Segmentation
26.3 Instance Segmentation
26.4 Bounding Box Annotation and Auto-Labeling
26.5 Depth Estimation and Depth Map Generation
26.6 Optical Flow — Estimating Motion
26.7 Object Tracking Across Frames
26.8 Pose Estimation of Objects
27. Real Projects and Case Studies (6 lessons)
27.1 Project — Autonomous Object Sorting on a Conveyor
27.2 Project — Mobile Delivery Robot in an Office
27.3 Project — Collaborative Assembly with Two Manipulators
27.4 Project — Simple Humanoid Robot Locomotion
27.5 Project — Autonomous Indoor Drone Navigation
27.6 Project — VLA-Controlled Robot via Natural Language
28. Testing and Validation (6 lessons)
28.1 Unit Testing for Robotics Code (see the example sketch after this section)
28.2 Scenario-Based Testing in Simulation
28.3 Performance Testing and Evaluation
28.4 Safety Testing and Verification
28.5 Regression Testing — Preventing Degradation
28.6 CI/CD for Robotics Projects
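
For lesson 28.1, a small pytest sketch that unit-tests the kind of pure kinematics helper used earlier in the course. The kinematics module name is hypothetical; it stands in for wherever you keep the two-link FK function from the Section 6 sketch.

    # test_kinematics.py: run with pytest from the project root.
    import numpy as np
    import pytest

    from kinematics import two_link_fk   # hypothetical module holding the FK helper

    def test_fully_extended_arm_reaches_sum_of_link_lengths():
        x, y = two_link_fk(0.0, 0.0, l1=0.4, l2=0.3)
        assert x == pytest.approx(0.7)
        assert y == pytest.approx(0.0)

    def test_elbow_at_ninety_degrees():
        x, y = two_link_fk(0.0, np.pi / 2, l1=0.4, l2=0.3)
        assert x == pytest.approx(0.4)
        assert y == pytest.approx(0.3)

Tests like these run in seconds without launching the simulator, which is what makes them practical to wire into the CI/CD pipelines covered in lesson 28.6.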
29. Sim-to-Real Transition (8 lessons)
29.1 The Sim-to-Real Gap — Simulation vs. Reality
29.2 Domain Randomization Strategies
29.3 Reality Gap Mitigation Techniques
29.4 System Identification — Calibrating Models
29.5 Hardware-in-the-Loop (HIL) Testing
29.6 Deployment to a Real Robot
29.7 On-Robot Testing and Live Debugging
29.8 Continuous Learning from Real Data
30. Career and Future Development (10 lessons)
30.1 Career Paths in Robotics — Research, Engineering, Product
30.2 Building a Project Portfolio for Employers
30.3 Contributing to Open Source
30.4 Publications and Conferences
30.5 Advanced Topics — Optimal Control and Theory
30.6 Community and Networking in Robotics
30.7 Resources for Further Learning
30.8 Robotics Trends in 2025 and Beyond
30.9 Final Course Project — Comprehensive System
30.10 Conclusion and Next Steps