
Pruning AI Models for Peak Performance - NVIDIA DRIVE Labs Ep. 31

Check out HALP (Hardware-Aware Latency Pruning), a new method designed to adapt convolutional neural networks (CNNs) and transformer-based architectures for real-time performance. HALP optimizes pre-trained models to maximize compute utilization. In on-road testing with NVIDIA DRIVE Orin™, it consistently outperformed alternative approaches.
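For a concrete feel for the general idea, here is a minimal, hypothetical sketch of latency-budgeted channel pruning in PyTorch: output channels of a convolution are ranked by a weight-norm importance proxy and selected greedily against an assumed per-channel latency cost until a latency budget is met. This is only an illustration of the technique, not NVIDIA's HALP implementation; the function name, latency numbers, and budget below are made up.

```python
# Hypothetical sketch: latency-constrained channel pruning for one conv layer.
# Per-channel latency costs here are invented; a hardware-aware method would
# measure them on the target device (e.g., DRIVE Orin).

import torch
import torch.nn as nn

def prune_conv_to_budget(conv: nn.Conv2d, per_channel_latency_ms: torch.Tensor,
                         latency_budget_ms: float) -> nn.Conv2d:
    # Importance proxy: L1 norm of each output channel's filter weights.
    importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))

    # Greedy knapsack-style selection: best importance-per-latency first.
    ratio = importance / per_channel_latency_ms
    order = torch.argsort(ratio, descending=True)

    keep, spent = [], 0.0
    for idx in order.tolist():
        cost = per_channel_latency_ms[idx].item()
        if spent + cost <= latency_budget_ms:
            keep.append(idx)
            spent += cost
    keep = sorted(keep)

    # Build a smaller conv layer containing only the kept output channels.
    pruned = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

if __name__ == "__main__":
    conv = nn.Conv2d(16, 64, kernel_size=3, padding=1)
    # Assumed measured cost of each output channel on the target hardware.
    latency = torch.full((64,), 0.02)   # 0.02 ms per channel (made up)
    pruned = prune_conv_to_budget(conv, latency, latency_budget_ms=0.8)
    print(conv.weight.shape, "->", pruned.weight.shape)  # 64 -> 40 channels
```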

00:00:00 - Introducing Hardware-Aware Latency Pruning (HALP)
00:00:29 - Common Model Optimization
00:00:59 - DNN Pruning
00:01:21 - Hardware-Aware Latency Pruning
00:01:31 - Classification Tasks
00:01:37 - 3D Object Detection
00:02:04 - HALP with Transformers
00:03:09 - To learn more, visit our GitHub and project pages

GitHub: https://nvda.ws/3rlM7mo
Product page: https://nvda.ws/46961je
Watch the full series here: https://nvda.ws/3LsSgnH
Learn more about DRIVE Labs: https://nvda.ws/36r5c6t

Follow us on social:
Twitter: https://nvda.ws/3LRdkSs
LinkedIn: https://nvda.ws/3wI4kue
#NVIDIADRIVE
Category: Hardware
Tags: NVIDIA, drive labs, self-driving cars