NVIDIA’s home for open source projects and research across artificial intelligence, robotics, and more.

RAPIDS

Open GPU data science

TensorRT

High-performance platform for deep learning inference

NCCL

Optimized primitives for collective multi-GPU communication

TensorRT Inference Server

Inference microservice for data center production that maximizes GPU utilization

DALI

Library for data loading and pre-processing in deep learning applications

Apex

A PyTorch extension: tools for easy mixed-precision and distributed training

DIGITS

Deep Learning GPU Training System

TensorFlow

Examples showing how to use TF-TRT, the TensorFlow integration with TensorRT

ONNX

TensorRT backend for ONNX

NVDLA

Open source Deep Learning Inference Accelerator

Docker

Build and run Docker containers leveraging NVIDIA GPUs

Kubernetes

Kubernetes on NVIDIA GPUs

K8s Device Plugin

Enable GPU support in Kubernetes with the NVIDIA device plugin

Container Runtime

Support multiple Linux container runtimes via the NVIDIA Container Runtime

Container Runtime Library

Automatically configure GNU/Linux containers leveraging NVIDIA hardware

GPU Monitoring Tools

Bindings and utilities for monitoring NVIDIA GPUs on Linux

MDL

NVIDIA Material Definition Language SDK

USD

Universal Scene Description

Falcor

Real-time rendering framework

PhysX

Physics simulation engine

TF-TRT Image Classification

Image classification with NVIDIA TensorRT from TensorFlow models

Redtail

Perception and AI components for autonomous mobile robotics

Flang

A Fortran compiler targeting LLVM

CUTLASS

CUDA Templates for Linear Algebra Subroutines

Thrust

Parallel algorithms library

AmgX

Distributed multigrid linear solver library for GPUs

OpenSeq2Seq

Toolkit for efficient experimentation with various sequence-to-sequence models

vid2vid

High-resolution photorealistic video-to-video translation

DeepRecommender

Deep learning for recommender systems

Milano

Tool for automating hyperparameter search for your models on a backend of your choice

Tacotron 2

PyTorch implementation with faster-than-realtime inference

Deep Learning Examples

Code samples optimized for Tensor Cores