NVIDIA CUDA 13.2 Expands Tile Programming to Ampere and Ada GPUs
Iris Coleman Mar 09, 2026 23:00
CUDA 13.2 extends tile-based GPU programming to older architectures, adds Python profiling tools, and delivers up to 5x speedups with new Top-K algorithms.
NVIDIA's CUDA 13.2 release extends its tile-based programming model to Ampere and Ada architectures, bringing what the company calls its largest platform update in two decades to a significantly broader hardware base. The update also introduces native Python profiling capabilities and new algorithms delivering up to 5x performance improvements for specific workloads.
Previously limited to Blackwell-class GPUs, CUDA Tile now supports compute capability 8.x architectures (Ampere and Ada) alongside the existing 10.x and 12.x (Blackwell) support. NVIDIA indicated that a future toolkit release will extend full support to all GPU architectures from Ampere onward, potentially covering millions of deployed professional and consumer GPUs.
Python Gets First-Class Treatment
The release significantly expands Python tooling. cuTile Python, the Python domain-specific-language (DSL) implementation of NVIDIA's tile programming model, now supports recursive functions, closures with capture, lambda functions, and custom reduction operations. Installation has been simplified to a single pip command that pulls in all dependencies without requiring a system-wide CUDA Toolkit installation.
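A custom reduction follows a familiar pattern: the caller supplies a binary combiner function and the library folds it over the data. As a purely illustrative CPU-side sketch of that pattern (this is plain Python, not the cuTile API, and `absmax` is a made-up combiner for the example):

```python
from functools import reduce

def absmax(a, b):
    # Custom binary combiner: keep whichever value has the larger magnitude.
    return a if abs(a) >= abs(b) else b

data = [3, -7, 2, 5, -1]

# A "custom reduction" is just a fold over the data with that combiner.
result = reduce(absmax, data)
print(result)  # -7
```

A combiner like this must be associative for a parallel implementation to split the fold across threads, which is the constraint any custom reduction passed to a tile kernel also has to satisfy.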
A new profiling interface called Nsight Python brings kernel profiling directly to Python developers. Using decorators, developers can automatically configure, profile, and plot kernel performance comparisons across multiple configurations. The tool exposes performance data through standard Python data structures for custom analysis.
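The decorator-driven workflow can be pictured with an ordinary Python timing decorator. The sketch below is a hypothetical stand-in, not the Nsight Python API; `profiled` and its attributes are invented for illustration, but it shows the shape of the idea: decorate a function, run it across configurations, and get the measurements back as standard Python data structures.

```python
import time
from functools import wraps

def profiled(runs=3):
    """Hypothetical stand-in for a profiling decorator: time a function
    over several runs and record the results on the function object."""
    def wrap(fn):
        @wraps(fn)
        def inner(*args, **kwargs):
            timings = []
            for _ in range(runs):
                start = time.perf_counter()
                out = fn(*args, **kwargs)
                timings.append(time.perf_counter() - start)
            # Expose timings as a plain Python list for custom analysis,
            # mirroring how Nsight Python surfaces standard data structures.
            inner.timings = timings
            return out
        return inner
    return wrap

@profiled(runs=5)
def saxpy(a, xs, ys):
    # Toy stand-in for a GPU kernel being profiled.
    return [a * x + y for x, y in zip(xs, ys)]

result = saxpy(2.0, [1.0, 2.0], [3.0, 4.0])
print(result)              # [5.0, 8.0]
print(len(saxpy.timings))  # 5
```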
Perhaps more significant for debugging workflows: Numba-CUDA kernels can now be debugged on actual GPU hardware for the first time. Developers can set breakpoints, step through statements, and inspect program state using CUDA-GDB or Nsight Visual Studio Code Edition.
Algorithm Performance Gains
The CUDA Core Compute Libraries (CCCL) 3.2 release introduces several optimized algorithms. The new cub::DeviceTopK provides up to 5x speedups over full radix sort when selecting the K largest or smallest elements from a dataset—a common operation in recommendation systems and search applications.
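The reason a dedicated Top-K primitive can beat a full sort is that it only tracks the K best candidates instead of ordering the entire dataset. A CPU-side Python sketch of the same idea using a bounded min-heap (illustrative only; `cub::DeviceTopK` itself is a CUDA C++ device-wide algorithm):

```python
import heapq
import random

def top_k_largest(values, k):
    # Maintain a min-heap of at most k elements: roughly O(n log k)
    # work, versus O(n log n) for sorting everything.
    heap = []
    for v in values:
        if len(heap) < k:
            heapq.heappush(heap, v)
        elif v > heap[0]:
            # New value beats the current k-th largest; swap it in.
            heapq.heapreplace(heap, v)
    return sorted(heap, reverse=True)

random.seed(0)
data = [random.randint(0, 10_000) for _ in range(1_000)]

# Both approaches agree, but the heap never sorts the full dataset.
assert top_k_largest(data, 10) == sorted(data, reverse=True)[:10]
print(top_k_largest(data, 3))
```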
Fixed-size segmented reduction shows even more dramatic improvements: up to 66x faster for small segment sizes and 14x for large segments compared to the existing offset-based implementation. The cuSOLVER library adds FP64-emulated calculations that leverage INT8 throughput, achieving up to 2x performance gains for QR factorization on B200 systems as matrix dimensions approach 80,000.
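What makes the fixed-size case faster is that segment i always spans elements [i*size, (i+1)*size), so the implementation can compute boundaries directly instead of fetching an offsets array. A small Python sketch of the two forms (illustrative only; the CCCL APIs are CUDA C++):

```python
def segmented_sum_offsets(values, offsets):
    # Offset-based form: each segment is described by an explicit
    # [begin, end) pair, costing an extra lookup per segment.
    return [sum(values[b:e]) for b, e in zip(offsets, offsets[1:])]

def segmented_sum_fixed(values, segment_size):
    # Fixed-size form: segment boundaries are computed on the fly,
    # the property the optimized CCCL path exploits.
    return [sum(values[i:i + segment_size])
            for i in range(0, len(values), segment_size)]

values = [1, 2, 3, 4, 5, 6]

# Both forms produce the same result when the offsets happen to be uniform.
assert segmented_sum_offsets(values, [0, 2, 4, 6]) == [3, 7, 11]
assert segmented_sum_fixed(values, 2) == [3, 7, 11]
print(segmented_sum_fixed(values, 3))  # [6, 15]
```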
Enterprise and Embedded Updates
Windows compute drivers now default to MCDM instead of TCC mode starting with driver version R595. This change addresses compatibility issues where some systems displayed errors at startup. MCDM enables WSL2 support, native container compatibility, and advanced memory management APIs previously reserved for WDDM mode. NVIDIA acknowledged that MCDM currently has slightly higher submission latency than TCC and is working to close that gap.
For embedded systems, the same Arm SBSA CUDA Toolkit now works across all Arm targets, including Jetson Orin devices. Jetson Thor gains Multi-Instance GPU support, allowing the integrated GPU to be partitioned into two isolated instances—useful for robotics applications that need to separate safety-critical motor control from heavier perception workloads.
The toolkit is available now through NVIDIA's developer portal. Developers using Ampere, Ada, or Blackwell GPUs can access the cuTile Python Quickstart guide to begin experimenting with tile-based programming.