Equipped with the Jetson Orin core module, it accelerates the development and deployment of complex edge AI applications.
Equipped with the Jetson Orin Nano module
67 TOPS Computing Power
Private Deployment of Large Models
Supports up to 4K Decoding
Equipped with the NVIDIA Jetson Orin Nano edge computing module (8GB version), featuring a hexa-core ARM CPU and a 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores. Capable of running multiple concurrent AI application pipelines, delivering powerful AI performance.
Boasts up to 67 TOPS of AI performance, enabling the execution of mainstream AI models, including Transformer-based models and ROS robotics workloads. Supports complex deep neural networks built with tools such as TensorFlow, PyTorch, MXNet, and OpenCV under the JetPack SDK, with capabilities like object recognition, target detection and tracking, speech recognition, and vision development, adaptable to diverse AI applications.
Supports private deployment of modern mainstream AI models, including large language models like LLaMA-2, ChatGLM, and Qwen; vision foundation models such as ViT, Grounding DINO, and SAM; as well as AIGC models like Stable Diffusion V1.5 for image generation.
Capable of decoding up to 11 channels of 1080p30 H.265 video; also supports 1×4K60, 2×4K30, or 5×1080p60 (H.265). Meets the demands of various AI-driven vision applications.
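As a rough budgeting aid (illustrative arithmetic only, not a vendor formula), the advertised decode combinations can be compared by aggregate pixel throughput:

```python
# Illustrative only: compare the advertised H.265 decode combinations
# by aggregate pixel throughput (pixels decoded per second).
# Each stream tuple is (count, width, height, fps).
combos = {
    "1x4K60":     [(1, 3840, 2160, 60)],
    "2x4K30":     [(2, 3840, 2160, 30)],
    "5x1080p60":  [(5, 1920, 1080, 60)],
    "11x1080p30": [(11, 1920, 1080, 30)],
}

def pixel_rate(streams):
    """Total pixels per second across all streams."""
    return sum(n * w * h * fps for n, w, h, fps in streams)

for name, streams in combos.items():
    print(f"{name}: {pixel_rate(streams) / 1e6:.0f} Mpx/s")
```

The combinations do not land on one exact pixel rate; the real limit also depends on per-stream overhead and bitrate, so treat this only as a rough way to size a multi-camera pipeline against the listed capabilities.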
The NVIDIA Jetson Orin series delivers powerful AI computing, large unified memory, and a comprehensive software stack, enabling highly efficient execution of the latest generative AI applications. It provides fast inference for any generative AI model built on the Transformer architecture, with outstanding edge performance in MLPerf benchmarks.
Backed by a complete AI software stack and ecosystem, including edge generative AI, NVIDIA Metropolis, and the Isaac platform, it lowers the barrier to edge AI and robotics development. With NVIDIA JetPack, Isaac ROS, and reference AI workflows, advanced technologies can be integrated into products without relying on expensive in-house AI resources.
Suitable for industries such as edge computing, robotics, large-scale model localization, smart cities, smart healthcare, and smart industrial applications.
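Private large-model deployment of the kind described above is typically exercised through a local serving framework such as Ollama. As a minimal sketch, assuming an Ollama server is already running on the device at its default port (11434) and a model has been pulled, a request to its `/api/generate` endpoint could look like this (model name and prompt are placeholders):

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes the server is running on-device).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    stream=False asks the server to return one complete JSON response
    instead of a stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST the prompt to the local Ollama server and return the reply text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With a server running and a model pulled, e.g.:
# print(generate("llama3", "Describe an edge AI box in one sentence."))
```

The model name `llama3` is an example; use whichever model was pulled on the device.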
Basic Specifications

| Item | AIO-OrinNano (8GB) | AIO-OrinNX (16GB) |
| --- | --- | --- |
| Module | Original NVIDIA Jetson Orin Nano (8GB) module | Original NVIDIA Jetson Orin NX (16GB) module |
| CPU | Hexa-core 64-bit Arm Cortex-A78AE v8.2 processor, up to 1.7GHz | Octa-core 64-bit Arm Cortex-A78AE v8.2 processor, up to 2.0GHz |
| AI performance | 67 TOPS | 157 TOPS |
| GPU | 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores | 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores |
| Video encoding | H.265: 1080p30 | H.265: 1×4K60, 3×4K30, 6×1080p60, 12×1080p30 |
| Video decoding | H.265: 1×4K60, 2×4K30, 5×1080p60, 11×1080p30 | H.265: 1×8K30, 2×4K60, 4×4K30, 9×1080p60, 18×1080p30 |
| Memory | 8GB LPDDR5 | 16GB LPDDR5 |
| Storage | 1 × M.2 M-Key (expandable PCIe NVMe SSD; supports 2242/2260/2280) | 1 × M.2 M-Key (expandable PCIe NVMe SSD; supports 2242/2260/2280) |
| Power | DC 12V (5.5 × 2.1mm barrel jack; supports 9V~24V wide-voltage input) | DC 12V (5.5 × 2.1mm barrel jack; supports 9V~24V wide-voltage input) |
| Power consumption | Typical: 6W (12V/500mA); Max: 30W (12V/2500mA) | Typical: 7.2W (12V/600mA); Max: 34.8W (12V/2900mA) |
| Size | 122.89mm × 85.04mm × 35.28mm | 122.89mm × 85.04mm × 35.28mm |
| Weight | Without fan: 129g; with fan: 180g | Without fan: 129g; with fan: 180g |
| Environment | Operating temperature: -20℃~60℃; storage temperature: -20℃~70℃; storage humidity: 10%~90%RH (non-condensing) | Operating temperature: -20℃~60℃; storage temperature: -20℃~70℃; storage humidity: 10%~90%RH (non-condensing) |
Software Support

| Item | Description |
| --- | --- |
| OS | Jetson Linux based on Ubuntu 22.04, providing a complete desktop Linux environment with graphics acceleration and support for libraries such as NVIDIA CUDA, TensorRT, and cuDNN. |
| Large models | Robotics: supports ROS robot models. Large language models: supports the Ollama local deployment framework for natural language processing, code generation, and assistant scenarios, as well as private deployment of large Transformer-architecture models such as Llama 3 and Phi-3 Mini. Large vision models: supports private deployment of models such as EfficientViT, NanoOWL, NanoSAM, SAM, and TAM. AI image generation: supports the ComfyUI graphical deployment framework for image restoration, style transfer, and image synthesis, plus private deployment of AIGC image-generation models such as Flux, Stable Diffusion, and Stable Diffusion XL. |
| Deep learning frameworks | Supports multiple cuDNN-accelerated deep learning frameworks, including PaddlePaddle, PyTorch, TensorFlow, MATLAB, MXNet, Caffe2, Chainer, and Keras, with support for custom operator development. Docker containerization is supported for convenient image-based deployment. |
| AI software stack | A comprehensive software stack with large unified memory enables fast inference for any generative AI model built on the Transformer architecture, with outstanding edge performance in MLPerf benchmarks. |
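The Docker containerization support noted above is commonly used with NVIDIA's L4T base images. A minimal sketch of a container for a Python application follows; the image tag and package names are assumptions and should be matched to the L4T/JetPack release actually installed on the device:

```dockerfile
# Sketch only: base image tag is an assumption; choose the l4t-base tag
# that matches the L4T/JetPack release flashed on the device.
FROM nvcr.io/nvidia/l4t-base:r36.2.0

# Install Python tooling inside the container (package names assumed).
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# app.py is a placeholder for your application entry point.
COPY app.py /opt/app/app.py
CMD ["python3", "/opt/app/app.py"]
```

At run time, pass NVIDIA's container runtime so the container can reach the GPU, e.g. `docker run --runtime nvidia --rm <image>`.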
Interface Specifications

| Item | Description |
| --- | --- |
| Network | Ethernet: 1 × Gigabit Ethernet (RJ45). WiFi: WiFi/Bluetooth module expandable via M.2 E-Key (2230); supports 2.4GHz/5GHz dual-band WiFi 6 (802.11a/b/g/n/ac/ax) and Bluetooth 5.2. 4G: 4G LTE expandable via Mini PCIe. 5G: expandable via M.2 B-Key (multiplexed with 4G and USB3.0 (1); module not fitted by default). |
| Video input | 2 × MIPI CSI DPHY (1 × 4-lane or 2 × 2-lane); Line in (routed to the double-row pin headers) |
| Video output | AIO-OrinNano: 1 × HDMI 2.0 (4K@30fps); AIO-OrinNX: 1 × HDMI 2.0 (4K@60fps) |
| Audio | 1 × 3.5mm audio jack (supports mic recording; CTIA standard) |
| USB | 2 × USB3.0 (max 1A; upper: USB3.0 (1), multiplexed with 5G; lower: USB3.0 (2)); 1 × Type-C (USB2.0 OTG/Debug) |
| Buttons | 1 × Reset, 1 × Recovery, 1 × Power |
| Other interfaces | 1 × fan header (4-pin, 1.25mm); 1 × SIM card slot; 1 × debug header (3-pin, 2mm); 1 × double-row pin header (2×10, 20-pin, 2.0mm pitch): USB2.0, SPI, 2 × I2C, Line in, Line out, GPIO; 1 × Phoenix connector (2×4-pin, 3.5mm pitch): 1 × RS485, 1 × RS232, 1 × CAN 2.0 |