Putting Large Models into a Small Box
AIBOX-OrinNano Large-Model AI Box
40 TOPS of computing power! Supports private deployment of mainstream large models,
bringing private AI capabilities to meet individual deployment needs.

High-performance edge computing module

The NVIDIA Jetson Orin Nano (8GB) edge computing module features a hexa-core Arm CPU and a 1024-core NVIDIA Ampere
architecture GPU with 32 Tensor Cores. It can run multiple concurrent AI application pipelines, providing
powerful inference performance.

Up to 40 TOPS of computing power

AIBOX-OrinNano delivers up to 40 TOPS of computing power and includes a 128GB PCIe NVMe SSD. The Jetson system,
based on Ubuntu 22.04, offers a comprehensive desktop Linux environment with accelerated graphics and supports libraries
such as NVIDIA CUDA, TensorRT, and cuDNN, making the device suitable for a wide range of AI application scenarios.
(Support for the above varies across JetPack SDK versions; for details, refer to the NVIDIA official website.)
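On the JetPack image, GPU acceleration can be confirmed from Python. The sketch below is illustrative only and assumes PyTorch has been installed from NVIDIA's Jetson wheels; it degrades gracefully so the check is safe to run on any machine:

```python
def gpu_summary():
    """Return a short description of the available CUDA device, if any.

    Assumes PyTorch is installed (on Jetson, from NVIDIA's wheel index);
    falls back to a plain message when it is not, so the probe never fails.
    """
    try:
        import torch  # optional dependency, treated as an assumption here
    except ImportError:
        return "PyTorch not installed"
    if torch.cuda.is_available():
        # On AIBOX-OrinNano this reports the integrated Orin (Ampere) GPU.
        return f"CUDA device: {torch.cuda.get_device_name(0)}"
    return "CUDA not available"

if __name__ == "__main__":
    print(gpu_summary())
```

A similar check can be extended to TensorRT or cuDNN by importing those bindings instead.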

Private deployment of large models

Generative AI at the edge

NVIDIA Jetson Orin offers unparalleled AI compute, large unified memory, and comprehensive software stacks, delivering superior
energy efficiency to drive the latest generative AI applications. It is capable of fast inference for any generative AI model powered by
the transformer architecture, providing superior edge performance on MLPerf.
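A privately deployed model can be driven entirely on-device. As a minimal sketch, the helper below only assembles a command line for llama.cpp's `llama-cli` runner; the binary name, flags, and model path are assumptions for illustration, not part of the product's documented tooling:

```python
from pathlib import Path

def build_llm_command(model_path, prompt, max_tokens=128, gpu_layers=99):
    """Assemble (but do not execute) a llama.cpp `llama-cli` invocation.

    Flags follow llama.cpp's CLI as an assumption; `gpu_layers` requests
    offloading of transformer layers to the Orin GPU via CUDA.
    """
    return [
        "llama-cli",
        "-m", str(Path(model_path)),  # GGUF model file (hypothetical path)
        "-p", prompt,                 # user prompt
        "-n", str(max_tokens),        # number of tokens to generate
        "-ngl", str(gpu_layers),      # layers offloaded to the GPU
    ]

# Illustrative usage; the model file name is a placeholder.
cmd = build_llm_command("/opt/models/llama-2-7b.Q4_K_M.gguf", "Hello")
print(" ".join(cmd))
```

The same pattern applies to other local runners; only the binary and flag names change.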

AI software stack and ecosystem

Democratize edge AI and robotics development with the world's most comprehensive AI software stack and ecosystem, powered by 
generative AI at the edge and the NVIDIA Metropolis and Isaac™ platforms. NVIDIA JetPack™, Isaac ROS, and reference AI workflows enable
seamless integration of cutting-edge technologies into your products, eliminating the need for costly internal AI resources. Experience
end-to-end acceleration for AI applications and speed your time to market using the same powerful technologies that drive data centers
and cloud deployments.

Supports 11-channel 1080p30 H.265 video decoding

AIBOX-OrinNano supports H.265 video decoding at up to 11×1080p30, or alternative configurations such as 1×4K60, 2×4K30,
or 5×1080p60, meeting the diverse demands of AI application scenarios.
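These figures can be reasoned about in 1080p30-equivalent units (pixels × frame rate relative to one 1080p30 stream). The sketch below is a back-of-the-envelope budget check for mixed camera streams, not an official sizing tool; real limits also depend on codec session counts and memory:

```python
# One unit = a 1920x1080 stream at 30 fps.
BASE = 1920 * 1080 * 30

def units(width, height, fps):
    """1080p30-equivalent units consumed by one stream."""
    return (width * height * fps) / BASE

def fits(streams, budget=11):
    """True if a stream mix stays within an assumed decoder budget.

    The default budget of 11 units matches the rated 11x1080p30 H.265
    figure; this is a rough model, not a guarantee.
    """
    return sum(units(*s) for s in streams) <= budget

# Example: four 1080p30 cameras plus one 4K30 stream.
mix = [(1920, 1080, 30)] * 4 + [(3840, 2160, 30)]
print(fits(mix))  # 4 + 4 = 8 units -> True
```

Note that a 4K30 stream weighs four units and a 4K60 stream eight, which is consistent with the rated 1×4K60 and 2×4K30 configurations.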

Aluminum alloy enclosure with efficient heat dissipation

The AI box features an industrial-grade all-metal enclosure whose aluminum alloy structure conducts heat away from the module. The top
cover carries a perforated hexagonal grille along its sides for external airflow and efficient heat dissipation, ensuring stable computing
performance even under high-temperature operating conditions, and combining elegance with high efficiency. The compact, finely crafted
device operates stably and meets the needs of various industrial-grade applications.

A wide range of applications

AIBOX-OrinNano is widely used in intelligent surveillance, AI education, computing-power services,
edge computing, private deployment of large models, data security, and privacy protection.

Data security
Edge computing
Large models
AI education
Computing services
Intelligent surveillance

Specifications

Basic Specifications (AIBOX-OrinNX 16GB / AIBOX-OrinNano 8GB)

Module:             NVIDIA Jetson Orin NX (16GB) module / NVIDIA Jetson Orin Nano (8GB) module
CPU:                Octa-core 64-bit Arm Cortex-A78AE v8.2, up to 2.0GHz / Hexa-core 64-bit Arm Cortex-A78AE v8.2, up to 1.5GHz
AI Performance:     100 TOPS / 40 TOPS
GPU:                1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores (both models)
Video Encoding:     H.265: 1×4K60, 3×4K30, 6×1080p60, 12×1080p30 / H.265: 1080p30
Video Decoding:     H.265: 1×8K30, 2×4K60, 4×4K30, 9×1080p60, 18×1080p30 / H.265: 1×4K60, 2×4K30, 5×1080p60, 11×1080p30
Memory:             16GB LPDDR5 / 8GB LPDDR5
Storage Expansion:  128GB PCIe NVMe SSD (installed inside the device)
Power:              DC 12V (5.5mm × 2.1mm barrel jack)
Power Consumption:  Normal: 7.2W (12V/600mA), Max: 33.6W (12V/2800mA) / Normal: 7.2W (12V/600mA), Max: 18W (12V/1500mA)
Size:               93.4mm × 93.4mm × 50mm
Weight:             ≈ 500g
Environment:        Operating temperature: -20°C to 60°C; Storage temperature: -20°C to 70°C; Storage humidity: 10%–90% RH (non-condensing)

Software Support

OS:                 The Jetson system, based on Ubuntu 22.04, offers a comprehensive desktop Linux environment with accelerated
                    graphics, supporting libraries such as NVIDIA CUDA, TensorRT, and cuDNN.
Large Models:       Robotic models: ROS models.
                    Large language models: private deployment of large-parameter models built on the Transformer
                    architecture, such as LLaMA 2, ChatGLM, and Qwen.
                    Vision models: ViT, Grounding DINO, and SAM.
                    AI painting: Stable Diffusion v1.5 image generation model in the AIGC field.
Traditional Network Architectures:
                    Traditional network architectures such as CNN, RNN, and LSTM; deep learning frameworks including TensorFlow,
                    PyTorch, MXNet, PaddlePaddle, ONNX, and Darknet; custom operator development.
                    Docker container management facilitates easy image deployment.
AI Software Stack:  NVIDIA Jetson Orin offers unparalleled AI compute, large unified memory, and comprehensive software stacks,
                    delivering superior energy efficiency to drive the latest generative AI applications, with fast inference for
                    transformer-based generative AI models and superior edge performance on MLPerf.

Interfaces

Ethernet:           1× Gigabit Ethernet (1000Mbps, RJ45)
Video Output:       1× HDMI 2.1 (4K@60fps)
USB:                2× USB 3.0 (Max: 1A)
Watchdog:           External watchdog supported
Other:              1× Type-C (USB 2.0 OTG), 1× Console (debug serial port), 1× Recovery, 1× power button