
EC-OrinNX

100 TOPS Edge Computing Computer

Equipped with the NVIDIA Jetson Orin NX core module (16GB RAM version), the EC-OrinNX delivers up to 100 TOPS of computing power,
making it capable of running today's mainstream AI models. It enables larger and more complex deep neural networks for object recognition,
target detection and tracking, speech recognition, and other AI development functions, meeting the demands of a wider range of artificial
intelligence application scenarios.

High-performance edge computing module

The NVIDIA Jetson Orin NX edge computing module (16GB version) features an octa-core Arm CPU and a 1024-core NVIDIA Ampere architecture
GPU with 32 Tensor Cores. It can run multiple concurrent AI application pipelines, delivering enhanced AI performance.

Up to 100 TOPS of computing power

The EC-OrinNX delivers up to 100 TOPS of computing power, enabling it to run today's mainstream AI models, including Transformer-based models
and ROS-based robotics workloads. It supports larger and more complex deep neural networks with frameworks and libraries such as TensorFlow,
PyTorch, MXNet, and OpenCV on top of the NVIDIA JetPack SDK. The computer covers object recognition, target detection and tracking,
speech recognition, and other AI development functions, making it suitable for a wide range of AI application scenarios.
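
As a quick sanity check of the GPU-accelerated stack described above, the short Python sketch below verifies that the Orin GPU is visible to PyTorch and runs a small convolution on it. It is a minimal illustration, not part of the shipped software, and assumes a JetPack-compatible PyTorch build with CUDA support is installed.

    # Minimal sketch: confirm the GPU is usable from PyTorch and run a tiny workload.
    import torch

    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))

    x = torch.randn(1, 3, 224, 224, device="cuda")            # dummy image-sized input
    conv = torch.nn.Conv2d(3, 16, kernel_size=3).to("cuda")   # small example layer
    print(conv(x).shape)                                       # torch.Size([1, 16, 222, 222])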

Private deployment of large models

Generative AI at the edge

NVIDIA Jetson Orin offers unparalleled AI compute, large unified memory, and comprehensive software stacks, delivering superior energy efficiency
to drive the latest generative AI applications. It is capable of fast inference for any generative AI model powered by the
transformer architecture, providing superior edge performance on MLPerf.
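
To illustrate what generative AI at the edge looks like in practice, here is a minimal Python sketch that loads a small Transformer language model and generates text on the device GPU. It assumes PyTorch with CUDA support and the Hugging Face transformers package are installed; the model name is only an example placeholder, not a bundled model.

    # Minimal sketch: run a small Transformer LLM on the Orin GPU.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2-0.5B-Instruct"   # example checkpoint; swap for a locally deployed model
    device = "cuda" if torch.cuda.is_available() else "cpu"

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(device)

    inputs = tokenizer("Edge AI makes it possible to", return_tensors="pt").to(device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))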

AI software stack and ecosystem

Democratize edge AI and robotics development with the world's most comprehensive AI software stack and ecosystem, powered by generative
AI at the edge and the NVIDIA Metropolis and Isaac™ platforms. NVIDIA JetPack™, Isaac ROS, and reference AI workflows enable seamless
integration of cutting-edge technologies into your products, eliminating the need for costly internal AI resources. Experience end-to-end
acceleration for AI applications and speed your time to market using the same powerful technologies that
drive data centers and cloud deployments.

Supports 18-channel 1080p30 H.265 video decoding

The EC-OrinNX supports up to 1×8K30 (H.265), 2×4K60 (H.265), 4×4K30 (H.265), 9×1080p60 (H.265), and 18×1080p30 (H.265) video decoding,
meeting the diverse demands of AI application scenarios.
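
One common way to exercise the hardware decoder from an application is through a GStreamer pipeline. The Python sketch below decodes an H.265 file with the NVIDIA decoder element and counts the frames via OpenCV; it assumes the JetPack OpenCV build with GStreamer support, and the file path is a placeholder.

    # Minimal sketch: hardware-accelerated H.265 decoding via GStreamer + OpenCV.
    import cv2

    pipeline = (
        "filesrc location=/path/to/sample_1080p30_h265.mp4 ! qtdemux ! h265parse ! "
        "nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink drop=true"
    )
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

    frames = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1            # process each decoded frame here
    cap.release()
    print("decoded frames:", frames)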

All-new system image

The Jetson system image, based on Ubuntu 22.04, offers a comprehensive desktop Linux environment with accelerated graphics,
supporting libraries such as NVIDIA CUDA 11.4.19, TensorRT 8.5.2, and cuDNN 8.6.0.
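
The library versions above can be confirmed directly on the device. The Python sketch below is one way to do so; it assumes the TensorRT Python bindings shipped with JetPack are installed and that the CUDA toolkit sits in its standard /usr/local/cuda location.

    # Minimal sketch: report the TensorRT and CUDA versions installed on the image.
    import subprocess
    import tensorrt as trt

    print("TensorRT:", trt.__version__)
    subprocess.run(["/usr/local/cuda/bin/nvcc", "--version"], check=False)  # prints the CUDA toolkit version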

Aluminum alloy enclosure with passive heat dissipation

With an industrial-grade aluminum alloy enclosure, the device provides efficient fanless passive heat dissipation, ensuring stable 24/7 operation
to meet various industrial application requirements. Designed to save space, it supports wall mounting for flexible installation on walls or
industrial automation machinery.

Extensive connectivity

A wide range of interface options includes Gigabit Ethernet (RJ45), HDMI 2.0, USB 3.0, RS485, RS232, CAN, Mini PCIe (4G), M.2 (WiFi),
and M.2 (SSD). These interfaces facilitate the connection of peripheral devices, enabling different product applications across various fields.
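
For the industrial interfaces, a typical pattern is to drive the serial ports with pyserial and the CAN port through SocketCAN. The Python sketch below shows both; it assumes pyserial and python-can are installed, and the device names (/dev/ttyTHS0, can0) are placeholders that depend on how the ports are enumerated and configured.

    # Minimal sketch: talk to the RS232/RS485 and CAN interfaces from user space.
    import serial
    import can

    # Serial (RS232/RS485): open a UART, send a frame, read a reply
    with serial.Serial("/dev/ttyTHS0", baudrate=115200, timeout=1) as port:
        port.write(b"hello\r\n")
        print(port.readline())

    # CAN 2.0: send one frame on a SocketCAN interface (brought up beforehand, e.g. as can0)
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    bus.send(can.Message(arbitration_id=0x123, data=[1, 2, 3, 4], is_extended_id=False))
    bus.shutdown()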

A wide range of applications

The computer is widely used in edge computing, robotics, local deployment of large models, smart
cities, smart healthcare, smart industry, and more.

Robots
Edge computing
Large models
Smart industry
Smart cities
Smart healthcare

Specifications

Basic Specifications

Module
  EC-OrinNX (16GB): NVIDIA Jetson Orin NX (16GB) module
  EC-OrinNano (8GB): NVIDIA Jetson Orin Nano (8GB) module

CPU
  EC-OrinNX: Octa-core Arm Cortex-A78AE v8.2 64-bit CPU, up to 2.0GHz
  EC-OrinNano: Hexa-core Arm Cortex-A78AE v8.2 64-bit CPU, up to 1.5GHz

AI Performance
  EC-OrinNX: 100 TOPS
  EC-OrinNano: 40 TOPS

GPU
  1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores (both models)

Video Encoding
  EC-OrinNX: H.265: 1×4K60, 3×4K30, 6×1080p60, 12×1080p30
  EC-OrinNano: H.265: 1080p30

Video Decoding
  EC-OrinNX: H.265: 1×8K30, 2×4K60, 4×4K30, 9×1080p60, 18×1080p30
  EC-OrinNano: H.265: 1×4K60, 2×4K30, 5×1080p60, 11×1080p30

Memory
  EC-OrinNX: 16GB LPDDR5
  EC-OrinNano: 8GB LPDDR5

Storage
  1 × M.2 (internal, PCIe NVMe SSD expansion available, supporting 2242/2260/2280)

Power
  DC 12V (5.5mm × 2.1mm, 9-24V wide input voltage)

OS
  Jetson system based on Ubuntu 22.04, offering a comprehensive desktop Linux environment with accelerated graphics, supporting libraries such as NVIDIA CUDA 11.4.19, TensorRT 8.5.2, and cuDNN 8.6.0

Software Support
  Robotic models: ROS models
  Large language models: private deployment of ultra-large-parameter models under the Transformer architecture, such as LLaMA 2, ChatGLM, and Qwen
  Vision models: ViT, Grounding DINO, and SAM
  AI painting: Stable Diffusion V1.5 image generation model in the AIGC field
  Traditional network architectures: CNN, RNN, and LSTM; a variety of deep learning frameworks, including TensorFlow, PyTorch, MXNet, ONNX, PaddlePaddle, and Darknet; custom operator development
  Docker containerization: Docker container management technology, facilitating easy image deployment

Dimensions
  188mm × 88.44mm × 50.65mm

Environment
  Operating temperature: -20℃~60℃; operating humidity: 10%~90% RH (non-condensing)

Interfaces

Network
  Ethernet: 1 × Gigabit Ethernet (RJ45)
  WiFi: WiFi 6 / BT 5.2 module expansion available via the M.2 interface
  4G: 4G LTE expansion available via the Mini PCIe interface

Display
  1 × HDMI 2.0 (4K@60fps)

Audio
  1 × 3.5mm audio jack, supporting MIC recording, CTIA standard

USB
  2 × USB 3.0

Other
  1 × Type-C (USB 2.0/DEBUG), 1 × SIM card slot, 1 × Phoenix connector (2×4 pin, 3.5mm pitch): 1 × RS485, 1 × RS232, 1 × CAN 2.0