EC-AGXOrin Edge Computers

275 TOPS NPU Computing Power

· Equipped with NVIDIA Jetson module

· Private deployment of mainstream AI models

· Multiple deep learning frameworks

· AI software stack and ecosystem

· High-bandwidth LPDDR5 memory

· Up to 22 channels of 1080P video decoding

· Powerful network capabilities

· Rich expansion interfaces

EC-AGXOrin Edge Computers

Equipped with NVIDIA Jetson AGX Orin (64GB) module, it delivers up to 275 TOPS of computing power and supports various large AI models and deep learning frameworks. It enables 22-channel 1080P video decoding and 16-channel 1080P video encoding. Designed with an industrial-grade all-aluminum casing and dual cooling fans, it ensures stable 24/7 operation.

Equipped with Jetson AGX Orin

Equipped with NVIDIA Jetson AGX Orin (64GB) module, it features a 12-core CPU and a 2048-core GPU based on the NVIDIA Ampere architecture (with 64 Tensor Cores). Capable of running multiple concurrent AI application pipelines, it delivers high inference performance, providing robust computational power for edge AI computing, intelligent robotics, and other scenarios.

Private Deployment

It supports private deployment of mainstream large AI models, including ROS-based robotics models; large language models built on the Transformer architecture, such as the Gemma, ChatGLM, Qwen, and Phi series; large vision models such as EfficientViT, NanoOWL, NanoSAM, SAM, and TAM; and image generation models such as the Flux and Stable Diffusion series.
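
As an illustrative sketch of local deployment (not vendor-supplied code), a Transformer-family model such as Qwen can be run on the device with the Hugging Face transformers library; the checkpoint name below is only an example:

    # Minimal local-inference sketch for a Transformer LLM on the Orin GPU.
    # Assumes PyTorch and transformers are installed; the Qwen checkpoint
    # name is illustrative, not part of the product's stock image.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2.5-1.5B-Instruct"  # example checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="cuda"
    )

    messages = [{"role": "user", "content": "Summarize edge AI in one sentence."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))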

Deep Learning Frameworks

It supports the Ollama framework for local deployment of large models, the ComfyUI workflow framework for AI image generation, and deep learning frameworks accelerated by cuDNN, including PaddlePaddle, PyTorch, TensorFlow, MATLAB, MXNet, and Keras. It also supports custom operator development and Docker containerization for deployment management.
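
For instance, a model served by Ollama can be queried over its local REST API (default port 11434); this minimal sketch assumes the Ollama service is running and a model named "qwen2" has already been pulled:

    # Query a locally deployed Ollama model over its HTTP API.
    # Assumes the Ollama service is running and "qwen2" was pulled beforehand.
    import json
    import urllib.request

    payload = json.dumps({
        "model": "qwen2",              # example model name
        "prompt": "What is TensorRT?",
        "stream": False,               # return a single JSON response
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])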

AI Software Stack and Ecosystem

A full AI software stack and ecosystem, spanning edge generative AI, NVIDIA Metropolis, and Isaac™, makes edge AI and robotics development accessible. With NVIDIA JetPack and Isaac ROS, developers can accelerate AI applications end to end and integrate advanced technology without needing costly in-house experts.

LPDDR5 Memory

LPDDR5 offers larger memory capacity, higher bandwidth, faster data transfer rates, lower power consumption, and more advanced Error Correction Code (ECC) technology, meeting the memory-capacity and response-speed requirements of privately deployed large models.

Video AI Performance

Supports up to 22 channels of 1080P@30fps or 1 channel of 8K@30fps video decoding, and 16 channels of 1080P@30fps or 2 channels of 4K@60fps video encoding. This robust video processing capability meets the demands of various AI application scenarios.
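
As a rough sketch of how hardware-accelerated decoding is typically driven on Jetson, the pipeline below uses the JetPack GStreamer plugin nvv4l2decoder through the Python GObject bindings; the input file name is a placeholder:

    # Decode an H.265 file with the Jetson hardware decoder via GStreamer.
    # Assumes JetPack's GStreamer plugins are installed; "sample.mp4" is a
    # placeholder input file.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)
    pipeline = Gst.parse_launch(
        "filesrc location=sample.mp4 ! qtdemux ! h265parse "
        "! nvv4l2decoder ! fakesink sync=false"
    )
    pipeline.set_state(Gst.State.PLAYING)
    # Block until end-of-stream or an error, then clean up.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)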

Powerful Network Capabilities

Features one 10-Gigabit Ethernet port, five Gigabit Ethernet ports, and a built-in GPS module. It also supports dual-band Wi-Fi 6, 5G, and 4G expansion, meeting diverse network connectivity requirements across various scenarios.

Rich Expansion Interfaces

Equipped with interfaces including GMSL2, HDMI, USB 3.0, RS485, RS232, CAN, Type-C, digital input, and digital output, facilitating seamless connectivity with various peripherals.
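
As an example of driving one of these interfaces, the sketch below sends a frame on one of the CAN ports with the python-can library, assuming the port is exposed as a SocketCAN device (the "can0" name and bitrate are assumptions):

    # Send one CAN 2.0 frame through a SocketCAN interface.
    # Assumes the CAN port appears as "can0" and was brought up first, e.g.
    # with: ip link set can0 up type can bitrate 500000
    import can

    bus = can.Bus(interface="socketcan", channel="can0")
    msg = can.Message(arbitration_id=0x123,
                      data=[0x11, 0x22, 0x33, 0x44],
                      is_extended_id=False)
    bus.send(msg)
    print("frame sent on", bus.channel_info)
    bus.shutdown()
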
275 TOPS NPU Computing Power

The integrated Jetson AGX Orin (64GB) module delivers up to 275 TOPS of computing power and can smoothly run mainstream modern AI models, including robotics models, large language models, large vision models, and AI image generation models, while also enabling advanced functionality such as object recognition, target detection and tracking, speech recognition, and other vision-based development tasks.
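
A minimal GPU object-detection sketch, assuming PyTorch and torchvision are installed (the pretrained Faster R-CNN checkpoint is an illustrative choice, not a bundled model):

    # Run a pretrained object detector on the Orin GPU with torchvision.
    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval().to(device)

    image = torch.rand(3, 720, 1280, device=device)  # placeholder frame
    with torch.no_grad():
        detections = model([image])[0]
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        if score > 0.5:  # keep confident detections only
            print(label.item(), round(score.item(), 2), box.tolist())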

Specifications

EC-AGXOrin

Basic Specifications

SoC: NVIDIA Jetson AGX Orin (64GB, original module)
CPU: 12-core 64-bit Arm Cortex-A78AE (ARMv8.2), up to 2.2GHz
AI performance: 275 TOPS
GPU: NVIDIA Ampere architecture GPU with 2048 CUDA cores and 64 Tensor Cores
Video encoding: H.265: 2×4K60, 4×4K30, 8×1080p60, 16×1080p30
Video decoding: H.265: 1×8K30, 3×4K60, 7×4K30, 11×1080p60, 22×1080p30
Memory: 64GB LPDDR5
Storage: 64GB eMMC
Storage expansion: 1 × M.2 M-KEY (expandable PCIe NVMe SSD, 2280 form factor), 1 × TF card
Power: DC 24V (5.5 × 2.1mm; supports 9V~36V wide-voltage input)
Size: 277.95mm × 136.09mm × 88.0mm
Environment: Operating temperature: -20℃~60℃; Storage temperature: -20℃~70℃; Storage humidity: 10%~90%RH (non-condensing)

Software Support

OS: Jetson system based on Ubuntu 22.04, providing a complete desktop Linux environment with graphics acceleration and support for libraries such as NVIDIA CUDA, TensorRT, cuDNN, and more.
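
A quick generic check that the accelerated libraries are visible from Python (assuming the JetPack build of PyTorch is installed):

    # Verify GPU, CUDA, and cuDNN availability from Python.
    import torch

    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
    print("cuDNN enabled:", torch.backends.cudnn.is_available(),
          "version:", torch.backends.cudnn.version())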

Large models:
· Robotics: supports ROS robot models.
· Large language models: supports private deployment of very large-parameter models under the Transformer architecture, such as the DeepSeek-R1, Gemma, Llama, ChatGLM, Qwen, and Phi series.
· Large vision models: supports private deployment of EfficientViT, NanoOWL, NanoSAM, SAM, and TAM.
· AI image generation: supports private deployment of the Flux, Stable Diffusion, and Stable Diffusion XL models.

Deep learning frameworks:
· Supports the Ollama local large-model deployment framework for natural language processing, code generation, and assistant scenarios.
· Supports the ComfyUI graphical deployment framework for image restoration, image style transfer, and image synthesis.
· Supports multiple cuDNN-accelerated deep learning frameworks, including PaddlePaddle, PyTorch, TensorFlow, MATLAB, MXNet, and Keras.
· Supports custom operator development.
· Supports Docker containerization for straightforward image-based deployment.

AI software stack: The NVIDIA Jetson Orin series delivers powerful AI compute, large unified memory, and a comprehensive software stack to power the latest generative AI applications. It enables fast inference on any Transformer-based generative AI model, delivering superior edge performance on MLPerf benchmarks.

Interface Specifications

Network:
· Ethernet: 1 × 10-Gigabit Ethernet (RJ45), 5 × Gigabit Ethernet (RJ45; GE2, GE3, GE4, and GE5 support PSE)
· WiFi: WiFi/Bluetooth module expandable via M.2 E-KEY (2230); supports 2.4GHz/5GHz dual-band WiFi 6 (802.11a/b/g/n/ac/ax) and Bluetooth 5.2
· 4G: 4G LTE expansion via Mini PCIe
· 5G: 5G expansion via M.2 B-KEY

GPS: Supports GPS positioning for real-time positioning, tracking, and time calibration of field devices (synchronized with UTC)
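
As a sketch of consuming the GPS output, the snippet below parses NMEA sentences with the pyserial and pynmea2 libraries; the serial device path is a placeholder, since the actual node depends on system configuration:

    # Read NMEA sentences from the GPS module and print position fixes.
    # "/dev/ttyUSB0" is a placeholder; the real device node may differ.
    import serial
    import pynmea2

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
        for _ in range(100):
            line = port.readline().decode("ascii", errors="ignore").strip()
            if line.startswith(("$GPGGA", "$GNGGA")):
                fix = pynmea2.parse(line)
                print("lat:", fix.latitude, "lon:", fix.longitude,
                      "time:", fix.timestamp)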

Video input: 8 × GMSL2 (input via two 4-Pin Mini FAKRA connectors)

Video output: 1 × HDMI 2.0 (4K@60Hz)

Audio: 1 × 3.5mm audio jack (supports MIC recording, CTIA standard)

USB: 4 × USB 3.0 (max 1A per port), 1 × Type-C (USB 3.2 OTG), 1 × Type-C (debug)

Antennas: 4 × 5G antennas, 1 × 4G/5G antenna, 1 × GPS antenna, 1 × WiFi antenna

Buttons: 1 × Reset, 1 × Recovery, 1 × Power

Other interfaces: 1 × SIM card; 1 × Phoenix connector (2×12 Pin, 3.5mm pitch) providing 1 × RS485, 1 × RS232, 2 × CAN 2.0, 1 × UART, 1 × IO input, 1 × IO output

Customization

The Firefly team, with over 20 years of experience in product design, research and development, and production, provides hardware, software, and complete-machine customization, as well as OEM services.