
EC-ThorT5000 Computers

2070 TFLOPS Computing Power

· Equipped with NVIDIA Jetson module

· Private deployment of mainstream AI models

· Multiple deep learning frameworks

· AI software stack and ecosystem

· JetPack 7 with SBSA-Aligned Architecture

· Up to 92 channels of 1080P video decoding

· Four 10-Gigabit Ethernet Ports

· Rich expansion interfaces

Supported model ecosystem: ROS, ChatGLM, Qwen, Stable Diffusion

EC-ThorT5000 Edge Computers

Equipped with the NVIDIA Jetson T5000 module from the Jetson Thor series, it delivers up to 2070 TFLOPS of computing power and supports a variety of large AI models. It handles 92-channel 1080P video decoding and 50-channel 1080P video encoding, and provides four 10-gigabit Ethernet ports alongside four gigabit Ethernet ports. Its industrial-grade all-aluminum casing and dual cooling fans ensure stable 24/7 operation.

Equipped with Jetson T5000

Equipped with the NVIDIA Jetson T5000 module, it features a 14-core Arm Neoverse-V3AE CPU and a 2560-core GPU based on the NVIDIA Blackwell architecture (with 96 fifth-generation Tensor Cores). Capable of running multiple concurrent AI application pipelines, it delivers high inference performance, providing robust computational power for edge AI computing, intelligent robotics, and other scenarios.

Private Deployment

It supports the private deployment of mainstream large AI models, including the ROS robot model; large language models under the Transformer architecture such as the Gemma series, ChatGLM series, Qwen series, and Phi series; large vision models like EfficientVIT, NanoOWL, NanoSAM, SAM, and TAM; as well as image generation models such as the Flux and Stable Diffusion series.
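
As a rough illustration of private, on-device deployment, a Transformer chat model from one of the listed series can be loaded locally with the Hugging Face transformers library; the model ID, precision, and prompt below are placeholder assumptions, not a vendor-validated configuration:

```python
# Minimal sketch of private, on-device LLM inference (assumes PyTorch and the
# transformers library are installed and the weights have been downloaded).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-1.5B-Instruct"  # example checkpoint, not a tested recommendation

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16  # half precision to reduce memory use
).to("cuda")

prompt = "Summarize the benefits of edge AI deployment in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```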

Deep Learning Frameworks

It supports the Ollama framework for local deployment of large models, the ComfyUI workflow framework for AI image generation, and deep learning frameworks accelerated by cuDNN, including PaddlePaddle, PyTorch, TensorFlow, MATLAB, MXNet, and Keras. It also supports custom operator development and Docker containerization management technology.
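
As a quick sanity check (assuming PyTorch is installed with CUDA support, which this spec sheet does not guarantee), the following sketch confirms that the GPU and the cuDNN backend are visible to the framework:

```python
# Minimal sketch: verify that CUDA and the cuDNN backend are visible to PyTorch.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
print("cuDNN enabled:", torch.backends.cudnn.enabled)

# Exercise the cuDNN path with a small convolution on the GPU.
x = torch.randn(1, 3, 224, 224, device="cuda")
conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1).to("cuda")
print("Output shape:", tuple(conv(x).shape))
```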

AI Software Stack and Ecosystem

A full AI software stack and ecosystem, powered by edge generative AI and NVIDIA Metropolis and Isaac™, makes edge AI and robotics development accessible. With NVIDIA JetPack and Isaac ROS, developers can accelerate AI applications end-to-end and integrate advanced technology without needing costly in-house experts.

JetPack 7 with SBSA

JetPack 7 aligns Jetson with the Server Base System Architecture (SBSA) standard. The SBSA specification defines key hardware and firmware interfaces, enabling stronger OS support, simplified software porting, and seamless enterprise integration. This support, along with a unified CUDA 13.0 across Arm platforms, streamlines development and ensures consistency from servers to Jetson Thor.

Video AI Performance

Supports up to 92 channels of 1080P@30fps or 4 channels of 8K@30fps video decoding, and up to 50 channels of 1080P@30fps or 6 channels of 4K@60fps video encoding. This robust video processing capability meets the demands of various AI application scenarios.
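
One sketch of how the hardware decoders might be exercised from application code is to open a stream through a GStreamer pipeline in OpenCV; the element names (e.g. nvv4l2decoder, nvvidconv), the RTSP URL, and the assumption that OpenCV was built with GStreamer support all depend on the installed JetPack release:

```python
# Sketch: hardware-accelerated decode of an RTSP H.264 stream via GStreamer + OpenCV.
# Element names and the stream URL are assumptions; adjust to the installed plugins.
import cv2

pipeline = (
    "rtspsrc location=rtsp://192.168.1.100:554/stream latency=100 ! "
    "rtph264depay ! h264parse ! nvv4l2decoder ! "   # hardware H.264 decoder
    "nvvidconv ! video/x-raw,format=BGRx ! "        # convert out of NVMM memory
    "videoconvert ! video/x-raw,format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Hand each decoded frame to downstream AI inference here.
    print("Decoded frame:", frame.shape)
cap.release()
```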

Four 10-Gigabit Ethernet Ports

Features four 10-gigabit Ethernet ports, four gigabit Ethernet ports, and a built-in GPS module. It also supports dual-band Wi-Fi 6, 5G, and 4G expansion, meeting diverse network connectivity requirements across various scenarios.

Rich Expansion Interfaces

Equipped with interfaces including GMSL2, HDMI, USB 3.0, RS485, RS232, CAN, Type-C, digital input, and digital output, facilitating seamless connectivity with various peripherals.
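
As a small, hypothetical example of peripheral access over the serial interfaces, a sensor wired to an RS232/RS485 port could be polled with pyserial; the device node /dev/ttyTHS0, the baud rate, and the READ command are assumptions, not documented values for this product:

```python
# Hypothetical sketch: poll a sensor attached to an RS232/RS485 port with pyserial.
# The device node, baud rate, and "READ" command are placeholders.
import serial

with serial.Serial("/dev/ttyTHS0", baudrate=9600, timeout=1.0) as port:
    port.write(b"READ\r\n")      # query understood by the (hypothetical) sensor
    reply = port.readline()      # one line of response, or b"" on timeout
    print("Sensor replied:", reply.decode(errors="replace").strip())
```
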
2070 TFLOPS Computing Power

The integrated NVIDIA Jetson T5000 module from the Jetson Thor series delivers up to 2070 TFLOPS (FP4) of computing power. It ensures smooth execution of mainstream AI models, including robotics models, large language models, large vision models, and AI painting models, and supports the deployment of larger and more complex deep neural networks. As a result, it enables object recognition, target detection and tracking, speech recognition, and other visual development functions, fully meeting the demands of high-performance AI applications.
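
As a sketch of the vision workloads this compute budget targets, a pretrained torchvision detector can run object detection on the GPU as below; the model choice, input file, and confidence threshold are illustrative assumptions rather than benchmarked settings for this device:

```python
# Sketch: object detection on the GPU with a pretrained torchvision model.
# Model choice, input file, and threshold are illustrative only.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

weights = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights)
model = model.eval().to("cuda")

img = convert_image_dtype(read_image("frame.jpg"), torch.float32).to("cuda")
with torch.inference_mode():
    detections = model([img])[0]

for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.5:
        print(weights.meta["categories"][label], float(score))
```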

EC-ThorT5000 Specifications

Basic Specifications

SOC

NVIDIA Jetson Thor series Jetson T5000 (original module)

CPU

14-core 64-bit ARM Neoverse-V3AE processor with a frequency of up to 2.6GHz

AI performance

2070 TFLOPS (FP4—Sparse)

GPU

2560-core NVIDIA Blackwell architecture GPU with 96 fifth-generation Tensor Cores, Multi-Instance GPU (MIG) support with 10 TPCs

Video encoding

6×4K60, 12×4K30, 24×1080p60, 50×1080p30

Video decoding

4×8K30, 10×4K60, 22×4K30, 46×1080p60, 92×1080p30

Memory

128GB LPDDR5X, 273GB/s

Storage expansion

2 × M.2 M-KEY (Expandable PCIe NVMe 2280 SSD, inside the computer)

Power

DC 24V (5.5 × 2.1mm, supports 9V~36V wide voltage input)

Size

277.95mm × 136.09mm × 88.0mm

Environment

Operating Temperature: -20℃~60℃, Storage Temperature: -20℃~70℃, Storage Humidity: 10%~90%RH (non-condensing)

Software Support

OS

The Jetson system, based on Ubuntu 24.04, provides a complete desktop Linux environment with graphics acceleration and support for libraries such as NVIDIA CUDA, TensorRT, cuDNN, and more.

Large model

Robot models: supports the ROS robot model.
Large language models: supports private deployment of ultra-large-parameter models under the Transformer architecture, such as the DeepSeek-R1, Gemma, Llama, ChatGLM, Qwen, and Phi series.
Large vision models: supports private deployment of EfficientVIT, NanoOWL, NanoSAM, SAM, and TAM.
AI painting: supports private deployment of the Flux, Stable Diffusion, and Stable Diffusion XL image generation models.

Traditional network architecture

Supports the Ollama framework for local large-model deployment, usable for natural language processing, code generation, and assistance scenarios.
Supports the ComfyUI graphical deployment framework, usable for image restoration, image style transfer, and image synthesis.
Supports multiple deep learning frameworks accelerated by cuDNN, including PaddlePaddle, PyTorch, TensorFlow, MATLAB, MXNet, and Keras.
Supports custom operator development.
Supports Docker containerization for convenient image-based deployment.

AI software stack

The NVIDIA Jetson Thor series delivers powerful AI compute, massive unified memory, and a comprehensive software stack to power the latest generative AI applications. It enables fast inference on any Transformer-based generative AI model, delivering superior edge performance on MLPerf benchmarks.

Interface Specifications

Network

Ethernet: 4 × 10G Ethernet (RJ45), 4 × Gigabit Ethernet (RJ45, PSE support)
WiFi: WiFi/Bluetooth module expandable via M.2 E-KEY (2230); supports 2.4GHz/5GHz dual-band WiFi 6 (802.11a/b/g/n/ac/ax) and Bluetooth 5.2
4G: 4G LTE expandable via Mini PCIe
5G: 5G expandable via M.2 B-KEY

GPS

Supports GPS positioning for real-time positioning, tracking, and time calibration of field devices (synchronized with UTC)

Video input

8 × GMSL2 (Input via two 4Pin Mini FAKRA interfaces)

Video output

2 × HDMI2.0 (4K@60Hz)

Audio

1 × 3.5mm audio jack (supports MIC recording, American CTIA standard)

USB

4 × USB3.0 (Max: 1A), 1 × Type-C (USB3.2 OTG), 1 × Type-C (Debug)

Antenna

4 × 5G antenna, 1 × 4G/5G antenna, 1 × GPS antenna, 1 × WiFi antenna

Button

1 × Reset, 1 × Recovery, 1 × Power

Others

1 × SIM Card

Customization

With over 20 years of experience in product design, research and development, and production, the Firefly team provides hardware, software, and complete-machine customization as well as OEM services.