T-Chip Group

CSB1-N10NOrinNano Computing Power Server

670 TOPS
Computing Power

Private Deployment
of Large Models

Secure and High-Speed
Network Communication

Supports Multiple
Deep Learning Frameworks

AI Processor: NVIDIA Jetson Orin Nano

The CSB1-N10NOrinNano system accommodates up to 10 computing nodes, each featuring an NVIDIA Jetson Orin Nano with a hexa-core
64-bit processor running at up to 1.7GHz. With a total computing power of up to 670 TOPS, it provides
robust computational support for artificial intelligence and deep learning applications.

Custom Private Deployment of AI Models

Supports the Ollama framework for local deployment of large models, enabling private deployment of mainstream modern AI models.
These include large language models such as Llama 3 and Phi-3 Mini, ROS robotics models, vision models such as EfficientViT,
NanoOWL, and NanoSAM, and AIGC models such as Stable Diffusion for image generation.
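As a sketch of what a locally deployed model looks like in use, the snippet below calls Ollama's standard `/api/generate` REST endpoint (the node address is a placeholder, and this assumes an Ollama service is already running on one of the compute nodes at its default port 11434):

```python
import json
import urllib.request

# Placeholder address of a compute node running the Ollama service
OLLAMA_URL = "http://192.168.1.101:11434"

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a completion request to a locally deployed model
    and return the generated text (requires a running Ollama service)."""
    payload = build_generate_payload(model, prompt)
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# e.g. generate("llama3", "Summarize edge AI in one sentence.")
```

Because the model runs on the node itself, prompts and responses never leave the local network, which is the point of private deployment.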

Equipped with the aBMC management system

With the BMC remote management system, it easily
achieves real-time monitoring, software
configuration, hardware management, and remote
operations and maintenance, while also offering
capabilities for secondary development.
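Since the BMC exposes a standard Redfish interface (per the software specifications below), remote monitoring can be scripted. The sketch below builds an authenticated query against the standard `/redfish/v1/Systems` collection; the BMC address and credentials are placeholders:

```python
import base64
import json
import urllib.request

# Placeholder address of the aBMC management network interface
BMC_HOST = "https://192.168.1.100"

def build_redfish_request(path: str, user: str, password: str) -> urllib.request.Request:
    """Build an authenticated GET request for a Redfish resource on the BMC."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{BMC_HOST}{path}",
        headers={"Authorization": f"Basic {token}", "Accept": "application/json"},
    )

def get_systems(user: str, password: str) -> dict:
    """Query the Redfish systems collection (requires a reachable BMC)."""
    req = build_redfish_request("/redfish/v1/Systems", user, password)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The same pattern extends to other Redfish resources (chassis, sensors, power state), which is what makes the secondary-development capability practical for fleet monitoring.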

Effectively reduces costs

The server consolidates compute modules, storage, USB
interfaces, network controllers, power management, and
sensors into a streamlined system, minimizing the
acquisition, development, and operational expenses
for users.

User-friendly and easy to develop

Provides a one-stop SDK for deep learning
development, including a suite of software tools
such as underlying driver environments, compilers, and
tools for inference and deployment. It supports the
development of mainstream network models and custom
operators, as well as Docker containerization for the
rapid deployment of algorithmic applications.
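The Docker-based deployment path can be sketched as a small helper that composes the `docker run` invocation for an inference container; the image name, model path, and port here are hypothetical, and launching assumes Docker with the NVIDIA container runtime on the node:

```python
import subprocess  # used only when actually launching the container

def build_docker_run_cmd(image: str, model_dir: str, port: int) -> list:
    """Compose a `docker run` command that mounts a model directory
    read-only and exposes an inference port, using the NVIDIA runtime."""
    return [
        "docker", "run", "-d",
        "--runtime", "nvidia",            # NVIDIA container runtime on Jetson
        "-v", f"{model_dir}:/models:ro",  # mount models read-only
        "-p", f"{port}:{port}",
        image,
    ]

def deploy(image: str, model_dir: str, port: int) -> str:
    """Launch the container and return its ID (requires Docker on the node)."""
    cmd = build_docker_run_cmd(image, model_dir, port)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

# e.g. deploy("example/inference:latest", "/opt/models", 8000)
```

Packaging the algorithm this way keeps the host image unchanged, so the same container can be rolled out to all ten compute nodes.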

Comprehensive Expansion Interfaces

Specifications

CSB1-N10NOrinNano
Technical Specifications
Server form

1U rack-mounted computing power server

Architecture

ARM architecture

Number of nodes

10 distributed computing nodes + 1 control node

Compute nodes

Hexa-core 64-bit processor NVIDIA Jetson Orin Nano, main frequency up to 1.7GHz

Control nodes

Octa-core 64-bit RK3588 processor, main frequency up to 2.4GHz, with computing power up to 6 TOPS

AI computing power

670 TOPS (67 TOPS × 10, INT8)

RAM

8GB LPDDR5 × 10

Storage

256GB (M.2 2242 PCIe NVMe SSD, factory-installed inside the server)

Storage Expansion

3.5-inch/2.5-inch SATA3.0 HDD/SSD drive bay × 1 (the BMC can operate the drive directly; computing nodes can access it indirectly via the network sharing provided by the BMC)

Power

550W AC power supply (input: 90V AC ~ 264V AC, 47Hz ~ 63Hz, 8A; hot swap not supported)

Fan module

6 high-speed cooling fans

Physical Specifications
Size

494.0mm(L) × 440.5mm(W) × 44.4mm(H)

Installation requirements

IEC 297 universal cabinet installation: 19 inches wide, 800mm deep or more
Retractable slide-rail installation: front-to-rear cabinet hole spacing of 543.5mm ~ 848.5mm

Weight

Server net weight: 8.1kg; gross weight with packaging: 10.3kg

Environment

Operating temperature: 0°C ~ 45°C, storage temperature: -40°C ~ 60°C, operating humidity: 5% ~ 80% RH (non-condensing)

Software Specifications
BMC

The BMC management system integrates a web-based management interface, supporting Redfish, VNC, NTP, advanced monitoring, and virtual media; the BMC management system also supports secondary development

Large language models

All models: supports private deployment of ultra-large-parameter models built on the Transformer architecture, such as the DeepSeek-R1, Gemma, Llama, ChatGLM, Qwen, and Phi series of large language models

Visual large model

Jetson Orin Nano/Jetson Orin NX: supports private deployment of large vision models such as EfficientViT, NanoOWL, NanoSAM, SAM, and TAM

AI Painting

Jetson Orin Nano/Jetson Orin NX: supports private deployment of the Flux, Stable Diffusion, and Stable Diffusion XL image-generation models

Deep learning

All models: supports traditional network architectures such as CNN, RNN, and LSTM; supports deep learning frameworks such as TensorFlow, PyTorch, PaddlePaddle, ONNX, and Caffe; supports custom operator development and Docker containerization management
Jetson Orin Nano/Jetson Orin NX: supports the Ollama local large-model deployment framework and the ComfyUI graphical deployment framework

Interface Specifications
Network

2 × 10G Ethernet (SFP+), 2 × Gigabit Ethernet (RJ45), 1 × Gigabit Ethernet (RJ45, MGNT, used as the BMC management network)

Console

1 × Console (RJ45, BMC debug serial port, baud rate 115200)

Display

1 × VGA (maximum resolution 1080P, BMC management display)

USB

2 × USB3.0 (the lower port is USB3.0 OTG; the BMC can be upgraded over OTG using a USB flash drive)

Button

1 × Reset, 1 × UID, 1 × Power

Other interfaces

1 × RS232 (DB9, baud rate 115200), 1 × RS485 (DB9, baud rate 115200)