CSB1-N10NOrinNX Computing Power Server

1000 TOPS
Computing Power

Private Deployment
of Large Models

Secure and High-Speed
Network Communication

Supports Multiple
Deep Learning Frameworks

AI Processor: NVIDIA Jetson OrinNX

The CSB1-N10NOrinNX system accommodates up to 10 compute nodes, each featuring an NVIDIA Jetson Orin NX octa-core 64-bit processor with a top speed of 2.0GHz. With a total computing power of up to 1000 TOPS, it provides robust computational support for artificial intelligence and deep learning applications.

Custom Private Deployment of AI Models

Supports the Ollama framework for local deployment of large models and private deployment of mainstream modern AI models, including large language models such as Llama 3 and Phi-3 Mini, ROS robot models, vision models such as EfficientViT, NanoOWL, and NanoSAM, and AIGC models such as Stable Diffusion for image generation.
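Once Ollama is running on a compute node, models are queried over its local REST API. The sketch below builds a non-streaming request against Ollama's documented `/api/generate` endpoint; the `localhost` address assumes you are on the node itself, and the model must already have been fetched with `ollama pull llama3`.

```python
import json
import urllib.request

# Ollama's local REST endpoint (default port 11434 once `ollama serve` runs;
# localhost is an assumption -- use the compute node's IP when remote).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generation request for the Ollama REST API."""
    payload = json.dumps({
        "model": model,    # e.g. "llama3", pulled beforehand with `ollama pull llama3`
        "prompt": prompt,
        "stream": False,   # one complete JSON response instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,  # attaching a body makes this a POST request
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama3", "Summarize what a BMC does in one sentence.")
print(req.full_url)  # http://localhost:11434/api/generate

# To actually run the query against a live Ollama server:
#     with urllib.request.urlopen(req) as resp:
#         print(json.loads(resp.read())["response"])
```

The actual network call is left commented out so the request construction can be inspected without a running server.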

Equipped with the
aBMC management
system

The BMC remote management system enables real-time monitoring, software
configuration, hardware management, and remote operations and
maintenance, while also supporting secondary development.

Effectively reduces costs

The server consolidates compute modules, storage, USB
interfaces, network controllers, power management, and
sensors into a streamlined system, minimizing the
acquisition, development, and operational expenses
for users.

User-friendly and
easy to develop

Provides a one-stop SDK for deep learning development, including
underlying driver environments, compilers, and tools for inference
and deployment. It supports the development of mainstream network
models and custom operators, as well as Docker containerization for
the rapid deployment of algorithmic applications.

Comprehensive Expansion Interfaces

Specifications

CSB1-N10NOrinNX
Technical Specifications
Server form

1U rack-mounted computing power server

Architecture

ARM

Number of nodes

10 distributed computing nodes (up to 80 ARM cores) + 1 control node

Compute nodes

NVIDIA Jetson Orin NX octa-core 64-bit processor, main frequency up to 2.0GHz

Control node

RK3588 octa-core 64-bit processor, main frequency up to 2.4GHz, up to 6 TOPS of computing power

AI computing power

1000 TOPS (INT8)

RAM

16GB LPDDR5 × 10 (one per compute node)

Storage

256GB (2242 PCIe NVMe SSD, pre-installed inside the server)

Storage Expansion

1 × 3.5-inch/2.5-inch SATA3.0 HDD/SSD drive bay (the BMC can access the drive directly; compute nodes can access it indirectly via the network-sharing service provided by the BMC)

Power

550W AC power supply (input: 90V AC~264V AC, 47Hz~63Hz, 8A; hot swap not supported)

Fan module

6 high-speed cooling fans

Physical Specifications
Size

420.0mm(L) × 421.3mm(W) × 44.4mm(H)

Installation requirements

IEC 297 universal cabinet installation: 19 inches wide and at least 800mm deep. Retractable slide-rail installation: distance between the front and rear cabinet holes of 543.5mm~848.5mm

Weight

Server net weight: 8.1kg; total weight with packaging: 10.3kg

Environment

Operating temperature: 0°C ~ 45°C, storage temperature: -40°C ~ 60°C, operating humidity: 5% ~ 90%RH (non-condensing)

Software Specifications
BMC

The BMC management system integrates a web-based management interface and supports Redfish, VNC, NTP, advanced monitoring, and virtual media; the BMC management system also supports secondary development
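Since the BMC speaks Redfish, fleet monitoring can be scripted against its standard HTTP/JSON interface. The sketch below builds an authenticated GET against the Redfish service root defined by the DMTF specification (`/redfish/v1`); the BMC address and `admin`/`admin` credentials are placeholder assumptions, not values from this product.

```python
import base64
import urllib.request

# Hypothetical BMC management-port address -- substitute your own.
BMC_HOST = "192.168.1.100"

def redfish_request(path: str, user: str, password: str) -> urllib.request.Request:
    """Build an authenticated GET against the BMC's Redfish service.

    Redfish is an HTTP/JSON standard, so every service root lives at
    /redfish/v1; the resource collections below it (Systems, Chassis,
    Managers) are defined by the DMTF Redfish schema.
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"https://{BMC_HOST}/redfish/v1{path}",
        headers={"Authorization": f"Basic {token}"},
    )

req = redfish_request("/Systems", "admin", "admin")
print(req.full_url)  # https://192.168.1.100/redfish/v1/Systems

# To send it (the BMC typically uses a self-signed certificate, so a
# suitably configured SSL context may be needed):
#     with urllib.request.urlopen(req) as resp:
#         print(resp.read().decode())
```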

Large model

Robot models: supports ROS robot models.
Large language models: supports the Ollama local deployment framework for natural language processing, code generation, and assistant scenarios; supports private deployment of ultra-large-scale parametric models under the Transformer architecture, such as Llama 3 and Phi-3 Mini.
Large visual models: supports private deployment of large visual models such as EfficientViT, NanoOWL, NanoSAM, SAM, and TAM.
AI painting: supports the ComfyUI graphical deployment framework for image restoration, image style transfer, and image synthesis; supports private deployment of AIGC image-generation models such as Flux, Stable Diffusion, and Stable Diffusion XL.

Deep learning

Supports multiple deep learning frameworks accelerated by cuDNN, including PaddlePaddle, PyTorch, TensorFlow, MATLAB, MXNet, Caffe2, Chainer, and Keras. Supports custom operator development. Docker containerization: supports Docker containerization technology for easy image-based deployment.
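After provisioning a compute node, a quick way to verify which of the supported frameworks are actually present is to probe for their import names. This is a minimal sketch using only the standard library; the package-to-import-name mapping (e.g. PaddlePaddle imports as `paddle`) is the conventional one, and MATLAB is omitted since it is not a Python package.

```python
import importlib.util

# Supported cuDNN-accelerated frameworks and their Python import names
# (MATLAB is not importable from Python, so it is not probed here).
FRAMEWORKS = {
    "PaddlePaddle": "paddle",
    "PyTorch": "torch",
    "TensorFlow": "tensorflow",
    "MXNet": "mxnet",
    "Caffe2": "caffe2",
    "Chainer": "chainer",
    "Keras": "keras",
}

def installed_frameworks() -> dict:
    """Map each framework name to True/False by checking whether its
    module can be found, without actually importing it."""
    return {name: importlib.util.find_spec(mod) is not None
            for name, mod in FRAMEWORKS.items()}

for name, present in installed_frameworks().items():
    print(f"{name}: {'installed' if present else 'missing'}")
```

Using `find_spec` instead of a bare `import` keeps the check fast and side-effect free, which matters when probing heavyweight frameworks.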

Interface Specifications
Network

2 × 10G Ethernet (SFP+), 2 × Gigabit Ethernet (RJ45), 1 × Gigabit Ethernet (RJ45, MGNT is used as BMC management network)

Console

1 × Console (RJ45, BMC debug serial port, baud rate 115200)

Display

1 × VGA (maximum resolution 1080P, BMC management display)

USB

2 × USB3.0 (the lower port is USB3.0 OTG; the BMC can be upgraded over OTG using a USB flash drive)

Button

1 × Reset, 1 × UID, 1 × Power button

Other interfaces

1 × RS232 (DB9, baud rate 115200), 1 × RS485 (DB9, baud rate 115200)