· Private deployment of AI models
· Multiple deep learning frameworks
· Up to 196-channel AI video processing
· Supports 3.5-inch SATA3.0 HDD/SSD
· Dual 10 Gigabit SFP+ ports and Gigabit Ethernet
· Standard 1U rack server size
· Includes aBMC management system
· Highly integrated server design
Llama
Qwen
Stable Diffusion
Features 10 built-in distributed computing nodes, with each node delivering from 6 TOPS to 157 TOPS of computing power depending on configuration. It supports multiple platforms, including Rockchip, Sophgo, and NVIDIA. The CSB1-N10 server enables private deployment of mainstream AI models, is equipped with two 10-gigabit network ports and a gigabit network port, and supports expandable SATA 3.0 hard drives.
Private Deployment
Deep Learning Frameworks
Processing for 196 Video Streams
3.5-inch SATA 3.0 HDD/SSD
Dual 10GbE SFP+
Standard 1U Server Size
aBMC Management System
Efficient and Low-Cost
The CSB1-N10 integrates 10 distributed computing nodes with broad processor support, including Rockchip, Sophgo, and NVIDIA platforms. Each node delivers 6-157 TOPS of computing power, and node configurations can be tailored to provide robust acceleration for AI and deep learning workloads.
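The per-model totals in the spec table follow directly from the per-node ratings (10 identical compute nodes per server). A quick sketch of the arithmetic, using the variant names from the table:

```python
# Aggregate INT8 AI compute per CSB1-N10 variant:
# 10 identical compute nodes, each rated at the per-node TOPS below.
PER_NODE_TOPS = {
    "CSB1-N10R3588S": 6,      # RK3588S node
    "CSB1-N10S1684X": 32,     # BM1684X node
    "CSB1-N10NOrinNano": 67,  # Jetson Orin Nano node
    "CSB1-N10NOrinNX": 157,   # Jetson Orin NX node
}
NODES = 10

def total_tops(model: str) -> int:
    """Total INT8 TOPS for a variant (excludes the RK3588 control node)."""
    return PER_NODE_TOPS[model] * NODES

for model in PER_NODE_TOPS:
    print(f"{model}: {total_tops(model)} TOPS")
```

Note the 6 TOPS NPU of the RK3588 control node is not counted toward these totals.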
Model variants: CSB1-N10R3588S, CSB1-N10S1684X, CSB1-N10NOrinNano, CSB1-N10NOrinNX
Technical Specifications
Server form: 1U rack-mounted computing power server
Architecture: ARM
Number of nodes: 10 distributed computing nodes + 1 control node
Compute nodes:
· CSB1-N10R3588S: octa-core 64-bit RK3588S, main frequency up to 2.4GHz
· CSB1-N10S1684X: octa-core 64-bit BM1684X, main frequency up to 2.3GHz
· CSB1-N10NOrinNano: hexa-core 64-bit NVIDIA Jetson Orin Nano, main frequency up to 1.7GHz
· CSB1-N10NOrinNX: octa-core 64-bit NVIDIA Jetson Orin NX, main frequency up to 2.0GHz
Video encoding (per node):
· CSB1-N10R3588S: H.264: 1×8K@30fps or 16×1080P@30fps
· CSB1-N10S1684X: H.264: 3×4K@25fps or 12×1080P@25fps
· CSB1-N10NOrinNano: 1080P@30fps (encoded on 1-2 CPU cores)
· CSB1-N10NOrinNX: 1×4K@60fps, 3×4K@30fps, 6×1080P@60fps, or 12×1080P@30fps
Video decoding (per node):
· CSB1-N10R3588S: 8K@60fps / 4K@120fps (VP9/AVS2); 8K@30fps (H.264/AVC/MVC); 30×1080P@30fps (H.264)
· CSB1-N10S1684X: H.264: 8×4K@25fps, 32×1080P@25fps, or 1×8K@25fps
· CSB1-N10NOrinNano: 1×4K@60fps, 2×4K@30fps, 5×1080P@60fps, or 11×1080P@30fps
· CSB1-N10NOrinNX: 1×8K@30fps, 2×4K@60fps, 4×4K@30fps, 9×1080P@60fps, or 18×1080P@30fps
Control node: octa-core 64-bit RK3588, main frequency up to 2.4GHz, up to 6 TOPS of computing power
AI computing power:
· CSB1-N10R3588S: 60 TOPS (6T × 10, INT8)
· CSB1-N10S1684X: 320 TOPS (32T × 10, INT8)
· CSB1-N10NOrinNano: 670 TOPS (67T × 10, INT8)
· CSB1-N10NOrinNX: 1570 TOPS (157T × 10, INT8)
RAM:
· CSB1-N10R3588S: 16GB LPDDR5 × 10 (4/8/16/32GB options)
· CSB1-N10S1684X: 8GB LPDDR4 × 10 (8/12/16GB options)
· CSB1-N10NOrinNano: 8GB LPDDR5 × 10
· CSB1-N10NOrinNX: 16GB LPDDR5 × 10
Storage:
· CSB1-N10R3588S: 256GB eMMC × 10 (16/32/64/128/256GB options)
· CSB1-N10S1684X: 32GB eMMC × 10 (32/64/128GB options)
· CSB1-N10NOrinNano / CSB1-N10NOrinNX: 256GB 2242 PCIe NVMe SSD (factory-installed inside the server)
Storage expansion: 1 × 3.5-inch/2.5-inch SATA 3.0 HDD/SSD slot (the BMC operates the drive directly; the computing child nodes access it indirectly via network sharing provided by the BMC)
Power: 550W AC power supply (input: 90V AC ~ 264V AC, 47Hz ~ 63Hz, 8A; hot swap not supported)
Fan module: 6 high-speed cooling fans
Physical Specifications
Size: 494.0mm (L) × 440.5mm (W) × 44.4mm (H)
Installation requirements: IEC 297 universal cabinet, 19 inches wide and at least 800mm deep; retractable slide-rail installation requires a front-to-rear cabinet hole distance of 543.5mm ~ 848.5mm
Weight: server net weight 8.1kg; total weight with packaging 10.3kg
Environment: operating temperature 0°C ~ 45°C; storage temperature -40°C ~ 60°C; operating humidity 5% ~ 80% RH (non-condensing)
Software Specifications
BMC: web-based management interface with support for Redfish, VNC, NTP, advanced monitoring, and virtual media; the BMC management system can be further developed
Large language models: all models support private deployment of large-parameter Transformer-based models, such as the DeepSeek-R1, Gemma, Llama, ChatGLM, Qwen, and Phi series
Vision models:
· BM1684X: private deployment of large vision models such as ViT, Grounding DINO, and SAM
· Jetson Orin Nano / Jetson Orin NX: private deployment of large vision models such as EfficientViT, NanoOWL, NanoSAM, SAM, and TAM
AI image generation: BM1684X and Jetson Orin Nano / Jetson Orin NX support private deployment of the Flux, Stable Diffusion, and Stable Diffusion XL image generation models
Deep learning:
· All models: traditional network architectures such as CNN, RNN, and LSTM; deep learning frameworks including TensorFlow, PyTorch, PaddlePaddle, ONNX, and Caffe; custom operator development and Docker containerization
· Jetson Orin Nano / Jetson Orin NX: Ollama local large-model deployment framework and the ComfyUI graphical deployment framework
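Since the Jetson nodes support the Ollama deployment framework, a model served on a node can be queried over Ollama's standard HTTP API (default port 11434) from elsewhere on the cluster network. A minimal sketch, in which the node IP and model tag are placeholders, not shipped defaults:

```python
import json
from urllib import request

def build_generate_request(node_ip: str, model: str, prompt: str) -> request.Request:
    """Build a POST to Ollama's standard /api/generate endpoint (non-streaming)."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return request.Request(
        f"http://{node_ip}:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Illustrative values: a compute node's IP and a model tag pulled onto it.
req = build_generate_request("192.168.1.101", "qwen2.5:7b", "Hello")
# urllib.request.urlopen(req) would send the request once the node is serving.
```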
Interface Specifications
Network: 2 × 10G Ethernet (SFP+), 2 × Gigabit Ethernet (RJ45), 1 × Gigabit Ethernet (RJ45, MGMT, used as the BMC management port)
Console: 1 × Console (RJ45, BMC debug serial port, baud rate 115200)
Display: 1 × VGA (maximum resolution 1080P, BMC management display)
USB: 2 × USB 3.0 (the lower port is USB 3.0 OTG; the BMC can be upgraded over OTG with a USB flash drive)
Buttons: 1 × Reset, 1 × UID, 1 × Power
Other interfaces: 1 × RS232 (DB9, baud rate 115200), 1 × RS485 (DB9, baud rate 115200)
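Because the aBMC supports Redfish, the server can be monitored through the MGMT port with standard Redfish requests. A sketch of building an authenticated GET against the Redfish service root; the BMC address and credentials are placeholders, and only the `/redfish/v1/` path is mandated by the Redfish specification:

```python
import base64
from urllib import request

def redfish_get(bmc_host: str, path: str, user: str, password: str) -> request.Request:
    """Build an HTTP Basic-authenticated GET against the BMC's Redfish service."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return request.Request(
        f"https://{bmc_host}{path}",
        headers={"Authorization": f"Basic {token}", "Accept": "application/json"},
    )

# Illustrative values: the MGMT-port IP and account are deployment-specific.
req = redfish_get("192.168.1.100", "/redfish/v1/", "admin", "admin")
# urllib.request.urlopen(req) would fetch the service root, which links to
# Systems, Chassis, and Managers collections on a conformant BMC.
```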
The Firefly team, with over 20 years of experience in product design, research and development, and production, provides services including hardware and software development, complete-machine customization, and OEM servers.