· Private deployment of AI models
· Multiple deep learning frameworks
· 256-channel video AI processing capability
· Supports 3.5-inch SATA 3.0 HDD/SSD
· Dual 10-Gigabit SFP+ ports & Gigabit Ethernet
· Standard 1U rack server size
· Includes aBMC management system
· Highly integrated server design
Supports mainstream models such as Llama, Qwen, and Stable Diffusion.
The CSA1-N8 integrates eight distributed computing nodes, each delivering up to 32 TOPS of computing power. It supports private deployment of mainstream large AI models and multiple deep learning frameworks, provides dual 10-Gigabit SFP+ ports plus Gigabit Ethernet, and accepts an expandable SATA 3.0 hard drive.
Efficient and Low-Cost
CSA1-N8 features eight distributed computing nodes, configurable with either Sophon BM1684X or BM1684 chips. Each node delivers up to 32 TOPS or 17.6 TOPS of computing power, respectively. The number of nodes is also customizable (up to eight), providing robust computational support for AI and deep learning applications.
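As a quick illustration of how the headline compute figures in the specification table below are derived (the function and constant names here are ours, purely for illustration), the aggregate INT8 compute scales linearly with the number of nodes and the chip fitted:

```python
# Rough illustration (names are ours, not Firefly's): aggregate INT8 compute
# scales linearly with node count and the chip fitted, matching the
# 256 TOPS / 140.8 TOPS figures in the table below.
TOPS_PER_NODE = {"BM1684X": 32.0, "BM1684": 17.6}

def aggregate_tops(chip: str, nodes: int = 8) -> float:
    """Total INT8 TOPS for `nodes` compute nodes using the given chip."""
    if not 1 <= nodes <= 8:
        raise ValueError("the CSA1-N8 carries at most 8 compute nodes")
    return TOPS_PER_NODE[chip] * nodes

print(aggregate_tops("BM1684X"))   # 256.0  (full CSA1-N8S1684X)
print(aggregate_tops("BM1684"))    # 140.8  (full CSA1-N8S1684)
```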

| Technical Specifications | CSA1-N8S1684X | CSA1-N8S1684 |
|---|---|---|
| Server form | 1U rack-mounted computing power server | 1U rack-mounted computing power server |
| Architecture | ARM | ARM |
| Number of nodes | 8 distributed computing nodes + 1 control node | 8 distributed computing nodes + 1 control node |
| Compute nodes | Octa-core 64-bit BM1684X processor, up to 2.3GHz | Octa-core 64-bit BM1684 processor, up to 2.3GHz |
| Video encoding | H.264: 3 × 4K@25fps, 12 × 1080P@25fps | H.264: 2 × 1080P@25fps |
| Video decoding | H.264: 8 × 4K@25fps, 32 × 1080P@25fps | H.264: 32 × 1080P@30fps |
| Control node | Octa-core 64-bit RK3588S processor, up to 2.4GHz, up to 6 TOPS | Octa-core 64-bit RK3588S processor, up to 2.4GHz, up to 6 TOPS |
| AI computing power | 256 TOPS (32 TOPS × 8, INT8) | 140.8 TOPS (17.6 TOPS × 8, INT8) |
| RAM | 16GB LPDDR4/LPDDR4X × 8 | 12GB LPDDR4/LPDDR4X × 8 (12GB/16GB optional) |
| Storage | 64GB eMMC × 8 (64GB/128GB optional) | 32GB eMMC × 8 (32GB/256GB optional) |
| Storage expansion | 1 × 3.5-inch/2.5-inch SATA 3.0 HDD/SSD slot, hot-swappable (see note below) | 1 × 3.5-inch/2.5-inch SATA 3.0 HDD/SSD slot, hot-swappable (see note below) |
| Power consumption | Typical: 300W, Max: 430W | Typical: 170W, Max: 350W |
| Fan module | 6 high-speed cooling fans | 6 high-speed cooling fans |

Note: The BMC operates the expansion drive directly; the compute nodes access it indirectly through the network share provided by the BMC.

| Physical Specifications | |
|---|---|
| Size | 490.0mm (L) × 417.3mm (W) × 44.4mm (H) |
| Installation requirements | IEC 297 universal cabinet installation: 19 inches wide, 800mm deep or more; retractable slide-rail installation: cabinet front-to-rear hole spacing of 543.5mm to 848.5mm |
| Weight | Server net weight: 6.7kg; gross weight with packaging: 8.9kg |
| Environment | Operating temperature: 0°C to 42°C; storage temperature: -40°C to 60°C; operating humidity: 20% to 80% RH (non-condensing) |

| Software Specifications | |
|---|---|
| BMC | Integrated web-based BMC management system supporting Redfish, VNC, NTP, advanced monitoring, and virtual media; the BMC system supports secondary development |
| Large language models | BM1684X: supports private deployment of ultra-large-parameter Transformer-based models such as the DeepSeek-R1, Gemma, Llama, ChatGLM, Qwen, and Phi series |
| Large vision models | BM1684X: supports private deployment of large vision models such as ViT, Grounding DINO, and SAM |
| AI painting | BM1684X: supports private deployment of the Flux, Stable Diffusion, and Stable Diffusion XL image-generation models |
| Deep learning | All models: support traditional network architectures such as CNN, RNN, and LSTM; deep learning frameworks including TensorFlow, PyTorch, PaddlePaddle, ONNX, and Caffe; custom operator development; and Docker containerization |

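The BMC row above notes Redfish support. As a minimal, non-authoritative sketch of standard Redfish access (the BMC address, credentials, and exact resource layout below are placeholders that depend on the shipped firmware), a client can enumerate the systems exposed by the service root:

```python
# Minimal Redfish sketch for the server's BMC.
# BMC address and credentials are placeholders; the resource tree
# depends on the shipped firmware. TLS verification is skipped here
# because BMCs commonly ship with self-signed certificates.
import requests

BMC = "https://192.168.1.100"   # placeholder BMC address
AUTH = ("admin", "admin")       # placeholder credentials

# The Redfish service root is standardized at /redfish/v1/.
root = requests.get(f"{BMC}/redfish/v1/", auth=AUTH, verify=False).json()
print("Redfish version:", root.get("RedfishVersion"))

# Walk the Systems collection and print basic state for each member.
systems_url = BMC + root["Systems"]["@odata.id"]
systems = requests.get(systems_url, auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    system = requests.get(BMC + member["@odata.id"], auth=AUTH, verify=False).json()
    print(system.get("Id"), system.get("PowerState"), system.get("Model"))
```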
| Interface Specifications | |
|---|---|
| Network | 2 × 10G Ethernet (SFP+), 2 × Gigabit Ethernet (RJ45: 1 management port, 1 ordinary port) |
| Display | 1 × HDMI 2.0 (maximum resolution 1080P, output from the main processor core board) |
| USB | 2 × USB 3.0 Host, 1 × Type-C (USB 3.0 OTG, for processor core board debugging) |
| Others | 1 × SIM card slot, 2 × 4G antennas |
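The deep-learning row above lists ONNX among the supported frameworks. As an illustrative sketch only (the ResNet-18 model, input shape, and file name are placeholder assumptions, and this uses standard PyTorch tooling rather than any Firefly- or Sophon-specific converter), a trained model can be exported to ONNX before being brought onto the server:

```python
# Illustrative only: export a trained PyTorch model to ONNX before moving it
# to the server. The model, input shape, and file name are placeholders;
# any vendor-specific compilation/quantization step is out of scope here.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)   # placeholder network
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)            # example input tensor
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",              # file to transfer to the server afterwards
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
print("Exported model.onnx")
```

Any platform-specific compilation or quantization step would follow separately and is not shown here.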
With over 20 years of experience in product design, R&D, and manufacturing, the Firefly team provides hardware and software services, complete-machine customization, and OEM server services.