48 Built-in Computing Nodes
Private Deployment of Large Models
Integrated aBMC Management System
Scalable M.2 PCIe NVMe SSD Storage
The server integrates 48 distributed computing nodes supporting multiple processor platforms (Rockchip / SOPHGO / NVIDIA), with each node delivering 6~157 TOPS of computing power. This scalable configuration provides powerful AI and deep learning acceleration capabilities.
Enables on-premises deployment of leading large language models (DeepSeek-R1, Llama, Qwen), vision models (SAM, ViT), and image-generation models (Flux, Stable Diffusion).
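A model deployed on one of the compute nodes is typically served behind an HTTP API. The sketch below builds the JSON body for an OpenAI-compatible chat-completion call; the endpoint address, port, and model name are illustrative assumptions, not part of the product specification — the actual API shape depends on the serving stack you install.

```python
import json

# Hypothetical address of a model server running on one compute node.
ENDPOINT = "http://192.168.1.101:8000/v1/chat/completions"  # illustrative only

def build_chat_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for an OpenAI-compatible chat-completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return json.dumps(payload).encode("utf-8")

body = build_chat_request("deepseek-r1", "Summarize the server's node layout.")
print(json.loads(body)["model"])  # → deepseek-r1
```

The body would then be POSTed to `ENDPOINT` with a `Content-Type: application/json` header.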
Compatible with traditional network architectures (CNN, RNN, LSTM) and mainstream deep learning frameworks including TensorFlow, PyTorch, PaddlePaddle, and ONNX. Supports custom operator development and Docker-based container management.
Features 48 onboard M.2 PCIe interfaces supporting NVMe 2280 SSD expansion, enabling massive high-speed storage. Delivers significantly improved data throughput to meet high-density storage requirements, with total capacity scalable to the terabyte level.
Seamlessly runs a wide range of software, including databases (MySQL, Oracle, Redis), web services (Nginx, Apache), virtualization (KVM), mail services (Postfix), and data analytics (Hadoop, Spark). Maintains stable and efficient performance when running multiple services concurrently.
The device is equipped with a touchscreen display providing real-time monitoring of chassis temperature, power efficiency, fan speed, network IP, date, time, and other system parameters, so users can track operational status at all times.
Includes the aBMC intelligent management system for real-time monitoring, software/hardware configuration, troubleshooting, alert notification, system upgrades, and remote maintenance; it also supports secondary development for customization.
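Since the aBMC exposes a Redfish interface (see the software specifications below), it can be queried with standard Redfish service-root paths such as `/redfish/v1/Chassis`. The sketch builds such a request without sending it; the BMC address and credentials are illustrative assumptions.

```python
import base64
import urllib.request

# Hypothetical BMC management address; substitute your MGMT-port IP.
BMC_HOST = "192.168.1.1"

def build_redfish_request(path: str, user: str, password: str) -> urllib.request.Request:
    """Build (but do not send) a Redfish GET request with HTTP Basic auth.
    `/redfish/v1` is the standard Redfish service root."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url=f"https://{BMC_HOST}/redfish/v1{path}",
        headers={"Authorization": f"Basic {token}", "Accept": "application/json"},
        method="GET",
    )

req = build_redfish_request("/Chassis", "admin", "secret")
print(req.full_url)  # → https://192.168.1.1/redfish/v1/Chassis
```

Sending it with `urllib.request.urlopen(req)` would return the chassis collection as JSON, from which per-node thermal and power readings can be walked.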
Widely applicable to intelligent computing servers, edge computing, large-scale model localization, smart cities, smart healthcare, smart industrial systems, intelligent security, and other related products and fields.
CSC2-N48 Technical Specifications

| Core Module (Optional) | Compute Node Processor | Total AI Computing Power | RAM per Node | eMMC Storage (× 48 nodes) | Storage Expansion (× 48 nodes) |
| --- | --- | --- | --- | --- | --- |
| RK3588S | Octa-core 64-bit RK3588S, up to 2.4GHz | 288 TOPS | 16GB LPDDR5 | 256GB (16/32/64/128/256GB optional) | 2280 PCIe NVMe SSD (optional) |
| RK3588 | Octa-core 64-bit RK3588, up to 2.4GHz | 288 TOPS | 16GB LPDDR4 | 256GB (16/32/64/128/256GB optional) | 2280 PCIe NVMe SSD (optional) |
| RK3576 | Octa-core 64-bit RK3576, up to 2.2GHz | 288 TOPS | 8GB LPDDR4 | 64GB (16/32/64/128/256GB optional) | 2280 PCIe NVMe SSD (optional) |
| BM1684X | Octa-core 64-bit BM1684X, up to 2.3GHz | 1536 TOPS | 8GB LPDDR4 | 32GB (32/64/128GB optional) | 2280 PCIe NVMe SSD (optional) |
| BM1688 | Octa-core 64-bit BM1688, up to 1.6GHz | 768 TOPS | 8GB LPDDR4 | 32GB (16/32/64/128/256GB optional) | 2280 PCIe NVMe SSD (optional) |
| Jetson Orin Nano (8GB) | Hexa-core 64-bit NVIDIA Jetson Orin Nano, up to 1.7GHz | 3216 TOPS | 8GB LPDDR5 | None | 256GB 2280 PCIe NVMe SSD (assembled in the server) |
| Jetson Orin NX (16GB) | Octa-core 64-bit NVIDIA Jetson Orin NX, up to 2.0GHz | 7536 TOPS | 16GB LPDDR5 | None | 256GB 2280 PCIe NVMe SSD (assembled in the server) |

Common specifications (all core modules):

| Item | Specification |
| --- | --- |
| Server form | 2U rackmount computing power server |
| Architecture | ARM |
| Number of nodes | 48 distributed computing nodes + 1 control node |
| Control node | Octa-core 64-bit RK3588, up to 2.4GHz, peak computing power 6 TOPS |
| Power | 2 × AC redundant power supplies (hot-swappable) |
| Screen | 1 × touchscreen display |
| Fan module | 12 high-speed cooling fans |
| Size | 724.0mm (L) × 430.0mm (W) × 88.8mm (H) |
| Weight | Net weight 23.1kg; 25.3kg with packaging |
| Environment | Operating temperature 0ºC ~ 35ºC; storage temperature -40ºC ~ 60ºC; operating humidity 5% ~ 80% RH (non-condensing) |

Software Specifications

| Item | Specification |
| --- | --- |
| BMC | Integrated aBMC management system with web management interface; supports monitoring, configuration, alarms, remote operation and maintenance, and virtual replacement management; provides a CLI and Redfish interface to facilitate secondary development |
| Large language models | All modules: private deployment of ultra-large-parameter Transformer models, including the DeepSeek-R1, Gemma, Llama, ChatGLM, Qwen, and Phi series |
| Vision models | BM1684X: private deployment of ViT, Grounding DINO, SAM, etc. Jetson Orin Nano/NX: private deployment of EfficientViT, NanoOWL, NanoSAM, SAM, TAM, etc. |
| Image generation | BM1684X and Jetson Orin Nano/NX: private deployment of Flux, Stable Diffusion, and Stable Diffusion XL |
| Deep learning | All modules: CNN, RNN, and LSTM network architectures; TensorFlow, PyTorch, PaddlePaddle, ONNX, Caffe, and other frameworks; custom operator development and Docker container management. Jetson Orin Nano/NX: Ollama local model deployment framework and ComfyUI graphical deployment framework |

Interface Specifications

| Item | Specification |
| --- | --- |
| Network | 4 × 10G Ethernet (SFP+); 1 × Gigabit Ethernet (RJ45, MGMT, used as the BMC management network) |
| Console | 1 × Console (RJ45, BMC debug serial port, baud rate 115200) |
| Display | 1 × HDMI (up to 1080p, BMC management display) |
| USB | 2 × USB 3.0 (lower port is USB 3.0 OTG; the BMC can be upgraded over OTG from a USB flash drive) |
| Buttons | 1 × Reset button, 1 × Power button, 1 × BMC restart button |