Ultra-high energy efficiency! Supports private deployment of mainstream large models.
Large-Model AI Box
Equipped with RK3588S Processor
32GB LPDDR5 High-Capacity Memory
Private Deployment of Large Models
8K Video Encoding & Decoding
The RK3588S is Rockchip's flagship AIoT chip, built on an advanced 8nm LP process. It features an octa-core 64-bit CPU (4× Cortex-A76 + 4× Cortex-A55) clocked at up to 2.4GHz, an integrated ARM Mali-G610 MP4 quad-core GPU, and a built-in NPU delivering 6 TOPS of AI computing power, making it well suited to a wide range of AI applications.
With 6 TOPS NPU computing power, the RK3588S supports private deployment of large-scale Transformer-based models, including the Gemma series, ChatGLM series, Qwen series, Phi series, and other large language models (LLMs). It also enables RKNN model import/export and supports multiple deep learning frameworks such as TensorFlow, TensorFlow Lite, PyTorch, and Caffe.
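As a rough illustration of this workflow, the sketch below converts a trained ONNX model into the RKNN format with Rockchip's RKNN-Toolkit2 on a development PC. The model file, normalization values and calibration dataset are placeholders, and exact parameters can vary between toolkit versions.

```python
# Minimal RKNN-Toolkit2 conversion sketch (run on a development host, not on the box).
# "model.onnx" and "dataset.txt" are placeholders for your own model and calibration list.
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# Normalization and target platform; 'rk3588' also covers the RK3588S variant.
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform='rk3588')

# Import a trained model (ONNX here; TensorFlow, TensorFlow Lite, PyTorch and
# Caffe loaders are also provided by the toolkit).
rknn.load_onnx(model='model.onnx')

# Build with INT8 quantization using a small list of calibration images.
rknn.build(do_quantization=True, dataset='./dataset.txt')

# Export the .rknn file that will be copied to the AI box and run on the NPU.
rknn.export_rknn('./model.rknn')
rknn.release()
```

The exported .rknn file is then copied to the box and executed on the NPU (see the on-device inference sketch after the specification table).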
Compared with LPDDR4, LPDDR5 offers larger capacity options, higher bandwidth and data-transfer rates, lower power consumption, and ECC (Error Correction Code) support. This meets the memory-capacity and responsiveness demands of deploying large models, letting the memory and processor work together efficiently to improve model performance and energy efficiency.
Supports 8K@60fps H.265/VP9 decoding and 8K@30fps H.265/H.264 encoding, with simultaneous encoding and decoding. It can handle up to 32 channels of 1080P@30fps decoding and 16 channels of 1080P@30fps encoding. High-resolution, multi-channel decoding accelerates video-based AI training and inference and improves the accuracy of visual analysis.
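As an illustration of how hardware-decoded video can feed an AI pipeline, the sketch below pulls frames from a single RTSP camera into Python. It assumes the Rockchip MPP GStreamer plugin (the mppvideodec element) and an OpenCV build with GStreamer support are installed on the box; the camera URL is a placeholder, and the same pattern can be repeated per channel.

```python
import cv2

# One decode channel; instantiate the same pattern once per camera/stream.
PIPELINE = (
    "rtspsrc location=rtsp://camera.local/stream latency=100 ! "
    "rtph265depay ! h265parse ! mppvideodec ! "
    "videoconvert ! video/x-raw,format=BGR ! appsink drop=1"
)

cap = cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()   # BGR frame decoded by the VPU rather than the CPU
    if not ok:
        break
    # ... run NPU inference on `frame` here (see the RKNN sketches) ...
cap.release()
```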
The industrial-grade all-metal aluminum alloy casing ensures superior heat dissipation. The top cover and side vents enhance airflow, maintaining stable performance and reliability even under high-temperature operation. Compact yet robust, it meets industrial-grade application requirements.
Supports Linux OS for a secure and stable development environment. Provides complete source code, tutorials, technical documentation, and tools to streamline the development process.
Ideal for smart surveillance, AI education, computing services, edge computing, private AI model deployment, data security, and privacy protection.
|  | AIBOX-3576 | AIBOX-3588 | AIBOX-3588S |
| --- | --- | --- | --- |
| Basic Specifications | | | |
| SoC | Rockchip RK3576 | Rockchip RK3588 | Rockchip RK3588S |
| CPU | Octa-core 64-bit processor (4× Cortex-A72 + 4× Cortex-A53), up to 2.2GHz | Octa-core 64-bit processor (4× Cortex-A76 + 4× Cortex-A55), up to 2.4GHz | Octa-core 64-bit processor (4× Cortex-A76 + 4× Cortex-A55), up to 2.4GHz |
| GPU | Mali-G52 MC3 @ 1GHz; supports OpenGL ES 1.1/2.0/3.2, OpenCL 2.0, Vulkan 1.1; embedded high-performance 2D acceleration hardware | ARM Mali-G610 MP4 quad-core GPU; supports OpenGL ES 3.2, OpenCL 2.2, Vulkan 1.1; 450 GFLOPS | ARM Mali-G610 MP4 quad-core GPU; supports OpenGL ES 3.2, OpenCL 2.2, Vulkan 1.1; 450 GFLOPS |
| NPU | 6 TOPS; supports INT4/INT8/INT16/FP16/BF16/TF32 mixed operations | 6 TOPS; supports INT4/INT8/INT16 mixed operations | 6 TOPS; supports INT4/INT8/INT16 mixed operations |
| ISP | Built-in 16MP ISP; supports low-light noise reduction, RGB-IR sensors and up to 120dB HDR; AI-ISP improves low-light image quality | Integrated 48MP ISP with HDR & 3DNR | Integrated 48MP ISP with HDR & 3DNR |
| Encoding & Decoding | Decoding: 8K@30fps / 4K@120fps H.265/HEVC, VP9, AVS2, AV1; 4K@60fps H.264/AVC. Encoding: 4K@60fps H.265/HEVC, H.264/AVC | Decoding: 8K@60fps / 4K@120fps H.265/VP9/AVS2; 8K@30fps H.264 AVC/MVC; 4K@60fps AV1; 1080P@60fps MPEG-2/MPEG-1/VC-1/VP8. Encoding: 8K@30fps H.265/H.264 | Decoding: 8K@60fps H.265/VP9/AVS2; 8K@30fps H.264 AVC/MVC; 4K@60fps AV1; 1080P@60fps MPEG-2/MPEG-1/VC-1/VP8. Encoding: 8K@30fps H.265/H.264 |
| RAM | LPDDR4 (4GB/8GB/16GB optional) | LPDDR4 (4GB/8GB/16GB/32GB optional) | LPDDR5 (4GB/8GB/16GB/32GB optional) |
| Storage | eMMC (16GB/32GB/64GB/128GB/256GB optional); UFS 2.0 optional | eMMC (16GB/32GB/64GB/128GB/256GB optional) | eMMC (16GB/32GB/64GB/128GB/256GB optional) |
| Storage Expansion | 1 × M.2 (SATA 3.0 / PCIe NVMe SSD, supports 2242/2260/2280; internal), 1 × TF card slot | 1 × M.2 (SATA 3.0 / PCIe NVMe SSD, supports 2242/2260/2280; internal), 1 × TF card slot | 1 × M.2 (SATA 3.0 / PCIe NVMe SSD, supports 2242/2260/2280; internal), 1 × TF card slot |
| Power | DC 12V/2A (5.5 × 2.1mm DC jack) | DC 12V/2A (5.5 × 2.1mm DC jack) | DC 12V/2A (5.5 × 2.1mm DC jack) |
| Power Consumption | Normal: 1.2W (12V/100mA); Max: 7.2W (12V/600mA); Min: 0.72W (12V/6mA) | Normal: 2.64W (12V/220mA); Max: 14.4W (12V/1200mA); Min (sleep): 0.18W (12V/15mA) | Normal: 1.26W (12V/105mA); Max: 13.2W (12V/1100mA); Min (sleep): 0.18W (12V/15mA) |
| OS | Linux | Linux | Linux |
| Software Support | Private deployment of large-parameter Transformer models, including the Gemma, ChatGLM, Qwen, Phi series and other LLMs; traditional network architectures such as CNN, RNN and LSTM; RKNN model import/export; deep learning frameworks including TensorFlow, TensorFlow Lite, PyTorch, Caffe, ONNX and Darknet; custom operator development; Docker container management (see the on-device inference sketch after this table) | Same as AIBOX-3576 | Same as AIBOX-3576 |
| Size | 93.4mm × 93.4mm × 50mm | 93.4mm × 93.4mm × 50mm | 93.4mm × 93.4mm × 50mm |
| Weight | ≈ 500g | ≈ 500g | ≈ 500g |
| Environment | Operating: -20℃ to 60℃; Storage: -20℃ to 70℃; Storage humidity: 10%–90% RH (non-condensing) | Operating: -20℃ to 60℃; Storage: -20℃ to 70℃; Storage humidity: 10%–90% RH (non-condensing) | Operating: -20℃ to 60℃; Storage: -20℃ to 70℃; Storage humidity: 10%–90% RH (non-condensing) |
| Interface Specifications | | | |
| Ethernet | 2 × Gigabit Ethernet (1000Mbps, RJ45) | 2 × Gigabit Ethernet (1000Mbps, RJ45) | 1 × Gigabit Ethernet (1000Mbps, RJ45) |
| Video Output | 1 × HDMI 2.1 (4K@120fps) | 1 × HDMI 2.1 (8K@60fps) | 1 × HDMI 2.1 (8K@60fps) |
| USB | 2 × USB 3.0 (max 1A), 1 × Type-C (firmware flashing) | 2 × USB 3.0 (max 1A), 1 × Type-C (firmware flashing; works as a USB 2.0 HOST port after boot) | 2 × USB 3.0 (max 1A), 1 × Type-C (firmware flashing; works as a USB 2.0 HOST port after boot) |
| Button | 1 × Power, 1 × MaskRom | 1 × Power, 1 × MaskRom | 1 × Power, 1 × MaskRom |
| Other Interfaces | 1 × Console (debug serial) | 1 × Console (debug serial) | 1 × Console (debug serial) |
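To complement the software support listed above, here is a minimal on-device inference sketch using Rockchip's RKNN-Toolkit-Lite2 Python package. It assumes the package is installed on the box; the model path and 640×640 dummy input are placeholders for a real model and camera frame.

```python
import numpy as np
from rknnlite.api import RKNNLite

rknn = RKNNLite()
rknn.load_rknn('./model.rknn')          # file exported by the RKNN-Toolkit2 sketch above

# Bind the runtime to the NPU; RK3588/RK3588S expose three NPU cores.
rknn.init_runtime(core_mask=RKNNLite.NPU_CORE_AUTO)

dummy = np.zeros((1, 640, 640, 3), dtype=np.uint8)   # stand-in for a camera frame
outputs = rknn.inference(inputs=[dummy])
print([o.shape for o in outputs])

rknn.release()
```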