Putting Large Models
into a Small Box

High-Performance Large-Model Box
—— AIBOX-3588

Supports private deployment of mainstream large models, bringing private AI capability to meet individual deployment needs.

Private deployment of large models

The box delivers up to 6 TOPS of computing power, enabling advanced intelligent data processing, speech recognition,
and image analysis and effectively meeting the AI application demands of edge computing across a wide range of terminal devices.

Octa-core 64-bit AIoT processor RK3588

The new-generation octa-core 64-bit high-performance AIoT processor RK3588 adopts an 8nm LP process with a maximum
clock speed of 2.4GHz. It integrates an ARM Mali-G610 MP4 quad-core GPU and a built-in NPU AI accelerator
providing 6 TOPS of computing power, delivering optimized performance across a wide range of AI application scenarios.
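
For reference, the sketch below shows one way a converted model could be run on the RK3588 NPU from Python. It assumes Rockchip's rknn-toolkit-lite2 package is available on the box; the model file, input image, and 224×224 input size are placeholders for your own model.

```python
# Minimal on-device inference sketch using Rockchip's rknn-toolkit-lite2.
# "model.rknn", "input.jpg", and the 224x224 input size are placeholders.
import cv2
import numpy as np
from rknnlite.api import RKNNLite

rknn = RKNNLite()

# Load a model that was previously converted to the RKNN format.
if rknn.load_rknn("model.rknn") != 0:
    raise RuntimeError("failed to load RKNN model")

# Initialize the runtime on the RK3588 NPU (all three NPU cores here).
if rknn.init_runtime(core_mask=RKNNLite.NPU_CORE_0_1_2) != 0:
    raise RuntimeError("failed to init NPU runtime")

# Prepare an input image; size and layout depend on the converted model.
img = cv2.imread("input.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224))
img = np.expand_dims(img, axis=0)

# Run inference; outputs is a list of numpy arrays, one per model output.
outputs = rknn.inference(inputs=[img])
print([o.shape for o in outputs])

rknn.release()
```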

8K@60fps H.265/VP9 video decoding

It supports 8K@60fps H.265/VP9 video decoding and 8K@30fps H.265/H.264 video encoding, with simultaneous encoding and decoding
capabilities. It can achieve a maximum of 32 channels of 1080P@30fps decoding and 16 channels of 1080P@30fps encoding.
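
As an illustration of the hardware codec, the sketch below drives H.265 decoding through FFmpeg's Rockchip MPP decoders. It assumes an FFmpeg build configured with --enable-rkmpp (which provides decoders such as hevc_rkmpp); the input file name is a placeholder, and the vendor SDK may expose other decode paths.

```python
# Minimal sketch: exercise the hardware H.265 decoder via FFmpeg's rkmpp
# decoders (assumes an FFmpeg build with --enable-rkmpp; the input file
# name below is a placeholder).
import subprocess

def hw_decode_benchmark(path: str) -> None:
    """Decode an H.265 stream with the hevc_rkmpp hardware decoder and
    discard the frames, which gives a simple decode-throughput check."""
    cmd = [
        "ffmpeg",
        "-benchmark",          # print timing statistics when done
        "-c:v", "hevc_rkmpp",  # hardware H.265 decoder via Rockchip MPP
        "-i", path,
        "-f", "null", "-",     # throw away the decoded frames
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    hw_decode_benchmark("sample_hevc.mp4")
```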

Strong network communication capability

With dual 1000Mbps Ethernet, the AI box ensures high-speed and stable network communication,
meeting the needs of various application scenarios.

Alloy shell, efficient heat dissipation

The full-metal aluminum alloy shell conducts heat efficiently, and the sides of the top cover adopt a banner-grille
design that keeps external air circulating for effective heat dissipation, ensuring computing performance and
stability under high-temperature operation.

Abundant resources

Supports the Linux OS, providing a safe and stable system environment for product research and production. SDKs,
tutorials, technical documentation, and development tools are provided to streamline and improve the development process.

A wide range of applications

AIBOX-3588 is widely used in intelligent surveillance, AI education, services based on computing power, edge computing,
private deployment of large models, data security, and privacy protection.


Specifications

Comparison: AIBOX-3576 / AIBOX-3588

Basic Specifications

SOC
  AIBOX-3576: Rockchip RK3576
  AIBOX-3588: Rockchip RK3588

CPU
  AIBOX-3576: Octa-core 64-bit processor (4×Cortex-A72 + 4×Cortex-A53), main frequency up to 2.2GHz
  AIBOX-3588: Octa-core 64-bit processor (4×Cortex-A76 + 4×Cortex-A55), main frequency up to 2.4GHz

GPU
  AIBOX-3576: ARM Mali-G52 MC3 @ 1GHz, supports OpenGL ES 1.1/2.0/3.2, OpenCL 2.0, Vulkan 1.1; embedded high-performance 2D acceleration hardware
  AIBOX-3588: ARM Mali-G610 MP4 quad-core GPU, supports OpenGL ES 3.2, OpenCL 2.2, Vulkan 1.1; 450 GFLOPS

NPU
  AIBOX-3576: 6 TOPS NPU, supports INT4/INT8/INT16/FP16/BF16/TF32 mixed operations
  AIBOX-3588: 6 TOPS NPU, supports INT4/INT8/INT16 mixed operations

ISP
  AIBOX-3576: Built-in 16MP ISP; supports low-light noise reduction, RGB-IR sensors, and up to 120dB HDR; AI-ISP improves image quality in low-light conditions
  AIBOX-3588: Integrated 48MP ISP with HDR and 3DNR

Encoding / Decoding
  AIBOX-3576: Decoding: 4K@120fps H.265/HEVC, VP9, AVS2, AV1; 4K@60fps H.264/AVC. Encoding: 4K@60fps H.265/HEVC, H.264/AVC
  AIBOX-3588: Decoding: 8K@60fps / 4K@120fps H.265/VP9/AVS2; 8K@30fps H.264 AVC/MVC; 4K@60fps AV1; 1080P@60fps MPEG-2/MPEG-1/VC-1/VP8. Encoding: 8K@30fps H.265/H.264

RAM
  AIBOX-3576: LPDDR4 (4GB/8GB/16GB optional)
  AIBOX-3588: LPDDR4 (4GB/8GB/16GB optional, up to 32GB)

Storage
  eMMC (16GB/32GB/64GB/128GB/256GB optional); UFS 2.0 (optional, AIBOX-3576 only)

Storage Expansion
  AIBOX-3576: 1 × M.2 (expandable SATA 3.0 / PCIe NVMe SSD, supports 2242/2260/2280; internal, multiplexed with the lower USB 3.0 port), 1 × TF card
  AIBOX-3588: 1 × M.2 (expandable SATA 3.0 / PCIe NVMe SSD, supports 2242/2260/2280; internal), 1 × TF card

Power
  DC 12V/3A (DC 5.5mm × 2.1mm)

Power Consumption
  AIBOX-3576: Normal: 1.2W (12V/100mA); Max: 7.2W (12V/600mA); Min: 0.72W (12V/6mA)
  AIBOX-3588: Normal: 3.6W (12V/300mA); Max: 13.2W (12V/1100mA); Min: 1.38W (12V/115mA)

OS
  Linux (Ubuntu)

Software Support
  · Private deployment of large-parameter models built on the Transformer architecture, such as the Gemma-2B, ChatGLM3-6B, Qwen-1.8B, and Phi-3-3.8B large language models
  · Traditional network architectures such as CNN, RNN, and LSTM; import and export of RKNN models; a variety of deep learning frameworks, including TensorFlow, TensorFlow Lite, PyTorch, Caffe, ONNX, and Darknet; development of custom operators (see the conversion sketch after this table)
  · Docker container management

Size
  93.4mm × 93.4mm × 50mm

Weight
  ≈ 500g

Environment
  Operating: -20℃ to 60℃; Storage: -20℃ to 70℃; Storage humidity: 10% to 90% RH (non-condensing)

Interface Specifications

Ethernet
  2 × Gigabit Ethernet (1000Mbps, RJ45)

Video Output
  1 × HDMI 2.1 (4K@120fps)

USB
  AIBOX-3576: 2 × USB 3.0 (Max: 1A; M.2 multiplexed with the lower USB 3.0 port)
  AIBOX-3588: 2 × USB 3.0 (Max: 1A)

Other Interfaces
  1 × Type-C (flash), 1 × Console (debug serial), 1 × Power button, 1 × MaskRom
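
The software-support row above mentions importing models from frameworks such as ONNX into the RKNN format. The sketch below shows a minimal host-side conversion using Rockchip's rknn-toolkit2 Python package; the model file, calibration dataset, and preprocessing values are placeholders for your own model.

```python
# Minimal host-side sketch: convert an ONNX model to the RKNN format for the
# RK3588 NPU with Rockchip's rknn-toolkit2 (run on a development host;
# "model.onnx" and "dataset.txt" are placeholders you provide).
from rknn.api import RKNN

rknn = RKNN()

# Target the RK3588 and declare the input preprocessing (assumed here:
# simple 0-255 -> 0-1 normalization; adjust to match your model).
rknn.config(target_platform="rk3588",
            mean_values=[[0, 0, 0]],
            std_values=[[255, 255, 255]])

# Import the ONNX model.
if rknn.load_onnx(model="model.onnx") != 0:
    raise RuntimeError("failed to load ONNX model")

# Build with INT8 quantization; dataset.txt lists calibration images.
if rknn.build(do_quantization=True, dataset="dataset.txt") != 0:
    raise RuntimeError("failed to build RKNN model")

# Export the .rknn file that the box loads at runtime.
rknn.export_rknn("model.rknn")
rknn.release()
```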