Putting Large Models
into a Small Box

—— Low-Power Large-Model Box
AIBOX-3576

Superior energy efficiency! Supports private deployment of mainstream large models,
bringing on-premises AI capability to meet individual AI deployment needs.

Private deployment of large models

The box features a built-in NPU delivering up to 6 TOPS of computing power, alongside an ARM Mali-G52 MC3 GPU.
This enables advanced intelligent data processing, speech recognition, and image analysis, effectively meeting
the edge-computing AI demands of a wide range of terminal devices.
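
For illustration only, here is a minimal sketch (not vendor documentation) of running a precompiled model on
the box's NPU with Rockchip's RKNN Lite runtime; the model file name and input shape are placeholders:

    # Sketch: run a precompiled .rknn model on the on-device NPU.
    # Assumes the rknn-toolkit-lite2 Python package is installed on the box;
    # "model.rknn" and the 1x224x224x3 input are placeholders for a real model.
    import numpy as np
    from rknnlite.api import RKNNLite

    rknn = RKNNLite()
    if rknn.load_rknn("model.rknn") != 0:       # model compiled for RK3576
        raise RuntimeError("failed to load model")
    if rknn.init_runtime() != 0:                # bind the NPU runtime
        raise RuntimeError("failed to init NPU runtime")

    dummy = np.random.rand(1, 224, 224, 3).astype(np.float32)  # placeholder input
    outputs = rknn.inference(inputs=[dummy])    # run on the NPU
    print([o.shape for o in outputs])
    rknn.release()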

Octa-core 64-bit AIoT processor RK3576

RK3576 is a new octa-core 64-bit high-performance AIoT processor featuring a big.LITTLE architecture
(4×Cortex-A72 + 4×Cortex-A53), an advanced lithography process, and a clock frequency of up to 2.2 GHz,
providing powerful support for high-performance computing and multitasking. Its Mali-G52 MC3 GPU delivers
145 GFLOPS and supports efficient heterogeneous computing to meet the demands of graphics-intensive applications.
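
To make use of the big.LITTLE layout, latency-sensitive workloads can be pinned to the big (Cortex-A72)
cluster. The sketch below uses the standard Linux affinity API and assumes the A72 cores are enumerated as
CPUs 4-7; the actual numbering should be confirmed in /proc/cpuinfo on the device:

    # Sketch: pin the current process to the big (Cortex-A72) cluster.
    # Assumption: the A72 cores are CPUs 4-7; verify with /proc/cpuinfo or lscpu.
    import os

    BIG_CORES = {4, 5, 6, 7}            # assumed Cortex-A72 cluster
    os.sched_setaffinity(0, BIG_CORES)  # 0 = current process
    print("running on CPUs:", sorted(os.sched_getaffinity(0)))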

4K@120fps high-frame-rate video decoding

This device supports 8K@30fps / 4K@120fps decoding (H.265 / HEVC, VP9, AVS2, and AV1),
4K@60fps decoding (H.264 / AVC), and 4K@60fps encoding (H.265 / HEVC, H.264 / AVC).
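
As a hedged example of exercising the hardware decoder, the sketch below drives GStreamer from Python; it
assumes a GStreamer build with the Rockchip MPP decoder plugin (the mppvideodec element) is installed on the
box, and sample.mp4 is a placeholder file:

    # Sketch: decode an H.265 file through the Rockchip hardware decoder.
    # Assumptions: gst-launch-1.0 and the Rockchip MPP plugin (mppvideodec)
    # are present on the image; "sample.mp4" is a placeholder input file.
    import subprocess

    pipeline = (
        "filesrc location=sample.mp4 ! qtdemux ! h265parse ! "
        "mppvideodec ! videoconvert ! fakesink sync=false"
    )
    subprocess.run(["gst-launch-1.0"] + pipeline.split(), check=True)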

Strong network communication capability

With dual 1000Mbps Ethernet, the AI box ensures high-speed and stable network communication,
meeting the needs of various application scenarios.

All-aluminum alloy enclosure for heat dissipation

The AI box has an industrial-grade all-metal enclosure with an aluminum alloy structure for thermal conduction. The side of the
top cover uses a grille design for external airflow and efficient heat dissipation, ensuring computing performance
and stability even under high-temperature operating conditions.

Its top cover is a porous hexagonal design, combining elegance with high efficiency. The compact, exquisite device operates stably
and meets the needs of various industrial-grade applications.

Abundant resources

Supports Linux OS (Ubuntu), providing a safe and stable system environment for product research and production. We offer SDKs,
tutorials, technical documentation, and development tools to streamline and improve the development process.
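
As an example of the development flow (a minimal sketch, not official documentation), converting an ONNX model
to RKNN format with Rockchip's rknn-toolkit2 on a development host could look like the following; the model
file, normalization values, and target string are placeholders:

    # Sketch: convert an ONNX model to RKNN on a development host.
    # Assumptions: rknn-toolkit2 is installed; "model.onnx", the mean/std
    # values, and the "rk3576" target string are placeholders for a real project.
    from rknn.api import RKNN

    rknn = RKNN()
    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
                target_platform="rk3576")
    if rknn.load_onnx(model="model.onnx") != 0:
        raise RuntimeError("failed to load ONNX model")
    if rknn.build(do_quantization=False) != 0:   # skip INT8 quantization for brevity
        raise RuntimeError("failed to build RKNN model")
    rknn.export_rknn("model.rknn")               # deployable on the AIBOX-3576 NPU
    rknn.release()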

A wide range of applications

AIBOX-3576 is widely used in intelligent surveillance, AI education, computing-power services, edge computing,
private deployment of large models, data security, and privacy protection.

Intelligent surveillance · AI education · Computing services · Edge computing · Large models · Data security
Specifications

The table below compares the AIBOX-3576 with the AIBOX-3588.

Basic Specifications

SOC

AIBOX-3576: Rockchip RK3576
AIBOX-3588: Rockchip RK3588

CPU

AIBOX-3576: Octa-core 64-bit processor (4×Cortex-A72 + 4×Cortex-A53), main frequency up to 2.2 GHz
AIBOX-3588: Octa-core 64-bit processor (4×Cortex-A76 + 4×Cortex-A55), main frequency up to 2.4 GHz

GPU

AIBOX-3576: Mali-G52 MC3 @ 1 GHz, supports OpenGL ES 1.1/2.0/3.2, OpenCL 2.0, Vulkan 1.1; embedded high-performance 2D acceleration hardware
AIBOX-3588: ARM Mali-G610 MP4 quad-core GPU, supports OpenGL ES 3.2, OpenCL 2.2, Vulkan 1.1; 450 GFLOPS

NPU

AIBOX-3576: 6 TOPS NPU, supports INT4/INT8/INT16/FP16/BF16/TF32 mixed operations
AIBOX-3588: 6 TOPS NPU, supports INT4/INT8/INT16 mixed operations

ISP

AIBOX-3576: Built-in 16 MP ISP; supports low-light noise reduction, RGB-IR sensors, and up to 120 dB HDR; AI-ISP improves low-light image quality
AIBOX-3588: Integrated 48 MP ISP with HDR & 3DNR

Encoding/Decoding

AIBOX-3576: Decoding: 4K@120fps H.265/HEVC, VP9, AVS2, AV1; 4K@60fps H.264/AVC. Encoding: 4K@60fps H.265/HEVC, H.264/AVC
AIBOX-3588: Decoding: 8K@60fps / 4K@120fps H.265/VP9/AVS2; 8K@30fps H.264 AVC/MVC; 4K@60fps AV1; 1080P@60fps MPEG-2/-1/VC-1/VP8. Encoding: 8K@30fps H.265/H.264

RAM

AIBOX-3576: LPDDR4 (4GB/8GB/16GB optional)
AIBOX-3588: LPDDR4 (4GB/8GB/16GB optional, up to 32GB)

Storage

eMMC (16GB/32GB/64GB/128GB/256GB optional); UFS 2.0 (optional, AIBOX-3576 only)

Storage Expansion

AIBOX-3576: 1 × internal M.2 slot (SATA 3.0 / PCIe NVMe SSD, supports 2242/2260/2280; multiplexed with the lower USB 3.0 port), 1 × TF card
AIBOX-3588: 1 × internal M.2 slot (SATA 3.0 / PCIe NVMe SSD, supports 2242/2260/2280), 1 × TF card

Power

DC 12V/3A (DC 5.5 × 2.1 mm)

Power consumption

AIBOX-3576: Normal: 1.2W (12V/100mA); Max: 7.2W (12V/600mA); Min: 0.72W (12V/6mA)
AIBOX-3588: Normal: 3.6W (12V/300mA); Max: 13.2W (12V/1100mA); Min: 1.38W (12V/115mA)

OS

Linux OS (Ubuntu)

Software support

· Supports private deployment of large-parameter models based on the Transformer architecture, such as the Gemma-2B, ChatGLM3-6B, Qwen-1.8B, and Phi-3-3.8B large language models
· Supports traditional network architectures such as CNN, RNN, and LSTM; supports import and export of RKNN models; supports a variety of deep learning frameworks, including TensorFlow, TensorFlow Lite, PyTorch, Caffe, ONNX, and Darknet; also supports the development of custom operators
· Supports Docker container management

Size

93.4mm × 93.4mm × 50mm

Weight

≈ 500g

Environment

Operating: -20℃~60℃, Storage: -20℃~70℃, Storage Humidity: 10%~90%RH (non-condensing)

Interface Specifications

Ethernet

2 × Gigabit Ethernet (1000Mbps, RJ45)

Video Output

1 × HDMI 2.1 (4K@120fps)

USB

AIBOX-3576: 2 × USB 3.0 (Max: 1A; M.2 multiplexed with the lower USB 3.0 port)
AIBOX-3588: 2 × USB 3.0 (Max: 1A)

Other interfaces

1 × Type-C (Flash), 1 × Console (Debug serial), 1 × Power button, 1 × MaskRom