AIBOX-1684

Artificial Intelligence Box

AIBOX-1684 is powered by the SOPHON AI processor BM1684, which delivers up to 17.6 TOPS of INT8
computing power. It supports up to 32 channels of 1080P H.265/H.264 video
decoding and 2 channels of 1080P H.265/H.264 video encoding, making it suitable
for applications in smart surveillance, AI education, computing-power services,
edge computing, data security, and privacy protection.

17.6 TOPS AI processor BM1684

BM1684, the SOPHON AI processor, features an octa-core ARM Cortex-A53 running at up to 2.3 GHz. Equipped with a TPU neural network
acceleration engine, it delivers peak performance of 17.6 TOPS at INT8, 2.2 TFLOPS at FP32, and 35.2 TOPS at INT8 with Winograd enabled.
With support for mainstream programming frameworks, the processor can be widely used in AI inference, computer vision, and more.

Multi-channel video AI processing performance

The AI box supports up to 32 channels of H.265/H.264 1080P@30fps video decoding,
2 channels of 1080P@25fps video encoding, and MJPEG image encoding/decoding at up to 1080P@480fps.
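
For illustration only, the sketch below pulls several camera streams with ordinary OpenCV code. It is a minimal sketch under assumptions: it presumes the SOPHON-patched OpenCV/FFmpeg build shipped with the SDK (which routes cv2.VideoCapture decoding through the hardware VPU) is installed, and the RTSP URLs and channel count are placeholders, not part of the product documentation.

    import cv2

    # Placeholder RTSP sources; on the AI box, the SOPHON-patched OpenCV/FFmpeg
    # stack is assumed to decode each H.264/H.265 stream on the VPU.
    urls = [f"rtsp://192.168.1.{10 + i}:554/stream1" for i in range(4)]
    caps = [cv2.VideoCapture(u) for u in urls]

    for _ in range(100):                      # bounded loop for the sketch
        for ch, cap in enumerate(caps):
            ok, frame = cap.read()            # one decoded 1080P BGR frame
            if not ok:
                continue
            # Hand the frame to the AI pipeline (detection, structuring, ...)
            print(f"channel {ch}: frame {frame.shape}")

    for cap in caps:
        cap.release()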

One-stop toolkit, convenient and efficient

SOPHON SDK, a one-stop deep learning development toolkit, provides a series of software tools, including the underlying driver environment,
the compiler, and the inference deployment tools. It supports mainstream frameworks (Caffe/TensorFlow/PyTorch/MXNet/Paddle),
mainstream network models and custom operator development, Docker containerization, and rapid deployment of algorithm applications.
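
As a rough illustration of that flow, the sketch below runs inference on a network that has already been compiled to the SDK's .bmodel format by the offline compiler (e.g. bmnetp for PyTorch or bmneto for ONNX; tool names vary by SDK version), using the sophon-sail Python runtime. It is a minimal sketch under assumptions: the module and method names (sophon.sail, Engine, get_graph_names, process) follow the sophon-sail API as commonly documented and may differ between SDK versions, and the .bmodel file name is a placeholder.

    import numpy as np
    import sophon.sail as sail   # SOPHON SDK Python inference runtime (sophon-sail)

    # Load a network pre-compiled to .bmodel by the SDK's offline compiler
    # (placeholder file name; device 0 is the BM1684 TPU).
    engine = sail.Engine("model_int8.bmodel", 0, sail.IOMode.SYSIO)

    graph = engine.get_graph_names()[0]          # compiled graph name
    inp = engine.get_input_names(graph)[0]       # first input tensor name
    shape = engine.get_input_shape(graph, inp)   # e.g. [1, 3, 224, 224]

    # Dummy input; a real application would feed a preprocessed frame here.
    data = np.random.rand(*shape).astype(np.float32)

    # Run inference on the TPU and print the output tensor shapes.
    outputs = engine.process(graph, {inp: data})
    for name, tensor in outputs.items():
        print(name, tensor.shape)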

Comprehensive software and hardware support to accelerate application deployment

With complete software and hardware support, AI inference for cloud and edge applications can be easily achieved,
accelerating the development of edge applications such as face recognition, video structuring,
abnormal-event alarms, equipment inspection, and situation prediction.

Abundant algorithms

The AI box supports the migration of multiple algorithms, including person/vehicle/object recognition, video structuring, and trajectory behavior analysis,
with high security and reliability. It can be flexibly applied to a wide range of product development.

Strong network communication capability

With dual 1000Mbps Ethernet, the AI box ensures high-speed and stable network communication,
meeting the needs of various application scenarios.

All-aluminum alloy enclosure for heat dissipation

The AI box has an industrial-grade all-metal enclosure with an aluminum alloy structure for thermal conduction. The side of the top
cover uses a grille design for external airflow and efficient heat dissipation, ensuring computing performance and stability even
under high-temperature operating conditions.

Exquisite, ingenious design

Its top cover uses a porous hexagonal design, combining elegance with high efficiency. The compact,
exquisite device operates stably and meets the needs of various industrial-grade applications.

Abundant resources

We offer SDKs, tutorials, technical documentation, and development tools to streamline and improve the development process.

A wide range of applications

AIBOX-1684 is widely used in intelligent surveillance, AI education, services based on computing power, edge computing,
private deployment of large models, and data security and privacy protection.

Intelligent surveillance, AI education, services based on computing power, edge computing, intelligent transportation, data security
Specifications

(Unless a model is indicated, values apply to both AIBOX-1684X and AIBOX-1684.)

Basic Specifications

SoC
AIBOX-1684X: SOPHON BM1684X
AIBOX-1684: SOPHON BM1684

CPU
High-performance octa-core ARM Cortex-A53, 12nm lithography process, frequency up to 2.3 GHz

TPU
AIBOX-1684X: 32 TOPS (INT8), 16 TFLOPS (FP16/BF16), 2 TFLOPS (FP32)
AIBOX-1684: 17.6 TOPS (INT8), 2.2 TFLOPS (FP32), 35.2 TOPS (INT8, Winograd enabled)

VPU
AIBOX-1684X: 32-channel H.265/H.264 1080P@25fps video decoding; 1-channel H.265 8K@25fps video decoding; 32-channel 1080P@25fps decoding + AI analysis; 12-channel H.265/H.264 1080P@25fps video encoding; JPEG image encoding/decoding up to 1080P@600fps
AIBOX-1684: 32-channel H.265/H.264 1080P@30fps video decoding; 2-channel H.265/H.264 1080P@25fps video encoding; MJPEG image encoding/decoding up to 1080P@480fps

RAM
AIBOX-1684X: 8GB/12GB/16GB LPDDR4/LPDDR4X
AIBOX-1684: 6GB/12GB/16GB LPDDR4/LPDDR4X

Storage
32GB/64GB/128GB eMMC, 1 × TF card

Power
AIBOX-1684X: DC 12V/4A (5.5 × 2.5mm)
AIBOX-1684: DC 12V/3A (5.5 × 2.5mm)

Power consumption
AIBOX-1684X: Normal: 20.4W (12V/1700mA), Max: 43.2W (12V/3600mA)
AIBOX-1684: Normal: 9.6W (12V/800mA), Max: 26.4W (12V/2200mA)

System

Linux

Software Support
AIBOX-1684X:
・ Private deployment of ultra-large-scale parameter models under the Transformer architecture, including large language models such as LLaMa2, ChatGLM, and Qwen, as well as major visual models like ViT, Grounding DINO, and SAM
・ Private deployment of the Stable Diffusion V1.5 image generation model in the AIGC field
・ Traditional network architectures such as CNN, RNN, and LSTM; a variety of deep learning frameworks, including TensorFlow, PyTorch, MXNet, PaddlePaddle, Caffe, and ONNX, as well as custom operator development
・ Docker container management technology

AIBOX-1684:
・ Traditional network architectures such as CNN, RNN, and LSTM; a variety of deep learning frameworks, including TensorFlow, PyTorch, MXNet, PaddlePaddle, Caffe, and ONNX, as well as custom operator development
・ Docker container management technology

Size

90.6mm × 84.4mm × 48.5mm

Weight

≈ 420g

Environment

Operating temperature: -20℃~60℃, Storage temperature: -20℃~70℃, Storage humidity: 10%~90%RH (non-condensing)

Interface Specifications

Ethernet

2 × Gigabit Ethernet (1000Mbps/RJ45)

USB

2 × USB3.0 (Max: 1A), 1 × Type-C (Debug serial)

Button

1 × Power button