BM1684X, the SOPHON AI processor, features an octa-core ARM Cortex-A53 running at up to 2.3 GHz and is built on a 12nm lithography process.
Delivering up to 32 TOPS (INT8), 16 TFLOPS (FP16/BF16), or 2 TFLOPS (FP32) of computing power, it supports mainstream
programming frameworks and is well suited to AI inference in both cloud and edge applications.
The AI box supports up to 32 channels of 1080p H.264/H.265 video decoding and 32 channels of 1080p
HD video processing (decoding + AI analysis), making it ideal for AI applications such as face detection and
license-plate recognition on video streams.
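As a rough illustration of what this decode budget means, the sketch below computes the aggregate frame and pixel throughput implied by 32 channels of 1080p at 25 fps (constants taken from the spec above; the additional cost of AI analysis per frame is not modeled):

```python
# Back-of-the-envelope decode throughput for the advertised capacity:
# 32 channels of 1080p (1920x1080) video at 25 fps.
CHANNELS = 32            # decode channels (from the spec)
WIDTH, HEIGHT = 1920, 1080
FPS = 25

frames_per_second = CHANNELS * FPS
pixels_per_second = CHANNELS * WIDTH * HEIGHT * FPS

print(f"Aggregate frame rate: {frames_per_second} frames/s")          # 800 frames/s
print(f"Aggregate pixel rate: {pixels_per_second / 1e9:.2f} Gpixel/s")  # ~1.66 Gpixel/s
```

In other words, the device must sustain roughly 800 decoded frames per second across all channels at full load.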
With dual 1000Mbps Ethernet ports, the AI box provides high-speed, stable network communication,
meeting the needs of a wide range of application scenarios.
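A quick sanity check that the dual Gigabit links can comfortably ingest all 32 video channels. The 4 Mbps per-stream figure is an assumed, typical H.265 1080p@25fps bitrate, not a value from the spec:

```python
# Hypothetical ingest-bandwidth budget for streaming all decode channels in.
CHANNELS = 32            # decode channels (from the spec)
BITRATE_MBPS = 4.0       # ASSUMED per-stream H.265 1080p@25fps bitrate
LINKS = 2                # dual 1000Mbps Ethernet (from the spec)
LINK_MBPS = 1000

total_mbps = CHANNELS * BITRATE_MBPS
capacity_mbps = LINKS * LINK_MBPS
print(f"Ingest: {total_mbps:.0f} Mbps of {capacity_mbps} Mbps available")
# -> Ingest: 128 Mbps of 2000 Mbps available
```

Even at several times this assumed bitrate, 32 inbound streams remain well within the combined link capacity.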
The industrial-grade all-metal enclosure uses an aluminum-alloy structure for thermal conduction. The sides of the
top cover feature a grille design for external airflow and efficient heat dissipation, maintaining computing
performance and stability even under high-temperature operating conditions. The top cover itself uses a porous
hexagonal design, combining elegance with efficiency.
The compact device operates stably and meets the needs of a wide range of industrial-grade applications.
We offer SDKs, tutorials, technical documentation, and development tools to streamline and improve the development process.
The device is widely used in intelligent surveillance, AI education, computing-power services, edge computing,
private deployment of large models, and data security and privacy protection.
| Specifications | | |
|---|---|---|
| Basic Specifications | SOC | SOPHON BM1684X: high-performance octa-core ARM Cortex-A53, 12nm lithography process, frequency up to 2.3 GHz |
| | TPU | Built-in tensor computing module (TPU), computing power up to 32 TOPS (INT8), 16 TFLOPS (FP16/BF16), 2 TFLOPS (FP32) |
| | VPU | 32-channel H.265/H.264 1080p@25fps video decoding; 32-channel 1080p@25fps HD video processing (decoding + AI analysis); 12-channel H.265/H.264 1080p@25fps video encoding |
| | RAM | 8GB/12GB/16GB LPDDR4/LPDDR4X |
| | Storage | 32GB/64GB/128GB eMMC, 1 × TF card |
| | Power | DC 12V/4A (DC 5.5mm × 2.5mm) |
| | OS | Ubuntu |
| | Software Support | ・Private deployment of ultra-large-parameter models under the Transformer architecture, including large language models such as LLaMa2, ChatGLM, and Qwen, as well as large vision models such as ViT, Grounding DINO, and SAM ・Private deployment of the Stable Diffusion V1.5 image-generation model in the AIGC field ・Traditional network architectures such as CNN, RNN, and LSTM; a variety of deep-learning frameworks, including TensorFlow, PyTorch, MXNet, PaddlePaddle, ONNX, and Darknet, as well as custom operator development ・Docker container management technology |
| | Dimensions | 90.6mm × 84.4mm × 48.5mm |
| | Weight | ≈ 420g |
| | Environment | Operating: -20℃ ~ 60℃; storage: -20℃ ~ 70℃; humidity: 10% ~ 90% RH (non-condensing) |
| Interfaces | Ethernet | 2 × 1000Mbps |
| | USB | 2 × USB 3.0 (1A current limit) |
| | Other | 1 × power button, 1 × Type-C (debug serial) |
Copyright © 2014 - 2023 FIREFLY TECHNOLOGY CO.,LTD | 粤ICP备14022046号-2