Powered by SOPHON's BM1684X AI processor, this computer can be configured with up to 16GB of RAM. Delivering up to 32 TOPS of INT8 computing power, it supports mainstream
programming frameworks and ships with a complete, easy-to-use toolchain, keeping the cost of algorithm migration low. The device can be applied to visual computing,
edge computing, general computing services, smart transportation, smart classrooms, unmanned supermarkets, surveillance security, and more.
The computer is built around the BM1684X, SOPHON's AI processor, which integrates an octa-core ARM Cortex-A53 running at up to 2.3GHz on a 12nm
process. With up to 32 TOPS (INT8) of computing power, it supports mainstream programming frameworks and can be widely
used for artificial intelligence inference in cloud and edge applications.
The computer supports up to 32-channel 1080P H.264/H.265 video decoding and can simultaneously process and analyze more than 16 channels of HD video,
making it ideal for AI applications such as face detection and license plate recognition on video streams.
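To give a rough sense of scale, the decode load implied by those figures can be sketched with back-of-the-envelope arithmetic (illustrative only; derived purely from the channel count, resolution, and frame rate stated above):

```python
# Back-of-the-envelope decode load for 32 x 1080P@25fps streams.
WIDTH, HEIGHT, FPS, CHANNELS = 1920, 1080, 25, 32

pixels_per_frame = WIDTH * HEIGHT                       # 2,073,600 pixels per frame
pixels_per_second = pixels_per_frame * FPS              # per-channel pixel rate
total_pixels_per_second = pixels_per_second * CHANNELS  # aggregate across all streams

print(f"aggregate decode load: {total_pixels_per_second / 1e9:.2f} Gpx/s")  # ≈1.66 Gpx/s
```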
Based on measured data using Batch4 INT8 quantization, the EC-A1684XJD4 achieves higher throughput and a better energy-efficiency
ratio than mainstream intelligent computing module platforms in the industry.
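The INT8 quantization referred to above trades a small amount of precision for throughput by mapping floating-point tensors onto 8-bit integers. The following is a minimal NumPy sketch of symmetric per-tensor quantization for illustration only; the actual calibration performed by SOPHON's toolchain is more sophisticated:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: x is approximated by scale * q."""
    scale = np.abs(x).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original floating-point tensor."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 8).astype(np.float32)  # e.g. a batch-4 activation tensor
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)

# Round-trip error is bounded by half a quantization step (scale / 2).
print("max abs error:", float(np.abs(x - x_hat).max()))
```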
It supports dual 1000Mbps Ethernet, 2.4GHz/5GHz dual-band WiFi, and 5G/4G LTE network expansion for high-speed communication.
SOPHON SDK, a one-stop deep learning development toolkit, provides a full set of software tools, including the underlying driver environment, compiler,
and inference deployment tools. It supports mainstream frameworks (Caffe, TensorFlow, PyTorch, MXNet, and PaddlePaddle), mainstream network models
and custom operator development, Docker containerization, and rapid deployment of algorithm applications.
With complete software and hardware support, AI inference for cloud and edge applications can be achieved easily, accelerating the development of edge
applications such as face recognition, video structuring, abnormality alarms, equipment inspection, situation prediction, and more.
It supports the migration of multiple algorithms, including person/vehicle/object recognition, video structuring, and trajectory behavior analysis,
with high security and reliability, and can be flexibly applied to a wide range of product development.
With a variety of interfaces, including RS485, RS232, USB3.0, USB2.0, and HDMI, the device provides convenient data
connectivity and communication, making it suitable for direct application in AI edge computing products.
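As an illustration of driving those serial ports, devices on an RS485 bus are commonly polled with Modbus RTU, whose frames end in a CRC-16 checksum. The sketch below builds such a frame; the pySerial usage and the `/dev/ttyUSB0` port name in the comment are assumptions, not part of this product's documentation — adapt them to your wiring and device tree:

```python
import struct

def crc16_modbus(data: bytes) -> int:
    """CRC-16/Modbus (polynomial 0xA001 reflected, init 0xFFFF), as used by Modbus RTU."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def build_read_request(slave: int, start: int, count: int) -> bytes:
    """Modbus RTU 'read holding registers' (function 0x03) request frame."""
    body = struct.pack(">BBHH", slave, 0x03, start, count)
    return body + struct.pack("<H", crc16_modbus(body))  # CRC transmitted low byte first

frame = build_read_request(slave=1, start=0, count=1)
print(frame.hex())

# Sending it over the RS485 port would look roughly like this (pySerial;
# the port name is an assumption -- check your system):
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
#       port.write(frame)
#       reply = port.read(7)
```

A useful property of this CRC is that running it over a whole frame, including its own trailing checksum, yields zero, which makes receive-side validation a one-liner.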
We provide SDKs, tutorials, technical documents, and development tools, making development simpler and more convenient.
Compatible with mainstream AI algorithms on the market, this computer can be integrated into products such as edge computing boxes. It provides powerful
AI performance for a range of industries, including visual computing, edge computing, general computing power services, artificial intelligence,
smart construction sites, smart transportation, smart classrooms, unmanned supermarkets, and security surveillance.
| Item | EC-A1684XJD4 / EC-A1684XJD4 V2 |
| --- | --- |
| **Basic Specifications** | |
| SoC | SOPHON BM1684X |
| CPU | Integrated high-performance octa-core ARM Cortex-A53, 12nm process, frequency up to 2.3GHz |
| TPU | Built-in tensor computing module (TPU); computing power up to 32 TOPS (INT8), 16 TFLOPS (FP16/BF16), 2 TFLOPS (FP32) |
| Codecs | 32-channel H.265/H.264 1080P@25fps and 1-channel H.265 8K@25fps video decoding; 32-channel 1080P@25fps HD video processing (decoding + AI analysis); 12-channel H.265/H.264 1080P@25fps video encoding; JPEG image codec up to 1080P@600fps |
| RAM | 8GB/12GB/16GB LPDDR4/LPDDR4X |
| Storage | 32GB/64GB/128GB eMMC |
| Storage Expansion | EC-A1684XJD4: 1 × M.2 SATA3.0 (expandable 2242 SATA SSD, inside the computer), 1 × TF card; V2: 1 × M.2 SATA3.0 (expandable 2242 SATA SSD, at the bottom of the computer), 1 × TF card |
| Power | DC 12V (5.5mm × 2.5mm) |
| Power Consumption | Normal: 24W (12V/2000mA); Max: 42W (12V/3500mA) |
| System | Linux |
| Software Support | ・Private deployment of ultra-large-scale parameter models under the Transformer architecture, including large language models (Llama, ChatGLM, and Qwen series) and major vision models (ViT, Grounding DINO, SAM) ・Private deployment of the Stable Diffusion V1.5 image-generation model in the AIGC field ・Traditional network architectures such as CNN, RNN, and LSTM; deep learning frameworks including TensorFlow, PyTorch, MXNet, PaddlePaddle, Caffe, and ONNX; custom operator development ・Docker container management |
| Size | 210.0mm × 130.0mm × 44.5mm |
| Weight | Net: 1.28kg; with antennas: 1.33kg; with packaging: 2.24kg |
| Environment | Operating temperature: -20℃~60℃; storage temperature: -20℃~70℃; storage humidity: 10%~90% RH (non-condensing) |
| **Interface Specifications** | |
| Ethernet | 2 × Gigabit Ethernet (RJ45, 1000Mbps) |
| Wireless Network | 2.4GHz/5GHz dual-band WiFi (802.11a/b/g/n/ac); expandable 4G LTE (via Mini PCIe) and 5G (via M.2 B-Key) |
| Video Output | 1 × HDMI2.0 (1080P@30Hz) |
| Audio | 1 × HDMI audio output |
| USB | EC-A1684XJD4: 2 × USB3.0 (max 1A), 2 × USB2.0 (max 500mA); V2 adds 1 × Type-C (debug) |
| Serial | 1 × RS232 (DB9), 1 × RS485 (DB9) |
| Other Interfaces | 2 × WiFi antennas, 1 × Bluetooth antenna, 1 × 4G antenna, 1 × SIM card slot (for expanded 5G/4G LTE) |