Powered by the SOPHON AI processor BM1684, this core board can be configured with 12GB RAM and delivers up to 17.6 TOPS of INT8 computing power. It supports mainstream frameworks with a complete, easy-to-use toolchain, keeping the cost of algorithm migration low. A backplane reference design is provided so users can carry out further customization. The board can be applied to a wide range of AI scenarios, such as visual computing, edge computing, general computing power services, intelligent transportation, unmanned supermarkets, security surveillance, and UAVs.
This core board is powered by the SOPHON AI processor BM1684, an octa-core ARM Cortex-A53 running at up to 2.3GHz and fabricated on a 12nm lithography process. With up to 17.6 TOPS of INT8 computing power or 2.2 TFLOPS of FP32 high-precision computing power, it supports mainstream programming frameworks and can be widely used for AI inference in cloud and edge applications.
Up to 32 channels of 1080p H.264/H.265 video decoding are supported, and more than 16 channels of HD video can be processed and analyzed simultaneously, meeting the needs of AI application scenarios such as face detection on video streams and license plate recognition.
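As a back-of-envelope check (my own arithmetic, using only the figures quoted above and assuming each channel is 1080p at 30fps), the decode and analysis budget works out as follows:

```python
# Rough sanity check of the decode figures quoted above.
# Assumption: every channel is 1080p (1920x1080) at 30 fps.
WIDTH, HEIGHT, FPS = 1920, 1080, 30
DECODE_CHANNELS = 32   # max H.264/H.265 decode channels
ANALYZE_CHANNELS = 16  # channels that can be analyzed concurrently

decode_fps_total = DECODE_CHANNELS * FPS             # aggregate decoded frames/s
decode_pixels_per_s = decode_fps_total * WIDTH * HEIGHT

print(f"aggregate decode: {decode_fps_total} frames/s, "
      f"{decode_pixels_per_s / 1e9:.2f} Gpixel/s")
print(f"analysis budget: {ANALYZE_CHANNELS} of {DECODE_CHANNELS} channels")
```

This puts the raw decode throughput at roughly 960 frames and about 2 Gpixels per second, of which half the channels can also go through AI analysis.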
Based on measured data with INT8 quantization at batch size 4, the Core-1684JD4 achieves higher throughput and a better energy-efficiency ratio than mainstream intelligent computing module platforms in the industry, giving it a clear performance advantage.
The BMNNSDK2 one-stop deep learning development kit provides a series of software tools, including the underlying driver environment, compiler, and inference deployment tools. It supports mainstream frameworks (Caffe, TensorFlow, PyTorch, MXNet, Paddle), mainstream network models, custom operator development, Docker containerization, and rapid deployment of algorithm applications.
With this complete software framework, AI inference for cloud and edge applications can be achieved easily, accelerating the development of edge applications such as face recognition, video structuring, abnormal-event alarms, equipment inspection, and situation prediction.
With PCIe 3.0, GMAC, SDIO 3.0, I2C, PWM, UART, and GPIO interfaces, the board is easy to integrate into various edge embedded products, accelerating product development.
The core board adopts a standard 260-pin SODIMM interface with an immersion-gold finish, keeping its footprint small. Combined with a backplane, it forms a complete high-performance mainboard with richer expansion interfaces.
A backplane reference design and complete technical documentation are provided, so users can proceed efficiently with secondary development and quickly create independent, controllable products.
The core board can efficiently adapt to mainstream AI algorithms on the market and integrate into edge computing boxes, promoting the AI-driven development of industries such as visual computing, edge computing, general computing power services, intelligent construction sites, intelligent transportation, smart classrooms, unmanned supermarkets, and security surveillance.
Integrated high-performance octa-core ARM Cortex-A53, 12nm lithography process, clock speed up to 2.3GHz
Built-in tensor computing module (TPU), computing power up to:
17.6 TOPS (INT8) / 2.2 TFLOPS (FP32) / 35.2 TOPS (INT8, Winograd enabled)
The TPU contains 64 NPU arithmetic units; each NPU contains 16 EU arithmetic units, for 1024 EUs in total
Support mainstream programming frameworks, such as TensorFlow / Caffe / PyTorch /
Paddle / ONNX / MXNet / Tengine / DarkNet
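A quick consistency check of the compute figures above (my own arithmetic, not from the datasheet): the total EU count follows directly from the NPU layout, and the Winograd figure implies a 2x effective INT8 speedup.

```python
# Sanity-check the TPU figures quoted above.
NPUS = 64          # NPU arithmetic units in the TPU
EUS_PER_NPU = 16   # EU arithmetic units per NPU

total_eus = NPUS * EUS_PER_NPU
print(total_eus)   # 1024, matching "1024 EUs in total"

# Winograd convolution trades multiplications for additions, raising
# effective conv throughput; the quoted peaks imply a 2x gain.
print(35.2 / 17.6)  # 2.0
```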
Up to 32-channel H.265/H.264 1080p@30fps video decoding
1080p@50fps video encoding
MJPEG image encoding and decoding up to 1080p@480fps
Dual 1000Mbps Ethernet can be extended through GMAC
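For comparison (my own arithmetic; all figures assume 1080p frames as stated above), the codec capabilities translate into the following aggregate pixel rates:

```python
# Convert the codec figures above into aggregate pixels per second.
PIX_1080P = 1920 * 1080  # pixels in one 1080p frame

budgets = {
    "H.265/H.264 decode (32ch @ 30fps)": 32 * 30 * PIX_1080P,
    "video encode (1080p @ 50fps)":      50 * PIX_1080P,
    "MJPEG codec (1080p @ 480fps)":      480 * PIX_1080P,
}

for name, pps in budgets.items():
    print(f"{name}: {pps / 1e6:.0f} Mpixel/s")
```

The decode path dominates, at roughly twenty times the encode pixel budget.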
Dimensions: 69.6mm × 55mm
Operating Temperature: -20℃～60℃
Storage Temperature: -20℃～70℃
Storage Humidity: 40%～70%
Feedback is welcome; your comments and suggestions are our driving force!