NCC S1 Neural Network Computing Card

Built on the AI-specific APiM framework, the NCC S1 is a modular deep neural network accelerator that requires no external cache and is designed for high-performance edge computing, vision-based deep learning, and AI algorithm acceleration. The card is compact, consumes very little power, and delivers high peak performance. Bundled with a complete, easy-to-use model training tool, example network training models, and a professional hardware platform, it can be put to work quickly across the artificial intelligence industry.

2.8 TOPS Computing Performance

Built around an embedded Neural Network Processor (NPU), the NCC S1 provides 28,000 parallel neural computing cores and supports on-chip parallel and in-situ computation. Its peak performance reaches 5.6 TOPS, with a computing performance of 2.8 TOPS, enough to handle the complex, high-density workloads of high-performance edge computing.

AI Processing Framework APiM

Built on an AI-specific MPE matrix engine and the APiM (AI Processing in Memory) framework, the card takes a fundamentally different approach to AI processing. With no instruction set, bus, or external DDR cache, the network is preloaded onto the chip once, and data then streams directly in and out of the silicon, which greatly increases AI processing speed and reduces energy consumption.

9.3 TOPS/W High Energy Efficiency

The NPU on the NCC S1 neural network computing card is fabricated in a 28 nm process. It draws only 300 mW at a throughput of 2.8 TOPS, for an energy efficiency of up to 9.3 TOPS/W (2.8 TOPS ÷ 0.3 W ≈ 9.3 TOPS/W). This combination of strong computing ability and extremely low power consumption gives it a clear advantage in edge computing on terminal devices.

High-performance Hardware Platform

The NCC S1 neural network computing card can be paired with the ROC-RK3399-PC open-source mainboard. With its high-performance six-core RK3399 processor and rich set of hardware interfaces, the combination forms an edge-computing hardware platform that can be integrated rapidly, making it easy to build product prototypes and accelerate AI product development.

  • MIPI
  • eDP
  • HDMI
  • POE
  • Type-C
  • GPIO

Supporting Model Training Tools

A complete, easy-to-use model training tool, PLAI (People Learn AI), is provided. Based on PyTorch and available for Windows 10 and Ubuntu 16.04, it makes adding custom network models quick and simple, which greatly lowers the technical barrier to applying AI and makes AI technology accessible to more people. A sketch of such a custom model follows below.
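Since PLAI builds on PyTorch, a custom network is defined as an ordinary `torch.nn.Module` before being brought into the tool. The following is a minimal illustrative sketch only; the class name, layer choices, and input size are assumptions for demonstration, not PLAI's actual API or the GNet architectures.

```python
# Minimal sketch of the kind of custom network that could be added via PLAI.
# The class name, layers, and input size below are illustrative assumptions.
import torch
import torch.nn as nn

class CustomNet(nn.Module):
    """A small VGG-style convolutional network (hypothetical example)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)        # (N, 64, 8, 8) for a 32x32 RGB input
        x = torch.flatten(x, 1)     # flatten all but the batch dimension
        return self.classifier(x)

if __name__ == "__main__":
    model = CustomNet()
    dummy = torch.randn(1, 3, 32, 32)  # one 32x32 RGB image
    print(model(dummy).shape)          # torch.Size([1, 10])
```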

Network Training Model Examples

Three example network training models are provided: GNet1, GNet18, and GNetfc, with more network instances to follow, making it easy to test a wide range of deep learning applications on the device.

Specification

  • NPU: Lightspeeur SPR2801S (28 nm process, unique MPE and APiM architecture)
  • Peak performance: 5.6 TOPS @ 100 MHz
  • Low-power performance: 2.8 TOPS @ 300 mW
  • Platform: Compatible with the ROC-RK3399-PC platform
  • Framework: PyTorch and Caffe supported; TensorFlow support planned
  • Tools: PLAI model training tool (supports the GNet1, GNet18, and GNetfc network models based on VGG-16); runs on Ubuntu and Windows
  • Size: 27.5 × 12.5 × 3.5 mm