NNoM is a framework designed for running neural networks on MCUs

Community submission · 1105 · 2022-11-02

Neural Network on Microcontroller (NNoM)

NNoM is a high-level inference neural network library specifically for microcontrollers.

[English Manual] [Chinese Intro]

Highlights

- Deploy a Keras model to an NNoM model with one line of code.
- User-friendly interfaces.
- Support for complex structures: Inception, ResNet, DenseNet, Octave Convolution...
- High-performance backend selections.
- Onboard (MCU) evaluation tools: runtime analysis, Top-k accuracy, confusion matrix...

More details are available in the Development Guide.
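The onboard evaluation tools compute metrics such as Top-k accuracy and a confusion matrix directly on the MCU. For reference, the host-side equivalents take only a few lines of plain Python. This is an illustrative sketch; the helper names below are hypothetical and not part of the NNoM API:

```python
# Host-side sketch of the metrics NNoM's onboard evaluation tools report.
# Illustrative only; these helpers are not part of the NNoM API.

def top_k_accuracy(scores, labels, k=2):
    """Fraction of samples whose true label is among the k highest scores."""
    hits = 0
    for row, label in zip(scores, labels):
        # Class indices sorted by score, highest first.
        ranked = sorted(range(len(row)), key=lambda i: row[i], reverse=True)
        if label in ranked[:k]:
            hits += 1
    return hits / len(labels)

def confusion_matrix(predictions, labels, num_classes):
    """matrix[true_label][predicted_label] occurrence counts."""
    matrix = [[0] * num_classes for _ in range(num_classes)]
    for pred, label in zip(predictions, labels):
        matrix[label][pred] += 1
    return matrix

# Three samples, three classes: per-class scores and true labels.
scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]]
labels = [1, 2, 2]
predictions = [max(range(len(s)), key=lambda i: s[i]) for s in scores]

print(top_k_accuracy(scores, labels, k=2))               # 2 of 3 samples hit
print(confusion_matrix(predictions, labels, num_classes=3))
```

On the MCU these statistics are accumulated sample by sample in the same spirit, without needing the whole test set in memory at once.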

Discussions are welcome via issues, and pull requests are welcome. QQ/TIM group: 763089399.

Licenses

NNoM is released under Apache License 2.0 since nnom-V0.2.0. License and copyright information can be found within the code.

Why NNoM?

The aim of NNoM is to provide a lightweight, user-friendly and flexible interface for fast deployment.

[1] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1-9).
[2] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
[3] Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4700-4708).

Since 2014, the development of neural networks has focused more on optimizing structure to improve efficiency and performance, which matters even more on small-footprint platforms such as MCUs. However, the available NN libraries for MCUs are too low-level, which makes them very difficult to use with these complex structures.

Therefore, we built NNoM to help embedded developers deploy NN models directly to MCUs faster and more simply.

NNoM manages the structure, memory and everything else for the developer. All you need to do is feed in your new measurements and read out the results.

NNoM works closely with Keras (you can easily learn Keras in 30 seconds!). There is no need to learn TensorFlow/Lite or other libraries.

Documentations

Guides

5 min to NNoM Guide

The temporary guide

Porting and optimising Guide

RT-Thread Guide (Chinese)

RT-Thread-MNIST example (Chinese)

Examples

Documented examples

Please check examples and choose one to start with.

Available Operations

[API Manual]

*Note: NNoM now supports both HWC and CHW formats. Some operations may not yet support both formats; please check the tables for the current status.*

Core Layers

| Layers | HWC | CHW | Layer API | Comments |
| --- | --- | --- | --- | --- |
| Convolution |  |  | Conv2D() | Supports 1D/2D |
| Depthwise Conv |  |  | DW_Conv2D() | Supports 1D/2D |
| Fully-connected |  |  | Dense() |  |
| Lambda |  |  | Lambda() | Single-input / single-output anonymous operation |
| Batch Normalization |  |  | N/A | This layer is merged into the preceding Conv by the script |
| Flatten |  |  | Flatten() |  |
| SoftMax |  |  | SoftMax() | Softmax only has a layer API |
| Activation |  |  | Activation() | A layer instance for activation |
| Input/Output |  |  | Input()/Output() |  |
| Up Sampling |  |  | UpSample() |  |
| Zero Padding |  |  | ZeroPadding() |  |
| Cropping |  |  | Cropping() |  |

RNN Layers

| Layers | Status | Layer API | Comments |
| --- | --- | --- | --- |
| Recurrent NN | Under dev. | RNN() | Under development |
| Simple RNN | Under dev. | SimpleCell() | Under development |
| Gated Recurrent Network (GRU) | Under dev. | GRUCell() | Under development |

Activations

Activation can be used as a layer by itself, or can be attached to the previous layer as an "actail" to reduce memory cost.

| Activation | HWC | CHW | Layer API | Activation API | Comments |
| --- | --- | --- | --- | --- | --- |
| ReLU |  |  | ReLU() | act_relu() |  |
| TanH |  |  | TanH() | act_tanh() |  |
| Sigmoid |  |  | Sigmoid() | act_sigmoid() |  |

Pooling Layers

| Pooling | HWC | CHW | Layer API | Comments |
| --- | --- | --- | --- | --- |
| Max Pooling |  |  | MaxPool() |  |
| Average Pooling |  |  | AvgPool() |  |
| Sum Pooling |  |  | SumPool() |  |
| Global Max Pooling |  |  | GlobalMaxPool() |  |
| Global Average Pooling |  |  | GlobalAvgPool() |  |
| Global Sum Pooling |  |  | GlobalSumPool() | A better alternative to Global Average Pooling before Softmax on an MCU |
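The comment on Global Sum Pooling deserves a word of explanation: on fixed-point hardware, the division in an average pooling layer discards fractional bits, while a sum keeps the full accumulation, and the class ordering fed into Softmax is unaffected by the constant scale factor. A small plain-Python illustration (the helper names are hypothetical, not the NNoM kernels):

```python
# Why Global Sum Pooling can beat Global Average Pooling on an MCU:
# integer division in the average throws away fractional precision,
# while the sum preserves it. Softmax/argmax ordering is unchanged by
# a common positive scale, so summing is safe. Illustration only.

def global_avg_pool_int(channels):
    # Integer division, as fixed-point hardware would effectively do.
    return [sum(ch) // len(ch) for ch in channels]

def global_sum_pool(channels):
    # Full accumulation; equals the exact average times a common factor.
    return [sum(ch) for ch in channels]

# Two channels over a flattened 4-element feature map.
features = [[3, 2, 3, 3],   # exact mean 2.75
            [2, 3, 2, 3]]   # exact mean 2.50

print(global_avg_pool_int(features))  # [2, 2]  -- the two channels tie
print(global_sum_pool(features))      # [11, 10] -- channel 0 still wins
```

Here the integer average cannot distinguish the two channels, while the sum preserves the difference that the classifier needs.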

Matrix Operations Layers

| Matrix | HWC | CHW | Layer API | Comments |
| --- | --- | --- | --- | --- |
| Concatenate |  |  | Concat() | Concatenate along any axis |
| Multiplication |  |  | Mult() |  |
| Addition |  |  | Add() |  |
| Subtraction |  |  | Sub() |  |

Dependencies

NNoM uses its local pure-C backend implementation by default, so no special dependency is needed.

Optimization

CMSIS-NN/DSP is an optimized backend for ARM Cortex-M4/7/33/35P. You can select it for up to 5x the performance of the default C backend. NNoM will use the equivalent method in CMSIS-NN whenever the conditions are met.

Please check Porting and optimising Guide for detail.
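Both backends operate on 8-bit fixed-point data (the q7 format used by CMSIS-NN), which is where much of the speed advantage comes from. For background, power-of-2 quantization of this kind can be sketched as follows; this is a simplified illustration with a hypothetical helper name, not the converter's actual code:

```python
import math

def quantize_q7(values):
    """Quantize floats to int8 using a power-of-2 scale (q7-style).

    Chooses the number of fractional bits so the largest magnitude still
    fits in 8 signed bits, then rounds and saturates. Simplified sketch
    only -- not the converter's actual code.
    """
    max_abs = max(abs(v) for v in values)
    int_bits = max(0, math.ceil(math.log2(max_abs))) if max_abs > 0 else 0
    frac_bits = 7 - int_bits            # sign bit + int_bits + frac_bits = 8
    scale = 2 ** frac_bits
    return [max(-128, min(127, round(v * scale))) for v in values], frac_bits

weights = [0.5, -1.2, 0.8, -0.1]
quantized, frac_bits = quantize_q7(weights)
print(quantized, frac_bits)    # [32, -77, 51, -6] with 6 fractional bits
```

Because the scale is a power of two, dequantization on the MCU is a plain bit shift rather than a multiply, which is what makes the q7 kernels cheap.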

Known Issues

The converter does not support implicitly defined activations

The script currently does not support activations defined implicitly inside a layer:

```python
Dense(32, activation="relu")
```

Use an explicit activation layer instead:

```python
Dense(32)
ReLU()
```

Contacts

Jianjia Ma

majianjia@live.com

Citation Required

Please contact us using the details above.
