NNabla is a deep learning framework intended for research, development, and production

Neural Network Libraries

Neural Network Libraries is a deep learning framework that is intended to be used for research, development and production. We aim to have it running everywhere: desktop PCs, HPC clusters, embedded devices and production servers.

- Neural Network Libraries - CUDA extension: An extension library of Neural Network Libraries that allows users to speed up computation on CUDA-capable GPUs.
- Neural Network Libraries - Examples: Working examples of Neural Network Libraries, from basic to state-of-the-art.
- Neural Network Libraries - C Runtime: Runtime library for inference of neural networks created with Neural Network Libraries.
- Neural Network Console: A Windows GUI app for neural network development.

Installation

Installing Neural Network Libraries is easy:

pip install nnabla

This installs the CPU version of Neural Network Libraries. GPU acceleration can be added by installing the CUDA extension with the following command.

pip install nnabla-ext-cuda101

The above command is for CUDA Toolkit version 10.1.

For other versions:
- pip install nnabla-ext-cuda100 for CUDA 10.0
- pip install nnabla-ext-cuda90 for CUDA 9.0
- pip install nnabla-ext-cuda80 for CUDA 8.0

CUDA versions 9.1 and 9.2 are not supported at the moment.
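
If you are unsure which CUDA Toolkit is installed (this check is an addition for convenience, not part of the original instructions), the toolkit itself reports its version:

nvcc --version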

For more details, see the installation section of the documentation.

Building from Source

See Build Manuals.

Running on Docker

For details on running on Docker, see the installation section of the documentation.

Features

Easy, flexible and expressive

The Python API, built on the Neural Network Libraries C++11 core, gives you flexibility and productivity. For example, a two-layer neural network with a classification loss can be defined in the following 5 lines of code (hyperparameters are enclosed in <>).

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

x = nn.Variable(<input_shape>)
t = nn.Variable(<target_shape>)
h = F.tanh(PF.affine(x, <hidden_size>, name='affine1'))
y = PF.affine(h, <target_size>, name='affine2')
loss = F.mean(F.softmax_cross_entropy(y, t))
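
As a concrete illustration (the shapes and sizes below are assumptions for demonstration, not values prescribed by the library), the same network could be instantiated for a batch of 32 flattened 28x28 inputs with 10 target classes:

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

# Assumed shapes: batch of 32 flattened 28x28 inputs, integer class labels
x = nn.Variable((32, 784))                       # input batch
t = nn.Variable((32, 1))                         # labels in [0, 10)
h = F.tanh(PF.affine(x, 128, name='affine1'))    # hidden size 128 (assumption)
y = PF.affine(h, 10, name='affine2')             # 10 output classes
loss = F.mean(F.softmax_cross_entropy(y, t))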

Training can be done by:

import nnabla.solvers as S

# Create a solver (parameter updater)
solver = S.Adam()
solver.set_parameters(nn.get_parameters())

# Training iteration
for n in range(<num_training_iterations>):
    # Setting data from any data source
    x.d = <set data>
    t.d = <set label>
    # Initialize gradients
    solver.zero_grad()
    # Forward and backward execution
    loss.forward()
    loss.backward()
    # Update parameters by computed gradients
    solver.update()
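
A minimal runnable sketch of this loop, feeding randomly generated NumPy arrays into the concrete network above (the synthetic data, batch size, and iteration count are illustrative assumptions only):

import numpy as np
import nnabla.solvers as S

solver = S.Adam()
solver.set_parameters(nn.get_parameters())

for n in range(100):
    # Random inputs and labels stand in for a real data source
    x.d = np.random.randn(32, 784).astype(np.float32)
    t.d = np.random.randint(0, 10, size=(32, 1))
    solver.zero_grad()
    loss.forward()
    loss.backward()
    solver.update()
    if n % 10 == 0:
        print(n, loss.d)   # the loss value is available after forward()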

The dynamic computation graph enables flexible runtime network construction. Neural Network Libraries supports both the static and the dynamic graph paradigm through the same API.

import numpy as np

x.d = <set data>
t.d = <set label>
drop_depth = np.random.rand(<num_stochastic_layers>) < <layer_drop_ratio>
with nn.auto_forward():
    h = F.relu(PF.convolution(x, <hidden_size>, (3, 3), pad=(1, 1), name='conv0'))
    for i in range(<num_stochastic_layers>):
        if drop_depth[i]:
            continue  # Stochastically drop a layer
        h2 = F.relu(PF.convolution(x, <hidden_size>, (3, 3), pad=(1, 1), name='conv%d' % (i + 1)))
        h = F.add2(h, h2)
    y = PF.affine(h, <target_size>, name='classification')
    loss = F.mean(F.softmax_cross_entropy(y, t))

# Backward computation (can also be done in dynamically executed graph)
loss.backward()
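
A minimal sketch of what nn.auto_forward() means in practice (the shapes below are arbitrary assumptions): inside the context, each function executes as soon as it is called, so intermediate results are available immediately without an explicit forward() call.

import numpy as np
import nnabla as nn
import nnabla.functions as F

x = nn.Variable((2, 3))
x.d = np.random.randn(2, 3).astype(np.float32)

with nn.auto_forward():
    y = F.relu(x)   # executed eagerly inside the context

print(y.d)          # the result is already populated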

Command line utility

Neural Network Libraries provides a command line utility nnabla_cli for easier use of NNL.

nnabla_cli provides the following functionality:

- Training, evaluation, or inference with an NNP file.
- Dataset and parameter manipulation.
- File format conversion: from ONNX to NNP and from NNP to ONNX, and from ONNX or NNP to NNB or C source code (an example invocation is shown below).
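
As an illustration of the file format converter (the file names are placeholders; see the documentation for the full set of subcommands and options), conversion between NNP and ONNX is invoked roughly as follows:

nnabla_cli convert input.nnp output.onnx
nnabla_cli convert input.onnx output.nnp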

For more details, see the documentation.

Portable and multi-platform

- The Python API can be used on Linux and Windows.
- Most of the library code is written in C++11, deployable to embedded devices.

Extensible

- Easy to add new modules like neural network operators and optimizers.
- The library allows developers to add specialized implementations (e.g., for FPGA, ...). For example, we provide a CUDA backend as an extension, which gives a speed-up through GPU-accelerated computation.

Efficient

- High speed on a single CUDA GPU
- Memory optimization engine
- Multiple GPU support

Documentation

https://nnabla.readthedocs.org

Getting started

A number of Jupyter notebook tutorials can be found in the tutorial folder. We recommend starting with by_examples.ipynb for a first working example in Neural Network Libraries and python_api.ipynb for an introduction to the Neural Network Libraries API. We also provide some more sophisticated examples in the nnabla-examples repository. C++ API examples are available in examples/cpp.

Contribution guide

Deep learning technology is progressing rapidly, and researchers and developers often want to add their own custom features to a deep learning framework. NNabla is a good fit for this: the architecture of Neural Network Libraries is clean and quite simple, and new features can be added very easily with the help of our code template generating system. See the following link for details.

Contribution guide
