A fast, scalable, easy-to-use deep learning framework in Python


DISCONTINUATION OF PROJECT. This project will no longer be maintained by Intel. Intel will not provide or guarantee development of or support for this project, including but not limited to, maintenance, bug fixes, new releases or updates. Patches to this project are no longer accepted by Intel. If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the community, please create your own fork of the project.

neon

neon is Intel's reference deep learning framework committed to best performance on all hardware. Designed for ease-of-use and extensibility.

- Tutorials and iPython notebooks to get users started with using neon for deep learning.
- Support for commonly used layers: convolution, RNN, LSTM, GRU, BatchNorm, and more.
- Model Zoo contains pre-trained weights and example scripts for state-of-the-art models, including VGG, reinforcement learning, deep residual networks, image captioning, sentiment analysis, and more.
- Swappable hardware backends: write code once and then deploy on CPUs, GPUs, or Nervana hardware.
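To give a flavor of the API, here is a condensed sketch of a two-layer MLP on MNIST, loosely following examples/mnist_mlp.py; the module paths match the documented API, but verify them against your installed release:

from neon.callbacks.callbacks import Callbacks
from neon.data import MNIST
from neon.initializers import Gaussian
from neon.layers import Affine, GeneralizedCost
from neon.models import Model
from neon.optimizers import GradientDescentMomentum
from neon.transforms import CrossEntropyBinary, Logistic, Rectlin
from neon.util.argparser import NeonArgparser

# parse neon's standard command-line arguments (backend, batch size, epochs, ...)
parser = NeonArgparser(__doc__)
args = parser.parse_args()

# MNIST ships with neon; train/valid splits come back as data iterators
dataset = MNIST(path=args.data_dir)
train_set, valid_set = dataset.train_iter, dataset.valid_iter

# two fully connected layers with Gaussian-initialized weights
init = Gaussian(loc=0.0, scale=0.01)
mlp = Model(layers=[Affine(nout=100, init=init, activation=Rectlin()),
                    Affine(nout=10, init=init, activation=Logistic(shortcut=True))])

cost = GeneralizedCost(costfunc=CrossEntropyBinary())
optimizer = GradientDescentMomentum(0.1, momentum_coef=0.9)
callbacks = Callbacks(mlp, eval_set=valid_set, **args.callback_args)

mlp.fit(train_set, optimizer=optimizer, num_epochs=args.epochs, cost=cost, callbacks=callbacks)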

For fast iteration and model exploration, neon has the fastest performance among deep learning libraries (2x speed of cuDNNv4, see benchmarks).

- 2.5s/macrobatch (3072 images) on AlexNet on Titan X (full run on 1 GPU ~ 26 hrs)
- Training VGG with 16-bit floating point on 1 Titan X takes ~10 days (original paper: 4 GPUs for 2-3 weeks)

We use neon internally at Intel Nervana to solve our customers' problems across many domains. We are hiring across several roles. Apply here!

See the new features in our latest release. We want to highlight that neon v2.0.0+ has been optimized for much better performance on CPUs by enabling Intel Math Kernel Library (MKL). The DNN (Deep Neural Networks) component of MKL that is used by neon is provided free of charge and downloaded automatically as part of the neon installation.

Quick Install

Local install and dependencies

On a Mac OSX or Linux machine, enter the following to download and install neon (conda users see the guide), and use it to train your first multi-layer perceptron. To force a python2 or python3 install, replace make below with either make python2 or make python3.

git clone https://github.com/NervanaSystems/neon.git
cd neon
make
. .venv/bin/activate

Starting after neon v2.2.0, the master branch of neon will be updated weekly with work-in-progress toward the next release. Check out a release tag (e.g., "git checkout v2.2.0") for a stable release, or simply check out the "latest" release tag to get the latest stable release (i.e., "git checkout latest").
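After activating the virtualenv, a quick sanity check is to import the package and print its version (assuming the release exposes a top-level __version__, as recent ones do):

python -c "import neon; print(neon.__version__)"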

Install via pypi

Starting with version 2.4.0, we re-enabled pip install. neon can be installed using the package name nervananeon.

pip install nervananeon

Note that aeon needs to be installed separately. The latest release, v2.6.0, uses aeon v1.3.0.
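For reference, here is a sketch of the source build that neon's installation docs describe for aeon (it assumes git, cmake, and a C++ toolchain are available; treat the aeon repository's README as authoritative):

git clone https://github.com/NervanaSystems/aeon.git
cd aeon
mkdir -p build && cd build
cmake .. && pip install .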

Warning

Between neon v2.1.0 and v2.2.0, the aeon manifest file format changed. When updating from neon < v2.2.0, manifests have to be recreated using the ingest scripts (in the examples folder) or updated using this script.

Use a script to run an example

python examples/mnist_mlp.py

Selecting a backend engine from the command line

The gpu backend is selected by default, so if a compatible GPU resource is found on the system, the above command is equivalent to:

python examples/mnist_mlp.py -b gpu

As of neon v2.1.0, when no GPU is available, the optimized CPU (MKL) backend is selected by default, which means the above command is equivalent to:

python examples/mnist_mlp.py -b mkl

If you are interested in comparing the default mkl backend with the non-optimized CPU backend, use the following command:

python examples/mnist_mlp.py -b cpu
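The backend can also be selected programmatically. A minimal sketch using gen_backend from neon.backends (a documented entry point; check your installed version for the exact keyword arguments):

from neon.backends import gen_backend

# choose 'mkl', 'cpu', or 'gpu'; the batch size is fixed per backend instance
be = gen_backend(backend='mkl', batch_size=128)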

Use a yaml file to run an example

Alternatively, a yaml file may be used to run an example.

neon examples/mnist_mlp.yaml

To select a specific backend in a yaml file, add or modify a line that contains backend: mkl to enable mkl backend, or backend: cpu to enable cpu backend. The gpu backend is selected by default if a GPU is available.
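For example, this illustrative excerpt forces the MKL backend; the key is real, but everything else in a full spec (model, cost, and so on) is omitted here, so see examples/mnist_mlp.yaml for the complete structure:

backend: mkl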

Recommended Settings for neon with MKL on Intel Architectures

The Intel Math Kernel Library takes advantage of the parallelization and vectorization capabilities of Intel Xeon and Xeon Phi systems. When hyperthreading is enabled on the system, we recommend the following KMP_AFFINITY setting to make sure parallel threads are 1:1 mapped to the available physical cores.

export OMP_NUM_THREADS=<number of physical cores>
export KMP_AFFINITY=compact,1,0,granularity=fine

or

export OMP_NUM_THREADS=<number of physical cores>
export KMP_AFFINITY=verbose,granularity=fine,proclist=[0-<number of physical cores - 1>],explicit
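One way to fill in the physical-core count on Linux is to count unique core/socket pairs; a sketch assuming lscpu is available:

# unique (core, socket) pairs among online CPUs = physical cores
export OMP_NUM_THREADS=$(lscpu -b -p=Core,Socket | grep -v '^#' | sort -u | wc -l)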

For more information about KMP_AFFINITY, please check here. We encourage users to experiment and establish their own best performance settings.

Documentation

The complete documentation for neon is available here. Some useful starting points are:

- Tutorials for neon
- Overview of the neon workflow
- API documentation
- Resources for neon and deep learning

Support

For any bugs or feature requests please:

- Search the open and closed issues list to see if we're already working on what you have uncovered.
- Check that your issue/request hasn't already been addressed in our Frequently Asked Questions (FAQ) or the neon-users Google group.
- File a new issue or submit a new pull request if you have some code you'd like to contribute.

For other questions and discussions, please post a message to the neon-users Google group.

License

We are releasing neon under an open source Apache 2.0 License. We welcome you to contact us with your use cases.
