NeMo: a framework-agnostic toolkit for building AI applications powered by Neural Modules
NVIDIA NeMo
NeMo is a toolkit for creating Conversational AI applications.
The NeMo toolkit makes it possible for researchers to easily compose complex neural network architectures for conversational AI from reusable components - Neural Modules. Neural Modules are conceptual blocks of neural networks that take typed inputs and produce typed outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations.
The toolkit comes with extendable collections of pre-built modules for automatic speech recognition (ASR), natural language processing (NLP) and text synthesis (TTS).
Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale training out to multiple GPUs and multiple nodes. NeMo integrates with NVIDIA Jarvis.
Introduction
- Watch this video for a quick walk-through.
- Documentation (latest released version) and Documentation (master branch)
- Read the NVIDIA Developer Blog to learn how to develop speech recognition models for different languages
- Read the NVIDIA Developer Blog post announcing NeMo
- Read the NVIDIA Developer Blog for example applications
- Read the NVIDIA Developer Blog for the QuartzNet ASR model
- Recommended version to install is 0.10.1, via pip install nemo-toolkit[all]
- Recommended NVIDIA NGC NeMo Toolkit container
- Pretrained models are available on the NVIDIA NGC Model repository
Getting started
The latest stable version of NeMo is 0.10.1 (available via pip).
Requirements
- Python 3.6 or 3.7
- PyTorch 1.4.* with GPU support
- (optional, for best performance) NVIDIA APEX. Install from here: https://github.com/NVIDIA/apex
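As a concrete starting point, here is a minimal install sketch assuming a fresh Python 3.7 virtual environment; a CUDA-enabled PyTorch 1.4 build and APEX still need to be set up per the requirements above, and the environment name is illustrative:

```bash
# Create and activate an isolated environment (assumes python3.7 is on PATH).
python3.7 -m venv nemo-env
source nemo-env/bin/activate

# Install the recommended stable release with all collections (ASR, NLP, TTS).
pip install "nemo-toolkit[all]==0.10.1"
```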
Docker containers
NeMo docker container
You can use NeMo's docker container with all dependencies pre-installed
docker run --runtime=nvidia -it --rm --shm-size=16g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/nemo:v0.10
If you are using the NVIDIA NGC PyTorch container, follow these instructions:
- Pull the docker: docker pull nvcr.io/nvidia/pytorch:20.01-py3
- Run: docker run --gpus all -it --rm -v
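The run command above appears to be cut off after -v. Assuming the intent is to mount a local NeMo checkout into the container, a full invocation might look roughly like the sketch below; the NEMO_SRC path, shared-memory size, and published ports are illustrative choices, not requirements:

```bash
# NEMO_SRC is a hypothetical path to a local clone of the NeMo repository.
NEMO_SRC="$HOME/NeMo"

docker run --gpus all -it --rm \
  -v "$NEMO_SRC":/NeMo \
  --shm-size=16g \
  -p 8888:8888 -p 6006:6006 \
  --ulimit memlock=-1 --ulimit stack=67108864 \
  nvcr.io/nvidia/pytorch:20.01-py3
```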
See examples/start_here to get started with the simplest example.
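For orientation, here is a rough sketch of what composing Neural Modules looks like, modeled on the toy regression example in examples/start_here; the nemo.tutorials module path and the exact signatures are assumptions and should be checked against the installed 0.10.x version:

```python
import nemo

# The Neural Module Factory manages the backend, devices, and training actions.
nf = nemo.core.NeuralModuleFactory()

# Instantiate neural modules: a data layer, a trainable model, and a loss.
# (Module locations and arguments follow the start_here toy example and may differ slightly.)
dl = nemo.tutorials.RealFunctionDataLayer(n=10000, batch_size=128)
fx = nemo.tutorials.TaylorNet(dim=4)
loss = nemo.tutorials.MSELoss()

# Wire typed outputs to typed inputs to describe the computation graph.
x, y = dl()
p = fx(x=x)
lss = loss(predictions=p, target=y)

# Print the training loss to the console while training.
callback = nemo.core.SimpleLossLoggerCallback(
    tensors=[lss],
    print_func=lambda tensors: print(f"Train loss: {tensors[0].item()}"),
)

# Run the training action on the resulting graph.
nf.train(
    [lss],
    callbacks=[callback],
    optimizer="sgd",
    optimization_params={"num_epochs": 3, "lr": 0.0003},
)
```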
Tutorials
- Speech recognition
- Natural language processing
- Speech synthesis
Pre-trained models
| Modality | Model | Trained on |
|---|---|---|
| ASR | Jasper10x5DR_En | LibriSpeech, WSJ, Mozilla Common Voice (en_1488h_2019-12-10), Fisher, Switchboard, and Singapore English National Speech Corpus (Part 1) |
| ASR | QuartzNet15x5En | LibriSpeech, WSJ, Mozilla Common Voice (en_1087h_2019-06-12), Fisher, and Switchboard |
| ASR | QuartzNet15x5Zh | AISHELL-2 Mandarin |
| NLP | BERT base uncased | English Wikipedia and BookCorpus (seq len <= 512) |
| NLP | BERT large uncased | English Wikipedia and BookCorpus (seq len <= 512) |
| TTS | Tacotron2 | LJSpeech |
| TTS | WaveGlow | LJSpeech |
Development
If you'd like to use the master branch and/or develop NeMo, you can run the "reinstall.sh" script.
Documentation (master branch).
Installing from GitHub
If you prefer to use NeMo's latest development version (from GitHub), follow the steps below:
- Clone the repository: git clone https://github.com/NVIDIA/NeMo.git
- Go to the NeMo folder and re-install the toolkit with collections:
./reinstall.sh
Style tests
python setup.py style                 # Checks overall project code style and outputs issues with diff.
python setup.py style --fix           # Tries to fix errors in-place.
python setup.py style --scope=tests   # Operates within a certain scope (dir or file).
**NeMo Test Suite**
NeMo contains a test suite divided into 5 subsets:
- unit: unit tests, i.e. testing a single, well-isolated functionality
- integration: tests checking the elements when integrated into subsystems
- system: tests working at the highest integration level
- acceptance: tests checking whether the developed product/model passes the user-defined acceptance criteria
- docs: tests related to documentation (deselect with '-m "not docs"')
The user can run all the tests locally by simply executing:
pytest
In order to run a subset of tests, one can use the -m argument followed by the subset name, e.g. for the system subset:
pytest -m system
By default, all the tests will be executed on the GPU. There is also an option to run the test suite on the CPU by passing the --cpu command-line argument, e.g.:
pytest -m unit --cpu
Citation
If you are using NeMo, please cite the following publication:
@misc{nemo2019,
  title={NeMo: a toolkit for building AI applications using Neural Modules},
  author={Oleksii Kuchaiev and Jason Li and Huyen Nguyen and Oleksii Hrinchuk and Ryan Leary and Boris Ginsburg and Samuel Kriman and Stanislav Beliaev and Vitaly Lavrukhin and Jack Cook and Patrice Castonguay and Mariya Popova and Jocelyn Huang and Jonathan M. Cohen},
  year={2019},
  eprint={1909.09577},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}