Mayo: a deep neural network framework for hardware acceleration

Mayo

Mayo is a deep learning framework developed with hardware acceleration in mind. It enables rapid deep neural network model design, and common network compression workflows, such as fine- and coarse-grained pruning, network slimming, and quantization with various arithmetics, can easily be combined simply by writing YAML model description files (see the publications listed below for examples). Additionally, Mayo can accelerate hyperparameter exploration for these models with automated hyperparameter optimization. With minimal manual intervention, our automated optimization achieves state-of-the-art compression rates (refer to the results section).

To test and train the models from our open-source research, please follow the relevant instructions:

- Dynamic Channel Pruning: Feature Boosting and Suppression (ICLR 2019) [arXiv] [cite] [Instructions]
- Efficient and Effective Quantization for Sparse DNNs (Preprint) [arXiv] [Instructions (coming soon)]

Installation

Prerequisites

Before setting up Mayo, you will need to have Git, Git-LFS, Python 3.6.5 or above, and TensorFlow 1.11 or above installed.

Setting up Mayo

Mayo can be installed by checking out our repository and installing the necessary Python packages:

$ git clone https://github.com/admk/mayo.git
$ cd mayo
$ pip3 install -r requirements.txt

Adding TFRecord dataset files

Mayo accepts standard TFRecord dataset files. You can use the instructions outlined here to generate TFRecord files for the ImageNet, CIFAR-10 and MNIST datasets. The newly generated TFRecord files can then be placed in the [mayo]/datasets/{imagenet,cifar10,mnist} folders to be used by Mayo. It may be necessary to ensure that dataset.path.{train,validate} in [mayo]/datasets/{imagenet,cifar10,mnist}.yaml point to the correct locations.
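For example, if your MNIST TFRecord files live somewhere other than the default folder, the relevant part of the dataset description might look like the sketch below; the file names here are assumptions for illustration, and the shipped datasets/mnist.yaml is the authoritative reference:

    # Excerpt in the spirit of datasets/mnist.yaml; the file names
    # below are hypothetical, check the shipped file for the defaults.
    dataset:
      path:
        train: datasets/mnist/train.tfrecord           # training examples
        validate: datasets/mnist/validation.tfrecord   # validation examples

Alternatively, the same leaves can be overridden on the command line with key-value pairs such as dataset.path.validate=/data/mnist/validation.tfrecord, using the syntax shown in the testing step below.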

Testing Mayo

Run a simple LeNet-5 validation with the MNIST dataset using:

$ ./my \
    models/lenet5.yaml \
    datasets/mnist.yaml \
    system.checkpoint.load=pretrained \
    eval

You should expect the final top-1 and top-5 accuracies to be 99.57% and 100% respectively.

How to start writing your neural network application in YAML

For starters, you can check out models/lenet5.yaml for an example of a neural network model description, datasets/mnist.yaml for the MNIST dataset description, and trainers/lenet5.yaml for a simple training configuration. For more tutorials, please refer to this Wiki page.
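To give a flavour of the format before you open those files, here is an illustrative sketch of a tiny model description. The layer types and key names below are assumptions made for illustration only; treat models/lenet5.yaml as the authoritative reference for the actual schema:

    # Hypothetical Mayo-style model description; the key names are
    # illustrative, not the verified schema (consult models/lenet5.yaml).
    model:
      name: tinynet
      layers:
        conv1: {type: convolution, kernel_size: 5, num_outputs: 20}
        pool1: {type: max_pool, kernel_size: 2, stride: 2}
        logits: {type: fully_connected, num_outputs: 10}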

The Mayo command line interface

The Mayo command line interface works differently from most deep learning frameworks and tools, because we deliberately decouple models from datasets, trainers and compression techniques for maximum flexibility and reusability. The testing instructions above give a glimpse of how it works; here we break the execution down further:

$ ./my \                                # invokes Mayo
    models/lenet5.yaml \                # imports the LeNet-5 network description
    datasets/mnist.yaml \               # imports the MNIST dataset description
    system.checkpoint.load=pretrained \ # specifies that we load the pretrained checkpoint
    eval                                # starts model evaluation

The Mayo command line interface accepts a sequence of actions separated by spaces, which are evaluated sequentially. Each YAML import and each key-value pair update recursively merges all mappings in the YAML file into the global configuration. So right before eval, we have a complete application description specifying the model, dataset and checkpoint to use.
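To illustrate the recursive merge with a small, invented example: suppose an earlier import produced the nested mapping below, and a later key-value pair names only one leaf. Sibling keys survive the merge untouched (the save key here is invented for illustration):

    # Configuration after an earlier YAML import (save is hypothetical):
    system:
      checkpoint:
        load: null
        save: latest

    # After the command-line pair system.checkpoint.load=pretrained,
    # only the named leaf is replaced:
    system:
      checkpoint:
        load: pretrained
        save: latest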

Why so many YAML files?

In Mayo, we decouple the description of each neural network application into three separate components, each written in YAML: the dataset, the model and the trainer. The reason for decoupling is to encourage reuse; for instance, a ResNet-50 model can be used not only for ImageNet classification, but also for object detection on COCO, or even for customized tasks.

Furthermore, in network compression we can use many fine- and coarse-grained pruning techniques in conjunction with a large range of quantization methods, optionally even on top of low-rank approximations of weight tensors, on a wide variety of neural networks, each of which could use a different dataset and be trained differently. This creates a vast number of possible combinations. By decoupling compression techniques from the neural network, from the dataset and from the training methodology, any combination can be achieved by importing the respective YAML descriptions, without having to write a monolithic description file for each one.
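Concretely, switching to a different model-dataset combination is then just a matter of importing different files. A sketch, where models/resnet50.yaml is a hypothetical model description (datasets/imagenet.yaml is one of the shipped dataset files):

$ ./my \
    models/resnet50.yaml \
    datasets/imagenet.yaml \
    system.checkpoint.load=pretrained \
    eval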

Cite us

Dynamic Channel Pruning: Feature Boosting and Suppression

@inproceedings{gao2018dynamic,
    title = {Dynamic Channel Pruning: Feature Boosting and Suppression},
    author = {Xitong Gao and Yiren Zhao and Łukasz Dudziak and Robert Mullins and Cheng-zhong Xu},
    booktitle = {International Conference on Learning Representations},
    year = {2019},
    url = {https://openreview.net/forum?id=BJxh2j0qYm},
}

Mayo

@inproceedings{Zhao2018mayo,
    author = {Zhao, Yiren and Gao, Xitong and Mullins, Robert and Xu, Chengzhong},
    title = {Mayo: A Framework for Auto-generating Hardware Friendly Deep Neural Networks},
    booktitle = {Proceedings of the 2nd International Workshop on Embedded and Mobile Deep Learning},
    series = {EMDL'18},
    year = {2018},
    url = {http://doi.acm.org/10.1145/3212725.3212726},
}
