PyTorch Deep Learning Project Template (Best Practices)

Community contribution · 2022-10-28


PyTorch Project Template

A simple, well-designed structure is essential for any deep learning project. After a lot of practice with, and contributions to, PyTorch projects, here is a PyTorch project template that combines simplicity, folder-structure best practices, and good OOP design. The main idea is that much of what you do is the same every time you start a PyTorch project, so wrapping up all this shared code lets you change only the core idea each time you start a new one.

So here is a simple PyTorch template that helps you get to your main project faster and focus on its core (model architecture, training flow, etc.).

To reduce repeated code, we recommend using a high-level library. You can write your own, or use a third-party library such as ignite, fastai, or mmcv. These let you write compact but full-featured training loops in a few lines of code. Here we use ignite to train on MNIST as an example.

Requirements

- yacs (Yet Another Configuration System)
- PyTorch (an open source deep learning platform)
- ignite (a high-level library to help with training neural networks in PyTorch)

Table Of Contents

- In a Nutshell
- In Details
- Future Work
- Contributing
- Acknowledgments

In a Nutshell

In a nutshell, here is how to use this template. For example, assume you want to implement ResNet-18 to train on MNIST; you would do the following:

In the modeling folder, create a Python file named whatever you like; here we name it example_model.py. In the modeling/__init__.py file, build a function named build_model to construct your model:

```python
from .example_model import ResNet18


def build_model(cfg):
    model = ResNet18(cfg.MODEL.NUM_CLASSES)
    return model
```

In the engine folder, create a model trainer function and an inference function. In the trainer function, write the logic of the training process; you can use a third-party library to reduce the repeated code.

```python
# trainer
def do_train(cfg, model, train_loader, val_loader, optimizer, scheduler, loss_fn):
    """
    Implement the logic of an epoch:
    - loop over the number of iterations in the config and call the train step
    - add any summaries you want using the summary writer
    """
    pass


# inference
def inference(cfg, model, val_loader):
    """
    Implement the logic of inference:
    - run the model on the validation loader
    - return any metrics you need to summarize
    """
    pass
```

In the tools folder, create train.py. In this file, you need to get instances of the following objects: model, data loaders, optimizer, and config:

```python
# create an instance of the model you want
model = build_model(cfg)

# create your data loaders
train_loader = make_data_loader(cfg, is_train=True)
val_loader = make_data_loader(cfg, is_train=False)

# create your model optimizer
optimizer = make_optimizer(cfg, model)
```

Pass all these objects to the do_train function and start your training:

```python
# here you train your model
do_train(cfg, model, train_loader, val_loader, optimizer, None, F.cross_entropy)
```

You will find a template file and a simple example in the modeling and engine folders that show you how to try your first model.

In Details

```
├── config
│   └── defaults.py          - the default config file
│
├── configs
│   └── train_mnist_softmax.yml - the specific config file for a specific model or dataset
│
├── data
│   ├── datasets             - the datasets folder, responsible for all data handling
│   ├── transforms           - the data preprocessing folder, responsible for all data augmentation
│   ├── build.py             - the file that builds the dataloader
│   └── collate_batch.py     - the file responsible for merging a list of samples into a mini-batch
│
├── engine
│   ├── trainer.py           - this file contains the train loops
│   └── inference.py         - this file contains the inference process
│
├── layers                   - this folder contains any custom layers of your project
│   └── conv_layer.py
│
├── modeling                 - this folder contains any model of your project
│   └── example_model.py
│
├── solver                   - this folder contains the optimizer of your project
│   ├── build.py
│   └── lr_scheduler.py
│
├── tools                    - the train/test scripts of your project
│   └── train_net.py         - an example train script, responsible for the whole pipeline
│
├── utils
│   ├── logger.py
│   └── any_other_utils_you_need
│
└── tests                    - this folder contains the unit tests of your project
    └── test_data_sampler.py
```
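Since yacs is listed as a requirement, config/defaults.py would typically look something like the sketch below. The specific field names here are illustrative (only cfg.MODEL.NUM_CLASSES appears earlier in this guide); your defaults would mirror whatever your model and solver actually read.

```python
# config/defaults.py - a hedged sketch of a yacs default config
from yacs.config import CfgNode as CN

_C = CN()

_C.MODEL = CN()
_C.MODEL.NUM_CLASSES = 10  # used by build_model(cfg) above

# Illustrative solver fields; adapt to your project.
_C.SOLVER = CN()
_C.SOLVER.BASE_LR = 0.01
_C.SOLVER.MAX_EPOCHS = 10


def get_cfg_defaults():
    """Return a clone so callers cannot mutate the defaults."""
    return _C.clone()
```

A per-experiment file such as configs/train_mnist_softmax.yml then only overrides the fields it changes, merged in with `cfg.merge_from_file(path)`.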

Future Work

Contributing

Any kind of enhancement or contribution is welcomed.

Acknowledgments
