yannpp: a C++ framework that helps you understand how deep neural networks work

Reader submission · 834 · 2022-11-05

yannpp

This is an educational effort to help understand how deep neural networks work.

In order to achieve this goal I prepared a small number of selected educational materials and a heavily documented pure C++ implementation of a CNN that classifies MNIST digits.

Understand

In order to fully understand what is going on, I would recommend doing the following:

- read Michael Nielsen's great online book to understand all the basics and do the exercises (at least the derivation of BP1-BP4)
- read the "Backpropagation In Convolutional Neural Networks" pdf in docs/ to understand how to prove the backpropagation equations for convolutional layers
- read the "A guide to convolution arithmetic" pdf in docs/ to understand what padding is and how to convolve an input with a filter

After this you will be able to understand the code in the repo.
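As a taste of what the convolution arithmetic guide covers, here is a hypothetical minimal sketch (not taken from the repo) of a "same" zero-padded 1-D convolution. Note that, as in most CNN code, no kernel flip is performed, so strictly speaking this is cross-correlation:

```cpp
#include <cstddef>
#include <vector>

// "same" convolution: conceptually zero-pad the input so the
// output has the same length as the input
std::vector<double> conv1d_same(const std::vector<double> &input,
                                const std::vector<double> &kernel) {
    size_t pad = kernel.size() / 2; // assumes an odd-sized kernel
    std::vector<double> out(input.size(), 0.0);
    for (size_t i = 0; i < input.size(); i++) {
        for (size_t k = 0; k < kernel.size(); k++) {
            // index into the conceptually padded input
            long j = static_cast<long>(i + k) - static_cast<long>(pad);
            if (j >= 0 && j < static_cast<long>(input.size())) {
                out[i] += input[j] * kernel[k];
            }
        }
    }
    return out;
}
```

For input {1, 2, 3} and kernel {1, 1, 1} this produces {3, 6, 5}: the border outputs sum fewer real elements because the missing neighbors are zeros from the padding.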

Get in

The C++ code in the repo is simple enough to build on Windows/Mac/Linux. You can use CMake to compile it (check out .travis.yml or appveyor.yml to see how it's done on Linux or Windows).

In order to use the MNIST data you will need to unzip the archives in the data/ directory first. The compiled executable accepts the path to this data/ directory as its first command line argument.
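A possible build-and-run sequence might look like the following. The build directory layout and the executable name are assumptions for illustration; check the repo's CMake files and CI configs for the real targets:

```shell
# clone the repo and unpack the MNIST archives first
git clone https://github.com/ribtoks/yannpp.git
cd yannpp
unzip 'data/*.zip' -d data/

# out-of-source CMake build
mkdir build && cd build
cmake .. && cmake --build .

# run, passing the data/ directory as the first argument
# (the binary name here is hypothetical)
./mnist_example ../data/
```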

See

The main learning loop (as defined in network2_t::backpropagate()) looks like this:

// feedforward input
for (size_t i = 0; i < layers_size; i++) {
    input = layers_[i]->feedforward(input);
}

// backpropagate error
array3d_t error(result);
for (size_t i = layers_size; i-- > 0;) {
    error = layers_[i]->backpropagate(error);
}

Because of this simplicity, the most interesting things are located in the src/layers/ directory, which contains the implementations of those feedforward() and backpropagate() methods for each layer.
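The chaining idea behind that loop can be sketched in isolation. The toy code below is not yannpp's actual interface (real layers operate on array3d_t, and scale_layer_t is invented here); plain doubles just make the forward/backward chaining visible:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// hypothetical sketch of the layer contract used by the loop above
struct layer_t {
    virtual ~layer_t() = default;
    virtual double feedforward(double input) = 0;
    virtual double backpropagate(double error) = 0;
};

// toy layer: scales activations going forward and gradients
// going back, the way a linear layer's weight would
struct scale_layer_t : layer_t {
    explicit scale_layer_t(double w): w_(w) {}
    double feedforward(double input) override { return w_ * input; }
    double backpropagate(double error) override { return w_ * error; }
    double w_;
};

double run_pass(std::vector<std::unique_ptr<layer_t>> &layers, double input) {
    // feedforward the input through every layer in order
    for (size_t i = 0; i < layers.size(); i++) {
        input = layers[i]->feedforward(input);
    }
    // backpropagate the error through the layers in reverse order
    double error = input; // stand-in for the loss gradient
    for (size_t i = layers.size(); i-- > 0;) {
        error = layers[i]->backpropagate(error);
    }
    return error;
}
```

With two layers of weights 2 and 3 and input 1, the forward pass yields 6 and the backward pass multiplies by 3 and then 2 again, returning 36.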

This codebase contains its own greatly simplified ndarray, similar to Numpy's, called array3d_t. The most useful feature of the array is the ability to slice parts of its data as subarrays.
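The core idea behind such an array (flat row-major storage plus index arithmetic) can be sketched as follows. This is not array3d_t itself, just an illustrative stand-in whose slice() copies one depth-slice rather than returning a view:

```cpp
#include <cstddef>
#include <vector>

// minimal Numpy-like 3-D array: shape plus flat row-major storage
struct array3d {
    size_t d0, d1, d2;          // shape
    std::vector<double> data;   // flat storage, zero-initialized

    array3d(size_t a, size_t b, size_t c)
        : d0(a), d1(b), d2(c), data(a * b * c, 0.0) {}

    // map a 3-D index to the flat offset (i * d1 + j) * d2 + k
    double &at(size_t i, size_t j, size_t k) {
        return data[(i * d1 + j) * d2 + k];
    }

    // copy out one depth-slice (i fixed) as a d1 x d2 subarray
    std::vector<double> slice(size_t i) const {
        return std::vector<double>(data.begin() + i * d1 * d2,
                                   data.begin() + (i + 1) * d1 * d2);
    }
};
```

A slicing ndarray is what lets convolution code address a filter-sized window of the input without copying indices around by hand.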

network1_t, as used in examples/mnist_simple.cpp, is an all-in-one implementation of a network with fully-connected layers, while network2_t is a more "abstract" implementation that uses arbitrary layers in the other examples.

Do

The codebase should encourage you to experiment. For example, the examples/mnist_deeplearning.cpp file specifically contains lots of experimental code (e.g. reducing the size of the input to make it easier to experiment with the network topology, commented-out layers in the network itself, etc.) that can show you how to experiment. Experimentation is required to select hyperparameters, to see if your network converges, etc.

Cope

Feel free to say thank you if this was useful. Also, this code (like any other) may contain bugs or other problems - all contributions are highly welcome.

- Fork the yannpp repository on GitHub
- Clone your fork locally
- Configure the upstream repo (git remote add upstream git@github.com:ribtoks/yannpp.git)
- Create a local branch (git checkout -b your_feature)
- Work on your feature
- Push the branch to GitHub (git push origin your_feature)
- Send a pull request on GitHub

Get out

There are many other similar efforts on GitHub. Their common problems are code that is hard to read or code with too much magic inside (mainly related to Python). Here's a short list of similar efforts whose code is very easy to understand:

- Numpy-CNN
- zeta-learn
- Machine Learning Numpy
