tiny-cnn: a deep learning framework in C++11

The project may be abandoned since the maintainer(s) are just looking to move on. In case anyone is interested in continuing the project, let us know so that we can discuss next steps.

Please visit: https://groups.google.com/forum/#!forum/tiny-dnn-dev

tiny-dnn is a C++14 implementation of deep learning. It is suitable for deep learning on limited computational resources, embedded systems and IoT devices.

Table of contents

- Features
- Comparison with other libraries
- Supported networks
- Dependencies
- Build
- Examples
- Contributing
- References
- License
- Gitter rooms

Check out the documentation for more info.

What's New

- 2016/11/30 v1.0.0a3 is released!
- 2016/9/14 tiny-dnn v1.0.0alpha is released!
- 2016/8/7 tiny-dnn has moved to an organization account and been renamed to tiny-dnn :)
- 2016/7/27 tiny-dnn v0.1.1 released!

Features

- Reasonably fast, without GPU:
  - With TBB threading and SSE/AVX vectorization
  - 98.8% accuracy on MNIST in 13 minutes of training (@Core i7-3520M)
- Portable & header-only:
  - Runs anywhere as long as you have a compiler which supports C++14
  - Just include tiny_dnn.h and write your model in C++. There is nothing to install.
- Easy to integrate with real applications:
  - No output to stdout/stderr
  - A constant throughput (simple parallelization model, no garbage collection)
  - Works without throwing exceptions
  - Can import Caffe models
- Simply implemented:
  - A good library for learning neural networks

Comparison with other libraries

Please see the wiki page.

Supported networks

layer-types

- core
  - fully connected
  - dropout
  - linear operation
  - zero padding
  - power
- convolution
  - convolutional
  - average pooling
  - max pooling
  - deconvolutional
  - average unpooling
  - max unpooling
- normalization
  - contrast normalization (only forward pass)
  - batch normalization
- split/merge
  - concat
  - slice
  - elementwise-add

activation functions

- tanh
- asinh
- sigmoid
- softmax
- softplus
- softsign
- rectified linear (relu)
- leaky relu
- identity
- scaled tanh
- exponential linear units (elu)
- scaled exponential linear units (selu)
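Activations are ordinary layers, so any of the functions above can be streamed into a model with operator<<, the same way tanh() and sigmoid() are used in the Examples section. A minimal sketch, assuming that relu() and softmax() follow the same layer-alias pattern as those examples (layer sizes here are arbitrary and only for illustration):

```cpp
#include "tiny_dnn/tiny_dnn.h"

using namespace tiny_dnn;
using namespace tiny_dnn::activation;
using namespace tiny_dnn::layers;

// Sketch: swapping activations is just a matter of streaming a different
// activation layer between the weight layers. Sizes are illustrative only.
void construct_relu_mlp() {
  network<sequential> net;
  net << fc(32 * 32, 100) << relu()      // hidden layer with relu
      << fc(100, 10)      << softmax();  // class scores with softmax
}
```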

loss functions

- cross-entropy
- mean squared error
- mean absolute error
- mean absolute error with epsilon range
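The loss is not part of the network definition; it is selected when you call train. A minimal sketch, assuming a network and MNIST-style data prepared as in the Examples section, and choosing cross_entropy rather than mse:

```cpp
#include "tiny_dnn/tiny_dnn.h"

using namespace tiny_dnn;

// Sketch: the loss is chosen as the template argument of train().
// net, images and labels are assumed to be prepared as in the
// Examples section; cross_entropy could be swapped for mse, etc.
void train_with_cross_entropy(network<sequential> &net,
                              const std::vector<vec_t> &images,
                              const std::vector<label_t> &labels) {
  adagrad optimizer;
  net.train<cross_entropy>(optimizer, images, labels,
                           30 /* minibatch */, 50 /* epochs */);
}
```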

optimization algorithms

- stochastic gradient descent (with/without L2 normalization)
- momentum and Nesterov momentum
- adagrad
- rmsprop
- adam
- adamax
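Optimizers are plain objects passed to train, so their hyper-parameters can be adjusted before training starts. A minimal sketch, assuming that adam exposes its learning rate as an alpha member (check the optimizer headers of your version for the exact field names):

```cpp
#include "tiny_dnn/tiny_dnn.h"

using namespace tiny_dnn;

// Sketch: configure an optimizer before handing it to train().
// The alpha member is assumed to be the learning rate.
void train_with_adam(network<sequential> &net,
                     const std::vector<vec_t> &images,
                     const std::vector<label_t> &labels) {
  adam optimizer;
  optimizer.alpha = 0.001f;  // set the learning rate explicitly (field name assumed)
  net.train<mse>(optimizer, images, labels, 30, 50);
}
```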

Dependencies

Nothing. All you need is a C++14 compiler (gcc 4.9+, clang 3.6+ or VS 2015+).

Build

tiny-dnn is header-only, so there's nothing to build. If you want to build the sample programs or unit tests, you need to install CMake and type the following commands:

cmake . -DBUILD_EXAMPLES=ON
make

Then change to the examples directory and run the executables.

If you would like to use an IDE like Visual Studio or Xcode, you can also use cmake to generate the corresponding project files:

cmake . -G "Xcode"            # for Xcode users
cmake . -G "NMake Makefiles"  # for Windows Visual Studio users

Then open the .sln file in Visual Studio and build (on Windows/MSVC), or type the make command (on Linux/Mac/Windows-MinGW).

Some cmake options are available:

| options | description | default | additional requirements to use |
|---|---|---|---|
| USE_TBB | Use Intel TBB for parallelization | OFF [1] | Intel TBB |
| USE_OMP | Use OpenMP for parallelization | OFF [1] | OpenMP Compiler |
| USE_SSE | Use Intel SSE instruction set | ON | Intel CPU which supports SSE |
| USE_AVX | Use Intel AVX instruction set | ON | Intel CPU which supports AVX |
| USE_AVX2 | Build tiny-dnn with AVX2 library support | OFF | Intel CPU which supports AVX2 |
| USE_NNPACK | Use NNPACK for convolution operation | OFF | Acceleration package for neural networks on multi-core CPUs |
| USE_OPENCL | Enable/Disable OpenCL support (experimental) | OFF | The open standard for parallel programming of heterogeneous systems |
| USE_LIBDNN | Use Greentea LibDNN for convolution operation with GPU via OpenCL (experimental) | OFF | A universal convolution implementation supporting CUDA and OpenCL |
| USE_SERIALIZER | Enable model serialization | ON [2] | - |
| USE_DOUBLE | Use double precision computations instead of single precision | OFF | - |
| USE_ASAN | Use Address Sanitizer | OFF | clang or gcc compiler |
| USE_IMAGE_API | Enable Image API support | ON | - |
| USE_GEMMLOWP | Enable gemmlowp support | OFF | - |
| BUILD_TESTS | Build unit tests | OFF [3] | - |
| BUILD_EXAMPLES | Build example projects | OFF | - |
| BUILD_DOCS | Build documentation | OFF | Doxygen |
| PROFILE | Build with profiling information | OFF | gprof |

[1] tiny-dnn uses the C++14 standard library for parallelization by default.

[2] If you don't use serialization, you can switch it off to speed up compilation.

[3] tiny-dnn uses Google Test as the default framework to run unit tests. No pre-installation is required; it is automatically downloaded during CMake configuration.

For example, type the following command if you want to use Intel TBB and build the tests:

cmake -DUSE_TBB=ON -DBUILD_TESTS=ON .

Customize configurations

You can edit include/config.h to customize default behavior.
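For instance, single vs. double precision can be switched at compile time. The sketch below assumes CNN_USE_DOUBLE is the macro behind this switch (presumably what the USE_DOUBLE CMake option toggles); check config.h for the exact names before relying on it:

```cpp
// Sketch: pick double precision at compile time instead of editing config.h.
// CNN_USE_DOUBLE is assumed to be the macro checked by config.h.
#define CNN_USE_DOUBLE
#include "tiny_dnn/tiny_dnn.h"

static_assert(sizeof(tiny_dnn::float_t) == sizeof(double),
              "float_t is expected to become double under CNN_USE_DOUBLE");
```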

Examples

Construct convolutional neural networks

#include "tiny_dnn/tiny_dnn.h"using namespace tiny_dnn;using namespace tiny_dnn::activation;using namespace tiny_dnn::layers;void construct_cnn() { using namespace tiny_dnn; network net; // add layers net << conv(32, 32, 5, 1, 6) << tanh() // in:32x32x1, 5x5conv, 6fmaps << ave_pool(28, 28, 6, 2) << tanh() // in:28x28x6, 2x2pooling << fc(14 * 14 * 6, 120) << tanh() // in:14x14x6, out:120 << fc(120, 10); // in:120, out:10 assert(net.in_data_size() == 32 * 32); assert(net.out_data_size() == 10); // load MNIST dataset std::vector train_labels; std::vector train_images; parse_mnist_labels("train-labels.idx1-ubyte", &train_labels); parse_mnist_images("train-images.idx3-ubyte", &train_images, -1.0, 1.0, 2, 2); // declare optimization algorithm adagrad optimizer; // train (50-epoch, 30-minibatch) net.train(optimizer, train_images, train_labels, 30, 50); // save net.save("net"); // load // network net2; // net2.load("net");}

Construct multi-layer perceptron (mlp)

#include "tiny_dnn/tiny_dnn.h"using namespace tiny_dnn;using namespace tiny_dnn::activation;using namespace tiny_dnn::layers;void construct_mlp() { network net; net << fc(32 * 32, 300) << sigmoid() << fc(300, 10); assert(net.in_data_size() == 32 * 32); assert(net.out_data_size() == 10);}

Another way to construct mlp

#include "tiny_dnn/tiny_dnn.h"using namespace tiny_dnn;using namespace tiny_dnn::activation;void construct_mlp() { auto mynet = make_mlp({ 32 * 32, 300, 10 }); assert(mynet.in_data_size() == 32 * 32); assert(mynet.out_data_size() == 10);}

For more samples, read examples/main.cpp or the MNIST example page.

Contributing

Since the deep learning community is rapidly growing, we'd love to get contributions from you to accelerate tiny-dnn development! For a quick guide to contributing, take a look at the Contribution Documents.

References

[1] Y. Bengio, Practical Recommendations for Gradient-Based Training of Deep Architectures. arXiv:1206.5533v2, 2012

[2] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86, 2278-2324, 1998.

Other useful reference lists:

- UFLDL Recommended Readings
- deeplearning.net reading list

License

The BSD 3-Clause License

Gitter rooms

We have gitter rooms for discussing new features & QA. Feel free to join us!

developers https://gitter.im/tiny-dnn/developers
users https://gitter.im/tiny-dnn/users
