QNNPACK - Facebook's open-source deep learning acceleration framework for mobile

User-contributed post · 783 · 2022-10-29

QNNPACK

QNNPACK (Quantized Neural Networks PACKage) is a mobile-optimized library for low-precision, high-performance neural network inference. QNNPACK provides implementations of common neural network operators on quantized 8-bit tensors.

QNNPACK is not intended to be directly used by machine learning researchers; instead it provides low-level performance primitives for high-level deep learning frameworks. As of today, QNNPACK is integrated in PyTorch 1.0 with Caffe2 graph representation.

Operator Coverage

The following operators are currently implemented or planned for implementation:

- 2D Convolution
- 2D Deconvolution
- Channel Shuffle
- Fully Connected
- Locally Connected
- 2D Max Pooling
- 2D Average Pooling
- Global Average Pooling
- Sigmoid
- Leaky ReLU
- Clamp (can be used for ReLU, ReLU6 if it is not fused in another operator)
- SoftArgMax (aka SoftMax)
- Group Normalization

Building

QNNPACK provides standard CMake-based build scripts.

Native compilation

To build QNNPACK for the host machine, use the scripts/build-local.sh script.

Cross-compilation for Android

To cross-compile for Android, set $ANDROID_NDK environment variable (where $ANDROID_NDK is the path to Android NDK directory, e.g. /opt/android-ndk-r15c) and use one of the scripts from the table below:

| ABI | Build script | Restrictions |
| --- | --- | --- |
| armeabi-v7a | scripts/build-android-armv7.sh | Requires CPU with ARM NEON |
| arm64-v8a | scripts/build-android-arm64.sh | |
| x86 | scripts/build-android-x86.sh | |

Notes:

- On armeabi-v7a, qnnp_initialize will fail with qnnp_status_unsupported_hardware if the mobile CPU does not support ARM NEON.
- Don't set -DANDROID_ARM_NEON=1 when compiling QNNPACK, as it can make qnnp_initialize crash on CPUs without ARM NEON.

Cross-compilation for iOS

To cross-compile for iOS, clone ios-cmake, and set $IOS_CMAKE_TOOLCHAIN_FILE environment variable (where $IOS_CMAKE_TOOLCHAIN_FILE is the path to ios.toolchain.cmake file in ios-cmake), and use one of the scripts from the table below:

| Architecture | Build script | Notes |
| --- | --- | --- |
| armv7 | scripts/build-ios-armv7.sh | iPhone 3GS/4/4S |
| armv7s | scripts/build-ios-armv7s.sh | iPhone 5 and newer |
| arm64 | scripts/build-ios-arm64.sh | iPhone 5S and newer |
| arm64e | scripts/build-ios-arm64e.sh | iPhone XS/XR |
| i386 | scripts/build-ios-i386.sh | iPhone Simulator (32-bit) |
| x86_64 | scripts/build-ios-x86_64.sh | iPhone Simulator (64-bit) |

End-to-End Benchmarking

The Caffe2 backend of PyTorch 1.0 natively integrates QNNPACK and provides a pre-trained quantized MobileNet v2 model. Below are instructions for benchmarking this model end-to-end with QNNPACK.

Raspberry Pi 2 or 3

```shell
# Clone PyTorch 1.0 repo
git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch

# Optional: update QNNPACK submodule to latest revision
git submodule update --remote third_party/QNNPACK

# Build Caffe2 (including binaries) for the host system
# Use only 1 thread for build to avoid out-of-memory failures
MAX_JOBS=1 scripts/build_local.sh -DBUILD_BINARY=ON -DBUILD_PYTHON=OFF \
  -DUSE_OBSERVERS=OFF -DUSE_DISTRIBUTED=OFF

# Download model weights
wget https://s3.amazonaws.com/download.caffe2.ai/models/mobilenet_v2_1.0_224_quant/init_net.pb

# Download model graph
wget https://s3.amazonaws.com/download.caffe2.ai/models/mobilenet_v2_1.0_224_quant/predict_net.pb

# Run speed benchmark with 50 warm-up iterations and 10 measurement iterations
build/bin/speed_benchmark --net predict_net.pb --init_net init_net.pb \
  --input data --input_dims 1,3,224,224 --input_type float \
  --warmup 50 --iter 10
```

ARMv7 (32-bit) Android

```shell
# Clone PyTorch 1.0 repo
git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch

# Optional: update QNNPACK submodule to latest revision
git submodule update --remote third_party/QNNPACK

# Build Caffe2 (including binaries) for Android, and push to device
scripts/build_android.sh -DANDROID_TOOLCHAIN=clang -DBUILD_BINARY=ON
adb push build_android/bin/speed_benchmark /data/local/tmp/speed_benchmark

# Download model weights and copy them to Android device
wget https://s3.amazonaws.com/download.caffe2.ai/models/mobilenet_v2_1.0_224_quant/init_net.pb
adb push init_net.pb /data/local/tmp/init_net.pb

# Download model graph and copy it to Android device
wget https://s3.amazonaws.com/download.caffe2.ai/models/mobilenet_v2_1.0_224_quant/predict_net.pb
adb push predict_net.pb /data/local/tmp/predict_net.pb

# Run speed benchmark with 50 warm-up iterations and 10 measurement iterations
adb shell /data/local/tmp/speed_benchmark \
  --net /data/local/tmp/predict_net.pb \
  --init_net /data/local/tmp/init_net.pb \
  --input data --input_dims 1,3,224,224 --input_type float \
  --warmup 50 --iter 10
```

ARM64 (64-bit) Android

```shell
# Clone PyTorch 1.0 repo
git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch

# Optional: update QNNPACK submodule to latest revision
git submodule update --remote third_party/QNNPACK

# Build Caffe2 (including binaries) for Android, and push to device
scripts/build_android.sh -DANDROID_ABI=arm64-v8a -DANDROID_TOOLCHAIN=clang -DBUILD_BINARY=ON
adb push build_android/bin/speed_benchmark /data/local/tmp/speed_benchmark

# Download model weights and copy them to Android device
wget https://s3.amazonaws.com/download.caffe2.ai/models/mobilenet_v2_1.0_224_quant/init_net.pb
adb push init_net.pb /data/local/tmp/init_net.pb

# Download model graph and copy it to Android device
wget https://s3.amazonaws.com/download.caffe2.ai/models/mobilenet_v2_1.0_224_quant/predict_net.pb
adb push predict_net.pb /data/local/tmp/predict_net.pb

# Run speed benchmark with 50 warm-up iterations and 10 measurement iterations
adb shell /data/local/tmp/speed_benchmark \
  --net /data/local/tmp/predict_net.pb \
  --init_net /data/local/tmp/init_net.pb \
  --input data --input_dims 1,3,224,224 --input_type float \
  --warmup 50 --iter 10
```

PEP (Performance Evaluation Platform) Method

Facebook AI Performance Evaluation Platform is a framework and backend agnostic benchmarking platform to compare machine learning inferencing runtime metrics on a set of models and a variety of backends.

We used PEP to produce the results reported in our blog post.

With an ARMv7 device connected:

```shell
# Clone PyTorch 1.0 repo
mkdir ~/Code && cd ~/Code
git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch

# Optional: update QNNPACK submodule to latest revision
git submodule update --remote third_party/QNNPACK

# Clone PEP repo
cd ~/Code
git clone --recursive https://github.com/facebook/FAI-PEP.git aibench
cd aibench

# Run the PEP benchmark; try extending the command with more specifications
# The first-time compile can take 20+ minutes
./benchmarking/run_bench.py \
  --platform android \
  -b ~/Code/aibench/specifications/models/caffe2/mobilenet_v2/mobilenet_v2_quant.json \
  --repo_dir ~/Code/pytorch \
  --frameworks_dir ~/Code/aibench/specifications/frameworks --framework caffe2
```

Acknowledgements

QNNPACK is developed by Marat Dukhan, Yiming Wu, Hao Lu, and Bert Maher. We thank Andrew Tulloch and Yangqing Jia for advice during the development of QNNPACK.

License

QNNPACK is BSD licensed, as found in the LICENSE file.
