Human Pose Estimation with TensorFlow

Here you can find the implementation of the Human Body Pose Estimation algorithm, presented in the DeeperCut and ArtTrack papers:

Eldar Insafutdinov, Leonid Pishchulin, Bjoern Andres, Mykhaylo Andriluka and Bernt Schiele DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model. In European Conference on Computer Vision (ECCV), 2016

Eldar Insafutdinov, Mykhaylo Andriluka, Leonid Pishchulin, Siyu Tang, Evgeny Levinkov, Bjoern Andres and Bernt Schiele ArtTrack: Articulated Multi-person Tracking in the Wild. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017

For more information visit http://pose.mpi-inf.mpg.de

Prerequisites

The implementation is in Python 3 and TensorFlow. We recommend using conda to install the dependencies. First, create a Python 3.6 environment:

conda create -n py36 python=3.6
conda activate py36

Then, install basic dependencies with conda:

conda install numpy scikit-image pillow scipy pyyaml matplotlib cython

Install TensorFlow and the remaining packages with pip:

pip install tensorflow-gpu easydict munkres

When running training or prediction scripts, please make sure to set the environment variable TF_CUDNN_USE_AUTOTUNE to 0 (see this ticket for explanation).

If your machine has multiple GPUs, you can select which GPU to run on by setting the environment variable, e.g. CUDA_VISIBLE_DEVICES=0.
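
If you prefer to set these variables from inside a script rather than on the command line, a minimal sketch (plain Python, standard-library os.environ only) is to export them before TensorFlow is imported:

# Set the environment variables before importing TensorFlow so they take
# effect when the GPU runtime initializes.
import os
os.environ["TF_CUDNN_USE_AUTOTUNE"] = "0"   # disable cuDNN autotuning
os.environ["CUDA_VISIBLE_DEVICES"] = "0"    # run on the first GPU only

import tensorflow as tf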

Demo code

Single-Person (if there is only one person in the image)

# Download pre-trained model files
$ cd models/mpii
$ ./download_models.sh
$ cd -

# Run demo of single person pose estimation
$ TF_CUDNN_USE_AUTOTUNE=0 python3 demo/singleperson.py
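
For orientation, the sketch below outlines roughly what a single-person demo of this kind does. The module and function names (config.load_config, nnet.predict, dataset.pose_dataset.data_to_input) are assumptions based on the repository layout; treat demo/singleperson.py itself as the authoritative version.

# Hypothetical sketch only -- module/function names are assumptions;
# see demo/singleperson.py in the repository for the real script.
from skimage.io import imread

from config import load_config
from nnet import predict
from dataset.pose_dataset import data_to_input

# Load the network configuration that ships with the pre-trained MPII model
cfg = load_config("demo/pose_cfg.yaml")

# Build the TensorFlow graph and restore the pre-trained weights
sess, inputs, outputs = predict.setup_pose_prediction(cfg)

# Read an image and convert it into a batch the network accepts
image = imread("demo/image.png")
image_batch = data_to_input(image)

# Run the CNN part detector and decode its score maps / location refinement
outputs_np = sess.run(outputs, feed_dict={inputs: image_batch})
scmap, locref, _ = predict.extract_cnn_output(outputs_np, cfg)

# With a single person in the image, the highest-scoring location per joint
# is taken as the predicted keypoint
pose = predict.argmax_pose_predict(scmap, locref, cfg.stride)
print(pose)  # one (x, y, score) row per body joint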

Multiple People

# Compile dependencies
$ ./compile.sh

# Download pre-trained model files
$ cd models/coco
$ ./download_models.sh
$ cd -

# Run demo of multi person pose estimation
$ TF_CUDNN_USE_AUTOTUNE=0 python3 demo/demo_multiperson.py

Training models

Please follow these instructions

Citation

Please cite ArtTrack and DeeperCut in your publications if they help your research:

@inproceedings{insafutdinov2017cvpr,
    title = {ArtTrack: Articulated Multi-person Tracking in the Wild},
    booktitle = {CVPR'17},
    url = {http://arxiv.org/abs/1612.01465},
    author = {Eldar Insafutdinov and Mykhaylo Andriluka and Leonid Pishchulin and Siyu Tang and Evgeny Levinkov and Bjoern Andres and Bernt Schiele}
}

@article{insafutdinov2016eccv,
    title = {DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model},
    booktitle = {ECCV'16},
    url = {http://arxiv.org/abs/1605.03170},
    author = {Eldar Insafutdinov and Leonid Pishchulin and Bjoern Andres and Mykhaylo Andriluka and Bernt Schiele}
}
