Paddle Serving is PaddlePaddle's online prediction service framework



Motivation

We consider deploying deep learning inference services online to be a user-facing application of the future. The goal of this project: once you have trained a deep neural network with Paddle, you can also deploy the model online easily.

Installation

We highly recommend running Paddle Serving in Docker; please see Run in Docker.

# Run CPU Docker
docker pull hub.baidubce.com/paddlepaddle/serving:latest
docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest
docker exec -it test bash

# Run GPU Docker
nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
nvidia-docker exec -it test bash
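Note that the GPU image is run through nvidia-docker, so the host machine needs an NVIDIA GPU driver and nvidia-docker installed.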

pip install paddle-serving-client
pip install paddle-serving-server       # CPU
pip install paddle-serving-server-gpu   # GPU

You may need to use a domestic mirror source to speed up the download (in China, you can use the Tsinghua mirror by adding -i https://pypi.tuna.tsinghua.edu.cn/simple to the pip command).
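For example, to install the CPU server package through the Tsinghua mirror:

pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple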

If you need to install modules compiled from the develop branch, please download the packages from the latest packages list and install them with pip install.

Packages of Paddle Serving support CentOS 6/7 and Ubuntu 16/18. Alternatively, you can use the HTTP service without installing the client.

Pre-built services with Paddle Serving

Chinese Word Segmentation

> python -m paddle_serving_app.package --get_model lac
> tar -xzf lac.tar.gz
> python lac_web_service.py lac_model/ lac_workdir 9393 &
> curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "我爱北京天安门"}], "fetch":["word_seg"]}' http://127.0.0.1:9393/lac/prediction
{"result":[{"word_seg":"我|爱|北京|天安门"}]}

Image Classification

> python -m paddle_serving_app.package --get_model resnet_v2_50_imagenet
> tar -xzf resnet_v2_50_imagenet.tar.gz
> python resnet50_imagenet_classify.py resnet50_serving_model &
> curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"image": "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg"}], "fetch": ["score"]}' http://127.0.0.1:9292/image/prediction
{"result":{"label":["daisy"],"prob":[0.9341403245925903]}}

Quick Start Example

This quick start example is only for users who already have a model to deploy; we prepare a ready-to-deploy model here. If you want to know how to use Paddle Serving all the way from offline training to online serving, please refer to Train_To_Service.

Boston House Price Prediction model

wget --no-check-certificate https://paddle-serving.bj.bcebos.com/uci_housing.tar.gz
tar -xzf uci_housing.tar.gz
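Extracting the archive yields the two directories used in the commands below: uci_housing_model/ (the server-side model passed to --model) and uci_housing_client/ (the client-side configuration used in the RPC example).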

Paddle Serving provides HTTP and RPC based services for users to access.

HTTP service

Paddle Serving provides a built-in Python module called paddle_serving_server.serve that can start an RPC service or an HTTP service with a one-line command. If we specify the argument --name uci, we get an HTTP service whose URL is $IP:$PORT/uci/prediction.

python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292 --name uci

| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| thread | int | 4 | Concurrency of current service |
| port | int | 9292 | Exposed port of current service to users |
| name | str | "" | Service name, can be used to generate HTTP request url |
| model | str | "" | Path of paddle model directory to be served |
| mem_optim | - | - | Enable memory / graphic memory optimization |
| ir_optim | - | - | Enable analysis and optimization of calculation graph |
| use_mkl (Only for cpu version) | - | - | Run inference with MKL |
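For example, the optimization flags can be combined with the basic arguments. This is a sketch assuming the boolean flags above are passed as plain switches (behavior may differ across versions):

python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292 --name uci --mem_optim --ir_optim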

Here, we use curl to send an HTTP POST request to the service we just started. Users can use any Python library to send the HTTP POST as well, e.g., requests; see the sketch after the curl command below.

curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]}], "fetch":["price"]}' http://127.0.0.1:9292/uci/prediction
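A minimal sketch of the same request using the requests library (assumes requests is installed; the endpoint and payload are taken from the curl command above):

import requests

# Same payload as the curl example above
payload = {
    "feed": [{"x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727,
                    -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]}],
    "fetch": ["price"],
}
resp = requests.post("http://127.0.0.1:9292/uci/prediction", json=payload)
print(resp.json())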

RPC service

A user can also start an RPC service with paddle_serving_server.serve. The RPC service is usually faster than the HTTP service, although a user needs to do some coding based on Paddle Serving's Python client API. Note that we do not specify --name here.

python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292

# A user can visit the RPC service through the paddle_serving_client API
from paddle_serving_client import Client

client = Client()
client.load_client_config("uci_housing_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])
data = [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727,
        -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]
fetch_map = client.predict(feed={"x": data}, fetch=["price"])
print(fetch_map)

Here, the client.predict function has two arguments. feed is a Python dict mapping model input variable alias names to values. fetch specifies the prediction variables to be returned from the server. In the example, the names "x" and "price" were assigned when the servable model was saved during training.
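For multiple samples, a sketch of a batched call, assuming the client accepts a list of feed dicts (one per sample); verify this against your installed paddle-serving-client version:

# Batch prediction sketch: one feed dict per input sample, reusing `data`
# from the example above. Assumes list-of-dicts feed input is supported.
feed_batch = [{"x": data}, {"x": data}]
fetch_map = client.predict(feed=feed_batch, fetch=["price"])
print(fetch_map)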

Some Key Features of Paddle Serving

- Integrates with the Paddle training pipeline seamlessly; most Paddle models can be deployed with a one-line command.
- Industrial serving features supported, such as model management, online loading, online A/B testing, etc.
- Distributed key-value indexing supported, which is especially useful for large-scale sparse features as model inputs.
- Highly concurrent and efficient communication between clients and servers.
- Multiple programming languages supported on the client side, such as Golang, C++ and Python.

Document

New to Paddle Serving

- How to save a servable model?
- An end-to-end tutorial from training to inference service deployment
- Write Bert-as-Service in 10 minutes

Developers

- How to config Serving native operators on server side?
- How to develop a new Serving operator?
- How to develop a new Web Service?
- Golang client
- Compile from source code
- Deploy Web Service with uWSGI
- Hot loading for model file

About Efficiency

- How to profile Paddle Serving latency?
- How to optimize performance?
- Deploy multi-services on one GPU (Chinese)
- CPU Benchmarks (Chinese)
- GPU Benchmarks (Chinese)

FAQ

FAQ(Chinese)

Design

Design Doc

Community

User Group in China

PaddleServing QQ group               PaddleServing WeChat group

Slack

To connect with other users and contributors, you are welcome to join our Slack channel.

Contribution

If you want to contribute code to Paddle Serving, please refer to the Contribution Guidelines.

Feedback

For any feedback or to report a bug, please open a GitHub issue.

License

Apache 2.0 License
