
Product environment model deployment, Docker image, Bazel workspace, export model, server, client

巴扎黑 · 2017-06-23

Deploying a model to a production environment: build a simple web app that lets users upload images, runs the Inception model on them, and returns automatic image classification results.

Build a TensorFlow Serving development environment. Install Docker. Use the provided configuration file to build a Docker image locally: docker build --pull -t $USER/tensorflow-serving-devel . Then run a container from that image: docker run -v $HOME:/mnt/home -p 9999:9999 -it $USER/tensorflow-serving-devel. This mounts the home directory at /mnt/home inside the container and drops you into a terminal there. The workflow is to edit code with an IDE or editor on the host, use the container only to run the build tools, and reach the server from the host through port 9999. The exit command leaves the container terminal and stops the container.
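As a quick sanity check of the -p 9999:9999 mapping, a few lines of Python run on the host can confirm whether anything inside the container is listening yet. This is only an illustrative sketch, not part of the original setup:

import socket

# connect_ex returns 0 when something is accepting connections on the port.
sock = socket.socket()
result = sock.connect_ex(("127.0.0.1", 9999))
print "port 9999 reachable" if result == 0 else "nothing listening on 9999 yet"
sock.close()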

TensorFlow Serving is written in C++ and built with Google's Bazel build tool; Bazel runs inside the container. Bazel manages third-party dependencies at the source level and downloads and builds them automatically. The WORKSPACE file at the project root defines these dependencies. The TensorFlow models repository contains the Inception model code.

TensorFlow Serving is pulled into the project as a Git submodule: mkdir ~/serving_example, cd ~/serving_example, git init, git submodule add <URL of the TensorFlow Serving repository> tf_serving, git submodule update --init --recursive.

In the WORKSPACE file, local_repository rules define third-party dependencies that live in locally stored directories. The project then loads the tf_workspace rule to initialize TensorFlow's own dependencies.

workspace(name = "serving")

local_repository(
name = "tf_serving",
path = __workspace_dir__ + "/tf_serving",
)

local_repository(
name = "org_tensorflow",
path = __workspace_dir__ + "/tf_serving/tensorflow",
)

load('//tf_serving/tensorflow/tensorflow:workspace.bzl', 'tf_workspace')
tf_workspace("tf_serving/tensorflow/", "@org_tensorflow")

bind(
name = "libssl",
actual = "@boringssl_git //:ssl",
)

bind(
name = "zlib",
actual = "@zlib_archive//:zlib",
)

local_repository(
name = "inception_model",
path = __workspace_dir__ + "/tf_serving/tf_models/inception",
)

Export the trained model, its data flow graph and variables, for production use. The exported graph must receive its input from a placeholder and compute the output in a single inference step. For the Inception model (or image-recognition models in general) the input is a JPEG-encoded image string, which differs from reading input from TFRecord files. Define an input placeholder, then call a conversion function that maps this external input into the format the original inference model expects: decode the image string into a pixel tensor with each component in [0, 1], resize the image to the width and height the model expects, and rescale the pixel values into the model's required interval [-1, 1]. Finally, call the original model's inference method on the transformed input.

Assign values to the parameters used by the inference method by restoring them from a checkpoint. During training, checkpoint files containing the learned parameters are saved periodically; the most recently saved one holds the latest model parameters. Download the pre-trained checkpoint file. In the Docker container: cd /tmp, curl -O <checkpoint URL>, tar -xzf inception-v3-2016-03-01.tar.gz.

The tensorflow_serving.session_bundle.exporter.Exporter class exports the model. Create an instance by passing in a saver, then build the model signature with exporter.classification_signature, specifying the input_tensor and the output tensors: classes_tensor holds the list of output class names, and scores_tensor holds the score (or probability) the model assigns to each class. For models with many classes, the configuration uses tf.nn.top_k to return only selected classes: the K classes with the highest assigned scores, sorted in descending order. Call exporter.Exporter.init with the signature, then the export method, which takes the output path, the model version number, and the session. The Exporter class generates code with dependencies of its own, so it is run with bazel inside the Docker container. The code is saved into the Bazel workspace as export.py.

import time
import sys

import tensorflow as tf
from tensorflow_serving.session_bundle import exporter
from inception import inception_model

NUM_CLASSES_TO_RETURN = 10

def convert_external_inputs(external_x):
    # transform the external input (a JPEG-encoded string) into the model's input format
    image = tf.image.convert_image_dtype(tf.image.decode_jpeg(external_x, channels=3), tf.float32)
    images = tf.image.resize_bilinear(tf.expand_dims(image, 0), [299, 299])
    images = tf.mul(tf.sub(images, 0.5), 2)
    return images

def inference(images):
    logits, _ = inception_model.inference(images, 1001)
    return logits

external_x = tf.placeholder(tf.string)
x = convert_external_inputs(external_x)
y = inference(x)

saver = tf.train.Saver()

with tf.Session() as sess:
        ckpt = tf.train.get_checkpoint_state(sys.argv[1])
        if ckpt and ckpt.model_checkpoint_path:
            saver.restore(sess, sys.argv[1] + "/" + ckpt.model_checkpoint_path)
        else:
            print("Checkpoint file not found")
            raise SystemExit

        scores, class_ids = tf.nn.top_k(y, NUM_CLASSES_TO_RETURN)

        classes = tf.contrib.lookup.index_to_string(tf.to_int64(class_ids),
            mapping=tf.constant([str(i) for i in range(1001)]))

        model_exporter = exporter.Exporter(saver)
        signature = exporter.classification_signature(
            input_tensor=external_x, classes_tensor=classes, scores_tensor=scores)
        model_exporter.init(default_graph_signature=signature, init_op=tf.initialize_all_tables())
        model_exporter.export(sys.argv[1] + "/export", tf.constant(time.time()), sess)
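To make the top-K selection above concrete, here is a small self-contained sketch (the logits values are made up for illustration) showing that tf.nn.top_k returns the highest scores in descending order together with their class indices:

import tensorflow as tf

# Hypothetical logits for a 5-class model, just to illustrate tf.nn.top_k.
logits = tf.constant([[0.1, 2.5, 0.3, 1.7, 0.9]])

scores, class_ids = tf.nn.top_k(logits, k=3)

with tf.Session() as sess:
    s, c = sess.run([scores, class_ids])
    print(s)  # [[2.5 1.7 0.9]] -- highest scores first
    print(c)  # [[1 3 4]]       -- indices of those classes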

A BUILD file defines the build rule for it. Run the exporter inside the container: cd /mnt/home/serving_example, then bazel run :export /tmp/inception-v3, which creates the export under /tmp/inception-v3/{current timestamp}/ from the checkpoint files found in /tmp/inception-v3. The first run has to compile TensorFlow, so it takes a while. The load statement imports the cc_proto_library rule definition from the external protobuf library, which is used to define the build rule for the proto file. The container runs the inference server with bazel run :server 9999 /tmp/inception-v3/export/{timestamp}.

py_binary(
        name = "export",
        srcs = [
            "export.py",
        ],
        deps = [
            "@tf_serving//tensorflow_serving/session_bundle:exporter",
            "@org_tensorflow//tensorflow:tensorflow_py",
            "@inception_model//inception",
        ],
    )

load("@protobuf//:protobuf.bzl", "cc_proto_library")

cc_proto_library(
        name="classification_service_proto",
        srcs=["classification_service.proto"],
        cc_libs = ["@protobuf//:protobuf"],
        protoc="@protobuf//:protoc",
        default_runtime="@protobuf//:protobuf",
        use_grpc_plugin=1
    )

cc_binary(
        name = "server",
        srcs = [
            "server.cc",
            ],
        deps = [
            ":classification_service_proto",
            "@tf_serving//tensorflow_serving/servables/tensorflow:session_bundle_factory",
            "@grpc//:grpc++",
            ],
    )

Define the server interface. TensorFlow Serving uses gRPC, a binary protocol built on HTTP/2, which supports creating servers and auto-generating client stubs in many languages. The service contract is defined in protocol buffers, which serve both as gRPC's IDL (interface definition language) and as its binary wire encoding. The service accepts a JPEG-encoded string of the image to classify and returns a list of inferred classes ordered by score; it is defined in classification_service.proto. The same kind of interface could serve models that accept images, audio clips, or text. The proto compiler turns the proto file into client and server class definitions: build them with bazel build :classification_service_proto and inspect the result in bazel-genfiles/classification_service.grpc.pb.h. The inference logic must be implemented against the generated ClassificationService::Service interface; check bazel-genfiles/classification_service.pb.h for the request and response message definitions. Each proto definition becomes a C++ interface for that type.

syntax = "proto3";

message ClassificationRequest {
    // JPEG-encoded string of the image to be classified.
    bytes input = 1;

    // Alternatively, a structured request (e.g. for an Iris classifier) could
    // carry the features directly:
    // float petalWidth = 1;
    // float petalHeight = 2;
    // float sepalWidth = 3;
    // float sepalHeight = 4;
}

message ClassificationResponse {
    repeated ClassificationClass classes = 1;
}

message ClassificationClass {
    string name = 1;
    float score = 2;
}

service ClassificationService {
    rpc classify(ClassificationRequest) returns (ClassificationResponse);
}
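To see what the generated message classes look like in use, here is a small sketch in Python (the classification_service_pb2 module is produced later by the protoc command shown in the client section; the image path is hypothetical):

import classification_service_pb2

# Build a request the same way the web client below will.
request = classification_service_pb2.ClassificationRequest()
with open("/tmp/example.jpg", "rb") as f:   # hypothetical test image
    request.input = f.read()

# Protocol buffer messages serialize to a compact binary wire format...
wire_bytes = request.SerializeToString()

# ...and parse back into an equivalent message on the other side.
parsed = classification_service_pb2.ClassificationRequest()
parsed.ParseFromString(wire_bytes)
assert parsed.input == request.input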

Implement the inference server: implement ClassificationService::Service, load the exported model, and call its inference method. The exported model is represented by a SessionBundle object, which contains a TF session with the fully loaded data flow graph plus the classification-signature metadata written by the export tool. The SessionBundleFactory class creates the SessionBundle, configured to load the exported model from the path given by pathToExportFiles, and returns a unique pointer to the created instance. ClassificationServiceImpl is defined to take the SessionBundle instance as a constructor parameter.

To load the classification signature, the GetClassificationSignature function reads the ClassificationSignature from the model export metadata; the signature specifies the logical name of the input tensor that receives the image and the logical names of the data flow graph output tensors that map to the inference results. To transform the protobuf input into an inference input tensor, the JPEG-encoded image string from the request is copied into the input tensor. To run inference, the SessionBundle's TF session object is run once, passing in the input tensor and the desired output tensor names. The inference output tensors are then transformed into the protobuf output: their contents are copied into the ClassificationResponse message in the shape the response output parameter specifies. Finally, set up the gRPC server: create the SessionBundle object, construct a ClassificationServiceImpl instance with it, and start serving, as in the sample code below.

#include <iostream>
#include <memory>
#include <string>

#include <grpc++/grpc++.h>

#include "classification_service.grpc.pb.h"

#include "tensorflow_serving/servables/tensorflow/session_bundle_factory.h"

using namespace std;
using namespace tensorflow::serving;
using namespace grpc;

unique_ptr<SessionBundle> createSessionBundle(const string& pathToExportFiles) {
    SessionBundleConfig session_bundle_config = SessionBundleConfig();
    unique_ptr<SessionBundleFactory> bundle_factory;
    SessionBundleFactory::Create(session_bundle_config, &bundle_factory);

    unique_ptr<SessionBundle> sessionBundle;
    bundle_factory->CreateSessionBundle(pathToExportFiles, &sessionBundle);

    return sessionBundle;
}

class ClassificationServiceImpl final : public ClassificationService::Service {

private:
    unique_ptr<SessionBundle> sessionBundle;

public:
    ClassificationServiceImpl(unique_ptr<SessionBundle> sessionBundle) :
        sessionBundle(move(sessionBundle)) {};

    Status classify(ServerContext* context, const ClassificationRequest* request,
                    ClassificationResponse* response) override {

        // Load the classification signature stored with the exported model.
        ClassificationSignature signature;
        const tensorflow::Status signatureStatus =
            GetClassificationSignature(sessionBundle->meta_graph_def, &signature);

        if (!signatureStatus.ok()) {
            return Status(StatusCode::INTERNAL, signatureStatus.error_message());
        }

        // Transform the protobuf input into an inference input tensor.
        tensorflow::Tensor input(tensorflow::DT_STRING, tensorflow::TensorShape());
        input.scalar<string>()() = request->input();

        vector<tensorflow::Tensor> outputs;

        // Run inference: feed the input under the signature's input tensor name and
        // request the classes and scores output tensors.
        const tensorflow::Status inferenceStatus = sessionBundle->session->Run(
            {{signature.input().tensor_name(), input}},
            {signature.classes().tensor_name(), signature.scores().tensor_name()},
            {},
            &outputs);

        if (!inferenceStatus.ok()) {
            return Status(StatusCode::INTERNAL, inferenceStatus.error_message());
        }

        // Copy the output tensors into the protobuf response.
        for (int i = 0; i < outputs[0].NumElements(); ++i) {
            ClassificationClass *classificationClass = response->add_classes();
            classificationClass->set_name(outputs[0].flat<string>()(i));
            classificationClass->set_score(outputs[1].flat<float>()(i));
        }

        return Status::OK;
    }
};


int main(int argc, char** argv) {

    if (argc < 3) {
        cerr << "Usage: server <port> /path/to/export/files" << endl;
        return 1;
    }

    const string serverAddress(string("0.0.0.0:") + argv[1]);
    const string pathToExportFiles(argv[2]);

    unique_ptr<SessionBundle> sessionBundle = createSessionBundle(pathToExportFiles);

    ClassificationServiceImpl classificationServiceImpl(move(sessionBundle));

    ServerBuilder builder;
    builder.AddListeningPort(serverAddress, grpc::InsecureServerCredentials());
    builder.RegisterService(&classificationServiceImpl);

    unique_ptr<Server> server = builder.BuildAndStart();
    cout << "Server listening on " << serverAddress << endl;

    server->Wait();

    return 0;
}

Access the inference service from a web app through a server-side component. Run the Python protocol buffer compiler to generate the ClassificationService Python protocol buffer client: pip install grpcio cython grpcio-tools, then python -m grpc.tools.protoc -I. --python_out=. --grpc_python_out=. classification_service.proto. This produces classification_service_pb2.py, which contains the stub for calling the service. When the server receives a POST request it parses the submitted form, creates a ClassificationRequest object, sets up a channel to the classification server and submits the request, then renders the classification response as HTML and sends it back to the user. Outside the container, run the server with python client.py and navigate the browser to http://localhost:8080 to reach the UI.

from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

import cgi
import classification_service_pb2
from grpc.beta import implementations

class ClientApp(BaseHTTPRequestHandler):
        def do_GET(self):
            self.respond_form()

        def respond_form(self, response=""):

            # Minimal upload form (the original HTML markup was lost in extraction);
            # a file input named "file" is what do_POST below expects.
            form = """
                <html><body>
                <h1>Image classification service</h1>
                <form enctype="multipart/form-data" method="post">
                <div>Image: <input type="file" name="file"></div>
                <div><input type="submit" value="Upload"></div>
                </form>
                %s
                </body></html>
                """

            response = form % response

            self.send_response(200)
            self.send_header("Content-type", "text/html")
            self.send_header("Content-length", len(response))
            self.end_headers()
            self.wfile.write(response)

        def do_POST(self):

            form = cgi.FieldStorage(
                fp=self.rfile,
                headers=self.headers,
                environ={
                    'REQUEST_METHOD': 'POST',
                    'CONTENT_TYPE': self.headers['Content-Type'],
                })

            request = classification_service_pb2.ClassificationRequest()
            request.input = form['file'].file.read()

            channel = implementations.insecure_channel("127.0.0.1", 9999)
            stub = classification_service_pb2.beta_create_ClassificationService_stub(channel)
            response = stub.classify(request, 10)  # 10 secs timeout

            self.respond_form("<div>Response: %s</div>" % response)


    if __name__ == '__main__':
        host_port = ('0.0.0.0', 8080)
        print "Serving in %s:%s" % host_port
        HTTPServer(host_port, ClientApp).serve_forever()
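As an alternative to the browser UI, a small command-line script can exercise the same stub directly. This is a sketch that is not part of the original article: it assumes the generated classification_service_pb2.py is importable, the inference server is listening on port 9999, and the image path given on the command line exists.

# test_client.py -- send one JPEG to the classification server and print the reply.
import sys

import classification_service_pb2
from grpc.beta import implementations

def main(image_path):
    with open(image_path, "rb") as f:
        request = classification_service_pb2.ClassificationRequest()
        request.input = f.read()

    channel = implementations.insecure_channel("127.0.0.1", 9999)
    stub = classification_service_pb2.beta_create_ClassificationService_stub(channel)
    response = stub.classify(request, 10)  # 10 secs timeout

    # Each returned entry carries the class name and the score the model assigned.
    for classification_class in response.classes:
        print classification_class.name, classification_class.score

if __name__ == '__main__':
    main(sys.argv[1])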

Prepare for production: turn the classification server into a deployable product. Copy the compiled server files to a permanent location inside the container and clean up all temporary build files. In the container: mkdir /opt/classification_server, cd /mnt/home/serving_example, cp -R bazel-bin/. /opt/classification_server, bazel clean. Outside the container, commit its state to a new Docker image, creating a snapshot that records the changes to its virtual file system: docker ps to find the container ID, then docker commit <container id>. Finally, push the image to your preferred Docker registry or cloud service and serve from there.

Reference:
《面向机器智能的TensorFlow实践》 (TensorFlow for Machine Intelligence)

Paid consulting is welcome (150 RMB per hour); my WeChat: qingxingfengzi.

