Deploying a PyTorch model to production with TorchServe

We trained an object detection model and wanted to deploy it to production. After researching a lot of options, we finally chose TorchServe. TorchServe is jointly developed by AWS and Facebook, so I didn't hesitate much; going with the big vendors seemed safe. I stepped into plenty of pitfalls while deploying the model, but eventually got it running on Windows 10 and, later, on a Linux server.

1. Installation

The CUDA version installed on my machine is 10.1; with CUDA 10.1, TorchServe only supports PyTorch 1.8.1 or later.

First, install the dependencies. Clone the serve repository (https://github.com/pytorch/serve#serve-a-model):

git clone https://github.com/pytorch/serve.git

Then enter the serve directory and run:

python ./ts_scripts/install_dependencies.py --cuda=cu101

Install TorchServe:

pip install torchserve torch-model-archiver torch-workflow-archiver

2. Command analysis:

Package the model into a .mar file:

torch-model-archiver --model-name densenet161 \
    --version 1.0 \
    --model-file ./serve/examples/image_classifier/densenet_161/model.py \
    --serialized-file densenet161-8d451a50.pth \
    --export-path model_store \
    --extra-files ./serve/examples/image_classifier/index_to_name.json \
    --handler image_classifier

--model-name: the model's name; user-defined and independent of the actual model class

--version: version number, user-defined

--model-file: specifies the model definition file; model.py should contain only one class, namely your model class (e.g. a classification model or an object detection model)

--serialized-file: specifies the saved model weights file

--export-path: specifies where the packaged .mar file is stored

--extra-files: a JSON file storing related parameters; not strictly required, but best to create one

--handler: the handler; image_classifier here is one of TorchServe's built-in handlers, but you can instead pass your own Python file that implements the processing logic, including data pre- and post-processing

The model.py file should contain your own model, preferably as a single class. Multiple classes are possible, but then the right one has to be selected in the handler file. The handler file is very important and needs to be rewritten for your own task.
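As a rough sketch of the shape a custom handler takes: in a real deployment you would subclass ts.torch_handler.base_handler.BaseHandler, but here the base class is stubbed out so the example runs standalone, and the preprocessing/postprocessing steps are placeholder assumptions, not TorchServe requirements.

```python
# Standalone sketch of a custom TorchServe handler. The stub base class
# only mimics the real BaseHandler's handle() flow so this example runs
# without torchserve installed.
class BaseHandler:
    def handle(self, data, context):
        # TorchServe calls handle() per request:
        # preprocess -> inference -> postprocess
        model_input = self.preprocess(data)
        model_output = self.inference(model_input)
        return self.postprocess(model_output)

    def inference(self, data):
        # Placeholder for self.model(data); here data just passes through.
        return data


class MyDetectionHandler(BaseHandler):
    def preprocess(self, data):
        # e.g. decode image bytes, resize, normalize;
        # here: scale pixel values to [0, 1]
        return [x / 255.0 for x in data]

    def postprocess(self, data):
        # e.g. keep only detections above a threshold;
        # here: drop values below 0.5
        return [x for x in data if x > 0.5]


handler = MyDetectionHandler()
print(handler.handle([255, 64, 200], context=None))  # [1.0, 0.7843137254901961]
```

The key point is that you normally only override preprocess and postprocess; handle and inference are inherited from the base class.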

Start the model:

torchserve --start --ncs --model-store model_store --models densenet161.mar

3. Handler file (the key part)

class BaseHandler(abc.ABC):

    def __init__(self): ...
    def initialize(self, context): ...
    def _load_torchscript_model(self, model_pt_path): ...
    def _load_pickled_model(self, model_dir, model_file, model_pt_path): ...
    def preprocess(self, data): ...
    def inference(self, data, *args, **kwargs): ...
    def postprocess(self, data): ...
    def handle(self, data, context): ...
    def explain_handle(self, data_preprocess, raw_data): ...
    def _is_explain(self): ...

The above are the core methods of base_handler.py, which is located at:

.\serve\ts\torch_handler\base_handler.py

The two methods we need to modify are preprocess and postprocess. As the file shows, preprocess holds the data preprocessing code, which is essential: the data we receive cannot be fed into the model directly and first needs some processing, such as resizing and normalizing images. postprocess is just as important: the model's raw output contains a lot of irrelevant data, so we can process it and return only the final results. For object detection, for example, return only the detection boxes above a confidence threshold.
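For the object detection case, a postprocess step along those lines might look like the following sketch; the (x1, y1, x2, y2, score, class_id) tuple layout and the 0.4 default threshold are assumptions for illustration, not anything TorchServe prescribes.

```python
# Sketch: filter raw detections down to boxes above a confidence threshold,
# so the client only receives the final, relevant results.
def postprocess_detections(detections, threshold=0.4):
    results = []
    for x1, y1, x2, y2, score, class_id in detections:
        if score >= threshold:
            results.append({
                "bbox": [x1, y1, x2, y2],
                "score": score,
                "class_id": class_id,
            })
    return results


raw = [
    (10, 10, 50, 50, 0.92, 0),  # confident detection -> kept
    (30, 40, 80, 90, 0.15, 0),  # low confidence -> dropped
]
print(postprocess_detections(raw))
```

Returning plain dicts/lists like this also keeps the response JSON-serializable, which is what TorchServe sends back to the client.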

In the handler file, modify the following when loading the model:

    def _load_pickled_model(self, model_dir, model_file, model_pt_path):
        """
        Loads the pickle file from the given model path.
        """
        model_def_path = os.path.join(model_dir, model_file)

        if not os.path.isfile(model_def_path):
            raise RuntimeError("Missing the model.py file")
        module = importlib.import_module(model_file.split(".")[0])
        model_class_definitions = list_classes_from_module(module)
        # model.py contains several classes; pick the model's main class here
        model_class = model_class_definitions[2]
        model = model_class(self.heads,
                            pretrained=False,
                            down_ratio=self.down_ratio,
                            final_kernel=1,
                            last_level=5,
                            head_conv=self.head_conv
                            )
        if model_pt_path:
            model = load_model(model, model_pt_path)
        return model

model_class needs to be modified here because in my model.py the model is not a single class but several classes. You have to pick out the model's main class: list all the classes in the file, then select the appropriate one as model_class.
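The idea can be sketched without TorchServe: list_classes_from_module is TorchServe's own helper, so the stand-in below (built with the standard inspect module on a throwaway module) is only an assumed approximation of it. It also shows why a fixed index like model_class_definitions[2] is fragile, and that selecting the class by name is more robust.

```python
import inspect
import types

# Build a throwaway module standing in for model.py, which defines
# several classes, only one of which is the actual model.
module = types.ModuleType("model")
exec(
    "class Backbone: pass\n"
    "class Neck: pass\n"
    "class MyDetector: pass\n",
    module.__dict__,
)


def list_classes_from_module(mod):
    # Assumed stand-in for TorchServe's helper of the same name:
    # collect the classes defined inside the given module.
    return [obj for _, obj in inspect.getmembers(mod, inspect.isclass)
            if obj.__module__ == mod.__name__]


classes = list_classes_from_module(module)
# inspect.getmembers returns members alphabetically, so a hard-coded
# index depends on class names; selecting by name is safer.
model_class = next(c for c in classes if c.__name__ == "MyDetector")
print(model_class.__name__)  # MyDetector
```

With a name-based lookup, renaming or adding helper classes in model.py no longer silently changes which class gets instantiated.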

4. index_to_name.json file description

This file is written according to your own needs. I didn't use it for class-name mapping; I just put a few lines in it:

{
  "threshold": "0.4",
  "classnums": "1"
}
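One way such a file can be used in the handler, as a minimal sketch: the JSON text is inlined here so the example is self-contained, but in a real handler you would read the file from the model directory (TorchServe copies --extra-files there) during initialize(). The keys are the ones from my file above.

```python
import json

# Sketch: parse the extra-files JSON and pull out the handler parameters.
# In a real handler this text would come from
# open(os.path.join(model_dir, "index_to_name.json")).read().
config_text = '{"threshold": "0.4", "classnums": "1"}'

config = json.loads(config_text)
threshold = float(config["threshold"])   # values were stored as strings
num_classes = int(config["classnums"])
print(threshold, num_classes)  # 0.4 1
```

Keeping parameters like the detection threshold in this file means they can be changed by repackaging the .mar, without editing the handler code.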

I ran into a few other pitfalls along the way as well, but offhand I can't remember them all.

Keywords: Python Anaconda Pytorch Microservices Object Detection

Added by TobyRT on Fri, 31 Dec 2021 13:02:39 +0200