0. Background
In " Installation and Call Method of TensorFlow Serving > In this paper, we introduce the basic concept of tensorflow service and the method of installation and invocation. We introduce how to derive our own training model and generate the * pb model and variables folder required by the service.
In " Target Detection API Interface Debugging of TensorFlow (Ultra-detailed) > In this paper, we introduce how to prepare our own training data and how to export the training model. In the fifth step, we find that only saved_model.pb model is generated, while the variables folder is empty, and the files in this folder are essential for the service, so the first step of this paper is how to transform. Generate these files
1. Conversion Method
There is a similar question on GitHub, "export as Savedmodel generating empty variables directory #1988". The replies there point out that you only need to modify the write_saved_model function in exporter.py:
def write_saved_model(saved_model_path, trained_checkpoint_prefix, inputs, outputs):
  saver = tf.train.Saver()
  with tf.Session() as sess:
    saver.restore(sess, trained_checkpoint_prefix)
    builder = tf.saved_model.builder.SavedModelBuilder(saved_model_path)

    tensor_info_inputs = {
        'inputs': tf.saved_model.utils.build_tensor_info(inputs)}
    tensor_info_outputs = {}
    for k, v in outputs.items():
      tensor_info_outputs[k] = tf.saved_model.utils.build_tensor_info(v)

    detection_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs=tensor_info_inputs,
            outputs=tensor_info_outputs,
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
        ))

    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants
            .DEFAULT_SERVING_SIGNATURE_DEF_KEY: detection_signature,
        },
    )
    builder.save()
At the same time, change the final call to write_saved_model in the _export_inference_graph function to read as follows:
write_saved_model(saved_model_path, trained_checkpoint_prefix, placeholder_tensor, outputs)
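The two changes work together: the checkpoint is restored into a live session, and SavedModelBuilder then writes both the serving signature and the actual variable values, so the variables folder is no longer empty.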
With that, the export produces a populated variables folder. If the export succeeds at this point, congratulations, you are done; when I ran it, however, I hit the following error:
Traceback (most recent call last):
  File "export_inference_graph_unfrozen.py", line 173, in <module>
    tf.app.run()
  File "/home/lthpc/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/lthpc/anaconda3/envs/tensorflow/lib/python3.5/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/lthpc/anaconda3/envs/tensorflow/lib/python3.5/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "export_inference_graph_unfrozen.py", line 169, in main
    FLAGS.output_directory)
  File "export_inference_graph_unfrozen.py", line 155, in export_inference_graph
    optimize_graph, output_collection_name)
  File "export_inference_graph_unfrozen.py", line 107, in _export_inference_graph
    output_tensors = detection_model.predict(preprocessed_inputs)
TypeError: predict() missing 1 required positional argument: 'true_image_shapes'
Someone on GitHub ran into a similar problem and reported that the export works after switching to Python 2.7. I tried it and it succeeded, so I set up a Python 2.7 TensorFlow environment to generate the serving model.
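As an alternative to downgrading Python, the error message itself hints at a different fix: in newer versions of the Object Detection API, preprocess() returns the true image shapes alongside the preprocessed tensor, and predict() expects both. A minimal sketch of that fix (hypothetical, I did not verify it in this setup; the input variable name is illustrative) would be:

# Hypothetical alternative to downgrading Python (not the route taken in this article).
# `float_inputs` stands for whatever tensor the script feeds into preprocess().
preprocessed_inputs, true_image_shapes = detection_model.preprocess(float_inputs)
output_tensors = detection_model.predict(preprocessed_inputs, true_image_shapes)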
2. Environment Configuration
The environment is tensorflow-gpu 1.12.0, CUDA 9.0, and Python 2.7. Configure it from scratch:
conda create -n py2.7 pip python=2.7
source activate py2.7
pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.12.0-cp27-none-linux_x86_64.whl
pip install tensorflow-estimator==1.10.12
pip install matplotlib
pip install pillow
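Before going further, a quick sanity check (my own habit, not a required step) is to print the TensorFlow version from the new environment:

python -c "import tensorflow as tf; print(tf.__version__)"   # should print 1.12.0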
After configuring the Python 2.7 environment, download the Object Detection API code:
git clone https://github.com/tensorflow/models.git
cd models-master/research
protoc object_detection/protos/*.proto --python_out=.
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
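Optionally, the repository ships a builder test that can be used to confirm the protobuf compilation and PYTHONPATH are set up correctly:

python object_detection/builders/model_builder_test.py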
Apply the modifications from step 1 to exporter.py (both changes), and you can then run export_inference_graph.py to export the model:
python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path mymodel/faster_rcnn_resnet50_coco.config \
    --trained_checkpoint_prefix mymodel/input/model.ckpt-50000 \
    --output_directory mymodel/output/
(python2.7) lthpc@lthpc:~/workspace_zong/tensorflow_serving/models-master/research/object_detection/mymodel$ tree
.
├── faster_rcnn_resnet50_coco.config
├── input
│   ├── checkpoint
│   ├── model.ckpt-50000.data-00000-of-00001
│   ├── model.ckpt-50000.index
│   └── model.ckpt-50000.meta
└── output
    ├── checkpoint
    ├── frozen_inference_graph.pb
    ├── model.ckpt.data-00000-of-00001
    ├── model.ckpt.index
    ├── model.ckpt.meta
    ├── pipeline.config
    └── saved_model
        ├── saved_model.pb
        └── variables
            ├── variables.data-00000-of-00001
            └── variables.index
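Before handing the model to TensorFlow Serving, it can be worth inspecting the export with the saved_model_cli tool that ships with TensorFlow; it should list a serving_default signature with the detection inputs and outputs (path follows the tree above):

saved_model_cli show --dir mymodel/output/saved_model --all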
With the above model exported, the next step is to load the trained model into TensorFlow Serving.
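As a rough sketch of that next step (the version number, model name, and ports below are illustrative, not from the export above), TensorFlow Serving expects the SavedModel under a numbered version directory, after which tensorflow_model_server can point at the parent folder:

# Copy the exported SavedModel into a versioned layout (names are illustrative)
mkdir -p serving_models/faster_rcnn/1
cp -r mymodel/output/saved_model/* serving_models/faster_rcnn/1/

# Start the model server (gRPC on 8500, REST on 8501)
tensorflow_model_server --port=8500 --rest_api_port=8501 \
    --model_name=faster_rcnn \
    --model_base_path=$(pwd)/serving_models/faster_rcnn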