Part 1. Implementation path
Deploy the YOLO model to the edge computing camera through OpenVINO. The implementation path is: training (YOLO) -> conversion (OpenVINO) -> deployment and running (OpenNCC).
Part 2. Specific steps
1. Train the YOLO model
1.1 Install environment dependencies
For installation details, see https://github.com/AlexeyAB/darknet#requirements-for-windows-linux-and-macos .
1.2 Compile the training tool
git clone https://github.com/AlexeyAB/darknet
cd darknet
mkdir build_release
cd build_release
cmake ..
cmake --build . --target install --parallel 8
1.3 Prepare the datasets
Put the training-set images into the train folder and the validation-set images into the val folder.
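An illustrative layout (the parent folder name is arbitrary):

dataset/
    train/   (training-set images)
    val/     (validation-set images)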
1.4 Label the dataset
git clone https://hub.fastgit.org/AlexeyAB/Yolo_mark.git
cd Yolo_mark
cmake .
make
./linux_mark.sh
For details, see the README.md in the Yolo_mark directory.
1.5 Configure the parameter files
In addition to the two datasets, several parameter files need to be configured before training:
- obj.data: specifies the number of classes and the paths of all the related files. If you use your own dataset, modify the corresponding parameters before labeling.
- obj.names: contains the names of all target classes.
- train.txt: contains the paths of all training images. A val.txt file is not strictly required; you can manually split out about 30% of the training images for validation.
The above three files are generated automatically in the Yolo_mark/x64/Release/data directory; illustrative examples follow this list.
- yolo.cfg: model structure parameters
- yolo.conv: pre-trained weights
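For a hypothetical three-class dataset, the first three files might look like this (the class names and paths are illustrative only):

# obj.data
classes = 3
train = data/train.txt
valid = data/val.txt
names = data/obj.names
backup = backup/

# obj.names (one class name per line)
person
car
bicycle

# train.txt (one image path per line)
data/train/img_0001.jpg
data/train/img_0002.jpg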
The .cfg and .conv files must match each other. Since the trained model will be deployed on OpenNCC, it is recommended to use the combination (yolov4-tiny.cfg + yolov4-tiny.conv.29) or (yolov3-tiny.cfg + yolov3-tiny.conv.11). The cfg files can be found directly in the darknet/cfg directory.
Configure the .cfg file:
Find every [yolo] layer in the cfg file. If there are three target classes, set the classes parameter of each [yolo] layer to 3, and set filters to 24 in the [convolutional] layer immediately before each [yolo] layer. The formula is filters = (classes + 5) * 3.
yolov4-tiny.cfg has two [yolo] layers, so four parameters need to be modified in total, as in the excerpt below.
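For three classes, each of the two modified spots would look roughly like this (excerpt only; all surrounding lines stay unchanged):

[convolutional]
size=1
stride=1
pad=1
# filters = (classes + 5) * 3 = (3 + 5) * 3
filters=24
activation=linear

[yolo]
classes=3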
1.6 Training
If the build in step 1.2 succeeded, a ./darknet executable is generated automatically in the darknet directory.
Enter the following command:
./darknet detector train ./obj.data ./yolov4-tiny.cfg ./yolov4-tiny.conv.29 -map
If the GPU is weaker than a 1080 Ti, training may fail with an out-of-memory error. In that case, change the batch parameter in the [net] section at the top of the cfg file to no more than 8 (8, 4, 2, 1).
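A minimal excerpt of that change (the rest of [net] stays as-is):

[net]
# lower this value (8, 4, 2, 1) until training fits in GPU memory
batch=8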
If training goes well, you will see a training-log chart like the one in the figure below.
After training, you will see a series of weights files. It is still recommended to set up a validation set when creating the dataset; the weights with the highest mAP on that set, yolov4-tiny_best.weights, can then be used directly in the subsequent steps.
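If you want to re-check the mAP of a given weights file yourself, darknet's map subcommand can be used (the backup/ path assumes darknet's default weights directory):

./darknet detector map ./obj.data ./yolov4-tiny.cfg ./backup/yolov4-tiny_best.weights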
2. Convert the model format
IR (Intermediate Representation) is the native format of the OpenVINO toolkit. We convert the Darknet model into a blob via two intermediate formats: Darknet -> TensorFlow -> IR -> blob.
2.1 Convert Darknet to TensorFlow
In this step, we need the obj.names file created in step 1.5 and yolov4-tiny_best.weights.
git clone https://github.com/RenLuXi/tensorflow-yolov4-tiny.git
cd tensorflow-yolov4-tiny
python convert_weights_pb.py --class_names obj.names --weights_file yolov4-tiny_best.weights --tiny
2.2 Convert TensorFlow to IR
Modify the JSON configuration file:
Open the yolo_v4_tiny.json file in the tensorflow-yolov4-tiny directory and change the classes value to your own number of classes. OpenVINO's Model Optimizer needs this file to convert the TensorFlow model.
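For orientation, OpenVINO's YOLO transformation configs follow this general shape; only the classes value needs to match your dataset. The other values shown here are illustrative placeholders, not the exact contents of yolo_v4_tiny.json:

[
    {
        "id": "TFYOLOV4",
        "match_kind": "general",
        "custom_attributes": {
            "classes": 3,
            "coords": 4,
            "num": 6
        }
    }
]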
Copy the JSON configuration file into the Model Optimizer's extensions directory:
cp ./yolo_v4_tiny.json /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf
Enter the OpenVINO Model Optimizer directory:
cd /opt/intel/openvino/deployment_tools/model_optimizer
Conversion command:
python mo.py --input_model yolov4-tiny.pb --transformations_config ./extensions/front/tf/yolo_v4_tiny.json --batch 1 --data_type FP32 --reverse_input_channels
2.3 Convert IR to blob
Initialize the OpenVINO environment, then compile the xml and bin files created above into a blob:
source /opt/intel/openvino_2020.3.194/bin/setupvars.sh
cd /opt/intel/openvino_2020.3.194/deployment_tools/inference_engine/lib/intel64
cp /opt/intel/openvino/deployment_tools/model_optimizer/yolov4-tiny.xml ./
cp /opt/intel/openvino/deployment_tools/model_optimizer/yolov4-tiny.bin ./
/opt/intel/openvino_2020.3.194/deployment_tools/inference_engine/lib/intel64/myriad_compile -m yolov4-tiny.xml -o yolov4-tiny.blob -VPU_MYRIAD_PLATFORM VPU_MYRIAD_2480 -VPU_NUMBER_OF_SHAVES 6 -VPU_NUMBER_OF_CMX_SLICES 6
Part 3. Deploy Model to OpenNCC Edge-AI Camera Module
In this part, the OpenNCC camera module needs to be plugged into the PC for deployment.
Put the xml, bin, and blob files into the OpenNCC Yolo project; a rough sketch follows. For details, please refer to the OpenNCC Yolo project in the GitHub repository.
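A rough sketch of the copy step (the destination path is hypothetical; check the OpenNCC Yolo README for the project's actual model directory):

# <OpenNCC-Yolo> is a placeholder for wherever you cloned the project
cp yolov4-tiny.xml yolov4-tiny.bin yolov4-tiny.blob <OpenNCC-Yolo>/models/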
The final results are as follows: