PaddleDetection FAQ (Issue 1)

Welcome to PaddleDetection. We have collected the problems most frequently encountered when using PaddleDetection into this FAQ (frequently asked questions) document.

Link: https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/FAQ.md

GitHub address: https://github.com/PaddlePaddle/PaddleDetection. Feel free to try it out and star the repository to show your support~

Q: Why does the loss become abnormal (e.g. NaN) when I train with a single GPU?
A: The default learning rate in the configuration files is tuned for multi-GPU training (8 GPUs). If you train on a single GPU, the learning rate must be scaled down accordingly (for example, divided by 8).

Taking faster_rcnn_r50 as an example, the static-graph settings in the table below are all equivalent. The "change nodes" column gives the boundaries of the piecewise decay:

Number of GPUs | batch size per GPU | Learning rate | Maximum iterations | Change nodes
2              | 1                  | 0.0025        | 720000             | [480000, 640000]
4              | 1                  | 0.005         | 360000             | [240000, 320000]
8              | 1                  | 0.01          | 180000             | [120000, 160000]
  • The table above applies to the static graph. In the dynamic graph, training is counted in epochs, so after changing the number of GPUs only the learning rate needs to be modified, following the same rule as in the static graph
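The rule in the table above can be sketched as a small helper. This is an illustrative sketch of the linear scaling rule, not code from the repository; the 8-GPU baseline values are taken directly from the table:

```python
# Illustrative sketch of the linear scaling rule from the table above
# (not PaddleDetection code). Baseline: 8 GPUs, base_lr 0.01,
# 180000 iterations, piecewise-decay boundaries [120000, 160000].
def scale_schedule(gpus, base_gpus=8, base_lr=0.01,
                   base_iters=180000, base_boundaries=(120000, 160000)):
    """Scale the LR up and the iteration schedule down with the GPU count."""
    factor = gpus / base_gpus
    lr = base_lr * factor                    # LR scales with total batch size
    max_iters = round(base_iters / factor)   # iterations scale inversely
    boundaries = [round(b / factor) for b in base_boundaries]
    return lr, max_iters, boundaries

print(scale_schedule(2))  # matches the 2-GPU row: (0.0025, 720000, [480000, 640000])
```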

Q: When using a custom dataset, how should num_classes in the configuration file be set?
A: In the dynamic graph, num_classes can uniformly be set to the number of categories in the custom dataset. In the static graph (under the static directory), the YOLO-series and anchor-free models can likewise set num_classes to the number of custom dataset categories; for other models, such as the RCNN series, SSD, RetinaNet, and SOLOv2, the classifier must in principle distinguish background boxes from foreground boxes, so num_classes must be the number of custom dataset categories + 1, i.e. one background class is added.
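As a sketch (the value 4 is purely illustrative), a custom dataset with 4 categories would be configured like this:

```yaml
# Dynamic graph, or static-graph YOLO / anchor-free models:
num_classes: 4

# Static-graph RCNN series, SSD, RetinaNet, SOLOv2: add one background class
# num_classes: 5
```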

Q: PP-YOLOv2 training runs eval during training, and the first eval is extremely slow (it looks like a hang). How should this be handled?
A: PP-YOLO series models load only the backbone's pretrained weights and train the rest from scratch, so convergence is slow. Before the model has converged well, the predicted boxes are chaotic, and sorting and filtering them in NMS is very time-consuming, which looks like a hang during eval. This typically happens when a custom dataset with few samples is used, so that only a small number of training iterations have run by the first eval and the model has not yet converged well. It can be resolved by checking the following three aspects.

  • The default configurations provided in PaddleDetection are generally 8-GPU training configurations, and batch_size in the configuration file is the batch size per GPU. If you do not train with 8 GPUs, or you modify batch_size, the initial learning_rate must be reduced proportionally to obtain good convergence

  • If a custom dataset with few samples is used, it is recommended to increase snapshot_epoch so that more training epochs run before the first eval, ensuring the model has converged reasonably well

  • If you train on a custom dataset, you can load weights trained on our published COCO or VOC datasets and finetune them to speed up convergence. Specify the pretrained weights with -o pretrain_weights=xxx, where xxx can be a model weight link published in the Model Zoo
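For the second and third points, a configuration sketch (the snapshot_epoch value is illustrative and defaults vary per config; replace xxx with a real weight link from the Model Zoo):

```yaml
# Raise snapshot_epoch so the first in-training eval runs after more epochs
snapshot_epoch: 10

# Pretrained weights can also be set in the config instead of via -o
pretrain_weights: xxx  # replace xxx with a weight link from the Model Zoo
```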

Q: How can I better understand the reader, and customize or modify the reader configuration file?

A: The annotated example below walks through a typical TrainReader configuration:

# Number of reader processes per GPU
worker_num: 2
# Training data
TrainReader:
  inputs_def:
    num_max_boxes: 50
  # Training data transforms
  sample_transforms:
    - Decode: {} # Image decoding: reads the image into an RGB numpy array; required
    - Mixup: {alpha: 1.5, beta: 1.5} # Mixup augmentation: mixes the gt_bbox/gt_score of two samples to build virtual training samples; optional
    - RandomDistort: {} # Random color distortion; optional
    - RandomExpand: {fill_value: [123.675, 116.28, 103.53]} # Randomly expand the canvas and fill with the given values; optional
    - RandomCrop: {} # Random cropping; optional
    - RandomFlip: {} # Random horizontal flip, probability 0.5 by default; optional
  # batch_transforms
  batch_transforms:
    - BatchRandomResize: {target_size: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608], random_size: True, random_interp: True, keep_ratio: False}
    - NormalizeBox: {}
    - PadBox: {num_max_boxes: 50}
    - BboxXYXY2XYWH: {}
    - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
    - Permute: {}
    - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]}
  # batch_size during training
  batch_size: 24
  # Whether to shuffle the data
  shuffle: true
  # Whether to drop the last incomplete batch
  drop_last: true
  # mixup_epoch: set it larger than the maximum epoch to apply mixup augmentation
  # throughout training. The default -1 means mixup is not used. If the line
  # "- Mixup: {alpha: 1.5, beta: 1.5}" is deleted, mixup_epoch must also be
  # set to -1 or deleted
  mixup_epoch: 25000
  # Whether to speed up data loading via shared memory; requires the shared
  # memory size (e.g. /dev/shm) to be larger than 1 GB
  use_shared_memory: true

  For single-scale training, remove the BatchRandomResize line from batch_transforms and append this line to sample_transforms: - Resize: {target_size: [608, 608], keep_ratio: False, interp: 2}

  Decode must be retained. To remove data augmentation, comment out or delete Mixup, RandomDistort, RandomExpand, RandomCrop, and RandomFlip. Note that if you comment out or delete Mixup, you must also comment out or delete the mixup_epoch line, or set it to -1 to disable mixup:
  sample_transforms:
    - Decode: {}
    - Resize: {target_size: [608, 608], keep_ratio: False, interp: 2}

Q: How can users control which categories are output? That is, when there are targets of multiple classes in the image, output only some of them.

A: Users can modify the code to add a filtering condition at the location linked below.

# filter by class_id
keep_class_id = [1, 2]
bbox_res = [e for e in bbox_res if int(e[0]) in keep_class_id]

https://github.com/PaddlePaddle/PaddleDetection/blob/b87a1ea86fa18ce69e44a17ad1b49c1326f19ff9/ppdet/engine/trainer.py#L438
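A minimal runnable sketch of the filter above, assuming each entry of bbox_res follows the [class_id, score, x1, y1, x2, y2] layout used by the linked trainer code (the sample detections are made-up values):

```python
# Each detection: [class_id, score, x1, y1, x2, y2]
# (layout assumed from the trainer.py link above; values are made up)
bbox_res = [
    [0.0, 0.90, 10, 10, 50, 50],  # class 0: filtered out
    [1.0, 0.80, 20, 20, 60, 60],  # class 1: kept
    [2.0, 0.70, 30, 30, 70, 70],  # class 2: kept
]

# filter by class_id
keep_class_id = [1, 2]
bbox_res = [e for e in bbox_res if int(e[0]) in keep_class_id]

print([int(e[0]) for e in bbox_res])  # [1, 2]
```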

Q: When training on a custom dataset, the predicted labels are wrong

A: This happens when the user sets the dataset paths but overlooks anno_path in TestDataset. You need to set anno_path to your own annotation file path.

TestDataset:
  !ImageFolder
    anno_path: annotations/instances_val2017.json


Added by IgglePiggle on Mon, 24 Jan 2022 10:13:38 +0200