HALCON shape matching

1. A first example

Specify the template image region (you can also draw the rectangle interactively with draw_rectangle1):

Row1 := 188
Column1 := 182
Row2 := 298
Column2 := 412
gen_rectangle1 (ROI, Row1, Column1, Row2, Column2)
reduce_domain (ModelImage, ROI, ImageROI)

Create the template.
If not only the orientation but also the scale of the object varies, use create_scaled_shape_model or create_aniso_shape_model instead.

Find parameters suitable for model creation (the main parameters are the number of pyramid levels and the minimum contrast); inspect_shape_model returns the template point sets of the pyramid levels:

inspect_shape_model (ImageROI, ShapeModelImages, ShapeModelRegions, 8, 30)

Specify the allowed rotation range of the template and the step size of the rotation angle. Restricting the angle range to what is actually needed reduces memory consumption and speeds up the search, which is especially effective for large templates. The operator returns a ModelID:

create_shape_model (ImageROI, NumLevels, 0, rad(360), 'auto', 'none', \
'use_polarity', 30, 10, ModelID)

Find the model in the search images:

for i := 1 to 20 by 1
    grab_image (SearchImage, FGHandle)
    find_shape_model (SearchImage, ModelID, 0, rad(360), 0.7, 1, 0.5, \
    'least_squares', 0, 0.7, RowCheck, ColumnCheck, \
    AngleCheck, Score)
endfor

AngleStart, AngleExtent, and NumLevels were already fixed when the template was created. MinScore specifies the minimum score a match must reach to be accepted; 0.5 means that at least half of the template must be found.

2. Creating a suitable template
2.1. A region of interest enclosing the object
The first step is to define the ROI:

reduce_domain (ModelImage, ROI, ImageROI)
create_shape_model (ImageROI, 0, 0, rad(360), 0, 'none', 'use_polarity', 30, \
10, ModelID)

2.1.1 How to create a region

gen_rectangle2
gen_ellipse
gen_region_polygon_filled (coordinates must be passed as input parameters)
draw_rectangle1 (WindowHandle, ROIRow1, ROIColumn1, ROIRow2, ROIColumn2)
gen_rectangle1 (ROI, ROIRow1, ROIColumn1, ROIRow2, ROIColumn2)
 

2.1.2 How to combine and subtract regions

union2 and difference are used:

draw_rectangle1 (WindowHandle, ROI1Row1, ROI1Column1, ROI1Row2, ROI1Column2)
gen_rectangle1 (ROI1, ROI1Row1, ROI1Column1, ROI1Row2, ROI1Column2)
draw_rectangle1 (WindowHandle, ROI2Row1, ROI2Column1, ROI2Row2, ROI2Column2)
gen_rectangle1 (ROI2, ROI2Row1, ROI2Column1, ROI2Row2, ROI2Column2)
union2 (ROI1, ROI2, ROI)
draw_circle (WindowHandle, ROI1Row, ROI1Column, ROI1Radius)
gen_circle (ROI1, ROI1Row, ROI1Column, ROI1Radius)
gen_circle (ROI2, ROI1Row, ROI1Column, ROI1Radius-8)
difference (ROI1, ROI2, ROI)

To avoid the region disappearing at the top of the pyramid, the minimum extent of the ROI should be 2^{NumLevels-1} pixels.
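A quick sanity check is sketched below (the variable names and the NumLevels value are illustrative, not part of the original example):

* Check that the ROI survives NumLevels-1 halvings
* (for NumLevels := 4 the minimum extent is 2^(4-1) = 8 pixels)
NumLevels := 4
MinSize := 8
smallest_rectangle1 (ROI, Row1, Column1, Row2, Column2)
Extent := min2(Row2 - Row1, Column2 - Column1)
* if Extent < MinSize, the region may vanish on the top pyramid level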

2.1.3 Creating and modifying regions using image processing

Example 1: Creating an ROI using blob analysis

a. Extract bright regions

*Threshold segmentation

threshold (ModelImage, BrightRegions, 200, 255)

*Region split

connection (BrightRegions, ConnectedRegions)

*Fill up holes

fill_up (ConnectedRegions, FilledRegions)

b. Select the region of interest

select_shape (FilledRegions, Card, 'area', 'and', 1800, 1900)

You can use the HDevelop dialog Visualization > Region Info to find an appropriate feature for filtering.

c. Use the ROI

reduce_domain (ModelImage, Card, ImageCard)

Only the image content inside the selected region is processed, which reduces unnecessary interference and simplifies the subsequent steps.

d. Extract logo

threshold (ImageCard, DarkRegions, 0, 230)
connection (DarkRegions, ConnectedRegions)
select_shape (ConnectedRegions, Characters, 'area', 'and', 150, 450)
union1 (Characters, CharacterRegion)

As in the previous region extraction, the character regions are first extracted and then merged with union1 into a single region.

e. Enlarge the region using morphology

dilation_circle (CharacterRegion, ROI, 1.5)
reduce_domain (ModelImage, ROI, ImageROI)
create_shape_model (ImageROI, 3, 0, rad(360), 'auto', 'none', \
'use_polarity', 30, 10, ModelID)

Finally, the shape model is created.

Example 2: Further processing of the inspect_shape_model result

First create a template region interactively, then process this region to obtain the ROI.

a. Select arrow

gen_rectangle1 (ROI, 361, 131, 406, 171)

The ROI is created manually here.

b. Create a first template region

reduce_domain (ModelImage, ROI, ImageROI)

*Pyramid layers, contrast

inspect_shape_model (ImageROI, ShapeModelImage, ShapeModelRegion, 1, 30)

A more complete model region is obtained by experimenting with different contrast values.

c. Process template area

*Fill inside arrow

fill_up (ShapeModelRegion, FilledModelRegion)

*Remove small areas

opening_circle (FilledModelRegion, ROI, 3.5)

d. Create final template

reduce_domain (ModelImage, ROI, ImageROI)
create_shape_model (ImageROI, 3, 0, rad(360), 'auto', 'none', \
'use_polarity', 30, 15, ModelID)

A shape template is created.

2.1.4 How the ROI influences the search

The center point of the ROI serves as the reference point for the estimated position, rotation, and scale. After creating the model, you can query the reference point with get_shape_model_origin and modify it with set_shape_model_origin.
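For example (a sketch; the offset values are purely illustrative):

* Query the reference point (coordinates relative to the ROI center)
get_shape_model_origin (ModelID, RefRow, RefColumn)
* Move the reference point, e.g., 20 pixels up and 10 pixels to the left
set_shape_model_origin (ModelID, -20, -10)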

2.2 Which information is stored in the template
Contours are extracted from the points whose contrast exceeds the given threshold; the contrast can also be specified as a hysteresis threshold range, and contours below a minimum size can be removed.

2.2.1 Which pixels become part of the template

Pixels whose contrast (gray-value difference to neighboring pixels) exceeds the threshold parameter Contrast of create_shape_model become part of the template. To obtain a suitable template, the contrast should be chosen so that the significant pixels of the object of interest are included, while clutter (pixels that do not belong to the object) is excluded.

In some cases a single contrast threshold is not sufficient, e.g., when it cannot remove the clutter without also removing part of the object. To solve this, Contrast offers two extensions: a hysteresis threshold and a minimum size of the selected contours. Both are specified by passing a tuple of values instead of a single value.

A hysteresis threshold (compare the operator hysteresis_threshold) uses two thresholds, a lower and an upper one. Points with a contrast above the upper threshold are selected directly; points with a contrast between the two thresholds are selected only if they are connected to points above the upper threshold. In effect, the contrast may vary from pixel to pixel:

inspect_shape_model (ImageROI, ModelImages, ModelRegions, 1, [26,52])

The second extension is the minimum contour size (number of pixels), specified as the third element of the tuple. If you do not want a hysteresis threshold, simply set the lower and upper thresholds to the same value:

inspect_shape_model (ImageROI, ModelImages, ModelRegions, 1, [26,26,12])

determine_shape_model_params can automatically ('auto') find appropriate parameters such as the contrast.
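A possible call is sketched below; the parameter order follows my reading of the operator reference, so verify it before use, and 'all' simply requests all determinable parameters:

* Let HALCON suggest parameters (e.g., the contrast) for the given ROI
determine_shape_model_params (ImageROI, 'auto', 0, rad(360), 1, 1, 'auto', 'use_polarity', 'auto', 'auto', 'all', ParameterName, ParameterValue)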

2.2.2 How to speed up the search using subsampling

To speed up the search, an image pyramid is created, both for the template image and for the search images. The pyramid consists of the original, full-sized image and a series of downsampled images (width and height are halved from level to level). The smallest image lies at the top of the pyramid, and the search is started there; each result restricts the area to be searched on the next lower level, and this is repeated down to the bottom of the pyramid. This coarse-to-fine strategy is fast and effective.

NumLevels specifies the number of pyramid levels. It is recommended to choose it such that the model on the top level still consists of at least 10-15 pixels and the shape of the object is still recognizable. You can check this with inspect_shape_model:

* HDevelop program examples\solution_guide\shape_matching\first_example_shape_matching.dev
inspect_shape_model (ImageROI, ShapeModelImages, ShapeModelRegions, 8, 30)
area_center (ShapeModelRegions, AreaModelRegions, RowModelRegions, ColumnModelRegions)
count_obj (ShapeModelRegions, HeightPyramid)
for i := 1 to HeightPyramid by 1
    if (AreaModelRegions[i-1] >= 15)
        NumLevels := i
    endif
endfor
create_shape_model (ImageROI, NumLevels, 0, rad(360), 'auto', 'none', 'use_polarity', 30, 10, ModelID)

The number of pyramid levels is chosen as the highest level on which the model region still contains at least 15 points.

If you let HALCON select the number of pyramid levels automatically by passing 'auto' for NumLevels, you can query the chosen value afterwards with get_shape_model_params.
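For example (a sketch reusing the parameter list that also appears in section 5.3):

create_shape_model (ImageROI, 'auto', 0, rad(360), 'auto', 'none', 'use_polarity', 30, 10, ModelID)
* NumLevels now contains the automatically chosen number of pyramid levels
get_shape_model_params (ModelID, NumLevels, AngleStart, AngleExtent, AngleStep, ScaleMin, ScaleMax, ScaleStep, Metric, MinContrast)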

inspect_shape_model returns the image pyramid in the form of an image tuple. A single pyramid image can be accessed with select_obj; note that object indices start at 1, whereas the indices of control parameter tuples start at 0.
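For instance (a sketch; level 3 is an arbitrary choice):

* Access pyramid level 3 (object indices start at 1)
select_obj (ShapeModelImages, ShapeModelImageLevel3, 3)
select_obj (ShapeModelRegions, ShapeModelRegionLevel3, 3)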

The number of template points can be reduced via the parameter Optimization, which speeds up the matching, especially for large templates; it is recommended to use 'auto' so that HALCON selects suitable values automatically. Note that inspect_shape_model always displays all points that fulfill the contrast criterion, so there is no way to inspect which points remain in the reduced template.

A second value of Optimization specifies whether the template is pregenerated for all allowed rotations and scales; by default it is not. The default can be changed with set_system('pregenerate_shape_models', ...) or overridden per template by passing 'pregeneration' or 'no_pregeneration' as the second value of Optimization. If you want to search in multiple threads, pregeneration is appropriate. Note that for large rotation or scale ranges, a pregenerated template needs considerably more memory and takes longer to create.

2.2.3 Which rotation range is allowed

If the rotation of the object in the search image is arbitrary, the allowed range is specified via AngleExtent, starting at AngleStart; the unit is radians. The rotation refers to the template image: if AngleStart is 0, the orientation of the template coincides with the orientation of the object in the search image. If, for example, rotations of +/- 5° are allowed, AngleStart should be -rad(5) and AngleExtent rad(10).

It is recommended to restrict the rotation range as far as possible to improve the search speed. For a pregenerated template, a large range also increases the memory consumption. The range can be restricted further in find_shape_model, so if you want to reuse a template for different tasks, create it with a wide range and search within a smaller one.

If the object is almost symmetric, the rotation range must be restricted, otherwise the search will find multiple, equally good matches on the same object at different angles: the range must be smaller than 90° for cross-shaped or square objects, smaller than 180° for rectangular objects, and 0° for circular objects. A restriction for a square object is sketched below.
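For a square object, the restriction could look like this (a sketch; the remaining parameters are taken from the earlier examples):

* Restrict the rotation range to 90 degrees for a 90-degree-symmetric object
create_shape_model (ImageROI, 'auto', -rad(45), rad(90), 'auto', 'none', 'use_polarity', 30, 10, ModelID)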

AngleStep can be set to 'auto', letting HALCON select the optimal step size \phi_{opt}, i.e., the smallest rotation that is still discernible. A rotation is considered discernible if points of the object at the maximum distance l from the center move by at least d = 2 pixels, so the optimal angle step is computed as

\Delta\phi=\arccos\left(1-\frac{d^2}{2l^2}\right)\Rightarrow\phi_{opt}=\arccos\left(1-\frac{2}{l^2}\right)

where l is the maximum distance between the center and the boundary of the object and d = 2 pixels. For some templates this estimate is still too coarse, so the value is automatically divided by 2.

The automatically determined angle step is suitable in most cases; it is therefore recommended to use 'auto' and, if needed, query the chosen value with get_shape_model_params. Selecting a higher value speeds up the search, but at the cost of a less accurate estimated orientation; for values that are much too high, matching may fail.

AngleStep should not deviate too much from the optimal value (\frac{1}{3}\phi_{opt}\leqslant\Delta\phi\leqslant 3\phi_{opt}). Note that even a very small angle step does not significantly improve the accuracy of the estimated angle.

2.2.4 Which scale range is allowed

Similar to the rotation range, a scale range can be specified; the scale can vary in two ways:

Scaling by the same factor in row and column direction (isotropic scaling)
Scaling by different factors in row and column direction (anisotropic scaling)
For isotropic scaling, simply set ScaleMin, ScaleMax, and ScaleStep in create_scaled_shape_model. For anisotropic scaling, use create_aniso_shape_model instead, which takes six scale parameters (minimum, maximum, and step for each direction), as sketched below.
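A call with anisotropic scaling might look like this (a sketch; the scale ranges are illustrative):

* Separate scale ranges: 0.8-1.2 in row direction, 0.9-1.1 in column direction
create_aniso_shape_model (ImageROI, 'auto', 0, rad(360), 'auto', 0.8, 1.2, 'auto', 0.9, 1.1, 'auto', 'none', 'use_polarity', 30, 10, ModelID)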

As with the rotation range, restricting the scale range speeds up the search, and for a pregenerated model a large scale range also requires a lot of memory. The range can likewise be restricted further in find_scaled_shape_model or find_aniso_shape_model.

If you search for the object over a large range of scales, the template should be created at the large scale, because HALCON cannot invent template points, in particular when a precomputed model instance is larger than the original image. In other words, NumLevels should be chosen such that the highest pyramid level still contains enough template points at the smallest scale.

If 'auto' is selected for ScaleStep (analogously for anisotropic scaling), HALCON automatically selects the optimal step size \Delta s_{opt}, i.e., the smallest scale change that is still discernible. As for the angle step, a scale change is considered discernible if points of the object at the maximum distance l from the center (the origin) move by at least d = 2 pixels:

\Delta s=\frac{d}{l}\Rightarrow\Delta s_{opt}=\frac{2}{l}

Here, l is the maximum distance between the center and the boundary of the object and d = 2 pixels. For some templates this estimate is still too coarse, so the value is automatically divided by 2.

The automatically determined scale step is suitable in most cases; it is therefore recommended to use 'auto' and, if needed, query the chosen value with get_shape_model_params. Selecting a higher value speeds up the search, but at the cost of a less accurate estimated scale; for values that are much too high, matching may fail.

ScaleStep should not deviate too much from the optimal value (\frac{1}{3}\Delta s_{opt}\leqslant\Delta s\leqslant 3\Delta s_{opt}). Note that even a very small scale step does not significantly improve the accuracy of the estimated scale.

2.2.5 Which pixels are compared with the template

For efficiency, the model also stores which pixels of the search image are worth comparing at all: the parameter MinContrast specifies the minimum contrast a pixel in the search image must have to be compared with the template. Its main purpose is to exclude noise, i.e., small gray-value fluctuations, from the matching. A suitable value can be determined by examining the noise with the HDevelop dialog Visualization > Pixel Info and setting MinContrast somewhat above the noise level; alternatively, 'auto' lets HALCON set MinContrast automatically.

The parameter Metric specifies whether the polarity, i.e., the direction of the contrast, must be observed. With 'use_polarity', the contrast direction of the match must agree with that of the template: if the template is a bright object on a dark background, the object is found in the search image only if it is also brighter than its background.

With 'ignore_global_polarity', the global polarity is ignored: an object is also found if its contrast is reversed relative to the template, i.e., a dark object on a bright background is detected as well, and vice versa. This added flexibility costs only slightly more matching time.

With 'ignore_local_polarity', objects are found even if the contrast changes locally; this mode is useful, e.g., for multi-channel (color) images, see examples\hdevelop\Applications\FA\matching_multi_channel_yogurt.dev.

With set_shape_model_metric, the polarity of the model edges can be determined from a training image; the matching metric is then set automatically to 'use_polarity' or 'ignore_global_polarity'. An example is in

examples\hdevelop\Matching\Shape-Based\
create_shape_model_xld.dev.

2.3 Using a synthetic model image
If the template object is difficult to extract from a real image, a synthetic model image can be used instead; see the example

examples\solution_guide\shape_matching\synthetic_circle.dev

a. Create an XLD contour

RadiusCircle := 43
SizeSynthImage := 2*RadiusCircle + 10
gen_ellipse_contour_xld (Circle, SizeSynthImage / 2, SizeSynthImage / 2, 0, RadiusCircle, RadiusCircle, 0, 6.28318, 'positive', 1.5)

gen_ellipse_contour_xld generates a circular XLD contour with the desired radius. The synthetic image should be somewhat larger than the contour, because the pixels in the neighborhood of the contour are also used when the image pyramid is created.

b. Create an image and insert the XLD contour

gen_image_const (EmptyImage, 'byte', SizeSynthImage, SizeSynthImage)
paint_xld (Circle, EmptyImage, SyntheticModelImage, 128)

The empty image is created with gen_image_const, and the XLD contour is painted into it with paint_xld.

c. Create a template

create_scaled_shape_model (SyntheticModelImage, 'auto', 0, 0, 0.01, 0.8,1.2, 'auto', 'none', 'use_polarity', 30, 10, ModelID)

Use the composite image to create a shape template.

2.4. Creating a template from an XLD contour or a DXF file
An XLD contour can be used directly as the template: the operators create_shape_model, create_scaled_shape_model, and create_aniso_shape_model are simply replaced by their XLD counterparts create_shape_model_xld, create_scaled_shape_model_xld, and create_aniso_shape_model_xld.

In addition, a template can be created from a DXF file: first extract the XLD contours from the DXF file with read_contour_xld_dxf (you can follow the example examples\hdevelop\Applications\FA\pm_multiple_dxf_models.dev); the extracted XLD contours can then be used directly, as sketched below.
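Sketched with an illustrative file name (the DXF file and the chosen parameter values are assumptions, not part of the original example):

* Read the outline from a DXF file and build the model directly from the XLD
read_contour_xld_dxf (Contours, 'part_outline.dxf', [], [], DxfStatus)
* XLD contours carry no polarity information, hence 'ignore_local_polarity'
create_shape_model_xld (Contours, 'auto', 0, rad(360), 'auto', 'auto', 'ignore_local_polarity', 5, ModelID)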

3. Optimizing the search

The search is performed with find_shape_model, find_scaled_shape_model,
find_aniso_shape_model, find_shape_models, find_scaled_shape_models, or
find_aniso_shape_models.

3.1 Restricting the search space
3.1.1 Searching inside a region of interest

The search of find_shape_model can be restricted to an ROI:

a. Create ROI

Row1 := 141
Column1 := 159
Row2 := 360
Column2 := 477
gen_rectangle1 (SearchROI, Row1, Column1, Row2, Column2)

b. Restrict the search to the ROI

for i := 1 to 20 by 1
    grab_image (SearchImage, FGHandle)
    reduce_domain (SearchImage, SearchROI, SearchImageROI)
    find_shape_model (SearchImageROI, ModelID, 0, rad(360), 0.7, 1, 0.5,'interpolation', 0, 0.7, RowCheck, ColumnCheck, AngleCheck, Score)
endfor

3.1.2 Restricting the rotation and scale range

The ranges specified when creating the template with create_shape_model, create_scaled_shape_model, or create_aniso_shape_model can be restricted further when searching with find_shape_model, find_scaled_shape_model, or find_aniso_shape_model via the parameters AngleStart, AngleExtent, ScaleMin, and ScaleMax. Note that matches slightly outside the restricted range may sometimes still be found, as sketched below.
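For instance, if the model was created with the full rotation range, an individual search can still be restricted (a sketch; the values are illustrative):

* Model created with rad(360); this search allows only -10 to +10 degrees
find_shape_model (SearchImage, ModelID, -rad(10), rad(20), 0.7, 1, 0.5, 'least_squares', 0, 0.9, RowCheck, ColumnCheck, AngleCheck, Score)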

3.1.3 Visibility

The template object must be sufficiently visible: MinScore specifies the fraction of template points that must be matched. Reasons for a low score include the following:

The object is clipped at the image border. In this case, set set_system('border_shape_models','true'); see examples\hdevelop\Applications\FA\matching_image_border.dev. Note that the search time increases in this mode.
The contour contrast in the search image falls below the MinContrast specified when the template was created.
The global or local polarity of the contrast has changed.
The object itself has changed, e.g., because of a changed camera angle: part of the contour is still visible but appears at a wrong position and no longer fits the model. If the increased tolerance mode is activated by passing a negative value as the lowest pyramid level in NumLevels, deformed or defocused objects may still be found; the match from the lowest pyramid level that still yielded a match is returned.
An angle step that is too large can also lead to low scores; the same holds for the scale step.
Another reason lies in the use of the pyramid: while a candidate is tracked down the pyramid, it must reach the specified MinScore on every level, but the scores on the different levels can vary. Only the score on the lowest level is returned in Score, so MinScore may have to be chosen lower than the finally returned Score.
The higher MinScore, the faster the search.
3.1.4 Thorough versus fast matching

The parameter Greediness affects the thoroughness and the speed of the search algorithm. With the value 0, the search is thorough: if the object is present, it will be found; however, even candidates that do not belong to the template are examined completely, which slows the search down.

The main idea behind the greedy search is to abort the comparison of a candidate with the model as soon as it seems unlikely to reach the minimum score; in other words, the goal is not to waste time on hopeless candidates. This greediness, however, can have unwanted consequences: in some cases a fully visible object is not found because the comparison started off in a misleading direction, so the candidate was classified as hopeless and aborted.

You can adjust the Greediness of the search, i.e., how early comparisons are aborted, by selecting a value between 0 (no abort: thorough but slow) and 1 (earliest abort: fast but unsafe). Note that Greediness and MinScore interact: with a greedier search, you may have to specify a lower MinScore. Often, a higher speed is achieved with a high Greediness and a sufficiently lowered MinScore, as sketched below.
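A typical combination is sketched below (the values are illustrative):

* High Greediness (0.9) combined with a lowered MinScore (0.6)
find_shape_model (SearchImage, ModelID, 0, rad(360), 0.6, 1, 0.5, 'least_squares', 0, 0.9, RowCheck, ColumnCheck, AngleCheck, Score)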

3.2 Setting shape model parameters afterwards
Most parameters are fixed when the shape model is created. With set_shape_model_param, two parameters can still be changed afterwards: 'min_contrast' and 'timeout'. Changing 'min_contrast' is useful, e.g., when the contrast in some search images is very low. 'timeout' can only be set with set_shape_model_param, not during model creation; it specifies the maximum period after which the search is aborted, i.e., it lets you interrupt a search that takes too long. See the example examples\hdevelop\Matching\Shape-Based\extended_contrast.dev.
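For example (a sketch; the values are illustrative):

* Lower the minimum contrast for low-contrast search images
set_shape_model_param (ModelID, 'min_contrast', 5)
* Abort any search on this model after at most 50 ms
set_shape_model_param (ModelID, 'timeout', 50)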

3.3 Finding multiple instances of an object
NumMatches sets the number of objects to find; find_shape_model (or find_scaled_shape_model or find_aniso_shape_model) then returns tuples of matching results in Row, Column, Angle, Scale, and Score. If NumMatches is set to 0, all matches are returned. Note that searching for multiple objects is slower than searching for a single one.

MaxOverlap specifies the fraction by which two matches may overlap and still be counted as separate objects.
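For example (a sketch; the values are illustrative):

* Return up to 5 instances that mutually overlap by at most 20%
find_shape_model (SearchImage, ModelID, 0, rad(360), 0.7, 5, 0.2, 'least_squares', 0, 0.9, Rows, Columns, Angles, Scores)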

3.4 Searching for multiple templates simultaneously
If you are looking for instances of multiple templates in the same image, you can call find_shape_model (or find_scaled_shape_model or find_aniso_shape_model) several times; using find_shape_models, find_scaled_shape_models, or find_aniso_shape_models instead is faster. These operators behave like their single-model counterparts, with a few differences:

ModelIDs takes a tuple of template IDs instead of a single one. As when finding multiple instances, the result parameters such as Row return tuples of values.
The output parameter Model indicates, for each found instance, the template it belongs to; it does not return the model ID itself but the index of the model ID within the tuple ModelIDs.
The search is performed in a single image, but the search area can be restricted individually for each model by passing an image tuple.
You can specify a single value for a parameter such as AngleStart, which is then used for all templates, or one value per template, stored in a tuple.
You can find multiple instances of multiple templates by specifying NumMatches as a single value or as a tuple; 0 stands for all instances. For example, [3,0] returns the three best matches of the first model and all matches of the second.
Similarly, if a single value is specified for MaxOverlap, each found instance is checked against all other instances; with a tuple of values, each instance is checked only against the other instances of the same model.
Example: examples\solution_guide\shape_matching\multiple_models.dev uses find_scaled_shape_models to search for multiple models at the same time.

a. Step 1: create multiple templates

create_scaled_shape_model (ImageROIRing, 'auto', -rad(22.5), rad(45), 'auto', 0.8, 1.2, 'auto', 'none', 'use_polarity', 60, 10, ModelIDRing)
create_scaled_shape_model (ImageROINut, 'auto', -rad(30), rad(60), 'auto',0.6, 1.4, 'auto', 'none', 'use_polarity', 60, 10, ModelIDNut)
ModelIDs := [ModelIDRing, ModelIDNut]

b. Specify an individual search ROI for each template

gen_rectangle1 (SearchROIRing, 110, 10, 130, Width - 10)
gen_rectangle1 (SearchROINut, 315, 10, 335, Width - 10)
concat_obj (SearchROIRing, SearchROINut, SearchROIs)
add_channels (SearchROIs, SearchImage, SearchImageReduced)

Because the instances of the two objects do not overlap, each template can be searched in its own ROI. The vertical position of the objects varies within a certain range, so long, narrow horizontal ROIs are used.

concat_obj combines the two regions into a region array.

add_channels attaches the search image to the regions. The result of the operation is an image array containing two images that share the same image matrix; the domain of the first image is restricted to the first ROI and the domain of the second image to the second ROI.

c. Find all instances of both templates.

find_scaled_shape_models (SearchImageReduced, ModelIDs, [-rad(22.5), -rad(30)], [rad(45), rad(60)], [0.8, 0.6], [1.2, 1.4], 0.7, 0, 0, 'least_squares', 0, 0.8, RowCheck, ColumnCheck, AngleCheck, ScaleCheck, Score, ModelIndex)

find_scaled_shape_models is applied to the image array; the two templates are searched with their individual rotation and scale ranges.

3.5 Increasing the accuracy
During matching, candidates are compared with the model at discrete positions, rotations, and scales, and a score is computed for each. If SubPixel is set to 'none', the result parameters Row, Column, Angle, and Scale contain the values of the best discrete match: the position is then accurate to 1 pixel, while orientation and scale are accurate to the chosen AngleStep and ScaleStep. If the distance between the center point and the boundary is 100 pixels, the accuracy of the orientation is \approx\frac{1}{10}°.

Because interpolation is a fast operation, SubPixel should be set to 'interpolation'.

If SubPixel is set to 'least_squares', 'least_squares_high', or 'least_squares_very_high', a least-squares adjustment is used in addition to the interpolation. The accuracy is higher, but the computation time increases as well.

Note that when set_shape_model_origin is used, the returned position is less accurate. The reason is that internally the matching still determines the pose of the original reference point and only then shifts it to the moved reference point. The position error depends on several factors, e.g., the offset of the reference point and the orientation of the found object, and it grows linearly with the distance between the moved reference point and the original one.

An inaccurately estimated scale likewise introduces an error in the returned position, which again grows linearly with the distance between the moved reference point and the original one.

To get the maximum accuracy when the reference point is moved, the position should be determined with least-squares adjustment. Note that moving the reference point does not affect the accuracy of the estimated orientation and scale.

3.6. Optimizing the matching speed
The general approach is to first ensure that all objects are found and then tune the parameters for speed, using a set of representative test images that cover the expected variations in position, orientation, occlusion, illumination, etc. If enough memory is available, test with a template that covers the full scale and rotation range.

3.6.1 Step 1: Ensure that all objects are found

Search for all object instances. If some instances are not found with the default parameters, work through the following checks:

a. Is the object clipped at the image border?

set_system('border_shape_models','true')

b. Is the search too greedy?

Set Greediness to 0 and search again.

c. Is the object partially occluded?

Reduce MinScore.

d. Does the matching fail on the top pyramid level?

If MinScore is not reached on the highest pyramid level, alternately reduce NumLevels and MinScore in find_shape_model.

e. Is the contrast of the object very low?

Reduce the MinContrast parameter of create_shape_model.

f. Has the polarity of the contrast changed globally or locally?

If only a small part of the object is affected, it may be enough to reduce MinScore; otherwise use a different Metric.

g. Do two of the objects to be detected overlap?

Increase MaxOverlap.

h. Are multiple matches found on the same object?

If the object is almost symmetric, restrict the rotation range or reduce MaxOverlap.

3.6.2 Step 2: Tune the speed-related parameters

Increase MinScore as far as the matching still succeeds.
Increase Greediness until the matching fails, then try to reduce MinScore; if this fails, restore the previous values.
Restrict the rotation and scale ranges as far as possible in find_shape_model, find_scaled_shape_model, or find_aniso_shape_model.
Restrict the search to a region of interest.
Increase MinContrast as long as the matching still succeeds.
If you search for a particularly large object, it sometimes helps to select a stronger point reduction with the Optimization parameter.
Increase AngleStep (and ScaleStep) as far as the matching still succeeds.
4. Using the matching results
find_shape_model and find_scaled_shape_model (and find_aniso_shape_model) return the following results:

Return location information through Row and Column
Return direction information through Angle
Return Scale information through Scale
Return matching Score through Score
The matching score Score reflects the similarity between the template and the matched object. Based on the returned pose, other data can be corrected or aligned via affine transformations.

4.1 Affine transformations
'Affine transformation' is the mathematical term for a transformation composed of translation, rotation, and scaling. Rotation and scaling require a special point, the fixed point or point of reference: these transformations operate around this point, which remains unchanged by them.

4.2. Creating and applying affine transformations
Regions, images, and XLD contours can be affinely transformed in HALCON with affine_trans_region, affine_trans_image, and affine_trans_contour_xld, respectively.

affine_trans_region (IC, TransformedIC, ScalingRotationTranslation, 'false')

ScalingRotationTranslation is a so-called homogeneous transformation matrix that describes the desired transformation. The matrix can be created in a few steps:

a. Create an identity matrix

hom_mat2d_identity(EmptyTransformation)

b. Scale around the center point

hom_mat2d_scale(EmptyTransformation, 0.5, 0.5, RowCenterIC, ColumnCenterIC, Scaling)

c. Similarly, add rotation and translation

hom_mat2d_rotate(Scaling, rad(90), RowCenterIC, ColumnCenterIC, ScalingRotation)
hom_mat2d_translate(ScalingRotation, 100, 200, ScalingRotationTranslation)

Note that these operators use x, y coordinates, not row and column coordinates.

Transformation matrices can sometimes be determined by a kind of reverse engineering: if the transformation of some points is known, the matrix can be derived from it. For example, if the position of the center point before and after the transformation is known, together with the rotation angle, the corresponding matrix is obtained with vector_angle_to_rigid:

vector_angle_to_rigid(RowCenterIC, ColumnCenterIC, 0, TransformedRowCenterIC, TransformedColumnCenterIC, rad(90), RotationTranslation)

The resulting matrix is then used to transform the region:

affine_trans_region(IC, TransformedIC, RotationTranslation, 'false')

4.3. Using the estimated position and orientation
The matching results can be used to:
Display the found instances
Align ROIs for other inspection tasks
Transform the detected object so that it is aligned with the template
Guide a robot (manipulator)
Note that the position and orientation are contained in the parameters Row, Column, and Angle. The returned position is the position of the reference point; in most cases it cannot be used directly, so an affine transformation matrix is constructed from it to accomplish the tasks above.

In the template image, the rotation angle of the template object is 0 by definition, even if it appears rotated.

4.3.1 Displaying the matched objects

In some applications the template should be displayed directly on top of the matched object; this is easy to implement. See the HDevelop program examples\solution_guide\shape_matching\first_example_shape_matching.dev

a. Access the XLD contour of the template

create_shape_model(ImageROI, NumLevels, 0, rad(360), 'auto', 'none', 'use_polarity', 30, 10, ModelID)
get_shape_model_contours(ShapeModel, ModelID, 1)

Next, the template is displayed at the extracted position and orientation. The most efficient way is to use the XLD version of the template, obtained with get_shape_model_contours after creating the model, because XLD contours can be transformed more accurately and quickly than regions or images. Note that the returned XLD contour is located at the origin of the image (upper left corner), not at the position of the template in the template image.

b. Determine affine transformation

find_shape_model(SearchImage, ModelID, 0, rad(360), 0.7, 1, 0.5, 'least_squares', 0, 0.7, RowCheck, ColumnCheck, AngleCheck, Score)
if (|Score| = 1)
    vector_angle_to_rigid(0, 0, 0, RowCheck, ColumnCheck, AngleCheck, MovementOfObject)
endif

After find_shape_model, the returned result is checked: if the matching failed, Score is an empty tuple; if it succeeded, the affine transformation matrix is created from the returned position and orientation with vector_angle_to_rigid. The first parameters specify the point and angle before the transformation; since the XLD contour of the model is located at the image origin, (0, 0) and angle 0 are used here.

c. Transform XLD

affine_trans_contour_xld(ShapeModel, ModelAtNewPosition, MovementOfObject)
dev_display(ModelAtNewPosition)

affine_trans_contour_xld performs the affine transformation of the XLD, and dev_display displays the result.

4.3.2 Processing multiple matches

If multiple instances of the object are detected, the result parameters Row, Column, Angle, and Score are tuples. Example:

The HDevelop program examples\solution_guide\shape_matching\multiple_objects.dev

a. Determine the affine transformation matrix

find_shape_model(SearchImage, ModelID, 0, rad(360), 0.6, 0, 0.55, 'least_squares', 0, 0.8, RowCheck, ColumnCheck, AngleCheck, Score)
for j:= 0 to |Score| - 1 by 1
    vector_angle_to_rigid(0, 0, 0, RowCheck[j], ColumnCheck[j], AngleCheck[j], MovementOfObject)
    affine_trans_contour_xld(ShapeModel, ModelAtNewPosition, MovementOfObject)

The transformation is determined as before, except that the earlier if on Score is replaced by a for loop over all matches.

b. Apply the transformation

affine_trans_pixel(MovementOfObject, -120, 0, RowArrowHead, ColumnArrowHead)
disp_arrow(WindowHandle, RowCheck[j], ColumnCheck[j], RowArrowHead, ColumnArrowHead, 2)

In this example, an arrow showing the orientation is displayed: affine_trans_pixel computes the position of the arrow head, using the same transformation matrix as for the XLD template.

Note that affine_trans_pixel must be used instead of affine_trans_point_2d, because the latter uses a different image coordinate system.

(affine_trans_pixel, affine_trans_contour_xld, affine_trans_region, and affine_trans_image all use the same coordinate convention.)

4.3.3 Processing multiple templates

When searching for multiple templates simultaneously, it is useful to store the information about the templates in tuples. Refer to the example: HDevelop program examples\solution_guide\shape_matching\multiple_models.dev

a. Get the XLD contours of the templates

create_scaled_shape_model(ImageROIRing, 'auto', -rad(22.5), rad(45), 'auto', 0.8, 1.2, 'auto', 'none', 'use_polarity', 60, 10, ModelIDRing)
get_shape_model_contours(ShapeModelRing, ModelIDRing, 1)
create_scaled_shape_model(ImageROINut, 'auto', -rad(30), rad(60), 'auto', 0.6, 1.4, 'auto', 'none', 'use_polarity', 60, 10, ModelIDNut)
inspect_shape_model(ImageROINut, PyramidImage, ModelRegionNut, 1, 30)

get_shape_model_contours is used to get the contours.

b. Store the template information in tuples

count_obj(ShapeModelRing, NumContoursRing)
get_shape_model_contours(ShapeModelNut, ModelIDNut, 1)
count_obj(ShapeModelNut, NumContoursNut)
ModelIDs := [ModelIDRing, ModelIDNut]
NumContoursInTuple := [NumContoursRing, NumContoursNut]
concat_obj(ShapeModelRing, ShapeModelNut, ShapeModels)
StartContoursInTuple := [1, NumContoursRing+1]

To simplify later access to the models, the XLD contours are stored in tuples in the same way as the template IDs.
When concatenating the XLD contours with concat_obj, note that each XLD model is itself a tuple because it may contain several contours. To access the contours belonging to a specific model, you therefore need the number of contours of each model and its start index within the concatenated tuple. The former is determined with count_obj; the start index of the first model's contours is 1, and each following model starts at the previous start index plus the previous model's number of contours.

c. Find the corresponding instance

find_scaled_shape_models(SearchImageReduced, ModelIDs, [-rad(22.5), -rad(30)], [rad(45), rad(60)], [0.8, 0.6], [1.2, 1.4], 0.7, 0, 0,'least_squares', 0, 0.8, RowCheck, ColumnCheck, AngleCheck, ScaleCheck, Score, ModelIndex)
for i:= 0 to |Score| - 1 by 1
    Model := ModelIndex[i]
    vector_angle_to_rigid(0, 0, 0, RowCheck[i], ColumnCheck[i], AngleCheck[i], MovementOfObject)
    hom_mat2d_scale(MovementOfObject, ScaleCheck[i], ScaleCheck[i], RowCheck[i], ColumnCheck[i], MoveAndScalingOfObject)
    copy_obj(ShapeModels, ShapeModel, StartContoursInTuple[Model], NumContoursInTuple[Model])
    affine_trans_contour_xld(ShapeModel, ModelAtNewPosition, MoveAndScalingOfObject)
    dev_display (ModelAtNewPosition)
endfor

The output parameter ModelIndex indicates which model each match belongs to by storing the index of the corresponding model ID within the tuple passed in ModelIDs.

Because an XLD model may contain multiple contours, select_obj cannot simply be used; instead, the operator copy_obj selects the contours belonging to the model, taking the model's start index in the concatenated tuple and its number of contours as parameters. copy_obj does not copy the contours themselves but only the corresponding HALCON objects, which can be regarded as references to the contours.

4.3.4 Aligning other ROIs

Example: examples\solution_guide\shape_matching\align_measurements.dev
a. Positioning ROIs

Rect1Row := 244
Rect1Col := 73
DistColRect1Rect2 := 17
Rect2Row := Rect1Row
Rect2Col := Rect1Col + DistColRect1Rect2
RectPhi := rad(90)
RectLength1 := 122
RectLength2 := 2

Two measurement ROIs are used. To be able to move them along with the XLD model, they are shifted so that the model's reference point lies at the origin of the image, and their offsets relative to the reference point are stored.

Note that clipping must be switched off before moving the regions:

area_center (ModelROI, Area, CenterROIRow, CenterROIColumn)
get_system ('clip_region', OriginalClipRegion)
set_system ('clip_region', 'false')
move_region (MeasureROI1, MeasureROI1Ref, - CenterROIRow, - CenterROIColumn)
move_region (MeasureROI2, MeasureROI2Ref, - CenterROIRow, - CenterROIColumn)
set_system ('clip_region', OriginalClipRegion)
DistRect1CenterRow := Rect1Row - CenterROIRow
DistRect1CenterCol := Rect1Col - CenterROIColumn
DistRect2CenterRow := Rect2Row - CenterROIRow
DistRect2CenterCol := Rect2Col - CenterROIColumn

b. Find all instances

find_shape_model (SearchImage, ModelID, 0, 0, 0.8, 0, 0.5, 'least_squares', 0, 0.7, RowCheck, ColumnCheck, AngleCheck, Score)

c. Determine the affine transformation matrix

for i := 0 to |Score|-1 by 1
    vector_angle_to_rigid (0, 0, 0, RowCheck[i], ColumnCheck[i], AngleCheck[i], MovementOfObject)
    affine_trans_contour_xld (ShapeModel, ModelAtNewPosition, MovementOfObject)

Calculate the position and orientation of each object.

d. Create measure objects at the appropriate positions

affine_trans_pixel (MovementOfObject, DistRect1CenterRow, DistRect1CenterCol, Rect1RowCheck, Rect1ColCheck)
affine_trans_pixel (MovementOfObject, DistRect2CenterRow, DistRect2CenterCol, Rect2RowCheck, Rect2ColCheck)

The new positions of the measurement ROIs are computed with affine_trans_pixel (again, affine_trans_pixel must be used instead of affine_trans_point_2d), and new measure objects are created at these positions:

RectPhiCheck := RectPhi + AngleCheck[i]
gen_measure_rectangle2 (Rect1RowCheck, Rect1ColCheck, RectPhiCheck, RectLength1, RectLength2, Width, Height, 'bilinear', MeasureHandle1)
gen_measure_rectangle2 (Rect2RowCheck, Rect2ColCheck, RectPhiCheck, RectLength1, RectLength2, Width, Height, 'bilinear', MeasureHandle2)

Here, the rotated measure objects are simply recreated at the new pose. If the objects appear only translated, not rotated, translate_measure can be used instead to translate existing measure objects, as sketched below.
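A translation-only variant might look like this (a sketch reusing the names from step d):

* Shift the existing measure objects to the transformed positions
translate_measure (MeasureHandle1, Rect1RowCheck, Rect1ColCheck)
translate_measure (MeasureHandle2, Rect2RowCheck, Rect2ColCheck)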

e. Measure width

measure_pairs (SearchImage, MeasureHandle1, 2, 25, 'negative', 'all', RowEdge11, ColEdge11, Amp11, RowEdge21, ColEdge21, Amp21, Width1, Distance1)
measure_pairs (SearchImage, MeasureHandle2, 2, 25, 'negative', 'all',RowEdge12, ColEdge12, Amp12, RowEdge22, ColEdge22, Amp22, Width2, Distance2)

f. Detection and measurement

NumberTeeth1 := |Width1|
if (NumberTeeth1 < 37)
    for j := 0 to NumberTeeth1 - 2 by 1
        if (Distance1[j] > 4.0)
            RowFault := round(0.5*(RowEdge11[j+1] + RowEdge21[j]))
            ColFault := round(0.5*(ColEdge11[j+1] + ColEdge21[j]))
            disp_rectangle2 (WindowHandle, RowFault, ColFault, 0, 4, 4)
        endif
    endfor
endif

If a tooth is too short or missing entirely, its edges cannot be extracted; such faults are detected via a too large distance between successive edge pairs.

4.3.5 Rectifying the search results

The forward transformation maps the detected object from the template image to the search image via the affine transformation; with it, the model image could be placed at the correct position in the search image.

The inverse transformation maps the search image back onto the template image, i.e., it aligns the search result with the template.

It is worth noting that the aligned image is only rotated and translated. If you want to remove perspective or lens distortions, the image must be rectified separately.

Example: HDevelop program examples\solution_guide\shape_matching\rectify_results.dev
a. Calculate inverse transformation

vector_angle_to_rigid (CenterROIRow, CenterROIColumn, 0, RowCheck, ColumnCheck, AngleCheck, MovementOfObject)
hom_mat2d_invert (MovementOfObject, InverseMovementOfObject)

hom_mat2d_invert inverts the transformation matrix. Note that, unlike in the previous examples, the transformation is computed from the absolute coordinates of the reference point, because the search image is to be transformed back into the template image.

b. Rectify the search image

affine_trans_image (SearchImage, RectifiedSearchImage, InverseMovementOfObject, 'constant', 'false')

affine_trans_image is applied to the search image; pixels that have no correspondence in the input are set to a constant gray value.

c. Extract numbers

reduce_domain (RectifiedSearchImage, NumberROI, RectifiedNumberROIImage)
threshold (RectifiedNumberROIImage, Numbers, 0, 128)
connection (Numbers, IndividualNumbers)

A string of digits placed in the original image can now be extracted easily. Unfortunately, affine_trans_image transforms the full image even if its domain was reduced with reduce_domain; to save time, the image should therefore be cropped before the affine transformation:

a. Crop search image

affine_trans_region (NumberROI, NumberROIAtNewPosition, MovementOfObject, 'false')
smallest_rectangle1 (NumberROIAtNewPosition, Row1, Column1, Row2, Column2)
crop_rectangle1 (SearchImage, CroppedNumberROIImage, Row1, Column1, Row2, Column2)

smallest_rectangle1 determines the smallest enclosing axis-parallel rectangle, and the search image is cropped to this part.

b. Adapt the affine transformation

hom_mat2d_translate (MovementOfObject, - Row1, - Column1, MoveAndCrop)
hom_mat2d_invert (MoveAndCrop, InverseMoveAndCrop)

The cropping can be treated as an additional affine transformation: a translation by the negative coordinates of the upper left corner of the cropping rectangle. hom_mat2d_translate appends this translation, and hom_mat2d_invert inverts the combined transformation.

c. Transform the cropped image

affine_trans_image (CroppedNumberROIImage, RectifiedROIImage,InverseMoveAndCrop, 'constant', 'true')
reduce_domain (RectifiedROIImage, NumberROI, RectifiedNumberROIImage)

affine_trans_image rectifies the cropped image, and reduce_domain then restricts it to the number ROI.

4.4 Using the estimated scale
Similar to the rotation, if set_shape_model_origin is not used, the scaling refers to the center of the ROI. The estimated scale, returned in Scale, can be used much like the estimated position and orientation; however, vector_angle_to_rigid cannot handle scaling, so the scaling must be added to the transformation separately. Examples:

HDevelop program examples\solution_guide\shape_matching\multiple_scales.dev

examples\hdevelop\Matching\Shape-Based\find_aniso_shape_model.dev

a. Specify the grasping points

RowUpperPoint := 284
ColUpperPoint := 278
RowLowerPoint := 362
ColLowerPoint := 278

The grasping points are specified directly in the model image.
To use them together with the XLD model, they are converted into coordinates relative to the reference point of the XLD model:

area_center (ModelROI, Area, CenterROIRow, CenterROIColumn)
RowUpperPointRef := RowUpperPoint - CenterROIRow
ColUpperPointRef := ColUpperPoint - CenterROIColumn
RowLowerPointRef := RowLowerPoint - CenterROIRow
ColLowerPointRef := ColLowerPoint - CenterROIColumn

b. Compute the affine transformation

find_scaled_shape_model (SearchImage, ModelID, -rad(30), rad(60), 0.6, 1.4, 0.65, 0, 0, 'least_squares', 0, 0.8, RowCheck,ColumnCheck, AngleCheck, ScaleCheck, Score)
for i := 0 to |Score| - 1 by 1
    vector_angle_to_rigid (0, 0, 0, RowCheck[i], ColumnCheck[i], AngleCheck[i], MovementOfObject)
    hom_mat2d_scale (MovementOfObject, ScaleCheck[i], ScaleCheck[i], RowCheck[i], ColumnCheck[i], MoveAndScalingOfObject)
    affine_trans_contour_xld (ShapeModel, ModelAtNewPosition, MoveAndScalingOfObject)

vector_angle_to_rigid determines the translation and rotation; the scaling is then added with hom_mat2d_scale. Note that the position of the match must be used as the fixed point of the scaling, which is why this operation must come after the translation and rotation.

c. Transform the grasping points

affine_trans_pixel (MoveAndScalingOfObject, RowUpperPointRef, ColUpperPointRef, RowUpperPointCheck, ColUpperPointCheck)
affine_trans_pixel (MoveAndScalingOfObject, RowLowerPointRef, ColLowerPointRef, RowLowerPointCheck, ColLowerPointCheck)

The affine transformation can also be applied, via the operator affine_trans_pixel, to other points in the model image. Again, affine_trans_pixel must be used and not affine_trans_point_2d.

5. Miscellaneous
5.1 Parallel programming
Shape-based matching can run in multiple threads. Individual operators are parallelized automatically, so if automatic parallelization is all you need, nothing has to be changed. However, if you want to run complex code sections in separate threads and do not want the additional automatic parallelization of the operators, you can switch the automatic parallelization off and manage the parallelism manually. If pregeneration of the model is switched off, the actual model instances are created during the search, e.g., within find_scaled_shape_model, which causes performance problems due to the synchronization required when different threads access the model. Two ways to avoid this:

When creating the model, switch pregeneration on by setting the second value of the parameter Optimization to 'pregeneration'. The model instances are then created at creation time rather than during the search, so searches in different threads need no synchronization. Note that such a model can become very large and therefore consume a lot of memory.
Alternatively, load the same non-pregenerated model multiple times, once per thread, and pass the respective handle to find_scaled_shape_model, as sketched below.
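Sketched with an illustrative file name:

* Each thread gets its own handle of the same non-pregenerated model
read_shape_model ('model.shm', ModelIDThread1)
read_shape_model ('model.shm', ModelIDThread2)
* thread 1 then searches with ModelIDThread1, thread 2 with ModelIDThread2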
5.2 Adapting to a changed camera orientation
If the camera views the objects at an oblique angle, i.e., if its optical axis is not perpendicular to the plane in which the objects move, perspective distortion is introduced and the apparent shape, position, and orientation of the objects change, so shape-based matching may fail. In this case the images should be rectified before matching. Three steps are required:

a. Calibrate the camera: use camera_calibration to determine its position and orientation.

b. Create the mapping function from the calibration results with gen_image_to_world_plane_map.

c. Apply the mapping to each image with map_image, as sketched below.
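Steps b and c might be sketched as follows (CameraParam, WorldPose, the image sizes, and Scale are assumed to come from the calibration in step a):

* Build the rectification map once from the calibration results
gen_image_to_world_plane_map (Map, CameraParam, WorldPose, WidthIn, HeightIn, WidthMapped, HeightMapped, Scale, 'bilinear')
* Apply the map to every search image before matching
map_image (SearchImage, Map, RectifiedSearchImage)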

Alternatively, without rectification, shape-based matching can be replaced by a deformable matching approach that takes the perspective deformation into account and returns a 2D homography or a 3D pose instead of a 2D pose.

5.3 Reusing a model
Example: HDevelop program examples\solution_guide\shape_matching\reuse_model.dev

First, create the model:

create_scaled_shape_model (ImageROI, 'auto', -rad(30), rad(60), 'auto', 0.6, 1.4, 'auto', 'none', 'use_polarity', 60, 10, ModelID)

Write the model:

write_shape_model (ModelID, ModelFile)

Read the model:

read_shape_model (ModelFile, ReusedModelID)
get_shape_model_contours (ReusedShapeModel, ReusedModelID, 1)
get_shape_model_origin (ReusedModelID, ReusedRefPointRow, ReusedRefPointCol)
get_shape_model_params (ReusedModelID, NumLevels, AngleStart, AngleExtent,AngleStep, ScaleMin, ScaleMax, ScaleStep, Metric, MinContrast)

read_shape_model reads the model

get_shape_model_contours gets the contours

get_shape_model_origin gets the reference point

get_shape_model_params gets the parameters used to create the model

find_scaled_shape_model (SearchImage, ReusedModelID, AngleStart, AngleExtent, ScaleMin, ScaleMax, 0.65, 0, 0, 'least_squares', 0, 0.8, RowCheck, ColumnCheck,AngleCheck, ScaleCheck, Score)
for i := 0 to |Score| - 1 by 1
    vector_angle_to_rigid (ReusedRefPointRow, ReusedRefPointCol, 0, RowCheck[i], ColumnCheck[i], AngleCheck[i],MovementOfObject)
    hom_mat2d_scale (MovementOfObject, ScaleCheck[i], ScaleCheck[i], RowCheck[i], ColumnCheck[i], MoveAndScalingOfObject)
    affine_trans_contour_xld (ReusedShapeModel, ModelAtNewPosition, MoveAndScalingOfObject)
    dev_display (ModelAtNewPosition)
endfor
