License plate recognition based on HyperLPR


Source code download address: HyperLPR, an open-source, high-performance Chinese license plate recognition library based on deep learning. It is developed by Beijing Zhiyun View Technology Co., Ltd. and supports the PHP, C/C++, and Python languages on the Windows/Mac/Linux/Android/iOS platforms.

See the first analysis for details of the source-code configuration.

This analysis covers the last few lines of pipline.py, which mainly perform the closing work of the task. The implementation is as follows:

        # image_gray = horizontalSegmentation(image_gray)
        # cv2.imshow("image", image_gray)
        # cv2.waitKey(0)
        print("correcting", time.time() - t1, "s")
        t2 = time.time()
        val = segmentation.slidingWindowsEval(image_gray)
        print("segmentation and recognition", time.time() - t2, "s")
        if len(val) == 3:
            blocks, res, confidence = val
            if confidence / 7 > 0.7:
                image = drawRectBox(image, rect, res)
                for i, block in enumerate(blocks):
                    block_ = cv2.resize(block, (25, 25))
                    block_ = cv2.cvtColor(block_, cv2.COLOR_GRAY2BGR)
                    # paste each segmented character block into the output image,
                    # guarding against out-of-bounds slices at the image border
                    if image[j * 25:(j * 25) + 25, i * 25:(i * 25) + 25].shape == block_.shape:
                        image[j * 25:(j * 25) + 25, i * 25:(i * 25) + 25] = block_
            if confidence > 0:
                print("License plate:", res, "Confidence:", confidence / 7)
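The `confidence / 7 > 0.7` check above can be read as averaging one score per character: a standard mainland Chinese plate has 7 characters, so `confidence / 7` is the mean per-character confidence. A minimal sketch of that logic (the function and names below are illustrative, not part of HyperLPR):

```python
# A standard mainland Chinese plate has 7 characters, so the accumulated
# confidence is divided by 7 to get a per-character average.
PLATE_LEN = 7

def plate_is_confident(char_scores, threshold=0.7):
    """Accept the plate only if the mean per-character score exceeds threshold."""
    return sum(char_scores) / PLATE_LEN > threshold

print(plate_is_confident([0.9] * 7))  # high per-character scores: accepted
print(plate_is_confident([0.5] * 7))  # low per-character scores: rejected
```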

First, a series of cv2 functions preprocess the image; these functions were covered in the previous article, so they are not repeated here.

The most important line is:

val = segmentation.slidingWindowsEval(image_gray)

The source code is:

def slidingWindowsEval(image):
    windows_size = 16
    stride = 1
    height = image.shape[0]
    t0 = time.time()
    data_sets = []

    # slide a 16-pixel-wide window across the plate image
    for i in range(0, image.shape[1] - windows_size + 1, stride):
        data = image[0:height, i:i + windows_size]
        data = cv2.resize(data, (23, 23))
        data = cv2.equalizeHist(data)
        data = data.astype(np.float) / 255
        data = np.expand_dims(data, 3)
        data_sets.append(data)

    res = model2.predict(np.array(data_sets))
    print("division", time.time() - t0)

    # build the probability response curve and smooth it,
    # then take local maxima as candidate gap positions
    pin = res
    p = 1 - (res.T)[1]
    p = f.gaussian_filter1d(np.array(p, dtype=np.float), 3)
    lmin = l.argrelmax(np.array(p), order=3)[0]
    interval = []
    for i in range(len(lmin) - 1):
        interval.append(lmin[i + 1] - lmin[i])

    if len(interval) > 3:
        mid = get_median(interval)
    else:
        return []
    pin = np.array(pin)
    res = searchOptimalCuttingPoint(image, pin, 0, mid, 3)

    cutting_pts = res[1]
    last = cutting_pts[-1] + mid
    if last < image.shape[1]:
        cutting_pts.append(last)
    else:
        cutting_pts.append(image.shape[1] - 1)
    name = ""
    confidence = 0.00
    seg_block = []
    for x in range(1, len(cutting_pts)):
        if x != len(cutting_pts) - 1 and x != 1:
            section = image[0:36, cutting_pts[x - 1] - 2:cutting_pts[x] + 2]
        elif x == 1:
            c_head = cutting_pts[x - 1] - 2
            if c_head < 0:
                c_head = 0
            c_tail = cutting_pts[x] + 2
            section = image[0:36, c_head:c_tail]
        elif x == len(cutting_pts) - 1:
            end = cutting_pts[x]
            diff = image.shape[1] - end
            c_head = cutting_pts[x - 1]
            c_tail = cutting_pts[x]
            if diff < 7:
                section = image[0:36, c_head - 5:c_tail + 5]
            else:
                section = image[0:36, c_head - diff:c_tail + diff]
        elif x == 2:
            section = image[0:36, cutting_pts[x - 1] - 3:cutting_pts[x - 1] + mid]
        else:
            section = image[0:36, cutting_pts[x - 1]:cutting_pts[x]]
        seg_block.append(section)
    refined = refineCrop(seg_block, mid - 1)

    t0 = time.time()
    for i, one in enumerate(refined):
        res_pre = cRP.SimplePredict(one, i)
        confidence += res_pre[0]
        name += res_pre[1]
    print("character recognition", time.time() - t0)

    return refined, name, confidence

slidingWindowsEval is a character segmentation and recognition method based on a sliding window.
The approach is introduced in a classic paper, "End-to-End Text Recognition with Convolutional Neural Networks". It is also mentioned in Andrew Ng's Machine Learning course on Coursera, and there is a corresponding implementation in OpenCV's text module.

Its main idea is to slide a classifier, trained on positive and negative samples, across the image to produce a probability response curve, and then apply NMS (non-maximum suppression) to that raw response. Once the number of character bounding boxes has been determined, the Viterbi algorithm is used to find the best segmentation path.
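The smoothing-plus-peak-picking step in the source above (gaussian_filter1d followed by argrelmax) is exactly this 1-D form of non-maximum suppression. A self-contained sketch on synthetic data (the response curve and peak positions below are made up for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmax

# Synthetic 1-D "response curve": high values mark likely character gaps.
x = np.arange(100)
centers = (10, 25, 40, 55, 70, 85)
p = sum(np.exp(-0.5 * ((x - c) / 2.0) ** 2) for c in centers)

smoothed = gaussian_filter1d(p, sigma=3)  # suppress noise, as in the source
peaks = argrelmax(smoothed, order=3)[0]   # keep only local maxima
print(peaks)                              # recovers positions near `centers`
```

The `order=3` argument requires each peak to dominate its 3 neighbors on both sides, which is what suppresses spurious local bumps.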

The template matching method designs a fixed template and an evaluation function that measures how well the template matches, based on the standard layout of a license plate and the structure, size, and spacing of its characters. The template then slides from left to right over the normalized image, and the evaluation value is computed at each position; finally, the sliding position with the highest matching score is selected as the character segmentation position. The disadvantage of template matching is that it places high demands on the plate image: when the plate's tilt angle is small, the frame has been removed cleanly, image noise is low, and the character size and spacing are standard, the algorithm is fast and segments well; when the image quality is only average, the segmentation result is often unsatisfactory.
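The slide-and-score idea can be sketched in a few lines. Everything here is a toy illustration, not HyperLPR code: the "template" and "profile" are 1-D arrays, and negative squared error stands in for the evaluation function.

```python
import numpy as np

def best_template_position(profile, template):
    """Slide `template` over a 1-D profile and return the offset with the
    highest matching score (negative squared error as a toy evaluation)."""
    best_pos, best_score = 0, -np.inf
    for off in range(len(profile) - len(template) + 1):
        window = profile[off:off + len(template)]
        score = -np.sum((window - template) ** 2)  # evaluation function
        if score > best_score:
            best_pos, best_score = off, score
    return best_pos

# Toy profile containing the pattern [1, 0, 1] at offset 4
profile = np.array([0, 0, 0, 0, 1, 0, 1, 0, 0], dtype=float)
template = np.array([1, 0, 1], dtype=float)
print(best_template_position(profile, template))  # → 4
```

A real implementation would score a 2-D plate template against character structure and spacing, but the search loop has the same shape.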
On top of template matching, HyperLPR also tries to solve character segmentation with a projection method, which roughly proceeds in the following steps:
1. Obtain the projection from the filtered image

2. Use the projection values to find the segmentation points
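The two steps above can be sketched as follows. The function and the toy binary image are illustrative only, not HyperLPR's actual projection code: column sums of a binarized plate drop to (near) zero in the gaps between characters, and those valleys give the split points.

```python
import numpy as np

def projection_cut_points(binary_plate):
    """Segment characters by vertical projection: step 1 computes
    column-wise sums; step 2 finds where strokes start and end."""
    proj = binary_plate.sum(axis=0)          # step 1: column projection
    in_char, cuts = False, []
    for i, v in enumerate(proj):             # step 2: zero-valley boundaries
        if v > 0 and not in_char:
            cuts.append(i); in_char = True
        elif v == 0 and in_char:
            cuts.append(i); in_char = False
    if in_char:
        cuts.append(len(proj))
    return list(zip(cuts[0::2], cuts[1::2])) # (start, end) per character

# Toy binarized "plate" with two characters
img = np.zeros((5, 12), dtype=np.uint8)
img[:, 2:4] = 1   # first character occupies columns 2-3
img[:, 7:10] = 1  # second character occupies columns 7-9
print(projection_cut_points(img))  # → [(2, 4), (7, 10)]
```

Real plates need a threshold above zero (noise, plate frame residue), which is why the projection method is usually combined with the filtering in step 1.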

Keywords: Python

Added by luxe on Mon, 27 Dec 2021 22:26:10 +0200