Some knowledge points of ffmpeg learning diary 8-YUV

There are plenty of articles about YUV, so I won't repeat the basic concepts here. Instead, I'll record a few points that, as a novice, took me a long time to figure out.

Storing and reading/writing YUV420P data

YUV data is stored in the AVFrame structure, and the three planes are contiguous in memory. When pix_fmt = AV_PIX_FMT_YUV420P, the data array holds the planes in Y, U, V order, that is:

data --> YYYYYYYYYYYYYYYY UUUU VVVV
         ^                ^    ^
         |                |    |
       data[0]        data[1]  data[2]
linesize[i] is the stride of each row of plane i, in bytes; it can be larger than the visible width because FFmpeg pads rows for alignment. Therefore, to read the whole frame's YUV data as one block, the following is required:

av_image_copy(video_dst_data, video_dst_linesize,
                  (const uint8_t **)(frame->data), frame->linesize,
                  pix_fmt, width, height);

    /* write to rawvideo file */
    fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);

If you want to write out the YUV planes separately, do the following:

int y_size = pCodecCtx->width * pCodecCtx->height;
fwrite(yuvFrame->data[0], 1, y_size,   fp_yuv);  /* Y (luma) plane   */
fwrite(yuvFrame->data[1], 1, y_size/4, fp_yuv);  /* U (chroma) plane */
fwrite(yuvFrame->data[2], 1, y_size/4, fp_yuv);  /* V (chroma) plane */

The two approaches extract the same YUV data because the planes are stored in the same contiguous block of memory (this holds when linesize equals the width, i.e. the rows carry no padding).

To read a chunk of YUV data from a file into a char* buffer (or uint8_t* buffer, etc.) and fill it into an AVFrame structure, the following is required:

AVFrame *picture;
AVCodecContext *c;

c = video_st->codec;
size = c->width * c->height;

/* one YUV420P frame occupies width * height * 3/2 bytes */
if (fread(picture_buf, 1, size * 3 / 2, fin) < size * 3 / 2)
{
    break;
}

picture->data[0] = picture_buf;                /* Y (luma)   */
picture->data[1] = picture_buf + size;         /* U (chroma) */
picture->data[2] = picture_buf + size * 5 / 4; /* V (chroma) */

To explain: picture_buf is the start address of the data, and size (width * height) is both the resolution and the size of the Y plane. data[0] points to the Y plane, so data[1], the U plane, is picture_buf + size. The U and V planes are each 1/4 the size of the Y plane, so data[2], the V plane, starts at picture_buf + size*5/4.

YUV and RGB format conversion reasoning

The "YUV" used in H.264 is, strictly speaking, YCbCr. The conversion between YCbCr and RGB (the common BT.601 studio-swing coefficients) is as follows:

Y' = 0.257*R' + 0.504*G' + 0.098*B' + 16
Cb' = -0.148*R' - 0.291*G' + 0.439*B' + 128
Cr' = 0.439*R' - 0.368*G' - 0.071*B' + 128
R' = 1.164*(Y'-16) + 1.596*(Cr'-128)
G' = 1.164*(Y'-16) - 0.813*(Cr'-128) - 0.392*(Cb'-128)
B' = 1.164*(Y'-16) + 2.017*(Cb'-128)

Take black as an example. Converting black's RGB value (0,0,0) to YUV, the Y value is not 0 but 16. The derivation of that 16 is as follows: substituting black's RGB(0,0,0) into the RGB-from-YCbCr formulas gives:

0 = 1.164*(Y'-16) + 1.596*(Cr'-128)
0 = 1.164*(Y'-16) - 0.813*(Cr'-128) - 0.392*(Cb'-128)
0 = 1.164*(Y'-16) + 2.017*(Cb'-128)

From the first and third equations:

(Cr'-128) = -(1.164/1.596)*(Y'-16)
(Cb'-128) = -(1.164/2.017)*(Y'-16)

Substituting these two values into the middle equation:

0 = 1.164*(Y'-16) - 0.813*(Cr'-128) - 0.392*(Cb'-128)

gives:

0 = 1.164*(Y'-16)*(1 + 0.813/1.596 + 0.392/2.017)

The factor in parentheses is about 1.70, so it is nonzero, and we obtain:

Y' = 16
Cr' = 128
Cb' = 128

This question came up while working on a video overlay algorithm: overlaying white material and handling black pixels.

Positioning of YUV in frame


Added by ivytony on Fri, 11 Feb 2022 11:20:22 +0200