Summary of Interface Fluency Learning

Recognizing that I'm still pretty bad at this

After deleting all my previous articles and closing the blog, I felt that some of what I wrote before was just too poor, so I'm reviewing it for my own amusement. Realizing how weak your own skills are is a good thing: at least you know you need to keep learning, and you have the motivation to keep improving. Look at the author of YYKit — the brilliant iOS developers are still coding day and night, so the rest of us have even more reason to push ourselves. We may never catch up with that kind of talent, but at least there's a goal to chase. Ha.

In earlier interviews at Ele.me and Youku, the technical interviewers asked me about interface performance, image optimization, FPS, and related questions, and I basically answered them all badly — embarrassingly so. I resolved to spend some time figuring these things out.

After more than a month of quiet study, my understanding is still not thorough, but I've at least gained something, so I'm sharing it with readers who, like me, are still at this beginner level. Along the way I'd also like to find out whether my understanding contains mistakes; corrections from the experts are welcome.

First, here are the learning steps I followed over this month.

1. Read CoreAnimation Advanced Programming (the basics; you have to read it)
2. The YYKit author's article on keeping iOS interfaces smooth (I had read it before, but you have to read it carefully, more than once, to really get something out of it)
3. Study the optimization code in VVeboTableView closely (best to work through it once yourself)
4. Review text layout and drawing with CoreText (this part is fairly basic)
5. Read the YYAsyncLayer source carefully (written by a master; you must read it, understand it, use it, and make it your own)

I think that to learn anything, you first need a clear plan. If you consider yourself a beginner, you can follow my plan step by step. It took me a little more than a month; an expert might only need a week or two.

I'd like to add a few words. I see a lot of people learning Objective-C today, Swift tomorrow, Weex the day after, then React Native, then HTML5.

Learning broadly is good, but I think it's better to master one system in depth rather than blindly chasing this and that because of the current fashion.

Languages and systems are, I think, different things. Picking up a language is quick; mastering a system skillfully and proficiently takes three to five years.

Once, in an article by YYKit's author, I read the following sentence:

Few people will read the code of an open source library in its entirety, let alone go line by line.

I feel this strongly: it takes a great deal of energy, and some sense of direction, to really learn a system.

For beginners, the fast way to improve is to study excellent open source libraries. And not just to glance at them. The first step is to understand the code; the second is to figure out why it was written that way; the third is to imitate and simplify it; the fourth is to turn it into more optimized, reusable module code of your own.

Getting started

This article will be updated continually, because interface smoothness involves so many things; as I learn a little more, I'll add a little more here.

I won't go over the simple points, nor the optimization points already covered in the YYKit article. This part is mainly about the list-scrolling optimization demo in VVeboTableView.

Many people have probably seen it; I asked several friends and colleagues, and they all said they had. But ask what the optimization actually is, and nobody could explain it clearly.

So I took the VVeboTableView code, trimmed it down a bit, read it line by line, rebuilt it as a demo, and then added some drawing optimizations of my own:


(demo.gif)

This recording is from the simulator; on a real device it basically runs at 60 FPS.

The most obvious effect: when you scroll across many cells quickly, blanks appear — that is, cells are not drawn immediately, but only after the list stops scrolling.

Note that "drawing" here does not mean creating assorted UIView objects and calling addSubview:. Drawing means actually rendering: drawing text onto a canvas, drawing a circle, drawing an image... rather than using UIView objects directly to build the display.

The YYKit author says this blank effect may be the biggest drawback of the code; that aside, it is good for the scrolling performance of the whole table view.

For most scrolling interfaces, optimizing along these lines avoids stuttering, but it requires drawing directly with CoreText, CoreGraphics, and so on, which is somewhat more troublesome.

The main optimization logic: cells that get scrolled through quickly — out of the visible area — are simply ignored and never drawn. This skips a great deal of rendering for invisible cells, saving significant time for both the CPU and the GPU.

Naturally, the FPS stays high.

Some beginners may not know what FPS is; I only understood it recently myself.

  • (1) What we see on a screen is actually being refreshed continually, many times per second: the data shown in the previous frame is erased, and the data for the next frame is drawn.

  • (2) FPS (frames per second) is the number of times the screen is refreshed in one second.

  • (3) On a PC, for example: when I play CF below 30 FPS it feels a bit laggy; at an Internet cafe it runs at roughly 60 to 100, which is fairly smooth.

  • (4) For mobile devices (phones), under normal circumstances it is best to stay at about 60. More precisely:

    • In general, keep it around 60.
    • If there is a complex animation, it may be acceptable to let it drop to a minimum of 30.
  • (5) How FPS is calculated (a sketch follows this list):

    • First, be clear that it is the FPS of the main thread that is measured (drawing and display both happen on the main thread)
    • Register a CADisplayLink timer source with the main thread's RunLoop
    • The system then invokes the callback specified by the CADisplayLink every time the screen is refreshed and redrawn
    • In the CADisplayLink callback, divide the total number of refreshes within a time interval by the length of that interval; this gives the refreshes per second, i.e. the FPS.

If FPS is low, why does the interface appear to stutter?

Every UIView we write gets text, images, background colors, fonts, border styles, and so on set on it. Before it finally appears on the screen, it goes through the following steps:

  • (1) Object creation, adjustment, and computation

    • UIView object creation, frame calculation, frame setting
    • File reading
    • Custom drawing
  • (2) Core Animation collects the UIView objects' layer data and submits it to the Render Server, a rendering service; the Render Server processes the data and hands it on to the GPU for final processing.

  • (3) The GPU processes the received data into a format the display can recognize (textures) — specifically, the data for one frame of display. The GPU then puts the final data into the frame buffer.

  • (4) At regular intervals, the display takes frame data out of the frame buffer and shows it on the screen.

You can understand it roughly as these few steps; going deeper it gets complicated, and I won't expand on that because I don't understand it that deeply (haha).

Based on the hardware involved, the steps above split into two main directions:

  • (1) CPU
  • (2) GPU

Copied from the YYKit article, the tasks of these two pieces of hardware are as follows:

What the CPU does

  • (1) Creation, adjustment, and destruction of objects
  • (2) frame calculation, text size calculation, and Auto Layout calculation for UI objects
  • (3) Text rendering, image drawing, and other custom drawing
  • (4) Reading and writing of disk files
  • (5) For image files, decompression and decoding

What the GPU does

  • (1) Texture rendering
  • (2) Compositing of multiple layers
  • (3) Offscreen rendering for special effects (borders, rounded corners, shadows, masks...)

In short, to display something, the CPU must finish its processing, hand off to the GPU to finish its processing, and finally the display is notified to show the data — that's the whole flow.

If the combined CPU and GPU processing time exceeds the refresh deadline, the display does not refresh the screen. When the screen goes unrefreshed for a while, it keeps showing the previous frame, which gives the user the feeling of being stuck.

Then the GPU suddenly finishes and notifies the display to read the data; the next frame suddenly flashes onto the screen, producing an abrupt, unsmooth jump.

To sum up: the screen is suddenly stuck, then suddenly jumps, with no smooth transition in between — that is what produces the sense of stuttering.

The YYKit article records the specific optimization methods for each of these items, so I won't list them here.

Let me summarize the highlights of this code:

The first major task is using CoreText for text drawing, which is the foundation for the later optimizations. It's somewhat cumbersome to implement, but you have to do it; otherwise the later optimizations are impossible.

1. Customize a UILabel and draw its text internally with CoreText
2. A background thread completes the text drawing asynchronously and renders it into an image.
3. Finally, the image is set on the UILabel's backing layer for display.
4. Highlightable, tappable runs of text are matched in advance with regular expressions, and the frame of each highlighted run is saved.
5. All drawing code runs on background threads

The second highlight: while the table view scrolls, listen to the scroll view's state

1. Save a state flag when scrolling starts
2. When scrolling is about to stop, save a state flag and compute the indexPath where scrolling will finally stop
3. Filter out the indexPaths scrolled past quickly in between
4. Save the indexPaths that will be visible at the final stop into an array, plus three extra nearby indexPaths.
5. After the list finally stops, the indexPaths saved in the array are taken out one by one and their content is drawn.

The text part of a cell is handled by the custom UILabel. The other small images and non-reusable bits of text can be drawn with UIKit; the rendered image is then obtained from the drawing context and set on a backgroundView or a backing layer inside the cell.

These are the list-optimization ideas I learned from this demo; below are a few snippets of the code. For example, the following uses CoreText to draw text, render it into an image, and set the image on a layer for display:

@implementation XXXLabel
- (void)asyncDraw {

    /**
     *  The whole process of drawing and generating the image happens asynchronously on a background thread.
     */

    XZHDispatchQueueAsyncBlockWithQOSBackgroud(^{

        //1. Set the size of the canvas
        CGSize size = self.frame.size;
        size.height += 10;

        //2. Create a canvas
        UIGraphicsBeginImageContextWithOptions(size, ![self.backgroundColor isEqual:[UIColor clearColor]], 0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        if (context==NULL) {return;}

        //3. Fill the canvas background color; otherwise it defaults to black
        if (![self.backgroundColor isEqual:[UIColor clearColor]]) {
            [self.backgroundColor set];
            CGContextFillRect(context, CGRectMake(0, 0, size.width, size.height));
        }

        //4. Flip the context's y-axis: the coordinate systems of CoreText and UIKit have opposite y-axes
        CGContextSetTextMatrix(context,CGAffineTransformIdentity);
        CGContextTranslateCTM(context,0,size.height);
        CGContextScaleCTM(context,1.0,-1.0);

        //5. Try to read a cached CTFrameRef keyed by the MD5 of the source text
        NSString *md5 = [_text xzh_MD5];
        CTFrameRef ctFrame = CTFrameForKey(md5);

        //6. Temporary Core Foundation objects created during layout; initialize to NULL so they are only released if actually created
        CTFontRef font = NULL;
        CTFramesetterRef framesetter = NULL;

        //7. Decide whether a cached CTFrameRef can be used for direct drawing
        CGRect rect = CGRectMake(0, 5, (size.width), (size.height-5));
        if (!_highlighting && ctFrame) {
            //7.1 Draw with the cached CTFrame, skipping text parsing and layout
            [self drawWithCTFrame:ctFrame inRect:rect context:context];
        } else {
            //7.2 Re-run the rich text setup, parsing, and CTFrameRef layout

            //7.2.1 The rich text content to draw >>> NSMutableAttributedString
            UIColor* textColor = self.textColor;
            CGFloat minimumLineHeight = self.font.pointSize,maximumLineHeight = minimumLineHeight, linespace = self.lineSpace;
            font = CTFontCreateWithName((__bridge CFStringRef)self.font.fontName, self.font.pointSize,NULL);
            CTLineBreakMode lineBreakMode = kCTLineBreakByWordWrapping;
            CTTextAlignment alignment = CTTextAlignmentFromUITextAlignment(self.textAlignment);
            CTParagraphStyleRef style = CTParagraphStyleCreate((CTParagraphStyleSetting[6]){
                {kCTParagraphStyleSpecifierAlignment, sizeof(alignment), &alignment},
                {kCTParagraphStyleSpecifierMinimumLineHeight,sizeof(minimumLineHeight),&minimumLineHeight},
                {kCTParagraphStyleSpecifierMaximumLineHeight,sizeof(maximumLineHeight),&maximumLineHeight},
                {kCTParagraphStyleSpecifierMaximumLineSpacing, sizeof(linespace), &linespace},
                {kCTParagraphStyleSpecifierMinimumLineSpacing, sizeof(linespace), &linespace},
                {kCTParagraphStyleSpecifierLineBreakMode,sizeof(CTLineBreakMode),&lineBreakMode}
            },6);

            NSDictionary* attributes = [NSDictionary dictionaryWithObjectsAndKeys:(__bridge id)font,(NSString*)kCTFontAttributeName,
                                        textColor.CGColor,kCTForegroundColorAttributeName,
                                        style,kCTParagraphStyleAttributeName,
                                        nil];
            NSMutableAttributedString *attributedStr = [[NSMutableAttributedString alloc] initWithString:_text
                                                                                              attributes:attributes];

            //7.2.2 NSMutableAttributedString >>> CTFramesetterRef
            CFAttributedStringRef attributedString = (__bridge CFAttributedStringRef)[self highlightText:attributedStr];
            framesetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)attributedString);

            //7.2.3 CGPath + CTFramesetterRef >>> CTFrameRef. Set the path in which to lay out the text:
            CGMutablePathRef path = CGPathCreateMutable();
            CGPathAddRect(path, NULL, rect);

            //7.2.4 Generate the CTFrameRef for drawing text in the given area
            CTFrameRef ctFrame = CTFramesetterCreateFrame(framesetter,
                                                        CFRangeMake(0, _text.length),
                                                        path,
                                                        NULL);

            //7.2.5 Cache the CTFrameRef instance to avoid re-parsing the same text
            CacheCTFrameWithKey(ctFrame, md5);

            //7.2.6 Draw using the newly laid-out CTFrame
            [self drawWithCTFrame:ctFrame inRect:rect context:context];

            //7.2.7 Do not release the CTFrame here; it stays cached in memory
            //CFRelease(ctFrame);
        }

        //8. Flip the y-axis back to the UIKit orientation
        CGContextSetTextMatrix(context,CGAffineTransformIdentity);
        CGContextTranslateCTM(context,0,size.height);
        CGContextScaleCTM(context,1.0,-1.0);

        //9. Get the rendered image from the context
        UIImage *screenShotimage = UIGraphicsGetImageFromCurrentImageContext();

        //10. End the drawing context
        UIGraphicsEndImageContext();

        //11. Return to the main thread and display the rendered image on the layer
        dispatch_async(dispatch_get_main_queue(), ^{
//            if (font) {CFRelease(font);}             // left commented out in the original;
//            if (framesetter) {CFRelease(framesetter);} // releasing them here would avoid leaking the else-branch objects

            if (_highlighting) {
                _highlightImageView.image = nil;
                if (_highlightImageView.width!=screenShotimage.size.width) {
                    _highlightImageView.width = screenShotimage.size.width;
                }
                if (_highlightImageView.height!=screenShotimage.size.height) {
                    _highlightImageView.height = screenShotimage.size.height;
                }
                _highlightImageView.image = screenShotimage;
            } else {
                if (_labelImageView.width!=screenShotimage.size.width) {
                    _labelImageView.width = screenShotimage.size.width;
                }
                if (_labelImageView.height!=screenShotimage.size.height) {
                    _labelImageView.height = screenShotimage.size.height;
                }

                // Clear the image of highlighted view
                _highlightImageView.image = nil;

                _labelImageView.image = nil;
                _labelImageView.image = screenShotimage;
            }

//            [self debugDraw]; // draw the tappable areas, for debugging
        });
    });
}

- (void)drawWithCTFrame:(CTFrameRef)frame
                 inRect:(CGRect)rect
                context:(CGContextRef)ctx
{
    //1. Guard against NULL input
    if (NULL == frame) {return;}
    if (NULL == ctx) {return;}

    //2. All lines in the frame
    CFArrayRef lines = CTFrameGetLines(frame);
    NSInteger numberOfLines = CFArrayGetCount(lines);

    //3. The origin of each line
    CGPoint lineOrigins[numberOfLines];
    CTFrameGetLineOrigins(frame, CFRangeMake(0, numberOfLines), lineOrigins);

    //4. Draw line by line
    for (CFIndex lineIndex = 0; lineIndex < numberOfLines; lineIndex++) {

        //4.1 Pixel-align the line origin
        CGPoint lineOrigin = lineOrigins[lineIndex];
        lineOrigin = CGPointMake(CGFloat_ceil(lineOrigin.x), CGFloat_ceil(lineOrigin.y));

        //4.2 Set the text position for this line
        CGContextSetTextPosition(ctx, lineOrigin.x, lineOrigin.y);

        //4.3 The CTLine for this row
        CTLineRef line = CFArrayGetValueAtIndex(lines, lineIndex);

        //4.4 Typographic bounds of the line
        CGFloat descent = 0.0f;
        CGFloat ascent = 0.0f;
        CGFloat lineLeading;
        CTLineGetTypographicBounds((CTLineRef)line, &ascent, &descent, &lineLeading);

        //4.5 Pen offset for flush, depending on text alignment
        CGFloat flushFactor = 0.0; // 0.0 = left flush, 0.5 = centered, 1.0 = right
        CGFloat penOffset;
        CGFloat y;

        //4.6 Position the pen and draw the line
        penOffset = (CGFloat)CTLineGetPenOffsetForFlush(line, flushFactor, rect.size.width);
        y = lineOrigin.y - descent - self.font.descender;
        CGContextSetTextPosition(ctx, penOffset, y);
        CTLineDraw(line, ctx);

        //4.7 Record the frames of highlighted (tappable) runs for hit-testing
        if (!_highlighting && (self.superview != nil)) {
            CFArrayRef runs = CTLineGetGlyphRuns(line);
            for (int j = 0; j < CFArrayGetCount(runs); j++) {
                CGFloat runAscent;
                CGFloat runDescent;
                CTRunRef run = CFArrayGetValueAtIndex(runs, j);

                NSDictionary* attributes = (__bridge NSDictionary*)CTRunGetAttributes(run);

                if (!CGColorEqualToColor((__bridge CGColorRef)([attributes valueForKey:@"CTForegroundColor"]), self.textColor.CGColor)
                    && _clickRangeFramesDict!=nil) {
                    CFRange range = CTRunGetStringRange(run);
                    CGRect runRect;
                    runRect.size.width = CTRunGetTypographicBounds(run, CFRangeMake(0,0), &runAscent, &runDescent, NULL);
                    float offset = CTLineGetOffsetForStringIndex(line, range.location, NULL);
                    float height = runAscent;
                    runRect = CGRectMake(lineOrigin.x + offset, (self.height+5)-y-height+runDescent/2, runRect.size.width, height);
                    NSRange nRange = NSMakeRange(range.location, range.length);
                    [_clickRangeFramesDict setValue:[NSValue valueWithCGRect:runRect] forKey:NSStringFromRange(nRange)];
                }
            }
        }
    }
}
@end

Then, the logic by which the table view detects fast scrolling:

#pragma mark - UIScrollViewDelegate

// Dragging begins: reset state
- (void)scrollViewWillBeginDragging:(UIScrollView *)scrollView{

    //1. Mark that we are scrolling
    _isScrolling = YES;

    //2. Clear the indexPaths of cells previously saved for drawing
    [_drawableIndexPaths removeAllObjects];
}

// [Fingers leave the screen] If the difference between the [final stop indexPath] and the [current indexPath] exceeds the given number of rows,
// only the cells in the target range (plus three extra rows in the scroll direction) get their content drawn.
- (void)scrollViewWillEndDragging:(UIScrollView *)scrollView
                     withVelocity:(CGPoint)velocity
              targetContentOffset:(inout CGPoint *)targetContentOffset
{
    //1. The indexPath of the cell at the top of the visible region before scrolling stops
    NSIndexPath *curIndexPath = [_tableView xzh_firstVisbledCellIndexPath];

    //2. The content offset (x,y) at the top of the visible area when scrolling finally stops
    CGPoint stoppedPoint = CGPointMake(0, targetContentOffset->y);

    //3. Query the indexPath located at stoppedPoint
    NSIndexPath *stopedIndexPath = [_tableView indexPathForRowAtPoint:stoppedPoint];

    //4.
    NSLog(@"curIndexPath = %@, stopedIndexPath = %@", curIndexPath, stopedIndexPath);

    //5. The maximum allowed difference in rows between the position before scrolling and where scrolling stops
    NSInteger skipCount = 8;

    /**
     *  6. If the distance from [the row before scrolling] to [the row where scrolling stops] exceeds skipCount:
     *  - (1) Skip drawing the cells in between
     *  - (2) Draw only the cells visible at the stop position, plus three extra rows in the scroll direction
     */
    BOOL isOverSkipCount = labs(stopedIndexPath.row - curIndexPath.row) > skipCount;

    //7. If skipCount is exceeded, perform (1) and (2) above
    if (isOverSkipCount) {

        //7.1 Get the indexPaths of the cells that will be visible on screen when scrolling finally stops
        NSArray *stoppedVisbleIndexpaths = [_tableView indexPathsForRowsInRect:CGRectMake(0,
                                                                                          targetContentOffset->y,
                                                                                          _tableView.width,
                                                                                          _tableView.height)];

        //7.2
        NSMutableArray *mutableIndexPaths = [NSMutableArray arrayWithArray:stoppedVisbleIndexpaths];

        //7.3 Determine whether we are scrolling forward or backward
        if (velocity.y > 0) {
            // 7.3.1 Scrolling forward
            // Get the indexPath of the last visible cell
            NSIndexPath *idx = [mutableIndexPaths lastObject];

            // Also queue up the indexPaths of the next three cells
            if ((idx.row + 3) < _tweetList.count) {
                NSIndexPath *next1 = [idx xzh_nextRow];
                NSIndexPath *next2 = [next1 xzh_nextRow];
                NSIndexPath *next3 = [next2 xzh_nextRow];
                [mutableIndexPaths addObject:next1];
                [mutableIndexPaths addObject:next2];
                [mutableIndexPaths addObject:next3];
            }

        } else {
            //7.3.2 Scrolling backward
            // Get the indexPath of the first visible cell
            NSIndexPath *idx = [mutableIndexPaths firstObject];

            // Also queue up the indexPaths of the previous three cells
            if ((idx.row - 3) >= 0) {
                NSIndexPath *prev1 = [idx xzh_previousRow];
                NSIndexPath *prev2 = [prev1 xzh_previousRow];
                NSIndexPath *prev3 = [prev2 xzh_previousRow];
                [mutableIndexPaths addObject:prev1];
                [mutableIndexPaths addObject:prev2];
                [mutableIndexPaths addObject:prev3];
            }
        }

        //7.4 Save the indexPath of the cell that needs to be drawn
        [_drawableIndexPaths addObjectsFromArray:mutableIndexPaths];

    } else {

        /**
         *  If we get here, the scroll view's deceleration delegate methods below will not fire.
         *  So mark scrolling as stopped and draw the subviews in the current visible area directly.
         */

        //8.1 Mark scrolling stopped
        _isScrolling = NO;

        //8.2 Draw the cells in the current visible region
        [self drawVisbledCells];
    }
}

// [Is scrolling allowed to the top]
- (BOOL)scrollViewShouldScrollToTop:(UIScrollView *)scrollView{

    //1. Mark that we are scrolling
    _isScrolling = YES;

    //2. Allow scrolling to the top
    return YES;
}

// [Scrolled to the top]
- (void)scrollViewDidScrollToTop:(UIScrollView *)scrollView{

    //1. Mark scrolling stopped
    _isScrolling = NO;

    //2. Draw the currently visible cell
    [self drawVisbledCells];
}


// [Deceleration ended]
- (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView {

    //1. Mark scrolling stopped
    _isScrolling = NO;

    //2. Draw the currently visible cell
    [self drawVisbledCells];
}

The main lessons I took from this demo: background threads use CoreText to render text into images asynchronously, and the images are set directly on layers for display; and while the scroll view scrolls quickly, cells skipped past beyond a threshold are filtered out and never drawn.

The demo also rewrites part of the SDWebImage code: after downloading an image, it draws the image directly into a context, obtains the rendered image from the context, and hands it to a layer for display.

I also added a couple of optimization points of my own:

1. Caching of dispatch_queue_t
 2. Caching of CTFrameRef

Because dispatch_get_global_queue() returns a concurrent queue, it may create any number of threads; that is beyond our control and depends entirely on the mood of the GCD internals.

The YYKit author describes the possibility that, when drawing asynchronously on concurrent queues, a thread may stay blocked for a long time. Once that happens, the GCD layer creates new threads to run the other drawing blocks that are waiting to execute.

As more and more blocks wait, more and more new threads get created, seemingly without bound (that may be an exaggeration), but there is still a real CPU cost.

The YYKit author therefore wrote a cache container for serial dispatch_queue_t instances, built on the QOS classes recommended since iOS 8. Each QOS maps to a context, and each context holds as many serial dispatch_queue_t instances as the CPU has active cores. The structure of the pool:

- Pool
    - (1) QOS_CLASS_USER_INITIATED DispatchContext object
        - Cached dispatch_queue_t instance 1
        - Cached dispatch_queue_t instance 2
        - ....
        - Cached dispatch_queue_t instance n
    - (2) QOS_CLASS_DEFAULT DispatchContext object
        - Cached dispatch_queue_t instance 1
        - Cached dispatch_queue_t instance 2
        - ....
        - Cached dispatch_queue_t instance n
    - (3) QOS_CLASS_UTILITY DispatchContext object
        - Cached dispatch_queue_t instance 1
        - Cached dispatch_queue_t instance 2
        - ....
        - Cached dispatch_queue_t instance n
    - (4) QOS_CLASS_BACKGROUND DispatchContext object
        - Cached dispatch_queue_t instance 1
        - Cached dispatch_queue_t instance 2
        - ....
        - Cached dispatch_queue_t instance n

This way threads get reused, the thread count stays under control, and the CPU cores are kept fully busy — a good deal all around. At first I also wondered: why not just cache NSThread objects directly, the way AFNetworking keeps a resident background thread?

Later I tried it, and the main reason against it is simple: it's a hassle... Ha, it really is too troublesome, and a bit hard. The first problem is keeping the NSThread object alive — not merely retaining it, but keeping it able to receive and execute work.

You can try saving an NSThread object and continuously assigning tasks to it to see what goes wrong (a sketch of the standard workaround follows). Then there are all the synchronization and mutual-exclusion issues, and the pool needs to reuse threads while creating more when there aren't enough. In short, it's hard, so I abandoned the idea. Put another way: when something as good as dispatch_queue_t already exists, don't reinvent the wheel.
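
For reference, here is roughly what the AFNetworking-style resident thread looks like — the tricky part mentioned above. The thread stays alive only because a port keeps its run loop from running out of sources; the names here are illustrative:

+ (NSThread *)xzh_residentThread {
    static NSThread *thread = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        thread = [[NSThread alloc] initWithTarget:self
                                         selector:@selector(xzh_threadEntry)
                                           object:nil];
        [thread start];
    });
    return thread;
}

+ (void)xzh_threadEntry {
    @autoreleasepool {
        [[NSThread currentThread] setName:@"com.xzh.resident"];
        NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
        // Attach a dummy port so the run loop always has a source and never exits
        [runLoop addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];
        [runLoop run];
    }
}

// Work is then funneled to it with:
// [self performSelector:@selector(doWork) onThread:[SomeClass xzh_residentThread]
//            withObject:nil waitUntilDone:NO];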

I simplified it again. Its main APIs are as follows:

// Create the pool
void XZHDispatchQueuePoolCreate();

// Destroy the pool when it is no longer needed. (Though in my opinion it should
// not be released at all: this cache pool should be used globally.)
void XZHDispatchQueuePoolRelease();

// Dispatch a drawing block to a cached serial queue under a given QOS
- (void)test2 {

    XZHDispatchQueueAsyncBlockWithQOSUserInteractive(^{
        NSLog(@"task1 : %@", [NSThread currentThread]);
    });

    XZHDispatchQueueAsyncBlockWithQOSUserInitiated(^{
        NSLog(@"task2 : %@", [NSThread currentThread]);
    });

    XZHDispatchQueueAsyncBlockWithQOSUtility(^{
        NSLog(@"task3 : %@", [NSThread currentThread]);
    });

    XZHDispatchQueueAsyncBlockWithQOSBackgroud(^{
        NSLog(@"task4 : %@", [NSThread currentThread]);
    });
}
This way, drawing tasks are dispatched according to QOS level, so they execute with the appropriate priority, and using serial queues guarantees that drawing never spawns an unbounded number of threads.
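
A minimal sketch of the pool idea for a single QOS class — the real pool keeps one such group of queues per QOS. The 32-queue cap and names are my own, and this is only one plausible body for the XZHDispatchQueueAsyncBlockWithQOSBackgroud function used throughout:

#import <Foundation/Foundation.h>
#import <stdatomic.h>

// Hand out one of a fixed set of serial background queues, round-robin,
// so the number of drawing threads is bounded by the active core count.
static dispatch_queue_t XZHBackgroundDrawQueue(void) {
    static dispatch_queue_t queues[32];
    static int queueCount;
    static atomic_int counter;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        queueCount = (int)[NSProcessInfo processInfo].activeProcessorCount;
        queueCount = MIN(MAX(queueCount, 1), 32);
        for (int i = 0; i < queueCount; i++) {
            dispatch_queue_attr_t attr = dispatch_queue_attr_make_with_qos_class(
                DISPATCH_QUEUE_SERIAL, QOS_CLASS_BACKGROUND, 0);
            queues[i] = dispatch_queue_create("com.xzh.render", attr);
        }
    });
    int i = atomic_fetch_add(&counter, 1);
    return queues[i % queueCount];
}

void XZHDispatchQueueAsyncBlockWithQOSBackgroud(dispatch_block_t block) {
    dispatch_async(XZHBackgroundDrawQueue(), block);
}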

Second, caching the CTFrameRef instance that CoreText computes from the text:

- (void)asyncDraw {

    /**
     *  The process of rendering and generating images is all completed asynchronously in sub-threads.
     */

    XZHDispatchQueueAsyncBlockWithQOSBackgroud(^{
        ..........................     

        // md5 for text
         NSString *md5 = [_text xzh_MD5];

        // Use MD5 to fetch CTFrameRef from memory cache
         CTFrameRef ctFrame = CTFrameForKey(md5);

        // Determine whether to use cached CTFrameRef for direct rendering
        if (!_highlighting && ctFrame) {
            // Drawing with cached CTFrame s eliminates the need for text parsing and rendering
            [self drawWithCTFrame:ctFrame inRect:rect context:context];
        } else {
            // Re-walk the process of resolving CTFrameRef
            CTFrameRef ctFrame = ............;

            // Cache CTFrameRef instances to avoid repeated parsing of the same text
            CacheCTFrameWithKey(ctFrame, md5);

            ...........

        }
    });
}

This, too, is a small optimization: it avoids re-parsing the same passage of text over and over.
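
The two cache helpers used above, CTFrameForKey and CacheCTFrameWithKey, are not shown in the snippets; here is a guess at a minimal implementation using NSCache, which can hold a CFTypeRef bridged to id:

#import <CoreText/CoreText.h>

static NSCache *XZHCTFrameCache(void) {
    static NSCache *cache = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        cache = [[NSCache alloc] init];
        cache.countLimit = 200; // arbitrary cap on cached layouts
    });
    return cache;
}

// Look up a laid-out CTFrameRef by the MD5 of its source text
static CTFrameRef CTFrameForKey(NSString *key) {
    return (__bridge CTFrameRef)[XZHCTFrameCache() objectForKey:key];
}

// Cache a CTFrameRef; NSCache retains it like any object, so the drawing
// code must not CFRelease() a frame that has been cached
static void CacheCTFrameWithKey(CTFrameRef frame, NSString *key) {
    if (frame && key) [XZHCTFrameCache() setObject:(__bridge id)frame forKey:key];
}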

Before reading the YYAsyncLayer source

Before starting, I had a question. The system already provides many specialized CALayer subclasses for efficient drawing — is yet another custom CALayer really needed?

- (1) CAShapeLayer: vector shape drawing
 - (2) CATextLayer: text drawing
 - (3) CATransformLayer: 3D transform hierarchies
 - (4) CAGradientLayer: gradient drawing
 - (5) CAReplicatorLayer: repeats its content in multiple styles
 - (6) CAScrollLayer: similar to a scroll view
 - (7) CATiledLayer: cuts a large image into n tiles and loads them on demand
 - (8) CAEmitterLayer: particle effects (less commonly used)
 - (9) CAEAGLLayer: OpenGL ES drawing (less commonly used)
 - (10) AVPlayerLayer: a layer for video playback

With these specialized CALayers, most efficient drawing can already be done. Is a custom one really needed?

CALayer even provides drawsAsynchronously, a property for asynchronous drawing. Do we need to write another CALayer just for asynchronous drawing?

Here's a simple example that uses this asynchronous-drawing property to do some simple CoreGraphics drawing:

#import <QuartzCore/QuartzCore.h>
#import <UIKit/UIKit.h>

@interface CALayerSub : CALayer

@end
@implementation CALayerSub

- (void)drawInContext:(CGContextRef)ctx {
    NSLog(@"thread = %@", [NSThread currentThread]);

    // Draw an image
    //CGContextDrawImage(ctx, self.bounds, [UIImage imageNamed:@"demo"].CGImage);

    // Draw an ellipse
    CGContextAddEllipseInRect(ctx, self.bounds);
    CGContextSetFillColorWithColor(ctx, [UIColor orangeColor].CGColor);
    CGContextFillPath(ctx);

    NSLog(@"thread = %@", [NSThread currentThread]);
}

@end

Testing it in a view controller:

#import "ViewController.h"
#import "CALayerSub.h"

@interface ViewController ()

@end

@implementation ViewController

- (void)test1 {
    CALayerSub *layer = [CALayerSub layer];
    layer.drawsAsynchronously = YES;
    layer.frame = CGRectMake(50, 100, 200, 100);
    [self.view.layer addSublayer:layer];
    [layer setNeedsDisplay];
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    [self test1];
}

@end

The printed output:

2017-03-09 18:21:48.673 XZHAsyncLayerDemo[21807:1380124] thread = <NSThread: 0x60000006acc0>{number = 1, name = main}
2017-03-09 18:21:48.674 XZHAsyncLayerDemo[21807:1380124] thread = <NSThread: 0x60000006acc0>{number = 1, name = main}

We find it's still on the main thread — not what we expected. Strange: isn't this supposed to be asynchronous, on a background thread?

Eventually I found the explanation in a foreign technical article: drawInContext: itself still executes on the main thread; only the actual drawing commands — the Core Graphics calls — are executed on a background thread.

And even though the CoreGraphics drawing happens on a background thread, some of the work before drawing still happens on the main thread. For example:

  • (1) Reading image files, text files, and so on
  • (2) Image decompression
  • (3) Creating and disposing of the auxiliary objects involved in drawing

All of that is still done on the main thread. In other words, the asynchrony is not thorough — one of the shortcomings of the system CALayer.

Next, an attempt to create the CALayer object on a background thread, configure it there, and finally return to the main thread to add it to VC.view's layer:

@implementation ViewController

- (void)test1 {
    CALayerSub *layer = [CALayerSub layer];
//    layer.drawsAsynchronously = YES;
    layer.frame = CGRectMake(50, 100, 200, 100);
    layer.borderWidth = 1;
    layer.contents = (__bridge id)([UIImage imageNamed:@"demo"].CGImage);

    dispatch_async(dispatch_get_main_queue(), ^{
        [self.view.layer addSublayer:layer];
    });
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
          [self test1];
    });
}

@end

After running it, there is no crash, but the image appears slightly late. The reason: we return to the main thread and add the CALayer immediately, before the background thread has finished rendering the layer's internal data.

So I changed it: render on the background thread first, and only then go back to the main thread to add the CALayer:

@interface ViewController ()
@property (weak, nonatomic) IBOutlet UIImageView *imageview;
@end
@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.imageview.image = [UIImage imageNamed:@"demo"];
}

- (void)testScreenShotAsync {
    CALayer *layer = self.imageview.layer;
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        UIImage *image = [layer xzh_screenShot]; // a CALayer snapshot helper found online
        CALayer *bottomLayer = [CALayer layer];
        bottomLayer.contentsScale = [UIScreen mainScreen].scale;
        bottomLayer.frame = CGRectMake(50, 200, 200, 150);
        bottomLayer.borderWidth = 1;
        bottomLayer.borderColor = [UIColor redColor].CGColor;
        bottomLayer.contents = (__bridge id)(image.CGImage);
        dispatch_async(dispatch_get_main_queue(), ^{
            [self.view.layer addSublayer:bottomLayer];
        });
    });
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
      [self testScreenShotAsync];
}

@end

Here is the result of running it:

(demo.gif)

So in the earlier example, by returning to the main thread without waiting for the background thread to finish rendering the image, I suspect the remaining image rendering was carried back to the main thread and completed there.

There is no code there that renders the layer's contents ourselves, so the subsequent image rendering must have been done on the main thread.

Comparing the two examples, the second is clearly better: the image rendering completes on the background thread, and the main thread only does the final display, which saves it a lot of time.

You can go even further: image decompression, decoding, corner rounding, shadows, masks... can all be done on background threads and then cached in memory. Think how much main-thread time that would save.

Finally, I looked at CALayer's header file and noticed that the property declarations of CALayer (and the other specialized layers) generally omit the nonatomic modifier — that is, they default to atomic. Atomic accessors serialize multithreaded access, so in principle these properties can be touched from any thread; it's only when you manipulate a UIView object that you must return to the main thread.

My guess is that operating on a standalone CALayer object from a background thread should be workable, but its correctness still needs verification.

CALayer's setNeedsDisplay vs. display

- (void)test1 {
    CALayerSub *layer = [CALayerSub layer];
//    layer.drawsAsynchronously = YES; // commenting this out makes no difference

    layer.frame = CGRectMake(50, 100, 200, 100);
    [self.view.layer addSublayer:layer];


    for (int i = 0; i < 10; i++) {
        layer.backgroundColor = [UIColor randomColor].CGColor;
//        [layer display]; // would force an immediate redraw, 10 times
        [layer setNeedsDisplay]; // coalesced: only the last request is drawn
    }
}

The output with [layer setNeedsDisplay]:

2017-03-09 19:16:35.469 XZHAsyncLayerDemo[22421:1428266] thread = <NSThread: 0x6080000680c0>{number = 1, name = main}

The output with [layer display]:

2017-03-09 19:17:28.803 XZHAsyncLayerDemo[22472:1429927] thread = <NSThread: 0x60000006a3c0>{number = 1, name = main}
2017-03-09 19:17:28.803 XZHAsyncLayerDemo[22472:1429927] thread = <NSThread: 0x60000006a3c0>{number = 1, name = main}
2017-03-09 19:17:28.803 XZHAsyncLayerDemo[22472:1429927] thread = <NSThread: 0x60000006a3c0>{number = 1, name = main}
2017-03-09 19:17:28.804 XZHAsyncLayerDemo[22472:1429927] thread = <NSThread: 0x60000006a3c0>{number = 1, name = main}
2017-03-09 19:17:28.804 XZHAsyncLayerDemo[22472:1429927] thread = <NSThread: 0x60000006a3c0>{number = 1, name = main}
2017-03-09 19:17:28.804 XZHAsyncLayerDemo[22472:1429927] thread = <NSThread: 0x60000006a3c0>{number = 1, name = main}
2017-03-09 19:17:28.804 XZHAsyncLayerDemo[22472:1429927] thread = <NSThread: 0x60000006a3c0>{number = 1, name = main}
2017-03-09 19:17:28.804 XZHAsyncLayerDemo[22472:1429927] thread = <NSThread: 0x60000006a3c0>{number = 1, name = main}
2017-03-09 19:17:28.805 XZHAsyncLayerDemo[22472:1429927] thread = <NSThread: 0x60000006a3c0>{number = 1, name = main}
2017-03-09 19:17:28.805 XZHAsyncLayerDemo[22472:1429927] thread = <NSThread: 0x60000006a3c0>{number = 1, name = main}

The difference: display redraws every single time; setNeedsDisplay draws only the final data.

Note that with setNeedsDisplay, a pending draw that has not yet started is superseded by a newer request, and only the last one is actually drawn.

This is one of the behaviors YYAsyncLayer imitates; it is used to optimize frequent, repeated redraw requests — that is, to draw only the last set of data.

And YYAsyncLayer is fully asynchronous on background threads: it can pass the rendered image directly to CALayer's contents property, bypassing the UIView object entirely, which may be an optimization over even the system CALayer.

Why does YYAsyncLayer inherit from CALayer rather than from one of the specialized CALayer subclasses?

YYAsyncLayer inherits directly from CALayer, not from any specialized layer. I think that once you inherit from a specialized layer, you can only do that one kind of thing; inheriting from CALayer makes it a general-purpose, asynchronous, background-drawing CALayer.

When YYAsyncLayer performs its internal drawing, it only creates a context and passes it back out for the actual drawing. In other words, the caller receives the context and can draw anything on it: text, images, custom shapes, paths, and so on. You can even draw separator lines by hand instead of composing various UIView objects.

Eventually, on a background thread, YYAsyncLayer renders everything the caller drew into the context into a CGImageRef instance, then assigns it directly to the layer's contents property for display.

All file reading, image decompression, image decoding, and special-effect drawing happen on background threads — more efficient than the system CALayer.

Why YYAsyncLayer is not designed to be used as a standalone CALayer

Because a bare CALayer is cumbersome to use on its own: it has no event handling, it cannot respond when the screen rotates, and managing the layer hierarchy is less convenient than with UIView.

So YYAsyncLayer is not designed as a standalone layer, but as the backing layer of a view (for example, a custom UILabel replacement that draws its own text).

Unlike an ordinary backing layer, it keeps only the view-level operations on the main thread and pushes all file reading, drawing, and rendering to background threads, fully relieving the main thread.

So it's best to have a UIView container on the outside; everything inside can then be drawn directly via the context that YYAsyncLayer hands back.

YYAsyncLayer source learning

As the name implies, it is an asynchronous CALayer. In short: the layer's content is rendered on a background thread into a final image, which is then assigned to the CALayer. That is the core function of this code.

For example, here is the simplest kind of demo — drawing on a background thread and displaying the result directly:

@implementation CoreTextDemoVC {
    UIImageView *_labelImageView;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    [self drawText2_3];
}

- (void)drawText2_3 {
    CGRect rect = CGRectMake(10, 100, 300, 300);
    _labelImageView = [[UIImageView alloc] initWithFrame:rect];
    _labelImageView.layer.borderWidth = 1;
    _labelImageView.contentMode = UIViewContentModeScaleAspectFit;
    _labelImageView.clipsToBounds = YES;
    [self.view addSubview:_labelImageView];

    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [[UIColor whiteColor] set];
        CGContextFillRect(context, rect);
        CGContextSetTextMatrix(context, CGAffineTransformIdentity);
        CGContextTranslateCTM(context, 0, rect.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);

        NSMutableAttributedString *attrString = [[NSMutableAttributedString alloc] initWithString:@"iOS When the program starts, it creates a main thread, while a thread can only perform one thing. If the main thread performs some time-consuming operations, such as loading network pictures, downloading resource files and so on, it will block the main thread (causing the interface to be stuck and unable to interact), so we need to use multithreading technology to avoid this kind of situation. Condition. iOS There are three kinds of multithreading technologies NSThread,NSOperation,GCD,These three technologies follow IOS With the introduction of development, the level of abstraction has changed from low to high, and its use has become more and more simple."];
        CTFramesetterRef frameSetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)attrString);
        CGMutablePathRef path = CGPathCreateMutable();
        CGPathAddEllipseInRect(path, NULL, CGRectMake(0, 0, rect.size.width, rect.size.height));
        [[UIColor redColor]set];
        CGContextFillEllipseInRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));
        CTFrameRef frame = CTFramesetterCreateFrame(frameSetter, CFRangeMake(0, [attrString length]), path, NULL);
        CTFrameDraw(frame, context);
        CFRelease(frame);
        CFRelease(path);
        CFRelease(frameSetter);
        CGContextSetTextMatrix(context, CGAffineTransformIdentity);
        CGContextTranslateCTM(context, 0, rect.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        dispatch_async(dispatch_get_main_queue(), ^{
            _labelImageView.image = img;
        });
    });
}

@end

That's just a passage of text, but it is one of the basic ideas behind the performance work.

I won't paste the YYAsyncLayer source itself; instead, here is its workflow from start to finish — read the source alongside it if you want to study it. (A minimal usage sketch follows the list.)

1. The UIView's backing layer is set to YYAsyncLayer
2. Modifying the view's text, textColor, textSize, and so on requires redrawing the content
3. [UIView setNeedsDisplay] or [view.layer setNeedsDisplay] triggers a redraw for the modified attribute value
4. Each change — {UI object, text, setNeedsDisplay}, {UI object, textColor, setNeedsDisplay}, {UI object, textSize, setNeedsDisplay} — is packaged into a Transaction object
5. -[Transaction commit]
6. The Transaction objects currently being committed are collected in a temporary Set container
7. When the RunLoop is about to sleep, all Transaction objects in the Set are handed to the RunLoop for temporary storage, and the RunLoop goes to sleep
8. On its next wakeup, the RunLoop executes all the Transaction objects stored last time
9. Each Transaction sends the message for its recorded SEL — generally this ends up calling [CALayer setNeedsDisplay]
10. -[YYAsyncLayer setNeedsDisplay] is called
11. -[YYAsyncLayer display] is called
12. -[YYAsyncLayer _displayAsync:] is called for asynchronous drawing
13. A new task object is created for the redraw: YYAsyncLayerDisplayTask *task = [layer.delegate newAsyncDisplayTask];
14. task.willDisplay() tells the outside world that drawing is about to begin
15. An isCancelled block, used to check whether the draw was cancelled, is passed to the caller; it captures the current counter value and the counter object
16. Hop to an asynchronous background thread
17. Create the CGContext canvas and initialize it
18. if (task.didDisplay) task.didDisplay(self, finished);
19. Finish drawing: UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
20. layer.contents = (__bridge id)(image.CGImage);
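
To make steps 1 and 13 concrete, here is the minimal usage pattern from YYAsyncLayer's own header: the view makes YYAsyncLayer its backing layer via +layerClass and, being automatically its layer's delegate, hands back a display task. The view class name and the drawing inside the block are mine:

#import <UIKit/UIKit.h>
#import "YYAsyncLayer.h"

@interface XZHAsyncView : UIView <YYAsyncLayerDelegate>
@end

@implementation XZHAsyncView

//1. Make YYAsyncLayer the backing layer of this view
+ (Class)layerClass {
    return [YYAsyncLayer class];
}

//13. Called by the layer for every (re)draw
- (YYAsyncLayerDisplayTask *)newAsyncDisplayTask {
    YYAsyncLayerDisplayTask *task = [YYAsyncLayerDisplayTask new];
    task.display = ^(CGContextRef context, CGSize size, BOOL (^isCancelled)(void)) {
        if (isCancelled()) return; // a newer draw has superseded this one
        // Draw anything here, on a background thread: text, images, shapes...
        CGContextSetFillColorWithColor(context, [UIColor orangeColor].CGColor);
        CGContextFillEllipseInRect(context, CGRectMake(0, 0, size.width, size.height));
    };
    return task;
}

@end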

Another thing I benefited a lot from is the YYTransaction class — YYKit adapted it from the ASDK source. From it I learned a little about how Core Animation batches interface redraws:

  • (1) Core Animation packages each redraw operation into a Transaction object and commits it
  • (2) A committed Transaction does not start redrawing immediately; it is added to a temporary in-memory store
  • (3) When the main thread's RunLoop is about to sleep, the redraw operations of the committed Transaction objects are registered with the RunLoop — only registered, not processed immediately, because the RunLoop is about to sleep
  • (4) When the RunLoop wakes for its next cycle, it processes all the previously registered Transaction objects and notifies them to redraw
  • (5) Eventually, this comes back around to executing setNeedsDisplay on the UIView or CALayer
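
Condensed from the YYTransaction source, the mechanism boils down to a low-priority run loop observer on the main run loop. XZHTransaction here stands in for YYTransaction:

@interface XZHTransaction : NSObject
@property (nonatomic, strong) id target;
@property (nonatomic, assign) SEL selector;
@end

@implementation XZHTransaction
@end

static NSMutableSet *transactionSet = nil;

static void XZHTransactionSetup(void) {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        transactionSet = [NSMutableSet new];
        // Fire just before the main run loop sleeps or exits, after
        // CoreAnimation's own commit observer (hence the low priority)
        CFRunLoopObserverRef observer = CFRunLoopObserverCreateWithHandler(
            kCFAllocatorDefault,
            kCFRunLoopBeforeWaiting | kCFRunLoopExit,
            true, 0xFFFFFF,
            ^(CFRunLoopObserverRef obs, CFRunLoopActivity activity) {
                if (transactionSet.count == 0) return;
                NSSet *currentSet = transactionSet;
                transactionSet = [NSMutableSet new];
                // Each transaction sends its stored selector to its target,
                // which typically ends in -[CALayer setNeedsDisplay]
                for (XZHTransaction *t in currentSet) {
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Warc-performSelector-leaks"
                    [t.target performSelector:t.selector];
#pragma clang diagnostic pop
                }
            });
        CFRunLoopAddObserver(CFRunLoopGetMain(), observer, kCFRunLoopCommonModes);
        CFRelease(observer);
    });
}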

There is an isCancelled block that lets the caller know whether YYAsyncLayer has internally cancelled the current drawing operation. Whenever -[UIView setNeedsDisplay] or -[CALayer setNeedsDisplay] executes, a new drawing task starts; YYAsyncLayer, imitating the system CALayer, immediately cancels the current or pending drawing task and performs only the latest one.

When the callback finds isCancelled() == YES, the current drawing task has been cancelled and nothing more is drawn. For example, a fast-scrolled cell gets reused for another NSIndexPath — that is, redrawn, that is, setNeedsDisplay is called to start a new drawing task — so the content for the NSIndexPath that was scrolled past is naturally never rendered.

In this way, the problem of drawing invisible cells during fast scrolling is solved, without VVeboTableView's blank-cell side effect.

VVeboTableView draws the visible cells only after fast scrolling stops. YYAsyncLayer, by contrast, submits a YYTransaction containing the redraw operation every time setNeedsDisplay is called, then simply waits for the main RunLoop's next wakeup to perform it. Once a cell scrolls out of sight quickly and is reused, setNeedsDisplay is called again and the previous drawing is cancelled.

While drawing, it keeps checking whether a newer draw exists — that is, whether setNeedsDisplay has been executed again on the UIView or CALayer. The clever use of an incrementing counter, with the block capturing a copy of its value, really makes you admire the author's mind.
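
The counter trick in miniature, condensed from YYAsyncLayer's YYSentinel (the C11 atomic is my substitution for its OSAtomic call, and this reuses the queue helper sketched earlier):

#import <stdatomic.h>

static atomic_int_fast32_t sentinel; // bumped on every new display request

// Called for every setNeedsDisplay-triggered (re)draw
static void XZHDisplayAsync(dispatch_queue_t drawQueue, void (^draw)(BOOL (^isCancelled)(void))) {
    //1. Take the counter's new value; the blocks capture this copy
    int32_t value = (int32_t)atomic_fetch_add(&sentinel, 1) + 1;
    BOOL (^isCancelled)(void) = ^BOOL {
        //2. If the counter has moved on, a newer draw has started
        return value != atomic_load(&sentinel);
    };
    dispatch_async(drawQueue, ^{
        if (isCancelled()) return; // abandoned before it even started
        draw(isCancelled);         // the draw block checks isCancelled() periodically
    });
}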

Here, simply enumerated, is the logic of YYAsyncLayer's drawing:

  • (1) Get the Task object for the current YYAsyncLayer drawing task
  • (2) Distinguish whether the task draws synchronously or asynchronously
  • (3) Compare counter values to judge whether a newer drawing task has already started
  • (4) If the width or height of the drawing area is less than 1, abort the drawing and release contents
  • (5) Hop to a background thread and start creating the context
  • (6) Hand the initialized context back to the caller, who is free to draw anything on it:
    • a passage of text
    • some images
    • or a mix of text and images
  • (7) After drawing, finish up the context on the background thread
  • (8) Get the rendered image from the context, then assign it to layer.contents for display
  • (9) Throughout, check several times whether the outside world has executed -[CALayer setNeedsDisplay] again, starting a new drawing task. If so, abort the current drawing, call back to the outside world, and release the objects created during drawing.

Based on this, I think this is exactly why YYAsyncLayer inherits from CALayer rather than from some specialized CALayer subclass: it wants to be a CALayer that can render arbitrary content.

YYAsyncLayer renders just a single CALayer asynchronously — what about many stacked, overlapping CALayers? From articles I've read, that is the direction of ASDK's optimization:

  • (1) A single CALayer's drawing is rendered into an image on an asynchronous background thread
  • (2) Multiple CALayers are each rendered into images, which are then composited into one image
  • (3) The final composite image is set on the outermost container view's layer (or a container layer) for display

Well — once YYAsyncLayer is digested, it's time to go climb the ASDK mountain.
