1, Create MTKView
self.mtkView = [[MTKView alloc] initWithFrame:self.view.bounds];
self.mtkView.device = MTLCreateSystemDefaultDevice();
self.mtkView.delegate = self;
self.mtkView.framebufferOnly = NO;
[self.view insertSubview:self.mtkView atIndex:0];
Note the framebufferOnly property. It is set to NO here, while the default is YES. If it is left at the default, then when drawInMTKView executes
[filter encodeToCommandBuffer:commandBuffer sourceTexture:self.texture destinationTexture:view.currentDrawable.texture];
the following assertion failure is raised:
-[MTLDebugComputeCommandEncoder setTexture:atIndex:]:380: failed assertion `frameBufferOnly texture not supported for compute.'
Here the filter's encodeToCommandBuffer: method writes the sourceTexture into the destinationTexture. If framebufferOnly is not set to NO, view.currentDrawable.texture can only be used as a render target, so a compute pass (such as this filter) is not allowed to write to it.
Also add the following when creating the MTKView:
CVMetalTextureCacheCreate(NULL, NULL, self.mtkView.device, NULL, &_textureCache);
This creates the CVMetalTextureCacheRef _textureCache, Core Video's Metal texture cache, which is later used to convert camera pixel buffers into Metal textures. The declarations and command-queue setup the rest of this walkthrough assumes are sketched below.
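For orientation, here is a minimal sketch of the declarations this walkthrough relies on; the class name, the class extension layout and the command-queue line are assumptions (they are not shown in the original code), but the property names match the snippets above and below:

#import <MetalKit/MetalKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>

// Assumed class extension: the controller acts as the MTKView delegate and,
// later, as the sample-buffer delegate of the video output.
@interface ViewController () <MTKViewDelegate, AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) MTKView *mtkView;
@property (nonatomic, strong) id<MTLCommandQueue> commandQueue;
@property (nonatomic, strong) id<MTLTexture> texture;
@property (nonatomic, assign) CVMetalTextureCacheRef textureCache;
@property (nonatomic, strong) AVCaptureSession *mCaptureSession;
@property (nonatomic, strong) AVCaptureDeviceInput *mCaptureInput;
@property (nonatomic, strong) AVCaptureVideoDataOutput *mCaptureOutput;
@property (nonatomic, strong) dispatch_queue_t mProcessQueue;
@end

// Assumed: the command queue used later in drawInMTKView is created once,
// right after the MTKView is given its device.
self.commandQueue = [self.mtkView.device newCommandQueue];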
2, Since the camera content is displayed in real time, we need a pipeline that runs from capturing camera frames to displaying them on screen
1. This pipeline is driven by an AVCaptureSession, so create the session first
self.mCaptureSession = [[AVCaptureSession alloc] init];
self.mCaptureSession.sessionPreset = AVCaptureSessionPreset1920x1080;
The 1920x1080 preset specifies the output resolution requested from the camera (it must fall within the resolutions the camera supports); the larger the preset, the sharper the on-screen image. A quick support check is sketched below.
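If you want to guard against devices that cannot deliver 1080p, a minimal sketch of such a check might look like this (the fallback preset chosen here is an assumption):

if ([self.mCaptureSession canSetSessionPreset:AVCaptureSessionPreset1920x1080]) {
    self.mCaptureSession.sessionPreset = AVCaptureSessionPreset1920x1080;
} else {
    // Hypothetical fallback: AVCaptureSessionPresetHigh is available on every device.
    self.mCaptureSession.sessionPreset = AVCaptureSessionPresetHigh;
}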
2. Set the camera as the capture session's input device
// Input settings
// There are front and back cameras; the back camera is used here
AVCaptureDevice *inputDevice = nil;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices) {
    if ([device position] == AVCaptureDevicePositionBack) {
        inputDevice = device;
    }
}
self.mCaptureInput = [[AVCaptureDeviceInput alloc] initWithDevice:inputDevice error:NULL];
if ([self.mCaptureSession canAddInput:self.mCaptureInput]) {
    [self.mCaptureSession addInput:self.mCaptureInput];
}
As you can see, we first obtain all video capture devices, then enumerate them to find the rear camera and use it as the input. The input is only added to the session after checking that it can be added. A more modern way to look up the device is sketched below.
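Note that devicesWithMediaType: has been deprecated since iOS 10; a hedged alternative using AVCaptureDeviceDiscoverySession to find the rear wide-angle camera might look like this:

AVCaptureDeviceDiscoverySession *discovery =
    [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                           mediaType:AVMediaTypeVideo
                                                            position:AVCaptureDevicePositionBack];
AVCaptureDevice *inputDevice = discovery.devices.firstObject;   // nil if no matching camera exists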
3. Set the format of the data output to the screen
// Output settings: the data is output to the interface
self.mCaptureOutput = [[AVCaptureVideoDataOutput alloc] init];
// The output arrives frame by frame; decide whether late frames are dropped when the display lags
[self.mCaptureOutput setAlwaysDiscardsLateVideoFrames:NO];
// The format here is BGRA rather than YUV, to avoid a color-space conversion in a shader
[self.mCaptureOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                                  forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
// Set the sample-buffer callback and use a serial dispatch queue
self.mProcessQueue = dispatch_queue_create("mProcessQueue", DISPATCH_QUEUE_SERIAL);
[self.mCaptureOutput setSampleBufferDelegate:self queue:self.mProcessQueue];
if ([self.mCaptureSession canAddOutput:self.mCaptureOutput]) {
    [self.mCaptureSession addOutput:self.mCaptureOutput];
}
Because the frames must be displayed in order, a serial dispatch queue is used. The class itself is set as the delegate, and the corresponding captureOutput callback is implemented through the AVCaptureVideoDataOutputSampleBufferDelegate protocol.
4. Set the video orientation of the connection
// Set the displayed video orientation
AVCaptureConnection *connection = [self.mCaptureOutput connectionWithMediaType:AVMediaTypeVideo];
[connection setVideoOrientation:AVCaptureVideoOrientationPortrait];
This sets, on the output's connection, the orientation of the video relative to the home button (portrait here).
5. Finally, with the input and output wired together, start the session running (see the threading note after the call below)
[self.mCaptureSession startRunning];
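startRunning is a blocking call that can take a noticeable amount of time, so Apple recommends calling it off the main thread; a minimal sketch (the choice of a global queue here is an assumption):

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    // Blocks until the session has started (or failed); keep it off the main thread.
    [self.mCaptureSession startRunning];
});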
3, captureOutput delivers frames from the output device. Each captured frame must be converted into a Metal texture so that drawInMTKView has the content it needs to display
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    CVMetalTextureRef tempTexture = nil;
    // MTLPixelFormatBGRA8Unorm here matches the kCVPixelFormatType_32BGRA set on the output
    CVReturn status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, self.textureCache, pixelBuffer, NULL, MTLPixelFormatBGRA8Unorm, width, height, 0, &tempTexture);
    if (status == kCVReturnSuccess) {
        self.mtkView.drawableSize = CGSizeMake(width, height);
        self.texture = CVMetalTextureGetTexture(tempTexture);
        CFRelease(tempTexture);
    }
}
4, drawInMTKView, the MTKView display callback invoked for every frame
- (void)drawInMTKView:(nonnull MTKView *)view {
    if (self.texture) {
        id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];
        MPSImageGaussianBlur *filter = [[MPSImageGaussianBlur alloc] initWithDevice:self.mtkView.device sigma:10];
        [filter encodeToCommandBuffer:commandBuffer sourceTexture:self.texture destinationTexture:view.currentDrawable.texture];
        [commandBuffer presentDrawable:view.currentDrawable];
        [commandBuffer commit];
        self.texture = nil;
    }
}
1. The header file for MPSImageGaussianBlur is #import <MetalPerformanceShaders/MetalPerformanceShaders.h>
MPSImageGaussianBlur belongs to the Metal Performance Shaders framework, so this header must be imported. A device support check is sketched below.
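Metal Performance Shaders is not supported on every Metal device, so a hedged sketch of a support check before creating the filter could look like this:

#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

if (MPSSupportsMTLDevice(self.mtkView.device)) {
    MPSImageGaussianBlur *filter = [[MPSImageGaussianBlur alloc] initWithDevice:self.mtkView.device sigma:10];
    // ... encode the filter into the command buffer as shown above ...
} else {
    NSLog(@"Metal Performance Shaders are not supported on this device");
}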
2. After creation, execute the filter through encodeToCommandBuffer:sourceTexture:destinationTexture:, which encodes the MPSKernel into a command buffer; the operation proceeds out-of-place (the source and destination are different textures).
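For contrast, MPSUnaryImageKernel also provides an in-place variant; a minimal sketch, where passing nil as the fallback allocator is an assumption and simply lets the call fail (return NO) if in-place operation is not possible:

id<MTLTexture> texture = self.texture;
// Try to blur the texture in place; if that is not possible and no fallback
// allocator is supplied, the call returns NO and the texture is left untouched.
BOOL encoded = [filter encodeToCommandBuffer:commandBuffer
                              inPlaceTexture:&texture
                       fallbackCopyAllocator:nil];
if (encoded) {
    self.texture = texture;
}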
Reference: https://www.jianshu.com/p/d3d698120891