[Repost] [Away3D Code Interpretation] (3): Rendering Core Process (Rendering)

As always, let's first take a brief look at the rendering code in View3D's render method, with comments added:

//If a Filter3D is used, check whether a depth map needs to be rendered; if so, render it before the actual rendering
if (_requireDepthRender)
    //The depth map is rendered into the _depthRender texture object
    renderSceneDepthToTexture(_entityCollector);

// todo: perform depth prepass after light update and before final render
if (_depthPrepass)
    renderDepthPrepass(_entityCollector);

_renderer.clearOnRender = !_depthPrepass;

//Determine whether a filter is used
if (_filter3DRenderer && _stage3DProxy._context3D) {
    //Special rendering with filters: render the scene into the filter's input texture first
    _renderer.render(_entityCollector, _filter3DRenderer.getMainInputTexture(_stage3DProxy), _rttBufferManager.renderToTextureRect);
    //Then the filter renderer renders the result
    _filter3DRenderer.render(_stage3DProxy, camera, _depthRender);
} else {
    //If no filter is used, render directly with the default renderer
    _renderer.shareContext = _shareContext;
    if (_shareContext)
        _renderer.render(_entityCollector, null, _scissorRect);
    else
        _renderer.render(_entityCollector);
}

Filters from the filters package can be added by setting View3D's filters3d property, which works much like the filters property of a native display object; note, however, that these filters are applied to the entire 3D scene.

For now, this article skips the filter path and focuses on the core of the rendering process.

 

When it comes to Away3D rendering, we have to look at the away3d.core.render package. Opening it, we find several classes whose names end in Renderer: these are the core rendering classes. We will mainly look at DefaultRenderer, Away3D's default renderer.

Rendering process:

Let's first look at the render method of the DefaultRenderer class in its parent RendererBase:

arcane function render(entityCollector:EntityCollector, target:TextureBase = null, scissorRect:Rectangle = null, surfaceSelector:int = 0):void
{
    if (!_stage3DProxy || !_context)
        return;
    
    //Update the projection matrix of the camera
    _rttViewProjectionMatrix.copyFrom(entityCollector.camera.viewProjection);
    _rttViewProjectionMatrix.appendScale(_textureRatioX, _textureRatioY, 1);
    
    //Method of performing rendering
    executeRender(entityCollector, target, scissorRect, surfaceSelector);
    
    // clear buffers
    for (var i:uint = 0; i < 8; ++i) {
        _context.setVertexBufferAt(i, null);
        _context.setTextureAt(i, null);
    }
}
We can see that the real work happens in executeRender. Let's start with DefaultRenderer's override:
protected override function executeRender(entityCollector:EntityCollector, target:TextureBase = null, scissorRect:Rectangle = null, surfaceSelector:int = 0):void
{
    //Update lighting data
    updateLights(entityCollector);

    //This branch only applies when rendering to a texture target (the filters3d case), so we can skip it
    // otherwise RTT will interfere with other RTTs
    if (target) {
        drawRenderables(entityCollector.opaqueRenderableHead, entityCollector, RTT_PASSES);
        drawRenderables(entityCollector.blendedRenderableHead, entityCollector, RTT_PASSES);
    }

    //Parent Method
    super.executeRender(entityCollector, target, scissorRect, surfaceSelector);
}

Look at the RendererBase section again:

protected function executeRender(entityCollector:EntityCollector, target:TextureBase = null, scissorRect:Rectangle = null, surfaceSelector:int = 0):void
{
    _renderTarget = target;
    _renderTargetSurface = surfaceSelector;
    
    //Sort collected entity objects
    if (_renderableSorter)
        _renderableSorter.sort(entityCollector);
    
    //Render to a texture, always false in DefaultRenderer, skip it
    if (_renderToTexture)
        executeRenderToTexturePass(entityCollector);
    
    //Render to the target object if one exists, otherwise render to the back buffer
    _stage3DProxy.setRenderTarget(target, true, surfaceSelector);
    
    //Clear the render target
    if ((target || !_shareContext) && _clearOnRender)
        _context.clear(_backgroundR, _backgroundG, _backgroundB, _backgroundAlpha, 1, 0);
    //Disable the depth test for background rendering
    _context.setDepthTest(false, Context3DCompareMode.ALWAYS);
    //Set Clipping Area
    _stage3DProxy.scissorRect = scissorRect;
    //Background rendering, which draws an image at the bottom, referring specifically to the BackgroundImageRenderer class
    if (_backgroundImageRenderer)
        _backgroundImageRenderer.render();
    
    //Call Drawing Method
    draw(entityCollector, target);
    
    //Restoring the depth test here is required for Starling integration
    //line required for correct rendering when using away3d with starling. DO NOT REMOVE UNLESS STARLING INTEGRATION IS RETESTED!
    _context.setDepthTest(false, Context3DCompareMode.LESS_EQUAL);
    
    //Capture a snapshot into _snapshotBitmapData if one was requested
    if (!_shareContext) {
        if (_snapshotRequired && _snapshotBitmapData) {
            _context.drawToBitmapData(_snapshotBitmapData);
            _snapshotRequired = false;
        }
    }
    //Clear the scissor rectangle
    _stage3DProxy.scissorRect = null;
}

Let's first look at the sorting classes in the away3d.core.sort package (the IEntitySorter interface and the RenderableMergeSort class). The general idea is that sorting makes rendering more efficient: opaque objects are sorted by the material they use and by distance from the camera, front to back, while objects with blend modes are sorted back to front to guarantee correct results.
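The two sort orders can be sketched in a few lines of TypeScript (the types and field names here are illustrative stand-ins, not Away3D's actual API):

```typescript
// Illustrative stand-in for a renderable entry in the sort list.
interface Renderable {
  materialId: number; // stand-in for the material/program used
  zIndex: number;     // distance from the camera
}

// Opaque objects: group by material first (to minimize GPU state changes),
// then draw front to back (to exploit early depth rejection).
function sortOpaque(list: Renderable[]): Renderable[] {
  return [...list].sort((a, b) =>
    a.materialId - b.materialId || a.zIndex - b.zIndex);
}

// Blended objects: must be drawn back to front for correct compositing,
// regardless of which material they use.
function sortBlended(list: Renderable[]): Renderable[] {
  return [...list].sort((a, b) => b.zIndex - a.zIndex);
}
```

RenderableMergeSort applies the same ordering criteria, but with a merge sort over the linked lists of renderables rather than an array sort.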

We can see that the main rendering is done by calling the renderer's own draw method. Let's look at DefaultRenderer's draw method:

override protected function draw(entityCollector:EntityCollector, target:TextureBase):void
{
    //Set blend factors
    _context.setBlendFactors(Context3DBlendFactor.ONE, Context3DBlendFactor.ZERO);

    //Draw the sky box first if it exists
    if (entityCollector.skyBox) {
        if (_activeMaterial)
            _activeMaterial.deactivate(_stage3DProxy);
        _activeMaterial = null;

        _context.setDepthTest(false, Context3DCompareMode.ALWAYS);
        drawSkyBox(entityCollector);
    }

    //Set Depth Test
    _context.setDepthTest(true, Context3DCompareMode.LESS_EQUAL);

    var which:int = target ? SCREEN_PASSES : ALL_PASSES;
    //Render opaque entity objects
    drawRenderables(entityCollector.opaqueRenderableHead, entityCollector, which);
    //Render entity objects with blend modes
    drawRenderables(entityCollector.blendedRenderableHead, entityCollector, which);

    //Disable depth writes again
    _context.setDepthTest(false, Context3DCompareMode.LESS_EQUAL);

    //Deactivate the active material
    if (_activeMaterial)
        _activeMaterial.deactivate(_stage3DProxy);

    _activeMaterial = null;
}

Let's see what the drawRenderables method does:
private function drawRenderables(item:RenderableListItem, entityCollector:EntityCollector, which:int):void
{
    var numPasses:uint;
    var j:uint;
    var camera:Camera3D = entityCollector.camera;
    var item2:RenderableListItem;

    //The items form a linked list; we iterate over all entities to render via each item's next property
    while (item) {
        //Get the material object
        _activeMaterial = item.renderable.material;
        _activeMaterial.updateMaterial(_context);

        //Get the number of passes for this material; each material can use multiple pass objects for the final drawing
        numPasses = _activeMaterial.numPasses;
        j = 0;

        //Iterate over all of the material's pass objects
        do {
            item2 = item;

            var rttMask:int = _activeMaterial.passRendersToTexture(j) ? 1 : 2;

            //Determine whether this pass should be rendered in the current mode (RTT vs. screen)
            if ((rttMask & which) != 0) {
                //Activate Pass Object
                _activeMaterial.activatePass(j, _stage3DProxy, camera);
                //Render the consecutive entity objects that share the same material
                do {
                    //Rendering using Pass objects
                    _activeMaterial.renderPass(j, item2.renderable, _stage3DProxy, entityCollector, _rttViewProjectionMatrix);
                    item2 = item2.next;
                } while (item2 && item2.renderable.material == _activeMaterial);
                //Deactivate Pass Object
                _activeMaterial.deactivatePass(j, _stage3DProxy);
            } else {
                //Skip the consecutive entity objects that share the same material
                do
                    item2 = item2.next;
                while (item2 && item2.renderable.material == _activeMaterial);
            }

        } while (++j < numPasses);

        item = item2;
    }
}

Here we can see that rendering in Away3D really comes down to traversing all SubMesh objects and rendering each one by calling the renderPass method of the material assigned to that SubMesh. The first parameter of renderPass is the index of the pass to use; varying it is how every pass object gets its turn at rendering.
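The batching pattern inside drawRenderables, stripped of the pass loop, can be sketched in TypeScript (the list node and callback names are illustrative, not Away3D's API):

```typescript
// Illustrative stand-in for Away3D's RenderableListItem linked-list node.
interface ListItem {
  material: string; // stand-in for item.renderable.material
  next: ListItem | null;
}

// Walk the linked list and, for each run of consecutive items sharing a
// material, "activate" the material once, draw every item in the run, then
// "deactivate" it -- minimizing expensive material/shader state changes.
function drawBatched(head: ListItem | null,
                     draw: (material: string, count: number) => void): void {
  let item = head;
  while (item) {
    const material = item.material;
    let run: ListItem | null = item;
    let count = 0;
    // advance through all consecutive items that use the same material
    while (run && run.material === material) {
      count++;
      run = run.next;
    }
    draw(material, count); // one activate/deactivate pair per run
    item = run;            // continue from the first item with a new material
  }
}
```

This is why the sorter groups opaque renderables by material first: the longer the runs, the fewer activate/deactivate pairs are needed.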

 

Before continuing, let's pause the walkthrough to look at the Pass and Method objects:

  • Each material type corresponds to one or more passes, which provide the main rendering control; each pass provides a different rendering approach, i.e. the main AGAL code.
  • Each pass can have multiple methods added to it; each Method corresponds to an appended shader snippet that adds a particular variation to the existing rendering process, such as color transformation, texture blurring, shadowing, and so on.

In short, a pass holds the core rendering code, while methods provide functionality similar to filters on the native display list. Each Method added to a material appends several lines of implementation code to the pass's fragment shader, along with some variable parameters that are uploaded to the GPU as constants.

Merging the Methods' AGAL code into the pass is implemented in ShaderCompiler's compileMethods method.
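The essence of that merge step can be sketched as string concatenation plus constant collection (everything below is an illustrative simplification; real AGAL compilation also handles register allocation and per-method hooks):

```typescript
// A "method" contributes a snippet of shader code plus some constant data
// to be uploaded to the GPU alongside the pass's own code.
interface ShadingMethod {
  name: string;
  fragmentCode: string; // lines appended to the pass's fragment shader
  constants: number[];  // data uploaded as fragment constants
}

// The pass owns the core shader code; each method's snippet is appended
// after it, and all constants are collected into one flat array that will
// be uploaded with setProgramConstantsFromVector.
function compilePass(coreFragmentCode: string, methods: ShadingMethod[]) {
  let code = coreFragmentCode;
  const constants: number[] = [];
  for (const m of methods) {
    code += "\n// --- " + m.name + " ---\n" + m.fragmentCode;
    constants.push(...m.constants);
  }
  return { code, constants };
}
```

This mirrors what the article describes: adding a Method to a material grows the pass's fragment shader by a few lines and extends the constant data sent to the GPU.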

 

Let's continue with the rendering code and take a look at the renderPass method of the MaterialBase class:

arcane function renderPass(index:uint, renderable:IRenderable, stage3DProxy:Stage3DProxy, entityCollector:EntityCollector, viewProjection:Matrix3D):void
{
    //Handle light sources if they exist
    if (_lightPicker)
        _lightPicker.collectLights(renderable, entityCollector);
    
    //Get the corresponding pass object
    var pass:MaterialPassBase = _passes[index];
    
    //Update animation status if animation exists
    if (renderable.animator)
        pass.updateAnimationState(renderable, stage3DProxy, entityCollector.camera);
    
    //Rendering with pass
    pass.render(renderable, stage3DProxy, entityCollector.camera, viewProjection);
}
Debugging the Basic_View example leads into the render method of the CompiledPass class; the actual rendering code differs between pass types:


arcane override function render(renderable:IRenderable, stage3DProxy:Stage3DProxy, camera:Camera3D, viewProjection:Matrix3D):void
{
    var i:uint;
    var context:Context3D = stage3DProxy._context3D;
    if (_uvBufferIndex >= 0)
        renderable.activateUVBuffer(_uvBufferIndex, stage3DProxy);
    if (_secondaryUVBufferIndex >= 0)
        renderable.activateSecondaryUVBuffer(_secondaryUVBufferIndex, stage3DProxy);
    if (_normalBufferIndex >= 0)
        renderable.activateVertexNormalBuffer(_normalBufferIndex, stage3DProxy);
    if (_tangentBufferIndex >= 0)
        renderable.activateVertexTangentBuffer(_tangentBufferIndex, stage3DProxy);
    
    if (_animateUVs) {
        var uvTransform:Matrix = renderable.uvTransform;
        if (uvTransform) {
            _vertexConstantData[_uvTransformIndex] = uvTransform.a;
            _vertexConstantData[_uvTransformIndex + 1] = uvTransform.b;
            _vertexConstantData[_uvTransformIndex + 3] = uvTransform.tx;
            _vertexConstantData[_uvTransformIndex + 4] = uvTransform.c;
            _vertexConstantData[_uvTransformIndex + 5] = uvTransform.d;
            _vertexConstantData[_uvTransformIndex + 7] = uvTransform.ty;
        } else {
            _vertexConstantData[_uvTransformIndex] = 1;
            _vertexConstantData[_uvTransformIndex + 1] = 0;
            _vertexConstantData[_uvTransformIndex + 3] = 0;
            _vertexConstantData[_uvTransformIndex + 4] = 0;
            _vertexConstantData[_uvTransformIndex + 5] = 1;
            _vertexConstantData[_uvTransformIndex + 7] = 0;
        }
    }
    
    _ambientLightR = _ambientLightG = _ambientLightB = 0;

    if (usesLights())
        updateLightConstants();

    if (usesProbes())
        updateProbes(stage3DProxy);
    
    if (_sceneMatrixIndex >= 0) {
        renderable.getRenderSceneTransform(camera).copyRawDataTo(_vertexConstantData, _sceneMatrixIndex, true);
        viewProjection.copyRawDataTo(_vertexConstantData, 0, true);
    } else {
        var matrix3D:Matrix3D = Matrix3DUtils.CALCULATION_MATRIX;
        matrix3D.copyFrom(renderable.getRenderSceneTransform(camera));
        matrix3D.append(viewProjection);
        matrix3D.copyRawDataTo(_vertexConstantData, 0, true);
    }
    
    if (_sceneNormalMatrixIndex >= 0)
        renderable.inverseSceneTransform.copyRawDataTo(_vertexConstantData, _sceneNormalMatrixIndex, false);
    
    if (_usesNormals)
        _methodSetup._normalMethod.setRenderState(_methodSetup._normalMethodVO, renderable, stage3DProxy, camera);
    
    var ambientMethod:BasicAmbientMethod = _methodSetup._ambientMethod;
    ambientMethod._lightAmbientR = _ambientLightR;
    ambientMethod._lightAmbientG = _ambientLightG;
    ambientMethod._lightAmbientB = _ambientLightB;
    ambientMethod.setRenderState(_methodSetup._ambientMethodVO, renderable, stage3DProxy, camera);
    
    if (_methodSetup._shadowMethod)
        _methodSetup._shadowMethod.setRenderState(_methodSetup._shadowMethodVO, renderable, stage3DProxy, camera);
    _methodSetup._diffuseMethod.setRenderState(_methodSetup._diffuseMethodVO, renderable, stage3DProxy, camera);
    if (_usingSpecularMethod)
        _methodSetup._specularMethod.setRenderState(_methodSetup._specularMethodVO, renderable, stage3DProxy, camera);
    if (_methodSetup._colorTransformMethod)
        _methodSetup._colorTransformMethod.setRenderState(_methodSetup._colorTransformMethodVO, renderable, stage3DProxy, camera);
    
    var methods:Vector.<MethodVOSet> = _methodSetup._methods;
    var len:uint = methods.length;
    for (i = 0; i < len; ++i) {
        var set:MethodVOSet = methods[i];
        set.method.setRenderState(set.data, renderable, stage3DProxy, camera);
    }
    
    context.setProgramConstantsFromVector(Context3DProgramType.VERTEX, 0, _vertexConstantData, _numUsedVertexConstants);
    context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 0, _fragmentConstantData, _numUsedFragmentConstants);
    
    renderable.activateVertexBuffer(0, stage3DProxy);
    context.drawTriangles(renderable.getIndexBuffer(stage3DProxy), 0, renderable.numTriangles);
}
There is a lot of code and not a single comment; it basically handles light sources, animation, and methods, and finally calls Context3D's drawTriangles method to draw the current Mesh object.


Added by drifter on Tue, 11 Jun 2019 20:27:28 +0300