Android native development: hand-writing a custom-View version of TikTok's "Submarine Challenge" (a favorite interviewer surprise question)

  • Property animation: drives the movement of obstacles and the submarine, plus assorted dynamic effects

Enough talk, let's look at the result first! The sections below walk through the implementation of each part.

Background

Bar

First, define the obstacle base class Bar, which is mainly responsible for drawing its bitmap resource into a specified area. Since each obstacle gets a random height when it is periodically spawned from the right edge of the screen, the x, y, w and h of its drawing area need to be set dynamically.

/**
 * Obstacle base class
 */
sealed class Bar(context: Context) {

    protected open val bmp = context.getDrawable(R.mipmap.bar)!!.toBitmap()

    protected abstract val srcRect: Rect

    private lateinit var dstRect: Rect

    private val paint = Paint()

    var h = 0F
        set(value) {
            field = value
            dstRect = Rect(0, 0, w.toInt(), h.toInt())
        }

    var w = 0F
        set(value) {
            field = value
            dstRect = Rect(0, 0, w.toInt(), h.toInt())
        }

    var x = 0F
        set(value) {
            view.x = value
            field = value
        }

    val y
        get() = view.y

    internal val view by lazy {
        BarView(context) {
            it?.apply {
                drawBitmap(
                    bmp,
                    srcRect,
                    dstRect,
                    paint
                )
            }
        }
    }

}

internal class BarView(context: Context?, private val block: (Canvas?) -> Unit) :
    View(context) {

    override fun onDraw(canvas: Canvas?) {
        block(canvas)
    }
}

Obstacles come in upper and lower variants. Because they share the same bitmap resource, they must be drawn differently, so two subclasses are defined: UpBar and DnBar

/**
 * Obstacles above the screen
 */
class UpBar(context: Context, container: ViewGroup) : Bar(context) {

    private val _srcRect by lazy(LazyThreadSafetyMode.NONE) {
        Rect(0, (bmp.height * (1 - (h / container.height))).toInt(), bmp.width, bmp.height)
    }
    override val srcRect: Rect
        get() = _srcRect

}

The lower obstacle's bitmap is drawn after being rotated 180 degrees:

/**
 * Obstacles under the screen
 */
class DnBar(context: Context, container: ViewGroup) : Bar(context) {

    override val bmp = super.bmp.let {
        Bitmap.createBitmap(
            it, 0, 0, it.width, it.height,
            Matrix().apply { postRotate(-180F) }, true
        )
    }

    private val _srcRect by lazy(LazyThreadSafetyMode.NONE) {
        Rect(0, 0, bmp.width, (bmp.height * (h / container.height)).toInt())
    }

    override val srcRect: Rect
        get() = _srcRect
}
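The code below groups one upper and one lower obstacle into a Bars pair (used as barsList, Bars(up, down), and asArray()), but the type itself is never shown in the article. A minimal sketch consistent with that usage, under the assumption that it is a plain pair holder:

```kotlin
// Assumed definition: a pair of one upper and one lower obstacle,
// matching the Bars(up, down) and asArray() call sites below.
data class Bars(val up: UpBar, val down: DnBar) {
    fun asArray(): Array<Bar> = arrayOf(up, down)
}
```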

BackgroundView

Next, create a BackgroundView container for the background; it periodically creates and moves the obstacles.

All current obstacles are managed through the barsList list. In onLayout, the upper and lower obstacles are laid out against the top and bottom of the screen respectively:

/**
 * Background container class
 */
class BackgroundView(context: Context, attrs: AttributeSet?) : FrameLayout(context, attrs) {

    internal val barsList = mutableListOf<Bars>()

    override fun onLayout(changed: Boolean, left: Int, top: Int, right: Int, bottom: Int) {
        barsList.flatMap { listOf(it.up, it.down) }.forEach {
            val w = it.view.measuredWidth
            val h = it.view.measuredHeight
            when (it) {
                is UpBar -> it.view.layout(0, 0, w, h)
                else -> it.view.layout(0, height - h, w, height)
            }
        }
    }

Two methods, start and stop, control the start and end of the game:

  • When the game ends, all obstacles must stop moving.
  • After the game starts, obstacles are spawned periodically through a Timer
    /**
     * At the end of the game, stop the movement of all obstacles
     */
    @UiThread
    fun stop() {
        _timer.cancel()
        _anims.forEach { it.cancel() }
        _anims.clear()
    }

    /**
     * Periodically refresh obstacles:
     * 1. Create
     * 2. Add to the view
     * 3. Move
     */
    @UiThread
    fun start() {
        _clearBars()
        Timer().also { _timer = it }.schedule(object : TimerTask() {
            override fun run() {
                post {
                    _createBars(context, barsList.lastOrNull()).let {
                        _addBars(it)
                        _moveBars(it)
                    }
                }
            }

        }, FIRST_APPEAR_DELAY_MILLIS, BAR_APPEAR_INTERVAL_MILLIS)
    }

    /**
     * Clear obstacles when the game restarts
     */
    private fun _clearBars() {
        barsList.clear()
        removeAllViews()
    }
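The timing constants referenced by start (FIRST_APPEAR_DELAY_MILLIS, BAR_APPEAR_INTERVAL_MILLIS) and later by _moveBars (BAR_MOVE_DURATION_MILLIS) are never defined in the excerpt. The values below are purely hypothetical placeholders to make the sketch compile:

```kotlin
// Hypothetical values; the article references these constants without defining them.
private const val FIRST_APPEAR_DELAY_MILLIS = 1_000L   // delay before the first bar group appears
private const val BAR_APPEAR_INTERVAL_MILLIS = 2_000L  // interval between successive bar groups
private const val BAR_MOVE_DURATION_MILLIS = 4_000L    // time for a bar to cross the screen
```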

Refresh obstacles

There are three steps to refresh obstacles:

  1. Create: create obstacles in groups of two (one upper, one lower)
  2. Add: add the object to the barsList and the View to the container
  3. Move: move from right to left through attribute animation and delete after moving out of the screen

A random height is assigned to each obstacle when it is created. The height should not jump too much between groups; it is adjusted relative to the previous obstacle so the sequence stays both random and coherent

/**
 * Create obstacles (top and bottom as a group)
 */
private fun _createBars(context: Context, pre: Bars?) = run {
    val up = UpBar(context, this).apply {
        h = pre?.let {
            val step = when {
                it.up.h >= height - _gap - _step -> -_step
                it.up.h <= _step -> _step
                _random.nextBoolean() -> _step
                else -> -_step
            }
            it.up.h + step
        } ?: _barHeight
        w = _barWidth
    }

    val down = DnBar(context, this).apply {
        h = height - up.h - _gap
        w = _barWidth
    }

    Bars(up, down)

}

/**
 * Add to screen
 */
private fun _addBars(bars: Bars) {
    barsList.add(bars)
    bars.asArray().forEach {
        addView(
            it.view,
            ViewGroup.LayoutParams(
                it.w.toInt(),
                it.h.toInt()
            )
        )
    }
}

/**
 * Moving obstacles using attribute animation
 */
private fun _moveBars(bars: Bars) {
    _anims.add(
        ValueAnimator.ofFloat(width.toFloat(), -_barWidth)
            .apply {
                addUpdateListener {
                    bars.asArray().forEach { bar ->
                        bar.x = it.animatedValue as Float
                        if (bar.x + bar.w <= 0) {
                            post { removeView(bar.view) }
                        }
                    }
                }

                duration = BAR_MOVE_DURATION_MILLIS
                interpolator = LinearInterpolator()
                start()
            })
}

}

Foreground

Boat

Define the submarine class Boat, which wraps a custom View and provides a method for moving it to specified coordinates:

/**
 * Submarine class
 */
class Boat(context: Context) {

    internal val view by lazy { BoatView(context) }

    val h
        get() = view.height.toFloat()

    val w
        get() = view.width.toFloat()

    val x
        get() = view.x

    val y
        get() = view.y

    /**
     * Move to specified coordinates
     */
    fun moveTo(x: Int, y: Int) {
        view.smoothMoveTo(x, y)
    }

}

BoatView

Customize the View to do the following:

  • Switch between two image resources on a timer to create a flashing searchlight effect
  • Use OverScroller to make movement smoother
  • Use a rotation animation so the submarine tilts as it moves, making it feel more agile
internal class BoatView(context: Context?) : AppCompatImageView(context) {

    private val _scroller by lazy { OverScroller(context) }

    private val _res = arrayOf(
        R.mipmap.boat_000,
        R.mipmap.boat_002
    )

    private var _rotationAnimator: ObjectAnimator? = null

    private var _cnt = 0
        set(value) {
            field = if (value > 1) 0 else value
        }

    init {
        scaleType = ScaleType.FIT_CENTER
        _startFlashing()
    }

    private fun _startFlashing() {
        postDelayed({
            setImageResource(_res[_cnt++])
            _startFlashing()
        }, 500)
    }

    override fun computeScroll() {
        super.computeScroll()

        if (_scroller.computeScrollOffset()) {

            x = _scroller.currX.toFloat()
            y = _scroller.currY.toFloat()

            // Keep on drawing until the animation has finished.
            postInvalidateOnAnimation()
        }

    }

    /**
     * Move more smoothly
     */
    internal fun smoothMoveTo(x: Int, y: Int) {
        if (!_scroller.isFinished) _scroller.abortAnimation()
        _rotationAnimator?.let { if (it.isRunning) it.cancel() }

        val curX = this.x.toInt()
        val curY = this.y.toInt()

        val dx = (x - curX)
        val dy = (y - curY)
        _scroller.startScroll(curX, curY, dx, dy, 250)

        _rotationAnimator = ObjectAnimator.ofFloat(
            this,
            "rotation",
            rotation,
            Math.toDegrees(atan((dy / 100.toDouble()))).toFloat()
        ).apply {
            duration = 100
            start()
        }

        postInvalidateOnAnimation()
    }
}

ForegroundView

  • The submarine object is held and controlled through the boat member
  • Implement CameraHelper.FaceDetectListener and move the submarine to the position reported by the face-detection callback
  • When the game starts, create the submarine and play the opening animation
/**
 * Foreground container class
 */
class ForegroundView(context: Context, attrs: AttributeSet?) : FrameLayout(context, attrs),
    CameraHelper.FaceDetectListener {

    private var _isStop: Boolean = false

    internal var boat: Boat? = null

    /**
     * The game stops and the submarine no longer moves
     */
    @MainThread
    fun stop() {
        _isStop = true
    }

    /**
     * Accept the callback of face recognition and move the position
     */
    override fun onFaceDetect(faces: Array<Face>, facesRect: ArrayList<RectF>) {
        if (_isStop) return
        if (facesRect.isNotEmpty()) {
            boat?.run {
                val face = facesRect.first()
                val x = (face.left - _widthOffset).toInt()
                val y = (face.top + _heightOffset).toInt()
                moveTo(x, y)
            }
            _face = facesRect.first()
        }
    }

}

Opening animation

At the start of the game, animate the submarine into its starting position, i.e. halfway down the y-axis.

/**
     * Enter through animation at the beginning of the game
     */
    @MainThread
    fun start() {
        _isStop = false
        if (boat == null) {
            boat = Boat(context).also {
                post {
                    addView(it.view, _width, _width)
                    AnimatorSet().apply {
                        play(
                            ObjectAnimator.ofFloat(
                                it.view,
                                "y",
                                0F,
                                this@ForegroundView.height / 2f
                            )
                        ).with(
                            ObjectAnimator.ofFloat(it.view, "rotation", 0F, 360F)
                        )
                        doOnEnd { _ -> it.view.rotation = 0F }
                        duration = 1000
                    }.start()
                }
            }
        }
    }

Camera

The Camera part mainly consists of a TextureView and a CameraHelper. The TextureView provides the preview surface for the camera, while the CameraHelper utility performs the following functions:

  • Open the camera: open it through CameraManager
  • Switch cameras: toggle between the front and rear cameras
  • Preview: obtain the preview sizes supported by the camera and adapt them to the TextureView display
  • Face detection: detect the position of faces and transform the coordinates onto the TextureView
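The excerpt never shows the camera-switching code. A sketch of the usual approach, assuming helper methods releaseCamera() and initCameraInfo() exist (neither is shown in the article): close the current device, flip the facing value, and re-run initialization.

```kotlin
/**
 * Sketch (assumed helpers): switch between front and rear cameras by
 * closing the current device, flipping the facing value, and reopening.
 */
fun exchangeCamera() {
    // canExchangeCamera guards against switching before the first preview is up
    if (mCameraDevice == null || !canExchangeCamera) return

    mCameraFacing = if (mCameraFacing == CameraCharacteristics.LENS_FACING_FRONT)
        CameraCharacteristics.LENS_FACING_BACK
    else
        CameraCharacteristics.LENS_FACING_FRONT

    releaseCamera()   // assumed helper: closes the capture session and camera device
    initCameraInfo()  // re-selects the camera id and preview size, then reopens
}
```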

Adapt PreviewSize

The preview sizes offered by the camera hardware may not match the actual on-screen size (i.e. the size of the TextureView), so the most appropriate PreviewSize must be selected during camera initialization to avoid stretching and other display artifacts on the TextureView

class CameraHelper(val mActivity: Activity, private val mTextureView: TextureView) {

    private lateinit var mCameraManager: CameraManager
    private var mCameraDevice: CameraDevice? = null
    private var mCameraCaptureSession: CameraCaptureSession? = null

    private var canExchangeCamera = false                                               //Whether the cameras can be switched
    private var mFaceDetectMatrix = Matrix()                                            //Face detection coordinate transformation matrix
    private var mFacesRect = ArrayList<RectF>()                                         //Save face coordinate information
    private var mFaceDetectListener: FaceDetectListener? = null                         //Face detection callback
    private lateinit var mPreviewSize: Size

    /**
     * initialization
     */
    private fun initCameraInfo() {
        mCameraManager = mActivity.getSystemService(Context.CAMERA_SERVICE) as CameraManager
        val cameraIdList = mCameraManager.cameraIdList
        if (cameraIdList.isEmpty()) {
            mActivity.toast("No cameras available")
            return
        }

        //Get camera direction
        mCameraSensorOrientation =
            mCameraCharacteristics.get(CameraCharacteristics.SENSOR_ORIENTATION)!!
        //Gets the StreamConfigurationMap, which manages all output formats and sizes supported by the camera
        val configurationMap =
            mCameraCharacteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!

        val previewSize = configurationMap.getOutputSizes(SurfaceTexture::class.java) //Preview size

        // When the screen is vertical, the width and height values need to be changed to ensure that the width is greater than the height
        mPreviewSize = getBestSize(
            mTextureView.height,
            mTextureView.width,
            previewSize.toList()
        )

        //Set TextureView according to the size of preview
        mTextureView.surfaceTexture.setDefaultBufferSize(mPreviewSize.width, mPreviewSize.height)
        mTextureView.setAspectRatio(mPreviewSize.height, mPreviewSize.width)

        //Initialize the face detection matrix (described later)
        initFaceDetect()
    }

When selecting the preview size, its aspect ratio should match the TextureView's as closely as possible, and its area should be as close as possible to the target size.

private fun getBestSize(
        targetWidth: Int,
        targetHeight: Int,
        sizeList: List<Size>
    ): Size {
        val bigEnough = ArrayList<Size>()     //Sizes at least as large as the target width and height
        val notBigEnough = ArrayList<Size>()  //Sizes smaller than the target width and height

        for (size in sizeList) {

            //Aspect ratio equals the target aspect ratio
            if (size.width == size.height * targetWidth / targetHeight) {
                if (size.width >= targetWidth && size.height >= targetHeight)
                    bigEnough.add(size)
                else
                    notBigEnough.add(size)
            }
        }

        //Select the smallest value in bigEnough or the largest value in notBigEnough
        return when {
            bigEnough.size > 0 -> Collections.min(bigEnough, CompareSizesByArea())
            notBigEnough.size > 0 -> Collections.max(notBigEnough, CompareSizesByArea())
            else -> sizeList[0]
        }
    }

initFaceDetect() is used to initialize the face Matrix, which will be described later.

Face recognition

For the camera preview, create a CameraCaptureSession object; the session returns a TotalCaptureResult through CameraCaptureSession.CaptureCallback, which can carry face-detection information depending on the request parameters

/**
     * Create preview session
     */
    private fun createCaptureSession(cameraDevice: CameraDevice) {

        // For camera preview, create a CameraCaptureSession object
        cameraDevice.createCaptureSession(
            arrayListOf(surface),
            object : CameraCaptureSession.StateCallback() {

                override fun onConfigured(session: CameraCaptureSession) {
                    mCameraCaptureSession = session
                    session.setRepeatingRequest(
                        captureRequestBuilder.build(),
                        mCaptureCallBack,
                        mCameraHandler
                    )
                }

            },
            mCameraHandler
        )
    }

    private val mCaptureCallBack = object : CameraCaptureSession.CaptureCallback() {
        override fun onCaptureCompleted(
            session: CameraCaptureSession,
            request: CaptureRequest,
            result: TotalCaptureResult
        ) {
            super.onCaptureCompleted(session, request, result)
            if (mFaceDetectMode != CaptureRequest.STATISTICS_FACE_DETECT_MODE_OFF)
                handleFaces(result)

        }
    }

The face coordinates are transformed through mFaceDetectMatrix so that they map accurately onto the TextureView.

/**
     * Processing face information
     */
    private fun handleFaces(result: TotalCaptureResult) {
        val faces = result.get(CaptureResult.STATISTICS_FACES)!!
        mFacesRect.clear()

        for (face in faces) {
            val bounds = face.bounds

            val left = bounds.left
            val top = bounds.top
            val right = bounds.right
            val bottom = bounds.bottom

            val rawFaceRect =
                RectF(left.toFloat(), top.toFloat(), right.toFloat(), bottom.toFloat())
            mFaceDetectMatrix.mapRect(rawFaceRect)

            val resultFaceRect = if (mCameraFacing == CaptureRequest.LENS_FACING_FRONT) {
                rawFaceRect
            } else {
                RectF(
                    rawFaceRect.left,
                    rawFaceRect.top - mPreviewSize.width,
                    rawFaceRect.right,
                    rawFaceRect.bottom - mPreviewSize.width
                )
            }

            mFacesRect.add(resultFaceRect)

        }

        mActivity.runOnUiThread {
            mFaceDetectListener?.onFaceDetect(faces, mFacesRect)
        }
    }

Finally, the UI thread sends out Rect containing face coordinates through callback:

mActivity.runOnUiThread {
    mFaceDetectListener?.onFaceDetect(faces, mFacesRect)
}

FaceDetectMatrix
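The section describing the matrix setup was cut from this excerpt. Based on the fields used above (mFaceDetectMatrix, mCameraSensorOrientation, mPreviewSize, mCameraFacing) and the pattern in common Camera2 face-detection examples, a sketch of initFaceDetect might look like the following. Treat the exact steps as assumptions: rotate by the sensor orientation, mirror the x-axis for the front camera, and scale from the sensor's active-array coordinates to preview coordinates.

```kotlin
/**
 * Sketch (assumed implementation): build the matrix that maps face bounds
 * from the sensor's active-array coordinates onto the preview/TextureView.
 */
private fun initFaceDetect() {
    // Sensor region in which face coordinates are reported
    val activeArray = mCameraCharacteristics
        .get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE)!!

    val scaledWidth = mPreviewSize.width / activeArray.width().toFloat()
    val scaledHeight = mPreviewSize.height / activeArray.height().toFloat()
    val mirror = mCameraFacing == CameraCharacteristics.LENS_FACING_FRONT

    // Rotate into the display orientation, then mirror (front camera) and scale
    mFaceDetectMatrix.setRotate(mCameraSensorOrientation.toFloat())
    mFaceDetectMatrix.postScale(if (mirror) -scaledWidth else scaledWidth, scaledHeight)
}
```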

Final thoughts

In technology there is no course that lets you learn once and be set for life. However good a course is, "the master opens the door; the practice is up to you." Continuous learning is not just a good habit in any technical field; it is a prerequisite for programmers and engineers to avoid being left behind by the times and to earn better opportunities and growth.


Keywords: Android Design Pattern

Added by djw821 on Sat, 18 Dec 2021 11:18:09 +0200