Implementing "Ant Hey" with PaddleGAN: expression and motion transfer - first experience

Preface: this is based on an open-source project.

The "Ant Hey" tutorial everyone has been looking for: implementing the First Order Motion model with PaddleGAN.

The steps below follow the project documentation:

Principle of First Order Motion model

The task of the First Order Motion model is image animation: given a source image and a driving video, generate a new video whose subject comes from the source image and whose motion comes from the driving video. The source image usually contains one subject, and the driving video contains a sequence of actions.

In short, First Order Motion can transfer the actions of character A in a given driving video onto character B in a given source image, producing a new video in which character B performs character A's expressions and movements.

Take facial expression transfer as an example: given a source character and a driving video, we can generate a video whose subject is the source character and whose expressions are driven by those in the driving video. Traditionally, this requires annotating the facial keypoints of the source character and training a dedicated expression-transfer model.

The method proposed in this paper, however, only needs to be trained once on a dataset of similar objects. For example, to transfer Tai Chi movements, train on a Tai Chi video dataset; to transfer facial expressions, train on the VoxCeleb face video dataset. Once trained, the corresponding pretrained model can animate any image of the same category in real time.

The following commands can be run in a console.

Download the PaddleGAN code

# Clone the PaddleGAN code (Gitee mirror)
git clone https://gitee.com/paddlepaddle/PaddleGAN
# Install the required packages
cd PaddleGAN/
pip install -r requirements.txt
pip install imageio-ffmpeg
cd applications/
mkdir output

You may hit several problems along the way:

1. Packages fail to install: switch pip to a mirror source and retry a few times.
2. No module named 'paddle': paddlepaddle itself still needs to be installed.
3. Other package errors: Cannot uninstall 'llvmlite'; Error #15: Initializing libiomp5.dylib.
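The fixes that worked for me for the errors above, sketched below; the mirror URL and the environment variable are common community workarounds, not official PaddleGAN guidance:

```shell
# 1. Installs fail or time out: switch pip to a mirror and retry, e.g.
#    pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# 2. "No module named 'paddle'": install paddlepaddle itself
#    pip install paddlepaddle
# 3. "Cannot uninstall 'llvmlite'": reinstall over the distutils-managed copy
#    pip install --ignore-installed llvmlite
# macOS/conda "Error #15: Initializing libiomp5.dylib": allow duplicate OpenMP runtimes
export KMP_DUPLICATE_LIB_OK=TRUE
```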

Implementation steps

Following the documentation, the steps are straightforward.

[Screenshot: https://xiaole.website/usr/uploads/2021/03/3380712713.png]
[Screenshot: https://xiaole.website/usr/uploads/2021/03/696639557.png]
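For reference, the transfer itself is run via the first-order demo script in PaddleGAN's applications directory. A hedged sketch of the invocation, assuming the repository layout from the clone step; the --driving_video and --source_image paths are placeholders for your own files:

```shell
cd PaddleGAN/applications
# Run the pretrained First Order Motion model on your own inputs
python -u tools/first-order-demo.py \
    --driving_video ./mayiyahei.mp4 \
    --source_image ./your_face.png \
    --relative --adapt_scale
# In this post, the generated clip ends up at ./output/result.mp4
```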

Operation record

Console operation record:

import cv2
import imageio
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
import warnings
warnings.filterwarnings("ignore")

# Video display function: tile the frames and wrap them in a matplotlib animation
def display(driving, fps, size=(8, 6)):
    fig = plt.figure(figsize=size)

    ims = []
    for i in range(len(driving)):
        cols = [driving[i]]
        im = plt.imshow(np.concatenate(cols, axis=1), animated=True)
        plt.axis('off')
        ims.append([im])

    video = animation.ArtistAnimation(fig, ims, interval=1000.0 / fps, repeat_delay=1000)
    plt.close()
    return video

# Add sound: copy the audio track of the driving video onto the generated result
from moviepy.editor import VideoFileClip

videoclip_1 = VideoFileClip(r"D:\feil\paddlepaddle-PaddleGAN-master\work\PaddleGAN\src\fullbody.MP4")
videoclip_2 = VideoFileClip(r"D:\feil\paddlepaddle-PaddleGAN-master\work\PaddleGAN\applications\output\result.mp4")

audio_1 = videoclip_1.audio
videoclip_3 = videoclip_2.set_audio(audio_1)
videoclip_3.write_videofile("./output/daqiang_unravel.mp4", audio_codec="aac")

# Read the finished video back in as frames
video_path = './output/daqiang_unravel.mp4'
video_frames = imageio.mimread(video_path, memtest=False)

# Obtain the original frame rate of the video
cap = cv2.VideoCapture(video_path)
fps = cap.get(cv2.CAP_PROP_FPS)
cap.release()
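The display helper builds each animation frame with np.concatenate along axis 1, i.e. it tiles images side by side. A self-contained sketch of that step, using synthetic 2x2 arrays standing in for real video frames:

```python
import numpy as np

# two synthetic grayscale "frames" standing in for real video frames
left = np.zeros((2, 2), dtype=np.uint8)
right = np.full((2, 2), 255, dtype=np.uint8)

# axis=1 stitches the frames side by side, as display() does per frame
tiled = np.concatenate([left, right], axis=1)
print(tiled.shape)  # (2, 4)
```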

Keywords: Python Deep Learning paddlepaddle

Added by shann on Tue, 28 Dec 2021 07:46:47 +0200