Everybody Dance Now
https://arxiv.org/abs/1808.07371
arXiv, 22 Aug 2018 (perhaps submitted to SIGGRAPH)

What?
Given a video of a source person and another of a target person, the method generates a new video of the target person performing the same motions as the source. This is achieved by combining a Pix2PixHD model + pose estimation + a temporal coherence loss + an extra generator for faces.
Pix2PixHD[1] is "High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs", which I described two posts earlier.

✏️ Method:
Three-stage approach: pose detection, pose normalization, and mapping from normalized pose stick figures to the target subject.
1. Pose estimation: apply a pretrained pose estimation model (OpenPose[2]) to every frame of the source and target videos. For each frame, draw the detected pose as a stick figure ("stickman") on a white background. So, for every frame y we have a corresponding stickman image x.
2. Train a Pix2PixHD generator G to synthesize an image of the target person, G(x), given a stickman x as input. A discriminator D tries to distinguish 'real' pairs (x, y) from 'fake' pairs (x, G(x)).
3. The vanilla Pix2PixHD model works on single frames, but we want temporal coherence between consecutive frames. The authors therefore generate the t-th frame G(x_t) from the corresponding stickman image x_t and the previously generated frame G(x_{t-1}). The discriminator then tries to discern the 'fake' sequence (x_{t-1}, x_t, G(x_{t-1}), G(x_t)) from the 'real' sequence (x_{t-1}, x_t, y_{t-1}, y_t); see the first sketch after this list.
4. To improve the quality of faces, the authors add a specialized GAN that adds detail to the face region. After the full-image generator G is trained, the face generator G_f receives the cropped face region of the generated image together with the corresponding region of the stickman and outputs a residual. This residual is added to the previously generated full image to improve face realism; see the second sketch after this list.
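To make steps 2-3 concrete, here is a minimal PyTorch-style sketch of one training step with the temporal discriminator. The channel-concatenated inputs to G and D and the plain BCE losses are my assumptions; the actual Pix2PixHD setup uses multi-scale discriminators, feature matching, and a perceptual loss, all omitted here.

```python
import torch
import torch.nn.functional as F

def temporal_gan_step(G, D, x_prev, x_t, y_prev, y_t):
    """One training step on two consecutive frames.

    x_prev, x_t : stickman images for frames t-1 and t  (B, 3, H, W)
    y_prev, y_t : real frames of the target person      (B, 3, H, W)
    G is assumed to take the stickman concatenated with the previous
    generated frame (zeros for the first frame); D outputs logits.
    """
    zeros = torch.zeros_like(y_prev)
    g_prev = G(torch.cat([x_prev, zeros], dim=1))   # G(x_{t-1})
    g_t = G(torch.cat([x_t, g_prev], dim=1))        # G(x_t), conditioned on G(x_{t-1})

    # D judges 4-image tuples: (x_{t-1}, x_t, frame_{t-1}, frame_t).
    real_tuple = torch.cat([x_prev, x_t, y_prev, y_t], dim=1)
    fake_tuple = torch.cat([x_prev, x_t, g_prev, g_t], dim=1)

    pred_real = D(real_tuple)
    pred_fake = D(fake_tuple.detach())              # detach: D's loss gives no gradients to G
    d_loss = (F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
              + F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake)))

    pred_gen = D(fake_tuple)                        # G tries to fool D
    g_loss = F.binary_cross_entropy_with_logits(pred_gen, torch.ones_like(pred_gen))
    return d_loss, g_loss
```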
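And a sketch of the face refinement from step 4: G_f predicts a residual that is composited back onto the full generated frame. The box argument (a head-region rectangle derived from the stickman's face keypoints) and the concatenated input to G_f are assumptions, not the authors' exact interface.

```python
import torch

def refine_face(G_f, full_image, x_t, box):
    """Composite the residual predicted by G_f onto the generated frame.

    full_image : output of G             (B, 3, H, W)
    x_t        : corresponding stickman  (B, 3, H, W)
    box        : (top, left, h, w) head region, e.g. from face keypoints
    """
    top, left, h, w = box
    face_crop = full_image[:, :, top:top+h, left:left+w]   # generated face region
    stick_crop = x_t[:, :, top:top+h, left:left+w]         # matching stickman region
    residual = G_f(torch.cat([face_crop, stick_crop], dim=1))
    refined = full_image.clone()
    refined[:, :, top:top+h, left:left+w] = face_crop + residual
    return refined
```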

◼️ Training is done in two stages:
1. Train the image generator G and discriminator D, then freeze their weights.
2. Train a face generator G_f along with the face discriminator D_f.

◼️ Pose transfer from source video to a target person:
1. Source stickmen are normalized to match the position and scale of the target person's poses (a minimal sketch follows this list).
2. Frame by frame, feed the normalized source stickman images to the generators G and G_f, and get the target person performing the same movements as the source.
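A minimal sketch of the normalization in step 1, assuming per-video statistics (apparent body height, ankle height, horizontal center) have been precomputed for both subjects; the paper additionally interpolates the scale per frame between near and far poses, which is left out here.

```python
import numpy as np

def normalize_pose(src_kpts, src_stats, tgt_stats):
    """Map source keypoints into the target subject's position and scale.

    src_kpts : (N, 2) array of (x, y) joint positions for one frame
    *_stats  : dicts with 'height', 'ankle_y', 'center_x' per video
               (hypothetical precomputed statistics)
    """
    scale = tgt_stats['height'] / src_stats['height']
    out = src_kpts.astype(np.float64) * scale
    out[:, 1] += tgt_stats['ankle_y'] - scale * src_stats['ankle_y']    # align feet
    out[:, 0] += tgt_stats['center_x'] - scale * src_stats['center_x']  # align horizontally
    return out
```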

✔️ Experiments:
The authors test the method using dancing videos collected from the internet as sources and videos they filmed themselves as targets.

💬 Discussion:
Overall, the method produces compelling results of a target person dancing the way another person does.
But it is not perfect. Self-occlusions of the person are not rendered properly (for example, limbs can disappear).
Target subjects were deliberately filmed in tight clothes with minimal wrinkling, since the pose representation encodes no information about clothing, so the method may not work on people wearing arbitrary apparel. Another problem the authors point out is video jitter when the input motion or motion speed differs from the movements seen at training time.

Links:
[1] https://arxiv.org/pdf/1711.11585.pdf
[2] https://github.com/CMU-Perceptual-Computing-Lab/openpose


