IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation

1State University of New York at Buffalo, 2Microsoft, 3Advanced Micro Devices
European Conference on Computer Vision (ECCV) 2024

Given an input reference pose sequence and foreground and background images, IDOL faithfully animates the human foreground according to the pose sequence and generates the corresponding depth maps, which can be rendered as 2.5D videos.

Abstract

Significant advances have been made in human-centric video generation, yet the joint video-depth generation problem remains underexplored. Most existing monocular depth estimation methods may not generalize well to synthesized images or videos, and multi-view-based methods have difficulty controlling the human appearance and motion. In this work, we present IDOL (unIfied Dual-mOdal Latent diffusion) for high-quality human-centric joint video-depth generation. Our IDOL consists of two novel designs. First, to enable dual-modal generation and maximize the information exchange between video and depth generation, we propose a unified dual-modal U-Net, a parameter-sharing framework for joint video and depth denoising, wherein a modality label guides the denoising target, and cross-modal attention enables the mutual information flow. Second, to ensure a precise video-depth spatial alignment, we propose a motion consistency loss that enforces consistency between the video and depth feature motion fields, leading to harmonized outputs. Additionally, a cross-attention map consistency loss is applied to align the cross-attention map of the video denoising with that of the depth denoising, further facilitating spatial alignment. Extensive experiments on the TikTok and NTU120 datasets show our superior performance, significantly surpassing existing methods in terms of video FVD and depth accuracy.

Method

Unified dual-modal U-Net

Left: Overall model architecture. Our IDOL features a unified dual-modal U-Net (gray boxes), a parameter-sharing design for joint video-depth denoising, wherein the denoising target is controlled by a one-hot modality label (\(y_{\text{v}}\) for video and \(y_\text{d}\) for depth).
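Below is a minimal PyTorch sketch of how such modality-label conditioning could be wired up, with the label embedding added to the timestep embedding; the module names and the backbone signature are illustrative assumptions, not the released implementation.

import torch.nn as nn

class ModalityConditionedUNet(nn.Module):
    """Shared-parameter denoiser whose target modality is set by a one-hot label."""
    def __init__(self, backbone: nn.Module, time_embed_dim: int):
        super().__init__()
        self.backbone = backbone  # the same U-Net weights serve video and depth denoising
        # Projects the one-hot modality label (video / depth) into the embedding space.
        self.modality_embed = nn.Linear(2, time_embed_dim)

    def forward(self, noisy_latents, time_embed, modality_label, cond):
        # modality_label: (B, 2) one-hot, e.g. y_v = [1, 0] for video, y_d = [0, 1] for depth
        emb = time_embed + self.modality_embed(modality_label.float())
        # Assumed backbone signature: (latents, embedding, conditioning) -> denoised latents
        return self.backbone(noisy_latents, emb, cond)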

Right: U-Net block structure. Cross-modal attention is added to enable mutual information flow between video and depth features, with consistency loss terms \(\mathcal{L}_{\text{mo}}\) and \(\mathcal{L}_{\text{xattn}}\) ensuring the video-depth alignment. Skip connections are omitted for conciseness.
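As a concrete illustration of the information exchange, the sketch below adds bidirectional cross-modal attention between video and depth features taken from the same U-Net layer; the class name, token shapes, and residual wiring are assumptions for illustration rather than the exact block used in IDOL.

import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Bidirectional attention letting video and depth features exchange information."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.video_to_depth = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.depth_to_video = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, f_video, f_depth):
        # f_video, f_depth: (B, N, C) flattened spatial tokens from the same layer
        v_out, _ = self.video_to_depth(f_video, f_depth, f_depth)  # video queries depth
        d_out, _ = self.depth_to_video(f_depth, f_video, f_video)  # depth queries video
        return f_video + v_out, f_depth + d_out  # residual connections keep both streams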

Learning video-depth consistency

Visualization of the video and depth feature maps and their motion fields without the consistency losses. We attribute the inconsistent video-depth output (blue circle) to the inconsistent video and depth feature motions (last row). This problem exists in multiple layers within the U-Net; we randomly select layers 4 and 7 in the up block for visualization. For the feature map visualization, we follow Plug-and-Play and apply PCA to the video and depth features at each individual layer, rendering the first three components. The motion field is visualized similarly to optical flow, where different colors indicate different motion directions.
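For reference, a small sketch of the PCA-based feature rendering described above (project each layer's features onto their first three principal components and display them as RGB); the tensor layout and normalization are assumptions for illustration.

import numpy as np
from sklearn.decomposition import PCA

def visualize_features(features: np.ndarray) -> np.ndarray:
    # features: (H, W, C) feature map from a single U-Net layer
    h, w, c = features.shape
    comps = PCA(n_components=3).fit_transform(features.reshape(-1, c))  # (H*W, 3)
    comps = (comps - comps.min(0)) / (comps.max(0) - comps.min(0) + 1e-8)  # scale to [0, 1]
    return comps.reshape(h, w, 3)  # first three components rendered as an RGB image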

To promote video-depth consistency, we propose a motion consistency loss \(\mathcal{L}_{\text{mo}}\) to synchronize the video and depth feature motions, and a cross-attention map consistency loss \(\mathcal{L}_{\text{xattn}}\) to align the cross-attention map of the video denoising with that of the depth denoising.
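The sketch below illustrates the two objectives, assuming the per-layer feature motion fields and cross-attention maps have already been extracted from the video and depth denoising passes; the tensor shapes and the L1 penalty are illustrative assumptions rather than the exact formulation in the paper.

import torch.nn.functional as F

def motion_consistency_loss(motion_video, motion_depth):
    # motion_*: (B, T-1, H, W, 2) feature motion fields from matching U-Net layers
    return F.l1_loss(motion_video, motion_depth)

def xattn_consistency_loss(attn_video, attn_depth):
    # attn_*: (B, heads, Q, K) cross-attention maps from the video / depth denoising
    return F.l1_loss(attn_video, attn_depth)

# Total consistency objective: L = lambda_mo * L_mo + lambda_xattn * L_xattn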

Pose Editing Comparison

Example #1

Example #2

Foreground-Background Composition

Background editing examples

Foreground editing examples

2.5D Video Comparison

Compared with other multi-modal generation methods (MM-Diffusion and LDM3D), our IDOL (1) generates spatially aligned video and depth, (2) produces smoother video, and (3) better preserves the human identity.

Example on TikTok videos

Example on NTU120 videos

Quantitative Comparison

Our IDOL achieves the best video and depth generation quality on the TikTok and NTU120 datasets.

BibTeX

@inproceedings{zhai2024idol,
  title={IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation},
  author={Zhai, Yuanhao and Lin, Kevin and Li, Linjie and Lin, Chung-Ching and Wang, Jianfeng and Yang, Zhengyuan and Doermann, David and Yuan, Junsong and Liu, Zicheng and Wang, Lijuan},
  year={2024},
  booktitle={Proceedings of the European Conference on Computer Vision},
}