
Cross-domain Correspondence Learning for Exemplar-based Image Translation


Content provided by Bo Zhang, the co-author of the paper Cross-domain Correspondence Learning for Exemplar-based Image Translation.

We present a general framework for exemplar-based image translation, which synthesizes a photo-realistic image from an input in a distinct domain (e.g., a semantic segmentation mask, an edge map, or pose keypoints), given an exemplar image. The output's style (e.g., color, texture) is consistent with the semantically corresponding objects in the exemplar. Our method significantly outperforms state-of-the-art methods in image quality, with the image style faithful to the exemplar while preserving semantic consistency. Moreover, we show the utility of our method in several applications.

What’s New: We propose to jointly learn the cross-domain correspondence and the image translation, where the two tasks facilitate each other and can thus be learned with weak supervision.

  • We address the problem of learning dense cross-domain correspondence with weak supervision, by learning it jointly with image translation.
  • Building on the cross-domain correspondence, we present a general solution to exemplar-based image translation that, for the first time, outputs images resembling the fine structures of the exemplar at the instance level.
  • Our method outperforms state-of-the-art methods in image quality by a large margin on various tasks, such as segmentation-mask-to-image translation, sketch-to-face synthesis, and pose generation.
  • The proposed technique also enables several intriguing applications, such as image editing and makeup transfer.

How It Works: To achieve better controllability and higher translation quality, our method allows users to provide an example image, so that the output's style (e.g., color, texture) is consistent with the semantically corresponding objects in the exemplar. We propose to jointly learn the cross-domain correspondence and the image translation, where the two tasks facilitate each other and can thus be learned with weak supervision. We show that the proposed method not only improves translation quality but also, for the first time, obtains dense cross-domain correspondence.
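The core correspondence step can be sketched informally: once the input and the exemplar are embedded in a shared feature domain, a dense correlation matrix between the two feature sets lets each position in the input attend to (and pull colors from) semantically matching positions in the exemplar. Below is a minimal NumPy sketch of that warping idea; the function name `warp_exemplar`, the flattened `(N, C)` feature layout, and the temperature value are illustrative assumptions, not the paper's implementation, and the feature extractors themselves (which the paper trains jointly with the generator) are assumed given.

```python
import numpy as np

def warp_exemplar(src_feat, ref_feat, ref_pixels, tau=0.01):
    """Warp exemplar pixels to the source layout via dense correspondence.

    src_feat:   (N, C) features of the source input (e.g., a segmentation mask)
    ref_feat:   (M, C) features of the exemplar image
    ref_pixels: (M, 3) exemplar colors at the same M positions
    Both feature sets are assumed to live in a shared embedding domain.
    """
    # Channel-wise normalize so the correlation is a cosine similarity.
    s = src_feat / (np.linalg.norm(src_feat, axis=1, keepdims=True) + 1e-8)
    r = ref_feat / (np.linalg.norm(ref_feat, axis=1, keepdims=True) + 1e-8)

    # Dense correlation matrix: one row of similarities per source position.
    corr = s @ r.T                            # (N, M)

    # Softmax over exemplar positions; a small tau sharpens the matching.
    corr = corr / tau
    corr -= corr.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(corr)
    attn /= attn.sum(axis=1, keepdims=True)

    # Each source position receives a weighted average of exemplar colors.
    return attn @ ref_pixels                  # (N, 3)
```

The warped image produced this way then serves as style guidance for the translation network; in joint training, gradients from the translation loss also refine the embeddings, which is what lets the correspondence emerge under weak supervision.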

Key Insights: Explicitly establishing the cross-domain correspondence helps improve image translation quality, and during joint training the correspondence can be learned with only weak supervision.

The paper Cross-domain Correspondence Learning for Exemplar-based Image Translation is on arXiv. Click here to visit the project website.


Meet the authors: Pan Zhang, Bo Zhang, Dong Chen, Lu Yuan, and Fang Wen, from the University of Science and Technology of China, Microsoft Research Asia, and Microsoft Cloud+AI.

Microsoft Research Asia (MSRA), Microsoft’s fundamental research arm in the Asia Pacific region and the company’s largest research institute outside the United States, was founded in 1998 in Beijing. Through collaboration with the best talents from Asia and across the globe, MSRA has grown into a world-class research lab, conducting both basic and applied research.

Share Your Research With Synced Review

Share My Research is Synced’s new column that welcomes scholars to share their own research breakthroughs with over 1.5M global AI enthusiasts. Beyond technological advances, Share My Research also calls for interesting stories behind the research and exciting research ideas. Share your research with us by clicking here.
