As the seasons change so do wardrobes. While people typically crowd boutiques or department stores to shop for new clothes, retail closures and stay-at-home measures adopted to counter the spread of COVID-19 have left many with little choice but to shop online.
Online shopping now accounts for 38.6 percent of all apparel sales in the US, according to Digital Commerce 360’s 2020 Online Apparel Report. That share is up 10 percentage points over the last three years, and the recent acceleration is expected to continue through 2021. The trend has boosted the deployment and quality of real-world AI applications such as chatbots and visual search, as well as the emerging field of online clothes try-on, which digitally recreates the fitting rooms and full-length mirrors of brick-and-mortar clothing stores.
In a new paper, a team from Google Research, MIT CSAIL and the University of Washington proposes VOGUE, an AI-powered optimization method that deforms garments according to a given body shape while preserving pattern and material details to deliver state-of-the-art photorealistic, high-resolution try-on images.
Leveraging the power of StyleGAN2, the researchers’ novel controllable image generation algorithm can “seamlessly” identify and integrate person-specific components such as body shape, hair, skin colour, etc. from a target-person image with areas of interest such as folds, material properties, shape, etc. in a garment image.
Unlike previous general GAN editing approaches that require manual choice of noise injection structure or clusters and fixed parameters for all layers, the proposed method automatically computes the best interpolation coefficients by optimizing a loss function designed to preserve the identity and pose of the person while switching only the garment.
The researchers first trained a modified StyleGAN2 network conditioned on 2D human body pose on 100K unpaired fashion photographs. Given a person image and a garment image, the trained model then automatically finds the optimal interpolation coefficients per layer, enabling semantically consistent and photorealistic results at a high resolution of 512 × 512 pixels.
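The core idea of per-layer latent interpolation can be illustrated with a toy numpy sketch. This is not the paper’s implementation: the latent codes, layer assignments and quadratic surrogate loss below are all hypothetical stand-ins for the projected W+ codes and the identity/garment losses VOGUE optimizes, but the mechanics (one learnable coefficient per style layer, tuned by gradient descent) follow the same pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, dim = 16, 512  # a 512x512 StyleGAN2 generator has 16 style inputs

# Hypothetical per-layer latent codes (W+ space), standing in for the
# projections of the person and garment images into the generator's latent space.
w_person = rng.normal(size=(num_layers, dim))
w_garment = rng.normal(size=(num_layers, dim))

# Toy stand-ins for the paper's losses: keep the person's identity in the
# coarse layers, take the garment from the middle layers.
identity_layers = [0, 1, 2, 3]
garment_layers = [6, 7, 8, 9]

def loss_and_grad(q):
    """Quadratic surrogate loss over the mixed code m_l = (1-q_l)*w_p + q_l*w_g."""
    diff = w_garment - w_person                 # (L, D)
    m = w_person + q[:, None] * diff            # per-layer interpolation
    grad = np.zeros(num_layers)
    loss = 0.0
    for l in identity_layers:                   # pull m_l toward the person code
        r = m[l] - w_person[l]
        loss += r @ r
        grad[l] += 2.0 * (r @ diff[l])
    for l in garment_layers:                    # pull m_l toward the garment code
        r = m[l] - w_garment[l]
        loss += r @ r
        grad[l] += 2.0 * (r @ diff[l])
    return loss, grad

q = np.full(num_layers, 0.5)                    # one coefficient per layer
for _ in range(200):                            # plain gradient descent
    _, g = loss_and_grad(q)
    q = np.clip(q - 1e-4 * g, 0.0, 1.0)
```

After optimization, `q` drives the coarse layers toward the person (near 0) and the garment layers toward the garment (near 1), while unconstrained layers stay at their initial value; in VOGUE the same coefficients are instead learned against perceptual identity and garment losses computed on the generated image.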
While the approach shows promise in easing consumers’ online clothes-shopping anxiety, the researchers note there are limitations: the method still struggles, for example, with extreme poses and underrepresented garments. Also, since the interpolation assumes a perfect projection, unsatisfactory projection of real images can negatively affect the results. The researchers therefore propose improving the projection of real images onto the StyleGAN latent space as a possible future research direction.
The paper VOGUE: Try-On by StyleGAN Interpolation Optimization is on arXiv.
Analyst: Yuqing Li | Editor: Michael Sarazen; Yuan Yuan