NAS-Generated Model Achieves SOTA In Super-Resolution

Single image super-resolution (SISR) is a critical research problem for smartphone image processing, but current state-of-the-art models in this domain are hand-crafted by human experts. Chinese smartphone giant Xiaomi is challenging this labour-intensive approach with a new machine-generated model that achieves impressive super-resolution results.

In its new paper Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search, Xiaomi’s research team introduces a deep convolutional neural network (CNN) model produced by a neural architecture search (NAS) approach. Its performance is comparable to that of cutting-edge models such as CARN and CARN-M.

Xiaomi’s novel NAS approach has three principal ingredients: an elastic search space, a hybrid model generator, and a model evaluator based on incomplete training. The team employed a hybrid controller and a cell-based elastic search space that enables both macro and micro search.
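
The paper itself details the controller and evaluator designs; purely as a rough illustration of the idea, the sketch below shows how a multi-objective search loop with an incomplete-training evaluator could be organized. The search-space entries and helper functions (sample_architecture, evaluate_with_incomplete_training, pareto_front) are hypothetical and not taken from the FALSR code.

```python
import random

# Hypothetical sketch of a multi-objective NAS loop that scores candidates
# with incomplete (short) training. Helper names are illustrative only.

SEARCH_SPACE = {
    "cell_type": ["residual", "dense", "plain"],  # illustrative micro (cell-level) choices
    "channels": [16, 32, 48],
    "num_cells": [3, 5, 7],                       # illustrative macro (network-level) choice
}

def sample_architecture():
    """Pick one option for every searchable choice in the space."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate_with_incomplete_training(arch):
    """Stand-in for the model evaluator: in practice the candidate would be
    built, trained for only a few steps, and then measured. Random values
    are returned here so the sketch runs end to end."""
    psnr = random.uniform(30.0, 38.0)         # dB
    mult_adds = random.uniform(50e6, 1500e6)  # per fixed-size input
    params = random.uniform(50e3, 2e6)
    return {"arch": arch, "psnr": psnr, "mult_adds": mult_adds, "params": params}

def pareto_front(population):
    """Keep candidates not dominated on (PSNR up, mult-adds down, params down)."""
    front = []
    for cand in population:
        dominated = any(
            other is not cand
            and other["psnr"] >= cand["psnr"]
            and other["mult_adds"] <= cand["mult_adds"]
            and other["params"] <= cand["params"]
            and (other["psnr"] > cand["psnr"]
                 or other["mult_adds"] < cand["mult_adds"]
                 or other["params"] < cand["params"])
            for other in population
        )
        if not dominated:
            front.append(cand)
    return front

population = [evaluate_with_incomplete_training(sample_architecture()) for _ in range(100)]
survivors = pareto_front(population)  # candidates worth training to completion
print(len(survivors), "non-dominated candidates")
```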

The paper sets three objectives for super-resolution tasks:

  • A quantitative metric reflecting model performance (PSNR)
  • A quantitative metric evaluating the computational cost of each model (mult-adds)
  • The number of parameters

along with two additional constraints (see the sketch after this list):

  • A minimum PSNR required for practical visual perception
  • A maximum mult-adds budget reflecting resource limits
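
For readers unfamiliar with the metrics, the snippet below shows the standard PSNR formula and one way the two hard constraints could be expressed as a feasibility check. The threshold values are placeholders, not figures from the paper.

```python
import numpy as np

def psnr(reference, reconstruction, max_val=255.0):
    """Standard peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
    diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def satisfies_constraints(model_psnr, mult_adds, min_psnr=36.0, max_mult_adds=600e6):
    """Hard feasibility check on a candidate model; the default thresholds
    here are placeholders, not values taken from the FALSR paper."""
    return model_psnr >= min_psnr and mult_adds <= max_mult_adds
```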

Xiaomi researchers compared their fully trained FALSR (Fast, Accurate and Lightweight Super-Resolution) models with state-of-the-art methods on commonly used super-resolution test datasets. The team only included models with comparable FLOPs, excluding RDN [Zhang et al., 2018b] and RCAN [Zhang et al., 2018a]. The comparisons focused on ×2 tasks, and all mult-adds were calculated with 480×480 inputs. The results are below.
[Figure: comparison of the FALSR models with state-of-the-art super-resolution methods on standard test datasets]

Xiaomi researchers required less than three days on eight Tesla V100 GPUs to execute the pipeline once, and used DIV2K as the training set.
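
For context on the mult-adds figures used in these comparisons, the snippet below shows the usual counting convention for a single convolution layer on a fixed-size input; the layer sizes in the example are illustrative, not layers from FALSR.

```python
def conv_mult_adds(height, width, c_in, c_out, kernel_size, stride=1):
    """Multiply-accumulate count for one conv layer, using the common
    convention of one mult-add per weight per output position (bias ignored)."""
    h_out, w_out = height // stride, width // stride
    return h_out * w_out * c_out * c_in * kernel_size * kernel_size

# Example: one 3x3 convolution with 32 input and 32 output channels applied
# to a 480x480 feature map (illustrative sizes) -> roughly 2.1 billion mult-adds.
print(conv_mult_adds(480, 480, c_in=32, c_out=32, kernel_size=3))
```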

It’s expected that the new super-resolution AI algorithm will be integrated with Xiaomi devices such as its newest flagship smartphone Xiaomi Mi 9 — which features a powerful Snapdragon 855 SoC and a 48MP rear camera — to deliver improved image quality.

The paper Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search is on arXiv; Xiaomi recently open-sourced the FALSR code on GitHub.


Author: Robert Tian | Editor: Michael Sarazen
