In the recently published paper Designing Network Design Spaces, researchers from Facebook AI introduce a novel low-dimensional design space, RegNet, which produces simple, fast and versatile networks. In experiments, RegNet models outperform SOTA EfficientNet models and can be up to five times faster on GPUs.
The researchers’ intentions were straightforward: “Aim for interpretability and to discover general design principles that describe networks that are simple, work well, and generalize across settings.” Rather than designing and developing individual networks, the team focused on designing actual network design spaces comprising huge and possibly infinite populations of model architectures.
Manual network design typically considers convolution types, network and data sizes, depth, residuals, etc. However, as design choices multiply, manually identifying well-optimized networks becomes neither easy nor efficient. While neural architecture search (NAS) is a popular alternative, the models it finds can be limited by the search space settings. Moreover, NAS does not necessarily help researchers discover network design principles or generalize networks across settings.

So how does one design a good network design space? The Facebook AI team describes its approach as “akin to manual network design, but elevated to the population level.”
The researchers start with an initial design space as input and characterize it by sampling and training models drawn from it. Design space quality is analyzed using the error empirical distribution function (EDF), which gives the fraction of sampled models with error below a given threshold. Various properties of the design space are visualized, and after an empirical bootstrap method estimates the likely range in which the best models fall, the researchers use these insights to refine the design space.
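To make the two statistical tools concrete, here is a minimal sketch of an error EDF and an empirical bootstrap over the best model’s parameter value. The function names, the toy depth-vs-error population, and the bootstrap settings are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

def error_edf(errors, thresholds):
    """Error EDF: F(e) = fraction of sampled models with error below e."""
    errors = np.asarray(errors)
    return np.array([(errors < e).mean() for e in thresholds])

def bootstrap_best(errors, params, n_boot=1000, sample_frac=0.25, seed=None):
    """Empirical bootstrap: repeatedly subsample the model population,
    record the parameter value of the lowest-error model in each subsample,
    and return the range where the best value likely falls."""
    rng = np.random.default_rng(seed)
    errors, params = np.asarray(errors), np.asarray(params)
    k = max(1, int(sample_frac * len(errors)))
    best_params = []
    for _ in range(n_boot):
        idx = rng.choice(len(errors), size=k, replace=True)
        best_params.append(params[idx[np.argmin(errors[idx])]])
    return np.percentile(best_params, [2.5, 97.5])

# Toy population: error as a noisy function of depth (illustrative numbers only)
rng = np.random.default_rng(0)
depths = rng.integers(10, 40, size=500)
errors = 5.0 + 0.01 * (depths - 20) ** 2 + rng.normal(0, 0.3, size=500)

print(error_edf(errors, thresholds=[5.0, 5.5, 6.0]))  # EDF at a few error cutoffs
print(bootstrap_best(errors, depths, seed=1))         # likely range of the best depth
```

In this toy setup the bootstrap interval concentrates around a depth of 20, mirroring how the paper uses the same machinery to locate where the best models sit along each design dimension.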
The Facebook AI team conducted controlled comparisons with EfficientNet, using the same training setup and no training-time enhancements. Introduced in 2019, Google’s EfficientNet uses a combination of NAS and model scaling rules and represents the current SOTA. With comparable training settings and FLOPs, RegNet models outperformed EfficientNet models while being up to 5× faster on GPUs.
Analyzing the RegNet design space also provided the researchers with other unexpected insights into network design. They noticed, for example, that the depth of the best models is stable across compute regimes, with an optimal depth of roughly 20 blocks (60 layers). And while modern mobile networks commonly employ inverted bottlenecks, the researchers found that inverted bottlenecks actually degrade performance: the best models use neither a bottleneck nor an inverted bottleneck.
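For context on that last finding: a residual block’s bottleneck ratio b sets the width of its inner convolutions relative to the block width w, so b > 1 narrows them (a classic bottleneck), b < 1 widens them (an inverted bottleneck), and b = 1 leaves them unchanged, the setting the paper finds works best. Below is a minimal PyTorch-style sketch of this parameterization; the class name and layer layout are chosen for illustration, and the paper’s actual blocks include group convolutions, strides, and other details omitted here:

```python
import torch
import torch.nn as nn

class ResidualBottleneckBlock(nn.Module):
    """Illustrative residual block with bottleneck ratio b.

    b > 1: classic bottleneck (inner width narrower than w)
    b = 1: no bottleneck -- the setting the paper finds works best
    b < 1: inverted bottleneck (inner width wider than w)
    """
    def __init__(self, w, b=1.0):
        super().__init__()
        w_inner = int(round(w / b))
        self.f = nn.Sequential(
            nn.Conv2d(w, w_inner, kernel_size=1, bias=False),
            nn.BatchNorm2d(w_inner), nn.ReLU(inplace=True),
            nn.Conv2d(w_inner, w_inner, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(w_inner), nn.ReLU(inplace=True),
            nn.Conv2d(w_inner, w, kernel_size=1, bias=False),
            nn.BatchNorm2d(w),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual connection: input and output widths are both w
        return self.relu(x + self.f(x))

x = torch.randn(1, 64, 32, 32)
y = ResidualBottleneckBlock(w=64, b=1.0)(x)  # b = 1: no bottleneck
```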
The paper Designing Network Design Spaces is on arXiv.
Journalist: Fangyu Cai | Editor: Michael Sarazen