Stanford ‘SIRENs’ Apply Periodic Activation Functions to Implicit Neural Representations

The challenge of how best to represent signals is at the core of a host of science and engineering problems. In a new paper, Stanford University researchers propose that implicit neural representations offer a number of benefits over conventional continuous and discrete representations and could be used to address many of these problems.

The researchers introduce sinusoidal representation networks (SIRENs) as a method for leveraging periodic activation functions for implicit neural representations and demonstrate their suitability for representing complex natural signals and their derivatives.

Traditionally, discrete representations are used to model many kinds of signals: pixel grids for images and video, sampled waveforms for audio, point clouds for 3D shapes, and so on. Such signal representations are also used to solve more general boundary value problems such as the Poisson, Helmholtz, or wave equations.

Over the last few years, implicit neural representations have emerged as a novel way to represent 3D shapes, with most of these representations built on multilayer perceptrons (MLPs) with ReLU activations. The key benefits of such representations are that they are not tied to a fixed grid resolution and that the memory they require scales with the complexity of the signal rather than with its spatial resolution.
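
To make the idea concrete, the sketch below (our own PyTorch illustration, not code from the paper; the layer widths and the render helper are assumptions) builds a small ReLU MLP that maps a 2D pixel coordinate to an RGB value. Because the network is queried one coordinate at a time, the same fixed set of weights can be sampled at any output resolution.

```python
import torch
import torch.nn as nn

# A coordinate-based MLP: (x, y) in [-1, 1]^2 -> (r, g, b).
# Layer widths are illustrative choices, not values from the paper.
relu_mlp = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3),
)

def render(model, height, width):
    """Sample the continuous representation on a grid of any resolution."""
    ys = torch.linspace(-1, 1, height)
    xs = torch.linspace(-1, 1, width)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (H, W, 2)
    with torch.no_grad():
        return model(grid.reshape(-1, 2)).reshape(height, width, 3)

low_res = render(relu_mlp, 64, 64)      # coarse query
high_res = render(relu_mlp, 512, 512)   # same weights, finer query
```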

The Stanford researchers found that these ReLU-based architectures, while promising, lack the capacity to represent details in the underlying signals and typically do not effectively represent the derivatives of a target signal. They therefore struggle when encoding complex or large scenes with fine details.

To address these limitations, the researchers leveraged MLPs with periodic activation functions for implicit neural representations. They demonstrate in the paper that their approach not only represents details in the signals better than ReLU MLPs or the positional encoding strategies proposed in concurrent work, but that these properties also carry over to the signal’s derivatives, which is critical for many applications.

“Another motivation for modelling signals with continuous representations is to solve physics-based problems,” the researchers explain in a video released along with the paper. “Implicit neural representations could enable solving these problems faster and finding better solutions by learning priors over the space of functions they represent.”

SIREN is a simple neural network architecture for implicit neural representations that uses the sine as its periodic activation function. The researchers observe that any derivative of a SIREN is itself a SIREN, since the derivative of the sine is the cosine, which is simply a phase-shifted sine. The derivatives of a SIREN therefore inherit the properties of SIRENs, which enables the researchers to supervise any derivative of a SIREN with complicated signals.
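
The core architectural change can be sketched in a few lines (again our own hedged reconstruction, not the authors’ released code): each layer computes sin(ω0 · (Wx + b)), where the paper suggests a frequency factor of ω0 = 30. Differentiating sin(ω0 · x) gives ω0 · cos(ω0 · x), a scaled and phase-shifted sine, so taking derivatives leaves the network’s functional form unchanged.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One SIREN layer: sin(omega_0 * (W x + b))."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# A small SIREN mapping a 2D coordinate to a scalar signal value;
# depth and width are illustrative choices, not values from the paper.
siren = nn.Sequential(
    SineLayer(2, 256),
    SineLayer(256, 256),
    SineLayer(256, 256),
    nn.Linear(256, 1),  # linear output layer
)
```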

A SIREN not only converges rapidly to an accurate fit of complicated functions with high-frequency details; it can also fit a function supervised only through its first- or second-order derivatives, and can therefore be used to solve partial differential equations.
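
The snippet below sketches what supervising a derivative might look like (our illustration, reusing the siren model from the previous sketch): the input coordinates are marked as requiring gradients, the spatial gradient of the network output is obtained with autograd, and the loss is placed on that gradient rather than on the values themselves, in the spirit of the paper’s Poisson image reconstruction experiment. The target_grad tensor is a placeholder for real gradient measurements.

```python
import torch

# `siren` is the sine-activated coordinate network from the sketch above;
# any differentiable model mapping (N, 2) coordinates to (N, 1) values works.
coords = (torch.rand(1024, 2) * 2 - 1).requires_grad_(True)  # points in [-1, 1]^2
values = siren(coords)                                       # (1024, 1)

# Gradient of the output w.r.t. the inputs, kept in the graph (create_graph=True)
# so that a loss defined on it can still be backpropagated into the weights.
grad = torch.autograd.grad(
    values, coords,
    grad_outputs=torch.ones_like(values),
    create_graph=True,
)[0]                                                         # (1024, 2)

target_grad = torch.zeros_like(grad)        # placeholder for supervision data
loss = ((grad - target_grad) ** 2).mean()   # loss on the derivative, not the value
loss.backward()
```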

The researchers also present a principled initialization scheme for training SIRENs that preserves the distribution of activations through the network, so that the final output at initialization does not depend on the number of layers.
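
What such a scheme looks like in code, based on our reading of the paper and hedged accordingly: hidden-layer weights are drawn uniformly from ±sqrt(6 / fan_in) / ω0, where the division by ω0 compensates for the frequency factor applied in the forward pass, and the first layer is drawn from ±1 / fan_in. The helper name siren_init_ is our own.

```python
import math
import torch
import torch.nn as nn

def siren_init_(linear: nn.Linear, omega_0: float = 30.0, is_first: bool = False):
    """Uniform weight init for a sine layer (our reconstruction of the scheme).

    Hidden layers: U(-sqrt(6/fan_in)/omega_0, +sqrt(6/fan_in)/omega_0);
    the division by omega_0 offsets the omega_0 scaling in the forward pass.
    First layer:  U(-1/fan_in, +1/fan_in).
    """
    fan_in = linear.weight.size(-1)
    bound = 1.0 / fan_in if is_first else math.sqrt(6.0 / fan_in) / omega_0
    with torch.no_grad():
        linear.weight.uniform_(-bound, bound)

# Applied to the SIREN sketched earlier:
siren_init_(siren[0].linear, is_first=True)
for layer in siren[1:-1]:
    siren_init_(layer.linear)
siren_init_(siren[-1])  # final linear output layer
```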

In experiments, the researchers used SIRENs to solve the Poisson equation, a particular form of the Eikonal equation, the second-order Helmholtz partial differential equation, and the challenging inverse problem of full-waveform inversion.

The researchers demonstrate SIRENs’ performance across a range of applications, including representing natural signals such as images, audio, and video by directly fitting their values, and solving physics-based problems that impose constraints on first- or second-order derivatives.

The research results also impressed Turing Award winner Geoffrey Hinton, who tweeted his approval while highlighting the method’s potential.

Several exciting avenues for future work are proposed, including the exploration of other types of inverse problems and applications in areas beyond implicit neural representations, such as neural ordinary differential equations.

The paper Implicit Neural Representations with Periodic Activation Functions is on arXiv.


Journalist: Yuan Yuan | Editor: Michael Sarazen
