Turing Award winner and founder of the Montreal Institute for Learning Algorithms (Mila) Yoshua Bengio used his first-ever blog post to address the climate crisis, which he believes “is one of the most serious threats to humanity and the planet that our generation and coming generations will have to deal with.”
Last year, Bengio and researchers from nearly 20 institutions published a paper describing how machine learning can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. Now he’s directing and supervising Mila’s Climate Change AI team.
In a bid to raise awareness of the threats posed by climate change, the Mila team recently published a paper that uses generative adversarial networks (GANs) to generate images of how climate events may impact our environments — with a particular focus on floods.
Studies have suggested that making the consequences of climate change more concrete can help mobilize both individual and collective action. But real data on climate impacts such as floods in urban environments is scarce, and when available it often lacks key information. The Mila study therefore explored the potential of combining images from a simulated 3D environment with a domain adaptation approach.
The team is developing an interactive website that can fetch a user-entered address via Google Street View, then alter that location view to display a future state based on the predictions of climate models. They hope the tool will help the public more easily visualize and appreciate the risks associated with climate change.
The proposed Mila model is adapted from the Multimodal Unsupervised Image-to-image Translation (MUNIT) framework introduced in a 2018 study, and is able to leverage both simulated and real images to generate credible flood scenes. Starting with a street-level image, the researchers used unsupervised image-to-image translation techniques to alter the images, projecting floods where they are most likely to occur.
The team chose the MUNIT architecture as their starting point after testing different image-to-image translation networks such as CycleGAN and InstaGAN and carrying out quantitative and qualitative evaluations of the results. MUNIT was selected as the best fit for the study because it generates more realistic water textures, and its dual-discriminator-and-generator style-transfer approach effectively modifies style while leaving content untouched.
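MUNIT's central idea is to decompose an image into a shared content code and a domain-specific style code, then recombine the content of one image with the style of another. The toy numpy sketch below illustrates that decompose-and-recombine pattern only; treating per-channel means as "style" is a deliberately crude stand-in for illustration, not MUNIT's learned encoders.

```python
import numpy as np

# Toy analogy for MUNIT-style content/style decomposition (NOT the
# learned encoders from the paper): per-channel means act as "style",
# and the residual structure acts as "content".
def encode(img):
    style = img.mean(axis=(0, 1))   # one scalar per channel
    content = img - style           # structure left after removing "style"
    return content, style

def translate(img_src, img_ref):
    """Recombine the source image's content with the reference's style."""
    content, _ = encode(img_src)
    _, style_ref = encode(img_ref)
    return content + style_ref
```

Translating an image using itself as the style reference returns it unchanged, mirroring the identity behaviour a content-preserving style transfer should exhibit.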
To boost model compatibility, the researchers restricted the network’s cycle consistency loss; introduced an extra semantic consistency loss that preserves the semantic segmentation structure of the source image in all regions of the generated image except those to be modified; and applied binary masks marking the areas that should be flooded in a given image.
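The masked semantic consistency constraint can be sketched as follows. The per-pixel label maps and the simple mismatch-rate penalty here are illustrative assumptions, not the exact loss formulation from the paper:

```python
import numpy as np

def semantic_consistency_loss(seg_source, seg_generated, flood_mask):
    """Penalize label changes outside the region marked for flooding.

    seg_source, seg_generated: (H, W) integer class-label maps
    flood_mask: (H, W) boolean array, True where water may be added
    Returns the fraction of unmasked pixels whose label changed.
    """
    keep = ~flood_mask                            # pixels that must stay intact
    changed = (seg_source != seg_generated) & keep
    return changed.sum() / max(keep.sum(), 1)
```

A zero loss means every pixel outside the flood mask kept its semantic class, which is the behaviour the constraint is meant to enforce.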
To improve generalization capacity, the researchers implemented an adversarial classifier on the latent space features within the MUNIT architecture. This allowed the generator to learn high-dimensional features that are relevant for the translation task on the source domain and invariant with respect to the shift between the domains. The team says another advantage of using a simulator when the amount of real data is limited is the low cost of noiseless extra information, such as ground-truth semantic segmentation and depth information.
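A common way to make latent features domain-invariant is a domain-adversarial setup: a classifier tries to tell simulated features from real ones while the encoder is trained to confuse it. The sketch below shows only the encoder-side "confusion" objective, a generic domain-adaptation device rather than the exact classifier the Mila team implemented:

```python
import numpy as np

def domain_confusion_loss(p_sim):
    """Encoder-side objective: push the domain classifier's predicted
    probability that a feature came from the simulated domain toward 0.5,
    i.e. cross-entropy against a uniform target over the two domains.

    p_sim: array of classifier outputs in (0, 1) for a batch of features.
    """
    eps = 1e-7
    p = np.clip(p_sim, eps, 1 - eps)
    return float(np.mean(-0.5 * np.log(p) - 0.5 * np.log(1 - p)))
```

The loss reaches its minimum (log 2, about 0.693) when the classifier outputs 0.5 everywhere, i.e. when it can no longer distinguish simulated from real features.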
The current simulated dataset can already be augmented with effects such as fire while retaining the same capture points and metadata. In future research the team plans to integrate representations of additional extreme weather events such as wildfires and droughts while continuing to leverage both real and simulated data.
The paper Using Simulated Data to Generate Images of Climate Change is available on arXiv.
Journalist: Yuan Yuan | Editor: Michael Sarazen