
Google Introduces Neuroevolution for Self-Interpretable Agents

Researchers from Google Brain Tokyo and Google Japan have proposed a novel approach that helps guide reinforcement learning (RL) agents to what’s important in vision-based tasks.

Good gamers can tune out distractions and unimportant on-screen information and focus their attention on avoiding obstacles and overtaking others in virtual racing games like Mario Kart. But can machines behave similarly in such vision-based tasks? A possible solution is to design agents that encode and process abstract concepts, and research in this area has focused on learning all abstract information from visual inputs. This, however, is compute-intensive and can even degrade model performance. Now, researchers from Google Brain Tokyo and Google Japan have proposed a novel approach that helps guide reinforcement learning (RL) agents to what’s important in vision-based tasks.

Figure 2: Method overview. Illustration of data processing flow in the proposed method.

The researchers say that just as the human brain assigns most of its attention capacity to task-relevant elements and becomes temporarily blind to other signals, their proposed agent learns to ignore all but the task-critical regions in input images.

The team characterizes approaches in which gradient descent or evolution strategies directly compute network weight parameters as direct encoding methods, and instead proposes treating self-attention as a form of indirect encoding: large implicit attention matrices are generated from a small number of key-query parameters, yielding highly parameter-efficient agents in a simple but powerful way. The researchers trained these self-attention agents with neuroevolution, which removes the complexity required by gradient-based methods and results in simpler architectures. The team also incorporated modules that improve the effectiveness of the non-differentiable self-attention.
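To make the indirect-encoding idea concrete, here is a minimal sketch of how a pair of tiny key/query projections can generate a large implicit attention matrix over image patches and pick out the few patches a small controller is allowed to see. The patch size, projection width, top-K count, and helper names below are illustrative assumptions, not the paper’s exact configuration.

```python
import numpy as np

PATCH = 7                    # side length of a square image patch (illustrative)
D_IN = PATCH * PATCH * 3     # a flattened RGB patch
D_ATTN = 4                   # tiny projection width: very few evolvable parameters
TOP_K = 10                   # number of patches the controller is allowed to "see"

rng = np.random.default_rng(0)
# The evolvable weights of the vision module are just two small projections.
W_key = rng.normal(scale=0.1, size=(D_IN, D_ATTN))
W_query = rng.normal(scale=0.1, size=(D_IN, D_ATTN))

def extract_patches(img):
    """Slice an HxWx3 image into non-overlapping, flattened patches."""
    h, w, _ = img.shape
    patches, centers = [], []
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            patches.append(img[y:y + PATCH, x:x + PATCH].reshape(-1))
            centers.append((y + PATCH // 2, x + PATCH // 2))
    return np.stack(patches), np.array(centers)

def important_patch_centers(img):
    """Score every patch against every other patch and keep the top-K."""
    patches, centers = extract_patches(img)        # (N, D_IN)
    keys = patches @ W_key                         # (N, D_ATTN)
    queries = patches @ W_query                    # (N, D_ATTN)
    # A large implicit N x N attention matrix is generated from the small
    # key/query parameters -- the "indirect encoding" described above.
    logits = queries @ keys.T / np.sqrt(D_ATTN)
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)        # row-wise softmax
    importance = attn.sum(axis=0)                  # total attention each patch receives
    top = np.argsort(-importance)[:TOP_K]
    # Only the coordinates of the winning patches are passed on to a small
    # controller network; everything else in the frame is ignored.
    return centers[top]

# Example on a random 96x96 RGB frame (CarRacing-sized observation).
frame = rng.random((96, 96, 3))
print(important_patch_centers(frame))
```

Because only the two projection matrices (and the small controller) are evolved, the agent’s parameter count stays tiny while the attention matrix it induces can cover every pair of patches in the frame.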

Figure 1: In this work, researchers evolve agents that attend to a small fraction of their visual input critical for their survival, allowing for interpretable agents that are not only compact but also more generalizable. Here, they show examples of the agent’s attention highlighted in white patches. In CarRacing (top), the proposed agent mostly attends to the road borders, but shifts its focus to the turns before it changes heading direction. In DoomTakeCover (bottom), the agent is able to focus on fireballs and monsters, consistent with intuition.
Table 3: Scores from CarRacing and DoomTakeCover. Researchers report the average score over 100 consecutive tests with standard deviations. For reference, the required scores above which the tasks are considered solved are also included. Best scores are highlighted.

The research team evaluated the method on two challenging vision-based RL tasks: CarRacing and DoomTakeCover. In experiments, the proposed method solved both tasks and outperformed existing methods while requiring 1000x fewer parameters. The proposed agents also outperformed conventional methods in their ability to generalize to environments with modified task-irrelevant elements. The researchers further noted that the attention patches visualized in the pixel space make the agent’s decision process easier for humans to understand.
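Because the vision and controller modules together contain only thousands of evolvable parameters, the entire non-differentiable agent can be optimized with a black-box evolution strategy rather than backpropagation. The loop below is a generic, NES-style sketch with a dummy fitness function; the solver, population size, noise scale, and learning rate are illustrative assumptions, not the authors’ exact neuroevolution setup.

```python
import numpy as np

def rollout(params):
    """Dummy stand-in for an episode: a real fitness function would rebuild
    the attention module and controller from `params`, play one episode of
    CarRacing or DoomTakeCover, and return the cumulative reward. Here we
    simply reward closeness to an arbitrary target so the sketch runs."""
    target = np.linspace(-1.0, 1.0, params.size)
    return -np.sum((params - target) ** 2)

def evolve(num_params, generations=200, pop_size=64, sigma=0.1, lr=0.03):
    rng = np.random.default_rng(0)
    mean = np.zeros(num_params)                    # current estimate of the weights
    for _ in range(generations):
        noise = rng.normal(size=(pop_size, num_params))
        candidates = mean + sigma * noise          # perturbed parameter vectors
        rewards = np.array([rollout(c) for c in candidates])
        # Rank-normalize rewards so the update is robust to reward scale.
        utilities = rewards.argsort().argsort() / (pop_size - 1) - 0.5
        # Shift the mean toward perturbations that scored well (NES-style step).
        mean += lr / (pop_size * sigma) * noise.T @ utilities
    return mean

# The whole agent has on the order of thousands of parameters, not millions,
# which is what makes this kind of black-box search practical.
best = evolve(num_params=3000)
print(rollout(best))
```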

Figure 8: YouTube video background. The agent stops to look at the cat with the white belly rather than focusing on the road.

Alongside its state-of-the-art performance, the researchers also identified some limitations of the approach, for example that much of the extra generalization capability comes from “attending to the right thing, rather than from logical reasoning.” The visual module also struggles to generalize to cases where there are dramatic changes to the background.

The paper Neuroevolution of Self-Interpretable Agents is on arXiv.


Author: Yuqing Li | Editor: Michael Sarazen
