Reinforcement learning’s prowess in 3D understanding, real-time strategic decision-making, fast reaction, long-term planning, and language and communication has enabled machines to top humans in contests ranging from Atari’s Breakout to the ancient game of Go.
However, current reinforcement learning research is largely focused on over-simplified tasks conducted in monotonous virtual environments, with results that do not transfer to the real world. Today’s smart robots are far from functional when it comes to generalized tasks, and fall short when dealing with our semantically rich world.
To boost reinforcement learning research aimed at endowing robots with better generalization capabilities, Yi Wu from UC Berkeley and Yuxin Wu, Georgia Gkioxari, and Yuandong Tian from Facebook AI Research recently published the paper Building Generalizable Agents with a Realistic and Rich 3D Environment, which introduces a diverse set of training environments in a virtual property called House3D.
House3D comprises 45,622 human-designed 3D scenes extracted from the SUNCG dataset, with housing models ranging from single-room studios to multi-story houses, subdivided into 20 room types such as bedroom, living room, kitchen, and bathroom. All scenes are semantically annotated down to the level of individual objects. Agents can make observations in multiple modalities, including RGB images, depth, segmentation masks, and a top-down 2D view.
The benchmark task in the paper is a “concept-driven navigation” task called “RoomNav.” The agent is given a high-level task description such as “Go to the kitchen,” then prompted to explore the House3D environment to reach the target room.
RoomNav is formulated as a multi-target learning problem, for which the paper’s authors propose two baseline models with a gated-attention architecture: a gated-CNN network for continuous actions and a gated-LSTM network for discrete actions. The gated-CNN policy network was trained using deep deterministic policy gradient (DDPG), while the gated-LSTM policy was trained using the asynchronous advantage actor-critic (A3C) algorithm.
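The core idea of gated attention is to condition visual features on the target concept: the instruction embedding (e.g. for “kitchen”) is projected to a sigmoid gate that scales each visual feature channel. The following is a minimal NumPy sketch of that fusion step under assumed shapes; the function name, projection matrix `W`, and dimensions are illustrative, not the paper’s exact architecture.

```python
import numpy as np

def gated_attention(visual_features, instruction_embedding, W):
    """Fuse a target instruction into CNN feature maps via channel gating.

    visual_features: (C, H, W_img) feature maps from a vision backbone.
    instruction_embedding: (D,) embedding of the target concept, e.g. "kitchen".
    W: (C, D) projection matrix (a hypothetical learned parameter).
    """
    # Project the instruction to one scalar per channel, squash to (0, 1).
    gate = 1.0 / (1.0 + np.exp(-(W @ instruction_embedding)))  # shape (C,)
    # Scale each feature channel by its gate, broadcasting over space.
    return visual_features * gate[:, None, None]

# Toy usage with random values.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))   # 8 channels, 4x4 spatial map
instr = rng.standard_normal(16)          # 16-dim instruction embedding
W = rng.standard_normal((8, 16))
gated = gated_attention(feats, instr, W)
```

In the full models, the gated features would then feed a policy head (CNN or LSTM) that outputs actions for the chosen action space.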
The research results indicate improved generalization capabilities, with the team observing that “using the semantic signal as the input considerably enhances the agent’s generalization ability. Increasing the size of the training environments is important but at the same time introduces fundamental bottlenecks when training agents to accomplish the RoomNav task due to the higher complexity of the underlying task.”
The authors conclude: “We believe our [House3D] environment will benefit the community and facilitate the efforts towards building better AI agents. We also hope that our initial attempts towards addressing semantic generalization ability in reinforcement learning will serve as an important step towards building real-world robotic systems.”
Yi Wu, who co-authored the NIPS 2016 Best Paper Value Iteration Networks, is advised by esteemed UC Berkeley Professor Stuart Russell. The Facebook AI Research team, meanwhile, has conducted exploratory research on applying reinforcement learning to real-time strategy games such as StarCraft. Yuandong Tian has proposed ELF, a platform for reinforcement learning research in gaming. According to his Facebook research blog, “ELF allows researchers to test their algorithms in various game environments, including board games, Atari games, and custom-made, real-time strategy games.”
The paper Building Generalizable Agents with a Realistic and Rich 3D Environment has been submitted to the International Conference on Learning Representations (ICLR) 2018. You can read it here: https://arxiv.org/abs/1801.02209
The House3D project has been open-sourced on GitHub: https://github.com/facebookresearch/House3D
Journalist: Meghan Han | Editor: Michael Sarazen