Although machine learning has achieved huge advances in speech recognition, gaming and many other applications, some critics still regard it as little more than glorified “curve fitting” that lacks high-level cognitive abilities and reasoning skills.
A new paper from Tsinghua University, Google and ByteDance researchers proposes a neural-symbolic architecture for both inductive learning and logic reasoning. The Neural Logic Machines (NLM) model combines neural networks with logic programming and has exhibited strong performance on a variety of reasoning and decision-making tasks. The paper has been accepted to ICLR 2019.

Take the popular decision-making problem “blocks world” as an example: given an initial state with blocks on the ground and a specific target state of stacked blocks, the problem is solved through a series of block-moving operations. Accomplishing this task automatically with machine learning involves finding good plans and achieving the individual block-movement subgoals in the correct order so that the initial state is converted into the target state. Challenges include generalizing the learned rules to larger blocks worlds than those encountered during training, handling high-order relational data and quantifiers, scaling up with respect to rule complexity, and recovering rules from a minimal set of learning priors.
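To make the task concrete, here is a minimal, purely illustrative Python sketch of a blocks-world state and move operation; the encoding (a dictionary mapping each block to the object it rests on, with 0 denoting the ground) is our own assumption and not the paper's representation.

```python
# Toy blocks-world encoding (illustrative only): on[x] = object that block x rests on,
# where object 0 is the ground. The ground can support any number of blocks.

def is_clear(on, x):
    """True if no block rests directly on object x; the ground is always treated as clear."""
    return x == 0 or x not in on.values()

def move(on, x, y):
    """Return a new state with block x placed on object y, if the move is legal."""
    if x != 0 and x != y and is_clear(on, x) and is_clear(on, y):
        on = dict(on)
        on[x] = y
    return on

state = {1: 0, 2: 1}          # initial state: block 2 on block 1, block 1 on the ground
state = move(state, 2, 0)     # subgoal 1: put block 2 on the ground
state = move(state, 1, 2)     # subgoal 2: stack block 1 on block 2 (a possible target state)
print(state)                  # {1: 2, 2: 0}
```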
NLM tackles these challenges through the neural realization of logic machines. Given a set of basic logic predicates grounded on a fixed set of objects, the model takes object properties and relations as input, applies first-order rules for sequential logic deduction, and outputs conclusive properties or relations of the objects for decision making. In blocks world, for example, from basic predicates such as IsGround(x) (the object x is the ground) and Clear(x) (there is no block on x), the NLM can infer whether x is moveable.
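As an illustration of the kind of lifted, first-order rule the NLM is intended to learn rather than be hand-coded, a hand-written version of such a moveability rule might look as follows (the exact rule form is our own illustration, not taken from the paper):

```python
# Illustrative hand-written rule: Moveable(x) <- NOT IsGround(x) AND Clear(x).
# The NLM's goal is to learn rules of this kind from data instead of being given them.

def moveable(is_ground, clear, x):
    return (not is_ground[x]) and clear[x]

is_ground = {0: True, 1: False, 2: False, 3: False}
clear     = {0: True, 1: True, 2: False, 3: True}   # block 3 rests on block 2 in this toy state
print([x for x in is_ground if moveable(is_ground, clear, x)])   # [1, 3]
```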

All logic predicates are represented as probabilistic tensors, and the logic rules are applied to them as neural operators. The NLM consists of multiple layers, with deeper layers forming higher-level abstractions that represent more complicated object properties and relations.
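The shapes and predicate names below are our own assumptions, but they sketch what such probabilistic tensors and a neural operator can look like: unary predicates over n objects become an n × (number of predicates) tensor of probabilities, binary predicates an n × n × (number of relations) tensor, and a learned differentiable function maps them to new predicate values.

```python
import numpy as np

n = 4  # number of objects; object 0 is the ground in this toy blocks world

# Unary predicates as a probabilistic tensor of shape (n, num_predicates).
# Columns (our own toy choice): IsGround, Clear. The ground is conventionally
# treated as clear so that blocks can always be placed on it.
unary = np.array([
    [1.0, 1.0],   # object 0: the ground
    [0.0, 1.0],   # block 1: clear
    [0.0, 0.0],   # block 2: block 3 rests on it
    [0.0, 1.0],   # block 3: clear
])

# Binary predicates as a tensor of shape (n, n, num_relations); here a single
# relation On(x, y) = "x rests directly on y".
on = np.zeros((n, n, 1))
on[1, 0, 0] = on[2, 0, 0] = 1.0   # blocks 1 and 2 rest on the ground
on[3, 2, 0] = 1.0                 # block 3 rests on block 2

# A neural operator is simply a differentiable function of these tensors. This toy
# linear layer plus sigmoid approximates Moveable(x) = NOT IsGround(x) AND Clear(x);
# in the NLM the corresponding weights are learned from data.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([-8.0, 8.0]), -4.0        # weights on [IsGround, Clear]
moveable = sigmoid(unary @ w + b)
print(moveable.round(2))                  # high only for clear, non-ground objects
```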
A key NLM innovation is the model’s architecture, specifically the introduction of meta-rules such as Boolean logic operations and quantification from symbolic logic systems. These enable the NLM to efficiently capture a large number of complex, lifted rules over all objects while maintaining relatively low computational complexity compared with logic-based algorithms such as inductive logic programming (ILP), which suffer from exponential computational complexity with respect to the number of logic rules.
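Here is a hedged sketch of the quantification meta-rule, under our own simplification rather than the paper's exact operators: taking the max or min of a probabilistic relation tensor over one object axis acts as a soft existential or universal quantifier, turning a binary predicate into unary ones.

```python
import numpy as np

n = 4                         # object 0 is the ground, objects 1-3 are blocks
on = np.zeros((n, n))         # On(x, y): probability that x rests directly on y
on[1, 0] = on[2, 0] = 1.0     # blocks 1 and 2 rest on the ground
on[3, 2] = 1.0                # block 3 rests on block 2

# Soft quantifiers as reductions over an object axis (our simplification of the idea):
rests_on_something = on.max(axis=1)           # "exists y: On(x, y)" for each x
has_block_on_top   = on.max(axis=0)           # "exists x: On(x, y)" for each y
clear              = 1.0 - has_block_on_top   # Clear(y): nothing rests on y

print(rests_on_something)   # [0. 1. 1. 1.] -- every block rests on something, the ground does not
print(clear)                # [0. 1. 0. 1.] -- block 2 is not clear (block 3 is on it)
```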

Leveraging reinforcement learning, the NLM also solved other decision-making problems including “sorting arrays” and “finding shortest paths.” The researchers further extended the NLM experiments to relational reasoning on family tree tasks and general graph reasoning under fully supervised learning, and evaluated the NLM against the representative frameworks Memory Networks (MemNN) and Differentiable Inductive Logic Programming (∂ILP).
NLM outperforms the baselines, achieving 100 percent accuracy on both families of reasoning tasks and 100 percent completeness on the decision-making problems. In contrast, MemNN failed on some tasks and achieved relatively low accuracy on certain reasoning tasks, while ∂ILP performed perfectly on almost all reasoning tasks but had difficulty scaling beyond small rule sets and could not solve decision-making problems such as blocks world.

The paper Neural Logic Machines is on arXiv. The project code will soon be available on GitHub.
Source: Synced China
Localization: Tingting Cao | Editor: Michael Sarazen