
Cool IJCNN Stuff from Alaska


From May 14 to 18, the 30th International Joint Conference on Neural Networks (IJCNN 2017) was held in Anchorage, AK, USA. Continuing a long tradition, the conference was organized by the International Neural Network Society (INNS) in cooperation with the IEEE Computational Intelligence Society (IEEE-CIS). This year, 621 of 933 submissions were accepted (a 66.6% acceptance rate), and the program featured 372 oral presentations and 249 poster presentations.

The plenary talks by Alex Graves, Stephen Grossberg, Odest Chadwicke Jenkins, Christof Koch, Jose Principe, and Paul Werbos reflected the conference's diverse themes, spanning deep learning, consciousness, robotics, neuroscience, cognitive and brain architectures, and the foundations of advanced learning systems.

Alex Graves’ talk covered the development of deep learning and artificial neural networks, and in particular his and his colleagues’ efforts to overcome the limits of recurrent neural networks (RNNs). He discussed four possible extensions toward better RNN architectures. The first is memory: conventionally, an RNN’s memory is stored in its vector of hidden activations, which makes it difficult to retain enough information. The solution he proposed is to give the network access to external memory, an idea he and his colleagues have pursued with Neural Turing Machines [1] and other memory-augmented architectures. The second extension is letting the RNN learn when to halt. This is a very practical problem, for instance when an RNN must process or generate sentences without explicit end markers, and it also relates to meta-learning; one solution is adaptive computation time (ACT) [2]. The third extension is learning beyond backpropagation through time (BPTT), for which there are several options, such as truncated backpropagation (sketched below), real-time recurrent learning (RTRL), and Decoupled Neural Interfaces [3]. The last extension is guided learning: how can we steer the network toward the data from which learning is most efficient? Active learning has been investigated for years; like humans, the network should be able to find the most “interesting” knowledge based on measures such as prediction gain or complexity gain. The main solutions are intrinsic motivation (more biologically inspired) [4] and curriculum learning [5].
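To make the third extension concrete, here is a minimal sketch of truncated BPTT in PyTorch. It is our own illustration rather than code from the talk, and all sizes (`seq_len`, `k`, etc.) and the dummy data are hypothetical: gradients are only propagated within a window of `k` steps, because the hidden state is detached between windows.

```python
# Minimal truncated-BPTT sketch (illustration only; sizes and data are made up).
import torch
import torch.nn as nn

seq_len, batch, in_dim, hid_dim, k = 100, 8, 16, 32, 20   # hypothetical sizes
rnn = nn.RNN(in_dim, hid_dim)                              # simple recurrent layer
readout = nn.Linear(hid_dim, in_dim)                       # maps hidden state to a prediction
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

x = torch.randn(seq_len, batch, in_dim)                    # dummy input sequence
y = torch.randn(seq_len, batch, in_dim)                    # dummy targets
h = torch.zeros(1, batch, hid_dim)                         # initial hidden state

for start in range(0, seq_len, k):
    h = h.detach()                                         # cut the graph: no gradient flows past this window
    out, h = rnn(x[start:start + k], h)
    loss = nn.functional.mse_loss(readout(out), y[start:start + k])
    opt.zero_grad()
    loss.backward()                                        # backprop only through the last k steps
    opt.step()
```

The design choice is the `detach()` call: it keeps the forward recurrence intact across the whole sequence while bounding the backward pass, which is what distinguishes truncated BPTT from full BPTT or from RTRL.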


Another interesting talk was given by Prof. Jose C. Principe, Distinguished Professor of Electrical and Biomedical Engineering at the University of Florida since 2002. His group focuses on biologically inspired models of information processing. He argued that, from a cognitive point of view, what perception requires are invariants in the model space, and that perception is an active process carried out through a perception-action-reward cycle. The Bayesian approach he and his colleagues propose applies a hierarchical, distributed architecture of dynamic processing elements that learns, in a self-organizing way, to cluster objects in video input. Unlike typical computer vision methods, Principe’s group also built a top-down pathway across layers in the form of causes, which provides a bidirectional pathway with feedback [6]. To make hierarchical predictions using priors, they proposed the deep predictive coding network (DPCN) [7], which captures the temporal dependencies in time-varying signals in a hierarchical manner and uses top-down information to modulate the representations in lower layers (a toy sketch of this idea follows below). In the second part of his talk, he shared his view on “hard science”: how we might interpret the psychological physics of the mind. He suspects that even modern machine learning techniques will fall short of a mathematical model of the mind, appealing to Gödel’s incompleteness theorem, which states that a sufficiently powerful consistent formal system cannot prove certain statements that are isomorphic to true statements of number theory [8].
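The sketch below illustrates the general predictive-coding idea behind top-down modulation, not the actual DPCN of [7]: each layer holds a latent state whose top-down prediction should match the layer below, and the states are refined by gradient descent on the resulting prediction errors. Layer sizes, weights, and the `infer` helper are all hypothetical.

```python
# Toy hierarchical predictive-coding sketch (our illustration, not the DPCN code).
import numpy as np

rng = np.random.default_rng(0)
dims = [64, 32, 16]                                              # hypothetical sizes: input, level 1, level 2
W = [rng.normal(scale=0.1, size=(dims[i], dims[i + 1])) for i in range(2)]  # top-down prediction weights

def prediction_errors(states):
    # error at each level = actual activity minus the top-down prediction from the level above
    return [states[i] - W[i] @ states[i + 1] for i in range(2)]

def infer(x, n_steps=50, lr=0.1):
    """Refine latent states so top-down predictions explain the lower layers."""
    states = [x] + [np.zeros(d) for d in dims[1:]]
    for _ in range(n_steps):
        errs = prediction_errors(states)
        for i in range(1, 3):
            grad = -W[i - 1].T @ errs[i - 1]                     # reduce the error this state causes below
            if i < 2:
                grad += errs[i]                                  # stay close to the prediction coming from above
            states[i] -= lr * grad
    return states, prediction_errors(states)

states, errs = infer(rng.normal(size=dims[0]))
print([float(np.linalg.norm(e)) for e in errs])                  # errors shrink as inference proceeds
```

The point of the toy example is the bidirectional flow: bottom-up errors drive the latent states, while top-down predictions from higher layers constrain the states below, mirroring the feedback pathway described in the talk.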

Further coverage of IJCNN is to be continued.

[1] Graves, Alex, Greg Wayne, and Ivo Danihelka. “Neural turing machines.” arXiv preprint arXiv:1410.5401 (2014).
[2] Graves, Alex. “Adaptive computation time for recurrent neural networks.” arXiv preprint arXiv:1603.08983 (2016).
[3] Jaderberg, Max, et al. “Decoupled neural interfaces using synthetic gradients.” arXiv preprint arXiv:1608.05343 (2016).
[4] Bellemare, Marc, et al. “Unifying count-based exploration and intrinsic motivation.” Advances in Neural Information Processing Systems. 2016.
[5] Graves, Alex, et al. “Automated Curriculum Learning for Neural Networks.” arXiv preprint arXiv:1704.03003 (2017).
[6] Principe, Jose C., and Rakesh Chalasani. “Cognitive architectures for sensory processing.” Proceedings of the IEEE 102.4 (2014): 514-525.
[7] Chalasani, Rakesh, and Jose C. Principe. “Deep predictive coding networks.” arXiv preprint arXiv:1301.3541 (2013).
[8] https://plato.stanford.edu/entries/goedel-incompleteness/


Author: Junpei Zhong | Localized by Synced Global Team: Hao Wang
