
2017 Trends — What You Want & What Comes

A glance at state-of-the-art research shows that neural networks will continue to serve us, and that artificial general intelligence is not yet in sight.

Source: c’t Magazin für Computer Technik 3/17

[Video Source] [Article Source]

Don’t be worried about AI. A glance at state-of-the-art research shows that neural networks will continue to serve us, and that artificial general intelligence is not yet in sight. Robots and language assistants are therefore nowhere near as smart as the hype would have us believe. Nevertheless, technology is changing our world ever faster: from Amazon’s Alexa eavesdropping at home and in the car, to algorithms writing articles for magazines, to messengers changing the way companies communicate internally.


Waiting for Skynet

— What will happen with AI in 2017, and what will not

Movies frequently depict AI as a hostile entity, often as a “killing machine”. The reason may be that the very idea of creating intelligence, a kind of “living being”, feels too arrogant: how dare mankind play the role of Creator? And what happens if the creation somehow overpowers its creators, like children turning on their parents?

If you are a firm believer in this idea, the current state of AI will disappoint you. AI algorithms can only learn narrow, specific skills, which they perform mechanically and without emotion, not at all like a human being. It will remain this way for the next few years, as AI will only be a tool for clearly defined problems such as image description or speech recognition. An algorithm that learns to solve a single problem from a training dataset is called a weak artificial intelligence. Weak AIs will continue to be the center of AI research, producing solutions that automate more and more tasks and thus save businesses money.

An artificial intelligence that can solve any problem, or even develop a personality of its own like a human being, is called a strong artificial intelligence. This will not happen for many years, as there is very little work being done in the area. Presumably, neuroscience would first need to clarify how human consciousness works in the first place. AI research can help here in the form of computational neuroscience, a small branch that tries to replicate the processes in the brain closely. Deep learning, by contrast, is only a very rough replication of the brain. In deep learning, computational efficiency matters more, so the ability of CPUs and GPUs to accelerate the computation plays a greater role. Neurons are arranged in layers so that their activations can be computed as matrix multiplications followed by an element-wise activation function. A network with only a few layers is simply a shallow neural network; a network with more than about four layers is what we call a deep network, and training it is deep learning.
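
To make that layer arithmetic concrete, here is a minimal sketch (plain NumPy, with made-up layer widths; nothing here comes from the article) of a forward pass: each layer is a matrix multiplication followed by an element-wise activation, exactly the kind of computation CPUs and GPUs accelerate well.

```python
import numpy as np

def relu(x):
    # element-wise activation function
    return np.maximum(x, 0.0)

def forward(x, layers):
    """Evaluate a stack of fully connected layers.

    Each layer is a (weight matrix, bias vector) pair; its activation
    is a matrix multiplication followed by an element-wise nonlinearity.
    """
    for W, b in layers:
        x = relu(x @ W + b)
    return x

rng = np.random.default_rng(0)
sizes = [784, 256, 64, 10]          # hypothetical layer widths
layers = [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

batch = rng.standard_normal((32, 784))   # a batch of 32 fake inputs
print(forward(batch, layers).shape)      # -> (32, 10)
```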

It will take a few decades until computers become smarter than humans, the so-called singularity. AI films usually only begin after this point. Nevertheless, researchers at OpenAI and at the Machine Intelligence Research Institute, financed by Silicon Valley billionaires, have already started thinking about how to develop AIs in a way that avoids creating “Skynet”.

Now, let’s take a look at the areas where AI technology will accelerate in 2017.

Medical Data (logic: 2 examples + 1 outlook & problem)

Tool-type AIs will be reinforced in 2017 by medical data. Google’s AI subsidiary DeepMind has been cooperating with British clinics and the National Health Service specifically to obtain large data sets. These can be used to train neural networks that recognize some disease symptoms better or faster than human physicians. DeepMind can, for example, identify age-related macular degeneration from just a few images of a patient’s eye. [https://deepmind.com/applied/deepmind-health]

For medical applications, the convolutional networks that were heavily hyped in 2016, in particular residual networks, a variant of highway networks, with which Microsoft won the ImageNet competition on image recognition, are also likely to be used. These systems rely on convolution, an efficiently computable mathematical operation, and thus need far fewer parameters than fully connected layers. Another strength is their enormous depth of over 100 layers, which raises the accuracy of the diagnosis; such depth is trainable because, at the beginning of training, activations can be copied almost unchanged from the bottom to the top of the network.
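
As a rough illustration of that skip-connection idea, and not the exact architecture Microsoft used, here is a minimal residual block sketched in PyTorch with invented channel sizes: the identity path adds the input to the block’s output, so activations can pass almost unchanged from lower to higher layers early in training.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic residual block: output = F(x) + x (skip connection)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # The identity path lets activations flow straight through,
        # which is what makes networks with 100+ layers trainable.
        return torch.relu(self.body(x) + x)

x = torch.randn(1, 64, 32, 32)         # fake feature map
print(ResidualBlock(64)(x).shape)      # -> torch.Size([1, 64, 32, 32])
```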

If the cooperation between AI researchers and hospitals comes to fruition, German patient data may also be analyzed with AI methods. But such data are difficult to anonymize in a way that prevents anyone from tracing them back to particular patients. This difficulty could delay applications, but it certainly will not stop them. How to keep health insurance companies away from these data is one of the problems at the forefront of applied AI research.

The logic of this section is as follows:
Topic sentence: AIs will be strengthened by medical data

  • Google DeepMind → recognize disease faster
  • Convolutional Networks → more neuron layers lead to ?
  • Cooperation between AI researchers and hospitals →
  • Keeping insurance companies away

As you can see, the four bullets above have essentially no connection with each other. They are not organized into an integrated story about AI on medical data.

Adversarial Networks

In basic theoretical research, Adversarial Networks will rise to become a significant trend. The concept pits two different neural networks against each other. A generator network produces synthetic output, such as an image, from abstract inputs such as a textual description and some random numbers. A second network, the discriminator, tries to distinguish the generator’s output from real images. How well the discriminator can tell generated data from real data serves as the training signal for the generator.
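
A minimal sketch of that generator-versus-discriminator setup, assuming PyTorch and using toy 2-D points instead of images (all sizes and the fake data distribution are invented): the discriminator is trained to separate real from generated samples, and its verdict is the training signal for the generator.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # generator: noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator: sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # toy "real" data: points drawn around (2, -1)
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(1000):
    # --- train the discriminator on real vs. generated samples ---
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- train the generator to fool the discriminator ---
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))   # generator wants the "real" verdict
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```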

This technology is becoming a trend because in December 2016 several works were presented at the Conference on Neural Information Processing Systems (NIPS), the world’s largest gathering of AI researchers, that used this idea to create photorealistic images. To tell real and generated images apart, some researchers used already trained image-recognition networks as the discriminator and trained the generator networks against them.

One advantage of this technique is that, since the generator operates on relatively arbitrary input data, it can also be trained to take the activations of any layer of an image-recognition network (the discriminator) as its input. In this way, it creates images that give us an impression of what the discriminator “sees” on certain neurons.

Visualizations

In general, there is a trend to no longer treat the thousands of parameters of a neural network as an unmanageable ocean of numbers or as a black box. Google’s DeepDream visualization program laid the foundation for this field. Just like the adversarial networks mentioned above, such visualization programs can construct a full image from only a few visible neuron activations. Based on the similarities between this synthetic output and the input pictures, people can draw astonishing conclusions about the function of individual neurons.

In other words, this visualization technique lets you see not only what a neural network sees during image recognition, but also what it cannot see. What happens inside deep networks may also be interesting beyond image recognition: perhaps the process of speech recognition will soon become audible to us, or that of text recognition readable.
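
One common recipe behind such visualizations is activation maximization: start from noise and adjust the input by gradient ascent until a chosen neuron or channel responds strongly. The sketch below assumes PyTorch with a recent torchvision and picks a layer and channel arbitrarily; it shows the spirit of DeepDream-style visualization, not its exact implementation.

```python
import torch
from torchvision import models

# assumes a recent torchvision; older versions use pretrained=True instead
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
target_layer = model.features[10]          # an arbitrary convolutional layer
activation = {}

def hook(module, inputs, output):
    activation["value"] = output
target_layer.register_forward_hook(hook)

# start from random noise and maximize the mean activation of one channel
img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    model(img)
    loss = -activation["value"][0, 7].mean()   # channel 7, chosen arbitrarily
    loss.backward()
    opt.step()
# `img` now shows the kind of pattern this channel responds to.
```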

Modeling the world

Humans can create a three-dimensional image of their environment in memory and permanently update it with new perceptions. AI methods, by contrast, do not yet construct their environment in this way. But they would need to match this human ability to enable applications such as autonomous driving, since cars must react immediately to events inside or even outside sensor range.

Currently, such systems consist of several networks that are not trained jointly, with shared parameters, the way other existing networks are. Ideally, such a network would learn by itself, from data, how to reconstruct its environment or even the whole world. Future AIs will learn this skill and thus no longer depend on a scheme clearly defined by us. With the RNN architecture “Long Short-Term Memory” (LSTM), neurons already have a short-term memory. Beyond the network structure, however, a mechanism for learning how to model the world is still missing.
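
To illustrate the short-term memory an LSTM already provides, here is a minimal sketch (PyTorch, with a toy sine-wave “world” standing in for real sensor data; all sizes invented) of a network trained to predict the next observation from the history it has seen. A genuine world model would have to extend this idea far beyond such toy signals.

```python
import torch
import torch.nn as nn

class NextStepPredictor(nn.Module):
    """Predict the next observation from the history seen so far."""
    def __init__(self, obs_dim=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, obs_dim)

    def forward(self, seq):
        out, _ = self.lstm(seq)        # out: (batch, time, hidden)
        return self.head(out)          # one prediction per time step

model = NextStepPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# toy "world": a sine wave the network must learn to continue
t = torch.linspace(0, 20, 200)
signal = torch.sin(t).reshape(1, -1, 1)

for _ in range(500):
    pred = model(signal[:, :-1])                  # predict step t+1 from steps <= t
    loss = nn.functional.mse_loss(pred, signal[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()
```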

A breakthrough in this technique in 2017 would help not only autopilots but also text analysis, machine translation and speech synthesis systems to raise their level of intelligence.

Reinforcement Learning

All the previous examples are classification problems, in which each training input comes with an expected output. Reinforcement learning (RL) does not have the luxury of such direct feedback on its learning. If presented with a problem, say a computer game, it learns only after many decisions and experiences whether its choices were right or wrong.

RL’s biggest problem is the long period during which it has to make decisions without a single training signal as a reward, akin to flying blind. Furthermore, it has to generate training signals by itself, based solely on its memory and a learned model. Humans behave similarly: we release happiness hormones not only when we receive a reward, but also when everything is going according to plan. The same holds for RL’s training signals: when it turns out that previous decisions were bad ones, the agent experiences something like frustration, just as a human would.
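
A minimal sketch of this delayed-feedback setting is tabular Q-learning on a toy corridor environment (everything below is invented for illustration): the reward appears only at the far end, so earlier decisions are credited indirectly as value estimates propagate backwards through the updates.

```python
import random

N = 10                               # corridor of 10 states; reward only at the right end
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action], actions: 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(2000):
    s = 0
    while s != N - 1:
        # epsilon-greedy action choice
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == N - 1 else 0.0      # reward is delayed until the goal
        # Q-learning update: credit earlier decisions via the bootstrapped value
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([round(max(q), 2) for q in Q])   # values rise toward the rewarding end
```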

Equipped with these learning skills, future RL systems would be able to tackle complex problems such as the computer game StarCraft II. Last year the game’s maker, Blizzard, created a suitable interface for AI players, which will be enabled once AIs achieve decent performance on the Korean Battle.net (its online gaming platform). However, this will still take years: even for turn-based Go, Google’s network could only just manage to systematically separate the sensible strategies from the nonsensical ones. In a more complicated setting, such as a real-time strategy game, the decision tree becomes almost arbitrarily large.

Motor planning

Until now, the infinitely many possible motion decisions have kept neural networks far away from robotics. At the NIPS conference, a representative of Boston Dynamics had to admit that their robots have not used any machine learning methods at all. Instead, their products move only according to rules programmed by humans.

The problem lies in motor planning: a robot should be able to predict the future and think about what will happen when it makes a movement. For this, neural networks still lack both a model of the world and the ability to assess a planned action.
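
To make those missing pieces concrete, here is a hedged sketch, not anyone’s actual robot system: a hypothetical learned forward model predicts the next state given the current state and a candidate movement, and “planning” then means scoring several candidate movements with that model and a cost function (all names, sizes and the random-candidate strategy are invented).

```python
import torch
import torch.nn as nn

# hypothetical learned forward model: (state, action) -> predicted next state
forward_model = nn.Sequential(nn.Linear(6 + 2, 64), nn.ReLU(), nn.Linear(64, 6))

def plan(state, goal, n_candidates=64):
    """Pick the candidate action whose predicted outcome is closest to the goal."""
    actions = torch.randn(n_candidates, 2)            # random candidate movements
    states = state.expand(n_candidates, -1)
    with torch.no_grad():
        predicted = forward_model(torch.cat([states, actions], dim=1))
        cost = ((predicted - goal) ** 2).sum(dim=1)   # assess each planned action
    return actions[cost.argmin()]

state = torch.zeros(1, 6)   # made-up robot state (e.g., joint angles and velocities)
goal = torch.ones(1, 6)     # made-up target state
print(plan(state, goal))
```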

This problem is a tough nut to crack. For this reason, the first robot that plans its movements via a neural network is still a few years away. Once that works, AI-powered robots could quickly enter factories and start working on production lines.

Whether these robots live up to their killing-machine roles from the movies will depend not only on their intelligence, but also on whether they want to. Hopefully, humanity will by then have a better understanding of why people tend towards violence. Because, after all, our creation will ultimately be a reflection of ourselves: the good and the bad.


Analyst: Hao Pang | Localized by Synced Global Team: Xiang Chen
