
2020 in Review With Tatsuya Nagata

Synced has invited Mr. Tatsuya Nagata, the COO of Navier Inc., to share his insights about the current development and future trends of artificial intelligence.

In 2020, Synced covered many memorable moments in the AI community, such as the situation of women in AI, the birth of GPT-3, AI's fight against COVID-19, heated debates around AI bias, MT-DNN surpassing human baselines on GLUE, and AlphaFold cracking a 50-year-old biology challenge. To close the chapter on 2020 and look forward to 2021, we are introducing a year-end special issue, following Synced's tradition of looking back at current AI achievements and exploring possible future trends with leading AI experts. Here, we invite Mr. Tatsuya Nagata to share his insights on the current development and future trends of artificial intelligence.


Meet Tatsuya Nagata

Tatsuya Nagata is the COO of Navier Inc., a Japanese AI startup focused on deep learning-based image processing. Navier offers its cutting-edge image processing technologies, including super-resolution and photo enhancement, to a variety of industries such as e-commerce, mobile phone makers, and security companies. He also acts as an AI advisor for JETRO (Japan External Trade Organization), providing insights on the Japanese market for international AI startups. Prior to his current roles, he invested in seed- and early-stage AI startups internationally at a venture capital arm of SoftBank Group.

The Best AI Technology Developed in the Past 3 to 5 Years: “GANs”

I would choose GANs as the best AI technology of the last few years, in terms of expanding computers' capability to rival human imagination. Although built on preexisting work, the family of GANs has succeeded in creating something essentially new, to the extent that people acknowledge computer programs have gained a certain kind of creativity. Among GANs, more specifically, I would say CycleGAN is one of the most impactful technologies. Its applications span a variety of use cases: the algorithm can create a new symphony in the style of Beethoven, or convert an MRI image of a male brain into an equivalent image of a female brain.

CycleGAN is the first network of its kind to perform domain transfer even with unpaired datasets: for instance, when the network learns to translate Van Gogh paintings into Rembrandt's style, it does not require pairs of the same scene rendered in both styles. This characteristic allows machine learning practitioners to apply GANs to real-world use cases without much hassle in data gathering and cleansing.
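The mechanism that makes unpaired training possible is CycleGAN's cycle-consistency loss: translating an image to the other domain and back should reconstruct the original, so no aligned pairs are needed. Below is a minimal PyTorch-style sketch of that term; the generators G and F_net are toy stand-ins for illustration, not code from the paper.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(G, F_net, real_x, real_y, lam=10.0):
    """Cycle-consistency term of the CycleGAN objective.

    G maps domain X -> Y and F_net maps Y -> X. Translating to the
    other domain and back should reconstruct the input, which is what
    lets training proceed without paired examples.
    """
    l1 = nn.L1Loss()
    forward_cycle = l1(F_net(G(real_x)), real_x)   # x -> G(x) -> F(G(x)) ~ x
    backward_cycle = l1(G(F_net(real_y)), real_y)  # y -> F(y) -> G(F(y)) ~ y
    return lam * (forward_cycle + backward_cycle)  # lam = 10.0 in the CycleGAN paper

# Toy stand-ins for the two generators, just to make the sketch runnable.
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)
F_net = nn.Conv2d(3, 3, kernel_size=3, padding=1)
x = torch.randn(1, 3, 64, 64)
y = torch.randn(1, 3, 64, 64)
print(cycle_consistency_loss(G, F_net, x, y))
```

In the full objective this term is added to the adversarial losses of the two generators; it is the piece that replaces the supervision a paired dataset would otherwise provide.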

The Most Promising AI Technology in the Next 1 to 3 Years: “Optimization and Performance Improvement at the Hardware Level”

A series of technologies aimed at optimization and performance improvement at the hardware level will progress. This trend will further accelerate on-device processing speed, and deep learning will permeate everyday life even more. On the device side, chips more compatible with AI are coming. In November 2020, Apple officially announced its first Mac-specific processor, the M1, which includes a 16-core Neural Engine dedicated to machine learning, while Qualcomm launched its new Snapdragon 888 SoC to improve AI performance on mobile. On the algorithm side, research on compression and optimization of neural networks, such as neural architecture search (NAS), will contribute to further improvement of AI processing speed without sacrificing performance. Outcomes from both sides will enable machine learning tasks to run on devices conventionally deemed to have limited computing resources. For instance, deep learning-based super-resolution, the field of AI on which our company Navier is focused, requires high computing power due to its complexity, but smartphones released in the next few years will probably be able to handle the task at practical speed and quality. Consequently, the benefits derived from AI/ML will become more visible to customers.
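As one concrete example of the compression side of this trend, the sketch below applies PyTorch's post-training dynamic quantization to a toy model, converting its weights from float32 to int8 to shrink the model and typically speed up CPU inference. The architecture here is a placeholder for illustration, not Navier's or any vendor's pipeline.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for a trained model.
model = nn.Sequential(
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Dynamic quantization rewrites the Linear layers to store int8 weights,
# reducing model size and often improving on-device inference speed.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
print(quantized(x).shape)  # torch.Size([1, 10])
```

Techniques like this, combined with AI-dedicated silicon such as the M1's Neural Engine, are what bring heavy workloads like super-resolution within reach of phones and laptops.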

The Biggest Challenge in the Field of AI: “Inherently Limited Scalability of AI-Based Solutions Hinders the Ubiquity of AI”

The inherently limited scalability of AI-based solutions hinders the ubiquity of AI. The problem stems primarily from the trade-off between a neural network's generalization and its performance. If you train a network to maximize its generalization, you need to sacrifice its performance to some extent; if you optimize your network for a customer's specific situation, the network might lose the ability to serve different customers. This inherent characteristic of deep learning makes it harder to develop one-size-fits-all products, the source of scalability. From my experience as a venture capitalist, I would say that a majority of early-stage AI startups fail to serve as many customers as they could because of this limitation, even though they have the vision and passion to change the world. As a result, many new and wild ideas cannot be tested at a sufficiently large scale, and many unmet needs are left unaddressed. In this way, the delayed democratization of AI causes a huge loss to society. As a potential solution, I expect future research on training methods such as one-shot learning and self-supervised learning to help AI practitioners navigate the trade-off and make their ideas deliverable.

The Latest Noteworthy Development: “Super-Resolution”

Our company is focused on super-resolution (SR). As its latest progress, we set the current state of the art in our paper, “Unpaired Image Super-Resolution using Pseudo-Supervision”, presented at CVPR 2020. Conventional training methods for SR networks require paired datasets, which consist of high-resolution images and low-resolution versions of the same images. Recent methods enable training neural networks on unpaired images. Although their average performance is generally lower than that of networks trained on paired datasets, paired datasets are scarcely available in real-world settings. Our proposed method overcomes the drawbacks of previous unpaired SR models and improves perceptual quality.
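To make the paired/unpaired distinction concrete: conventional paired SR training usually manufactures its low-resolution inputs by downscaling the high-resolution targets, a synthetic degradation that real low-resolution images rarely follow. A minimal sketch of that conventional pairing follows (the file path and scale factor are hypothetical); unpaired methods like the one above exist precisely because real data does not come this way.

```python
from PIL import Image

def synthetic_pair(hr_path, scale=4):
    """Build a conventional paired SR training example by bicubic
    downscaling of a high-resolution image.

    Real-world low-resolution images rarely match this synthetic
    degradation, which is why unpaired training matters in practice.
    """
    hr = Image.open(hr_path).convert("RGB")
    w, h = hr.size
    lr = hr.resize((w // scale, h // scale), Image.BICUBIC)
    return lr, hr

# Hypothetical usage:
# lr, hr = synthetic_pair("photo.png", scale=4)
```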

In recent years, SR research has often emphasized the perceptual quality of output images; however, performance improvement based on realistically available datasets is also crucial from the viewpoint of practicality. I expect research motivated by practical needs to form a trend. While sensational outcomes often draw public attention, those technologies usually presuppose massive training datasets and unlimited computing resources, and would take years to be applied to real-world challenges. Rather than eye-catching outputs, what is awaited is research that addresses the limitations of real-world application. This is common to other fields of AI technology as well.



