To promote fairer ML systems, the team introduced a collaborative causal theory formation process that incorporates diverse stakeholder perspectives to discover and account for societal context.
Alibaba’s cloud computing division Alibaba Cloud announced today that its AI-powered “ET City Brain” will be deployed in the Malaysian capital Kuala Lumpur in partnership with the Malaysia Digital Economy Corporation (MDEC), the country’s digital economy development agency, and Kuala Lumpur City Hall (DBKL).
The maritime transportation industry has been facing an employment shortage. Unmanned operations could solve this problem and reduce high manpower costs. Compared with autonomous driving on land, there are fewer obstacles at sea, where travel routes are more open.
Yuanqing Lin’s AI startup Aibee.ai (or AI2B, "AI to business") has landed US $25 million in angel-round funding. The company, founded by the former Director of Baidu’s Institute of Deep Learning (IDL), provides general AI solutions such as computer vision, image recognition, and speech recognition for companies in education, finance, retail, real estate, and other sectors.
CES exhibitors promoted everything from AI-powered vehicles to smart refrigerators and robot puppies. But the tech driving these diverse products was the same: the virtual assistant, a conversational AI that can understand human speech and generate humanlike responses.
To boost learning research aimed at endowing robots with better generalization capabilities, Yi Wu from UC Berkeley and Yuxin Wu, Georgia Gkioxari, and Yuandong Tian from Facebook AI Research recently published the paper "Building Generalizable Agents with a Realistic and Rich 3D Environment."
Myntra Applies AI to Clothing Design; San Jose Negotiates With Partners to Deploy Autonomous Vehicle Transit; Can Google Street View Images See the Future?; Ctrip and Baidu Launch AI Pocket Chinese-English Translator; Canada To Use AI to Track Suicide Trends on Social Media
To combine the advantages of these two methods, the authors first adapt the multi-source NMT model by employing a separate encoder for each source input to capture its semantics; the decoder then generates the final output from the multiple context vector representations produced by the encoders.
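The multi-encoder arrangement described above can be sketched roughly as follows. This is a toy illustration in plain Python, not the paper's actual model: the hash-style "encoder," the 4-dimensional vectors, and averaging as the combination method are all assumptions for illustration (the paper could equally concatenate the contexts or attend over them).

```python
# Toy sketch of a multi-source encoder-decoder: each "encoder" maps one
# source input to a fixed context vector; the decoder conditions on all
# context vectors merged together. Real NMT systems use learned RNN or
# Transformer encoders; character sums stand in for learned representations.

def encode(sentence, dim=4):
    """Hypothetical encoder: deterministic pseudo-embedding of a sentence."""
    vec = [0.0] * dim
    for i, ch in enumerate(sentence):
        vec[i % dim] += ord(ch) / 1000.0
    return vec

def combine_contexts(contexts):
    """Average the per-source context vectors (one simple merge strategy;
    concatenation or attention over sources are common alternatives)."""
    dim = len(contexts[0])
    return [sum(c[j] for c in contexts) / len(contexts) for j in range(dim)]

def decode(context):
    """Hypothetical decoder stub: returns the merged context it would
    condition on when generating the target sentence."""
    return context

# Two source inputs (e.g. the same sentence in two source languages)
sources = ["guten morgen", "bonjour"]
contexts = [encode(s) for s in sources]   # one encoder per source
combined = combine_contexts(contexts)     # multiple context vectors merged
conditioning = decode(combined)           # decoder generates from the merge
print(len(conditioning))
```

The point of the sketch is the data flow: each source gets its own encoder, and the decoder never sees the raw sources, only the merged context representations.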