NumPy is the foundation upon which the scientific Python ecosystem is constructed.
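That foundational role comes from the `ndarray` type, the shared container that downstream libraries such as SciPy, pandas, and scikit-learn build on. A minimal sketch of the vectorized array operations that make this possible:

```python
import numpy as np

# Build a 2-D array and apply elementwise arithmetic --
# the work happens in compiled code, with no Python loop.
a = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
b = a * 2.0 + 1.0                # broadcast scalar over every element

print(b.sum())  # 36.0
```

The same `ndarray` object can be passed unchanged between ecosystem libraries, which is what makes NumPy the common denominator.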
As robots take over industrial manufacturing, precise and accurate robot control is becoming more important. Conventional feedback control methods can effectively solve various types of robot control problems by capturing system structure with explicit models such as equations of motion.
Facebook AI Research has announced it is open-sourcing PyTorch-BigGraph (PBG), a tool that can easily process and produce graph embeddings for extremely large graphs. PBG can also produce multi-relation graph embeddings even when the model is too large to fit in memory.
A year ago, Shenzhen-based self-driving start-up Roadstar.ai was cruising along promisingly. In May 2018 the company announced a US$128 million funding round led by Wu Capital and state-backed Shenzhen Capital Group — one of the largest autonomous driving investments ever in China.
Now, China’s elite Central Conservatory of Music (CCOM) has announced it is recruiting PhDs for a new Music AI and Information Technology program. CCOM says prospective students should have a background in Computer Science, AI, or Information Technology, along with musical abilities (instrument playing or singing).
DeepMind trained and tested its neural model by first collecting a dataset consisting of different types of mathematics problems. Rather than crowd-sourcing, they synthesized the dataset to generate a larger number of training examples, control the difficulty level and reduce training time.
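A programmatic generator is what makes that control possible: operand size (or expression depth) becomes a difficulty knob, and examples are free to produce. The sketch below is purely illustrative, not DeepMind's actual pipeline; the `generate_addition_problem` helper and its question format are assumptions for the example.

```python
import random

def generate_addition_problem(max_digits, rng):
    """Synthesize one question/answer pair.

    max_digits controls difficulty: larger operands make harder problems.
    """
    a = rng.randint(0, 10 ** max_digits - 1)
    b = rng.randint(0, 10 ** max_digits - 1)
    return {"question": f"What is {a} + {b}?", "answer": str(a + b)}

rng = random.Random(0)
# Two difficulty tiers of the same task, generated instantly --
# no crowd-sourcing required, and labels are correct by construction.
easy = [generate_addition_problem(2, rng) for _ in range(1000)]
hard = [generate_addition_problem(6, rng) for _ in range(1000)]
```

Because answers are computed rather than annotated, the dataset can be regenerated at any size or difficulty without labeling cost.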
It’s a fanciful little one-piece in shimmering green and aquamarine with bold fuchsia shoulder accents — perfect for a night out on the town. Is this a new dress from a Milan or Tokyo collection? Nope, it was designed by an AI-powered machine, and produced by a couple of MIT graduates.
Today is April Fool’s Day, and despite Microsoft’s efforts to ban such pranks, many tech companies could not resist joining in the centuries-old spoofing tradition. However, given the already incredible achievements of cutting-edge AI technologies, some of today’s hoaxes actually look pretty convincing.
Chinese AI company iFLYTEK has bested the SQuAD2.0 challenge once again. The model “BERT + DAE + AoA” submitted by the joint iFLYTEK Research and HIT (Harbin Institute of Technology) laboratory HFL outperformed humans on both EM (exact match) and F1 (fuzzy match) metrics to top the SQuAD2.0 leaderboard.
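The two SQuAD metrics can be sketched in a few lines: EM asks whether the normalized prediction equals the reference exactly, while F1 measures token overlap. Note this is a simplified sketch — the official SQuAD evaluation script also strips the articles "a", "an", and "the" during normalization, which is omitted here.

```python
import re
from collections import Counter

def normalize(text):
    """Lowercase, replace punctuation with spaces, collapse whitespace."""
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return " ".join(text.split())

def exact_match(prediction, reference):
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    """Token-level F1: harmonic mean of precision and recall."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Eiffel Tower!", "eiffel tower"))   # 1.0
print(f1_score("The Eiffel Tower", "eiffel tower"))   # ≈ 0.8
```

F1 rewards partial credit — in the second call, two of three predicted tokens match the reference, giving precision 2/3 and recall 1.0.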
In a scene that looks like it’s from a sci-fi movie, a YouTube video posted today by robotics company Boston Dynamics shows a huge, ostrich-like robot “Handle” whirling round while deftly moving boxes in a warehouse. The video has garnered over 138,000 views in less than four hours.
Andrew Brock, first author of the high-profile research paper Large Scale GAN Training for High Fidelity Natural Image Synthesis (aka “BigGAN”), has posted a GitHub repository of an unofficial PyTorch BigGAN implementation that requires only 4-8 GPUs to train the model.
In imperfect-information environments, the asynchronous neural fictitious self-play (ANFSP) method allows AI agents to learn optimal decisions across multiple parallel virtual environments. The approach has performed well in Texas Hold’em and multiplayer FPS video games.
NVIDIA CEO and Co-Founder Jensen Huang says a rumored next-generation GPU architecture is not a priority for the company, and that he remains optimistic about clearing the chip inventory built up for cryptocurrency mining. Huang made the remarks in a press conference Tuesday at the GPU Technology Conference (GTC) in Santa Clara.
It is no secret that deep neural networks (DNNs) can achieve state-of-the-art performance in a wide range of complicated tasks. DNN models such as BigGAN, BERT, and GPT-2 have proved the high potential of deep learning. Deploying DNNs on mobile devices, consumer devices, drones and vehicles, however, remains a bottleneck for researchers.
DeepMind’s Research Platform Team has open-sourced TF-Replicator, a framework that enables researchers without prior distributed-systems experience to deploy their TensorFlow models on GPUs and Cloud TPUs. The move aims to strengthen AI research and development.
NVIDIA’s annual GPU Technology Conference (GTC) attracted some 9,000 developers, buyers and innovators to San Jose, California this week. CEO and Co-Founder Jensen Huang’s two-and-a-half-hour keynote speech showcased GPU-based innovations in domains ranging from graphic design to autonomous driving.