Landing AI ‘Social Distancing Detector’ Monitors Workplaces
Silicon Valley-based Landing AI introduced a new AI-enabled social distancing detection tool designed to help monitor and enforce physical distancing protocols in workplaces.
Stephen Wolfram announced this week that he may have found a path that leads to a fundamental theory of physics, and that it is “beautiful.”
Anyone can simply upload a selfie to the ‘Selfie 2 Waifu’ website to create their own AI-generated waifu-style anime character in seconds.
Just as biologists gain insights into organisms by putting model specimens under their microscopes, researchers can use AI Microscope to analyze the features that form inside leading CV models.
In a bid to generate high-resolution images showing realistic daytime changes while keeping accurate scene semantics, researchers have proposed a novel image-to-image translation model, HiDT (High Resolution Daytime Translation).
Researchers have introduced Active Neural SLAM, a modular and hierarchical approach to learning policies for exploring 3D environments.
A team of researchers from NVIDIA and Heidelberg University recently introduced an open-source self-supervised learning technique for viewpoint estimation of general objects that draws on freely available Internet images.
Researchers from Virginia Tech, National Tsing Hua University and Facebook have introduced a game-changing algorithm that generates impressive 3D photos from a single RGB-D (colour and depth) image.
Synced has identified some interesting AI-powered virtual humans to introduce to our readers.
In a new study, researchers use a physics simulator to learn to predict physical forces in videos of humans interacting with objects.
Deep Fashion3D contains 2,078 3D garment models reconstructed from real-world garments in 10 different clothing categories.
The new benchmark for wide-baseline image matching includes a 30k-image dataset with depth maps and accurate pose information.
Researchers recently developed and open-sourced COVID-Net, a convolutional neural network for detecting COVID-19 through chest radiography.
Their proposed framework outperforms state-of-the-art approaches for 3D reconstruction from 2D and 2.5D data, achieving on average 12 percent better performance on the ShapeNet benchmark dataset and up to 19 percent better for certain classes of objects.
Researchers from the University of Chicago’s Oriental Institute (OI) and Department of Computer Science have introduced an artificial intelligence tool called DeepScribe designed to read cuneiform tablets from 25 centuries ago.
A research team from MIT, Adobe Research, and Shanghai Jiao Tong University has introduced a novel method for reducing the cost and size of Conditional GAN generators.
Researchers from Google Brain Tokyo and Google Japan have proposed a novel approach that helps guide reinforcement learning (RL) agents to what’s important in vision-based tasks.
Researchers investigate how different ImageNet models affect transfer accuracy on domain adaptation problems.
The Association for Computing Machinery (ACM) this morning announced Patrick M. (Pat) Hanrahan and Edwin E. (Ed) Catmull as its 2019 Turing Award winners.
A research team has proposed non-contrast thoracic CT scans as an effective tool for detecting, quantifying, and tracking COVID-19.
A new study suggests that VSR models could perform even better if they used additional available visual information.
The earliest evidence of China’s recorded history is found in the Shang dynasty (~1600 to 1046 BC).
The model outperforms existing methods in image manipulation and offers researchers a possible solution to the scarcity of paired datasets.
UC Berkeley and Adobe Research have introduced a “universal” detector that can distinguish real images from generated images regardless of the architectures or datasets used to train the generators.
Proposed by researchers from Rutgers University and the Samsung AI Center in the UK, CookGAN uses an attention-based ingredients-image association model to condition a generative neural network tasked with synthesizing meal images.
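As a rough illustration of that conditioning idea, the sketch below attention-pools ingredient embeddings into a single context vector that could feed a conditional generator. All names and dimensions here are illustrative assumptions, not the authors’ architecture.

```python
import torch
import torch.nn as nn

class IngredientAttention(nn.Module):
    """Attention-pools ingredient embeddings into one conditioning vector."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)  # learns how much each ingredient matters

    def forward(self, ingredients):
        # ingredients: (B, K, D) embeddings of K ingredient tokens per recipe
        weights = torch.softmax(self.score(ingredients), dim=1)  # (B, K, 1)
        return (weights * ingredients).sum(dim=1)                # (B, D) context
```

Concatenated with a noise vector, such a context could then condition an otherwise standard generator, e.g. `img = G(torch.cat([z, ctx], dim=1))`.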
The KaoKore dataset includes 5,552 RGB image files drawn from the 2018 Collection of Facial Expressions dataset of cropped face images from Japanese artworks.
The crowdsourcing produced 111.25 hours of video from 54 non-expert demonstrators to build “one of the largest, richest, and most diverse robot manipulation datasets ever collected using human creativity and dexterity.”
Fast and accurate diagnosis is critical on the front line, and now an AI-powered diagnostic assessment system is helping Hubei medical teams do just that.
The proposed system is capable of searching the continental United States at 1-meter pixel resolution, corresponding to approximately 2 billion images, in around 0.1 seconds.
Researchers have introduced MonoLayout, a practical deep neural architecture that takes just a single image of a road scene as input and outputs an amodal scene layout in bird’s-eye view.
In a bid to raise awareness of the threats posed by climate change, the Mila team recently published a paper that uses GANs to generate images of how climate events may impact our environments — with a particular focus on floods.
Joseph Redmon, creator of the popular object detection algorithm YOLO, tweeted last week that he had ceased his computer vision research to avoid enabling potential misuse of the tech.
Researchers from Italy’s University of Pisa present a clear and engaging tutorial on the main concepts and building blocks involved in neural architectures for graphs.
Researchers have proposed a novel generator network specialized for illustrations in children’s books.
Researchers have proposed a simple but powerful “SimCLR” framework for contrastive learning of visual representations.
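For readers who want a concrete feel for the contrastive objective involved, here is a minimal PyTorch sketch of an NT-Xent-style loss of the kind SimCLR is built around; the function name and temperature value are illustrative, not lifted from the paper’s code.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit length
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # a view is not its own positive
    n = z1.size(0)
    # Row i's positive is the other view of the same image, n rows away.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In training, z1 and z2 come from pushing two random augmentations of each image through the encoder and projection head; the loss pulls the two views of an image together and pushes every other image in the batch away.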
The tool enables researchers to try, compare, and evaluate models to decide which work best on their datasets or for their research purposes.
Google teamed up with researchers from Synthesis AI and Columbia University to introduce a deep learning approach called ClearGrasp as a first step to teaching machines how to “see” transparent materials.
Researchers from Google Brain and Carnegie Mellon University have released models trained with a semi-supervised learning method called “Noisy Student” that achieve 88.4 percent top-1 accuracy on ImageNet.
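The self-training recipe behind these numbers is easy to state as a loop. The condensed sketch below is a hypothetical rendering of that iteration, not Google’s code; build_model, train, and augment stand in for the real training machinery.

```python
def noisy_student(labeled, unlabeled, rounds=3):
    # Hypothetical helpers: build_model/train/augment are stand-ins.
    teacher = train(build_model(), labeled)              # ordinary supervised step
    for _ in range(rounds):
        # Teacher pseudo-labels the unlabeled pool; inference is done without noise.
        pseudo = [(x, teacher.predict(x)) for x in unlabeled]
        # Student is equal-sized or larger and is trained *with* noise:
        # input augmentation here, plus dropout/stochastic depth inside the model.
        student = train(build_model(larger=True),
                        [(augment(x), y) for x, y in labeled + pseudo])
        teacher = student                                # student becomes the next teacher
    return teacher
```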
Researchers introduced semantic region-adaptive normalization (SEAN), a simple but effective building block for conditional Generative Adversarial Networks (cGANs).
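To make the idea of region-adaptive normalization concrete, here is a rough PyTorch sketch in its spirit: activations are normalized, then modulated per pixel by scale and shift maps computed from a segmentation mask and per-region style codes. Layer shapes and names are assumptions for illustration, not the authors’ implementation.

```python
import torch
import torch.nn as nn

class RegionAdaptiveNorm(nn.Module):
    def __init__(self, channels, num_regions, style_dim=64):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        # Map the (style map + one-hot mask) condition to per-pixel gamma and beta.
        self.to_gamma = nn.Conv2d(style_dim + num_regions, channels, 3, padding=1)
        self.to_beta = nn.Conv2d(style_dim + num_regions, channels, 3, padding=1)

    def forward(self, x, mask, styles):
        # x: (B, C, H, W); mask: (B, R, H, W) one-hot regions; styles: (B, R, style_dim)
        # Paint each region's style code onto its own pixels -> (B, style_dim, H, W).
        style_map = torch.einsum("brhw,brd->bdhw", mask, styles)
        cond = torch.cat([style_map, mask], dim=1)
        return self.norm(x) * (1 + self.to_gamma(cond)) + self.to_beta(cond)
```

Because each semantic region carries its own style code, a block like this lets a generator restyle, say, only the hair or only the sky while the mask keeps everything else in place.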
In a bid to simplify 3D deep learning and improve processing performance and efficiency, Facebook recently introduced an open-source framework for 3D computer vision.