
UC Berkeley Open-Sources 100k Driving Video Database

UC Berkeley's Artificial Intelligence Research Lab (BAIR) has open-sourced its newest driving database, BDD100K, which contains over 100k videos of driving experience, each running 40 seconds at 30 frames per second.

BDD100K's total image count is 800 times larger than Baidu's ApolloScape (released this March), 4,800 times larger than Mapillary's, and 8,000 times larger than KITTI's.
The videos were collected from some 50k trips on the streets and highways of New York, the San Francisco Bay Area, and other regions, and come with GPS/IMU information tracing the driving paths. They were recorded at different times of day and in various weather conditions, including sunny, overcast, and rainy.

Prior to releasing the database, BAIR researchers worked on an annotation tool to speed up labeling bounding boxes, semantic segmentation, and lanes in the driving database. The tool can be accessed via a web browser. For box annotation, the team trained a Fast-RCNN object detection model to learn from 55k labeled video clips. The model will work alongside human annotators and save 60 percent of the time required for drawing and adjusting bounding boxes.

BAIR has applied the tool to BDD100K, extracting selected video frames and annotating them for image tagging, road object bounding boxes, drivable areas, lane markings, and full-frame instance segmentation.

The database contains almost one million cars, more than 300k street signs, and 130k pedestrians, among other objects. BDD100K will be especially suitable for training computer vision models to detect and avoid pedestrians on the street, as it contains more people than other datasets. CityPersons, a dataset specialized for pedestrian detection, has only about one-quarter the people per image that BDD100K does.

Annotated images also distinguish two types of lane markings: lanes vertical to the driving direction are marked in red, and lanes parallel to it in blue. Drivable areas are likewise divided into the red-marked directly drivable path and blue-marked alternative drivable paths.
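For readers who want to experiment with annotations like these, the sketch below shows how per-frame labels of this kind might be filtered by category. The field names (`labels`, `category`, `box2d`) are assumptions modeled on BDD100K-style JSON labels; consult the official BDD100K toolkit for the exact schema.

```python
import json

# Hypothetical sample frame, shaped like a BDD100K-style JSON label record.
# Field names are illustrative assumptions, not the verified official schema.
sample = json.loads("""
{
  "name": "example.jpg",
  "labels": [
    {"category": "person", "box2d": {"x1": 10.0,  "y1": 20.0, "x2": 50.0,  "y2": 120.0}},
    {"category": "car",    "box2d": {"x1": 60.0,  "y1": 30.0, "x2": 200.0, "y2": 110.0}},
    {"category": "person", "box2d": {"x1": 210.0, "y1": 25.0, "x2": 240.0, "y2": 115.0}}
  ]
}
""")

def boxes_for(frame, category):
    """Collect the 2D bounding boxes for one object category in a frame."""
    return [lbl["box2d"] for lbl in frame["labels"]
            if lbl.get("category") == category and "box2d" in lbl]

pedestrians = boxes_for(sample, "person")
print(len(pedestrians))  # prints 2: two pedestrian boxes in this sample frame
```

The same filtering pattern would apply to any of the annotation types mentioned above (signs, lanes, drivable areas), keyed by their respective category names.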

Marking lanes (top) and drivable areas (bottom)

The BDD100K database is backed by Berkeley DeepDrive (BDD) Industry Consortium, which studies computer vision and ML applications for vehicles. BDD aligns UC Berkeley with top-tier companies such as Ford, Nvidia, Qualcomm, GM, and Baidu. Back in March, Baidu released its ApolloScape driving dataset — at that time the largest yet — as part of the BDD project.


Journalist: Meghan Han | Editor: Michael Sarazen

 
