The Mapillary Vistas Dataset comprises 25,000 images, manually annotated with 100 object classes, captured across a wide range of locations (spanning six continents) and conditions (time of day, weather, season). It provides training data for autonomous driving, so that the algorithms used by self-driving cars can learn to recognize the objects that occur in traffic, such as vehicles, pedestrians, lane markings, and traffic lights.
AID’s mission is to enable mobility providers and car OEMs to drive fully autonomously in urban environments and beyond. AID uses the Vistas Dataset as a high-quality, versatile resource for improving its deep neural networks. Because the dataset contains images from all over the world, taken with different kinds of cameras, it is well suited to improving the robustness of AID’s models and to verifying their performance in potential markets worldwide. A clear benefit of the Mapillary Vistas Dataset, according to AID, is that its annotations are extraordinarily detailed, which makes it easy to map them to AID’s internal datasets.
Via: Mapillary Blog