Following the success of the Nexar Challenge I, we are happy to announce the second Nexar deep learning challenge and to release NEXET, the largest, most diverse annotated dataset for road-understanding research.
NEXET, the Nexar dataset, is a massive set of 50,000 images from all over the world with bounding box annotations of the rear of vehicles, collected across a variety of locations and in varied lighting and weather conditions. We are releasing this dataset to you, our challengers, to empower you to build a truly smart collision prevention system that works extremely well anywhere and at any time.
So, are you aching to prove that you’re the next rising star of road understanding? Want to test your deep learning mettle and win big prizes? Do you have what it takes to prevent car collisions and save the day? Then start your engines, and read on for details.
The robustness of learned end-to-end driving policy models depends on access to the largest possible training dataset, one exposing the true diversity of the 10 trillion miles that humans drive globally. Current approaches are limited to models trained on homogeneous data from a small number of vehicles running in controlled environments or in simulation, and these fail to perform adequately in dangerous real-world corner cases. Safe driving requires continuously resolving a long tail of such corner cases. The only possible way to learn a robust driving policy model is therefore to continuously capture as many of these cases as possible.
At Nexar, we are building an Advanced Driver Assistance System (ADAS) based on a monocular camera stream from regular consumer dashcams mounted on cars all across the planet. These cameras are continuously taking images of the world’s roads in all weather conditions, lighting conditions, and driving scenarios.
In this challenge, your task is to build a rear vehicle detector that computes bounding boxes around each clearly visible vehicle in front. The detector should look for vehicles ahead of the camera that are driving in the same direction. The purpose of this task is to improve the Forward Vehicle Collision Warning feature, which requires a very accurate bounding box around the rear side of the vehicle(s) ahead.
Evaluation & Submission:
The test set was produced as follows:
- A set of 41,190 annotated images was extracted from the same distribution as the training set
- An expert reviewed the annotations and made sure all bounding boxes are tightly and accurately positioned
You should run your model on the test set and produce a CSV file listing all bounding boxes detected at a very low probability threshold. For an example, see the dt.csv file in the data directory of the challenge’s GitHub repository.
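As a rough sketch, producing the submission file might look like the following. The exact column layout is defined by the dt.csv example in the challenge repository; the field names and ordering used here are assumptions for illustration only.

```python
import csv

# Hypothetical detection records. The real schema comes from dt.csv in the
# challenge repo; (image_filename, x0, y0, x1, y1, label, confidence) is an
# assumed layout for this sketch.
detections = [
    ("frame_0001.jpg", 120.0, 80.0, 340.0, 260.0, "car", 0.92),
    # Keep low-confidence boxes too: the evaluation expects detections
    # reported at a very low probability threshold.
    ("frame_0001.jpg", 400.0, 95.0, 520.0, 210.0, "van", 0.07),
]

with open("dt.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image_filename", "x0", "y0", "x1", "y1", "label", "confidence"])
    for row in detections:
        writer.writerow(row)
```

Reporting boxes down to a low threshold matters because average precision integrates over the full precision/recall curve; dropping low-confidence detections truncates the curve and can only lower your score.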
Upload your CSV using the submit button, along with your model, to get your result.
The submitted CSV file will be compared against our ground-truth CSV, and an average precision score will be computed by the program eval_challenege.py in the evaluation directory of the challenge’s GitHub repository. Submissions are ranked in decreasing order of average precision.
The NEXET dataset
The dataset comprises 50,000 training images and 5,000 test images.
The training images were collected by randomly sampling Nexar’s database of images, all of which were taken by drivers using the Nexar dashcam. The dataset was filtered to balance images taken during the day (~50%) and at night (~46%), with a small share taken in twilight lighting conditions (~4%).
To support model-adaptation research, we split the images geographically as follows:
- 10K images taken in greater New York City
- 10K images taken in the San Francisco Bay Area
- 10K images taken in greater Tel Aviv
- 20K images taken from the rest of the world, spanning 77 countries
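Since the dataset ships with this geographic split, one natural experiment for model-adaptation research is to hold out an entire region and measure how a detector trained on the other regions transfers to it. The sketch below assumes each image record carries a `region` field; the actual NEXET metadata schema may differ, and the record format here is hypothetical.

```python
from collections import Counter

# Hypothetical per-image metadata records; the real NEXET annotation
# format may expose regions differently.
images = [
    {"file": "nyc_001.jpg", "region": "nyc"},
    {"file": "nyc_002.jpg", "region": "nyc"},
    {"file": "sf_001.jpg", "region": "sf"},
    {"file": "tlv_001.jpg", "region": "tlv"},
    {"file": "row_001.jpg", "region": "rest_of_world"},
]

# Hold out one region entirely to probe cross-city generalization,
# training on the remaining three splits.
holdout_region = "tlv"
train = [im for im in images if im["region"] != holdout_region]
held_out = [im for im in images if im["region"] == holdout_region]

print(Counter(im["region"] for im in train))
```

Evaluating the same model separately on each held-out city gives a quick read on how much the detector overfits to one region’s road furniture, signage, and vehicle mix.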
In addition to the training images, the NEXET dataset includes 5,000 test images drawn from the same distribution as the training images. These images were manually annotated with an estimated 96% accuracy, which should be sufficient for training.
We created bounding boxes for 5 vehicle categories: car, van, pickup-truck, truck, and bus.
In this challenge, we will calculate scores based only on bounding box detection, regardless of class label. We will use the Average Precision (AP) metric with an IoU threshold of 0.75 to score the submissions.
Using this information, we can compute the Average Precision (AP) score as described in the ILSVRC paper (algorithm 2). This measure is similar to the PASCAL VOC measure used from 2010 onwards. Code for computing the AP score will be released by the submission server open date (see the Timeline section).
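To make the 0.75 threshold concrete, here is a minimal IoU (intersection-over-union) computation for axis-aligned boxes given as (x0, y0, x1, y1). This is a standalone sketch for intuition, not the official scoring code, which the challenge will release separately.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x0, y0, x1, y1)."""
    ix0 = max(box_a[0], box_b[0])
    iy0 = max(box_a[1], box_b[1])
    ix1 = min(box_a[2], box_b[2])
    iy1 = min(box_a[3], box_b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection only counts as a true positive if it overlaps a ground-truth
# box with IoU >= 0.75 (the challenge threshold).
IOU_THRESHOLD = 0.75
gt = (0, 0, 100, 100)
tight = (5, 5, 100, 100)    # near-perfect localization: IoU ~0.90, passes
loose = (0, 0, 100, 160)    # sloppy box: IoU ~0.63, fails the 0.75 test
print(iou(gt, tight), iou(gt, loose))
```

An IoU threshold of 0.75 is noticeably stricter than the classic PASCAL VOC 0.5 threshold, which reflects the task: Forward Vehicle Collision Warning needs tight boxes around the rear of the vehicle ahead, not just rough localization.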
Rules:
- One account per participant
- No private sharing of code or data
- A submission will be considered ineligible if it was developed using code that contains or depends on software not approved by the Open Source Initiative, or software under a license that prohibits commercial use
- Participants must submit runnable code, including documentation and the resources/dependencies required to train and test the model, with reproducible results.
- No hand-labelling of test dataset allowed
- Maximum of three submissions per account. If a group participates in the challenge through multiple accounts, we expect only one group member to submit results on behalf of the group.
Timeline:
- Submission Server Opens (test set available to download): 28 Aug 2017
- Submission Deadline: 02 Oct 2017
How to get started:
- Apply for the challenge. Join the challenge team by logging in with your GitHub account (you will need 2FA activated) and download the dataset.
- Join the challenge Slack team and join the challenge-two channel (invitation will be sent to your email account after applying to the challenge).
- Begin building and testing your deep network using one of the popular deep-learning frameworks: Caffe, TensorFlow, Theano, Torch, or MXNet.
- Submit your results through the submission system with a CSV results file, a trained model, and the code for training and testing the model (in either a public or a private GitHub repository).
- Each submission must include runnable code, including documentation and the resources/dependencies required to train and test the model, with results reproducible against the submitted CSV.