Use Case #2:
Data Selection for AI Training
The development of an accurate AI model requires a diverse training data set, rich in edge cases across a variety of driving scenarios and environmental conditions. Frame-by-frame data labeling is costly and error-prone, so most development teams try to filter the data before it is sent out for annotation. This filtering is often a crude sampling approach that can miss edge cases and depress Average Precision (AP). As a result, additional training data is often needed late in the development cycle, leading to delays and cost overruns.
The Ottometric platform helps optimize the training process by: 1) providing timely feedback to the drive team on the effectiveness of the collected data; and 2) automatically distilling and curating the data into a diverse training data set that is 100x smaller than the raw data yet at least as effective as the complete data set. The benefits:
• Better training accuracy (AP)
• Dramatically reduced labeling costs
• Much faster training runtimes
• Reduced processing and storage costs
• Shorter time to market
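Ottometric's curation pipeline is proprietary, but one common approach to diversity-driven distillation of this kind is greedy k-center (farthest-point) sampling over per-frame embeddings: repeatedly pick the frame farthest from everything already selected, so rare edge cases are kept rather than averaged away. The sketch below is an illustrative assumption, not the platform's actual algorithm; the embedding source and subset size are hypothetical.

```python
import numpy as np

def select_diverse_subset(embeddings: np.ndarray, k: int) -> list[int]:
    """Greedy farthest-point sampling (k-center heuristic).

    Picks k frame indices whose embeddings cover the data set as
    broadly as possible, favoring novel/rare frames over duplicates.
    """
    selected = [0]  # seed with an arbitrary first frame
    # Distance of every frame to its nearest already-selected frame.
    dists = np.linalg.norm(embeddings - embeddings[0], axis=1)
    while len(selected) < k:
        idx = int(np.argmax(dists))  # farthest frame = most novel
        selected.append(idx)
        new_d = np.linalg.norm(embeddings - embeddings[idx], axis=1)
        dists = np.minimum(dists, new_d)  # update nearest-selected distances
    return selected

# Example: keep 1% of 10,000 simulated frame embeddings.
rng = np.random.default_rng(0)
frames = rng.normal(size=(10_000, 64))
subset = select_diverse_subset(frames, k=100)
```

Selecting by maximum distance rather than uniform random sampling is what preserves long-tail driving scenarios in the reduced set; a random 1% sample would, in expectation, discard most of them.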