Accessing Your Training Graph
- Related: Choosing Between Computer Vision Model Sizes, which includes insights on the FAST model vs. the ACCURATE model.
Models trained with Roboflow Train give you access to a custom Model-Assisted Labeling checkpoint that automates labeling on your projects, lets you test your model in the Deploy Tab, and works with our simplified deployment solutions.
After your model training completes, select "Details" in the UI while viewing the trained dataset version.
The training graph for version 7 (v7, Roboflow-FAST-model) of the Face Detection dataset on the Featured Section of Roboflow Universe looks like this:
How to Read the Training Graph:
- **Y-axis**: a percentage (%) value, in decimal form.
- **X-axis**: the epoch number for the data point.
  - An epoch is one complete pass of all of your training data through your model architecture's network.
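To make the epoch/iteration distinction concrete, here is a small sketch: one epoch is a full pass over the training set, so the number of optimizer steps per epoch is the image count divided by the batch size, rounded up. The image count and batch size below are hypothetical, for illustration only.

```python
import math

def iterations_per_epoch(num_images: int, batch_size: int) -> int:
    """One epoch = one full pass over the training set, so the number of
    training steps per epoch is the image count divided by the batch size,
    rounded up."""
    return math.ceil(num_images / batch_size)

# e.g. a 1,000-image training split with a batch size of 16
steps = iterations_per_epoch(1000, 16)  # 63 steps per epoch
```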
- **Box loss**
  - A loss metric, based on a specific loss function, that measures how "tight" the predicted bounding boxes are to the ground-truth objects (the labels on your dataset's images).
  - A lower value indicates your model is generalizing better and drawing tighter bounding boxes around the objects your dataset is labeled to identify.
  - Tip: label with tight bounding boxes to get the most accurate readings possible. See the Labeling Guide: Object Detection.
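Modern detectors typically use IoU-based box losses, but the idea of "tightness" can be illustrated with a much simpler sketch: a toy loss that averages the absolute difference between predicted and ground-truth box corners. This is not the loss Roboflow Train uses; it only shows that a prediction matching the label drives the loss toward zero.

```python
def box_loss_l1(pred, truth):
    """Toy box-regression loss: mean absolute difference between the
    predicted and ground-truth corners (x1, y1, x2, y2). Production
    detectors typically use IoU-based losses instead; this sketch only
    shows that tighter predictions give a smaller loss."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / 4

perfect = box_loss_l1((10, 10, 50, 50), (10, 10, 50, 50))  # 0.0 (a "tight" box)
loose = box_loss_l1((5, 5, 60, 60), (10, 10, 50, 50))      # 7.5 (a loose box)
```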
- **Classification loss**
  - A loss metric, based on a specific loss function, that measures how correctly each predicted bounding box is classified. Each predicted bounding box may contain an object class or a "background" label (no object).
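Classification loss is commonly computed as cross-entropy: the negative log of the probability the model assigned to the correct class, with one slot reserved for "background." The class names and probabilities below are made up for illustration.

```python
import math

def classification_loss(probs, true_index):
    """Cross-entropy for one predicted box: the negative log of the
    probability assigned to the correct class. A confident, correct
    prediction gives a loss near 0; an uncertain one gives a larger loss."""
    return -math.log(probs[true_index])

# hypothetical scores over [person, car, background] for one predicted box
confident = classification_loss([0.9, 0.05, 0.05], 0)   # ~0.105
uncertain = classification_loss([0.34, 0.33, 0.33], 0)  # ~1.079
```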
- **mAP (mean Average Precision)**
  - To calculate mAP for object detection, you calculate the average precision (AP) for each class in your data based on the model's predictions. Average precision is related to the area under the precision-recall curve (AUC) for a given class in your dataset. The mean of these per-class average precisions gives you the mean Average Precision, or mAP.
  - Note: mAP is influenced by the Intersection over Union (IoU) of ground-truth labels and predicted bounding boxes. IoU is measured as the area of overlap between a predicted bounding box and its ground-truth box, divided by the area of the union of the two boxes.
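The IoU definition above translates directly into code. A minimal sketch for axis-aligned boxes in `(x1, y1, x2, y2)` form:

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2):
    the overlap area divided by the area of the union of the two boxes."""
    # corners of the intersection rectangle (empty if the boxes don't overlap)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

score = iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / 175, about 0.143
```

A prediction that exactly matches its label scores 1.0; boxes with no overlap score 0.0.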
- **mAP@0.5** (mean Average Precision at an IoU threshold of 0.50, or 50%)
  - The mean Average Precision (mAP) with a prediction counted as a "detected object" when its Intersection over Union (IoU) with the ground truth is greater than 0.5, or 50%.
- **mAP@0.5:0.95** (mean Average Precision over an IoU interval of 0.50 to 0.95, or 50% to 95%)
  - The mean Average Precision (mAP) averaged over IoU thresholds from 0.50 to 0.95, so predictions are evaluated as "detected objects" at progressively stricter overlap requirements.
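The averaging behind mAP@0.5:0.95 can be sketched in a few lines. This assumes the common COCO-style convention of IoU thresholds from 0.50 to 0.95 in steps of 0.05; the per-threshold scores below are hypothetical.

```python
def map_50_95(map_at_iou):
    """mAP@0.5:0.95: average the per-threshold mAP values over IoU
    thresholds 0.50, 0.55, ..., 0.95 (COCO-style convention).
    `map_at_iou` maps each IoU threshold to a mAP score."""
    thresholds = [round(0.50 + 0.05 * i, 2) for i in range(10)]
    return sum(map_at_iou[t] for t in thresholds) / len(thresholds)

# hypothetical scores: mAP drops as the IoU requirement gets stricter
scores = {round(0.50 + 0.05 * i, 2): 0.90 - 0.05 * i for i in range(10)}
overall = map_50_95(scores)  # ~0.675, lower than the 0.90 seen at IoU 0.50
```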
- **Precision**
  - A measure of how precise a model is at prediction time: true positives divided by all predicted positives (true positives plus false positives).
- **Recall**
  - A measure of whether the model finds enough of the objects it should: true positives divided by all possible positives (true positives plus false negatives).
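The two definitions above reduce to simple ratios over detection counts. The counts in the example are made up for illustration.

```python
def precision(tp, fp):
    """Of everything the model predicted, how much was correct?"""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of everything the model should have found, how much did it find?"""
    return tp / (tp + fn)

# e.g. 8 correct detections, 2 false alarms, 4 missed objects
p = precision(8, 2)  # 0.8
r = recall(8, 4)     # ~0.667
```

A model can score high precision by predicting only when very confident (missing objects), or high recall by predicting everywhere (raising false alarms), which is why both are plotted.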
- **train**: the metrics for your training set (split).
- **valid**: the metrics for your validation set (split).
If you aren't satisfied with your results, you can take advantage of Roboflow's dataset management system and code integrations, such as the Python package and the Upload API, to add more images to your project and improve your results on the next training attempt, and when the model is next tested and deployed.