# YOLOv5 v7.0

## New Segmentation Checkpoints
We trained YOLOv5 segmentation models on COCO for 300 epochs at image size 640 using A100 GPUs. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests, and ran all speed tests on Google Colab Pro notebooks for easy reproducibility.
| Model | size (pixels) | mAP box 50-95 | mAP mask 50-95 | Train time 300 epochs A100 (hours) | Speed ONNX CPU (ms) | Speed TRT A100 (ms) | params (M) | FLOPs @640 (B) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv5n-seg | 640 | 27.6 | 23.4 | 80:17 | 62.7 | 1.2 | 2.0 | 7.1 |
| YOLOv5s-seg | 640 | 37.6 | 31.7 | 88:16 | 173.3 | 1.4 | 7.6 | 26.4 |
| YOLOv5m-seg | 640 | 45.0 | 37.1 | 108:36 | 427.0 | 2.2 | 22.0 | 70.8 |
| YOLOv5l-seg | 640 | 49.0 | 39.9 | 66:43 (2x) | 857.4 | 2.9 | 47.9 | 147.7 |
| YOLOv5x-seg | 640 | 50.7 | 41.4 | 62:56 (3x) | 1579.2 | 4.5 | 88.8 | 265.7 |
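The accuracy/latency tradeoff in the table can be explored programmatically. A minimal sketch, with mask mAP and TensorRT A100 latency copied from the table above (the model names here assume the standard YOLOv5 n/s/m/l/x segmentation ladder):

```python
# Checkpoint stats from the table: (name, mAP mask 50-95, TRT A100 latency ms).
# Model names assume the standard YOLOv5 n/s/m/l/x-seg ladder.
CHECKPOINTS = [
    ("yolov5n-seg", 23.4, 1.2),
    ("yolov5s-seg", 31.7, 1.4),
    ("yolov5m-seg", 37.1, 2.2),
    ("yolov5l-seg", 39.9, 2.9),
    ("yolov5x-seg", 41.4, 4.5),
]

def fastest_above(min_map_mask: float):
    """Return the fastest checkpoint whose mask mAP meets the threshold."""
    candidates = [(ms, name) for name, m, ms in CHECKPOINTS if m >= min_map_mask]
    return min(candidates)[1] if candidates else None

print(fastest_above(35.0))  # yolov5m-seg
```

This is the usual selection rule in practice: pick the smallest model that clears your accuracy floor, since latency and parameter count grow steeply up the ladder.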
- All checkpoints are trained to 300 epochs with the SGD optimizer (`lr0=0.01`, `weight_decay=5e-5`) at image size 640 and all default settings. Runs are logged at https://wandb.ai/glenn-jocher/YOLOv5_v70_official.
- Accuracy values are for single-model single-scale on the COCO dataset. Reproduce by:
  ```shell
  python segment/val.py --data coco.yaml --weights yolov5s-seg.pt
  ```
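The `lr0=0.01` and `weight_decay=5e-5` settings above correspond to SGD with L2 weight decay folded into the gradient. A minimal sketch of a single update step, omitting the momentum and learning-rate schedule the actual training uses:

```python
def sgd_step(w, grad, lr=0.01, weight_decay=5e-5):
    """One plain SGD update with L2 weight decay folded into the gradient:
    w <- w - lr * (grad + weight_decay * w)."""
    return [wi - lr * (gi + weight_decay * wi) for wi, gi in zip(w, grad)]

# One step on a toy two-parameter "model".
w = sgd_step([1.0, -2.0], [0.5, 0.1])
```

At `weight_decay=5e-5` the decay term is tiny relative to a typical gradient; it acts as gentle regularization pulling weights toward zero over 300 epochs rather than dominating any single step.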
- Speed averaged over 100 inference images using a Colab Pro A100 High-RAM instance. Values indicate inference speed only (NMS adds about 1 ms per image). Reproduce by:
  ```shell
  python segment/val.py --data coco.yaml --weights yolov5s-seg.pt --batch 1
  ```
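Since the table reports inference-only latency and NMS adds roughly 1 ms per image, an approximate end-to-end throughput is easy to back out. A back-of-envelope sketch, not a measured benchmark:

```python
def end_to_end_fps(inference_ms: float, nms_ms: float = 1.0) -> float:
    """Approximate batch-1 throughput (images/s) from inference + NMS latency."""
    return 1000.0 / (inference_ms + nms_ms)

# Using the 1.4 ms TensorRT A100 entry from the table:
print(round(end_to_end_fps(1.4)))  # 417
```

Note that for the fastest checkpoints the fixed ~1 ms NMS cost is comparable to the inference time itself, so it roughly halves the throughput implied by the raw latency column.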
- Export to ONNX at FP32 and TensorRT at FP16 done with `export.py`. Reproduce by:
  ```shell
  python export.py --weights yolov5s-seg.pt --include engine --device 0 --half
  ```
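The `--half` flag exports at FP16, which halves memory and bandwidth relative to FP32 at the cost of precision (roughly 3 decimal digits of significand versus 7). A quick stdlib-only illustration of the rounding involved, round-tripping a value through IEEE 754 half precision:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision (format 'e')."""
    return struct.unpack("e", struct.pack("e", x))[0]

x = 0.1234567
y = to_fp16(x)
print(abs(y - x) < 1e-3)  # True: rounding error is small
```

This order of error is generally negligible for inference, which is why FP16 TensorRT engines deliver the table's GPU speeds with essentially the same mAP as FP32.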