
Image Segmentation using YOLOv8

Last updated September 4, 2024

Image Segmentation with YOLOv8: Delve into Pixel-Level Object Recognition

YOLOv8, the cutting-edge object detection framework, extends its capabilities to image segmentation, enabling you to identify and isolate objects within images at the pixel level. This guide explores the process of performing image segmentation using YOLOv8, guiding you through dataset preparation, model training, and result visualization.

Training a YOLOv8 Model for Image Segmentation

  • Dataset Preparation: Gather a dataset of images containing the objects you want to segment. Annotate them with a polygon-capable tool such as LabelMe, CVAT, or Roboflow, creating masks that trace each object's boundary, then export the annotations to the Ultralytics YOLO segmentation label format (see the label-format sketch after this list).
  • Model Selection: Select the YOLOv8 segmentation variant that fits your speed/accuracy budget:
  • YOLOv8n-seg: The nano model, smallest and fastest.
  • YOLOv8s-seg: A small model, balancing speed and accuracy.
  • YOLOv8m-seg: A medium model with higher accuracy.
  • YOLOv8l-seg: A large model, emphasizing accuracy.
  • YOLOv8x-seg: The largest model, aiming for the highest accuracy.
  • Training Configuration: Describe your dataset in a YAML file (paths to the train/val image folders plus the class names) and choose training arguments such as batch size, epochs, image size, learning rate, and augmentation settings.
  • Training Execution: Use the Ultralytics command-line interface to start training: `yolo segment train data=your_data.yaml model=yolov8n-seg.pt epochs=100 imgsz=640` (a Python training sketch follows this list).
  • Evaluation: Evaluate the trained model on the validation split using metrics such as mask mAP50-95 and inference speed.
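
Ultralytics YOLO segmentation labels are plain-text files, one per image, where each line is a class index followed by the polygon's coordinates normalized to [0, 1]. The sketch below, with hypothetical file names and class mapping, converts a pixel-space polygon into that format:

```python
# Convert a pixel-space polygon annotation into the Ultralytics YOLO
# segmentation label format: "class_id x1 y1 x2 y2 ..." with all
# coordinates normalized to [0, 1]. File names and the class index
# used here are hypothetical examples.

def polygon_to_yolo_seg(class_id, polygon, img_w, img_h):
    """Return one label line for a polygon given as [(x, y), ...] in pixels."""
    coords = []
    for x, y in polygon:
        coords.append(f"{x / img_w:.6f}")
        coords.append(f"{y / img_h:.6f}")
    return f"{class_id} " + " ".join(coords)

if __name__ == "__main__":
    # Example: a triangular mask for class 0 in a 640x480 image.
    line = polygon_to_yolo_seg(0, [(100, 50), (300, 60), (200, 400)], 640, 480)
    with open("image_0001.txt", "w") as f:  # label file matching image_0001.jpg
        f.write(line + "\n")
    print(line)
```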
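Training can also be launched from Python with the Ultralytics API. The sketch below assumes a dataset YAML named `your_data.yaml` that points at your images and labels; the hyperparameter values are illustrative, not recommendations:

```python
from ultralytics import YOLO

# Start from pretrained segmentation weights (downloaded automatically).
model = YOLO("yolov8n-seg.pt")

# Train on a custom dataset described by your_data.yaml (paths + class names).
# epochs, imgsz, and batch are illustrative values.
model.train(data="your_data.yaml", epochs=100, imgsz=640, batch=16)

# Evaluate on the validation split; segmentation runs report box and mask mAP.
metrics = model.val()
print(metrics.seg.map)  # mask mAP50-95
```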

Performing Inference with a Trained YOLOv8 Segmentation Model

  • Model Loading: Load your trained YOLOv8 segmentation model with the Ultralytics library: `from ultralytics import YOLO; model = YOLO('path/to/best.pt')`.
  • Input Processing: Prepare your inputs; the Ultralytics API accepts image file paths, NumPy arrays, PIL images, and video or stream sources.
  • Inference Execution: Run inference on your input data: `results = model(image)`.
  • Result Interpretation: Extract the segmentation masks (`results[0].masks`), bounding boxes (`results[0].boxes`), class labels, and confidence scores from the returned `results` list.
  • Visualization: Overlay the predicted masks on the input image with the Ultralytics plotting helper, e.g. `results[0].plot()` (a combined inference sketch follows this list).
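
Putting these steps together, a minimal inference sketch looks like this; the weights path and image name are placeholders for your own files:

```python
from ultralytics import YOLO
import cv2

# Load the trained segmentation weights (path is a placeholder).
model = YOLO("runs/segment/train/weights/best.pt")

# Run inference on a single image; a list of Results objects is returned.
results = model("your_image.jpg")

result = results[0]
if result.masks is not None:
    print(f"{len(result.masks)} instance masks found")
    print(result.boxes.cls.tolist())   # class indices per instance
    print(result.boxes.conf.tolist())  # confidence scores per instance

# Overlay masks, boxes, and labels on the input image and save it.
annotated = result.plot()
cv2.imwrite("segmented.jpg", annotated)
```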

Applications of YOLOv8 Image Segmentation

  • Medical Imaging: Segmenting organs, tumors, and other structures in medical images for analysis and diagnosis.
  • Autonomous Driving: Segmenting road lanes, traffic signs, and pedestrians for safe and efficient vehicle navigation.
  • Robotics: Segmenting objects of interest in robotic environments for manipulation, navigation, and object recognition.
  • Industrial Inspection: Analyzing images to identify defects, measure dimensions, and monitor production processes.