
YOLOv8 Inference and Deployment

Last updated September 4, 2024

After training a YOLOv8 model on your custom dataset, you're ready to put it to work for real-time object detection. This guide walks you through the steps of performing inference with your trained model and deploying it for various applications.

Performing Inference with YOLOv8

  • Loading the Model: Use the Ultralytics library to load a YOLOv8 model: `from ultralytics import YOLO` followed by `model = YOLO('yolov8n.pt')`. Replace 'yolov8n.pt' with your trained model's weights file. (A complete code sketch follows this list.)
  • Loading Weights: If you have saved the weights of your trained model, pass their path directly to the constructor: `model = YOLO('path/to/your/weights.pt')`. There is no need to call `load_state_dict` manually; the Ultralytics loader handles this.
  • Processing Input: Prepare your input data, which can be images or videos. The model accepts a variety of sources, including file paths, URLs, NumPy arrays, and PIL images.
  • Running Inference: Perform inference on your input data using the loaded model: `results = model(image)` or `results = model(video)`.
  • Interpreting Results: Extract the detection results from the returned results object. This typically includes bounding box coordinates, class labels, and confidence scores for detected objects.
  • Visualizing Detections: Visualize the detected objects on the input images or videos using the plotting helpers in the Ultralytics library, such as `results[0].plot()`.
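
Putting these steps together, here is a minimal sketch using the Ultralytics Python API. The weights and image paths are placeholders for your own files:

```python
from ultralytics import YOLO
import cv2

# Load your trained model (placeholder path to the best.pt from your training run).
model = YOLO('path/to/your/weights.pt')

# Run inference on a single image; file paths, URLs, NumPy arrays,
# and PIL images all work as sources.
results = model('path/to/image.jpg')

# Each Results object holds the detections for one image.
for result in results:
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box corners
        conf = float(box.conf[0])              # confidence score
        cls = int(box.cls[0])                  # class index
        print(f'{model.names[cls]}: {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})')

# plot() returns a BGR NumPy array with the detections drawn on it.
annotated = results[0].plot()
cv2.imwrite('annotated.jpg', annotated)
```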

Deploying YOLOv8 Models

  • Web Applications: Deploy your YOLOv8 model as a web service using frameworks like Flask or Django. Integrate the model into a web application to enable real-time object detection on uploaded images or live video streams; a minimal Flask sketch follows this list.
  • Mobile Applications: Export your YOLOv8 model to formats like TensorFlow Lite or CoreML to optimize it for mobile devices. This allows you to create mobile applications with object detection functionality (see the export sketch below).
  • Cloud Platforms: Deploy your model on cloud platforms like AWS, GCP, or Azure. This can efficiently handle large-scale inference tasks and provide access to powerful hardware resources.
  • Edge Devices: For scenarios requiring low latency and offline operation, consider deploying your model on edge devices like Raspberry Pi or NVIDIA Jetson (the export sketch below also covers ONNX, a common format for such targets).
  • Custom Applications: Tailor the deployment process to suit your specific application needs, integrating the YOLOv8 model into existing systems or developing custom interfaces for visualization and analysis.
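
For the web-service route, here is a minimal Flask sketch. The endpoint name, form field, and weights path are illustrative choices, not fixed conventions:

```python
from flask import Flask, request, jsonify
from ultralytics import YOLO
from PIL import Image

app = Flask(__name__)
model = YOLO('path/to/your/weights.pt')  # placeholder path to your trained weights

@app.route('/detect', methods=['POST'])
def detect():
    # Expect an image file uploaded under the form field 'image'.
    image = Image.open(request.files['image'].stream)
    results = model(image)
    detections = [
        {
            'box': box.xyxy[0].tolist(),
            'confidence': float(box.conf[0]),
            'label': model.names[int(box.cls[0])],
        }
        for box in results[0].boxes
    ]
    return jsonify({'detections': detections})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

You can exercise the endpoint with, for example, `curl -X POST -F image=@test.jpg http://localhost:5000/detect`.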
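
For the mobile and edge targets, the Ultralytics export API converts your weights into the required formats. A short sketch, again with a placeholder weights path:

```python
from ultralytics import YOLO

model = YOLO('path/to/your/weights.pt')  # placeholder path to your trained weights

# Export to TensorFlow Lite for Android and embedded targets.
model.export(format='tflite')

# Export to CoreML for iOS and macOS.
model.export(format='coreml')

# ONNX is a common interchange format for cloud and edge runtimes.
model.export(format='onnx')
```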