
Comments (8)

github-actions commented on July 19, 2024

πŸ‘‹ Hello @akaomerr, thank you for your interest in Ultralytics YOLOv8 πŸš€! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a πŸ› Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

from ultralytics.

glenn-jocher commented on July 19, 2024

Hello!

Thank you for reaching out and providing detailed information about your issue. It's great to hear that your YOLOv8m model performed well during testing! Let's work together to ensure it performs just as well after converting to CoreML.

Potential Issues and Solutions

  1. Model Export Parameters:
    When exporting to CoreML, parameters such as nms (Non-Maximum Suppression) can significantly affect results. Setting nms=True bakes NMS into the exported model, which is what most downstream apps expect, so make sure the export flags match how the model will be consumed.

  2. Image Preprocessing:
    Differences in image preprocessing between the training environment and the CoreML environment can cause discrepancies. Make sure that the image preprocessing steps (like resizing, normalization) are consistent.

  3. Model Quantization:
    If you are using quantization (e.g., half=True or int8=True), this can sometimes lead to a drop in performance. Try exporting without quantization to see if it improves performance.

  4. CoreML Model Configuration:
    Ensure that the CoreML model configuration matches the expected input and output formats. Sometimes, mismatches can lead to poor performance.
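Point 2 above (preprocessing consistency) is where mismatches most often creep in. As a rough illustration of what consistent preprocessing means, here is a minimal NumPy sketch of YOLO-style letterbox resizing (aspect-ratio-preserving resize plus padding); the nearest-neighbour resize and the gray pad value of 114 are simplifications for illustration, not the exact Ultralytics implementation:

```python
import numpy as np

def letterbox(img: np.ndarray, new_shape=(640, 640), pad_value=114):
    """Resize an HWC uint8 image to new_shape, preserving aspect ratio
    and padding the remainder with pad_value (YOLO-style letterboxing)."""
    h, w = img.shape[:2]
    r = min(new_shape[0] / h, new_shape[1] / w)
    nh, nw = round(h * r), round(w * r)
    # Nearest-neighbour resize with plain NumPy indexing (no cv2/PIL needed)
    rows = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    # Center the resized image on a pad_value canvas
    out = np.full((*new_shape, img.shape[2]), pad_value, dtype=img.dtype)
    top = (new_shape[0] - nh) // 2
    left = (new_shape[1] - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out

src = np.zeros((720, 1280, 3), dtype=np.uint8)  # dummy 720x1280 image
dst = letterbox(src)
print(dst.shape)  # → (640, 640, 3)
```

If the app resizes by stretching while training used letterboxing (or vice versa), boxes will land in the wrong places even though both pipelines "resize to 640x640".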

Suggested Steps

Here’s a refined version of your export code with some additional parameters that might help:

from ultralytics import YOLO

# Load the trained model
model = YOLO("best.pt")

# Export the model to CoreML format with additional parameters
model.export(format="coreml", nms=True, half=False, int8=False)

Testing the CoreML Model

After exporting, you can test the CoreML model using the following code snippet to ensure it works as expected:

import coremltools as ct
from PIL import Image

# Load the exported CoreML model
coreml_model = ct.models.MLModel("best.mlpackage")

# Load and resize an image; CoreML image inputs expect a PIL Image,
# not a raw numpy array
image = Image.open("path/to/test/image.jpg").resize((640, 640))

# Make a prediction (the input name may differ; check the model's
# input description if "image" is not accepted)
results = coreml_model.predict({"image": image})

# Inspect the results
print(results)

Additional Resources

For more detailed guidance on exporting and deploying YOLOv8 models to CoreML, you can refer to the Ultralytics documentation.

If you continue to experience issues, please feel free to share more details or any error messages you encounter. The YOLO community and the Ultralytics team are here to help!

Best of luck with your project! πŸš€

Warm regards,


akaomerr commented on July 19, 2024

Thank you for your advice, I'll try it.


glenn-jocher commented on July 19, 2024

You're welcome! 😊 I'm glad to hear that you found the advice helpful. If you run into any more issues or have further questions while trying out the suggested steps, don't hesitate to reach out. The YOLO community and the Ultralytics team are always here to support you.

Best of luck with your CoreML model conversion! πŸš€

Warm regards,


akaomerr commented on July 19, 2024


Hello again,
I trained my model with yolov8m;

from ultralytics import YOLO
model = YOLO("yolov8m.pt")
result = model.train(data="/content/Objects-4/data.yaml", epochs=250, batch=32)

I tested the trained results with;

model = YOLO('best.pt')  
results = model(['street2.jpg'])

and got very successful outputs. Then, I converted this trained model to CoreML using;

from ultralytics import YOLO
model = YOLO('best.pt')
model.export(format='coreml', nms=True)

On a MacBook Air with an M1 chip, I tested the CoreML model with the following code and got the same results as with best.pt;

coreml_model = YOLO("best.mlpackage")
results = coreml_model("example.jpg")

However, the problem is that when I used my model in an iOS-based application, I got very poor results. When I tested it by adding photos inside the CoreML file, I got irrelevant outputs. What do you think could be the reason for this? I would appreciate your help to find a solution. Thank you in advance.


glenn-jocher commented on July 19, 2024

Hello again,

Thank you for providing the detailed code and context. It's great to hear that your model performed well during initial testing! Let's dive into the potential reasons for the discrepancy you're experiencing when using the model in your iOS application.

Potential Issues and Solutions

  1. Image Preprocessing Consistency:
    Ensure that the image preprocessing steps in your iOS application match those used during training and testing on your MacBook. Differences in resizing, normalization, or color channel ordering can lead to poor performance.

    Example:

    // Example Swift sketch for preprocessing in iOS
    // (resize(to:) and normalized() are placeholder helpers you would
    // implement yourself; UIKit does not provide them out of the box)
    let image = UIImage(named: "example.jpg")
    let resizedImage = image?.resize(to: CGSize(width: 640, height: 640))
    let normalizedImage = resizedImage?.normalized()
  2. Model Input Configuration:
    Verify that the input configuration of the CoreML model in your iOS app matches the expected input format. This includes ensuring the correct image size, color channels, and data type.

  3. CoreML Model Configuration:
    Double-check the CoreML model configuration to ensure that it includes Non-Maximum Suppression (NMS) if required. This can be crucial for accurate object detection.

  4. Testing with CoreML Tools:
    Use CoreML tools to validate the model's performance on your MacBook before deploying it to iOS. This can help identify any issues specific to the iOS environment.

    Example:

    import coremltools as ct
    from PIL import Image

    # Load the CoreML model
    coreml_model = ct.models.MLModel("best.mlpackage")

    # Load and resize an image; CoreML image inputs take a PIL Image
    # directly, not a numpy array
    image = Image.open("example.jpg").resize((640, 640))

    # Make a prediction (check the model's input description if "image"
    # is not the input name)
    results = coreml_model.predict({"image": image})

    # Inspect the results
    print(results)
  5. Debugging in iOS:
    Add logging to your iOS application to capture the intermediate steps and outputs. This can help identify where the discrepancy occurs.

    Example:

    // Example Swift code for logging in iOS
    print("Preprocessed image: \(normalizedImage)")
    print("Model prediction: \(results)")

Next Steps

  1. Verify Preprocessing: Ensure that the preprocessing steps in your iOS app are identical to those used during training and testing on your MacBook.
  2. Check Model Configuration: Confirm that the CoreML model configuration, including NMS, is correctly set up.
  3. Use CoreML Tools: Validate the model's performance using CoreML tools on your MacBook.
  4. Add Logging: Implement logging in your iOS app to capture intermediate steps and outputs for debugging.
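Beyond logging raw outputs, it helps to quantify how far the CoreML detections drift from the best.pt detections on the same image. A minimal sketch, assuming boxes in (x1, y1, x2, y2) pixel format; the sample coordinates below are invented for illustration:

```python
import numpy as np

def box_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

# Hypothetical matched detections from the two backends
pt_box = np.array([100, 100, 200, 200], dtype=float)  # from best.pt
cm_box = np.array([110, 105, 205, 195], dtype=float)  # from best.mlpackage
print(f"IoU: {box_iou(pt_box, cm_box):.3f}")  # → IoU: 0.775
```

IoU near 1.0 for matched detections suggests the two backends agree and the problem lies in app-side preprocessing; a very low IoU points at the export itself.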

If the issue persists, please share any error messages or logs from your iOS application. This will help us provide more targeted assistance.

Best of luck with your project! πŸš€

Warm regards,


akaomerr commented on July 19, 2024

After long research, I understood my problem: I needed to adjust the image dimensions while converting to CoreML. When I exported at 720x1280, I achieved the best performance. Thank you for your help.

from ultralytics import YOLO
model = YOLO("best.pt")
model.export(format='coreml', nms=True, imgsz=[720, 1280])

from ultralytics.

glenn-jocher commented on July 19, 2024

@akaomerr hello!

Thank you for sharing your findings and the solution you discovered. It's fantastic to hear that adjusting the image dimensions to 720x1280 during the CoreML conversion improved your model's performance! πŸŽ‰

For anyone else encountering similar issues, here's a quick recap of the solution:

Solution Recap:

Adjusting the image dimensions during the CoreML export can significantly impact the model's performance. In this case, converting the images to 720x1280 yielded the best results.

Example Code:

Here's the code snippet you used for a successful conversion:

from ultralytics import YOLO

# Load the trained model
model = YOLO("best.pt")

# Export the model to CoreML format with adjusted image dimensions
model.export(format='coreml', nms=True, imgsz=[720, 1280])

Additional Tips:

  1. Consistent Preprocessing: Ensure that the preprocessing steps in your iOS application match those used during training and testing. This includes resizing, normalization, and any other transformations.
  2. Model Configuration: Double-check the CoreML model configuration to ensure it matches the expected input format and includes necessary settings like Non-Maximum Suppression (NMS).

If you have any more questions or need further assistance, feel free to reach out. The YOLO community and the Ultralytics team are always here to help!

Best of luck with your project! πŸš€

Warm regards,

