
Comments (4)

glenn-jocher commented on July 24, 2024

@lbeaucourt hi there,

Thank you for providing a detailed report and the minimal reproducible example. This is very helpful! 😊

The error you're encountering, ValueError: Default process group has not been initialized, please make sure to call init_process_group, typically arises when the distributed training setup is not properly initialized.

Here are a few steps to help troubleshoot and resolve this issue:

  1. Ensure Latest Versions: First, please make sure you are using the latest versions of torch and ultralytics. You can upgrade them using:

    %pip install -U torch ultralytics
  2. Environment Variables: It seems you are setting RANK and WORLD_SIZE to -1, which indicates single-node training. However, the error suggests that the code is attempting to use distributed training. Ensure that these environment variables are correctly set before running the training:

    os.environ["RANK"] = "0"
    os.environ["WORLD_SIZE"] = "1"
  3. Manual Initialization: If you still encounter issues, try manually initializing the process group before calling model.train(). This can be done as follows:

    import torch.distributed as dist
    
    if torch.cuda.device_count() > 1:
        dist.init_process_group(backend='nccl', init_method='env://')
  4. Training Code: Here is an updated version of your code snippet incorporating the above suggestions:

    %pip install -q -U ultralytics mlflow torch
    dbutils.library.restartPython()
    
    import os
    from ultralytics import YOLO
    import torch.distributed as dist
    import torch
    
    os.environ["RANK"] = "0"
    os.environ["WORLD_SIZE"] = "1"
    
    token = dbutils.notebook.entry_point.getDbutils().notebook().getContext().apiToken().get()
    dbutils.fs.put("file:///root/.databrickscfg","[DEFAULT]\nhost=<host>\ntoken = "+token,overwrite=True)
    
    model = YOLO('yolov8m-seg.pt')
    data_path = "data_path"
    
    model.tune(data=data_path + 'data.yaml', device=0,
               epochs=5, iterations=1, optimizer="AdamW", plots=False, save=False, val=False)
    
    if torch.cuda.device_count() > 1:
        dist.init_process_group(backend='nccl', init_method='env://')
    
    model.train(data=data_path + 'data.yaml', name='yolov8m_seg_train_after_tune', epochs=3, optimizer="AdamW", device=0,
                cfg="<path>/best_hyperparameters.yaml")
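
The environment-variable handling in steps 2 and 4 can be factored into a small helper so the string conversion stays consistent. This is a minimal sketch using only the standard library; `set_ddp_env` is a hypothetical name, not an Ultralytics API, and `MASTER_ADDR`/`MASTER_PORT` are included because the `env://` rendezvous in `init_process_group` reads those two variables as well:

```python
import os

def set_ddp_env(rank: int, world_size: int,
                master_addr: str = "127.0.0.1",
                master_port: str = "29500") -> None:
    """Export the variables that torch.distributed's env:// rendezvous reads.

    Hypothetical helper: values are converted to strings because os.environ
    only stores strings; RANK/WORLD_SIZE are the variables set in steps 2 and 4.
    """
    os.environ["RANK"] = str(rank)
    os.environ["WORLD_SIZE"] = str(world_size)
    os.environ["MASTER_ADDR"] = master_addr
    os.environ["MASTER_PORT"] = master_port

set_ddp_env(0, 1)
print(os.environ["RANK"], os.environ["WORLD_SIZE"])  # 0 1
```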

Please try these steps and let us know if the issue persists. Your feedback is invaluable to us, and we appreciate your patience as we work to resolve this.

from ultralytics.

glenn-jocher commented on July 24, 2024

Hi @lbeaucourt,

Thank you for the detailed follow-up and for sharing your working solution! 😊

It's great to hear that the provided solution works for model.train(). The difference in behavior between model.tune() and model.train() regarding the environment variables and process group initialization is indeed intriguing. This could be due to differences in how these methods handle distributed training under the hood.
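
One way to picture the convention involved: DDP launchers such as torchrun export RANK >= 0 for each worker, while RANK == -1 (or unset) conventionally means "no distributed context", and Ultralytics reads RANK with a -1 fallback. The sketch below is purely illustrative (`ddp_mode` is a hypothetical name; the real decision logic inside `model.tune()`/`model.train()` is more involved):

```python
import os

def ddp_mode(env=None) -> str:
    """Hypothetical illustration of the RANK convention: RANK >= 0 marks a
    DDP worker, while RANK == -1 or an unset RANK means single-process."""
    env = os.environ if env is None else env
    rank = int(env.get("RANK", -1))
    return "single-process" if rank == -1 else f"ddp worker {rank}"

print(ddp_mode({"RANK": "-1"}))  # single-process
print(ddp_mode({"RANK": "0"}))   # ddp worker 0
print(ddp_mode({}))              # single-process
```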

For now, your approach of setting the environment variables before model.train() and keeping the previous settings for model.tune() seems to be a practical workaround. If you encounter any further issues or have more questions, feel free to reach out.

Happy training! 🚀


github-actions commented on July 24, 2024

👋 Hello @lbeaucourt, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.


lbeaucourt commented on July 24, 2024

Hi @glenn-jocher, thank you very much for this clear reply!

I tested your solution and it works fine, BUT only for model.train(). Let me explain a bit: if I set the environment variables BEFORE model.tune() as follows

os.environ["RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"

model.tune(...)

Then tuning fails with the error: "Default process group has not been initialized, please make sure to call init_process_group"

But if I keep the previous env variable settings for tuning and only change them before training, it works!

So thanks for your answer, it solves my problem. I'm still not sure I understand why the behaviour differs between model.tune() and model.train(), but it's not a pain point.

The final version of the code that is working for me is:

%pip install -q -U ultralytics mlflow torch
dbutils.library.restartPython()

import os
from ultralytics import YOLO
import torch.distributed as dist
import torch

os.environ["RANK"] = "-1"
os.environ["WORLD_SIZE"] = "-1"

token = dbutils.notebook.entry_point.getDbutils().notebook().getContext().apiToken().get()
dbutils.fs.put("file:///root/.databrickscfg","[DEFAULT]\nhost=<host>\ntoken = "+token,overwrite=True)

model = YOLO('yolov8m-seg.pt')
data_path = "data_path"

model.tune(data=data_path + 'data.yaml', device=0,
           epochs=5, iterations=1, optimizer="AdamW", plots=False, save=False, val=False)

os.environ["RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
if torch.cuda.device_count() > 1:
    dist.init_process_group(backend='nccl', init_method='env://')

model.train(data=data_path + 'data.yaml', name='yolov8m_seg_train_after_tune', epochs=3, optimizer="AdamW", device=0,
            cfg="<path>/best_hyperparameters.yaml")
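
To keep the tune-time and train-time settings from leaking into later notebook cells, the flip between the two env-var configurations can also be wrapped in a context manager that restores the previous environment on exit. This is a sketch; `ddp_env` is a hypothetical name, not an Ultralytics API:

```python
import os
from contextlib import contextmanager

@contextmanager
def ddp_env(rank: str, world_size: str):
    """Temporarily set RANK/WORLD_SIZE, restoring the prior values on exit."""
    saved = {k: os.environ.get(k) for k in ("RANK", "WORLD_SIZE")}
    os.environ["RANK"], os.environ["WORLD_SIZE"] = rank, world_size
    try:
        yield
    finally:
        for key, value in saved.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value

with ddp_env("-1", "-1"):
    pass  # model.tune(...) would run here with the tune-time settings
```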

