Comments (6)
I don't think that's a parameter https://docs.ultralytics.com/modes/train/#train-settings
from ultralytics.
Hello,
You're correct; the confidence threshold is not directly adjustable during the training phase using model.train(). The confidence threshold is typically used during inference to filter out detections based on their confidence scores. When training, the model learns to predict bounding boxes and class confidences, and the actual threshold can be set later during validation or prediction.
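In practice that threshold is just a filter over the raw detections (in Ultralytics it corresponds to the conf argument of model.predict()). A minimal sketch of the idea in plain Python, with made-up detection values:

```python
# Each detection is a (class_name, confidence) pair; values are made up for illustration
detections = [("pen", 0.91), ("pencil", 0.47), ("eraser", 0.12)]

conf_threshold = 0.5
# Only detections at or above the threshold survive
kept = [d for d in detections if d[1] >= conf_threshold]
```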
If you have further questions or need assistance with anything else, feel free to ask! Happy coding! 😊
from ultralytics.
Hmm... I understood your answer, but it doesn't fit my case.
I'm sorry for my poor English.
By the way, to summarize, what I want to do is as follows:
- I have 17 segmentation models, each classifying only one class.
- I use these models to predict 17 kinds of objects.
- Each model runs prediction on the same image.
- I collect the results of each prediction and combine them.
- I draw masks from the combined results with OpenCV.
- But in this case, I have a critical problem, like this:
- Two models detect and mask the same object, and their confidences are very similar and both high.
- Furthermore, the object is a pen, but the pencil model predicted it with higher confidence.
- So I want to solve this problem by post-processing the results:
- Check whether a mask covers the same area as another mask.
- Additionally, apply priority selection for overlapping areas.
- Run this check across all of the models' predictions.
- More than 30 objects can be detected in one image.
- Is there any solution for this case, other than improving model performance?
from ultralytics.
@jihunddok hello! Thank you for providing the clear summary of your scenario. It seems like the challenge you're facing is mainly about handling overlapping masks and prioritizing certain detections when multiple models predict different objects in similar areas.
One effective approach could be to implement a post-processing step where you can merge or prioritize overlapping masks based on certain criteria. Here's a basic strategy:
- Intersection Over Union (IoU) - Calculate the IoU for overlapping masks. If IoU exceeds a threshold (e.g., 0.5), you may consider them as overlapping.
- Confidence Score Priority - In cases of overlap, keep the mask with the higher confidence score and discard the lower one.
- Class Priority List - If you know certain objects (like 'pen' over 'pencil') are more likely or more important, you can create a priority list. Use this list to decide which mask to keep when overlaps occur.
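The class-priority idea can be sketched on its own; the class names and priority order below are invented for illustration:

```python
# Hypothetical priority order: lower index means higher priority
CLASS_PRIORITY = ["pen", "pencil", "eraser"]

def pick_by_priority(candidates):
    """Among overlapping detections given as (class_name, confidence) pairs,
    prefer the higher-priority class; fall back to confidence on a tie."""
    return min(candidates, key=lambda c: (CLASS_PRIORITY.index(c[0]), -c[1]))

# The pencil detection has higher confidence, but pen outranks it in the priority list
winner = pick_by_priority([("pencil", 0.93), ("pen", 0.91)])
```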
Here's a simple implementation:
import numpy as np

def process_predictions(predictions, confidence_threshold=0.5, iou_threshold=0.5):
    # predictions is a list of tuples (mask, confidence, class_id),
    # where each mask is a boolean NumPy array of the same shape
    # Drop low-confidence detections, then sort by confidence (highest first)
    predictions = [p for p in predictions if p[1] >= confidence_threshold]
    predictions.sort(key=lambda x: x[1], reverse=True)
    final_masks = []
    for current_mask, current_conf, current_class in predictions:
        keep = True
        for final_mask, _, _ in final_masks:
            # Overlaps an already-kept, higher-confidence mask: discard it
            if calculate_iou(current_mask, final_mask) > iou_threshold:
                keep = False
                break
        if keep:
            final_masks.append((current_mask, current_conf, current_class))
    return final_masks

# Utility function to calculate IoU between two boolean masks
def calculate_iou(mask1, mask2):
    intersection = np.logical_and(mask1, mask2).sum()
    union = np.logical_or(mask1, mask2).sum()
    return float(intersection) / float(union) if union > 0 else 0.0
This method does not require enhancing model performance but helps intelligently manage the outputs from multiple models. Use OpenCV to draw the masks in final_masks, which should now have reduced overlaps and prioritized detections. Adjusting the iou_threshold and confidence_threshold arguments of process_predictions gives finer control over the outcome.
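As a rough sketch of the drawing step (the per-class color map and mask shapes here are invented for illustration; cv2.addWeighted or cv2.drawContours can replace the plain NumPy overlay):

```python
import numpy as np

# Hypothetical BGR color per class id, for illustration only
CLASS_COLORS = {0: (255, 0, 0), 1: (0, 255, 0)}

def draw_masks(image, final_masks):
    """Paint each kept mask onto a copy of the image in its class color."""
    canvas = image.copy()
    for mask, _conf, class_id in final_masks:
        color = CLASS_COLORS.get(class_id, (255, 255, 255))
        # Hard overlay via boolean indexing; blend with cv2.addWeighted if preferred
        canvas[mask.astype(bool)] = color
    return canvas

# Tiny dummy example: a 4x4 black image with one 2x2 mask of class 0
image = np.zeros((4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
out = draw_masks(image, [(mask, 0.9, 0)])
```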
Please test and adapt the code as necessary for your specific application context. Hope this helps! Let me know if you have further questions or need more specific examples. Happy coding! 😊
from ultralytics.
Thanks for your answer.
OK, when I train the model on my data, the precision is good, but the recall is very low.
How can I fix this problem?
from ultralytics.
Hello,
Low recall often indicates that your model is missing detections, leading to fewer true positives. Here are a few suggestions that you might find helpful:
- Adjust the IoU threshold during training - Lowering the Intersection over Union (IoU) threshold may increase the number of positives the model detects, as it relaxes the criteria for a positive match.
- Data Augmentation - Consider augmenting your training data with varied transformations to help the model generalize better, potentially detecting more true positives.
- Reevaluate the dataset - Ensure that your dataset is balanced and annotations are accurate. Sometimes an imbalance or inaccurate annotations can lead to lower recall.
Here's a quick example with Ultralytics YOLO. Note that iou_t was a YOLOv5-era hyperparameter; recent ultralytics releases do not accept it in model.train(), and the documented iou argument instead controls the NMS IoU threshold at validation/prediction:

from ultralytics import YOLO

# Load a model
model = YOLO('path_to_model.pt')

# Train normally, then validate with an adjusted NMS IoU threshold
model.train(data='data.yaml')
results = model.val(iou=0.5)

Experimenting with this threshold changes how many overlapping detections survive NMS, which can affect measured recall.
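For the data-augmentation suggestion above, augmentation strength is exposed through train-time hyperparameters (degrees, translate, scale, fliplr, mosaic, and hsv_v are documented Ultralytics train settings; the specific values below are illustrative, not recommendations):

```python
# Illustrative augmentation overrides to tune for your data
augmentation_overrides = dict(
    degrees=10.0,    # random rotation range in degrees
    translate=0.2,   # random translation as a fraction of image size
    scale=0.5,       # random scaling gain
    fliplr=0.5,      # probability of horizontal flip
    mosaic=1.0,      # probability of mosaic augmentation
    hsv_v=0.4,       # brightness (value) jitter
)
# Usage sketch: model.train(data='data.yaml', **augmentation_overrides)
```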
Reviewing the 'results.csv' you attached could provide more insight into specific reasons why recall might be low with your current settings. Looking forward to hearing back from you! 😊
from ultralytics.
Related Issues (20)
- Converting Yolov8 Segmentation Labels To Be Suitable For Maskrcnn Segmentation HOT 9
- cache the last data dir, the second time to train will be updated HOT 2
- Dependency of ray tune HOT 2
- About class name HOT 6
- Angle change and atan2 function HOT 2
- Obb loss value problem HOT 1
- MultiThread or MultiProcess Prediction with YOLO HOT 4
- Hello, could you please share the hyperparameters of obb training on the DOTAv1.0 dataset? HOT 1