chen742 / PiPa
Official Implementation of PiPa: Pixel- and Patch-wise Self-supervised Learning for Domain Adaptive Semantic Segmentation
Paper: https://arxiv.org/abs/2211.07609
I only found PixelContrastLoss but could not find the regional (patch-wise) loss. The contrastive code here appears to be the same as that of "Exploring Cross-Image Pixel Contrast for Semantic Segmentation", which conflicts with what the article describes. There is also no memory bank that stores positive and negative pixel samples.
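For context, a pixel-wise contrastive objective of the kind discussed here is usually an InfoNCE loss over pixel embeddings, where pixels of the same class act as positives. Below is a minimal, self-contained sketch of that idea; the function name, shapes, and sampling strategy are illustrative and not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def pixel_contrast_loss(emb, labels, temperature=0.1, max_pixels=256):
    """InfoNCE-style pixel contrast: same-class pixels are positives.

    emb:    (N, D) pixel embeddings (already sampled from a feature map)
    labels: (N,)   class index per pixel
    """
    emb = F.normalize(emb, dim=1)
    if emb.size(0) > max_pixels:                     # subsample to bound memory
        idx = torch.randperm(emb.size(0))[:max_pixels]
        emb, labels = emb[idx], labels[idx]
    sim = emb @ emb.t() / temperature                # (N, N) similarity logits
    pos = labels.unsqueeze(0) == labels.unsqueeze(1) # same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)
    pos = pos & ~eye                                 # exclude self-pairs
    logits = sim.masked_fill(eye, float('-inf'))     # self never a candidate
    log_prob = F.log_softmax(logits, dim=1)
    denom = pos.sum(1).clamp(min=1)
    # average -log p over the positive pairs of each anchor pixel
    loss = -(log_prob.masked_fill(~pos, 0.0)).sum(1) / denom
    return loss[pos.any(1)].mean()
```

A memory bank variant would additionally keep a queue of past embeddings per class and draw positives/negatives from it instead of only the current batch.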
Hi authors,
I installed the codebase following your instructions, but got the error below.
Could you help me? Any suggestion is welcome.
"RuntimeError: nms is not compiled with GPU support"
Thanks
Dear Author,
This is an excellent piece of work, and thanks a lot for sharing it. I was trying to run your code for experimentation. Could you please let me know how to run this code on a CPU-only device?
--
Best Regards,
Dinesh
Why does your patch loss use the features of the teacher network? And how is self.classifier in dacs.py trained?
Hi, thanks for your awesome code.
I noticed that the released code is designed for HRDA; could you please provide the code for DAFormer?
Great project! I am very interested in your work, and thanks for the release.
However, after reproducing the experiment with the code (HRDA+PiPa), I was not able to achieve the result reported in the paper. Specifically, I ran
python run_experiments.py --config configs/pipa/gtaHR2csHR_hrda.py
with random seeds 0, 1, and 2, and achieved mean intersection over union (mIoU) scores of 74.52, 74.34, and 74.73, respectively. Here are my logs:
seed=0, mIoU=74.52 (20230109_160110.log)
seed=1, mIoU=74.34 (20230208_233817.log)
seed=2, mIoU=74.73 (20230210_153346.log)
I don't know the reason for the performance drop. Could you please tell me a possible reason, or give any hint on how to reproduce the results?
Thanks in advance.
Best,
Yuanbing
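For reference, the mIoU figures compared above are the mean over classes of per-class intersection over union, typically computed from a confusion matrix over all validation pixels. A minimal NumPy sketch of that computation (function name and ignore index are illustrative, not taken from the repository):

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """mIoU from a confusion matrix; pred/gt are integer label maps."""
    mask = gt != ignore_index
    pred, gt = pred[mask], gt[mask]
    # conf[i, j] = number of pixels with ground truth i predicted as j
    conf = np.bincount(num_classes * gt + pred,
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - np.diag(conf)
    iou = inter / np.maximum(union, 1)   # guard against divide-by-zero
    valid = union > 0                    # skip classes absent from pred and gt
    return 100.0 * iou[valid].mean()
```

Run-to-run spread of a few tenths of a point across seeds, as in the logs above, is common for UDA segmentation training.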
Hi, thanks for sharing your code; it's really awesome work for UDA.
I have noticed that the embedding features for "Source Pixel Contrast" and "Target Patch Contrast" are not the same.
Concretely, the embedding features for "Source Pixel Contrast" are obtained by manually concatenating 4 feature maps from the backbone:
PiPa/PiPa_DAFormer/mmseg/models/segmentors/encoder_decoder.py
Lines 174 to 181 in 4eb532e
However, the embedding features for "Target Patch Contrast" are taken directly from the fuse_layer:
PiPa/PiPa_DAFormer/mmseg/models/decode_heads/daformer_head.py
Lines 176 to 180 in 4eb532e
Why not use the same embedding features?
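To make the question concrete: concatenating multi-scale backbone features, as in the pixel-contrast branch described above, generally means upsampling every stage to a common resolution and stacking along channels. The sketch below illustrates that pattern; the channel counts and spatial sizes are made up for the example, not the exact ones in encoder_decoder.py.

```python
import torch
import torch.nn.functional as F

def concat_multiscale(feats):
    """Upsample a list of backbone feature maps to the finest
    spatial size and concatenate along the channel dimension."""
    h, w = feats[0].shape[-2:]            # assume feats[0] is the finest scale
    up = [F.interpolate(f, size=(h, w), mode='bilinear', align_corners=False)
          for f in feats]
    return torch.cat(up, dim=1)           # (B, sum of C_i, H, W)

# e.g. four stages of a hierarchical backbone (illustrative shapes)
feats = [torch.randn(2, c, s, s)
         for c, s in [(64, 32), (128, 16), (320, 8), (512, 4)]]
fused = concat_multiscale(feats)          # (2, 64+128+320+512, 32, 32)
```

A fuse_layer output, by contrast, is a single learned projection of those stages, so the two branches see embeddings from different points in the decoder.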
How is self.classifier in dacs.py trained?
And how is self.cls_head in encoder_decoder.py trained?
I did not find a related loss such as cross-entropy.
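For context, an auxiliary classifier head of this kind is normally trained jointly with the rest of the network through a standard pixel-wise cross-entropy term added to the total loss. A minimal sketch of that pattern follows; the module shapes and the 1x1-conv head are assumptions for illustration, not the repository's actual definitions.

```python
import torch
import torch.nn as nn

# hypothetical 1x1-conv classifier head over 256-dim embeddings
cls_head = nn.Conv2d(256, 19, kernel_size=1)   # 19 = Cityscapes classes
ce = nn.CrossEntropyLoss(ignore_index=255)     # skip unlabeled pixels

emb = torch.randn(2, 256, 64, 64)              # embeddings from the decoder
labels = torch.randint(0, 19, (2, 64, 64))     # ground-truth label map
logits = cls_head(emb)                         # (2, 19, 64, 64)
loss = ce(logits, labels)                      # would be added to the total loss
loss.backward()                                # gradients reach the head's weights
```

If no such term appears in the training step, the head would indeed receive no gradient, which is presumably what this question is probing.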
I want to know whether the results you report come from the best checkpoint or the last checkpoint.
From the code you released, the ImageNet feature distance is used in your work, but in your paper the FD loss is not included in the total loss. Could you explain this? Thank you very much!
Dear authors,
Thanks for your impressive work and for sharing the code with the community. I am a student studying your paper, and your work inspires me a lot.
I want to learn more about your work by running the code. Could I train it with my 3080 GPU (laptop)? Do I need more GPUs for distributed training?
Hello,
I wanted to know whether it is possible to generate realistic images (like Cityscapes images) instead of segmentation masks.
Thanks
May I ask the authors of PiPa how much GPU memory this work requires? Why does a 24 GB 3090 report an out-of-memory error?
The Google Drive link to download the [MiT weights] is not working; please share a working link. @chen742
Hi,
Thanks for your great work!
I am doing a research project and would like to adopt your model as my baseline, but I would like to know whether this model can only be trained on one GPU.