
mtaf's People

Contributors

mohamedelmesawy, sportaxd


mtaf's Issues

Problem with the training loss

Thanks again for this enlightening work.

I trained the Mdis method with one source and one target, but I am confused by the losses, which I plotted in TensorBoard.
As I understand it, the adversarial loss should go down while the discriminator loss goes up,
but in the plots below the two losses just oscillate around a constant value. What is wrong?

Besides, I would expect the results to be better when training with one source and one target rather than one source and multiple targets, but in my runs I do not get good results.

I would sincerely appreciate your thoughts.

And my training config:
adv loss weight: 0.5
adv learning rate: 1e-5
seg learning rate: 1.25e-5

[figure: adversarial loss, one source and one target]

[figure: discriminator loss, one source and one target]
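
For reference, this is the kind of adversarial / discriminator objective I have in mind (a rough sketch with made-up names, not your exact code), where I would expect the two losses to pull against each other:

    # Illustrative sketch only -- hypothetical names, not the repository's exact code.
    import torch
    import torch.nn.functional as F

    source_label, target_label = 0.0, 1.0

    def adversarial_loss(pred_trg, discriminator, adv_weight=0.5):
        # The segmentation network tries to make target predictions look like source ones,
        # so the discriminator output on target data is pushed towards the source label.
        d_out = discriminator(F.softmax(pred_trg, dim=1))
        return adv_weight * F.binary_cross_entropy_with_logits(
            d_out, torch.full_like(d_out, source_label))

    def discriminator_loss(pred_src, pred_trg, discriminator):
        # The discriminator tries to tell source predictions from target predictions.
        d_src = discriminator(F.softmax(pred_src.detach(), dim=1))
        d_trg = discriminator(F.softmax(pred_trg.detach(), dim=1))
        return 0.5 * (
            F.binary_cross_entropy_with_logits(d_src, torch.full_like(d_src, source_label))
            + F.binary_cross_entropy_with_logits(d_trg, torch.full_like(d_trg, target_label)))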

About the generation of segmentation color maps

Thanks for the great research!

I have a question, though: the mIoU reported in the paper is for 7 classes, but the segmentation color maps in the qualitative analysis seem to use the 19 classes commonly used in domain-adaptive semantic segmentation.

In other words, how can a model trained on 7 classes be used to generate a 19-class segmentation color map? Or is my understanding wrong?
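
For example, this is roughly how I imagine a 7-class prediction could be rendered with colors borrowed from the 19-class Cityscapes palette (the grouping and the colors below are just my guess, not necessarily what you did):

    import numpy as np

    # Assumed 7 super-classes (flat, construction, object, nature, sky, human, vehicle),
    # each colored with one representative Cityscapes color.
    PALETTE_7 = np.array([
        [128,  64, 128],  # flat         (road)
        [ 70,  70,  70],  # construction (building)
        [153, 153, 153],  # object       (pole)
        [107, 142,  35],  # nature       (vegetation)
        [ 70, 130, 180],  # sky
        [220,  20,  60],  # human        (person)
        [  0,   0, 142],  # vehicle      (car)
    ], dtype=np.uint8)

    def colorize(label_map):
        """label_map: HxW array of class ids in [0, 6] -> HxWx3 color image."""
        return PALETTE_7[label_map]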

I look forward to your response.

Thank you!

About MTKT code

In train_UDA.py, line 758:

        d_main_list[i] = d_main
        optimizer_d_main_list.append(optimizer_d_main)
        d_aux_list[i] = d_aux
        optimizer_d_aux_list.append(optimizer_d_aux)

If this is done (d_main_list[i] = d_main and d_aux_list[i] = d_aux), all the entries in the list end up referring to the same discriminator. Shouldn't there be one discriminator for each classifier?
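
For comparison, this is what I expected the loop to look like, with a fresh discriminator pair per target (an illustrative sketch only; the optimizer settings are placeholders, and get_fc_discriminator, num_classes and device are taken from train_UDA.py):

    import torch

    d_main_list, d_aux_list = [], []
    optimizer_d_main_list, optimizer_d_aux_list = [], []
    for _ in range(num_targets):
        # Fresh instances on every iteration, so each classifier gets its own discriminators.
        d_main = get_fc_discriminator(num_classes=num_classes)
        d_aux = get_fc_discriminator(num_classes=num_classes)
        d_main.train(); d_main.to(device)
        d_aux.train(); d_aux.to(device)
        d_main_list.append(d_main)
        d_aux_list.append(d_aux)
        optimizer_d_main_list.append(torch.optim.Adam(d_main.parameters(), lr=1e-4, betas=(0.9, 0.99)))
        optimizer_d_aux_list.append(torch.optim.Adam(d_aux.parameters(), lr=1e-4, betas=(0.9, 0.99)))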

question about adversarial training code in train_UDA.py

Thank you for sharing the code for your excellent work. I have some basic questions about your implementation.
    pred_trg_main = interp_target(all_pred_trg_main[i+1])    ## what does [i+1] mean?
    pred_trg_main_list.append(pred_trg_main)
    pred_trg_target = interp_target(all_pred_trg_main[0])    ## what does [0] mean?
    pred_trg_target_list.append(pred_trg_target)

In train_UDA.py, lines 829-836, why do we use indices [i+1] and [0]? What is the meaning of that? Also, where is the target-agnostic classifier defined in your code?
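
To make my question concrete, this is the layout I am guessing at (purely my assumption, not something I found documented): index 0 would be the target-agnostic head and i+1 the i-th target-specific head:

    # Assumed layout (illustrative): index 0 = target-agnostic output,
    # indices 1..N = target-specific outputs, one per target domain.
    all_pred_trg_main = model(images_target)      # hypothetical forward pass
    pred_agnostic = all_pred_trg_main[0]          # shared, target-agnostic prediction
    pred_specific = all_pred_trg_main[i + 1]      # prediction of the i-th target-specific head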

Thanks again and look forward to hearing back from you!

Running MTAF on a slightly different setup

Hello, thanks for sharing the code and for such a good contribution. I would like to run your method on a slightly different setup, specifically adapting from Cityscapes ---> BDD, Mapillary. I have seen that the code accepts Cityscapes as both source and target, so that shouldn't be a problem, and I have added a dataloader for BDD to be target 1.
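
For reference, my BDD dataloader is roughly along these lines (a simplified sketch; the class name, file layout and mean values are my own choices, not code from this repository):

    import numpy as np
    from PIL import Image
    from torch.utils import data

    class BDDDataSet(data.Dataset):
        """Hypothetical target-domain loader: returns (image, size, name); UDA targets need no labels."""
        def __init__(self, root, list_path, crop_size=(1280, 720),
                     mean=(104.00698793, 116.66876762, 122.67891434)):
            self.root = root
            self.crop_size = crop_size
            self.mean = np.array(mean, dtype=np.float32)
            with open(list_path) as f:
                self.files = [line.strip() for line in f]

        def __len__(self):
            return len(self.files)

        def __getitem__(self, index):
            name = self.files[index]
            image = Image.open(f"{self.root}/images/{name}").convert('RGB')
            image = image.resize(self.crop_size, Image.BICUBIC)
            image = np.asarray(image, np.float32)[:, :, ::-1] - self.mean   # RGB -> BGR, subtract mean
            size = np.array(image.shape[:2])
            return image.transpose(2, 0, 1).copy(), size, name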

In order to get the best performance, do I need to train the baseline and then train the method using MTKT or MDIS loading the baseline as pretrained? Or do I get the best performance directly by running the training script for MTKT or MDIS without the baseline?

about eval_UDA.py

Thanks for sharing your code.

I was very impressed by your research.

Could you explain why the output map is not resized to the target size (cfg.TEST.OUTPUT_SIZE_TARGET) for the Mapillary dataset, in line 57 of eval_UDA.py?

When I tested the trained model on the Mapillary dataset, inference took a long time due to the large resolution.
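
What I had in mind was resizing the logits to a fixed size before the argmax, roughly like this (a sketch; the output size and the indexing of the model output are placeholders, not your exact code):

    import torch.nn.functional as F

    # Upsample the network output to a fixed evaluation size before taking the argmax;
    # F.interpolate expects size as (H, W).
    output_size = (512, 1024)                     # illustrative (H, W)
    logits = model(image)[1]                      # hypothetical: second output = main prediction
    logits = F.interpolate(logits, size=output_size, mode='bilinear', align_corners=True)
    pred = logits.argmax(dim=1).squeeze(0).cpu().numpy()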

I'm looking forward to hearing from you.

Thank you!

Questions about the training data

Thanks for this enlightening and practical work on multi-target DA!
I have read your paper and noticed that the one source dataset and the three target datasets have unequal amounts of data. Does the quantity of data in each domain matter, and what is an appropriate amount of training data for MTKT?
Another question: why is the KL loss used for knowledge transfer? If I want to train a word embedding instead of a segmentation map, is the KL loss still appropriate, and is there a better alternative?
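
For context, this is how I currently picture the KL-based transfer term (a rough sketch; the direction of the divergence and the names are my assumption, not your exact code):

    import torch.nn.functional as F

    def kl_transfer_loss(agnostic_logits, specific_logits):
        """KL(p_specific || p_agnostic), averaged over the batch (illustrative formulation)."""
        log_p_agnostic = F.log_softmax(agnostic_logits, dim=1)
        p_specific = F.softmax(specific_logits.detach(), dim=1)   # teacher not updated by this loss
        return F.kl_div(log_p_agnostic, p_specific, reduction='batchmean')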

About labels of IDD dataset

Hello! @SportaXD
Thank you for your great work!

I was reproducing the code and noticed that the labels in the IDD dataset are provided as JSON files instead of segmentation label maps.

How is this problem solved?
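
To make the question concrete, here is the kind of generic conversion I was considering, rasterizing polygon-style JSON into a label mask (the JSON layout, field names and label ids are assumptions for illustration; the official IDD tools may do this differently):

    import json
    import numpy as np
    from PIL import Image, ImageDraw

    # Hypothetical mapping from annotation label names to training ids.
    LABEL_TO_ID = {'road': 0, 'building': 1, 'person': 2}   # illustrative only

    def json_to_mask(json_path, ignore_id=255):
        with open(json_path) as f:
            ann = json.load(f)                               # assumed Cityscapes-like layout
        mask = Image.new('L', (ann['imgWidth'], ann['imgHeight']), ignore_id)
        draw = ImageDraw.Draw(mask)
        for obj in ann['objects']:
            label_id = LABEL_TO_ID.get(obj['label'])
            if label_id is not None:
                draw.polygon([tuple(p) for p in obj['polygon']], fill=label_id)
        return np.array(mask)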

About 'the multi-target baseline'

Thank you for sharing the code for your excellent work. I have some basic questions about your implementation.

    d_main = get_fc_discriminator(num_classes=num_classes)
    d_main.train()
    d_main.to(device)
    d_aux = get_fc_discriminator(num_classes=num_classes)
    d_aux.train()
    d_aux.to(device)

Can you tell me why the multi-domain baseline code uses only a single discriminator rather than multiple discriminators?
It looks like a single-domain approach.
Thanks!
