astra-vision / comogan
CoMoGAN: continuous model-guided image-to-image translation. CVPR 2021 oral.
License: Apache License 2.0
I train on an NVIDIA 3090 (24 GiB total), but it still runs out of memory. Could you help me see if there is any problem?
thanks!
Thanks for your great work! I want to know how many iterations/epochs the given pretrained model was trained for.
I'm especially interested in the foggy-scene i2i translation function; would you mind providing one of your foggy-scene pretrained weights?
Thanks in advance.
I created the conda environment following the README.md. However, my terminal threw the following errors:
Solving environment: failed
ResolvePackageNotFound:
Hi!
Thanks for your research and code! I have some questions about the linear FIN. If I want to change a cyclic FIN to a linear FIN, do I just need to modify the definition of phi and the __apply_colormap function? I found that I also needed to change the code in many places in comomunit.py and comomunit_model.py. Is there an easy way to do this?
I went to their website, downloaded training_0000.tar from https://console.cloud.google.com/storage/browser/waymo_open_dataset_v_1_0_0, and tried to run dump_waymo.py, but it says: no such file or directory: 'sunny_sequences.txt'.
My file structure is like:
./scripts
train
dump_waymo.py
sunny_sequences.txt
I am very excited about and very interested in the results of your model.
May I ask ... will you be able to release your trained model?
Hi!
First of all, thank you for your work. I have been waiting for your code since I read your paper!
I was wondering if you could give some advice on modifying your code from a cyclic function in the FIN layer to a linear one.
I actually tried to just replace every cos_phi / sin_phi with a plain phi, but I'm not sure that will be enough.
Maybe I am missing some major points by only changing these.
Thank you again!
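For what it's worth, here is a minimal sketch of the conceptual difference between the two φ encodings, assuming the FIN layer is conditioned on a small feature vector derived from φ. The function names `cyclic_embed` and `linear_embed` are my own, not identifiers from the repository:

```python
import math

def cyclic_embed(phi):
    """Cyclic phi encoding: a point on the unit circle, so phi = 0 and
    phi = 2*pi yield (numerically) identical features -- the timeline
    wraps around."""
    return [math.cos(phi), math.sin(phi)]

def linear_embed(phi, phi_max=1.0):
    """Linear phi encoding: rescale [0, phi_max] to [-1, 1]; the two
    endpoints stay distinct, there is no wrap-around."""
    return [2.0 * phi / phi_max - 1.0]
```

Note that beyond swapping cos_phi / sin_phi for a plain phi, the conditioning dimensionality drops from 2 to 1, which is presumably why downstream code in comomunit.py and comomunit_model.py that assumes a two-component encoding also needs changes.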
I downloaded waymo_open_dataset_v_1_2_0_individual_files, whose file names look like this: "segment-9145030426583202228_1060_000_1080_000_with_camera_labels.tfrecord".
dump_waymo.py seems to extract nothing; no file appears to match sunny_sequences.txt, so all files are skipped.
How can I solve this problem? Can I just comment out the sunny_sequences.txt check?
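I don't know the internals of dump_waymo.py, but conceptually the allow-list could be made optional along these lines. This is only a sketch; `select_sequences` and the fallback behavior are my assumptions, not code from the repository:

```python
import glob
import os

def select_sequences(data_dir, allowlist_path="sunny_sequences.txt"):
    """Return the tfrecord paths to process, filtered by an allow-list
    of sequence names if one is present; otherwise keep every file."""
    records = sorted(glob.glob(os.path.join(data_dir, "*.tfrecord")))
    if not os.path.exists(allowlist_path):
        return records  # no allow-list available: process everything
    with open(allowlist_path) as f:
        allowed = {line.strip() for line in f if line.strip()}
    return [r for r in records
            if os.path.basename(r).split(".")[0] in allowed]
```

Bear in mind that the allow-list presumably exists to keep only clear-weather sequences, so skipping the filter may let rainy or otherwise unsuitable footage into the training set.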
Thank you for your research and for sharing your code!
I want to train a custom RGB-to-RGB dataset, e.g. blurred_image to focused_image.
From your paper it seems that I should use the linear-target approach.
How would I go about creating the dataset structure? Should it be as simple as trainA (blurred images) and trainB (focused images)?
Can you provide your linear-target dataset-loading files?
Thank you!
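As a rough sketch of what an unpaired dataset with a linear target might look like (the trainA/trainB layout, the class name, and the uniform sampling of φ are my assumptions, not the repository's actual loading code):

```python
import glob
import os
import random

class UnpairedLinearDataset:
    """Unpaired A -> B dataset with a continuous linear target phi in [0, 1].

    Expects a layout such as:
        root/trainA/*  (e.g. blurred images)
        root/trainB/*  (e.g. focused images)
    """

    def __init__(self, root):
        self.paths_a = sorted(glob.glob(os.path.join(root, "trainA", "*")))
        self.paths_b = sorted(glob.glob(os.path.join(root, "trainB", "*")))

    def __len__(self):
        return max(len(self.paths_a), len(self.paths_b))

    def __getitem__(self, idx):
        path_a = self.paths_a[idx % len(self.paths_a)]
        path_b = random.choice(self.paths_b)   # unpaired: any B sample
        phi = random.uniform(0.0, 1.0)         # linear target, no wrap-around
        return {"A": path_a, "B": path_b, "phi": phi}
```

The real loader would of course also decode and transform the images; this only illustrates the folder convention and how a linear φ could be attached to each sample.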
Hi, I think your work is really interesting! I have a question about equation (7) in the paper: h^Y and h^Y_M are each a sum of three kinds of features, but in the code they are sums of four kinds of features. Did I misunderstand something?
When running translate.py to convert the daytime images to night scenes, it says: segmentation fault (core dumped). The dataset size is only about 700 images.
CUDA version: 11.4
RAM: 376 GB
GPU: RTX TITAN
System: Ubuntu 18.04
Dear author,
Thanks for your impressive work; I'm very honored to ask you a few questions. First, which physical model should I choose if I want to do RGB-to-infrared image translation? Is there a filter, like the one described in the paper, that would help me do this? Second, I think I should use a linear model, so what should I modify? I am looking forward to your advice.
Thank you!
Thanks for your code! I have one question about restarting a training run. According to the README.md, I seem to be able to use: python train.py --id train_ID --path_data path/to/waymo/training/dir --gpus 0. But when I use the pretrained model, it creates a new version. Is that right?
I realize that the equation for the tone mapping (Eq. 2) in the supplementary material is inconsistent with lines 162 and 165 of day2timelapse_dataset.py. Is this an error, or am I missing something here?
Dear author,
Thank you for this very impressive work. I just visualized the tone mapping results, and I think they are very similar to images obtained by color jittering. So, can tone mapping simply be replaced by color jittering? And what do the values in daytime_model_lut.csv represent?
Thanks!