futurexiang / ddae
[ICCV 2023 Oral] Official Implementation of "Denoising Diffusion Autoencoders are Unified Self-supervised Learners"
Could you please provide the full experiment environment needed to run the training script? I keep encountering a distributed-training error; I have tried many changes to the training parameters but always get the same error, so I suspect it is caused by the environment settings.
P.S.: I installed the recommended packages from the README and ran the training script under PyTorch 1.12 with: python -m torch.distributed.launch --nproc_per_node=4 train.py --config config/DDPM_ddpm.yaml --use_amp
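In case it helps while waiting for the authors' exact environment: since PyTorch 1.10, torch.distributed.launch is deprecated in favor of torchrun, which sets up the rendezvous environment itself and sometimes sidesteps launcher-related errors. This is only a hedged sketch under the assumption of 4 GPUs on a single node, not the authors' verified setup:

```shell
# Assumed setup: one node, 4 GPUs; config path taken from the original command.
# torchrun (PyTorch >= 1.10) is the drop-in replacement for the deprecated
# torch.distributed.launch; --standalone picks a free rendezvous port locally.
torchrun --standalone --nproc_per_node=4 train.py --config config/DDPM_ddpm.yaml --use_amp
```

If the error persists under torchrun as well, it is more likely an environment mismatch (CUDA/NCCL versions) than a launcher issue.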
Hi,
Firstly, I'd like to express my appreciation for your excellent work and the provided implementation code. It's been incredibly helpful.
I'm particularly interested in the fine-tuning process mentioned in your paper. Could you kindly share the fine-tuning code used to achieve these results? Also, it would be immensely helpful if you could include the training configurations for the DDAE-EDM model (as referenced in Figure 5b of the paper).
Thank you for your time and consideration.
Best
Dear authors,
Thanks for open-sourcing your amazing and inspiring work! I am really interested in applying generative diffusion models to discriminative tasks, and I would really appreciate it if you could help me with a couple of concerns regarding your paper.
Thanks again for your great work, looking forward to your response!
Hello,
Firstly, I'd like to express my appreciation for your outstanding work and the provided implementation code. It's been incredibly helpful.
I'm writing to ask whether you could release the trained models mentioned in your paper. Access to these models would be highly beneficial for further research.
Thank you for considering this request.
Best,
Hi,
Thanks for your code. I have a question about computing FID on larger datasets. If my dataset is Tiny-ImageNet or ImageNet 64x64, how many images should I generate for FID? Should the count match the exact dataset size (which is larger than 50k)? And should I change the batch size (125) and the number of batches (400, since 125 × 400 = 50k) in sample.py accordingly?
BTW, I see other codebases use total_training_steps instead of epochs. What is the relationship between the two?
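For what it's worth, the relationship between epochs and training steps is simple arithmetic. The dataset size, batch size, and epoch count below are hypothetical, chosen only for illustration, not taken from this repo's configs:

```python
# Hypothetical numbers for illustration only (not this repo's defaults).
dataset_size = 100_000   # e.g. Tiny-ImageNet has 100k training images
batch_size = 128
epochs = 800

steps_per_epoch = dataset_size // batch_size     # optimizer updates per pass over the data
total_training_steps = epochs * steps_per_epoch  # what step-based codebases count

# FID sampling as set up in sample.py: 400 batches of 125 images = 50,000 samples.
fid_samples = 125 * 400

print(steps_per_epoch, total_training_steps, fid_samples)  # 781 624800 50000
```

So an epoch-based schedule converts to a step-based one by multiplying epochs by the number of batches per epoch; conversely, a step budget divided by steps_per_epoch gives the equivalent epoch count.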
I hope this message finds you well. First and foremost, I would like to express my sincere appreciation for the incredible work you have done on your project shared on GitHub.
I am particularly interested in your Tiny-ImageNet project and would greatly appreciate it if you could share some of the code related to it.
Best regards, kecheng