ssrsgjyd / neuraltexture
Unofficial implementation of the paper "Deferred Neural Rendering: Image Synthesis using Neural Textures" in PyTorch.
Hello, thanks for the implementation.
I was wondering if you experimented with the UV atlas as specified in the paper? I couldn't find any information in the paper about how they used it other than "uv-parameterization is computed based on the Microsoft uv-atlas generator".
Thanks.
I've been at this for a while, tweaking config.py, but I can't seem to produce a clean, grid-free output. Does anyone have any suggestions?
A secondary point worth noting is the oddly clipped gamma curve on the Obama sequence.
When I run train.py, line 52 in pipeline.py, `x[:, 3:12, :, :] = x[:, 3:12, :, :] * basis[:, :]`, reports an error:
RuntimeError: The size of tensor a (512) must match the size of tensor b (9) at non-singleton dimension 3
I then printed the corresponding shapes:
basis.shape: torch.Size([32, 9])
x[:, 3:12, :, :].shape: torch.Size([32, 9, 512, 512])
What's the problem?
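The error is a broadcasting mismatch: NumPy and PyTorch align shapes from the trailing dimension, so `(32, 9, 512, 512) * (32, 9)` compares 9 against 512. A minimal sketch of the fix (in NumPy, which follows the same broadcasting rules as PyTorch; small stand-in shapes for the reported ones):

```python
import numpy as np

# The reported shapes are x[:, 3:12] -> (32, 9, 512, 512) and
# basis -> (32, 9); small stand-ins keep the demo light.
batch, H, W = 2, 4, 4
x = np.ones((batch, 12, H, W), dtype=np.float32)
basis = np.full((batch, 9), 2.0, dtype=np.float32)

# Direct multiplication fails because broadcasting aligns trailing axes,
# comparing channel count 9 against width W. Adding two trailing singleton
# axes lets the per-sample, per-channel coefficients broadcast over H and W:
x[:, 3:12] = x[:, 3:12] * basis[:, :, None, None]  # basis -> (batch, 9, 1, 1)
```

In PyTorch the same fix would be `basis[:, :, None, None]` or `basis.view(-1, 9, 1, 1)`.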
Hi, thanks for the code!
I just started to learn this paper with your code.
As far as I understand, there are two neural networks, and I trained them jointly using train.py with 410 basketball images.
When I run render.py with the trained model, there are artifacts in the output image that I don't understand. What are they and how can I remove them?
Thanks!
I've looked all over the internet for how to simply render a UV map or "texel-to-pixel mapping" and have found nothing. As mentioned in the README, OpenGL_NeuralTexture is an option, but I am hesitant to use that repo because it's rather messy: most of it is hardcoded and inflexible, and I couldn't even compile it on Windows.
Is there another way to get the UV input data?
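One alternative to the OpenGL tool is to rasterize the UV map yourself: for each screen-space triangle, interpolate the per-vertex UVs with barycentric weights. A minimal single-triangle sketch in NumPy (the function name and interface are my own, not from this repo; a real mesh would loop over triangles with depth testing):

```python
import numpy as np

def rasterize_uv(tri_xy, tri_uv, height, width):
    """Rasterize one screen-space triangle into a per-pixel UV map.

    tri_xy: (3, 2) triangle vertices in pixel coordinates.
    tri_uv: (3, 2) UV coordinates at those vertices.
    Returns a (height, width, 2) float array; pixels outside the
    triangle stay at 0.
    """
    uv_map = np.zeros((height, width, 2), dtype=np.float32)
    ys, xs = np.mgrid[0:height, 0:width]
    p = np.stack([xs, ys], axis=-1).astype(np.float32)  # (H, W, 2) pixel coords

    # Barycentric coordinates of every pixel with respect to the triangle.
    a, b, c = tri_xy
    v0, v1 = b - a, c - a
    v2 = p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20 = v2 @ v0
    d21 = v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    w0 = 1.0 - w1 - w2

    # A pixel is inside the triangle iff all three weights are non-negative;
    # interpolate the vertex UVs there.
    inside = (w0 >= 0) & (w1 >= 0) & (w2 >= 0)
    uv = (w0[..., None] * tri_uv[0]
          + w1[..., None] * tri_uv[1]
          + w2[..., None] * tri_uv[2])
    uv_map[inside] = uv[inside]
    return uv_map

# A right triangle covering the upper-left half of a 64x64 image.
tri_xy = np.array([[0.0, 0.0], [63.0, 0.0], [0.0, 63.0]])
tri_uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
uv_map = rasterize_uv(tri_xy, tri_uv, 64, 64)
```

Libraries such as trimesh or nvdiffrast can also produce this mapping for full meshes, if a dependency is acceptable.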
For some reason I get different results every time I run train.py, and I have no idea why. This has stopped me from getting any meaningful results.
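Run-to-run variation usually comes from unseeded random number generators (weight initialization, data shuffling). A sketch of pinning the common sources of randomness before training (which seeding calls train.py actually needs depends on the libraries it uses; the PyTorch lines are noted in comments):

```python
import os
import random
import numpy as np

def seed_everything(seed: int = 42) -> None:
    """Pin the common RNGs so repeated runs start from identical state."""
    random.seed(seed)
    np.random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # For PyTorch, the equivalents would be:
    # torch.manual_seed(seed)
    # torch.cuda.manual_seed_all(seed)
    # torch.backends.cudnn.deterministic = True
    # torch.backends.cudnn.benchmark = False

# With the same seed, two draws from the global RNG are identical.
seed_everything(42)
first = np.random.rand(3)
seed_everything(42)
second = np.random.rand(3)
```

Note that some CUDA kernels are nondeterministic even when seeded, so small differences can remain on GPU.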
Thank you very much for your work.
How can I get the training data?
When I set enable_output = false, the screen displays normally.
When I set enable_output = true, recompile, and run the program, the UV maps and camera extrinsics are saved correctly, but the screen is black and the saved pictures are wrong: with object_id = 0 the saved picture is almost entirely black, and with object_id = 1 it looks like noise.
Does anyone know the possible cause? @SSRSGJYD
What exactly are the camera extrinsics here? I thought camera extrinsics were generally a parameter matrix including rotation and translation, which has 6 DoF, but in the paper the camera extrinsics have shape 3.
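One plausible reading (my assumption, not confirmed by the paper or the author) is that the 3-element "extrinsics" are not a full 6-DoF [R|t] pose but the normalized per-view direction fed to the spherical-harmonics layer, which is a unit 3-vector:

```python
import numpy as np

def view_direction(camera_pos, object_center):
    """Unit vector from the object toward the camera (3 components).

    Hypothetical helper illustrating how a shape-(3,) 'extrinsics'
    input could arise; not code from this repository.
    """
    d = (np.asarray(camera_pos, dtype=np.float64)
         - np.asarray(object_center, dtype=np.float64))
    return d / np.linalg.norm(d)

# Camera 5 units along +z, object at the origin -> direction (0, 0, 1).
v = view_direction([0.0, 0.0, 5.0], [0.0, 0.0, 0.0])
```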
Thank you for your repository!
In the utils.py file you provide the following code for the spherical harmonics coefficients:
sh[0] = 1 / np.sqrt(4 * np.pi)
sh[1:4] = 2 * np.pi / 3 * np.sqrt(3 / (4 * np.pi))
sh[4] = np.pi / 8 * np.sqrt(5 / (4 * np.pi))
sh[5:8] = 3 * np.pi / 4 * np.sqrt(5 / (12 * np.pi))
sh[8] = 3 * np.pi / 8 * np.sqrt(5 / (12 * np.pi))
It seems that you use the first 9 harmonics, as is done in the paper. I found the following formulas, and they are identical to yours except for the coefficients:
Picture taken from http://www.cse.chalmers.se/~uffe/xjobb/Readings/GlobalIllumination/Spherical%20Harmonic%20Lighting%20-%20the%20gritty%20details.pdf
I tried to play with the math a bit but didn't gain any insight.
Is there some reasoning behind your coefficients? Thank you!
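For what it's worth, the values for l ≥ 1 look like the standard real-SH constants premultiplied by the cosine-lobe convolution factors A_l = (π, 2π/3, π/4) from Ramamoorthi and Hanrahan's irradiance environment maps, while sh[0] keeps the bare constant (i.e. is missing the A_0 = π factor). A quick numerical check of that reading (my interpretation, not the author's statement):

```python
import numpy as np

# Standard real-SH normalization constants for the first 9 basis
# functions (l = 0..2), as in the "gritty details" notes linked above.
c = np.empty(9)
c[0] = 0.5 * np.sqrt(1 / np.pi)       # Y_0^0
c[1:4] = np.sqrt(3 / (4 * np.pi))     # Y_1^{-1,0,1}
c[4] = 0.25 * np.sqrt(5 / np.pi)      # Y_2^0
c[5:8] = 0.5 * np.sqrt(15 / np.pi)    # Y_2^{-2,-1,1}
c[8] = 0.25 * np.sqrt(15 / np.pi)     # Y_2^2

# Cosine-lobe convolution factors A_l (Ramamoorthi & Hanrahan,
# "An Efficient Representation for Irradiance Environment Maps").
A = np.array([np.pi] + [2 * np.pi / 3] * 3 + [np.pi / 4] * 5)

# The repo's coefficients, copied from utils.py.
sh = np.empty(9)
sh[0] = 1 / np.sqrt(4 * np.pi)
sh[1:4] = 2 * np.pi / 3 * np.sqrt(3 / (4 * np.pi))
sh[4] = np.pi / 8 * np.sqrt(5 / (4 * np.pi))
sh[5:8] = 3 * np.pi / 4 * np.sqrt(5 / (12 * np.pi))
sh[8] = 3 * np.pi / 8 * np.sqrt(5 / (12 * np.pi))

# For l >= 1 the repo values equal A_l * c_l exactly; sh[0] equals the
# bare constant c[0] rather than A_0 * c[0].
matches_l_ge_1 = np.allclose(sh[1:], (A * c)[1:])
matches_l_0_bare = np.isclose(sh[0], c[0])
```

Whether the missing π on the l = 0 term is deliberate normalization or a bug, I can't say.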
In "Deferred Neural Rendering: Image Synthesis using Neural Textures", the paper this repo is based on, the authors explain how they use a generated UV map of a 3D face model, together with the RGB of the face, to generate a "deepfaked" face output with their neural rendering method. I understand this is possible with this repo for a single facial pose, but I would like to use it, as in their demonstration, with an animated UV map and texture, where the camera doesn't move but the model/UV and texture do.
I have been using another repo, Face2face, to generate the model and transfer the expressions with some custom code, and it would be absolutely incredible if I could realistically neurally render it with this repo.
Is this possible?
(Even if it isn't, I kinda want to try anyway and see what I get just for fun)
Hi, thanks for the code!
I just started to learn this paper with your code.
Could you provide some face data?
Thanks @SSRSGJYD