
Comments (42)

OneEyedEagle commented on August 22, 2024

@ipa-mah : I have updated my code with a clearer settings.h. I think it's easier to read now :)

OneEyedEagle commented on August 22, 2024

@ipa-mah : but my results show too much warping in the target images, while the paper's results don't change this much. I'm still checking whether something is wrong.

OneEyedEagle commented on August 22, 2024

@Why384040146 : you need to install PCL and OpenCV first, then compile eagle_utils.so and pm_minimal.exe; finally, edit INCLUDEPATH and LIBS in the .pro file to point to your PCL and OpenCV paths so all the libs can be found.
I tried it several months ago, but failed to compile PCL's libs.

OneEyedEagle commented on August 22, 2024

@KayChou : Since your signature is in Chinese, I'll just reply in Chinese.
I've been working on other things lately, so I've almost forgotten the details of this project...

· About blurry images
The scaling settings may be off: in Settings.h you can adjust the number of scales and the ratio of each round; the smallest scale should preferably not go below 1/4 of the original image width, otherwise blur easily appears during upscaling;
The patch size may be set too large; try changing it to 5;
PatchMatch may keep matching far-off patches; adjust the random-search range parameters in Settings.h;
lambda may be set too large; when it is, the blurry texture images over-influence the target images so that everything blurs together more and more; try lowering it a bit;

· About the model
This happens because, when generating the model, I added a restriction for testing the data that removes points with too small a weight and points whose depth deviates too much. Just delete the condition weights[img_i].at(p_img.y, p_img.x) < 0.1f in the getAlignResults::generateTexturedOBJ function.

manhha1402 commented on August 22, 2024

Hi @OneEyedEagle,
I had a look at the repo https://github.com/github12364/TextureMapping and saw your discussion with others. I'm also interested in the paper "Patch-Based Optimization for Image-Based Texture Mapping". I just finished the PatchMatch algorithm and am working on the next steps of the paper. My understanding is that the paper generates a textured mesh (.obj and .mtl format), not the colored-vertex mesh (PLY format) that your code produces at the moment. Is that correct?

OneEyedEagle commented on August 22, 2024

@ipa-mah : hi, writing the .obj and .mtl output is a limitation of my C++ code at the moment, but I think it doesn't affect the align/generate steps.
I remap the vertices of the .ply to each image, then link the pixels of the same vertex across images with a std::map. For pixels inside a triangle of the mesh, I interpolate from the triangle's three projected vertices to get their positions in the other views.
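
For reference, here is a minimal sketch of the per-triangle interpolation described above, assuming barycentric coordinates computed in image space; the helper names are made up for illustration and are not taken from the repo.

    #include <opencv2/core.hpp>

    // Barycentric weights of point p inside triangle (a, b, c) in image space.
    static cv::Vec3f barycentric(const cv::Point2f &p, const cv::Point2f &a,
                                 const cv::Point2f &b, const cv::Point2f &c) {
        float denom = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
        float wa = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / denom;
        float wb = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / denom;
        return cv::Vec3f(wa, wb, 1.0f - wa - wb);
    }

    // Map a pixel p_i lying inside a triangle in view i to its position in view j,
    // using the triangle's three vertices projected into both views.
    static cv::Point2f remapPixel(const cv::Point2f &p_i,
                                  const cv::Point2f &a_i, const cv::Point2f &b_i, const cv::Point2f &c_i,
                                  const cv::Point2f &a_j, const cv::Point2f &b_j, const cv::Point2f &c_j) {
        cv::Vec3f w = barycentric(p_i, a_i, b_i, c_i);
        return w[0] * a_j + w[1] * b_j + w[2] * c_j;
    }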

manhha1402 commented on August 22, 2024

@OneEyedEagle ,
Thanks for your reply. I saw the result from your first comment; the colored-vertex mesh is much improved. I will read your code in detail to understand the algorithm.
To generate the textured mesh, I think you need to partition the mesh, then create a texture atlas and map all colors to the texels of the atlas.

manhha1402 commented on August 22, 2024

@OneEyedEagle : thank you :)

OneEyedEagle commented on August 22, 2024

@ipa-mah : could you please recommend some methods for generating a texture atlas? I have searched for some, but none of them helped.

I find that my code doesn't work well in the remapping step. When I project the model's vertices into each image, I use the z-depth method to check visibility, so a vertex on the back side of the model in a view can make the remapping results blurry.

Another small trick I used is changing how the sum of Su and Sv is calculated. The paper uses (1/L)×sum(Su) + (α/L)×sum(Sv), but I found a different formula in the MATLAB code, (1/L + α/L)×(0.9×sum(Su) + 0.1×sum(Sv)), and it works well.
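
To make the two voting rules concrete, here is a tiny sketch of both, assuming Su and Sv are the accumulated patch colors for one pixel, L the number of contributing patches, and α the weight from the paper; the function names are illustrative, not the repo's.

    #include <opencv2/core.hpp>

    // Rule as written in the paper: (1/L)*sum(Su) + (alpha/L)*sum(Sv)
    cv::Vec3f votePaper(const cv::Vec3f &sumSu, const cv::Vec3f &sumSv, float L, float alpha) {
        return (1.0f / L) * sumSu + (alpha / L) * sumSv;
    }

    // Rule found in the MATLAB code: (1/L + alpha/L)*(0.9*sum(Su) + 0.1*sum(Sv))
    cv::Vec3f voteMatlab(const cv::Vec3f &sumSu, const cv::Vec3f &sumSv, float L, float alpha) {
        return (1.0f / L + alpha / L) * (0.9f * sumSu + 0.1f * sumSv);
    }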

Averaging the source images with weights:
[image: S_001]

Averaging the target images with weights (after iterations):
[image: M_001]

It's quite clear that at the boundary of the model, where parts are not visible in other views, the remapping gets messy.

manhha1402 commented on August 22, 2024

@OneEyedEagle
I haven't read the paper in detail, so I cannot give you any advice yet, but I will definitely read it. Yesterday I finally managed to run your code and got this result: result.log
The second energy E2 is always NaN. Do you have the same problem?
And I think "result_S.ply" and "result_T.ply" have the same quality.
For the texture atlas, I have seen some methods but haven't tested them yet. You can find open-source code for texture mapping and texture atlases here:
https://github.com/chaowang15/plane-opt-rgbd
https://github.com/nmoehrle/mvs-texturing
https://github.com/jpcy/xatlas

RatTac commented on August 22, 2024

@OneEyedEagle: Sorry, I was busy with different things. You still seem to be working on getting the method running.

You write that you use the depth buffer to discard occluded vertices. Why do you say it is not working? If the projected depth is greater than the value stored in the z-buffer, you should simply discard the vertex; this way there should be no artifacts. To make it even more robust w.r.t. thin surfaces, you can also check that the vertex normal and the normal of the surface seen at that pixel point in the same direction.
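
A minimal sketch of the check being suggested, assuming per-view depth and normal maps have been rendered beforehand as OpenCV matrices; the names are illustrative and not taken from the repo.

    #include <opencv2/core.hpp>

    // A vertex is treated as visible in a view if its projected depth matches the
    // depth buffer (within a tolerance) and its normal agrees with the normal of
    // the surface seen at that pixel (the thin-surface check mentioned above).
    bool isVertexVisible(const cv::Mat &depthMap,       // CV_32F, camera-space depth per pixel
                         const cv::Mat &normalMap,      // CV_32FC3, camera-space normal per pixel
                         const cv::Point &pixel,        // projection of the vertex into this view
                         float vertexDepth,             // camera-space depth of the vertex
                         const cv::Vec3f &vertexNormal, // vertex normal in camera space
                         float depthEps = 0.01f) {
        if (pixel.x < 0 || pixel.y < 0 ||
            pixel.x >= depthMap.cols || pixel.y >= depthMap.rows)
            return false;
        // Occluded: the vertex lies behind the surface stored in the z-buffer.
        if (vertexDepth > depthMap.at<float>(pixel.y, pixel.x) + depthEps)
            return false;
        // Thin-surface / back-facing check: the normals must point in the same direction.
        cv::Vec3f surfaceNormal = normalMap.at<cv::Vec3f>(pixel.y, pixel.x);
        return vertexNormal.dot(surfaceNormal) > 0.0f;
    }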

In your screenshot it seems that you have meanwhile achieved some improvement in sharpness. Do you use the same method as in the original paper, with the only modification being (1/L + α/L)×(0.9×sum(Su)+0.1×sum(Sv))? What do you set as the parameters for lambda and alpha?

But at some parts of the pillow it also seems to get a little messier than before. Do you have an explanation for this?

OneEyedEagle commented on August 22, 2024

@ipa-mah : I have updated getalignresults.cpp and getalignresults.h many times recently, so I am not sure whether the old version had a bug of mine. E2 isn't NaN when everything is OK, but I ran into this problem when the camera matrix was wrong (since projection from world to camera is needed, the input data has to be inverted).
You can check the latest getalignresults.cpp and getalignresults.h (and maybe settings.h) to see if the problem still exists.

@RatTac : I use the depth-buffer method to check a vertex's visibility in each view, and that part works.
The problem is the remapping step: some triangles are visible in view i but invisible in view j. Even after using the depth buffer to check whether a pixel belongs to the mesh, some offsets remain in my remapping results.
(You can check the "Averaging the Source images with weights" result: everything is fine at the center of the pillow, because that part is visible in every image. But the boundary of the pillow gets messy, with many obvious lines, which are errors and make the alignment result worse.)

OneEyedEagle commented on August 22, 2024

@RatTac : I just copied the method for generating Ti from the MATLAB code, so I set alpha = 0.1 and lambda = 2.0.
But this version is still being tested; I won't push it to GitHub until I get finer results.

RatTac commented on August 22, 2024

@OneEyedEagle: I think you can easily fix this in the reconstruction step of the paper. For each view, you should only sum up the information from the target images that is visible in view i (you can simply set w_j to zero if the information is occluded).
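
A minimal sketch of that reconstruction step under these assumptions: one remapped color and one weight w_j per view for the current pixel of view i, plus a visibility flag like the check discussed above; all names are illustrative only.

    #include <vector>
    #include <opencv2/core.hpp>

    // Weighted average of the remapped target colors for one pixel of view i.
    // As suggested above, view j contributes only if the pixel is visible in j,
    // i.e. its weight is forced to zero when the information is occluded.
    cv::Vec3f reconstructPixel(const std::vector<cv::Vec3f> &remappedColors,
                               const std::vector<float> &weights,
                               const std::vector<bool> &visibleInView) {
        cv::Vec3f sum(0.0f, 0.0f, 0.0f);
        float wSum = 0.0f;
        for (size_t j = 0; j < remappedColors.size(); ++j) {
            float w = visibleInView[j] ? weights[j] : 0.0f; // drop occluded views
            sum += w * remappedColors[j];
            wSum += w;
        }
        return wSum > 0.0f ? sum * (1.0f / wSum) : cv::Vec3f(0.0f, 0.0f, 0.0f);
    }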

JonathanLehner commented on August 22, 2024

Have you tried to contact the original paper authors?
Does your code work in general?

OneEyedEagle commented on August 22, 2024

@JonathanLehner : I believe all of the users here have tried to contact the authors many times, but got no reply.
It's still a testing version; I'm doing my best as a newcomer to texture mapping.

Why384040146 commented on August 22, 2024

Can your code run on Windows? I found that the calls to the system function do not run successfully on Windows.

Why384040146 commented on August 22, 2024

@RatTac I saw your conversation with OneEyedEagle. I am also confused about how to get an atlas texture so the texture mapping can be shown in 3D software. Have you implemented that part?

Why384040146 commented on August 22, 2024

@Why384040146 : you need to install PCL and OpenCV first, then compile eagle_utils.so and pm_minimal.exe; finally, edit INCLUDEPATH and LIBS in the .pro file to point to your PCL and OpenCV paths so all the libs can be found. I tried it several months ago, but failed to compile PCL's libs.

Also, I am confused: how can you show a .ply mesh with color? I found it doesn't work if I set the vertices with colors in the mesh data.

OneEyedEagle commented on August 22, 2024

@Why384040146 : In my new version (it hasn't been uploaded yet), I generate an .obj with an .mtl instead of a .ply, and I compute the UVs of each face's vertices on its single best-viewed image. So every face gets its texture, and since I just reference all the images in the .mtl, there is no need to generate a texture atlas.
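
A rough sketch of the kind of output described above, assuming one material per input image and per-face UVs already computed from the best view; the struct and function names are made up for illustration and are not the repo's code.

    #include <cstdio>
    #include <string>
    #include <vector>
    #include <opencv2/core.hpp>

    struct Face { int v[3]; int bestView; cv::Point2f uv[3]; }; // UVs normalized to [0,1]

    // Writes an .mtl with one material per image and an .obj whose faces each
    // reference the material of their best view, so no texture atlas is needed.
    void writeTexturedObj(const std::string &objPath, const std::string &mtlPath,
                          const std::vector<cv::Point3f> &vertices,
                          const std::vector<Face> &faces,
                          const std::vector<std::string> &imageFiles) {
        FILE *mtl = std::fopen(mtlPath.c_str(), "w");
        for (size_t i = 0; i < imageFiles.size(); ++i)
            std::fprintf(mtl, "newmtl view_%zu\nmap_Kd %s\n\n", i, imageFiles[i].c_str());
        std::fclose(mtl);

        FILE *obj = std::fopen(objPath.c_str(), "w");
        std::fprintf(obj, "mtllib %s\n", mtlPath.c_str());
        for (const auto &p : vertices)
            std::fprintf(obj, "v %f %f %f\n", p.x, p.y, p.z);
        int vt = 1; // .obj indices are 1-based
        for (const auto &f : faces) {
            for (int k = 0; k < 3; ++k)
                std::fprintf(obj, "vt %f %f\n", f.uv[k].x, 1.0f - f.uv[k].y); // flip v axis
            std::fprintf(obj, "usemtl view_%d\n", f.bestView);
            std::fprintf(obj, "f %d/%d %d/%d %d/%d\n",
                         f.v[0] + 1, vt, f.v[1] + 1, vt + 1, f.v[2] + 1, vt + 2);
            vt += 3;
        }
        std::fclose(obj);
    }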

Why384040146 commented on August 22, 2024

"We continue this process of alignment and reconstruction iteratively until convergence." this articels write....
But , I saw your code has iter_count 50 times. what is the Exact convergence conditions?

OneEyedEagle commented on August 22, 2024

@Why384040146 : in theory the iteration runs until convergence, but in chapter 4, IMPLEMENTATION DETAILS, at the end of the Multiscale Optimization section, the paper says "We perform 50 iterations of alignment and reconstruction at the coarsest scale and decrease it by 5 at each finer scale".
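
In code, that schedule looks roughly like this (a sketch only; the number of scales and the step functions are placeholders, not the repo's settings):

    // 50 iterations of alignment and reconstruction at the coarsest scale,
    // 5 fewer at each finer scale, as described in the paper.
    void runMultiscale(int numScales) {
        int iterCount = 50;
        for (int scale = 0; scale < numScales && iterCount > 0; ++scale) { // coarsest first
            for (int iter = 0; iter < iterCount; ++iter) {
                // alignmentStep(scale);      // PatchMatch between source and target images
                // reconstructionStep(scale); // rebuild the target and texture images
            }
            iterCount -= 5; // fewer iterations at the next, finer scale
        }
    }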

OneEyedEagle commented on August 22, 2024

@Why384040146 : I received an e-mail notification that you want me to upload s2t.jpg and t2s.jpg, but I don't see the comment here. Did you delete it?
Those files are generated during the iterations; you only need to change the path of patchmatchBinFile in settings.h.

OneEyedEagle commented on August 22, 2024

I have updated my code with a reworked getalignresults.cpp and README.

  1. I switched to a ray-intersection method to compute the depth map for remapping, instead of projecting from the object, and it works much better on the boundary than the old approach (see the sketch after this list).
  2. I added a file-output method that generates .obj and .mtl files using the result images as textures, which makes the results much clearer to check.
  3. Also, I deleted the useless files under the patchmatch folder. There is no longer any need to change INCLUDEPATH or patchmatchBinFile for it.
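
For the first point, the ray-casting depth map can be built on a standard ray/triangle intersection test such as Möller–Trumbore; the sketch below is that generic test, not the code actually used in getalignresults.cpp. Casting one ray per pixel and keeping the smallest hit distance over all triangles gives the depth map.

    #include <cmath>
    #include <opencv2/core.hpp>

    // Möller–Trumbore ray/triangle intersection: returns true and the distance t
    // along the ray if the ray (origin, dir) hits the triangle (v0, v1, v2).
    bool intersectRayTriangle(const cv::Vec3f &origin, const cv::Vec3f &dir,
                              const cv::Vec3f &v0, const cv::Vec3f &v1,
                              const cv::Vec3f &v2, float &t) {
        const float eps = 1e-7f;
        cv::Vec3f e1 = v1 - v0, e2 = v2 - v0;
        cv::Vec3f p = dir.cross(e2);
        float det = e1.dot(p);
        if (std::fabs(det) < eps) return false;     // ray is parallel to the triangle
        float invDet = 1.0f / det;
        cv::Vec3f s = origin - v0;
        float u = s.dot(p) * invDet;
        if (u < 0.0f || u > 1.0f) return false;
        cv::Vec3f q = s.cross(e1);
        float v = dir.dot(q) * invDet;
        if (v < 0.0f || u + v > 1.0f) return false;
        t = e2.dot(q) * invDet;                     // hit distance = depth along the ray
        return t > eps;                             // only accept hits in front of the origin
    }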

Why384040146 commented on August 22, 2024

I found that the weight image is strange, with discontinuities in some areas, and my result is a mess. Do you have QQ contact information?
[image: QQ图片20191115103657]

OneEyedEagle commented on August 22, 2024

@Why384040146 : Sure, you can add my QQ (UPDATE: for security, I have deleted the QQ ID; anyone who has problems can create a new issue), and be sure to add a note that you are from GitHub.

smlee00 commented on August 22, 2024

Hi, I am new to this and was trying to run the code in Qt Creator, but I faced this error. I've tried to locate libopencv_core.so.3.4 and found it in /usr/local/lib. Then I included the path in the .pro file, but the same error still occurs. How can I solve this?
[image: error screenshot]

OneEyedEagle commented on August 22, 2024

@smlee00 : hi, that means Qt Creator couldn't find the library. Did you add the full path of the lib to LIBS as I do in the .pro file? Like: LIBS += /usr/local/lib/libopencv_core.so.3.4

smlee00 commented on August 22, 2024

@OneEyedEagle thanks for the advice, it works after I added that. I am going through your code to understand the algorithm; it is currently the best reference for my project, as I need to texture-map a scanned 3D object using images. Thanks :)

yorkt commented on August 22, 2024

@OneEyedEagle
Thanks for your work; I am trying to run your code.
The image below is the result that I get. It contains a lot of holes in the mesh. Am I doing something wrong? And what is the current status? Is the implementation finished, or is something still missing or to be solved?
Thank you.
[image: ppp]

OneEyedEagle commented on August 22, 2024

@yorkt: hi, the holes exist because I apply some conditions when generating the .obj. You can find the reason in the function that does the generation (at the end of getalignresults.cpp): I discard points with a small weight and (possibly) wrong depth.
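
The kind of filter meant here looks roughly like the sketch below; the 0.1f weight threshold is the one quoted earlier in this thread, while the depth tolerance is only an assumed illustration of the "wrong depth" check, not the exact condition in the code.

    #include <cmath>

    // Keep a point only if its weight is large enough and its projected depth
    // agrees with the depth map; otherwise it is discarded (which creates holes).
    bool keepPoint(float weight, float projectedDepth, float depthFromMap,
                   float weightMin = 0.1f, float depthTolerance = 0.05f) {
        if (weight < weightMin)
            return false;                                        // weight too small
        if (std::fabs(projectedDepth - depthFromMap) > depthTolerance)
            return false;                                        // depth deviates too much
        return true;
    }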

KayChou commented on August 22, 2024

Hi, first of all, thanks for your excellent work. I compiled this project on Windows and finally ran it, but I have some problems: the resulting images look like pic1 below, and the final mesh looks like pic2. Do you know how this problem happens? I wish I could talk with you on WeChat or another instant-messaging app, if you don't mind.
[image: M_000_1]
[image: Capture]

KayChou commented on August 22, 2024

@OneEyedEagle Thanks for your answer, it was very helpful. I would also like to apply this algorithm to another scenario: I have 4 depth cameras shooting from the front, back, left and right. I have generated an untextured mesh from the 4 depth maps, and now I want to use this project to map the 4 color images onto the model to produce the texture. Do you think this is feasible?

OneEyedEagle commented on August 22, 2024

@KayChou I'm not sure about that. 4 images may be too little data; if one of them has an obvious camera-pose error, the textures will be projected incorrectly when they are remapped onto each other, and the result will blur.

You can try it directly in this project; by imitating the init_zhou function written in Settings.h, it is easy to extend it to your own data.

smlee00 commented on August 22, 2024

@OneEyedEagle Hi, I wanted to try another set of data with your project, but I don't have the camera poses for it. So I've tried different camera-calibration methods on your data, but the values I obtained differ from those in camTraj.txt. Can I ask how the camera poses were obtained?

OneEyedEagle commented on August 22, 2024

@smlee00 Hi, I got camTraj.txt by using KinFu in PCL, which generates both the mesh and the camera poses.

KayChou commented on August 22, 2024

@OneEyedEagle Hello, I deleted weights[img_i].at(p_img.y, p_img.x) < 0.1f in the getAlignResults::generateTexturedOBJ function, but the mesh still has many holes. Only if I also remove !pointProjectionValid(X_img.at(2), img_i, p_img.x, p_img.y) do I get a complete model, but then the texture comes out wrong. How can I solve this?

OneEyedEagle commented on August 22, 2024

@KayChou That's because the pointProjectionValid method contains similar checks: line 323 of the original file applies the same weight filter, and line 320 also filters on depth; both can be commented out.

KayChou commented on August 22, 2024

Hello, I tried tuning many parameters with your data, but the results are never very good. Could you share your parameter file?

OneEyedEagle commented on August 22, 2024

@KayChou Mine runs with just the settings in the project; there are no special adjustments...

KayChou commented on August 22, 2024

The texture result I finally get is shown in the image. I have tuned almost every parameter and still cannot solve the problem, and nobody else seems to have run into it... It's quite strange.
[image: 00000]

OneEyedEagle commented on August 22, 2024

@KayChou This looks like the camera extrinsics of some of the later frames were read incorrectly. Check the txt file and how the extrinsics are read in the code.
