taiya / dgp

Digital Geometry Processing - Fall 2016 - University of Victoria

CMake 9.52% C++ 70.65% GLSL 0.28% HTML 0.02% CSS 0.13% Makefile 0.13% C 18.97% Python 0.15% Ruby 0.05% MATLAB 0.10%

dgp's People

Contributors: dsieger, ikoruk, leonsenft, marovira, nlguillemot, taiya

dgp's Issues

As rigid as possible

I hope my understanding is correct:

(1) Pre-factor the Laplacian weight matrix, with w_ij = 0.5 * (cot_alpha + cot_beta) (no area term).

(2) When the selected handle is moved, the new positions of the handle vertices are written into v_k. Then v_u and v_k are used as the "initial guess" (only the selected handle's positions change).

(3) Compare the original v_u/v_k with the guess from (2) and compute a rotation R_i for each vertex i. R_i is obtained from the SVD of the covariance matrix P_i * D_i * P'_i^T, where P_i holds all of cell i's edge vectors and D_i is a diagonal matrix of the weights w_ij.
Question: here I use the original v_u/v_k to compute P_i and the initial guess to compute P'_i; is this correct?

(4) Then solve the linear system L p' = b, where L is simply the Laplace-Beltrami operator applied to p' (but without the area coefficient), and the ith row of b is sum_j (w_ij/2) * (R_i + R_j) * e_ij over cell i.
Question: can we use Cholesky to solve this linear system as well? Right now I'm using v_u = solver.solve(-L_uk*v_k + b); is this correct?

(5) After solving L p' = b and before the next iteration, update the guess with the new v_u (update P'_i but not P_i; is that correct?)

How to detect an "invalid collapse"?

This is from the assignment description: "Invalid collapses are defined as the ones that produce a non-manifold configuration (or a face fold-over)." For the face fold-over we can use the threshold provided by the base code (min_cost). What about the "non-manifold configuration"? Are there known rules, or do we need to deduce them ourselves?

Code base issue in Laplacian.h

@nlguillemot I found some issues in Laplacian.h, starting at line 87; please see the comments:

            cotanAlpha = 1.0f / std::tan(alpha);
            cotanBeta = 1.0f / std::tan(beta);
            // Should get rid of minus values

            omegaList.push_back(Triplet(v_i.idx(), v_j.idx(), -1)); // Should push in (cotanAlpha+cotanBeta) ?
            cotanSum += cotanAlpha + cotanBeta;

            area += (1 / 6.0f) * (d_ij.cross(d_ia)).norm();
            degree++;
        }

        omegaList.push_back(Triplet(v_i.idx(), v_i.idx(),
                                    (Scalar)degree)); // Should push in -cotanSum?

        areaList.push_back(Triplet(v_i.idx(), v_i.idx(),
                                   1.0f / (2.0f * area))); 
    }

    L_omega.setFromTriplets(omegaList.begin(), omegaList.end());
    Area.setFromTriplets(areaList.begin(), areaList.end());

    return Area * L_omega;

[2% bonus] Parameterization inspector

Note: if you plan to work on this bonus question, please reply to this message; only the first correct submission will be accepted.

I would like a small application that shows the distortion a parameterization causes around a point (discussed when we talked about the first fundamental form). Starting from the OpenGP multi-window example, build an application that realizes this kind of visualization:

screenshot 2016-10-12 11 36 57

On the left you'll show the texture map [0,1]x[0,1] of the model; on the right you'll show the 3D model. When the mouse moves in the texture-map window, a red circle is displayed at the mouse location, and the mapping of this circle onto the 3D surface is displayed.

I remain available to answer any question you might have.

Tasks:

  • Fix OpenGP so it can import texture atlases
  • Display a texture-atlased mesh in the 3D viewer
  • Modify the multiwin viewer to manage events
  • Modify the fragment shaders so that the circle overlay is drawn

[1% Bonus] (1D) RBF Fitting in Matlab

A 1% bonus to the first person who replies to this message with a correct Matlab implementation of what was described in class (and in the image below); you can use the same sample data as for the other LS exercise.

You should generate both of the highlighted images and paste them in your reply.
You can embed images and code directly in the message; google to find out how.

If you post a solution that is not correct and a fellow student fixes it, it is considered fair game.
Therefore, be absolutely sure your solution is correct.

screenshot 2016-09-30 12 01 47

How to get L_uu and L_uk

The slides look a bit brief, so I'm not sure I'm following the right logic. In the function factor_matrices, I'm doing:

1. Get the Laplacian matrix (either uniform or Beltrami).
2. Multiply the Laplacian matrix by itself to get the squared Laplacian.
3. Permute the (squared) Laplacian matrix by multiplying it with permute (permuting the rows).
4. Take block(0,0,u,u) as L_uu and block(0,u,u,k) as L_uk.
5. Permute the vertex matrix, then take the first u vertices as v_u and the rest as v_k.
6. Factor L_uu with the Cholesky solver.

Then, while the mouse is being dragged, I'm doing:
1. Compute the current barycenter of the selected handle and the displacement between it and the cursor position.
2. Translate the selected handle's vertices by adding the displacement to v_k (only to the selected handle's vertices).
3. Use the Cholesky solver to solve L_uu * v_u = -L_uk * v_k, and save the result as the new v_u.
4. Write v_u and v_k back as the new vertices via vertices_matrix(mesh) << v_u, v_k.

Is the above logic correct? Right now I'm only getting a pile of exploding points, and I wonder what is going wrong.

Time-Coded Light Patterns

Is there an example explaining how this works? I'm reading the slides, but they only have this brief description:

Assign each stripe a unique light code
– Project several b/w patterns over time
– Color pattern identifies row/column

General questions about usage of OpenGP

My question is how to access the values of objects such as "SurfaceMesh::Vertex_property vpoints;". Although I can get the number of vertices using cloud.n_vertices(), it doesn't seem to be a standard array, i.e., I cannot access a vertex with vpoints[int i]. Similarly, I need to assign normal values to the property vnormal, so I also need a way to modify a value by key.

Another question: can I update the cloud's property using the new vnormal? If yes, what would the usage be?

[1% Bonus] Implement Marching Squares

These are three reference images on the dragon dataset:
I recommend testing on the circle dataset first!
Feel free to reply to this issue with questions

Keeping every second pixel [1:2:200]
dragon_2

Keeping every fourth pixel [1:4:200]
dragon_4

Keeping every sixth pixel [1:6:200]
dragon_6

Some errors in assignment 3 code base

dgp/hw3_deformation/main.cpp:40:10: error: virtual function 'mouse_press_callback' has a different return type ('void') than the function it overrides (which has return type 'bool')

    void mouse_press_callback(int button, int action, int /*mods*/) override

And

dgp/hw3_deformation/main.cpp:11:25: error: call to implicitly-deleted copy constructor of 'Deform'

    Deform deformator = Deform(mesh, this->scene);

Remeshing: L_max and L_min

@ataiya Could you explain again how to get the relationship between L_max/L_min and L (slide 42)? I guess L is the average edge length. What does the equation |L_max - L| = |0.5*L_max - L| mean? (Actually, I think this is the only thing I'm not yet clear about.)

HW 1 - Numerical problems

To all the folks still working on it:
be aware that numerical problems can arise when computing the dot product of two normals.

Vec3 a = some_normal.normalized(); //forcing a to be a unit vector
Vec3 b = another_normal.normalized(); //forcing b to be a unit vector
Scalar dot = a.dot(b);

dot can actually be greater than 1 due to numerical error; something like 1.00000012.
This could/will affect your normal orientation in Part 3 and definitely in Part 4.

The "Eigen::SimplicialCholesky"??

I've been tortured by the Eigen library's poor documentation. How do I get the decomposed matrix L after running solver.compute(L_uu)? I tried auto new_p = solver.compute(L_uu), but it doesn't work at all. I can't find a single example.

Multiscale deformation algorithm in real-time

Would it be a reasonable approach to keep the base (smoothed) mesh in memory and apply the deformation & high-frequency details every step? Essentially, separating the low and high frequencies only during the initialization, not for every frame.

I've been reading the textbook and the slides, and they present the multiscale deformation process as a single deformation, whereas the assignment is to make an interactive app, i.e., a series of deformations. I looked for similar papers and implementations but didn't have any luck apart from [Zorin 1997], which seems similar to what I'm suggesting, but I'm not confident yet.

Zero-coefficients in Laplace-Beltrami matrix

I can't get the bi-Laplacian to work in part 1 with the Laplace-Beltrami matrix. (I queried whether the solver successfully factorized the matrix using solver.info(); I also printed out some coefficients, which were all 0.) Is this because the test meshes are both planar?

Should I test part 1 on a different mesh, or is it okay to just use the graph Laplacian for part 1?

And I haven't started coding part 2, but it seems I might run into the same problem with the bi-Laplacian when smoothing the mesh?

(I don't think it has to do with L being non-symmetric (L = DM) and my D^-1 matrix looks fine.)

[2% bonus] Constrained DT and Refinement

In the class matlab folder you can find ex_delaunay.m, which realizes the algorithm described on this slide:

screenshot 2016-11-02 13 07 29

The two tasks for this bonus question are (1% each):

  1. modify the script above to compute the constrained Delaunay triangulation of an input closed boundary (e.g. use dragon.mat)

screenshot 2016-11-02 13 10 19

  2. modify the script to compute the Delaunay refinement (with a given triangle quality criterion):

screenshot 2016-11-02 13 11 52

The results of the two steps should be analogous to the two bottom images in this figure:
screenshot 2016-11-02 13 13 24

Note: if you intend to attempt this bonus, please declare your intention by replying to this issue. Only one correct implementation will be accepted.

RBF implicit function

I implemented the RBF part, and it works perfectly for the sphere obj but not as expected on the face obj (while Hoppe's method works well on the face). So I wonder if I misunderstood something. Are the centers (1) the cloud points and (2) the cloud points plus epsilon times their normals? If that is correct, what could be the reason? Should I change epsilon?

The coordinate of click is not correctly returned

The pos returned from the function "unproject_mouse" is not consistent with the mesh coordinates.

I tried printing xPos, yPos, _width, _height after clicking the bottom-right corner, and I got:

xPos: 326.086, yPos: 325.675, width: 800, height: 800

I am using a Mac. Does anybody have an idea why this happens?

[2% Bonus] Polish Hw#3 for next year

Hw3 is new this semester, and the trace code is not super-polished yet. Create super high-quality code and UI for use in future DGP courses, and redeem these bonus points! Only the first submission will be accepted.

This is how you separate parts to be implemented from trace code:

#ifdef STRIP_CODE
        /// TASK: description of task goes here
#else
        /// C++ code goes here
#endif

Lab 3: is my logic correct?

To calculate Gaussian curvature, I have the following code (very simple and straightforward, I think):

(code removed)

But I got a completely different result:
2016-10-12 8 36 37

What could be the problem?

Has anyone figured out the 1st part of hw3 using the Laplace-Beltrami operator?

Things are getting really odd on my side. The algorithm should be very simple:

Get the Laplacian matrix L of the mesh, then square it (L * L), and permute the columns and then the rows so that the unknown vertex coefficients land in the upper-left "quarter" (L_uu or L_11). Then use solver.compute(L_uu). When doing the deformation, get the handles' new positions v_k, then get v_u from solver.solve(-L_uk*v_k).

I'm getting the right output using the graph Laplacian (uniform), but the output using Laplace-Beltrami is not right:

2016-11-18 11 47 56

Can anyone who got the right result simply show their Laplacian matrix values? Thanks!

Is the given data obj correct?

I am looking at the very simple quad_mesh object and found something weird. In the picture, when calculating the cotan values of vertex i, the vertex of angle beta obtained via mesh.to_vertex(mesh.next_halfedge(e_ij)) is in the upper-left corner, which has no edge connected to vertex i at all. That makes it impossible to get the right Laplace-Beltrami matrix. The woody object is too big to check, so is it possible that data file has the same issue?

2016-11-17 10 00 01

Problem in constructing SparseMatrix

I simply have:

Eigen::SparseMatrix<double> G(cloud.n_vertices(), cloud.n_vertices());

When I ran make, I got:

error: no template named 'SparseMatrix' in namespace 'Eigen'; did you mean 'SparseMatrixBase'?

Is the Eigen library in the repo an old version?

[2% bonus] Kinematic Chain Optimization

Up to 3 submissions will be accepted. Post a snapshot showing several iterations of the algorithm and your code here: https://classroom.github.com/assignment-invitations/6149ec28d392c794b2c98a6c87104963

Extending the material presented in the "IK-ICP" lecture, develop an algorithm that aligns the blue segment structure to the yellow point set (yes, you'll have to create this data on your own). Implement point-to-point ICP as well as the point-to-plane variant, and then evaluate their respective convergence speeds.

image

Some helpful information can be found in this paper:
https://www.math.ucsd.edu/~sbuss/ResearchWeb/ikmethods/iksurvey.pdf
(remember this is a bonus, so you are expected to work independently)

HW2 - Link problem

I don't know if I'm doing this right or not, but the links to the Geomorph video and the reference [4] PDF redirect to the dgp wiki.

Question: Laplacian weight matrix

In hw3 part 2, I tried using the Laplace-Beltrami weight matrix WITHOUT the 1/(2*A_i) coefficients, and it gave me the correct output (the original function in the code base uses the area coefficients, with which I always got exploding points).

I also noticed that in ARAP the weights are without 1/(2*A_i) as well. Should the area term be removed?
