doyubkim / fluid-engine-dev
Fluid simulation engine for computer graphics applications
Home Page: https://fluidenginedevelopment.org/
License: MIT License
For instance, creating a face-centered grid or a volume particle emitter takes a significant number of constructor parameters. Adopting a builder pattern would be nice. The grid family does have simple builders, but they are not really builders in that sense.
I am building using JET_TASKING_CPP11THREADS.
If you create an emitter from a TriangleMesh3 and add it to a LevelSetLiquidSolver3 as an ImplicitSurface3 using SurfaceToImplicit3, the solver update will crash inside a parallelFor because multiple threads trigger the build of the BVH at the same time.
To work around this you can call updateQueryEngine() for each emitter.
The above is obviously just a workaround, since the issue does not appear if you use an ImplicitTriangleMesh3 instead. I thought I would point it out in case another user hits the same bug when getting started.
Hi,
It's me (#109) again 😸. I found a problem with AnisotropicPointsToImplicit3.
Here is a screenshot:
It does not look correct for a dam break at frame 0.
I just ran an SPH dam-break demo similar to sph_sim example 3, and here is the .pos file at frame 0 that I got.
frame_000000.pos
And then, I tried to reconstruct the surface using the anisotropic kernel.
./particles2obj -i frame_000000.pos -o frame_000000.obj -r 300,200,150 -g 0.01,0.01,0.01 -k 0.036
(Kernel radius: 0.036; resolution: 300, 200, 150; grid spacing: 0.01, 0.01, 0.01. SPH example 3's domain is 3, 2, 1.5, so I think these are reasonable.)
My goal is to get a surface that is as tight as possible to render. My SPH particle radius is 0.02 (and the kernel radius is 1.8 × radius) by default, so I chose 0.036 as the reconstruction kernel radius. I'm not sure if that is appropriate. Finally, I got the .obj shown in the screenshot.
I looked into the source code and the original paper, and found that the code is a little different from what the paper describes. The main difference is in equation 15: instead of using the magic number k_s = 1400, the code uses std::pow(relV, 1.0 / 3.0) at line 152 of anisotropic_points_to_implicit3.cpp.
I do not understand why the code uses the 1/3 power, but I tried using the magic number 1400 in the same way as the paper, and the problem is gone! Here is the screenshot using k_s = 1400.
(Sorry for the light spot in the middle. It is the UI of Blender)
I haven't tested the 2D version of the anisotropic kernel; it may suffer from the same problem.
That's all I found... I hope I described the problem clearly. I will keep the picture and the .pos file available until this issue is closed.
The manual and perf tests need some (lightweight) test reporting tool that can:
This includes some refactoring work for the existing test result rendering scripts.
The Jet framework depends on several other libraries, and some of them are outdated or not fully utilized by Jet.
Here’s the wishlist:
The current collider can only handle linear motion. Support rotation and add actual simulation examples.
Implement the MGPCG method for faster Poisson solver.
The current test coverage is low. Every public member needs unit testing.
I've been asked about the possibility of using HDF5 or similar data formats. While Jet uses FlatBuffers for its serialization solution, supporting a commonly sharable/viewable data format also seems to make sense. However, I do not want to introduce too many library dependencies to Jet, since it is mainly for the book. If I can find a better solution for this (something like a plug-in model), I will consider integrating such formats.
CI should be improved overall including:
Only 2x2, 3x3, 4x4, and FDM-based sparse matrices are currently implemented. For completeness,
If possible, add a visual studio 2015 and 2017 project template with pre-configured build configurations.
Thanks for your code and book. I am a student learning fluid simulation, and I am a little confused about the implementation in custom_implicit_surface2(3).cpp
For instance, near line 125, in the function closestIntersectionLocal in fluid-engine-dev/src/jet/custom_implicit_surface2.cpp:
double frac = fractionInsideSdf(prevPhi, newPhi);
double tSub = t + _resolution * frac;
result.isIntersecting = true;
result.distance = tSub;
As far as I know, the function fractionInsideSdf always returns a positive number in [0, 1]. Also, the t here corresponds to the newPhi SDF sample. So why does t add the positive number _resolution * frac instead of subtracting it?
Thanks for your attention to this matter.
As mentioned in the thread #130, there are some perf issues on the reconstruction solvers, especially the anisotropic one. Using faster SVD solver as well as adaptive grid may address the issue.
Some existing data structures, like arrays, support serialization. However, not all classes support I/O, and we need more policies as well.
A couple of thoughts/requirements:
The current Python API follows the corresponding C++ API naming convention, which partially breaks PEP 8 (e.g. function names). Other frameworks, such as TensorFlow, seem to follow language-specific guidelines. That would be ideal for the Jet framework as well. However, changing the naming convention will obviously break the API, so it should target v2.
This is not an urgent issue, but having additional bindings, such as .NET, Obj-C, Go, Rust, etc., could be fun. Jet used to have some .NET and Obj-C bindings, but they were deprecated due to the complexity of the binding code and the negative impact on the core Jet implementation. If someone can contribute to this effort, that would be great.
Do not over engineer. Remove trivial getter/setter functions and replace them with public member variables.
Consider migrating over to clang-format from cpplint. Also provide project-specific settings for VS Code or Sublime Text.
A bunch of library and header includes are missing, plus syntax errors... too many for me to fix myself.
Expose the API through C++/CLI so that Jet supports .NET languages such as C#.
Binding with WinRT is also desirable, which would enable Windows Store app support.
If an ImplicitSurfaceSet3 is empty and has no surfaces, the BVH will have no nodes. Calling ImplicitSurfaceSet3::closestPointLocal will then crash, since queryResult.item is null.
const auto queryResult = _bvh.nearest(otherPoint, distanceFunc);
return (*queryResult.item)->closestPoint(otherPoint);
The Mitsuba renderings in the examples video and the book look great.
Is it possible to add the Mitsuba project files and possibly scripts that were used to create these images?
Unfortunately I only see the volume export for smoke simulations in the source code.
The current examples are computationally heavy. We need more of a "Hello, world" type of example.
At least make a shortcut to some selected manual tests, bundled with a simple visualization tool. Also, document these examples on the wiki page.
Python bindings will improve productivity. The binding solution is not determined yet; the candidates are Boost.Python, SWIG, and pybind11. Interoperating with NumPy and Matplotlib would be ideal.
Libraries like cnpy and googletest are missing. In order for this to compile, they need to be added manually.
Complete adding Doxygen comments for all classes and their public members.
I noticed that there is no OpenGL/DirectX-based previsualizer or renderer built into this engine. Are there plans to add one?
How about adding an implementation of the method from "Restoring Missing Vortices in Advection-Projection Fluid Solvers"? http://www.cs.ubc.ca/~zhxx/IVOCK.html
I found that unit tests fail intermittently on macOS High Sierra (utilforever/CubbyFlow#233). The error log looks like the following. Can you check it?
/Users/utilforever/code/TestWorkSpace/fluid-engine-dev/src/tests/unit_tests/grid_single_phase_pressure_solver2_tests.cpp:330: Failure
The difference between 0.0 and vel.u(i, j) is 0.010890662978106036, which exceeds 0.01, where
0.0 evaluates to 0,
vel.u(i, j) evaluates to -0.010890662978106036, and 0.01 evaluates to 0.01.
[ FAILED ] GridSinglePhasePressureSolver2.SolveSinglePhaseWithMg (48 ms)
[----------] 1 test from GridSinglePhasePressureSolver2 (48 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (48 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] GridSinglePhasePressureSolver2.SolveSinglePhaseWithMg
1 FAILED TEST
/Users/utilforever/code/TestWorkSpace/fluid-engine-dev/src/tests/unit_tests/grid_single_phase_pressure_solver2_tests.cpp:336: Failure
The difference between 0.0 and vel.v(i, j) is 0.051521814726812387, which exceeds 0.05, where
0.0 evaluates to 0,
vel.v(i, j) evaluates to 0.051521814726812387, and 0.05 evaluates to 0.050000000000000003.
[ FAILED ] GridSinglePhasePressureSolver2.SolveSinglePhaseWithMg (167 ms)
[----------] 1 test from GridSinglePhasePressureSolver2 (167 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (167 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] GridSinglePhasePressureSolver2.SolveSinglePhaseWithMg
1 FAILED TEST
Support a GPGPU backend for faster computation. Ideally Nvidia CUDA, but OpenCL is another option.
The current API is too complicated when users try to add sources or emitters to the system. Provide an easier API for setting sources/emitters, similar to the collider.
Currently we have a TBB backend for parallelFor calls, but parallelSort and parallelRangeFor still fall back to the std::thread implementations. TBB support for those functions would be beneficial, not just for performance but also for consistency.
Hello,
I ran the sph_sim example and it generated an output folder with a lot of .xyz files, which I believe are the positions of the particles for each frame. Is there any script to generate an .mp4 animation based on these .xyz files?
Thank you!
At least PyPI should be supported. Debian APT/Homebrew/NuGet/vcpkg can also be considered.
Providing an Objective-C binding will enable better integration with iOS and OS X apps. Swift support is also desirable.
There is some room for memory perf improvements in the FDM-type linear system solvers, especially the CG and ICCG solvers.
Credit: Alexander M, who found the issue.
Improve performance using SIMD, either by directly implementing SIMD operations or by utilizing Intel ISPC.
Implement multiphase fluids including:
I was going over the rotation manual tests, and while the dissipation seems to be less than PIC or FLIP as expected, I thought it would be much less than it is, since APIC is supposed to preserve angular momentum. Is this a bug, or am I missing something conceptually?
Jet comes with several example simulations, and there are some demo videos generated from the examples. It would be great if we could have a better and broader set of examples. Below is my short wishlist (some items once the viz and gpu branches are merged). It would also be great to create something more creative than the existing demos, which are too technical.
Some classes do not support the copy constructor or the = operator. Identify those and clean them up.
Implement higher-order fast marching method (FMM) and fast sweeping method (FSM). Consider parallel implementation.
Some features like constexpr, attributes, variable templates, and better type deduction would make the code cleaner. We should be very cautious, though: "modern" != "clean/better".
This also means upgrading the minimum build tool requirements. The out-of-the-box toolchain for any given OS should support C++14 (or at least the features we need).
Implement position-based fluids.
http://mmacklin.com/pbf_sig_preprint.pdf
Good morning, I have a question about sampling the gradient, or more generically a field.
Let's take the density gradient as an example. The gradient can be sampled at every point, and the result will be the sum of the weighted densities of all nearby particles. That is all fine if I am sampling the gradient at a generic point, but when I am sampling the gradient in order to update a value of particle i, should particle i itself be excluded from the neighbor list? That is, to compute the value at that particle, is the particle itself not included in the computation? I guess otherwise the field would be offset by the kernel's peak value at every particle position?
Basically enforcing
Here is an example of the density gradient, computed at every point, not for every particle:
To me it makes sense that if I sample a point that sits exactly on a particle, I would expect a high density there, being near the peak of the kernel function. But my source of confusion is when I am evaluating the particles themselves.
Best regards
M.
PS: I hope this is clear enough; if not, let me know and I can elaborate.