enigmatisms / adapt

A physically based mesh renderer, Ada Path Tracer, built on Taichi Lang

License: GNU General Public License v3.0

Python 91.52% Shell 0.56% C++ 7.87% C 0.04%
global-illumination path-tracing raytracing taichi volumetric-rendering bdpt photo-realism renderer ray-tracing transient-rendering bvh-tree gpu graphics indirect-illumination physically-based-rendering rendering rendering-3d-graphics vertex-normal ssao

adapt's Introduction


Ada Path Tracer is a simple Monte Carlo path tracing renderer based on Taichi Lang, which you can easily play with. The name AdaPT was given by my GF, and I think it is brilliant. Development currently stops at version 1.5.x, since I plan to focus on other things until a better version of the backend (Taichi) is available.

This renderer is implemented based on MY OWN understanding of path tracing and other CG knowledge, so I DO NOT guarantee usability. Nevertheless, the output results look decent:

Steady state rendering

For more example scenes, please refer to Enigmatisms/AdaptiveGallery

Sports car scene (~290k primitives, 16 bounces, with many different BxDFs):

Bathroom scene (~400k primitives, 8 bounces, source can be found here):

Material orb scene (~500k primitives, 24 bounces, CUDA backend 16 fps):

Kitchen scene (100k+ primitives):

Stuff scene:

Bunny scenes are not uploaded to the repo (90k+ primitives).

"Spotlight Foggy Bunnies" "Three Bunnies"
"The cornell spheres" "The cornell boxes" "Fresnel Blend"
Console output

Powered by the rich console library. Note that the loading-speed bottleneck (shown in the figure) no longer exists after PR #19!

(Console screenshot, 2023-05-31)

Transient state rendering

Note that the GIFs shown here were made from compressed JPEG files and then optimized (compressed GIF). Only half of the rendered frames were used, because the full-frame GIFs would otherwise be too large.

Transient balls (camera unwarped¹) | Transient cornell box (camera warped¹)

Here are the features I currently implemented and support:

  • A direct-component renderer: an interactive visualizer for direct illumination
  • A unidirectional / bidirectional Monte Carlo path tracer with MIS: supports as many bounces as you wish, and since the rendering is based on Taichi Lang it can be very fast (though not on the first run: the first run of a scene may take a long time due to Taichi function inlining, especially for BDPT). The figures displayed above can be rendered within 15-20 s (with the CUDA backend, GPU supported). The rendering result is displayed incrementally, or a maximum iteration number can be preset.
  • A volumetric path tracer that supports uni/bidirectional path tracing in both bounded and unbounded media
  • A transient renderer with which you can visualize the propagation of the global radiance.
  • Texture packing and texture mapping; see scenes/bunny.xml for a configuration example. Bump maps, normal maps, and roughness maps (the latter untested) are supported for now.
  • Shading normals are supported for a smooth appearance.
  • Rendering checkpointing and rich console panel support.
  • Ray-tracing acceleration structure: for now only BVH is supported; a KD-tree will be implemented in the future.
  • Global / indirect illumination & Ability to handle simple caustics
  • BRDFs: Lambertian, Modified Phong (Lafortune and Willems 1994), Fresnel Blend (Ashikhmin and Shirley 2002), Blinn-Phong, Mirror-specular.
  • BSDFs (with medium): deterministic refractive (glass-like)
  • Mitsuba-like XML scene file definition; supports meshes (from Wavefront .obj files) and analytical spheres.
  • A scene visualizer that previews the scene you are going to render, helping you set parameters like relative positions and the camera pose.
  • Extremely easy to use and multi-platform / multi-backend (thanks to Taichi), with detailed comments and a passionate maintainer (yes, me). You can play with it at almost no cost (no compiling, environment setup, and so on...).
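The MIS mentioned above is typically implemented with the balance or power heuristic; a minimal sketch in plain Python (an illustration of the standard technique, not the renderer's actual code):

```python
def power_heuristic(pdf_a: float, pdf_b: float, beta: float = 2.0) -> float:
    """Veach's power heuristic: MIS weight for a sample drawn from
    strategy A when strategy B could have produced the same sample."""
    a, b = pdf_a ** beta, pdf_b ** beta
    total = a + b
    return a / total if total > 0.0 else 0.0
```

With beta = 1 this reduces to the balance heuristic; beta = 2 is the common default.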

BTW, I am just a beginner in CG (ray-tracing stuff) and Taichi Lang, so there WILL BE BUGS or some unreasonable design in my code. Also, I haven't reviewed the code or done extensive profiling & optimization, so again: correctness is not guaranteed (though the steady-state rendering results are compared against Mitsuba (0.6) and pbrt-v3)! Feel free to send an issue / pull request if you are interested.


Other branches:

  • ad: support for inverse rendering (Taichi's automatic-differentiation feature), but since Taichi's AD support is too... lame (I think), strange exceptions prevent differentiable rendering from working.
  • more: BSDF mixture model (mixing different BSDFs). The BSDF management is entirely rewritten, but the code is slower both at runtime and at compile time, though it does support more interesting features. Since mixtures are rarely used, I opted for the faster implementation.

Rendering Example

To run the rendering, use:

# For bidirectional path tracer
# Make sure you are in root folder.
# pip install -r requirements.txt			# Dependencies should be satisfied
python3 ./render.py --scene cbox --name cbox.xml --iter_num 8000 --arch cuda --type bdpt
# For volumetric path tracer: --type vpt, for vanilla path tracer: --type pt

Useful parameters

  • --scene: specifies the folder under scenes/ to load from.
  • --name: within the specified folder, e.g. ./scenes/cbox/, the scene XML file to render.
  • --iter_num: number of rendering iterations. Note that my implementation is not tile-based, so you can stop rendering any time you want (press Esc or close the window; do not use Ctrl+C).
  • --arch: cuda / gpu / cpu / vulkan are supported (a Taichi Lang feature)
  • --type: bdpt for bidirectional path tracing, vpt for volumetric path tracing, pt for path tracing
  • --save_iter: Iteration milestones for saving rendering checkpoints (as .pkl file), you can load from it later.
  • -l: load from the check point. For more configurations for checkpointing, please see parsers/opts.py.
  • --img_ext: extension for output images: support ['jpg', 'png', 'bmp']
  • --no_cache: disable Taichi JIT caching (to save disk space). Caching is enabled by default, since BDPT is a complicated renderer and, due to Taichi's auto-inlining of functions, JIT compilation is slow (50 s-1 min, machine-dependent).
  • --no_gui: for simpler renderers and simpler scenes (without too many primitives or path bounces), disable the interactive GUI to accelerate rendering.
  • --normalize: for some scenes and emitter configurations the rendering result can be very dark. For example, --normalize 0.99 normalizes the resulting image by its 0.99 quantile.
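The quantile normalization described for --normalize can be sketched as follows (a plain NumPy illustration, assuming an HDR float image; this is not the repo's actual implementation):

```python
import numpy as np

def quantile_normalize(img: np.ndarray, q: float = 0.99) -> np.ndarray:
    """Scale the image so its q-quantile maps to 1.0, then clip to [0, 1]."""
    scale = float(np.quantile(img, q))
    if scale <= 0.0:
        return np.clip(img, 0.0, 1.0)  # nothing sensible to scale by
    return np.clip(img / scale, 0.0, 1.0)
```

Dividing by a high quantile rather than the maximum keeps a few hot pixels (e.g. visible emitters) from crushing the rest of the image to black.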

Note that some scenes have a default ray-intersection accelerator, like scenes/bunny.xml. If you are using Linux, BVH support can easily be turned on; make sure you have eigen3 installed in /usr/include/eigen3, then run:

cd tracer/
python3 ./setup.py install --user

The BVH module will be compiled to a dynamic library, which can then be imported and used as the accelerator.
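A guarded import for such an optional compiled accelerator typically looks like this sketch (the module name bvh_cpp is purely an assumption for illustration; the real name is whatever tracer/setup.py builds):

```python
# Hypothetical sketch: `bvh_cpp` is an assumed module name, not the
# repo's actual extension name.
try:
    import bvh_cpp          # compiled C++ BVH accelerator
    HAS_BVH = True
except ImportError:
    HAS_BVH = False         # fall back to brute-force intersection
```

The renderer can then branch on HAS_BVH at scene-construction time instead of failing when the extension was never built.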

If you want to configure your own scene, you can use the simple scene visualizer (without texture / shading) scene_viz.py:

python3 ./scene_viz.py --scene cbox --name cbox.xml

Use the interactive UI to configure the camera extrinsics / intrinsics and object placement.


AdaPT is licensed under GPL-v3. Copyright @Qianyue He.

Footnotes

  1. 'Camera unwarped' means the transient profile shows the time at which a position in the scene is hit by an emitter ray. 'Camera warped' means the transient profile shows the total time for a position to be hit by an emitter ray that then travels to the camera.
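Under the footnote's definitions, the two time conventions can be sketched as follows (a hypothetical helper: the speed of light in scene units is an assumed constant, and AdaPT's real transient logic lives inside its Taichi kernels):

```python
import math

SPEED_OF_LIGHT = 1.0  # scene units per unit time: an assumed convention

def transient_times(emitter_pos, point, camera_pos, c=SPEED_OF_LIGHT):
    """Return (unwarped, warped) arrival times for a scene point.
    Unwarped: emitter -> point flight time only.
    Warped: emitter -> point -> camera total flight time."""
    t_emit = math.dist(emitter_pos, point) / c
    t_cam = math.dist(point, camera_pos) / c
    return t_emit, t_emit + t_cam
```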

adapt's People

Contributors

enigmatisms, raven-sera


Forkers

raven-sera

adapt's Issues

Problem with spherical emitter NEE sampling.

Clearly this code was written back when I did not fully understand the essence of MC path tracing. Starting at line 102 in emitters/abstract_source.py:

if is_sphere:
    tri_id    = prim_info[self.obj_ref_id, 0]
    center    = dvs[tri_id, 0]
    radius    = dvs[tri_id, 1][0]
    to_hit    = (hit_pos - center).normalized()
    local_dir, pdf = cosine_hemisphere()
    normal, _ = delocalize_rotate(to_hit, local_dir)
    ret_pos   = center + normal * radius
    # We only choose the hemisphere, therefore we have a 0.5. Also, this is both sa & area measure
    ret_pdf   = 0.5 * pdf

The normal (which defines the final position on the sphere) is sampled locally and rotated around the direction from the center to the hit position. First, a PDF can't be in both area measure and solid-angle measure unless the geometric term is exactly 1. Second, why the 0.5? We sample the facing hemisphere with probability 1: the full probability is the conditional probability of sampling the point given the chosen hemisphere, times the marginal probability of choosing that hemisphere, which is 1 here.

Also, there should be a geometric term; I now think this comment of mine is totally wrong:

We only choose the hemisphere, therefore we have a 0.5. Also, this is both sa & area measure

Yes, cosine_hemisphere produces a PDF in solid-angle measure, but this PDF is still not what we are looking for! We need to know the PDF, as seen from the shading point, of the sampled point on the diffuse area emitter.

The best (and simplest) fix for this sampling problem is to use the current vertex position and the sphere to compute the tangent cone angle, uniformly sample directions within that cone, and convert the result to the solid-angle measure.
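That suggested fix is essentially the classic visible-cone sampling of a sphere light (as described in PBRT). A minimal plain-Python sketch, assuming the shading point lies outside the sphere:

```python
import math
import random

def sample_sphere_cone(hit_pos, center, radius, rng=random.random):
    """Uniformly sample the cone of directions subtended by a sphere
    emitter, as seen from hit_pos (assumed to lie outside the sphere).
    Returns ((cos_theta, phi), pdf): spherical coordinates in the frame
    whose z-axis points from hit_pos toward the sphere center, and the
    constant PDF in solid-angle measure."""
    d = math.dist(hit_pos, center)
    sin2_theta_max = (radius / d) ** 2
    cos_theta_max = math.sqrt(max(0.0, 1.0 - sin2_theta_max))
    # Uniform over the spherical cap cos_theta in [cos_theta_max, 1]
    cos_theta = 1.0 - rng() * (1.0 - cos_theta_max)
    phi = 2.0 * math.pi * rng()
    pdf = 1.0 / (2.0 * math.pi * (1.0 - cos_theta_max))
    return (cos_theta, phi), pdf
```

Since the PDF is constant over the cone and already in solid-angle measure, no geometric-term conversion is needed when plugging it into the NEE estimator.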

Remaining problems in AdaPT

Currently, BDPT is already implemented therefore I might slow down the development on this repo. Yet, some problems remain in the repo:

  • Specular material rendering inconsistency: when rendering mirror reflection / Fresnel specular effects, BDPT tends to produce darker surfaces while the non-specular parts are intact. The BDPT rendering is consistent neither with PT rendering (which looks better) nor with pbrt-v3 or Mitsuba 0.6 (though pbrt-v3/4 can't render a noise-free image of csphere/single-ball.xml, and Mitsuba 0.6 itself doesn't look as good as the PT rendering). This should be taken care of.
  • Anisotropic BRDF burst noise: for the FresnelBlend BRDF, there is shot noise (bright outliers) in the image. I suspect a problem in the PDF calculation (MIS weight).
  • Taichi function inlining introduces long JIT compilation times for BDPT, since BDPT is lengthy and its member functions usually have complicated parameters. This should be optimized.
  • VPT / volumetric BDPT seems to be fixed already, but the brightness may not be consistent with PT (no extensive experiment shows this yet; I just remember it vaguely). Brightness is not a huge problem, though.

New features that can be added:

  • Since BDPT is added, we can implement collimated light and spot light in the repo.
  • The scene visualizer should first experiment with mouse control (for view manipulation). Since AdaPT can render in real time on GPU (BDPT reaches only 12-16 fps, but some PT scenes achieve 120+ fps), we can consider changing the camera extrinsics and other configs on the fly.
  • More BSDF (for different transmission / reflection effect), this is relatively easy.
  • Metropolis light transport can be added to further boost the rendering (low priority).

getting errors in importing

from la.cam_transform import *
from emitters.point import PointSource
from tracer.tracer_base import TracerBase

from parser.obj_desc import ObjDescriptor
from parser.xml_parser import mitsuba_parsing

[Taichi] version 1.5.0, llvm 15.0.1, commit 7b885c28, win, python 3.10.11
Traceback (most recent call last):
File "d:\Python_RayTracing\AdaPT-1.2.1\tracer\path_tracer.py", line 18, in
from la.cam_transform import *
ModuleNotFoundError: No module named 'la'
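A likely cause (an assumption, not a confirmed fix) is running tracer/path_tracer.py directly from inside tracer/, so top-level packages like la are not on sys.path. A hypothetical helper illustrating the fix:

```python
import os
import sys

def add_repo_root(script_path: str) -> str:
    """Prepend the repository root (parent of the script's folder) to
    sys.path so top-level packages like `la` and `emitters` resolve
    when a subfolder script is run directly. Hypothetical helper."""
    root = os.path.dirname(os.path.dirname(os.path.abspath(script_path)))
    if root not in sys.path:
        sys.path.insert(0, root)
    return root
```

Alternatively, simply run the entry scripts (render.py, scene_viz.py) from the repository root, as the README's commands do.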

Transient lit mode output is strange

Hi! I'm interested in rendering and have been learning Taichi Lang for a while. I played with your renderer several days ago and I think the output of the transient lit mode is not quite correct. It looks kind of strange in scenes like bunny: some places, like the colored walls, are lit too early.

Latest tasks for AdaPT

The latest tasks for AdaPT:

  • Making the geometry more variable and look smooth. Normal mapping / bump mapping / vertex normal should be done. (task for 1.4.0, highest priority)
  • Making python->taichi loading faster (1000 prims per second, too slow) (task for 1.4.0, highest priority)
  • Making rendering faster (careful code optimization) (task for 1.4.0, 2nd priority)
  • More BSDF and emitter:
    • For BSDFs, there are three things I want to do: (1) plastic material implementation; (2) refactoring of mod-phong (possibly wrong implementation) and det-refract (unable to control the reflected ratio at the surface); (3) the Oren-Nayar rough diffuse BSDF. (2nd priority)
  • Image reconstruction filter and advanced sampling methods (like equiangular sampling, forward-scattering sampling) (3rd priority)
  • KD-tree accelerator: this is just for training purpose (C++ / pybind / algorithm) (4th priority)
  • Logic check for single-sided mesh BRDFs (both sides can actually be hit if the mesh is not closed, and the standard is_mi check will fail) (highest priority)

Done

  • Texture mapping and shading normal should be implemented. This will make AdaPT fully usable for producing realistic scene rendering. (1.3.0)
  • Emitter: implement a spot light source. I think a spot source (limited angular distribution) could be generalized to a wider range of emitters, like area sources.

After all tasks of 3rd priority and higher are finished, check the rendering results (against PBRT and Mitsuba) again. Then v1.3.0 can be released.

Future release plans

For version 1.5.0:

  • Experiment on mixture BSDF, which should achieve: coating / plastic / realistic diffuse and more transmittance models.
  • The demo scene for this version is the Lamborghini scene.

For version 1.6.0:

  • Free view point rendering on the fly (camera pose / intrinsics being variables)
  • Miscellaneous optimization: KD-tree implementation / code optimization for rendering speed
  • Ambient occlusion renderer.
  • The demo scene for this version is the Snooker scene and the Minecraft scene

For version 1.7.0:

  • Volumetric rendering upgrade: equiangular sampling
  • Heterogeneous rendering should be implemented. Density-based rendering / delta tracking.

Version 1.7.0 might be the last active version of this repo. I will shift my focus onto dynamic real-time global illumination rendering in the future.
