
wermos / rendera

A C++20 ray-tracing engine.

License: MIT License

C++ 91.24% CMake 8.76%
cpp cpp20 graphics graphics-programming ray-tracer ray-tracing raytracer raytracing

rendera's Introduction

Hello!

My name is Tirthankar Mazumder.

Here is my blog website.


rendera's People

Contributors

divyanshtiwari237, inferno2401, kartikdhawaniya, prateekgargx, wermos


rendera's Issues

Mark functions with `noexcept`

Herb Sutter, one of the C++ gods, mentions in [GotW #20](http://www.gotw.ca/gotw/020.htm) that hidden execution paths can pollute an otherwise sane program/executable, because the compiler has to assume that anything that can throw, will throw.

However, our ray tracer does not throw any exceptions, and as of now, there is no plan to incorporate them into Rendera. Hence, we can promise the compiler that all of our functions do not throw exceptions using the noexcept keyword. This would allow us to ensure that we are keeping our promise to not throw exceptions (as the compiler will error out if we accidentally write code which can throw), and also allow the compiler to perform optimizations that it could not otherwise perform.

What to do

The task for this issue is quite simple: Go through the entire repository and mark all the functions with noexcept. (The ones where it makes sense to do so, at least, which should be most of them.)
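
For illustration, here is what the change looks like on a hypothetical free function (the name is made up; any Rendera function that cannot throw gets the same treatment):

    // Before: the compiler must assume this can throw.
    constexpr float dot(const vec3& a, const vec3& b);

    // After: noexcept promises the compiler (and readers) that this never throws.
    // If an exception does escape a noexcept function, std::terminate is called,
    // so an accidental throw fails loudly instead of propagating silently.
    constexpr float dot(const vec3& a, const vec3& b) noexcept;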

Implement Materials: Metal

The material of a particular object determines the following:

  1. How/Whether the ray hitting an object is scattered.
  2. The level of attenuation in the scattered ray.

We already have a generic material class. Define a virtual function scatter in the material class.
Also, derive a class Metal from Material and implement the scatter function.

The scatter function can take in the following:

  1. A HitInfo object. This is implemented in this PR. This should provide all the necessary information needed.
  2. The initial ray.
  3. References to the attenuation color and the scattered ray.

In the case of metals, the scatter function is quite straightforward. Just obtain the reflected ray according to the law of reflection. That will be the scattered ray.
Thus, update the scattered ray. The attenuation color in the case of metal can be any color.
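
For illustration, here is a rough sketch of what Metal could look like, assuming the interfaces described above (the exact signatures of Material, HitInfo, ray, color, and the reflect/dot helpers are assumptions and may differ in the actual codebase):

    class Metal : public Material {
       public:
        explicit Metal(const color& albedo) noexcept : m_albedo(albedo) {}

        bool scatter(const ray& in, const HitInfo& info, color& attenuation,
                     ray& scattered) const noexcept override {
            // Law of reflection: r = d - 2 (d . n) n
            vec3 reflected = reflect(in.direction(), info.normal);
            scattered = ray(info.point, reflected);
            attenuation = m_albedo;
            // Only count the scatter if the reflected ray leaves the surface.
            return dot(scattered.direction(), info.normal) > 0;
        }

       private:
        color m_albedo;
    };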

Implementing a Lambertian material class

Currently, we have the Phong Reflection Model implemented in our codebase.
The Lambertian model for reflection/shading is an alternative model which has a distribution of $\cos(\phi)$.

This can be achieved by the two steps below (a sketch follows the list):

  1. Implementing a function that returns a random point in a unit sphere (normalizing it gives a random unit vector).
  2. Offsetting the surface normal by these unit vectors to obtain the scattered direction.
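
A minimal sketch of the two steps, assuming a vec3 type with a length_squared() member, a unit_vector() helper, and a random_double(min, max) utility (all names illustrative):

    // Step 1: rejection-sample a random point inside the unit sphere,
    // then normalize it onto the sphere's surface to get a unit vector.
    vec3 random_unit_vector() noexcept {
        while (true) {
            vec3 p(random_double(-1, 1), random_double(-1, 1), random_double(-1, 1));
            if (p.length_squared() < 1) {
                return unit_vector(p);
            }
        }
    }

    // Step 2: offsetting the normal by a random unit vector gives the
    // cos(phi)-distributed (Lambertian) scatter direction.
    vec3 lambertian_direction(const vec3& normal) noexcept {
        return normal + random_unit_vector();
    }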

Unified coding style

In all modern software projects which are worked on by a team, a unified coding style is used. The benefits of this are numerous:

  1. Readers of the code will not know who wrote which part (unless they go through the Git commit history).
  2. Readers of the code will not waste time thinking or wondering about sudden changes in coding style.
  3. We will not have people on the team disagreeing about which style is superior, because everyone will be using the same style.
  4. A unified way of naming variables means that it will be obvious to a reader when something is a variable (member or otherwise).

What to do

a) Take a look at Clang Format. It's a tool which takes a specific code style (specified in a .clang-format file), and applies that to the file you pass in as the argument.

b) Choose a specific style to enforce: I think the Google C++ style is really good, with some minor modifications, such as having the indent width be 4 spaces instead of 2. You can specify what our specific style looks like using these options. In particular, we want BasedOnStyle: Google and IndentWidth: 4 (see the sketch after this list).

c) Any other modifications that you think are reasonable should probably be discussed with the rest of the group.

d) Run clang-format on every file in the repository so far (excluding files in the submodules).

e) Implement a GitHub Action which will prevent PRs with incorrect formatting from being merged into the main line of work. You can use this existing Action as inspiration ;)
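
For reference, a minimal .clang-format along the lines of (b) would contain just the following; any further options can be layered on top after discussion:

    BasedOnStyle: Google
    IndentWidth: 4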

Adding Cubes

Currently, the ray tracer only supports sphere objects. Add support for cubes to the project.

This will involve the following:
1.) Implementing a Cube class that stores the dimensions and other necessary features.
2.) Implementing ray-box intersections, which are needed to render the cube onto the image (see the sketch after the references).

References:
1.) Scratchapixel
2.) The Ray Tracer Challenge
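
As a starting point for the ray-box intersection, here is a sketch of the standard slab test for an axis-aligned box, assuming a vec3 with operator[] and a ray exposing origin() and direction() (the Cube class would hold the two corners, named here for illustration):

    #include <algorithm>  // std::min, std::max
    #include <utility>    // std::swap

    // Slab test: intersect the ray with the three pairs of axis-aligned planes
    // and check that the resulting parameter intervals overlap.
    bool hit_box(const vec3& min_corner, const vec3& max_corner,
                 const ray& r, double t_min, double t_max) {
        for (int axis = 0; axis < 3; ++axis) {
            double inv_d = 1.0 / r.direction()[axis];
            double t0 = (min_corner[axis] - r.origin()[axis]) * inv_d;
            double t1 = (max_corner[axis] - r.origin()[axis]) * inv_d;
            if (inv_d < 0.0) {
                std::swap(t0, t1);
            }
            t_min = std::max(t_min, t0);
            t_max = std::min(t_max, t1);
            if (t_max <= t_min) {
                return false;  // the slab intervals do not overlap: the ray misses
            }
        }
        return true;
    }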

Assignee: @KartikDhawaniya

Generate `.png` and `.jpg` files instead of `.ppm` files

Most image viewers, especially those on Windows, don't have support for viewing .ppm files. To view such files on Windows, usually one needs to install a massive application such as GIMP or Photoshop. To make viewing these images easier, we should switch to creating .png and .jpg images.

What to do

  1. Add the stb repository as another submodule to our repository.
  2. Modify the CMakeLists.txt files as appropriate.
  3. Store the rendered image as an image buffer, instead of writing to a file. (The image buffer will most likely need to be allocated on the heap instead of the stack, because most stacks aren't big enough.)
  4. Use the stbi_write_png and stbi_write_jpg functions from stb_image_write.h to generate PNG and JPG images (see the sketch below).

Optional:
We can decide whether to generate PNG images, JPG images, or both, either at runtime (by asking the user) or at compile time (with a macro).
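
A sketch of the write step, assuming the rendered image is stored as an interleaved 8-bit RGB buffer in row-major order (the file names are placeholders):

    // Exactly one translation unit should define STB_IMAGE_WRITE_IMPLEMENTATION
    // before including the header.
    #define STB_IMAGE_WRITE_IMPLEMENTATION
    #include "stb_image_write.h"

    #include <cstdint>
    #include <vector>

    void write_images(const std::vector<std::uint8_t>& pixels, int width, int height) {
        const int channels = 3;               // RGB
        const int stride = width * channels;  // bytes per row
        stbi_write_png("render.png", width, height, channels, pixels.data(), stride);
        stbi_write_jpg("render.jpg", width, height, channels, pixels.data(), 90 /* quality */);
    }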

Assignee: @Stealth2002

`vec4` + `vec3` classes, with SIMD intrinsics support

The task is to create a vec4 class, which will be a vector with 4 elements, with the relevant functionality (addition, subtraction, and multiplication/division by a scalar).

Use the xsimd submodule for SIMD intrinsics to speed up the math operations, and provide reasonable constructors.

Make the vec3 class by inheriting from vec4, and supply a cross product function for the 3-dimensional vectors.

You might find this link on how you can use SIMD intrinsics for computing the cross product of two 3D vectors interesting.
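
A very rough sketch of the storage and one operator, to show the shape of the class; the exact batch type and load/store calls depend on the xsimd version pinned in the submodule (a fixed 4-wide SSE2 batch is assumed here), so treat this as a starting point rather than the final interface:

    #include <array>
    #include <xsimd/xsimd.hpp>

    class vec4 {
       public:
        vec4(float x, float y, float z, float w) noexcept : m_data{x, y, z, w} {}

        vec4& operator+=(const vec4& rhs) noexcept {
            using batch = xsimd::batch<float, xsimd::sse2>;
            auto sum = batch::load_unaligned(m_data.data()) +
                       batch::load_unaligned(rhs.m_data.data());
            sum.store_unaligned(m_data.data());
            return *this;
        }

       protected:
        // vec3 can inherit from vec4 and simply ignore the fourth lane.
        std::array<float, 4> m_data;
    };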

Assignee: @prateekgargX

Integrate GoogleTest into Rendera

As it stands, we have some complex vector classes in the utilities/ directory. To ensure correctness, we want to write some tests and make sure that they work as expected. However, to do so, we need a testing framework, which is why we need Google Test.

What to do

  1. Integrate Google Test as a submodule to our repository.
  2. Write tests to test all the arithmetic operations supplied by the vec4 and vec3 classes. The tests should go inside a tests directory inside utilities.
    a. The CMakeLists.txt inside utilities should also be updated to properly build Google Test, and expose the headers to Rendera source files.
    b. Test the vector addition and subtraction with some trivial values, and also write some randomized tests, which will randomly generate values and check the answers for correctness (see the sketch after this list).
    c. Test the scalar multiplication code, using the same technique.
    d. Test the cross and dot products. These will be a little trickier to test.
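
For 2(b), here is a sketch of one trivial-values test and one randomized test for vec4 addition, assuming a vec4 with a four-float constructor, operator+, and operator[] (adjust the accessors to whatever the class actually exposes):

    #include <gtest/gtest.h>
    #include <random>

    TEST(Vec4Test, AdditionTrivialValues) {
        vec4 a(1.0f, 2.0f, 3.0f, 4.0f);
        vec4 b(4.0f, 3.0f, 2.0f, 1.0f);
        vec4 sum = a + b;
        for (int i = 0; i < 4; ++i) {
            EXPECT_FLOAT_EQ(sum[i], 5.0f);
        }
    }

    TEST(Vec4Test, AdditionRandomized) {
        std::mt19937 gen(42);  // fixed seed so failures are reproducible
        std::uniform_real_distribution<float> dist(-100.0f, 100.0f);
        for (int trial = 0; trial < 1000; ++trial) {
            float a[4], b[4];
            for (int i = 0; i < 4; ++i) {
                a[i] = dist(gen);
                b[i] = dist(gen);
            }
            vec4 sum = vec4(a[0], a[1], a[2], a[3]) + vec4(b[0], b[1], b[2], b[3]);
            for (int i = 0; i < 4; ++i) {
                EXPECT_FLOAT_EQ(sum[i], a[i] + b[i]);
            }
        }
    }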

Implement anti-aliasing

Add a little randomness to the get_ray function in the Camera class (basically, it should return the usual ray, but offset by a small random amount within the pixel).

Modify the rendering algorithm to do multiple samples per pixel (say 50), and then combine all the samples to make the final image.
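
A sketch of the per-pixel sampling loop; camera.get_ray(u, v), trace(), random_double(), and the image-dimension variables are illustrative names for whatever the codebase actually uses:

    const int samples_per_pixel = 50;
    color pixel_color(0, 0, 0);
    for (int s = 0; s < samples_per_pixel; ++s) {
        // Jitter the sample position inside the pixel's footprint.
        double u = (col + random_double()) / (img_width - 1);
        double v = (row + random_double()) / (img_height - 1);
        pixel_color += trace(camera.get_ray(u, v));
    }
    pixel_color /= samples_per_pixel;  // average the samples into the final pixel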

Assignee: @Inferno2401

Clean up of the first render PR

In #14, our main goal was to get a basic render up and running. Now that we have achieved that goal, our next step is to clean up that code and give it a glow up. As such, here are some of the improvements we should make:

  1. The Material class should probably be a struct.
    color Color() const {
        return col;
    }
    float amb() const {
        return ambient;
    }
    float dif() const {
        return diffuse;
    }
    float spe() const {
        return specular;
    }
    float shiny() const {
        return shininess;
    }

    As you can see here, we define these variables as private variables but then also define getters for every single private variable, meaning that we actually want them to be a part of the public API. The most reasonable way to achieve this in C++ is by defining the class as a struct, where all the members are public by default.
  2. The image height and image width should be of type std::size_t and not int.
    int img_width;
    int img_height;

    Obviously, negative image height or width doesn't make sense, and for that reason we should use an unsigned type. std::size_t is one such well-defined unsigned type.
  3. We need to clean up and improve the get_int() function.
    // return 8-bit int values
    constexpr color get_int() const {
        int R = (r > 1.0) ? 255 : std::max(0, static_cast<int>(255 * r));
        int B = (b > 1.0) ? 255 : std::max(0, static_cast<int>(255 * b));
        int G = (g > 1.0) ? 255 : std::max(0, static_cast<int>(255 * g));
        return {R, G, B};
    }

    a) The comment suggests that the function should return a color consisting of 8-bit ints, but int is (typically) 32 bits wide. We need to amend this by using the std::uint8_t type (unsigned, because the value will be in the range 0-255, inclusive).
    b) We need to determine why, and whether, the call to std::max is needed. Perhaps discussing this with @Inferno2401 will help here.
  4. In the hit function, we see the following:
    if (disc >= 0) {
        t_1 = (-beta - sqrt(disc)) / (2 * alpha);
        t_2 = (-beta + sqrt(disc)) / (2 * alpha);
    }
    // only positive hits
    return (disc >= 0) && (t_2 >= 0);
    }

    However, it is better to subscribe to a "return fast" policy: inside the if clause (where we check whether the discriminant is non-negative), we should return true or false depending on whether t_2 is non-negative, and outside the if clause, we should return false. The reason a "return fast" policy is better than the existing implementation is that it minimizes the number of comparisons and branches in the code (see the sketch after this list).
  5. The Intersection class should not have getter functions for both t1 and t2, because if the ray doesn't hit the object at all, they will return garbage to the user. However, amending this problem is less of a priority because we are planning to remove this class entirely as a part of the refactoring being done in #24.
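
As a sketch of points 3 and 4 (the member names follow the snippets above; the surrounding class definitions are omitted):

    #include <algorithm>  // std::clamp, std::max
    #include <cmath>      // std::sqrt
    #include <cstdint>    // std::uint8_t

    // 3) Clamp each channel into [0, 255] and use a genuine 8-bit type.
    //    get_int() would then return {to_byte(r), to_byte(g), to_byte(b)}
    //    (with its return type changed accordingly).
    constexpr std::uint8_t to_byte(float channel) noexcept {
        return static_cast<std::uint8_t>(std::clamp(static_cast<int>(255 * channel), 0, 255));
    }

    // 4) "Return fast" inside hit(): the discriminant is tested exactly once.
    if (disc >= 0) {
        t_1 = (-beta - std::sqrt(disc)) / (2 * alpha);
        t_2 = (-beta + std::sqrt(disc)) / (2 * alpha);
        return t_2 >= 0;  // only positive hits
    }
    return false;         // no real roots, so no intersection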

Assignee: @KartikDhawaniya

Scene Class

Currently, we have all the basic tools needed to render an image. However, we want a separate scene class that will handle all of the elements involved in a particular scene. As such, implementation of the following is required:

  1. Make a unified design for hittable objects:
    a) Implement a base class for the hittable objects with a pure virtual function called hit.
    b) Make the existing Sphere class inherit from this class, and move the ray-sphere intersection code from intersection.hpp into the sphere class.

  2. Scene class:
    a) Make a class with an array of hittable objects. Also implement an add_object() function to add objects into the scene.
    b) Make a struct called HitInfo to store information relating to an intersection (a boolean is_hitting, a pointer to the material, the point of intersection, the normal at that point, the ray itself, and the recursion depth); see the sketch after this list.
    c) Implement a hit_object function which will traverse the entire hittable object array and return a HitInfo object for a given ray, for the closest intersection from the camera.

  3. Refactor the main function to make it more modular:
    a) The main function currently handles a lot of tasks. We can make it more modular by refactoring all of the scene creation into one function and all of the rendering into another, and calling these two functions from main.
    b) Make a function called create_scene() which will create the scene which will be ray-traced.
    c) Make a function called render() which will contain the actual rendering algorithm, and generate the image.
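
As a starting point, here is a sketch of the Hittable base class from 1(a) and the HitInfo struct from 2(b); the ray, vec3, and Material types are the ones described in the other issues, and the exact members may change:

    struct HitInfo {
        bool is_hitting = false;  // did the ray hit anything at all?
        const Material* material = nullptr;
        vec3 point;               // point of intersection
        vec3 normal;              // surface normal at that point
        ray r;                    // the ray that produced this hit
        int depth = 0;            // recursion depth of this bounce
    };

    class Hittable {
       public:
        virtual ~Hittable() = default;

        // Pure virtual: every hittable object (Sphere, Cube, ...) implements this.
        virtual bool hit(const ray& r, HitInfo& info) const = 0;
    };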

Assignee: @mahesh0537

Make the project build and make it submission-ready

Currently, we have a lot of people doing lots of different work on the ray tracer. As a result, the project is currently failing to build, and we cannot use the executable at all. There are two parts to this task:

Make the project build

This subtask is pretty self-explanatory. Do whatever is necessary (short of removing features) to make the project build.

Make it submission-ready

Obviously, one of the prerequisites of making the project submission-ready is having it build. Apart from that, to create a nice-looking picture as our final submission, we need to generate the scene first.

The scene used as the cover image of the SoC had 487 spheres. Since we also have a Cube class, we can have a scene filled with spheres and cubes. Here is what you should do:

  1. For every $x$-coordinate and $y$-coordinate in a $22\times 22$ grid, place one object.
    a. Randomly decide whether it should be a cube or a sphere, and randomly decide the material for it and the related data (see the sketch after this list).
  2. Make a Scene object with this scene, and then pass it into the render function (which you can assume to be written by @KartikDhawaniya).
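
A sketch of the grid loop from step 1; Scene::add_object and random_double() are assumed from the other issues, and sphere_at/cube_at are hypothetical helpers standing in for whatever constructors the final Sphere and Cube classes expose:

    Scene create_scene() {
        Scene scene;
        for (int x = 0; x < 22; ++x) {
            for (int y = 0; y < 22; ++y) {
                // Randomly pick the shape (and, inside the helper, its material).
                if (random_double() < 0.5) {
                    scene.add_object(sphere_at(x, y));
                } else {
                    scene.add_object(cube_at(x, y));
                }
            }
        }
        return scene;
    }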

It would be good to have one PR for each sub-task.

  • Make the project build
  • Make it submission-ready

Assignee: @mahesh0537

Add more textual output to Rendera

As it stands, our program runs for however long it needs and then just outputs "PNG image generated" and "JPG image generated" at the very end. This is highly undesirable, because anyone who runs our program has no way of knowing whether it is even working once it has started.

What to do

  1. We want to output some sort of message once scene parsing/construction is done. (While this won't matter for now, with our simple scenes, it may well make a difference later on.)
  2. Keep the user informed about how many lines have been rendered, and how many are remaining.
  3. Once the rendering is finished, output a message about the PNG/JPG generation based on the return values of the stbi_write_png and stbi_write_jpg functions. Note that a non-zero return value indicates success, and a zero return value indicates a failure to generate the relevant image file (see the sketch after this list).
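
A sketch of points 2 and 3; the stream, wording, and variable names are placeholders, while the zero/non-zero return convention comes from stb itself:

    #include <iostream>

    // 2) Inside the render loop, overwrite a single status line per scanline.
    std::cerr << "\rScanlines remaining: " << (img_height - row - 1) << ' ' << std::flush;

    // 3) After rendering, report success or failure based on the return value.
    if (stbi_write_png("render.png", width, height, 3, pixels.data(), width * 3)) {
        std::cout << "PNG image generated\n";
    } else {
        std::cout << "Failed to generate the PNG image\n";
    }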

Implement BVHs in Rendera

Note
This is a very tricky issue to get right, because we have to make sure that the BVH Nodes are as lightweight as possible, and also make sure that the BVH construction is as fast and accurate as possible. Otherwise, for smaller scenes, the BVH construction time contributes more to the program runtime than the actual rendering algorithm runtime.

Currently, the rendering algorithm in Rendera checks sequentially whether a given ray intersects any object. This is fundamentally an $O(n)$ algorithm.

Can we do better? The answer is yes! If we partition the space inside the scene and organize it into a bounding volume hierarchy, then we can make the algorithm run in $O(\log n)$ time.

However, like I mentioned in the note, there are many competing factors here, so the actual implementation is quite tricky: one false step and we will introduce an unnecessary slowdown.

What to do

  1. Read up on the theory. The Wikipedia page is a great place to start. Reading Chapter 19 of Ray Tracing from the Ground Up is also highly recommended. Other than that, reading Chapters 4 and 6 in Real-Time Collision Detection by Christer Ericson would help too.
  2. Implement a lightweight struct which will hold the relevant bounding volume data. I recommend starting off by implementing AABBs (Axis-Aligned Bounding Boxes), as their implementation is very straightforward (see the sketch after this list).
  3. Implement a BVH class which takes in a Scene object and creates a tree out of that. For now, a binary tree is good. In future, we can experiment with k-d trees.
    a. In the context of ray tracing, the SAH (surface area heuristic) is a cheap and easy way of figuring out how to split each BVH node into smaller nodes.
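
For step 2, a sketch of a lightweight AABB and the merge operation the BVH build will need; vec3 is assumed to provide a three-float constructor and operator[]:

    #include <algorithm>  // std::min, std::max

    struct AABB {
        vec3 min;  // corner with the smallest x, y, z
        vec3 max;  // corner with the largest x, y, z
    };

    // The box enclosing two child boxes: used when building parent BVH nodes.
    AABB surrounding_box(const AABB& a, const AABB& b) noexcept {
        vec3 small(std::min(a.min[0], b.min[0]),
                   std::min(a.min[1], b.min[1]),
                   std::min(a.min[2], b.min[2]));
        vec3 big(std::max(a.max[0], b.max[0]),
                 std::max(a.max[1], b.max[1]),
                 std::max(a.max[2], b.max[2]));
        return {small, big};
    }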

Assignee: @prateekgargX and @mahesh0537

Implement Lights

Currently, we don't have any explicit support for different types of lights. Our first render implicitly uses a point light, but there are some drawbacks to only supporting point lights, as discussed in The Ray Tracer Challenge (chapter 17). For some motivation for implementing these different types of lights, as well as a short explanation of how they work, read that chapter.

What to do

  1. Read chapter 17 of The Ray Tracer Challenge (the possible improvements which discuss area lights, soft shadows, and spotlights).
    a) Read chapter 14 of Ray Tracing from the Ground Up, to see how they implemented lights.
  2. Implement a base class for lights called Light, with pure virtual functions get_direction, which returns the direction of the incoming light at a hit point, and L, which returns the incident radiance at a hit point (see the sketch after this list).
  3. Implement ambient lights, directional lights, and point lights, similar to the way the book does it.
  4. Implement area lights. You can use this online addendum to the Ray Tracer Challenge as a reference.
  5. Implement spotlights. You can see other people's implementations, like this one, for reference. You can find how he modified the ray tracing algorithm to use spotlights here.
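
For step 2, a sketch of the Light base class; the vec3, color, and HitInfo types are the ones from the other issues:

    class Light {
       public:
        virtual ~Light() = default;

        // Direction of the incoming light at the hit point.
        virtual vec3 get_direction(const HitInfo& info) const = 0;

        // Incident radiance at the hit point.
        virtual color L(const HitInfo& info) const = 0;
    };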

Addendum

This is a slightly bigger task than the previous issues. To complete it, you should break it up over multiple PRs; do not send one massive PR, because that makes it very difficult to review effectively. As such, a reasonable breakdown would be:

  • One PR for implementing the base class Light.
  • One PR for ambient lights, directional lights, and point lights.
  • One PR for area lights.
  • One PR for spotlights.

Assignee: @Inferno2401
