CS184 Homework Helper

In this assignment you will implement a simple rasterizer, including features like supersampling, hierarchical transforms, and texture mapping with antialiasing. At the end, you'll have a functional vector graphics renderer that can take in modified SVG (Scalable Vector Graphics) files, which are widely used on the internet.



  • Project 1 is due Monday, February 6th at 11:59pm. Assignments which are turned in after 11:59pm are a full day late -- there are no late minutes or late hours.

Getting started

You can either download the zipped assignment straight to your computer or clone it from GitHub using the command

Please consult this article on how to build and submit assignments for CS 184.

As you go through the assignment, refer to the write-up guidelines and deliverables section below. It is recommended that you accumulate deliverables into sections in your webpage write-up as you work through the project.

Note: Do not squander all your hard work on this assignment by converting your png files into jpg or any other format! Leave the screenshots as they are saved by the screenshot key in the GUI; otherwise you will introduce artifacts that will ruin your rasterization efforts.

Using the GUI

You can run the executable with the command

A flower should show up on your screen (after you do some work in Part 1). After finishing Part 3, you will be able to change the viewpoint by dragging your mouse to pan around or scrolling to zoom in and out. Here are all the keyboard shortcuts available (some depend on you implementing various parts of the assignment):

return to original viewpoint
decrease sample rate
increase sample rate
toggle the pixel inspector
switch between texture filtering methods on pixels
switch between texture filtering methods on mipmap levels
save a png screenshot in the current directory
switch between svg files in the loaded directory

The argument passed to the executable can either be a single file or a directory containing multiple svg files, as in

If you load a directory with up to 9 files, you can switch between them using the number keys 1-9 on your keyboard.

Project structure

The project has 7 parts, divided into 3 sections, worth a total of 100 possible points. Some require only a few lines of code, while others are more substantial.

Section I: Rasterization (suggested completion checkpoint: Sunday 1/29)

  • Part 1: Rasterizing single-color triangles (20 pts)
  • Part 2: Antialiasing triangles (20 pts)
  • Part 3: Transforms (10 pts)

Section II: Sampling

  • Part 4: Barycentric coordinates (10 pts)
  • Part 5: "Pixel sampling" for texture mapping (15 pts)
  • Part 6: "Level sampling" with mipmaps for texture mapping (25 pts)

Section III: (optional, possible extra credit) Art Competition

  • Part 7: Draw something interesting!

There is a fair amount of code in the CGL library, which we will be using for future assignments. The relevant header files for this assignment are vector2D.h, matrix3x3.h, color.h, and renderer.h.

Here is a very brief sketch of what happens when you launch the renderer: a parser (in svgparser.*) reads in the input svg file(s) and launches an OpenGL window containing a renderer, which enters an infinite loop and waits for input from the mouse and keyboard. DrawRend (drawrend.*) contains various callback functions hooked up to these events, but its main job happens inside its drawing routine. The high-level drawing work is done by the various child classes (svg.*), which then pass their low-level point, line, and triangle rasterization data back to the three rasterization functions.

Here are the files you will be modifying throughout the project:

  1. drawrend.cpp, drawrend.h
  2. texture.cpp
  3. transforms.cpp
  4. svg.cpp

In addition to modifying these, you will need to reference some of the other source and header files as you work through the project.

Section I: Rasterization

Part 1: Rasterizing single-color triangles (20 pts)

Relevant lecture: 2

Triangle rasterization is a core function in the graphics pipeline to convert input triangles into framebuffer pixel values. In Part 1, you will implement triangle rasterization using the methods discussed in lecture 2 to fill in the function in drawrend.cpp.


  • It is recommended that you implement the point and line rasterization functions first, so that you can see rasterized points and lines for basic/test[1/2/3].svg.
  • Remember to do point-in-triangle tests with the point exactly at the center of the pixel, not the corner. Your coordinates should be equal to some integer point plus (.5,.5).
  • For now, ignore the input argument to the function. We will come back to this in Part 4.
  • You are encouraged but not required to implement the edge rules for samples lying exactly on an edge.
  • Make sure the performance of your algorithm is no worse than one that checks each sample within the bounding box of the triangle.
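One standard way to do the point-in-triangle test described above is with three edge functions whose signs all agree when the sample lies inside the triangle. Here is a minimal sketch; the names (`edge`, `inside_triangle`) are ours and not the skeleton's, and remember that samples sit at integer coordinates plus (.5, .5):

```cpp
#include <cassert>

// Edge function: twice the signed area of (a, b, p); its sign tells
// which side of the directed edge a->b the point p lies on.
static double edge(double ax, double ay, double bx, double by,
                   double px, double py) {
  return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// Point-in-triangle test at a sample point (px, py). Accepting either
// all-nonnegative or all-nonpositive signs makes the test independent
// of the triangle's winding order.
bool inside_triangle(double x0, double y0, double x1, double y1,
                     double x2, double y2, double px, double py) {
  double e0 = edge(x0, y0, x1, y1, px, py);
  double e1 = edge(x1, y1, x2, y2, px, py);
  double e2 = edge(x2, y2, x0, y0, px, py);
  return (e0 >= 0 && e1 >= 0 && e2 >= 0) ||
         (e0 <= 0 && e1 <= 0 && e2 <= 0);
}
```

Looping this test over the samples inside the triangle's bounding box satisfies the performance requirement above; the edge-rule and incremental-evaluation optimizations build on the same three values.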

When finished, you should be able to render many more test files, including those with rectangles and polygons, since we have provided the code to break these up into triangles for you. In particular, basic/test3.svg, basic/test4.svg, and basic/test5.svg should all render correctly.

For convenience, here is a list of functions you will need to modify:

    Extra Credit: Make your triangle rasterizer super fast (e.g., by factoring redundant arithmetic operations out of loops, minimizing memory access, and not checking every sample in the bounding box). Write about the optimizations you used. Use to get timing comparisons between your naive and speedy implementations.

    Part 2: Antialiasing triangles (20 pts)

    Relevant lecture: 3

    Use supersampling to antialias your triangles. The sample-rate parameter (adjusted using the keyboard) tells you how many samples to use per pixel.

    The image below shows how sampling four times per pixel produces a better result than just sampling once, since some of the supersampled pixels are partially covered and will yield a smoother edge.

    To do supersampling, each pixel is now divided into a grid of sub-pixels. In other words, you still need to keep track of the same pixels, but each pixel now stores multiple sampled colors. You will need to do point-in-triangle tests at the center of each of these sub-pixel squares.

    We provide a class to store the sub-pixels. Each SampleBuffer instance stores one pixel. Your task is to (1) fill every sub-pixel with its correctly sampled color for every SampleBuffer instance, and (2) average all sub-pixels' colors within a samplebuffer to get a pixel's color. Since you've finished Part 1, you can use function to write color to a sub-pixel for (1).

    However, filling in the correct sub-pixel colors is only the first half. In order to properly rasterize the correct color based on each SampleBuffer instance's subpixels' colors, you will need to implement to perform (2). This function looks at all of the SampleBuffer instance's subpixels and averages their colors together into a single color value which is then the SampleBuffer instance's "true" color. This function is called in in order to render the svg.
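The averaging in step (2) can be sketched as follows, assuming a flat list of sub-pixel colors per pixel; the `Color` struct and `resolve_pixel` here are illustrative stand-ins for the course's SampleBuffer machinery, not its actual API:

```cpp
#include <cassert>
#include <vector>

struct Color { double r, g, b; };

// Resolve one pixel from its sub-pixel samples by averaging the
// channels. This is the "true" color the SampleBuffer should report.
Color resolve_pixel(const std::vector<Color>& samples) {
  Color out{0, 0, 0};
  for (const Color& c : samples) {
    out.r += c.r;
    out.g += c.g;
    out.b += c.b;
  }
  double n = static_cast<double>(samples.size());
  out.r /= n; out.g /= n; out.b /= n;
  return out;
}
```

For example, a pixel whose four sub-pixels are two white and two black resolves to 50% grey, which is exactly the smoother edge shown in the figure above.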

    Your triangle edges should be noticeably smoother when using more than 1 sample per pixel! You can examine the differences closely using the pixel inspector. Also note that it may take several seconds to switch to a higher sampling rate.

    For convenience, here is a list of functions you will need to modify:

      Extra Credit: Implement an alternative sampling pattern, such as jittered or low-discrepancy sampling. Create comparison images showing the differences between grid supersampling and your new pattern. Try making a scene that contains aliasing artifacts when rendered using grid supersampling but not when using your pattern. You can also try to implement more efficient storage types for supersampled framebuffers, instead of using one samplebuffer per pixel.

      Part 3: Transforms (10 pts)

      Relevant lecture: 4

      Implement the three transforms in the transforms.cpp file according to the SVG spec. The matrices are 3x3 because they operate in homogeneous coordinates -- you can see how they will be used on instances of by looking at the way the operator is overloaded in the same file.
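For reference, the three transforms in homogeneous coordinates can be sketched with a stand-in row-major `Mat3` type; CGL's Matrix3x3 has its own constructors, and per the SVG spec the rotation angle is given in degrees:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3x3 matrix, row-major, standing in for CGL's Matrix3x3.
struct Mat3 { double m[3][3]; };

const double kPi = 3.14159265358979323846;

Mat3 translate(double dx, double dy) {
  return {{{1, 0, dx}, {0, 1, dy}, {0, 0, 1}}};
}

Mat3 scale(double sx, double sy) {
  return {{{sx, 0, 0}, {0, sy, 0}, {0, 0, 1}}};
}

// SVG specifies the angle in degrees; this is the standard rotation matrix.
Mat3 rotate(double deg) {
  double r = deg * kPi / 180.0, c = std::cos(r), s = std::sin(r);
  return {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}};
}

// Apply a transform to the homogeneous point (x, y, 1).
void apply(const Mat3& t, double& x, double& y) {
  double nx = t.m[0][0] * x + t.m[0][1] * y + t.m[0][2];
  double ny = t.m[1][0] * x + t.m[1][1] * y + t.m[1][2];
  x = nx;
  y = ny;
}
```

The third row (0, 0, 1) is what lets a single matrix multiply express both the linear part and the translation, which is why the overloaded operator in transforms.cpp works on 3-vectors.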

      Once you've implemented these transforms, svg/transforms/robot.svg should render correctly, as follows:

      For convenience, here is a list of functions you will need to modify:

        Extra Credit: Add an extra feature to the GUI. For example, you could map two unused keys to rotate the viewport. Save an example image to demonstrate your feature, and write about how you modified the SVG to NDC and NDC to screen-space matrix stack to implement it.

        Section II: Sampling

        Part 4: Barycentric coordinates (10 pts)

        Relevant lecture: 6

        Familiarize yourself with the struct in svg.h. Modify your implementation of so that if a non-NULL pointer is passed in, it computes barycentric coordinates of each sample hit and passes them to to request the appropriate color. Note that the barycentric coordinates are stored with type , so you should use to store the barycentric coordinate corresponding to triangle vertex $P_i$ for $i=0,1,2$.

        Implement the function in svg.cpp so that it interpolates the color at the point . This function is very simple: it does not need to make use of or , which are for texture mapping. Note that this function plays the role of a very primitive shader.
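The interpolation itself is just a weighted sum of the three vertex colors by the barycentric coordinates; a minimal sketch with illustrative names (the skeleton's signature differs):

```cpp
#include <cassert>

struct Color { double r, g, b; };

// Interpolate a color at barycentric coordinates (alpha, beta, gamma),
// with alpha + beta + gamma == 1, from the three vertex colors.
Color interpolate(double alpha, double beta, double gamma,
                  const Color& c0, const Color& c1, const Color& c2) {
  return {alpha * c0.r + beta * c1.r + gamma * c2.r,
          alpha * c0.g + beta * c1.g + gamma * c2.g,
          alpha * c0.b + beta * c1.b + gamma * c2.b};
}
```

At a vertex one coordinate is 1 and the others are 0, so each vertex keeps its own color; everywhere else you get the smooth blend seen in the color wheel.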

        Once Part 4 is done, you should be able to see a color wheel in svg/basic/test7.svg.

        For convenience, here is a list of functions you will need to modify:

          Part 5: "Pixel sampling" for texture mapping (15 pts)

          Relevant lectures: 6, 7

          Familiarize yourself with the struct in svg.h. This is the primitive that implements texture mapping. For each vertex, you are given corresponding uv coordinates that index into the pointed to by .

          To implement texture mapping, should fill in the and members of a struct and pass it to . Then should fill in the correct uv coordinates in the struct, and pass it on to . Then should examine the to determine the correct sampling scheme.

          The GUI toggles 's variable using the key. When , you should use nearest-pixel sampling, and when , you should use bilinear sampling.
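For reference, bilinear sampling blends the four nearest texels with two rounds of linear interpolation. A self-contained sketch over a flat texel array (our own stand-in, not the project's Texture class), with texel centers at integer + 0.5:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Color { double r, g, b; };

// Bilinear sample of a W x H texel grid at continuous texture-space
// coordinates (tx, ty), clamping at the borders.
Color sample_bilinear(const std::vector<Color>& tex, int W, int H,
                      double tx, double ty) {
  double fx = std::min(std::max(tx - 0.5, 0.0), W - 1.0);
  double fy = std::min(std::max(ty - 0.5, 0.0), H - 1.0);
  int x0 = (int)fx, y0 = (int)fy;
  int x1 = std::min(x0 + 1, W - 1), y1 = std::min(y0 + 1, H - 1);
  double s = fx - x0, t = fy - y0;  // fractional offsets inside the cell
  auto at = [&](int x, int y) { return tex[y * W + x]; };
  auto lerp = [](double a, double b, double w) { return a + w * (b - a); };
  Color c00 = at(x0, y0), c10 = at(x1, y0);
  Color c01 = at(x0, y1), c11 = at(x1, y1);
  // Interpolate horizontally, then vertically, per channel.
  return {lerp(lerp(c00.r, c10.r, s), lerp(c01.r, c11.r, s), t),
          lerp(lerp(c00.g, c10.g, s), lerp(c01.g, c11.g, s), t),
          lerp(lerp(c00.b, c10.b, s), lerp(c01.b, c11.b, s), t)};
}
```

Nearest-pixel sampling is the degenerate case that just rounds to the closest texel center and returns that single color.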

          For now, you can pass in dummy values for the and arguments to

          For this part, just pass for the parameter of the and functions.

          Once Part 5 is done, you should be able to rasterize the svg files in svg/texmap/, which rely on texture maps.

          For convenience, here is a list of functions you will need to modify:

            Part 6: "Level sampling" with mipmaps for texture mapping (25 pts)

            Relevant lectures: 6, 7

            Finally, add support for sampling different levels. The GUI toggles 's variable using the key.

            • When , you should sample from the zero-th , as in Part 5.
            • When , you should compute the nearest appropriate level using the one-pixel difference vectors and and pass that level as a parameter to the nearest or bilinear sample function.
            • When , you should find the appropriate level and do full trilinear sampling by getting two bilinear samples from adjacent levels and computing a weighted sum.

            Implement as a helper function. You will need $(\frac{du}{dx}, \frac{dv}{dx})$ and $(\frac{du}{dy},\frac{dv}{dy})$ to calculate the correct level. In order to get these values corresponding to a point $p = (x,y)$ inside a triangle, you must

            1. calculate the barycentric coordinates of $(x+1,y)$ and $(x,y+1)$ in , passing these to as the variables and ,
            2. calculate the uv coordinates and inside ,
            3. calculate the difference vectors and inside , and finally
            4. scale up those difference vectors respectively by the width and height of the full-resolution texture image.

            With these, you can proceed with the calculation from the lecture slides.
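Once the difference vectors are scaled to texel units, the level calculation from the slides reduces to a max of two vector lengths and a log2. A sketch (we clamp only the lower end here; real code also clamps to the highest mipmap level available):

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

// Continuous mipmap level D from the screen-space derivatives of the
// texture coordinates, already scaled by the texture's width/height
// (i.e. measured in texels):
//   L = max(||(du/dx, dv/dx)||, ||(du/dy, dv/dy)||),  D = log2(L)
double mip_level(double dudx, double dvdx, double dudy, double dvdy) {
  double lx = std::sqrt(dudx * dudx + dvdx * dvdx);
  double ly = std::sqrt(dudy * dudy + dvdy * dvdy);
  double L = std::max(lx, ly);
  // A footprint smaller than one texel maps to level 0 (full resolution).
  return std::max(0.0, std::log2(L));
}
```

For nearest-level sampling you round D to the nearest integer; for trilinear sampling you take bilinear samples at floor(D) and floor(D)+1 and blend them by the fractional part of D.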

            For convenience, here is a list of functions you will need to modify:

              Extra Credit: Implement anisotropic filtering or summed area tables. Show comparisons of your method to nearest, bilinear, and trilinear sampling. Use to measure the relative performance of the methods.

              Section III: (optional, possible extra credit) Art Competition

              Part 7: Draw something interesting!

              Use your newfound powers to render something fun. You can look up the svg specifications online for matrix transforms and for Point, Line, Polyline, Rect, Polygon, and Group classes. The ColorTri and TexTri are our own inventions, so you can intuit their parameters by looking at the svgparser.cpp file. You can either try to draw something "by hand" or try to output an interesting pattern programmatically. For example, we wrote some simple programs to generate the texture mapped svg files in the texmap directory as well as the color wheel in basic/test7.svg.

              Flex your right or left brain -- either show us your artistic side, or generate awesome procedural patterns with code. This could involve a lot of programming either inside or outside of the codebase! If you write a script to generate procedural svg files, include it in your submission and briefly explain how it works.

              Tips and guidelines for your submission:

              • Your resulting png screenshot should be 800x800 resolution. Keep this in mind when writing your svg file.
              • Use the GUI's functionality to save your screenshot as a png. Don't take your own screenshot of your rasterized result, or you'll ruin the quality of your hard work!

              Students will vote on their favorite submissions and the top voted submission(s) will receive extra credit!


              Please consult this article on how to build and submit assignments for CS 184.

              You will submit your code as well as some deliverables (see below) in a webpage project write-up.

              Project write-up guidelines and instructions

              We have provided a simple HTML skeleton in index.html found within the website directory to help you get started and structure your write-up.

              You are also welcome to create your own webpage report from scratch using your own preferred frameworks or tools. However, please follow the same overall structure as described in the deliverables section below.

              The goals of your write-up are for you to (a) think about and articulate what you've built and learned in your own words, (b) have a write-up of the project to take away from the class. Your write-up should include:

              • An overview of the project, your approach to and implementation for each of the parts, and what problems you encountered and how you solved them. Strive for clarity and succinctness.
              • On each part, make sure to include the results described in the corresponding Deliverables section in addition to your explanation. If you failed to generate any results correctly, provide a brief explanation of why.
              • The final (optional) part for the art competition is where you have the opportunity to be creative and individual, so be sure to provide a good description of what you were going for and how you implemented it.
              • Clearly indicate any extra credit items you completed, and provide a thorough explanation and illustration for each of them.

              The write-up is one of our main methods of evaluating your work, so it is important to spend the time to do it correctly and thoroughly. Plan ahead to allocate time for the write-up well before the deadline.

              Project write-up deliverables


              Give a high-level overview of what you implemented in this project. Think about what you've built as a whole. Share your thoughts on what interesting things you've learned from completing the project.

              Part 1

              • Walk through how you rasterize triangles in your own words. Explain how your algorithm is no worse than one that checks each sample within the bounding box of the triangle.
              • Show a png screenshot of basic/test4.svg with the default viewing parameters and with the pixel inspector centered on an interesting part of the scene.
              • Extra credit: Explain any special optimizations you did beyond simple bounding box triangle rasterization, with a timing comparison table (we suggest using the c++ function around the command in to compare millisecond timings with your various optimizations off and on).

              Part 2

              • Walk through how you implemented supersampling and what modifications you made to the rasterization pipeline in the process. Explain how you used supersampling to antialias your triangles.
              • Show png screenshots of basic/test4.svg with the default viewing parameters and sample rates 1, 4, and 16 to compare them side-by-side. Position the pixel inspector over an area that showcases the effect dramatically; for example, a very skinny triangle corner.
              • Extra credit: If you implemented alternative antialiasing methods, describe them and include comparison pictures demonstrating the difference between your method and grid-based supersampling.

              Part 3

              • Create an updated version of svg/transforms/robot.svg with cubeman doing something more interesting, like waving or running. Feel free to change his colors or proportions to suit your creativity.
              • Save your svg file as my_robot.svg in your website/ directory and show a png screenshot of your rendered drawing in your write-up. Explain what you were trying to do with cubeman in words.

              Part 4

              • Explain barycentric coordinates in your own words and use an image to aid you in your explanation. One idea is to use a svg file that plots a single triangle with one red, one green, and one blue vertex, which should produce a smoothly blended color triangle.
              • Show a png screenshot of svg/basic/test7.svg with default viewing parameters and sample rate 1. If you make any additional images with color gradients, include them.

              Part 5

              • Explain pixel sampling in your own words and describe how you implemented it to perform texture mapping. Briefly discuss the two different pixel sampling methods, nearest and bilinear.
              • Check out the svg files in the svg/texmap/ directory. Use the pixel inspector to find a good example of where bilinear sampling clearly defeats nearest sampling. Show and compare four png screenshots using nearest sampling at 1 sample per pixel, nearest sampling at 16 samples per pixel, bilinear sampling at 1 sample per pixel, and bilinear sampling at 16 samples per pixel.
              • Comment on the relative differences. Discuss when there will be a large difference between the two methods and why.

              Part 6

              • Explain level sampling in your own words and describe how you implemented it for texture mapping.
              • You can now choose between pixel sampling and level sampling, as well as adjust the number of samples per pixel. Analyze the tradeoffs between speed, memory usage, and antialiasing power between the various techniques at different zoom levels.
              • Show at least one example (using a png file you find yourself) comparing all four combinations of one of and with one of and at a zoomed out viewpoint.
                • To use your own png, make a copy of one of the existing svg files in svg/texmap/ (or create your own modelled after one of the provided svg files). Then, near the top of the file, change the texture filename to point to your own png. From there, you can run ./draw and pass in that svg file to render it and then save a screenshot of your results.
              • Extra credit: If you implemented any extra filtering methods, describe them and show comparisons between your results with the other above methods.

              Part 7

              • Save your best svg file as competition.svg in your website/ directory, and show us a 800x800 png screenshot of it in your write-up!
              • Explain how you did it. If you wrote a script to generate procedural svg files, include it in your submission in the src/ directory and briefly explain how it works.

              Website tips and advice

              • Your main report page should be called index.html.
              • Be sure to include and turn in all of the other files (such as images) that are linked in your report!
              • Use only relative paths to files, such as
              • Do NOT use absolute paths, such as
              • Pay close attention to your filename extensions. Remember that on UNIX systems (such as the instructional machines), capitalization matters.
              • Be sure to adjust the permissions on your files so that they are world readable. For more information on this please see this tutorial.
              • Start assembling your webpage early to make sure you have a handle on how to edit the HTML code to insert images and format sections.

              Ray generation, intersection, acceleration, direct and indirect lighting and some materials to boot. We built a path tracer for our third project in CS184 at UC Berkeley. Let’s get started!

              Part 1: Ray Generation and Intersection

              Ray Generation

              To begin, it's important to note here that we are generating an image as if it was taken through a pinhole camera. A pinhole camera has all incoming light rays pass through a single point onto the picture plane. Normally this results in an upside-down image, as shown below:

              By moving the image plane in front of the pinhole, we can achieve a correctly oriented image. To do this, we cast rays (from the pinhole) through the image plane, into the scene. We then render the image plane as output. The diagram below shows this setup:

              To generate rays, we implement two functions: PathTracer::raytrace_pixel and Camera::generate_ray

              In Camera::generate_ray(x,y) we do the work of creating Ray objects. A Ray needs both an origin and a direction. Since these rays come from our camera, their origin is a single point in space (hence why this is a pinhole system). The parameters x and y determine the direction of the ray. We want to transform the coordinates x,y (both of which are within the range [0,1]) to a point on our picture plane. To do this, we want the input point (0,0) to map to the bottom left of the image plane, and (1,1) to map to the top right. This can be accomplished by a simple interpolation equation:

              Ray_Direction = Vector3D( bottom_left.x + (x * difference.x),bottom_left.y + (y * difference.y),-1);

              where difference is a Vector3D defined by:

              difference = top_right - bottom_left;

              top_right and bottom_left are also Vector3D objects. For the sake of brevity, let's examine x. When x = 0:

              Ray_Direction.x = bottom_left.x + (0 * difference.x) = bottom_left.x;

              and when x = 1:

              Ray_Direction.x = bottom_left.x + (1 * difference.x) = bottom_left.x + top_right.x - bottom_left.x = top_right.x;

              For the situations when 0 < x < 1, we get a nice linear interpolation between bottom_left.x and top_right.x. The same goes for all y values of Ray_Direction. We want our rays to go along the -Z axis (since our OpenGL libraries use this as "into the screen" in their coordinate system).
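The interpolation above can be collected into one small function. This is only a sketch with an illustrative Vector3D and placeholder corner values; the real Camera computes its corners from the field of view:

```cpp
#include <cassert>

struct Vector3D { double x, y, z; };

// Map normalized image coordinates (x, y) in [0,1]^2 onto the image
// plane at z = -1 by interpolating between its bottom-left and
// top-right corners.
Vector3D ray_direction(double x, double y,
                       Vector3D bottom_left, Vector3D top_right) {
  Vector3D difference{top_right.x - bottom_left.x,
                      top_right.y - bottom_left.y, 0};
  return {bottom_left.x + x * difference.x,
          bottom_left.y + y * difference.y,
          -1};  // rays travel along -Z, "into the screen"
}
```

(0,0) lands exactly on the bottom-left corner and (1,1) on the top-right, matching the two boundary cases worked out above.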

              With that, we can effectively create rays starting at a defined origin and passing through our picture plane.

              We then use these rays in PathTracer::raytrace_pixel(x,y). The input parameters are the coordinates of the bottom left of the pixel we want to raytrace through. Here, we either:

              cast a single ray through the center of the pixel, or sample ns_aa rays through the pixel at random locations. ns_aa is a parameter we provide when running the program from the command line.

              Now we’re casting rays into our scene, but we’re sending them out into a vacuum. We need to define some objects that our rays are going to interact with.

              Scene Intersection

              Right, so this is the picture we get without scene intersection... which makes sense, since we aren't intersecting with anything at the moment. It's time to introduce some primitives: triangles and spheres.

              Triangle Intersection

              This is all about one very important algorithm: the Möller-Trumbore algorithm.

              To begin, we've used barycentric coordinates in previous projects to determine whether a point P lies inside a triangle. This is characterized by the equation:

              P = Aw + Bu + Cv

              Where w,u,v are barycentric coordinates and A,B,C are vertices of the triangle we are trying to test intersection with. We use the fact that w + u + v = 1 to derive the following:

              P = A(1−u−v) + Bu + Cv = A − Au − Av + uB + vC = A + u(B−A) + v(C−A)

              So we have P in terms of a single vertex, A, two edges, B-A and C-A, and two barycentric coordinates, u and v.

              With our ray equation, we have another equation for triangle intersection:

              P = O + tD

              where O is the ray’s origin, D is its direction, and t is the ray’s travel time. Combining these two equations yields:

              O + tD = A + u(B−A) + v(C−A)

              moving terms around gives:

              O - A = -tD + u(B-A) + v(C-A)

              We seek u,v since we can use the barycentric coordinates to determine whether or not the point P lies within the triangle. We also seek t, since we'll be using the intersection time with the triangle in many places throughout this project. These fit nicely in the following matrix equation:

              $$\begin{bmatrix} -D & (B-A) & (C-A) \end{bmatrix} \begin{bmatrix} t \\ u \\ v \end{bmatrix} = O - A$$

              This part seems like magic, but we can solve it with Cramer's rule. Writing $E_1 = B - A$, $E_2 = C - A$, and $T = O - A$, and using the scalar triple product identity $\det[A\ B\ C] = (A \times B) \cdot C$ to evaluate the determinants, the solution works out to:

              $$\begin{bmatrix} t \\ u \\ v \end{bmatrix} = \frac{1}{(D \times E_2) \cdot E_1} \begin{bmatrix} (T \times E_1) \cdot E_2 \\ (D \times E_2) \cdot T \\ (T \times E_1) \cdot D \end{bmatrix}$$

              Phew... that was a mouthful. Anyways, here's what the code looks like:

              Vector3D e_1 = p2 - p1;
              Vector3D e_2 = p3 - p1;
              Vector3D T = r.o - p1;
              Vector3D s_1 = cross(r.d, e_2);
              Vector3D s_2 = cross(T, e_1);
              double det = 1.0 / dot(s_1, e_1); // note: the reciprocal of the determinant
              //compute t
              double t = det * dot(s_2, e_2);
              //compute u
              double u = det * dot(s_1, T);
              //compute v
              double v = det * dot(s_2, r.d);

              The ray intersects the triangle iff the following conditions hold:

              • t is between min_t and max_t
              • u is between 0 and 1
              • v is at least 0 and u + v is at most 1
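Putting the solve and the acceptance conditions together, here is a self-contained version of the test, with our own minimal Vec3 in place of the project's Vector3D and Ray types:

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
  return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Moller-Trumbore: solve for (t, u, v), then apply the acceptance
// conditions listed above. Writes the hit time into t on success.
bool intersect_triangle(Vec3 o, Vec3 d, Vec3 p1, Vec3 p2, Vec3 p3,
                        double min_t, double max_t, double& t) {
  Vec3 e1 = sub(p2, p1), e2 = sub(p3, p1), T = sub(o, p1);
  Vec3 s1 = cross(d, e2), s2 = cross(T, e1);
  double denom = dot(s1, e1);
  if (denom == 0) return false;  // ray parallel to the triangle's plane
  double inv = 1.0 / denom;
  t = inv * dot(s2, e2);
  double u = inv * dot(s1, T);
  double v = inv * dot(s2, d);
  return t >= min_t && t <= max_t &&
         u >= 0 && u <= 1 &&
         v >= 0 && u + v <= 1;
}
```

A ray from (0,0,1) pointing down -Z hits the triangle spanning (-1,-1,0), (1,-1,0), (0,1,0) at its plane, t = 1, while a ray shifted far off to the side fails the u,v conditions.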

              With all that said and done, we are rewarded with the following picture:

              Sphere Intersection

              We use the quadratic formula to determine whether or not a ray intersects a sphere. The coefficients of the quadratic in t are:

              a = dot(ray.d, ray.d); b = 2 * dot(o - c, ray.d); c = dot(o - c, o - c) - R^2;


              • ray.d is the ray's direction
              • o is the ray's origin
              • c is the center of the sphere
              • R is the radius of the sphere

              Since there can be 0, 1, or 2 real solutions to the quadratic equation, there are 3 scenarios for a sphere intersection:

              • no solution = the ray misses the sphere
              • 1 solution = the ray hits the sphere at a single point (tangent)
              • 2 solutions = the ray enters the sphere at one point, and exits at another

              When there are 2 solutions, there's a subtlety as to which of them we assign as our ray's hit time. In most cases, we want the smaller value, since this is when the light ray first hits the sphere. There is a case where we want the larger hit time, but we'll get to that later in the write-up.
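A sketch of the whole sphere test, preferring the smaller root when it lies in front of the ray; the types and names are illustrative stand-ins for the project's:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ray-sphere test via the quadratic a t^2 + b t + c = 0 described
// above; on a hit, t receives the first intersection time in front
// of the ray origin.
bool intersect_sphere(Vec3 o, Vec3 d, Vec3 center, double R, double& t) {
  Vec3 oc = sub(o, center);
  double a = dot(d, d);
  double b = 2.0 * dot(oc, d);
  double c = dot(oc, oc) - R * R;
  double disc = b * b - 4 * a * c;
  if (disc < 0) return false;        // no real roots: the ray misses
  double s = std::sqrt(disc);
  double t0 = (-b - s) / (2 * a);    // near root (entry point)
  double t1 = (-b + s) / (2 * a);    // far root (exit point)
  t = (t0 >= 0) ? t0 : t1;           // prefer the first hit in front
  return t >= 0;
}
```

Note the `t0 >= 0 ? t0 : t1` line: when the origin is inside the sphere, the near root is behind the ray and the far root is the one we want, which is exactly the "larger hit time" case mentioned above.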

              With sphere intersection implemented, we are treated to the following:

              Putting both pictures together gives the following scene:

              Part 2: BVH Construction and Intersection

              A BVH, or Bounding Volume Hierarchy, does just that: It places the objects in the scene into a structure (in this case, a binary tree) of rectangular volumes. This allows us to first test ray intersection against axis-aligned boxes, rather than against every single primitive in the scene.


              To construct a BVH we need to select a few heuristics to do our splits. It would be ideal to pick values that reduce ray intersection time by the greatest amount. Instead, I opted for a cleaner (and easier) implementation.

              At every split, we choose an axis to split on (either x, y, or z). In the case of this bounding box, we choose the longest axis. So if:

              extent.x > extent.y and extent.x > extent.z

              We split on the x axis. The same goes for extent.y and extent.z. Also, extent is a nice helper that returns the length of a bounding box along each axis... extent is a fitting name.

              So we've chosen an axis to split on. Great, but now we must choose a point on that axis to do the splitting. We simply use the midpoint.

              The following pictures show a couple of steps of the splitting of bounding volumes:

              There's an important catch to consider: all primitives may fall to either the left or the right of the split point. Left unhandled, this would result in an infinite loop, since the subdivision is not getting any smaller. If this happens, we simply stop splitting and keep the node as a leaf. Since we use the centroids of our primitives to split, it's rare for all primitives to fall on a single side until only a few are left.

              Intersecting of Bounding Boxes

              This may seem like a tangent, but it's important and easy. Consider each BVH node as a bounding box with either:

              • A list of primitives
              • OR

              • A left and right child which are also BVH nodes

              We can use the bounding boxes to test if a ray misses. If a ray misses, we have an easy out… which is great.

              To check if a ray intersects a bounding box, it suffices to intersect the ray against the 3 pairs of axis-aligned planes. The ray intersects the bounding box if the intersection of the 3 per-axis intervals is non-empty.
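This is the classic slab test. A sketch that, for brevity, assumes no zero direction components (real code leans on IEEE infinities or special-cases them):

```cpp
#include <cassert>
#include <algorithm>

// Slab test: intersect the ray with each pair of axis-aligned planes
// and keep the running overlap [t_enter, t_exit]. The ray hits the
// box iff the final interval is non-empty.
bool intersect_box(const double o[3], const double d[3],
                   const double bmin[3], const double bmax[3]) {
  double t_enter = -1e30, t_exit = 1e30;
  for (int axis = 0; axis < 3; ++axis) {
    double t0 = (bmin[axis] - o[axis]) / d[axis];
    double t1 = (bmax[axis] - o[axis]) / d[axis];
    if (t0 > t1) std::swap(t0, t1);  // order the per-axis interval
    t_enter = std::max(t_enter, t0); // latest entry across all slabs
    t_exit = std::min(t_exit, t1);   // earliest exit across all slabs
  }
  return t_enter <= t_exit;
}
```

Taking the max of entries and the min of exits is exactly the interval intersection described above: if any slab's interval fails to overlap the others, t_enter overtakes t_exit and we report a miss.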

              Intersecting BVHs

              This part is very straightforward. To test if a ray intersects a BVH, first check to make sure it hits the BVH's bounding box. If it doesn't, we win! And by winning, I mean we don't have to check against every primitive. This is great, and again, why BVHs are amazing.

              If it does hit the bounding box, we then check whether this BVH node is a leaf (aka it has a list of primitives we have to check against) or an internal node (aka it has a left and right child).

              If it is a leaf, we have to check against all of its primitives. We then run the primitive intersection code described above.

              If it is a node, we recursively check intersection in the left and right BVHs, being careful to return the smaller hit value if both return true.
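The traversal logic can be sketched over a toy node type, where the bounding-box and primitive tests are replaced by precomputed stand-ins so the recursion structure stays visible:

```cpp
#include <cassert>
#include <vector>
#include <algorithm>

// Toy BVH node: leaves hold primitive hit distances directly, and the
// bounding-box test is a precomputed flag, so this sketch isolates
// the traversal control flow rather than the geometry.
struct BVHNode {
  bool is_leaf;
  std::vector<double> leaf_hits;  // stand-in for per-primitive tests
  bool box_hit;                   // stand-in for the bounding-box test
  BVHNode* left = nullptr;
  BVHNode* right = nullptr;
};

// Returns the smallest hit value found, or a negative number on a miss.
double intersect_bvh(const BVHNode* node) {
  if (!node || !node->box_hit) return -1;  // early out: box missed
  if (node->is_leaf) {                     // leaf: check every primitive
    double best = -1;
    for (double t : node->leaf_hits)
      if (t >= 0 && (best < 0 || t < best)) best = t;
    return best;
  }
  double tl = intersect_bvh(node->left);   // recurse into both children,
  double tr = intersect_bvh(node->right);  // keeping the nearer hit
  if (tl < 0) return tr;
  if (tr < 0) return tl;
  return std::min(tl, tr);
}

// Build a tiny two-leaf tree for demonstration.
double demo() {
  BVHNode a{true, {3.0}, true};
  BVHNode b{true, {1.5, 4.0}, true};
  BVHNode root{false, {}, true, &a, &b};
  return intersect_bvh(&root);
}
```

In the demo tree both leaves are hit, and the traversal correctly reports the nearest distance, 1.5, from the right leaf.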

              Once we have BVH construction and intersection working, we can render some larger .dae files, shown below:

              Part 3: Direct Lighting

              For this part, we are interested in the light that is contributed from only the light source in the scene. We run this on every Ray that returns a hit point.

              Before we sample the light, we first check to see if the light source is an area light or a point light.

              If it is a point light, it doesn’t matter how many samples we take of the light source, they’re always going to fall on the same place. So we just take a single light sample.

              If it is an area light, then we take as many samples as we’re told (and we’re told through the command line when running the program). In this case, we get an RGB value for each light sample.

              From each of these light samples, we get an outgoing vector. We use this vector to determine whether or not we can actually reach the light: we create a single shadow ray starting at the hit point we are trying to color and test whether the shadow ray hits any other object before it reaches the position on the light we sampled. If the shadow ray hits an object, we mark this sample as black and continue.

              If our shadow ray doesn't intersect any scene object (and therefore reaches the light), we add the value of this hit point's BSDF (which we will talk about in Part 5) multiplied by the contribution of the light sample. We also make sure to multiply by the cosine of the outgoing angle, since the irradiance at a surface point drops as the angle between the light direction and the surface normal approaches 90 degrees.

              Also, when summing up these sample values, we must divide each sample by its probability! This keeps the estimator unbiased: directions that are sampled rarely must be weighted up to compensate for how seldom they occur, while directions sampled often are weighted down.

              In the end, we divide by the total number of samples, and return the irradiance for each Ray hit point.
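The steps above can be sketched for a single color channel like so. The `LightSample` fields and function names are illustrative stand-ins, not the assignment's API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hedged sketch of the direct-lighting estimator described above, for one
// color channel. Each light sample carries the sampled radiance, the cosine
// between the shadow-ray direction and the surface normal, the sample's pdf,
// and whether the shadow ray was blocked. All of these names are invented.
struct LightSample { double radiance, cos_theta, pdf; bool occluded; };

double direct_lighting(double bsdf, const std::vector<LightSample>& samples) {
    double sum = 0.0;
    for (const auto& s : samples) {
        if (s.occluded) continue;                        // shadowed: sample is black
        sum += bsdf * s.radiance * s.cos_theta / s.pdf;  // divide by the sample pdf
    }
    return sum / samples.size();                         // average over all samples
}
```

A point light corresponds to passing a single sample; an area light passes as many samples as requested on the command line.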

              Here are some pictures using only direct lighting:

              Notice that all the shadows are quite hard. There seems to be a little blending at the edges, but this is due to the probability weighting of our samples reducing the value of our BSDF sample. If we didn't weight by the light sample probability, we would get the full BSDF sample value at these locations (no greying).

              Here is the bunny with 2 area light samples:

              It's hard to see the difference. The shadow cast by the tail seems to be getting a bit softer in contrast between black and grey shadow.

              The shadows have much better resolution now, since we are averaging 5 light samples.

              After 20 light samples, I think we're getting some smooth shadows.

              Part 4: Indirect Lighting

              Indirect illumination follows a similar principle as direct illumination. Again, we are interested in the irradiance at a Ray's hit point. However, in this case, we sample the BSDF of our hit point first, and then recursively cast another ray leaving this hit point and add its value to our irradiance.

              This sounds like it would take forever, right? Well, it would, except we implement a Russian Roulette (classy name) algorithm that randomly stops taking indirect lighting samples per hit point based on the illuminance of the BSDF value. This means that if the BSDF doesn't contribute much irradiance, we should have a high chance of terminating the ray, since this hit point isn't contributing much light anyway.

              We also implement a maximum ray depth, which is an assurance that our rays will terminate after a certain amount of bounces.

              Again, we make sure to weight each irradiance value (the BSDF sample times the recursive indirect lighting sample) by the cosine of the outgoing sample angle, and to divide by both the outgoing direction's probability and the continuation probability (1 minus the Russian roulette termination probability).
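The weighting for a single bounce works out to the following arithmetic (a sketch for one color channel; all names here are invented for illustration):

```cpp
#include <cassert>

// Illustrative weighting for one indirect bounce (single channel): the
// recursive radiance estimate is scaled by the BSDF value and the cosine,
// then divided by both the direction's pdf and the continuation probability
// (1 - termination probability), as described above.
double indirect_contribution(double bsdf, double incoming, double cos_theta,
                             double pdf, double p_terminate) {
    double p_continue = 1.0 - p_terminate;   // Russian roulette survival probability
    return bsdf * incoming * cos_theta / (pdf * p_continue);
}
```

Dividing by the continuation probability compensates for the bounces that Russian roulette killed off, keeping the estimate unbiased on average.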

              Below is a scene of lambertian spheres, rendered first with only direct lighting, and then, only indirect lighting:

              In the direct case, we get no color from the walls on either the spheres or the ground, which makes sense: that color arrives only via light bouncing off the walls, as in the indirect case. We get much tighter, softer shadows in the indirect illumination example compared to the intense direct shadows in the other example. The two images look starkly different, but their sum reconstructs the full lighting (note also that no light reflects back onto the ceiling with direct illumination only). Adding the two images gives the following (both direct and indirect):

              Here are the same spheres, but rendered with max depths of 1, 2, 3, 4, 5, and 100:

              Can't really see a difference? Neither can I. Maybe it will become more apparent in the next part when we have some materials.

              Below is a progression of the scene above, ranging from 1 sample/pixel to 1024 samples/pixel.

              Part 5: Materials

              Here, we implement both a BSDF for mirrors and glass. But first, what’s a BSDF?
              A BSDF is a Bidirectional Scattering Distribution Function (it makes sense why we abbreviate it to BSDF). It is a function that, given the incoming and outgoing directions of light, tells us how much light is scattered between them. This is a material property, so different materials will have different functions for distributing light.

              So we did a BSDF for a mirror and for glass, and therefore, had to implement a reflection and a refraction transformation, respectively.

              For a mirror, a reflection is needed. Here's a nice fact: since we shade in a local coordinate frame at the hit point, we can assume the surface normal is simply (0,0,1). We use this to reflect an outgoing vector wo into the vector wi, like so:

              wi = (-wo.x, -wo.y, wo.z)

              Easy. We then take that vector, wi, and return the irradiance:

              reflectance / abs_cos(wi)

              Reflectance is the material property, and we want to divide by abs_cos(wi) to cancel the cosine multiplication done in Part 4.
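Here is the reflection as a tiny C++ sketch in the local shading frame described above. `Vec3` and `reflect` are stand-ins for the project's own types:

```cpp
#include <cassert>

// Sketch of the mirror reflection above, in the local shading frame where
// the surface normal is (0, 0, 1). Vec3 stands in for the project's vector.
struct Vec3 { double x, y, z; };

Vec3 reflect(const Vec3& wo) {
    // Mirror about the z axis: flip the tangential components, keep z.
    return { -wo.x, -wo.y, wo.z };
}
```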

              For glass, we need refraction. Again, since we always have a surface normal of (0,0,1):

              phi_o = -phi_i

              This will come in handy. Here’s a diagram:

              So in refraction, an incoming vector makes an angle theta_o with the normal pointing away from the blue material. This vector, which we will call wo, is refracted when it enters the material to an angle theta_i. So we seek to find the outgoing vector, call it wi, given wo. n_o and n_i are indices of refraction determined by the two media; in this diagram, n_o is on the glass side and n_i is on the air side.

              Snells law states:

              sin(theta_i) = (n_o / n_i) sin(theta_o)

              Let's convert the x coordinate of wi from spherical to Cartesian coordinates (the y coordinate behaves the same way):

              wi.x = sin(theta_i) cos(phi_i), which gives sin(theta_i) = wi.x / cos(phi_i)

              The same goes for sin(theta_o), specifically:

              sin(theta_o) = wo.x / cos(phi_o)

              Using Snell's law, we can substitute our sin(theta) terms for terms involving our Cartesian coordinates:

              wi.x / cos(phi_i) = (n_o/n_i) * wo.x / cos(phi_o)

              We also know that, as stated above:

              phi_o = -phi_i

              so

              cos(phi_o) = cos(-phi_i) = cos(phi_i)

              The cosines cancel, leaving:

              wi.x = (n_o/n_i) * wo.x

              In this project, we have wo pointing away from the surface, and we want our wi to point into the glass, so we have to negate our Cartesian coordinates. This yields:

              wi.x = -(n_o/n_i) * wo.x

              Same goes for wi.y

              As for wi.z, we know that |wi.z| = cos(theta_i), since our normals are always (0,0,1)… but we only have sin(theta_i). Well, thank goodness for trig again:

              cos(theta_i) = sqrt(1-sin^2(theta_i))

              We give our wi.z the opposite sign of wo.z, again because of our conventions in this project.

              Okay, so that's refraction. It's also important to note that sin(theta_i) can never be greater than 1 (that would defy the laws of mathematics!). If Snell's law yields sin(theta_i) > 1, we have total internal reflection, and we should reflect rather than refract.
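Putting the whole derivation together gives a sketch like the following, again in the local frame with normal (0,0,1). `Vec3`, `refract`, and `eta` are illustrative names, not the assignment's API:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Hedged sketch of the refraction derived above, with eta = n_o / n_i.
// Returns false when sin(theta_i) would exceed 1 (total internal
// reflection), in which case the caller should reflect instead.
bool refract(const Vec3& wo, double eta, Vec3& wi) {
    // sin^2(theta_i) = eta^2 sin^2(theta_o) = eta^2 (1 - wo.z^2)
    double sin2_i = eta * eta * (1.0 - wo.z * wo.z);
    if (sin2_i > 1.0) return false;             // total internal reflection
    double cos_i = std::sqrt(1.0 - sin2_i);     // cos(theta_i) via the trig identity
    wi = { -eta * wo.x,                         // wi.x = -(n_o/n_i) wo.x
           -eta * wo.y,                         // same for wi.y
           wo.z > 0 ? -cos_i : cos_i };         // wi.z gets the opposite sign of wo.z
    return true;
}
```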

              Almost done. Since glass doesn't just refract (it also reflects), we need a probabilistic way of determining whether an incoming ray reflects or refracts. The simple solution is to use the reflection coefficient, R, given by Schlick's approximation to determine whether we reflect or refract given an incoming light direction.

              If it reflects, we return:

              reflectance * R / abs_cos(wi)

              If it refracts, we return:

              transmittance * (1 - R) * (n_o/n_i)^2 / abs_cos(wi)

              Here, abs_cos(wi) is needed because our project convention has all surface normals pointing away from the material rather than into it, so wi.z may be negative.
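For reference, here is a small sketch of Schlick's approximation itself. The function name and parameter order are mine, not the project's exact code:

```cpp
#include <cassert>
#include <cmath>

// Schlick's approximation for the reflection coefficient R used above to
// choose between reflecting and refracting. n_i and n_o are the two indices
// of refraction, and cos_theta is the absolute cosine of the incident angle.
double schlick(double n_i, double n_o, double cos_theta) {
    double r0 = (n_i - n_o) / (n_i + n_o);
    r0 *= r0;                                   // R0: reflectance at normal incidence
    return r0 + (1.0 - r0) * std::pow(1.0 - cos_theta, 5.0);
}
```

At normal incidence this reduces to R0, and at grazing angles it approaches 1, matching the intuition that glass looks mirror-like edge-on.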

              Now we can render both mirrored and glass surfaces. The following sequence uses max depths of 1, 2, 3, 4, 5, and 100:

              The first image is interesting: light bounces off the mirrored surfaces, but we cannot see any of the light that refracts inside the glass sphere! This is because we terminate the ray before it can leave the sphere.

              Remember way back when we did sphere intersection? This is the one time we need the second ray hit point. When the ray leaves the sphere, it leaves at the second intersection point (meaning that the ray begins somewhere inside the sphere). We see this in the second picture, where max_ray_depth = 2.

              In the third picture in the series, light leaving the glass sphere arrives at the ground (creating a caustic) and also at the mirrored sphere (it appears blue in the mirror sphere). Also, notice that at this depth, light that hit the ground has reflected back up into the sphere, resulting in a tiny white highlight on the glass sphere's bottom right.

              When the depth = 4, we see this highlight reflected onto the blue wall.

              From depth = 5 to about depth = 100, we only really see the scene get lighter, as light bounces around the entire scene until it is either stopped by depth (with depth closer to 5) or by Russian roulette (with depth closer to 100).

              Here's a high quality version of the mirror and glass spheres:

              Here are some more high-res images:

              Here's a golden dragon in a box, ranging from 1 sample/pixel to 1024 samples/pixel

              That concludes this walkthrough of the path tracer for CS184 Assignment 3. Thanks for reading!
