Nov 26 2012

Reducing the size of the light source resulted in an even darker image since now fewer rays found their way into the light source. Adding a back wall behind the camera did not seem to help much.

Now, using cos density sampling and with the number of samples increased to 100, it gets, if anything, even worse:



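For reference, this is roughly what I mean by cos density sampling: scatter directions are drawn with a density proportional to the cosine to the normal instead of uniformly. The sketch below is only illustrative (the Vec3 struct, the function name, and the assumption that t1 and t2 form an orthonormal basis together with the normal are mine, not the renderer's actual code):

```cpp
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

// Draw a direction on the hemisphere around the surface normal n with
// probability density proportional to cos(theta) (Malley's method: sample a
// unit disk uniformly and project it up). Assumes t1 and t2 form an
// orthonormal basis together with n; building that basis is left out here.
Vec3 cosineSampleHemisphere(const Vec3& n, const Vec3& t1, const Vec3& t2,
                            std::mt19937& rng)
{
    const double PI = 3.14159265358979323846;
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double u1 = uni(rng), u2 = uni(rng);

    double r   = std::sqrt(u1);        // radius on the unit disk
    double phi = 2.0 * PI * u2;        // angle on the unit disk
    double x = r * std::cos(phi);
    double y = r * std::sin(phi);
    double z = std::sqrt(1.0 - u1);    // lift the disk sample onto the hemisphere

    // Transform from the local (t1, t2, n) frame into world space.
    return Vec3{ x * t1.x + y * t2.x + z * n.x,
                 x * t1.y + y * t2.y + z * n.y,
                 x * t1.z + y * t2.z + z * n.z };
}
```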
Nov 21 2012

In the picture above I am visualizing the normals of each hit. It uses multisampling and a purely diffuse material, so some rays get scattered into space (the box is missing one side). Note that the back wall is missing: the rectangle's points were stored in the wrong order, so its normal pointed towards the opposite side. The fixed back wall can be seen below.
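For illustration, the normal of a rectangle like this typically comes from a cross product of its edge vectors, so the storage order of the points decides which way it faces. A minimal sketch with a hypothetical Vec3 type, not the renderer's actual code:

```cpp
struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Normal from three consecutive corners of the rectangle. Swapping the order
// of the corners flips the cross product and therefore the normal, which is
// exactly what made the back wall invisible.
Vec3 rectangleNormal(const Vec3& p0, const Vec3& p1, const Vec3& p2) {
    return cross(sub(p1, p0), sub(p2, p0));   // normalize before shading with it
}
```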

Now that I can scatter and have material properties, I can do actual path tracing. The image below uses essentially Lambertian surfaces, hence the absorption is proportional to $\cos(\phi)$, and it uses only diffuse scattering. I increased the number of samples from 20 to 50, which turns out to make the image darker, since more rays end up in space without hits.
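As a sketch of that cosine term: with uniform diffuse scattering, each bounce's contribution is simply scaled by the cosine between the surface normal and the scattered direction. The helper below is only illustrative, not the renderer's actual code:

```cpp
struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Weight applied to a diffusely scattered ray on a Lambertian surface:
// proportional to the cosine of the angle phi between the surface normal n
// and the (unit-length) scattered direction d.
double lambertWeight(const Vec3& n, const Vec3& d) {
    double cosPhi = dot(n, d);           // both vectors assumed normalized
    return cosPhi > 0.0 ? cosPhi : 0.0;  // directions below the surface contribute nothing
}
```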

Nov 07 2012

A doubly inverted y axis caused confusion, as did a stubborn bug in the triangle intersection code that I only found after looking over the code four times: a plus should have been a minus!

But now adding a rectangle object is trivial (the screenshot uses multisampling with 10 samples).
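For context, the triangle intersection code mentioned above is exactly the kind of place where a single sign flips everything; a Möller-Trumbore style sketch (illustrative only, not my actual intersection code) looks roughly like this:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3   sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3   cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Möller-Trumbore ray/triangle intersection. Returns true and the ray
// parameter t if the ray (orig + t*dir) hits the triangle (v0, v1, v2).
bool hitTriangle(const Vec3& orig, const Vec3& dir,
                 const Vec3& v0, const Vec3& v1, const Vec3& v2, double& t)
{
    const double EPS = 1e-9;
    Vec3 e1 = sub(v1, v0);
    Vec3 e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;      // ray parallel to the triangle
    double inv = 1.0 / det;

    Vec3 s = sub(orig, v0);
    double u = dot(s, p) * inv;                  // first barycentric coordinate
    if (u < 0.0 || u > 1.0) return false;

    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;                // second barycentric coordinate
    if (v < 0.0 || u + v > 1.0) return false;

    t = dot(e2, q) * inv;                        // distance along the ray
    return t > EPS;
}
```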

Nov 07 2012

Some documentation of how my work progressed:

Initially, after setting up a window to draw in, my first attempt to render a sphere looked very interesting:

I had no clue what was wrong and suspected for a very long time that my math was off. But after rederiving and comparing to other sources I could not find any errors. So even 6 revisions later the picture did not change much:

It turned out to be an issue with types: I was still using integers instead of floats in some places, so some calculations were rounded and some 8-bit integers regularly overflowed, creating these patterns. With this fixed, in r11 I was finally able to display a simple sphere.
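A tiny example of the two effects, assuming 8-bit color channels like I was using:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Integer division throws away the fraction entirely...
    int shade = 3 / 4;                      // 0, not 0.75

    // ...and an 8-bit channel silently wraps around once it passes 255,
    // which is the kind of thing that produced the repeating patterns.
    std::uint8_t channel = 200;
    channel = static_cast<std::uint8_t>(channel + 100);  // 300 wraps to 44

    std::printf("shade=%d channel=%u\n", shade, static_cast<unsigned>(channel));
    return 0;
}
```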

At this time rendering was still quite slow, even with only one sphere to display. The culprit was a naive multithreaded architecture that passed every single pixel to be drawn between threads. So in r14 everything looked the same, but the architecture was much improved.
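Roughly the kind of change that made the difference: hand each thread a whole row (or tile) instead of single pixels, so threads synchronize once per row rather than once per pixel. This is a simplified sketch, not the renderer's actual architecture:

```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Distribute work by whole rows rather than individual pixels; renderPixel is
// a placeholder for the real per-pixel ray tracing work.
void renderImage(int width, int height, int numThreads,
                 const std::function<void(int, int)>& renderPixel)
{
    std::atomic<int> nextRow{0};            // shared row counter

    auto worker = [&]() {
        for (;;) {
            int y = nextRow.fetch_add(1);   // claim a whole row at a time
            if (y >= height) return;
            for (int x = 0; x < width; ++x)
                renderPixel(x, y);
        }
    };

    std::vector<std::thread> pool;
    for (int i = 0; i < numThreads; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
}
```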

After some more reengineering I implemented bounding boxes, which look quite boring when visualized:
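For reference, the bounding box test itself boils down to the slab method; a sketch (again with an illustrative Vec3, and assuming the reciprocal ray direction is precomputed):

```cpp
#include <algorithm>

struct Vec3 { double x, y, z; };

// Slab test: intersect the ray with the three pairs of axis-aligned planes
// and check whether the resulting parameter intervals still overlap.
bool hitBox(const Vec3& orig, const Vec3& invDir,   // invDir = 1/dir, precomputed
            const Vec3& boxMin, const Vec3& boxMax,
            double tMin, double tMax)
{
    const double o[3]  = { orig.x,   orig.y,   orig.z };
    const double id[3] = { invDir.x, invDir.y, invDir.z };
    const double lo[3] = { boxMin.x, boxMin.y, boxMin.z };
    const double hi[3] = { boxMax.x, boxMax.y, boxMax.z };

    for (int a = 0; a < 3; ++a) {
        double t0 = (lo[a] - o[a]) * id[a];
        double t1 = (hi[a] - o[a]) * id[a];
        if (id[a] < 0.0) std::swap(t0, t1);   // ray travels in the negative direction
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax <= tMin) return false;       // slabs no longer overlap
    }
    return true;
}
```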

Some simple shading based on the distance to the eye; notice how the top-left sphere is closer to the screen. However, at this point the shading is just a hack and not physically based at all.

Next was implementing multisampling. Right now the samples are just uniformly distributed around the center of each pixel. Note that the shading is turned off again; it had produced quite aliased pictures before.
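What I mean by uniformly distributed around the pixel center, as a sketch (traceAt is a hypothetical stand-in for shooting a camera ray through the given image-plane coordinates and returning a grayscale value):

```cpp
#include <random>

// Average several samples per pixel; each sample is offset uniformly within
// the pixel footprint.
double samplePixel(int x, int y, int numSamples, std::mt19937& rng,
                   double (*traceAt)(double u, double v))
{
    std::uniform_real_distribution<double> offset(-0.5, 0.5);
    double sum = 0.0;
    for (int s = 0; s < numSamples; ++s) {
        double u = x + 0.5 + offset(rng);   // jitter around the pixel center
        double v = y + 0.5 + offset(rng);
        sum += traceAt(u, v);
    }
    return sum / numSamples;                // simple box-filter average
}
```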

Rendering becomes much slower due to the multisampling, so next up was a bounding volume hierarchy. The next scene renders 288 objects in the same time that the previous scene needed for 3:

Very subtle mistakes are hard to find but produce these interesting artifacts. Notice how the center of the previous picture is square, although the blue mass is composed of spheres. Taking one wrong branch in the BVH code caused this; with that fixed and some other colors in use, the squares of spheres are now distinguishable and free of artifacts:
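The essence of the fix: a BVH traversal has to descend into both children whose boxes the ray hits and keep the closer hit, rather than committing to a single branch. A sketch under my own simplified node layout (hitBox is the slab test from the sketch above, hitLeaf is a hypothetical placeholder for intersecting a leaf's primitives):

```cpp
struct Vec3 { double x, y, z; };
struct Hit  { double t; /* hit point, normal, ... */ };

struct BvhNode {
    Vec3 boxMin, boxMax;
    const BvhNode* left  = nullptr;   // null for leaves
    const BvhNode* right = nullptr;
    // a leaf would also hold its primitives here
};

// Defined in the bounding-box sketch above.
bool hitBox(const Vec3& orig, const Vec3& invDir,
            const Vec3& boxMin, const Vec3& boxMax, double tMin, double tMax);

// Placeholder for intersecting the primitives stored in a leaf.
bool hitLeaf(const BvhNode& node, const Vec3& orig, const Vec3& dir,
             double tMin, double tMax, Hit& out);

// Recursive traversal: if the ray hits this node's box, *both* children are
// tested and the closer hit wins. Taking only one branch is exactly the kind
// of mistake that produced the square artifact above.
bool hitBvh(const BvhNode& node, const Vec3& orig, const Vec3& dir,
            const Vec3& invDir, double tMin, double tMax, Hit& out)
{
    if (!hitBox(orig, invDir, node.boxMin, node.boxMax, tMin, tMax))
        return false;

    if (!node.left && !node.right)                    // leaf node
        return hitLeaf(node, orig, dir, tMin, tMax, out);

    Hit leftHit, rightHit;
    bool hitL = node.left  && hitBvh(*node.left,  orig, dir, invDir, tMin, tMax, leftHit);
    bool hitR = node.right && hitBvh(*node.right, orig, dir, invDir,
                                     tMin, hitL ? leftHit.t : tMax, rightHit);
    if (hitR)      { out = rightHit; return true; }   // bounded by leftHit.t, so closer
    else if (hitL) { out = leftHit;  return true; }
    return false;
}
```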

The partially rendered image can be displayed while rendering is in progress, which, combined with multithreading, is much more visually pleasing:

Visualizing the reflection vector by shading based on the cosine between the hit direction and the normal:

The first attempt at recursive raytracing introduced very interesting visual bugs. Here the light source is a sphere at the center of the screen, close to the eye and in front of the image plane. At the time it was unclear to me what caused the square to appear; it later turned out there was still a bug in the intersection code.

Finally implementing a different shape, the triangle, proves to be difficult. At some angles it completely fails to render, and the following scene is not what would be expected: the triangle on the left differs from the one on the right by only one vertex, the bottom-left one, which is moved to the left. The other vertices remain the same, yet it renders as if the whole triangle had been moved to the left.

A Blinn-Phong shader that helped to debug inverted reflection vectors:
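For reference, the Blinn-Phong specular term uses the half vector between the light and view directions, as sketched below; the helpers are illustrative, not the shader's actual code:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Blinn-Phong specular term: instead of comparing the view direction with the
// reflected light direction (classic Phong), compare the surface normal with
// the half vector between the light and view directions. All inputs are
// assumed to be unit vectors pointing away from the surface point.
double blinnPhongSpecular(const Vec3& n, const Vec3& toLight, const Vec3& toEye,
                          double shininess)
{
    Vec3 h = normalize({ toLight.x + toEye.x,
                         toLight.y + toEye.y,
                         toLight.z + toEye.z });
    double nDotH = dot(n, h);
    return nDotH > 0.0 ? std::pow(nDotH, shininess) : 0.0;
}
```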

Something is still odd with the recursive rays, though. The next picture shows the rays that hit an object after one reflection, again with a light source at the center between the eye and the image plane:

Earlier I had fixed a bug so that reflected rays would not obey the same range limit as initial rays, but in that bugfix I also allowed recursive rays to go backwards, which is what caused these false hits in the negative direction.
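The fix amounts to keeping a small positive lower bound on the ray parameter for recursive rays as well, so hits at negative t (behind the origin, i.e. backwards) are rejected along with immediate self-intersections; a minimal sketch:

```cpp
// A small positive lower bound keeps recursive rays from reporting hits
// behind their origin (negative t, i.e. going "backwards") and from
// immediately re-hitting the surface they were just reflected off.
const double T_MIN = 1e-4;

// Accept an intersection only if its ray parameter lies in the valid range.
// tMax can stay effectively unbounded for reflected rays, but tMin must not.
bool acceptHit(double t, double tMax) {
    return t > T_MIN && t < tMax;
}
```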

Now, with a big light source in front of the image plane and a shader that again visualizes the cosine between the hit direction and the normal, we see the rays from the three spheres hit the light up to a cut-off point, beyond which the rays essentially go off to the sides of the image instead of being reflected back towards the image plane. That is what causes the black halos around the spheres.