Caustics 2D

January 15, 2021

This is a 2D shader that uses a Monte Carlo method to render the scene. For every pixel, multiple samples are taken to determine the colour of that pixel. Each sample shoots a ray in a random direction, and we only check whether this ray hits or misses a light source. If it hits a light source, that light's colour contributes to the sample. If the ray hits an object instead, we reflect the ray and march the reflected ray through the scene. The number of times we reflect the ray when an object is hit is called the number of bounces.

Having explained the basic principle behind the rendering technique, the algorithm that follows from it is quite straightforward:

  1. For each pixel position $O$.
  2. Generate a random normalized direction vector $D$. This can be done by selecting a random variable $\underline{x} \sim \textrm{uniform}(0, 1)$, which is then used to create the vector $D = [\cos(2\pi\cdot\underline{x}), \sin(2\pi\cdot\underline{x})]$. Because this point is on the unit circle, it is already normalized.
  3. Shoot the ray $O + t\cdot D$ into the scene, and determine if an object is hit.
  4. If the object that is hit is a light source, add the contribution of the light source to the pixel result, and go to the next sample.
  5. If the object that is hit is not a light source, calculate the reflection ray, and use this ray in step 3. If the number of bounces exceeds a certain threshold, then the result of the sample is zero.

This implementation uses two reflection ray bounces to determine if a light source is hit. If we never hit a light source, we return an ambient colour instead of black.
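To make these steps concrete, the snippet below is a minimal GLSL sketch of the per-pixel sampling loop, not the actual ShaderToy source. The helpers `rand`, `lightDist`, `sceneDist`, `sceneNormal` and `lightColor`, as well as all the constants, are assumptions made purely for illustration.

```glsl
const int   SAMPLES     = 32;              // samples per pixel (assumed value)
const int   BOUNCES     = 2;               // two reflection bounces, as stated above
const int   MARCH_STEPS = 64;              // ray-march iterations per segment (assumed)
const float EPS         = 1e-3;
const float TWO_PI      = 6.28318530718;
const vec3  AMBIENT     = vec3(0.05);      // returned when no light source is hit (assumed)

vec3 samplePixel(vec2 O, float seed)
{
    vec3 result = vec3(0.0);
    for (int s = 0; s < SAMPLES; s++)
    {
        // Step 2: pick a random normalized direction on the unit circle.
        float x = rand(vec2(seed, float(s)));          // uniform(0, 1), hypothetical helper
        vec2  D = vec2(cos(TWO_PI * x), sin(TWO_PI * x));

        vec2 p         = O;
        vec3 sampleCol = AMBIENT;                      // kept if the sample never hits a light
        for (int b = 0; b <= BOUNCES; b++)
        {
            // Step 3: march the ray p + t*D until it hits a light or an object.
            float t       = 0.0;
            int   outcome = 0;                         // 0 = miss, 1 = light, 2 = object
            vec2  q       = p;
            for (int i = 0; i < MARCH_STEPS; i++)
            {
                q = p + t * D;
                float dl = lightDist(q);               // distance to the nearest light (assumed)
                float ds = sceneDist(q);               // distance to the nearest object (assumed)
                if (dl < EPS) { outcome = 1; break; }
                if (ds < EPS) { outcome = 2; break; }
                t += min(dl, ds);
            }

            if (outcome == 1)                          // Step 4: a light source was hit
            {
                sampleCol = lightColor(q);             // hypothetical helper
                break;
            }
            else if (outcome == 2)                     // Step 5: reflect and march again
            {
                vec2 n = sceneNormal(q);               // hypothetical helper
                D = reflect(D, n);
                p = q + 2.0 * EPS * n;                 // offset to avoid self-intersection
            }
            else
            {
                break;                                 // the ray left the scene entirely
            }
        }
        result += sampleCol;
    }
    return result / float(SAMPLES);
}
```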

The algorithm also uses an anti-aliasing technique called jittering, also known as stratified sampling. The image below illustrates how the samples are generated with this algorithm.


Image from Wikipedia

First, the pixel is divided into smaller sub-pixels, and a random point is selected within each sub-pixel. Second, a sample is sent into the scene from each of these points to determine if a light source is hit. The same stratification is applied when generating the direction vectors. Stratification ensures that we do not randomly generate a clump of sample points, or a set of nearly identical direction rays, which helps the image converge faster.
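As a rough sketch (again, not the original source), stratifying the direction angle and jittering the sub-pixel positions could look like the snippet below. The `rand` helper, the stratum count and the sub-pixel grid size are assumptions; only `fragCoord` and `iResolution` are standard ShaderToy inputs.

```glsl
// Stratified directions: split the angle range [0, 1) into SAMPLES strata and
// jitter within each stratum, instead of drawing SAMPLES independent angles.
for (int s = 0; s < SAMPLES; s++)
{
    float x = (float(s) + rand(vec2(seed, float(s)))) / float(SAMPLES);  // stratum + jitter
    vec2  D = vec2(cos(TWO_PI * x), sin(TWO_PI * x));
    // ... march the ray as in the sketch above ...
}

// Stratified (jittered) sub-pixel positions for anti-aliasing: one sample per
// cell of an N x N grid inside the pixel, offset randomly within its cell.
const int N = 2;                                       // sub-pixel grid size (assumed)
for (int i = 0; i < N; i++)
for (int j = 0; j < N; j++)
{
    vec2 jitter = vec2(rand(vec2(float(i), float(j))),
                       rand(vec2(float(j), float(i))));
    vec2 O = fragCoord + (vec2(float(i), float(j)) + jitter) / float(N);
    // ... evaluate samplePixel(O, ...) and average the results ...
}
```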

The entire shader is hosted on ShaderToy, and I have also included a live version below:

Version A: Buffered result, denoised
Use the left mouse button to move the light source(s). While the button is pressed, the image denoises a bit quicker so that it feels more responsive; the trade-off is a little more noise.


To combine the current frame and the buffered frames, the following formula is used: $$\textrm{color}_\textrm{out} = \alpha \cdot \textrm{buffer} + (1-\alpha)\cdot\textrm{rgb}$$ where $\alpha = 0.995$. Using this method, the buffered image slowly converges to the value of $\textrm{rgb}$, and the noise is barely visible.
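In ShaderToy terms this blend is a single `mix` in the buffer pass. The sketch below assumes the accumulated previous frame is stored in `iChannel0`, that a hypothetical `renderScene` produces the current noisy frame, and that $\alpha$ is lowered slightly while the mouse button is held to get the faster response mentioned for Version A; the exact values are assumptions.

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv   = fragCoord / iResolution.xy;
    vec3 rgb  = renderScene(fragCoord);             // current noisy frame (hypothetical helper)
    vec3 prev = texture(iChannel0, uv).rgb;         // the "buffer" term from the formula above

    // Lower alpha while the left mouse button is pressed so the image reacts
    // faster to the moving light source, at the cost of a bit more noise.
    float alpha = (iMouse.z > 0.0) ? 0.95 : 0.995;  // values are assumptions

    fragColor = vec4(mix(rgb, prev, alpha), 1.0);   // alpha*buffer + (1 - alpha)*rgb
}
```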

Version B: animated light source, increased noise
A lot of noise can be seen in this image. I could add more samples per pixel, or more light samples, but the shader is already heavy as it is.

