Relativistic Rendering using Pathtracing

University of California, San Diego
CSE 168 Final Project, 2025

The clip shows a scene with a black hole rendered using my pathtracer. The source sky is NASA's picture of AG Carinae and its surrounding nebula. The black disc in the center is the shadow of the black hole, with an apparent radius of roughly 2.6 times the Schwarzschild radius. Within the shadow lies the Event Horizon, where the gargantuan curvature of spacetime caused by extreme gravitational effects is so strong that not even light can escape, causing the region to appear almost perfectly black. The outer ring that appears to act as a lens is called the Einstein Ring; the nebula behind the black hole is visible through it due to gravitational lensing. The dark ring next to the Einstein Ring is the Photon Sphere, the region where the velocity vector of the light is just right for it to orbit the black hole. (Time to render: ~40 minutes on an M1 Pro MacBook Pro; for reference, the Stanford dragon with a GGX BRDF takes around 700 milliseconds to render after BVH construction.)

Introduction

In this final project, I build upon the pathtracer developed over the course of CSE 168. The pathtracer started as a humble raytracer with basic Blinn-Phong shading; over time, I added more features to it, finally culminating in this relativistic renderer. The black hole that appears in the scene is a Schwarzschild black hole, meaning it is non-rotating, static, and uncharged. The black hole is rendered by solving the geodesic equations of motion for a photon in the Schwarzschild metric. For the sake of simplicity, I also use natural units where c = G = 1. For a Schwarzschild black hole, setting the mass (M) to 0.5 makes the radius of the EH (event horizon) exactly 1. These choices keep the differential equations manageable.
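To make that last point concrete: in natural units the Schwarzschild radius reduces to twice the mass, so the choice M = 0.5 pins the event horizon at radius 1.

$$ r_s = \frac{2GM}{c^2} = 2M = 2 \times 0.5 = 1 $$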

Describing the Geodesic Equations

Special thanks to Riccardo Antonelli's blog post for going over the complicated math behind geodesics!

The geodesic equations describe the motion of particles and light in curved spacetime. For a Schwarzschild black hole, these equations are derived from the Schwarzschild metric, which represents the spacetime geometry around a non-rotating, uncharged black hole. In its general form, the geodesic equation looks like this:

$$ \frac{d^2 x^\mu}{d\lambda^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\lambda}\,\frac{dx^\beta}{d\lambda} = 0 $$
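For reference, the metric these equations derive from, written in the units used here (G = c = 1, so r_s = 2M), is the standard Schwarzschild line element:

$$ ds^2 = -\left(1 - \frac{r_s}{r}\right) dt^2 + \left(1 - \frac{r_s}{r}\right)^{-1} dr^2 + r^2 \left( d\theta^2 + \sin^2\theta \, d\varphi^2 \right) $$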

To simplify this, I first work in polar coordinates, and from the plane of reference of the photon/ray. This plane is described by the cross product of the direction of the ray and the vector joining the origin of the ray to the center of the black hole. The spherical symmetry of a Schwarzschild black hole guarantees that the photon will only ever traverse this plane.

$$ \vec{n} = \vec{d} \times (B - A) $$

In the above equation, n is the normal vector describing the plane, A is the ray origin, B is the singularity, and d is the direction vector of the ray.

From this plane, I also derive the azimuthal angles that I later use to reconstruct the ray in 3D space.
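Here is a minimal sketch of this construction in Python; plane_basis is a hypothetical helper name (the actual implementation lives in my source code), and it assumes the ray is not aimed exactly through the center:

```python
import numpy as np

def plane_basis(A, d, B=np.zeros(3)):
    """Orthonormal frame for the photon's plane of motion.

    A: ray origin, d: ray direction, B: black hole center.
    n = d x (B - A) is the plane normal; e1 points from the black hole
    toward the ray origin, and e2 = n x e1 completes the in-plane basis,
    so a point on the trajectory is r * (cos(phi) * e1 + sin(phi) * e2).
    """
    A, d, B = np.asarray(A, float), np.asarray(d, float), np.asarray(B, float)
    n = np.cross(d, B - A)
    n /= np.linalg.norm(n)                 # degenerate if d passes through B
    e1 = (A - B) / np.linalg.norm(A - B)   # radial unit vector at the ray origin
    e2 = np.cross(n, e1)
    return e1, e2, n
```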

Skipping a few steps here: following our substitutions, the equation describing the geodesic takes the form of Newton's conservation of energy, and the final form of the equation is:

$$ \frac{d^2 u}{d\varphi^2} = \frac{3}{2} u^2 - u, \qquad u = \frac{1}{r} $$
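For context, the conservation-of-energy analogy comes from the first integral of this equation, which in these units (and with impact parameter b) can be written as:

$$ \left( \frac{du}{d\varphi} \right)^{2} + u^2 \left( 1 - u \right) = \frac{1}{b^2} $$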

Visualising Paths of the Rays

Following the equation above, I ran a simulation to visualise the paths of the rays. The following video shows the paths taken by rays randomly generated from a point. Paths generated using SciPy's powerful ODE solver in just 30 lines of code!

As you can clearly see, the extreme lensing scatters the light rays in wildly different directions.
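For the curious, here is a condensed sketch (not the exact 30-line script) of what such a simulation can look like with SciPy's solve_ivp; the launch radius, impact-parameter sweep, and tolerances are illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Photon geodesic with r_s = 1 (M = 0.5, G = c = 1):
#   d^2u/dphi^2 = (3/2) u^2 - u,  where u = 1/r.
def geodesic(phi, y):
    u, du = y
    return [du, 1.5 * u * u - u]

# Stop once the ray crosses the event horizon (u = 1/r_s = 1) ...
def hit_horizon(phi, y):
    return 1.0 - y[0]
hit_horizon.terminal = True

# ... or escapes back out to infinity (u = 0).
def escaped(phi, y):
    return y[0]
escaped.terminal = True

r0 = 20.0  # launch radius
for b in np.linspace(2.0, 8.0, 7):  # impact parameters to sweep
    u0 = 1.0 / r0
    # Initial slope from the first integral above, taking the inward root.
    du0 = np.sqrt(max(1.0 / b**2 - u0**2 * (1.0 - u0), 0.0))
    sol = solve_ivp(geodesic, (0.0, 4.0 * np.pi), [u0, du0],
                    events=[hit_horizon, escaped],
                    dense_output=True, rtol=1e-8)
    phis = np.linspace(0.0, sol.t[-1], 500)
    u = np.clip(sol.sol(phis)[0], 1e-9, None)        # guard against u -> 0
    px, py = np.cos(phis) / u, np.sin(phis) / u      # path in the ray's plane
```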

Implementation

Milestone 1: Skybox Implementation

The first milestone was to extend my raytracer to support skyboxes. I chose to use equirectangular skyboxes, which take a 2:1 sky image and map it onto a sphere at infinity. When a ray exits to infinity, it is mapped to the UV coordinates of the source image, from which I sample the pixel colors. The UV map was generated using a custom Python script.
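As a rough sketch of the mapping, assuming a y-up coordinate convention (my renderer's axes may differ), the direction-to-UV conversion looks like this:

```python
import numpy as np

def direction_to_uv(d):
    """Map a unit direction to equirectangular UV coordinates.

    u in [0, 1) wraps around the azimuth and v in [0, 1] runs from the
    top pole to the bottom pole, so a 2:1 image covers the full sphere.
    """
    x, y, z = d / np.linalg.norm(d)
    u = 0.5 + np.arctan2(z, x) / (2.0 * np.pi)
    v = 0.5 - np.arcsin(np.clip(y, -1.0, 1.0)) / np.pi
    return u, v
```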

Original UV Mapping

Skybox UV Mapping

The above images show the skybox through the raytracer. Here's a sample skybox with NASA's picture of the SH21 starfield along with a purely specular sphere.

As it rotates around, you can see the seam at the edge of the skybox. This is intended and known, and is usually fixed by choosing an image whose left and right edges match up.

Milestone 2: Black Hole Renderer

In this milestone, I implemented the beginnings of the renderer. At this stage, the output was just the direction of the exiting photon. Although this cannot be used for raytracing yet, I was able to combine it with the skybox implementation to get some visualisations. The implementation details can be found in my source code. I used the Boost library's ODE solver, specifically the Cash-Karp RK54 stepper, to solve the geodesic IVP. This integrates the trajectory step by step from a given initial condition in polar coordinates in the plane of the ray. As I mentioned earlier, I used the plane azimuthals to recover the final 3D exit direction for the ray. This gave the visual that you saw at the top of the page, and also this:

Black Hole Sample Render

Sample render of the black hole in the same skybox as above

Here's the video of the black hole with the UV-map skybox:

Final Implementation: Exit Ray Reconstruction

I recreated the exit ray directions using the same strategy for the planes. Now I had a function that acted as a black box: give it any arbitrary ray, and it returns the exiting ray. I created the following pipeline:

  1. Emit rays from the camera source.
  2. Apply the black box function to each ray (see the sketch after this list). The singularity is always assumed to be at the center; for different sizes, we can simply apply the inverse transform to the rest of the objects in the scene.
  3. With the final set of exit rays, perform pathtracing with importance sampling, Russian roulette, next event estimation (NEE), etc.
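Here is a sketch of the reconstruction behind that black box, reusing the plane_basis helper from earlier; exit_direction is a hypothetical name, and it assumes the integrator hands back the photon's final in-plane state (phi, u, du/dphi):

```python
import numpy as np

def exit_direction(e1, e2, phi, u, du):
    """Rebuild the 3D exit direction from the final in-plane state.

    The tangent of the polar curve r(phi) = 1/u(phi) is proportional to
    (dr/dphi) * r_hat + r * t_hat, which (up to a positive factor of u^2)
    simplifies to -du * r_hat + u * t_hat.
    """
    r_hat = np.cos(phi) * e1 + np.sin(phi) * e2    # radial unit vector
    t_hat = -np.sin(phi) * e1 + np.cos(phi) * e2   # azimuthal unit vector
    v = -du * r_hat + u * t_hat
    return v / np.linalg.norm(v)
```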

This strategy gave me the following final render images:

Stanford Dragon from Front

Stanford Dragon with BH in front (you can see the light sources clearly near the EH, which are not visible in the original image)

Notes

Why not Raymarching?

My initial choice was to use raymarching to perform the raytracing. This would involve solving the geodesic at every pre-specified timestep and then querying the BVH for intersections. The most significant advantage of raymarching is the ease of getting the signature accretion disk to appear. That being said, raymarching was simply too computationally expensive to run on my machine. This is a CPU-based pathtracer; I have an Apple Metal GPU-accelerated pathtracer in the works, but that's a personal project of mine adjacent to the CSE 168 class raytracer, and since the Metal pipeline is relatively self-contained with little adjustability, it would interfere with the required implementations in the classwork. I therefore deemed the raymarcher too expensive and came up with the mathematical tricks described above to get something very close to a fully raymarched solution.

Future Work

These are the things I will be expanding upon in the future (in no particular order):

  • GPU-accelerated ray marching + accretion disk
  • Non-static black holes
  • Real-time rendering (p.s. this is extremely difficult even for the most basic black holes)

Inspiration

Veritasium's video on Black Holes was a huge inspiration for this project.

ScienceClic's video on Interstellar's math was also a major source of inspiration.