What is a Light Field?

A light field is the collection of light rays passing through a region of space. To capture a light field means to record those rays (their color, intensity, and directionality) from a particular perspective, so that they can be recreated in software as a 3D representation of that scene.

There are many groups doing incredible work capturing light fields in the real world, like Visby and Google Light Fields.

The Looking Glass is a rare example of a light field display: a device capable of reproducing the color, intensity, and directionality of rays of light. While our display only re-projects horizontal directionality (along the x-axis), this is sufficient to create lifelike recreations of 3D scenes.

In addition to capturing light fields of real perspectives, you can also capture light fields in 3D engines to create imagined holographic scenes.

1. Capturing a light field

We capture light fields using a multi-view paradigm. We capture multiple perspectives of the same scene, offset horizontally, and use our software to re-project the views. You can take any number of views, but given the optics of our display there is little benefit to going above 100. Our standard for real-time engines ranges between 45 and 48 views.

The size of each view is also variable. The Looking Glass Portrait uses 48 views at 512 by 682 pixels (tiling into a roughly 4096 by 4096 quilt), the 15.6" display uses 45 views at 819 by 455, and the Looking Glass 8K uses 45 views at 1638 by 910. However, if creators want to ensure there is enough data to allow end users to zoom in on details, making each view larger, even up to 4K, is a possible approach.
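The quilt arithmetic above is easy to check. Here is a minimal sketch; the `quilt_size` helper and the 8-column-by-6-row Portrait layout are illustrative assumptions, inferred from the 4096-pixel quilt width mentioned above, not part of any SDK.

```python
def quilt_size(view_width, view_height, columns, rows):
    """Total pixel size of a quilt that tiles columns x rows views."""
    return (view_width * columns, view_height * rows)

# Looking Glass Portrait: 48 views, assumed tiled as 8 columns x 6 rows
print(quilt_size(512, 682, 8, 6))  # -> (4096, 4092), i.e. roughly 4096 x 4096
```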

As for how the content is captured — there are three main camera arrangements that are possible.

Arc Capture

This capture technique moves the camera in an arc around a central point of focus, keeping the camera equidistant from the focal point and rotating around the camera's up vector. The number of degrees to rotate can be queried from HoloPlay Service under the parameter "view cone" if you're using the HoloPlay Core SDK. If you aren't, we recommend rotating the camera a total of 35 degrees.
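As an illustration, here is one way to compute the arc camera positions in the x-z plane. This is a hedged sketch rather than SDK code: `view_cone_deg` stands in for the "view cone" value you would query from HoloPlay Service, with the 35-degree fallback suggested above.

```python
import math

def arc_camera_positions(focal_point, radius, num_views, view_cone_deg=35.0):
    """Place num_views cameras on a horizontal arc around focal_point.

    Every camera stays `radius` away from the focal point, and the whole
    set spans view_cone_deg degrees, centered behind the focal point.
    """
    fx, fz = focal_point
    positions = []
    for i in range(num_views):
        t = i / (num_views - 1) if num_views > 1 else 0.5
        angle = math.radians((t - 0.5) * view_cone_deg)  # -cone/2 .. +cone/2
        positions.append((fx + radius * math.sin(angle),
                          fz + radius * math.cos(angle)))
    return positions

cams = arc_camera_positions((0.0, 0.0), radius=5.0, num_views=45)
```

Each returned camera should then be aimed at the focal point; because the positions lie on a circle, every view keeps the same distance to the subject.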

You can store this data in many formats; however, we recommend storing it as a quilt or as a video, with each perspective constituting a frame. HoloPlay Studio is optimized to support these two storage methods.

[Insert drawing and/or gif of an arc capture]

Rail Capture

This capture technique moves the camera along a straight line, with the middle capture pointing directly at the focal point. With this approach, you'll end up discarding data to create the 3D effect, so it is best to capture larger renders than you would otherwise need. After the capture, you'll end up with a set of images or a video.
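Positioning cameras for a rail capture is simple, since only the horizontal offset changes between views. A minimal sketch follows; the travel distance is an assumed parameter that depends on your rig and on how much parallax you want.

```python
def rail_camera_positions(center_x, travel, num_views):
    """Return evenly spaced x offsets for cameras on a straight rail.

    All cameras share the same z position and face forward; only the
    middle capture points directly at the focal point.
    """
    if num_views == 1:
        return [center_x]
    step = travel / (num_views - 1)
    return [center_x - travel / 2 + i * step for i in range(num_views)]

xs = rail_camera_positions(0.0, travel=1.0, num_views=45)
```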

Properly framing your capture so that the desired content ends up in focus typically requires some manual adjustment. Additionally, the data sets for this format tend to be large, since so much data is discarded in order to create the light field. However, given how easy this kind of light field is to capture, it is a useful technique, and one we employ in our capture rail setup.

View Shearing Capture

We also refer to this as a capture using a projection matrix deformation. Essentially, we offset the camera and alter its projection matrix for each view, enabling us to properly recreate the light field in our renderer.

This is the best form of capture: it requires no data correction on the rendering side (as an arc capture does), and it discards no data (as a rail capture does). For this reason, it has been our standard approach to rendering light fields in synthetic environments, and we typically store these captures in our "quilt" format.

However, taking this kind of capture requires the ability to access and modify the camera's projection matrix, which isn't always possible and only applies to synthetic capture environments.
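To give a feel for the math, here is a rough sketch of a sheared perspective matrix for one view. This is an illustrative reconstruction, not the sample code from the "Moving the Camera" page; the function name and parameters are assumptions.

```python
import math

def sheared_projection(fov_deg, aspect, near, far, angle_deg, focal_dist):
    """Build a column-vector perspective matrix with a horizontal shear.

    angle_deg is this view's angle within the view cone. The camera is
    assumed to have been offset sideways by focal_dist * tan(angle), and
    the shear term re-centers the focal plane after that offset.
    """
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    offset = focal_dist * math.tan(math.radians(angle_deg))
    m = [[f / aspect, 0.0, 0.0, 0.0],
         [0.0, f,     0.0, 0.0],
         [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
         [0.0, 0.0, -1.0, 0.0]]
    # Half-width of the frustum at the focal plane:
    half_width = focal_dist * math.tan(math.radians(fov_deg) / 2.0) * aspect
    m[0][2] = offset / half_width  # horizontal shear
    return m, offset
```

At angle 0 (the center view) the offset and shear are both zero, so the matrix reduces to an ordinary perspective projection.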

To see exactly how to deform the projection matrix for this kind of capture, including sample code, see our page on "Moving the Camera."

[This article calls it "off-axis" rendering]

2. Storing a light field

Once you have your image data making up your light field, you can store the views in a number of different formats.
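As a concrete example of the quilt format, here is a toy sketch of tiling views into a single image. Pixels are plain 2D lists here (a real pipeline would use numpy or PIL), the views are tiled bottom-left first to match the common quilt convention, and the helper itself is illustrative rather than SDK code.

```python
def make_quilt(views, columns, rows, view_w, view_h):
    """Tile per-view images into one quilt, view 0 at the bottom-left."""
    quilt = [[0] * (columns * view_w) for _ in range(rows * view_h)]
    for idx, view in enumerate(views):
        col = idx % columns
        row = rows - 1 - idx // columns  # bottom row holds the first views
        for y in range(view_h):
            for x in range(view_w):
                quilt[row * view_h + y][col * view_w + x] = view[y][x]
    return quilt

# Tiny example: four 1x1 "views" tiled into a 2x2 quilt
views = [[[v]] for v in (1, 2, 3, 4)]
quilt = make_quilt(views, columns=2, rows=2, view_w=1, view_h=1)
# Rows top to bottom: [3, 4] above [1, 2], so the first view sits bottom-left
```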

3. Displaying a light field

The most robust solution, and the one we recommend, is HoloPlay Studio [link to documentation for the app]. With HoloPlay Studio, you can view and edit all forms of light fields described above, both video and photo assets. It has an accessible UI and can even export content to be played back by the Looking Glass Portrait [link to product page] in standalone mode. However, HoloPlay Studio must be launched as a separate application; support for using the renderer component without launching the app or showing the UI is targeted for Q2 2021.