Next-gen displays: Are you seeing this?

Adrian Pennington looks closely at developments in the capture, display and workflow of light fields to create VR experiences you can walk around, just as the technology and products are gathering steam.

The race is on to develop the hologram, and the next stepping stone may be light fields. But before we get started, what do we mean by ‘light field’ and ‘hologram’? A real hologram projects an accurate visual representation of an object in a 3D space. Pepper’s Ghost, autostereoscopic (glasses-free) 3D, LEDs mounted on spinning fan blades, and AR or VR headsets all fall short of that definition.

Jon Karafin, CEO of Light Field Lab, defines a hologram as ‘a projection of the encoding of the light field’. So if you can capture a light field, it should be possible to display it as a hologram. The likely first use cases for the technology will be in location-based entertainment.

Chris Chinnock, president of analyst firm Insight Media, sees light fields fuelling VR, AR and mixed or extended reality applications viewable on smartphones or on smart glasses like HoloLens or Magic Leap.

Ryan Damm, co-founder of holographic imaging software developer Visby, says that light fields will follow VR’s overall path: used first in entertainment such as gaming and cinematic applications before moving into verticals covering everything from enterprise to education and design.

Other use cases will include 3D visualisation of scientific and medical data (image-guided surgery and diagnostics), air traffic control with 3D visualisation of airspace, command and control of highly complex 3D environments, as well as head-up displays (HUDs), gaming and augmented reality headsets.

Capture

Capturing light fields ideally means recording all the light travelling in all directions through a given volume of space. This is variously termed plenoptic capture, in reference to recording the intensity and chromaticity of the light observed from every position and direction, or volumetric capture, in reference to recording the scene’s depth information.
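For reference, the plenoptic function from the imaging literature makes this definition precise. This is a standard textbook formulation, not one given in the article:

```latex
% Radiance L arriving at position (x, y, z) from direction (theta, phi),
% at wavelength lambda and time t -- the full plenoptic function:
\[ L = L(x, y, z, \theta, \phi, \lambda, t) \]

% In free space radiance is constant along a ray, so a static light field
% reduces to four dimensions: each ray is indexed by where it crosses two
% parallel planes, at (u, v) and (s, t):
\[ L = L(u, v, s, t) \]
```

Practical rigs sample this 4D function at a finite set of positions and directions; the denser the sampling, the more faithfully a display can later reconstruct it.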

An early leader in this space was Lytro, but the company folded in March 2018. Much of the light field capture development has since fallen to Google, which acquired some of the assets and engineers from Lytro. It has been testing a number of camera rigs, with Chinnock noting: “Google has used the rigs to capture some very compelling scenes of the space shuttle flight deck. It is clearly some of the best light field content available.”

In Germany, startup K-Lens has developed a lens that it claims can give any standard DSLR camera the attributes of light field capture. It is currently a prototype purely for still images, but a commercial launch is pencilled in for this year. The company, which emerged out of the Max Planck Institute for Informatics and Saarland University, is also researching a commercial light field camera with which it plans to target the professional film industry.

Volumetric capture is also gaining traction in Hollywood as a means to record huge amounts of data for use in performance capture films.

The technique records a scene from arrays of synchronised cameras shooting simultaneously. Microsoft’s Holographic Capture studio has over 150 cameras directed at an 8ft-square platform surrounded by a green screen. London’s Hammerhead VR has a similar set-up, licensed by Microsoft, using 106 cameras.

Display

Provided sufficient information about an object or scene is first captured as data, it can then be encoded before being decoded for display as a hologram. A true holographic display does not require headgear, cabling or accessories. Ideally, a user has complete freedom of movement and is able to see and focus on an object no matter the angle from which it is viewed.

The number of pixels needed for high fidelity is staggering, as are the intensive GPU requirements. Chinnock says: “Extending current [light field display] technologies to get a compelling display will take a while, unless a totally new approach can be developed.”

A case in point is the smartphone display from camera maker RED. Its Hydrogen phone is touted as the ‘world’s first holographic media machine’, but its display, made by Menlo Park-based Leia Inc, is generally considered more autostereoscopic than holographic. It uses Diffractive Lightfield Backlighting, which displays different images and textures in different spatial directions to give the appearance of depth.

RED plans to develop a whole suite of cameras and monitors around the creation of 3D and mixed reality content. Since RED cameras are already used to shoot a wide variety of high-end film and TV content, the company is well placed to take the likes of Netflix or Disney into the future.

Another startup, Holochip, is working on both single-user and multi-user displays. With smaller screens, smaller fields of view and a lower radiance image resolution, single-user displays are the less challenging of the two.

Samuel Robinson, VP of engineering at Holochip, told Insight Media’s Display Summit that the company is working on a helicopter flight simulator in which 3D depth perception is critical for landing training.

The Looking Glass from Looking Glass Factory is a patent-pending combination of light field and volumetric display technologies within a single three-dimensional display. A total of 45 unique simultaneous views of a virtual scene are captured at 60fps and encoded into video signals sent via HDMI to the display.

The Looking Glass optics decode the signal into a colour ‘superstereoscopic’ scene. Its ‘holograms’ are a 5 x 9 grid of views of a 3D scene, each from a slightly different angle. You can order an 8.9-in desktop version of the Looking Glass, or a 15.6-in unit for simulation, design and retail display, built at the firm’s Hong Kong lab.
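To make that encoding step concrete, here is a minimal Python sketch of how 45 views might be tiled into a single 5 x 9 video frame. The tile order and per-view resolution are illustrative assumptions of ours, not Looking Glass Factory’s published format.

```python
import numpy as np

COLS, ROWS = 5, 9           # 45 views in a 5 x 9 grid, as described above
VIEW_W, VIEW_H = 384, 240   # per-view resolution (illustrative assumption)

def pack_quilt(views):
    """views: list of 45 (VIEW_H x VIEW_W x 3) uint8 arrays, view 0..44."""
    assert len(views) == COLS * ROWS
    frame = np.zeros((ROWS * VIEW_H, COLS * VIEW_W, 3), dtype=np.uint8)
    for i, view in enumerate(views):
        r, c = divmod(i, COLS)  # fill the grid row by row
        frame[r*VIEW_H:(r+1)*VIEW_H, c*VIEW_W:(c+1)*VIEW_W] = view
    return frame

# Example: 45 synthetic views, each a flat shade that varies with angle.
views = [np.full((VIEW_H, VIEW_W, 3), i * 5, dtype=np.uint8) for i in range(45)]
frame = pack_quilt(views)   # one such frame per tick of the 60fps signal
print(frame.shape)          # (2160, 1920, 3)
```

The display’s optics then route each tile out at a different angle, which is why the one HDMI signal can carry all 45 views at once.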

There’s even a Vimeo channel with content created in the Unity game engine for playback on the display. The developer claims: “While Looking Glass is technically a light field display with volumetric characteristics, it’s the closest we’ve ever come to putting the holograms we know and love from Star Wars on our desks.”

That claim is hotly disputed at Light Field Lab where Karafin defines a true holographic display as one which “projects and converges rays of light such that a viewer’s eyes can freely focus on generated virtual images as if they were real objects.”

That is what Light Field Lab is working on, with commercial products launching from 2020. Its prototype display measures 4-in x 6-in and has 16k x 19k pixels, which are used to create light rays in many directions (exactly how it does this is not disclosed). Its modular design means the blocks can be joined to create larger display walls or, eventually, entire rooms. A series of 18-in displays combined into a videowall is said to be capable of projecting holograms tens of feet out. Its target customers are casinos and location-based entertainment vendors, theatres, sports venues and experiential retail.
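Those panel figures are easy to underestimate. A quick back-of-envelope in Python, using the numbers quoted above (the 4K comparison is ours):

```python
# Pixel budget of Light Field Lab's quoted prototype tile: 16k x 19k
# pixels on a 4-in x 6-in module, compared with a 4K UHD panel.
tile_px = 16_000 * 19_000          # 304,000,000 pixels per tile
uhd_px = 3_840 * 2_160             # ~8.3 million pixels in a 4K UHD panel

print(f"{tile_px / 1e6:.0f} megapixels per tile")   # 304 megapixels
print(f"{tile_px / uhd_px:.0f}x a 4K panel")        # ~37x
```

Roughly 37 full 4K panels’ worth of pixels in a tile smaller than a paperback: hence both the optimism about the optics and the scepticism about driving them.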

The company recently joined forces with OTOY in an effort to develop a content and technology pipeline that would turn the Holodeck into a reality. Chinnock says: “Light Field Lab claims this vision of the Holodeck is just a few years off. I think that is way too optimistic. It’s more like 10 years. Their prototype display currently only produces a few inches of depth.”

Compression

Streaming an uncompressed light field would require broadband speeds of 500Gbps up to 1Tbps, something unlikely in the next 50 years. Being able to work with so much data, let alone transmit it, requires serious compression.
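Figures of that order can be reproduced from first principles. A minimal sketch, with every parameter an illustrative assumption of ours rather than a figure from any shipping system:

```python
# Rough raw bit rate of a multi-view light field stream. All parameters
# are illustrative assumptions, not vendor figures.
views = 45               # number of discrete views (e.g. one quilt's worth)
w, h = 3_840, 2_160      # per-view resolution (4K UHD)
bits_per_px = 24         # 8-bit RGB
fps = 60

bits_per_sec = views * w * h * bits_per_px * fps
print(f"{bits_per_sec / 1e9:.0f} Gbps uncompressed")   # ~537 Gbps
```

Forty-five uncompressed 4K views already land in the 500Gbps range; denser ray sampling pushes the total towards the terabit mark.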

A group at the standards body MPEG is drafting a means of enabling the ‘interchange of content for authoring and rendering rich immersive experiences’. It goes under the snappy title of ‘Hybrid Natural/Synthetic Scene’ (HNSS). Cable industry R&D consortium CableLabs, along with OTOY and Light Field Lab, contributes to MPEG’s work in this area. The basis appears to be a file format called ORBX, originally developed by OTOY as a large ‘container’ to carry all kinds of graphics and effects data and make it easy to interchange files between facilities.

Arianne Hinds, principal architect, CableLabs, says: “Work is now underway to create a media format specification, optimised for interchanging light field images. This is based on scene graphs which contain information related to the logical, spatial, or temporal representation of visual and audio information.” An update is due in early 2019.
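Hinds’ description maps onto a familiar data structure. The sketch below is a generic scene graph in Python, illustrating the logical (hierarchy), spatial (transform) and temporal (timing) fields she mentions; it is not the MPEG HNSS or ORBX specification, and every field name here is our own invention.

```python
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    name: str
    transform: list               # 4x4 matrix, row-major (spatial placement)
    start_s: float = 0.0          # when the node becomes active (temporal)
    duration_s: float = None      # None = persists for the whole scene
    asset: str = None             # e.g. a light field, mesh or audio clip URI
    children: list = field(default_factory=list)  # logical hierarchy

IDENTITY = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]
root = SceneNode("stage", IDENTITY, children=[
    SceneNode("actor", IDENTITY, asset="captures/actor.lf"),
    SceneNode("ambience", IDENTITY, asset="audio/room.wav"),
])
```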

Another standardisation initiative, JPEG Pleno, addresses interoperability issues among light fields, point clouds and holography. Zahir Alpaslan, director of display systems at Ostendo, who is working on the initiative, told the Display Summit in October 2018 that point clouds remain immature and that terapixels of data will be needed to move to true holographic solutions.

Light Field Lab has its own vector-based video format that it says will make it possible to stream holographic content at 300Mbps over ‘next-generation broadband connections’, by which it means 5G connectivity. MPEG is also developing Video-based Point Cloud Compression (V-PCC) with the goal of enabling avatars or holograms that exist as part of an immersive extended reality. Ralf Schaefer, director of standards at Technicolor Corporate Research, says: “One application of point clouds is to use it for representing humans, animals, real-world objects or complete scenes.”
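The core trick in V-PCC is to project 3D points onto 2D image planes so that geometry (as depth) and colour can be handed to an ordinary video codec. The Python sketch below shows a single-plane version of that projection step; real V-PCC segments the cloud into many patches and packs them into atlases, so this is a simplification, not the standard’s algorithm.

```python
import numpy as np

def project_to_plane(points, colors, res=256):
    """Orthographically project a point cloud onto one image plane.
    points: Nx3 floats in [0, 1); colors: Nx3 uint8."""
    depth = np.zeros((res, res), dtype=np.float32)
    attr = np.zeros((res, res, 3), dtype=np.uint8)
    u = (points[:, 0] * res).astype(int)
    v = (points[:, 1] * res).astype(int)
    for i in np.argsort(points[:, 2])[::-1]:  # write far-to-near: near wins
        depth[v[i], u[i]] = points[i, 2]
        attr[v[i], u[i]] = colors[i]
    return depth, attr  # two ordinary images a video encoder can compress

pts = np.random.rand(10_000, 3)
cols = (np.random.rand(10_000, 3) * 255).astype(np.uint8)
depth_map, color_map = project_to_plane(pts, cols)
```

Once the cloud is flattened into depth and colour maps, decades of 2D video compression machinery can be reused, which is the whole appeal of the approach.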

V-PCC is all about six degrees of freedom (6DoF), or fully immersive movement in 3D space, the goal Hollywood studios believe will finally make virtual and blended reality take off. The V-PCC specification is planned for publication by ISO late next year, so the first products could be in the market by 2020.

Chinnock says: “There is a clear division in opinions as to how to represent next generation images. One camp sees the evolutionary path of better encoding of the traditional video signal paradigm. The other is more analogous to a gaming pipeline where everything is a 3D model with various levels of fidelity and realism.”
