AMD Tackles the Holy Grail of Life-Like Presence for Virtual Reality Game Developers

Posted on Tuesday, March 10, 2015 by STUART PARKERSON, Global Sales

AMD has released the LiquidVR platform, which aims to deliver life-like presence for players through technologies that enable virtual reality content development on AMD hardware, improve comfort in VR applications through better performance, and provide plug-and-play compatibility with VR headsets.

In virtual reality, 'presence' describes the perception of being physically present in a simulated, nonphysical world in a way that fully immerses the user. LiquidVR tackles a key obstacle to achieving presence by addressing motion-to-photon latency: the time between when users move their head and when their eyes see an updated image reflecting the new position. Minimizing motion-to-photon latency is critical to achieving both presence and comfort, two key elements of great VR.
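As a rough illustration of how motion-to-photon latency accumulates across the pipeline, the stages can simply be summed. The stage names and millisecond figures below are illustrative assumptions for this sketch, not AMD measurements:

```python
# Illustrative motion-to-photon latency budget. Every stage between the
# head movement and the photons leaving the display adds to the total.
# All figures here are assumed for illustration, not AMD data.

def motion_to_photon_ms(stages_ms):
    """Total latency is the sum of every pipeline stage's duration."""
    return sum(stages_ms.values())

budget = {
    "sensor_read": 1.0,     # head-tracker sampling and transfer
    "cpu_simulation": 4.0,  # application/game update
    "gpu_render": 8.0,      # rendering both eye views
    "scanout": 11.1,        # one refresh period at a 90 Hz display
}

total = motion_to_photon_ms(budget)  # ~24 ms with these assumed figures
```

Summing the stages makes clear why no single component can fix latency alone: the GPU, the application, and the display each contribute, which is why LiquidVR spans the whole pipeline.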

Reducing latency involves the entire processing pipeline, from the GPU, to the application, to the display technology in the headset. AMD's GPU software and hardware subsystems are part of that latency equation, and with LiquidVR, AMD aims to bring smooth, liquid-like motion and responsiveness to game developers and content creators for life-like presence in VR environments powered by AMD hardware.

The alpha version of the LiquidVR SDK 1.0 is now available and offers these features:

- Async Shaders for smooth head-tracking, enabling Hardware-Accelerated Time Warp: a technology that uses updated information on the user's head position after a frame has been rendered, then warps the image to reflect the new viewpoint just before sending it to the VR headset, minimizing the latency between when users turn their head and what appears on screen.

- Affinity Multi-GPU for scalable rendering: a technology that lets multiple GPUs work together to improve frame rates in VR applications by allowing applications to assign work to specific GPUs. Each GPU renders the viewpoint of one eye, and the outputs are then composited into a single stereo 3D image. This makes multi-GPU configurations ideal for high-performance VR rendering, delivering high frame rates for a smoother experience.

- Latest Data Latch for smooth head-tracking: a programming mechanism that delivers head-tracking data from the head-mounted display to the GPU as quickly as possible by binding the data as close to real time as possible, practically eliminating API overhead and reducing latency.

- Direct-to-Display for intuitively attaching VR headsets: delivering a seamless plug-and-play virtual reality experience from an AMD Radeon graphics card to a connected VR headset, while enabling features such as booting directly to the headset display or using Windows' extended display features.
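The time-warp idea above can be sketched as a pure rotation: the warp applies only the delta between the head pose a frame was rendered with and the freshest pose available just before scan-out. This is a conceptual NumPy sketch of that reprojection, not AMD's implementation:

```python
import numpy as np

def yaw_matrix(deg):
    """Rotation about the vertical (y) axis by `deg` degrees."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def timewarp_rotation(render_yaw_deg, display_yaw_deg):
    """The warp applies only the *delta* rotation between the pose the
    frame was rendered with and the latest latched pose."""
    return yaw_matrix(display_yaw_deg) @ yaw_matrix(render_yaw_deg).T

# Frame rendered with the head at 0 degrees; by scan-out time the head
# has turned 2 degrees, so the image is re-projected by that delta.
warp = timewarp_rotation(0.0, 2.0)
forward = np.array([0.0, 0.0, -1.0])   # view direction at render time
corrected = warp @ forward             # direction the warped image faces
```

Because the warp is a cheap image-space correction applied at the last moment, it hides rendering latency without re-rendering the scene, which is why running it asynchronously to the main render workload matters.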

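Conceptually, GPU affinity amounts to tagging each piece of work with a bitmask of the GPUs that should execute it, so each eye renders on its own GPU. The class and names below are an illustrative sketch of that idea, not the LiquidVR API:

```python
# Illustrative affinity-mask dispatch: bit N of the mask selects GPU N.
# Class and constant names are assumptions for this sketch.

LEFT_GPU = 0b01   # bit 0 -> GPU 0
RIGHT_GPU = 0b10  # bit 1 -> GPU 1

class AffinityQueue:
    def __init__(self, gpu_count=2):
        self.gpu_count = gpu_count
        self.submissions = []  # (gpu_index, work) pairs that would run

    def submit(self, work, affinity_mask):
        """Dispatch `work` only to GPUs whose bit is set in the mask."""
        for gpu in range(self.gpu_count):
            if affinity_mask & (1 << gpu):
                self.submissions.append((gpu, work))

queue = AffinityQueue()
queue.submit("render_left_eye", LEFT_GPU)     # GPU 0 only
queue.submit("render_right_eye", RIGHT_GPU)   # GPU 1 only
queue.submit("composite_stereo", LEFT_GPU | RIGHT_GPU)  # both GPUs
```

Splitting per-eye work this way scales naturally for stereo rendering, since the two eye views are nearly independent until the final composite.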

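The latest-data-latch idea can be illustrated with a single overwrite-in-place slot: the tracker keeps replacing the stored pose, and the consumer samples whatever is newest at the last possible moment instead of draining a queue of stale updates. This CPU-side sketch is illustrative only; LiquidVR latches the data for the GPU itself:

```python
import threading

class LatestDataLatch:
    """One slot holding only the newest sample; older samples are
    overwritten rather than queued. Illustrative sketch, not the
    LiquidVR mechanism."""

    def __init__(self):
        self._lock = threading.Lock()
        self._pose = None

    def publish(self, pose):
        with self._lock:   # writer always replaces, never queues
            self._pose = pose

    def latch(self):
        with self._lock:   # reader takes the freshest sample available
            return self._pose

latch = LatestDataLatch()
for yaw in (0.0, 0.5, 1.0):       # tracker updates arrive rapidly
    latch.publish({"yaw_deg": yaw})
pose = latch.latch()              # latched just before submit: newest pose
```

The design choice is that freshness beats completeness for head tracking: dropping intermediate samples is harmless, while rendering from a stale pose directly increases perceived motion-to-photon latency.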