Amazon Web Services Offers 3D Gaming Graphics Enhancements
App Developer Magazine




Stuart Parkerson in Programming, Thursday, November 7, 2013

Do you want to build fast 3D applications that run in the cloud and deliver high-performance 3D graphics to mobile devices, TV sets, and desktop computers? Sure you do, or you wouldn't be reading this! Now you can build 3D streaming applications with the new G2 instance type from Amazon Web Services (AWS) EC2.

The g2.2xlarge instance has the following specs: an NVIDIA GRID (GK104 "Kepler") GPU (Graphics Processing Unit) with 1,536 CUDA cores and 4 GB of video (frame buffer) RAM; an Intel Sandy Bridge processor running at 2.6 GHz with Turbo Boost enabled and 8 vCPUs (virtual CPUs); 15 GiB of RAM; and 60 GB of SSD storage. The instances run 64-bit code and use HVM virtualization; EBS-Optimized instances are also available.

The g2.2xlarge is another offering in the AWS GPU instance family, joining the existing CG1 instance type. The well-established cg1.4xlarge instance type is a great fit for HPC (High Performance Computing) workloads: the GPGPU (General Purpose Graphics Processing Unit) in the cg1 offers double-precision floating point and error-correcting memory. In contrast, the GPU in the g2.2xlarge works on single-precision floating point values and does not support error-correcting memory.

On the AWS blog, Amazon provides this nice explanation of the new GPU offering:

Let's take a step back and examine the GPU concept in detail. As you probably know, the image on the display of your computer or phone is backed by a region of memory known as a frame buffer. The color of each pixel on the display is determined by the value in a particular memory location. Back when I was young, this was called memory-mapped video. It was relatively easy to write code to compute the address corresponding to a particular point on the screen and to set the value (color) of a single pixel as desired. If you wanted to draw a line, rectangle, or circle, you (or some graphics functions running on your behalf) would need to compute the address of each pixel in the figure, one at a time. This was easy to implement, but relatively slow.
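The memory-mapped approach described above can be sketched in a few lines. This is a toy illustration, not any real video driver; the 320x200 dimensions echo a classic VGA mode, and one byte per pixel stands in for a palette index:

```python
# Toy sketch of memory-mapped video: the pixel at (x, y) lives at a
# computable offset into a flat frame buffer.
WIDTH, HEIGHT = 320, 200
framebuffer = bytearray(WIDTH * HEIGHT)  # one byte per pixel (palette index)

def set_pixel(x, y, color):
    """Write one pixel by computing its frame-buffer address."""
    framebuffer[y * WIDTH + x] = color

def draw_rect(x0, y0, w, h, color):
    """Drawing a shape means setting each of its pixels, one at a time."""
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            set_pixel(x, y, color)

draw_rect(10, 10, 4, 3, 0x0F)
```

Every figure decomposes into per-pixel writes, which is exactly why this style was easy to implement but slow: the CPU touches each pixel individually.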

Moving ahead, as games (one of the primary drivers of consumer-level 3D processing) became increasingly sophisticated, they implemented advanced rendering features such as texturing, shadows, and anti-aliasing. Each of these features contributed to the realism and the "wow factor" of the game, while requiring ever-increasing amounts of compute power for rendering. Think back just a decade or so, when gamers would routinely compare the FPS (frames per second) metrics of their games when running on various types of hardware.

It turns out that many of these advanced rendering features shared an interesting property. The computations needed to texture or anti-alias a particular pixel are independent of those required for the other pixels in the same scene. Moving some of this computation into specialized, highly parallel hardware (the GPU) reduced the load on the CPU and enabled the development of games that were even more responsive, detailed, and realistic.
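That independence property can be demonstrated with a toy "shader" applied to each pixel in isolation. This is an illustrative sketch (the doubling function is arbitrary, not real texturing math): because no pixel's result depends on any other pixel, sequential and parallel execution produce identical output, which is what lets GPUs fan the work out across thousands of cores.

```python
# Each output pixel depends only on its own inputs, so the per-pixel
# work parallelizes trivially -- the key property exploited by GPUs.
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    # Arbitrary stand-in for per-pixel texturing/anti-aliasing math.
    r, g, b = pixel
    return (min(r * 2, 255), min(g * 2, 255), min(b * 2, 255))

pixels = [(10, 20, 30), (100, 150, 200), (255, 0, 128)]

sequential = [shade(p) for p in pixels]
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(shade, pixels))

# Identical results regardless of execution order or concurrency.
assert sequential == parallel
```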

The game (or other application) sends high-level operations to the GPU, the GPU does its magic for hundreds or thousands of pixels at a time, and the results end up in the frame buffer, where they are copied to the video display, with a refresh rate that is generally around 30 frames per second. Shown is a block diagram of the NVIDIA GRID GPU in the g2 instance.

Traditionally, the GPU deposits the final pixels into the frame buffer for local display. This is wonderful if you are running the application on your desktop or mobile device, but does you very little good if your application is running in the cloud.

The GRID GPU incorporates an important feature that makes it ideal for building cloud-based applications. If you examine the diagram shown, you will see that the NVIFR and NVFBC components are connected to the frame buffer and to the NVENC component. When used together (NVIFR + NVENC or NVFBC + NVENC), you can create an H.264 video stream of your application using dedicated, hardware-accelerated video encoding. This stream can be displayed on any client device that has a compatible video codec. A single GPU can support up to eight real-time HD video streams (720p at 30 fps) or up to four real-time FHD video streams (1080p at 30 fps).
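A quick back-of-the-envelope calculation shows those two stated limits are consistent with a single encoder pixel budget, with both workloads landing in the same rough throughput range:

```python
# Pixel throughput implied by the stated NVENC stream limits.
def pixels_per_second(width, height, fps, streams):
    return width * height * fps * streams

hd = pixels_per_second(1280, 720, 30, 8)    # eight 720p streams at 30 fps
fhd = pixels_per_second(1920, 1080, 30, 4)  # four 1080p streams at 30 fps

# Both come out to roughly 2.2-2.5 x 10^8 pixels per second, suggesting
# the limits reflect a shared hardware encoding budget.
print(hd, fhd)
```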

Put it all together and your applications can now run in the cloud, taking advantage of the CPU power of the g2.2xlarge and the 3D rendering of the GRID GPU, along with access to AWS storage, messaging, and database resources, to generate interactive content that can be viewed in a wide variety of environments!

Developers can launch G2 instances using the AWS console, the Amazon EC2 command line interface, the AWS SDKs, and third-party libraries. G2 instances will initially be available in the US East (N. Virginia), US West (N. California), US West (Oregon) and EU (Ireland) regions, and will be made available in other AWS regions in the coming months. G2 instances can be purchased as On-Demand, Reserved and Spot instances. For more information on Amazon EC2 and GPU instances, visit http://aws.amazon.com/ec2.
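As a rough sketch of the SDK route, launching a G2 instance with the AWS SDK for Python boils down to a run-instances request like the one below. The AMI ID is a placeholder (you would supply a real 64-bit HVM image), and the actual API call, commented out here, requires configured AWS credentials:

```python
# Hedged sketch: parameters for launching a single g2.2xlarge instance
# via the EC2 RunInstances API. The ImageId is a placeholder.
launch_params = {
    "ImageId": "ami-00000000",   # placeholder: substitute a 64-bit HVM AMI
    "InstanceType": "g2.2xlarge",
    "MinCount": 1,
    "MaxCount": 1,
}

# With credentials configured, the actual call would look like:
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# response = ec2.run_instances(**launch_params)
```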


Read more: http://aws.typepad.com/aws/