NVIDIA releases GPU accelerator to improve AI

Posted on Friday, March 10, 2017 by RICHARD HARRIS, Executive Editor

As innovation progresses, more and more processing is being offloaded to the cloud to do the heavy lifting. But how much cloud usage is too much for cloud providers to handle efficiently? That is a question many companies hope never to have to answer as they ramp up their cloud usage exponentially. That's where NVIDIA and Microsoft look to make big changes in the way cloud computing operates.

Microsoft's newest project, code-named Project Olympus, is making a big buzz in the Silicon Valley community as it hopes to address the computing stumbling blocks standing in the way of many companies' growing success. In a blog post by Microsoft, Kushagra Vaid, general manager and engineer at Azure Hardware Infrastructure, said that the project was made to improve "cloud services and computing power needed for advanced and emerging cloud workloads such as big data analytics, machine learning, and Artificial Intelligence (AI)."

Project Olympus is the next generation of cloud hardware and a “new model for open source hardware development.” Built upon a solid foundation of an open source hardware development model, the project has “created a vibrant industry ecosystem for datacenter deployments across the globe in both cloud and enterprise.”

NVIDIA's new GPU accelerator


In conjunction with Project Olympus, NVIDIA has unveiled blueprints for a new hyperscale GPU accelerator to drive AI cloud computing. The project's hyperscale GPU accelerator chassis for AI, referred to as HGX-1, is designed to support eight of the latest "Pascal"-generation NVIDIA GPUs and NVIDIA's NVLink high-speed multi-GPU interconnect technology, and provides high-bandwidth interconnectivity for up to 32 GPUs by connecting four HGX-1 chassis together.
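The scaling described above (eight GPUs per chassis, up to four NVLink-connected chassis) can be sketched in a few lines. This is only an illustration of the arithmetic in the article; the constants come from NVIDIA's stated HGX-1 design, and the function name is our own.

```python
# Hedged sketch of HGX-1 scaling, based on the figures in the article:
# 8 "Pascal"-generation GPUs per chassis, up to 4 chassis linked via NVLink.

GPUS_PER_HGX1 = 8       # GPUs supported by a single HGX-1 chassis
MAX_LINKED_CHASSIS = 4  # chassis that can be interconnected

def total_gpus(chassis: int) -> int:
    """Total GPUs reachable across a set of interconnected HGX-1 chassis."""
    if not 1 <= chassis <= MAX_LINKED_CHASSIS:
        raise ValueError(f"HGX-1 supports 1-{MAX_LINKED_CHASSIS} linked chassis")
    return chassis * GPUS_PER_HGX1

print(total_gpus(4))  # 32, the maximum configuration described
```

A single chassis yields 8 GPUs; the fully linked configuration of four chassis reaches the 32-GPU ceiling the article cites.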

In a press release, NVIDIA said that "HGX-1 does for cloud-based AI workloads what ATX - Advanced Technology eXtended - did for PC motherboards when it was introduced more than two decades ago. It establishes an industry standard that can be rapidly and efficiently embraced to help meet surging market demand."

The release went on to say, "Cloud workloads are more diverse and complex than ever. AI training, inferencing and HPC workloads run optimally on different system configurations, with a CPU attached to a varying number of GPUs. The highly modular design of the HGX-1 allows for optimal performance no matter the workload. It provides up to 100x faster deep learning performance compared with legacy CPU-based servers, and is estimated at one-fifth the cost for conducting AI training and one-tenth the cost for AI inferencing."

It is important to note that this is the first Open Compute Project (OCP) server design to offer a broad choice of microprocessor options fully compliant with the Universal Motherboard specification, addressing virtually any type of cloud computing workload.
