GenAI developers and athletes empowered by Intel

Posted on Wednesday, August 14, 2024 by AUSTIN HARRIS, Global Sales

Intel has recently shared exciting details on its collaboration with the International Olympic Committee (IOC) and on an industry-driven generative AI (GenAI) retrieval-augmented generation (RAG) solution. These announcements demonstrate how open AI systems and platforms using Intel Gaudi AI accelerators and Intel Xeon processors put the power in the hands of developers and enterprises to tackle challenges created by the AI boom.

Intel empowers GenAI developers and athletes around the globe with open and accessible AI systems.

"Through our partnership with the International Olympic Committee, we are demonstrating our dedication to making AI accessible. We're fostering an open playing field that encourages innovation and creativity and enables developers and enterprises to build tailored AI solutions that drive tangible results. By embracing an open and collaborative ecosystem, Intel is transforming ways to help our athletes and pushing the boundaries of what’s possible with our customers," said Justin Hotard, Intel executive vice president and general manager of the Data Center and Artificial Intelligence Group.

How Athlete365 Works: Qualifying for the Olympic Games is only the beginning for athletes. To help about 11,000 athletes across many languages and cultures navigate the venues and comply with rules and guidelines, the IOC collaborated with Intel to develop a chatbot, Athlete365. A RAG solution powered by Intel Gaudi accelerators and Xeon processors, Athlete365 handles athlete inquiries and delivers on-demand information during athletes’ stay at the Olympic Village in Paris, enabling them to focus on training and competing.

Why it matters

Deploying GenAI solutions poses challenges like cost, scale, accuracy, development requirements, privacy and security. RAG is a crucial GenAI workload because it allows companies to leverage proprietary data securely, enhancing the timeliness and reliability of AI outputs. This improves the quality and usefulness of AI applications, which is critical in today's data-driven world.
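The retrieve-then-generate pattern behind RAG can be sketched in a few lines. This toy example is illustrative only: it ranks documents by naive keyword overlap, where a production system such as the one described here would use an embedding model and a vector database, and the sample documents are invented for the sketch.

```python
# Minimal sketch of the retrieve-then-generate pattern behind RAG.
# All names and documents are illustrative; production systems replace
# keyword overlap with embeddings and a vector database.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model's answer in the retrieved proprietary data."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

docs = [
    "Athletes must wear accreditation badges inside the Olympic Village.",
    "The dining hall is open 24 hours a day.",
    "Media interviews require prior approval.",
]
query = "When is the dining hall open?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

Because the model only sees retrieved passages, the enterprise's proprietary data stays inside its own infrastructure and answers can cite current, verifiable sources.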

Intel’s collaborative approach utilizing AI platforms, open standards, and a robust software and systems ecosystem allows developers to build customized GenAI RAG solutions tailored to each enterprise’s needs. The momentum recently shared underscores Intel's commitment to providing open, robust, and composable multi-provider generative AI solutions.

How the GenAI RAG Solution Architecture Works: Intel works with industry partners to create an open-source, interoperable solution for easy RAG deployment. The GenAI solution is an industry-driven, out-of-the-box, production-ready RAG solution built on the Open Platform for Enterprise AI (OPEA) foundation. While the GenAI turnkey solution offers a streamlined approach to deploying RAG solutions for enterprises in their data centers, it is designed to be highly flexible and customizable, integrating components from a catalog of offerings by multiple OEM systems and industry partners.

The GenAI turnkey solution integrates OPEA-based microservice components into a scalable RAG solution designed for deployment on Xeon and Gaudi AI systems. It scales seamlessly with proven orchestration frameworks like Kubernetes and Red Hat OpenShift and provides standardized APIs along with security and system telemetry.

  • Breaking Down Proprietary Walls with an Open Software Stack: Nearly all large language model (LLM) development is based on the high-abstraction framework PyTorch, which is supported by Intel Gaudi and Xeon technologies, making it easy to develop on Intel AI systems and platforms. Intel has worked with OPEA to develop an open software stack for RAG and LLM deployment, optimized for the GenAI turnkey solution and built with PyTorch, the Hugging Face serving libraries (TGI and TEI), LangChain, and the Redis vector database.
  • Meeting Developers Where They Are: OPEA offers open source, standardized, modular and heterogeneous RAG pipelines for enterprises, focusing on open model development and support for various compilers and toolchains. This foundation accelerates containerized AI integration and delivery for vertical use cases, giving developers a detailed, composable framework for building on modern technology stacks.
  • With the GenAI turnkey solution and the comprehensive enterprise AI stack, Intel delivers a complete solution addressing the challenges of deploying and scaling RAG and LLM applications within enterprises and data centers. Leveraging Intel-powered AI systems and the optimized software in OPEA, businesses can harness the full potential of GenAI with greater efficiency and speed.
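The modular, swappable pipelines described above can be illustrated with a small in-process sketch. Everything here is hypothetical: in a real OPEA deployment each stage would be a separately deployed microservice (e.g. a TEI embedder, a Redis-backed retriever, a TGI generator running on Gaudi) reached over standardized APIs, not a local function.

```python
# Illustrative sketch of a composable RAG pipeline in the spirit of
# OPEA's modular microservices. Stage names, interfaces, and payloads
# are hypothetical placeholders, not OPEA's actual API.
from typing import Callable

Stage = Callable[[dict], dict]

def embed(req: dict) -> dict:
    # Placeholder: a real stage would call an embedding service (TEI).
    req["embedding"] = [float(len(w)) for w in req["query"].split()]
    return req

def retrieve_context(req: dict) -> dict:
    # Placeholder: a real stage would query a vector database (Redis).
    req["context"] = ["Doc about " + req["query"]]
    return req

def generate(req: dict) -> dict:
    # Placeholder: a real stage would call an LLM server (TGI on Gaudi).
    req["answer"] = f"Based on {len(req['context'])} document(s): ..."
    return req

def pipeline(stages: list[Stage], request: dict) -> dict:
    """Chain stages in order; swapping one vendor's component for
    another's is just an edit to this list."""
    for stage in stages:
        request = stage(request)
    return request

result = pipeline([embed, retrieve_context, generate],
                  {"query": "village curfew rules"})
print(result["answer"])
```

The point of the composable design is that each stage shares only a request/response contract, so an enterprise can replace the retriever or the model server from a catalog of partner offerings without rewriting the rest of the pipeline.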
     

What’s next

Increasing access to the latest AI compute technology is a challenge enterprises face in enabling critical business outcomes with GenAI. Through strategic collaborations with industry partners and customers, Intel is creating new opportunities for AI services driven by GenAI and RAG solutions.

Committed to the secure and responsible advancement of AI, Intel announced its collaboration with Google, IBM and other industry partners in a new Coalition for Secure AI (CoSAI), created to enhance trust and security in AI development and deployment.

Intel will further demonstrate its unique approach to AI systems and continued customer and partner momentum at Intel Innovation on Sept. 24-25.
