What can you build with ChatGPT in 48 hours?

A fundamental shift is underway in how users discover and interact with brands and their digital experiences, driven by advances in AI and conversational interfaces. In October 2025, that shift showed up clearly at OpenAI DevDay with the introduction of the Apps SDK.

Much like the launch of the App Store reshaped mobile computing, ChatGPT apps point to a new paradigm where conversation becomes the primary way users find, access and use applications. The Apps SDK enables developers to build integrated application experiences directly inside ChatGPT, complete with custom UI components and deep linking, changing not just how apps are built, but the primary interface through which users access them.

Our engineering team at TELUS Digital wanted to understand what this shift means in practice. So we got to work and built a proof of concept in 48 hours: a stock and news tracker with market data, visualizations and custom UI components.

What follows is a practical walkthrough of how we approached the build, the architectural decisions that mattered most and what this style of development enables for enterprise teams building AI-native applications.

Hour 0 to 8: AI-accelerated discovery

We began by diving into the ChatGPT Apps SDK documentation and GitHub sample repositories. To save time, we used a large language model to consolidate the documentation, examples and technical specifications into a single working view, rather than piecing together fragmented information across multiple sources.

Using AI to help understand how to build AI applications gave us a clearer roadmap of the SDK’s capabilities and constraints early on. It allowed the team to align quickly on what was possible, what was out of scope and how we should sequence the implementation, without spending the first day reconciling docs or reverse engineering examples.

For enterprise teams, this is an important step. The fastest way to lose momentum in a proof of concept is to get stuck reconciling partial documentation with implementation details. By accelerating discovery, we were able to move quickly without any guesswork.

Hour 8 to 16: stack and tooling decisions

With a clear understanding of the SDK, we moved into selecting our stack and development tooling. While the Apps SDK supports Web Components natively, we chose React for the UI layer. This was a practical decision as our team was already fluent in React, allowing us to move faster without introducing new frontend patterns during a short proof-of-concept window.

That choice also highlighted an important aspect of the framework: it accommodates existing enterprise skill sets rather than forcing teams to adopt unfamiliar technologies.

For the development environment, we used ngrok to proxy our localhost, which allowed us to test the app directly inside ChatGPT as we built it. We paired this setup with Cursor, an AI-integrated IDE with agentic coding capabilities. This meant we created an environment where AI was supporting both sides of the work, helping power the application itself while also assisting the engineers writing and iterating on the code.


Hour 16 to 24: defining the use case

With the foundations in place, we turned to the question that would shape everything that followed: what should this app actually do? We needed a proof of concept that was substantial enough to meaningfully test the Apps SDK, without becoming too big to build quickly.

Live data integration felt like the right place to focus. We decided to build a stock portfolio news analyzer, an application that pulls in market-related news and helps users understand how that information might affect specific holdings.

Importantly, we intentionally used mock data instead of live feeds. The goal was not to build production-ready data pipelines, but to see how the SDK handled capabilities, interaction patterns and UI composition. By keeping the data layer abstract, we could focus on architecture first and swap in real data sources later without changing the core design.
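The data-layer abstraction described above can be sketched as a small interface that the rest of the app depends on. This is an illustrative sketch only: the names `StockDataSource` and `MockStockSource` are our own stand-ins, not part of the Apps SDK.

```python
# Hypothetical sketch of an abstract data layer: tools depend on the
# interface, so a mock can be swapped for a live feed later without
# changing the core design.
from abc import ABC, abstractmethod


class StockDataSource(ABC):
    """Contract the MCP tools depend on, independent of any concrete feed."""

    @abstractmethod
    def quote(self, symbol: str) -> dict:
        ...

    @abstractmethod
    def news(self, symbol: str) -> list[dict]:
        ...


class MockStockSource(StockDataSource):
    """Hardcoded fixtures, standing in for live market data."""

    _QUOTES = {"ACME": {"symbol": "ACME", "price": 123.45, "change_pct": -1.2}}
    _NEWS = {"ACME": [{"headline": "ACME misses earnings", "sentiment": "negative"}]}

    def quote(self, symbol: str) -> dict:
        return self._QUOTES.get(symbol, {"symbol": symbol, "price": None})

    def news(self, symbol: str) -> list[dict]:
        return self._NEWS.get(symbol, [])


# A live implementation calling a real market-data API could later
# replace MockStockSource behind the same interface.
source: StockDataSource = MockStockSource()
print(source.quote("ACME")["price"])  # -> 123.45
```

Because the handlers only ever see `StockDataSource`, swapping in real data later is a one-line change at the composition point.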

Hour 24 to 40: building the MCP foundation

With the use case defined, we moved on to building the core application foundation using an MCP server. This layer became the backbone of the app, responsible for exposing tools and capabilities to the language model in a structured, predictable way.

We started with Figma mockups to clarify the user experience and understand what data and interactions the interface would need to support. From there, we defined the tools and capabilities required to power those interactions.

This is where the capability-driven approach starts to differ from more traditional application architectures. Instead of exposing narrowly scoped functions with rigid contracts, we focused on defining what each capability does, what inputs it expects and what it returns.

Rather than hardcoding logic like “when the user says X, call function Y,” we described capabilities so the language model could decide when and how to invoke them based on user intent. Keeping the MCP server focused on well-defined capabilities allowed us to separate application logic from interaction logic. That separation made it easier to iterate on behavior, refine prompts and evolve the user experience without restructuring the underlying system.
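A capability definition of this kind can be sketched as metadata plus a handler. The registry below is a minimal toy, not the real MCP SDK; the field names (`name`, `description`, `inputSchema`) mirror common MCP tool conventions, but the `capability` decorator is our own invention for illustration.

```python
# Hypothetical capability registry in the spirit of MCP tool definitions.
# The model reasons over the metadata; the server dispatches by name.
capabilities = {}


def capability(name: str, description: str, input_schema: dict):
    """Register a handler together with the metadata the model sees."""
    def register(fn):
        capabilities[name] = {
            "name": name,
            "description": description,
            "inputSchema": input_schema,
            "handler": fn,
        }
        return fn
    return register


@capability(
    name="analyze_portfolio_news",
    description=(
        "Given one or more ticker symbols, return recent news items and a "
        "short assessment of how each item may affect those holdings."
    ),
    input_schema={
        "type": "object",
        "properties": {"symbols": {"type": "array", "items": {"type": "string"}}},
        "required": ["symbols"],
    },
)
def analyze_portfolio_news(symbols):
    # Mock implementation; a real handler would call the data layer.
    return [{"symbol": s, "headline": f"Sample headline for {s}"} for s in symbols]


result = capabilities["analyze_portfolio_news"]["handler"](["ACME"])
```

Note that nothing here encodes "when the user says X, call Y": the description carries the intent contract, and the model decides when the capability applies.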


Hour 40 to 48: UI assembly and integration

Once the capabilities were defined, the MCP server connected to our data sources and exposed the corresponding tools to the language model. We used mock data at this stage. 

From there, the model interpreted the tool definitions and determined how the UI should be rendered, based on how HTML components were associated with specific tool calls. Rather than hardcoding UI behavior, we defined the data structures and layout patterns, and the model populated the interface dynamically using the data returned.

The result was a UI that adapted to context without extensive conditional rendering logic, allowing the interface to respond naturally as data and user intent changed.
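The association between tool results and UI components can be sketched as a simple lookup. The `component` key and `render_response` helper below are our own stand-ins for the Apps SDK's HTML-template mechanism; the point is that rendering is driven by returned data, not by conditional UI logic.

```python
# Hypothetical sketch of data-driven rendering: each tool result carries a
# component hint, and a template is populated from the returned data.
COMPONENT_TEMPLATES = {
    "stock-card": "<stock-card symbol='{symbol}' price='{price}'></stock-card>",
    "news-list": "<news-list count='{count}'></news-list>",
}


def render_response(tool_result: dict) -> str:
    """Pick a template from the component hint and fill it from the data."""
    template = COMPONENT_TEMPLATES[tool_result["component"]]
    return template.format(**tool_result["data"])


html = render_response({
    "component": "stock-card",
    "data": {"symbol": "ACME", "price": 123.45},
})
```

Adding a new view then means registering one more template and returning its hint, rather than threading a new branch through the rendering code.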

The paradigm shift: capability-driven architecture

What we built in 48 hours demonstrates a fundamental shift in how applications can be developed:

- Discovery over documentation: The LLM can discover an app’s capabilities through tool definitions rather than through user manuals or predefined navigation paths.

- Intent over commands: Users describe what they want to achieve, and the system determines how to fulfill that intent without relying on rigid command structures.

- Capabilities over functions: Broad, contextual capabilities replace narrow, predetermined API endpoints.

- Dynamic UI over static templates: The interface adapts to data and context at runtime rather than relying on fixed page structures.

Taken together, this approach supports a more maintainable and extensible development model, while aligning more closely with how users naturally communicate.
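The "intent over commands" point can be made concrete with a toy contrast. The keyword scorer below is a crude, hypothetical stand-in for the language model, which in the real system matches user intent against capability descriptions; the names and descriptions are invented for illustration.

```python
# Toy illustration of intent-based selection: no "if user says X, call Y"
# routing table, only descriptions that a model (here, a naive scorer)
# can match against the user's request.
import re

CAPABILITIES = {
    "get_quote": "Return the current price for a stock ticker.",
    "analyze_news": "Summarize recent news and its likely impact on holdings.",
}


def tokens(text: str) -> set:
    return set(re.findall(r"[a-z]+", text.lower()))


def choose_capability(user_message: str) -> str:
    """Score each capability by word overlap between message and description."""
    words = tokens(user_message)
    return max(CAPABILITIES, key=lambda name: len(words & tokens(CAPABILITIES[name])))


print(choose_capability("how is recent news affecting my holdings?"))  # -> analyze_news
```

A real deployment replaces the scorer with the model's own reasoning, which is exactly why the quality of each capability's description matters so much.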


What this means for enterprise app development

Working within a 48-hour timeline helped us focus on the architectural decisions that matter most in an enterprise context, including how capabilities are defined, integrated and surfaced to users.

In doing so, the build highlighted a broader shift in enterprise app development, away from traditional APIs that require explicit control flow and toward capability-driven architectures that allow systems to reason about intent and behavior.

Our project surfaced a few practical takeaways for enterprise teams considering this approach. In practice, we found that capability-driven development using ChatGPT Apps and MCP servers supports:

- Rapid prototyping with AI-assisted tooling: Teams can move from idea to working application quickly, without heavy upfront wiring or complex orchestration layers.

- Integration with existing enterprise stacks: The framework works alongside familiar frontend and backend technologies, allowing teams to build AI-native experiences without replatforming.

- Natural language interfaces without custom NLP pipelines: Intent interpretation and routing are handled by the model, reducing the need for bespoke classifiers or intent engines.

- Interfaces that adapt to data and context: UI behavior is driven by returned data and tool outputs rather than hardcoded states or rigid templates.


As enterprises explore generative AI integration, they should be asking more than just “What can AI do?” They should also be asking, “How should we architect applications in an AI-native world?” Our experience building this proof of concept suggests that starting with clear, well-defined capabilities provides a more practical foundation than relying on rigid commands or predefined flows.

This build reinforced that the opportunity is real, but so are the challenges. Protocol design, state management, data flow and security still require deliberate architectural choices. Treating ChatGPT apps as production software means applying the same rigor enterprise teams expect elsewhere.

What has changed is the development model. With the right approach, teams can build applications that feel native to the conversational interface and surface at the moment of need, without pulling users out of their workflow. As the tools mature and these architectural patterns become clearer, capability-driven architectures are becoming more practical for enterprise teams building real-world applications.

