iOS
Apple's WWDC 2025: Innovation for everyone except app developers
Wednesday, June 11, 2025
Richard Harris

Sleek UI upgrades, deeper ecosystem unity, and useful AI features like translation and call screening were delivered at Apple’s WWDC 2025, but across all ten parts of this recap, it’s clear the company’s generative AI efforts still lag behind rivals in ambition and execution.
Apple’s WWDC 2025 keynote was a spectacle of contrasts. On one hand, it dazzled developers with a Liquid Glass design aesthetic and a boatload of polished features across iOS, macOS, iPadOS, watchOS, and more. They even threw in some whimsical flair, from cheeky Formula 1 references to a piano man crooning App Store reviews on stage. It felt like Apple was serenading us with how shiny and seamless their ecosystem is about to become.
Apple’s WWDC 2025, a shiny new coat with little AI under the hood
Yet, as I sat there both amused and in awe, a nagging absence became clearer with each passing minute. The elephant in the room, or rather, the AI not in the room, cast a long shadow over the event. For all the talk of Apple Intelligence and new capabilities, Siri, Apple’s once-vaunted digital assistant, was virtually silent. Apple’s execs touted how the next decade of Apple software will look and feel, but said precious little about how it will think. As a developer and long-time Apple watcher, I couldn’t shake a growing sense of disappointment. In this golden age of generative AI, it seemed Apple showed up to a software renaissance with an artful new canvas but left its brightest paints at home.
A keynote of glass, gloss, and the invisible elephant
Before we dive into that glaring AI omission and why it matters, let’s run through what was announced, because there was plenty. Apple hasn’t been idle, they’ve given us a slew of updates that will definitely make users happy and developers busy. From iPhones to Macs to Apple Watches, almost every Apple device is getting some love. But as we’ll see, amidst all these welcome improvements, the silence on true AI advancements was deafening.
A unified “liquid glass” design across Apple’s platforms
The first thing Apple drove home was a major design refresh sweeping across iOS, iPadOS, macOS, and even watchOS and tvOS. They’re calling it the Liquid Glass theme, and it’s not just marketing poetry, it literally makes your interface elements look like glossy panes of glass. Buttons, switches, sliders, text fields, all are infused with a new translucency that lets background colors and wallpapers bleed through. On the iPhone’s lock screen, for example, even the date and notification banners now subtly blur and reveal the wallpaper underneath. It’s as if the whole UI gained a splash of transparency, both figuratively and literally.
Apple’s designers seem to have drawn inspiration from the natural world here, imagine frosted glass misted with rain, or the way light refracts through a prism. The effect is both futuristic and nostalgic. Long-time Apple fans might recall the original OS X “Aqua” interface from two decades ago, with its water bubble buttons and translucent menus. Liquid Glass feels like the spiritual successor to Aqua for the 2020s, an interface that’s tangible, tactile, and playfully reflective. In practice, it means when you toggle a switch or open a menu, you get a sense of depth, of the interface layering over your content like a polished sheet of glass. It’s eye candy, sure, but also part of a broader push toward consistency.
For the first time in years, Apple’s making its platforms look and behave more alike. The Liquid Glass look spans iPhones, iPads, Macs, Apple TV, Apple Watch, and even the Vision Pro headset. It signals that Apple wants a cohesive experience whether you’re glancing at your wrist or immersed in a spatial computing environment. As a developer, I find this encouraging, it means fewer radically different UI paradigms to account for. The design language is converging. Apple is basically telling us, learn this new look and feel, and you can apply it everywhere. That’s good news for usability and for those of us who build apps across multiple Apple devices.
The Liquid Glass theme is not just skin-deep either. It comes with new system typography and iconography tweaks to match. Everything appears a bit more minimal, clean, and content-first. Apple demoed how even something like the iPhone’s camera app or Safari browser adopts this edge-to-edge, airy design. Safari’s toolbar, for instance, now melts away so web pages can utilize the full screen. The Camera app was simplified to just two primary modes (Photo and Video) visible, with others like Portrait or Slow-Mo tucked neatly behind a swipe gesture. It’s a decluttering that puts content at center stage, maybe an influence from that Vision Pro ethos of “environment first” design seeping back into 2D screens.
From a narrative perspective, Apple framed this design overhaul as “setting the stage for the next decade” of software. Indeed, it feels like a once-in-a-decade refresh, the kind that in hindsight marks an era. As an enthusiast, I found myself genuinely excited by the new polish, it’s fun to see Apple flex its design muscles. In the halls of WWDC, an Ozarks-accented voice in my head (perhaps channeling Richard Harris of App Developer Magazine) was practically hollering, “Well butter my biscuits, they’ve gone and repainted the whole darn house!” It’s a practical change, but presented with Apple’s trademark showmanship and a touch of science fiction flair.
What’s in a name, year-based version numbers
One subtle but important shift that came along with the redesign is a change in version numbering. Apple’s operating systems are now named by year. So say goodbye to the expected iOS 19 or macOS 16, this year we have iOS 26, iPadOS 26, macOS 26, watchOS 26, tvOS 26, and even visionOS 26. It’s a big renumbering meant to align with the year after release (since these OS versions will largely be in use during 2026). At first this threw me for a loop, but Apple insists it’ll make things easier to remember than the jumble of different version numbers we had before.
Practical developers might chuckle at the change, we’ve seen Apple do odd naming resets before (remember when Mac OS X crept along at 10.x for two decades and then suddenly jumped to macOS 11?). But aligning everything to the year does have a tidy logic. It underscores that all these systems are peers in the same family. And it subtly emphasizes that Apple views this WWDC’s announcements as a unified generation of software. iOS 26, macOS Tahoe 26, iPadOS 26, all siblings born of 2025’s vision.
Speaking of macOS Tahoe, yes, Apple is still on its California landmark naming kick. After last year’s macOS “Sequoia” (15), we get macOS 26 Tahoe, presumably named after the crystal-clear Lake Tahoe, which cheekily echoes the transparent “glass” UI motif. It’s poetic in a way, a lake’s surface reflecting the sky, and now our screens reflecting our content. Apple’s love for metaphor in naming and design remains strong.
Alright, naming aside, let’s talk about the goodies in each of these updates. Buckle up, because Apple unloaded a Santa-sized sack of features. I’ll cover the highlights for iOS, iPadOS, macOS, watchOS, and more, then circle back to the elephant (or AI) that wasn’t on stage.

iOS 26, new tricks for the iPhone
Apple’s latest iPhone operating system, iOS 26, is absolutely packed with features, some brand new, others playing catch up with the Android Joneses. Here’s a quick rundown of the most notable iOS 26 changes that Apple announced:
Liquid Glass redesign
First and foremost, iOS 26 fully embraces that new Liquid Glass look. Your Control Center, widgets, notifications, and even on-screen media player all sport translucent elements that blend into whatever wallpaper or app is behind them. It’s a dramatic visual change that makes the interface feel alive and context aware. During the keynote demo, as the presenter swiped through a photo heavy home screen, you could see colors from the images subtly tinting the surrounding UI chrome. It drew appreciative “oohs” from the crowd, and admittedly, from me too.
Live translation in Messages, Calls, and FaceTime
This one is a killer feature and drew one of the loudest rounds of applause. Apple built AI-powered live translation right into the core communication apps. In the Messages app, foreign-language texts are now automatically translated inline; the demo showed an English-speaking user seamlessly chatting with a friend typing in Italian, each seeing the conversation in their own language. In phone calls, iOS 26 can listen to the other person speaking Spanish (for example) and read out an English translation to you in real time, while your responses get translated back to Spanish for them. FaceTime video calls will even display live translated captions on screen as each person speaks. It’s like having a personal interpreter on demand, no third-party apps or services required.
Apple pulled this off with on-device machine learning, leveraging the Neural Engine in the A-series chips. From a developer perspective, this is a huge testament to Apple’s hardware-software integration: real-time speech translation is heavy AI lifting, and doing it privately on-device (no round trip to the cloud) is impressive. It reminds me of the wide-eyed optimism of a science fiction geek (cue my inner Dr. Ellie Arroway): we’re effectively tearing down language barriers in casual conversation, fulfilling a long-held tech dream. Google’s been doing similar tricks with Google Translate and Pixel phones for a while, but seeing Apple bake it into iOS’s native apps is still jaw-dropping. This was one area where Apple did showcase AI helping users, and it was beautiful to watch in action.
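If you want a feel for what on-device translation already looks like from the developer’s seat, the existing Translation framework that shipped before this keynote is the closest public starting point. Here’s a minimal SwiftUI sketch using that framework, not the new keynote feature itself, with the exact calls recalled from its documentation rather than anything announced this week:

```swift
import SwiftUI
import Translation

struct TranslateDemoView: View {
    @State private var configuration: TranslationSession.Configuration?
    @State private var translated = ""
    private let original = "Ci vediamo alle otto?"

    var body: some View {
        VStack(spacing: 12) {
            Text(original)
            Text(translated)
            Button("Translate") {
                // Italian to English; the system downloads the language pack on demand.
                configuration = .init(
                    source: Locale.Language(identifier: "it"),
                    target: Locale.Language(identifier: "en")
                )
            }
        }
        // The system hands us an on-device session; the text never leaves the phone.
        .translationTask(configuration) { session in
            do {
                let response = try await session.translate(original)
                translated = response.targetText
            } catch {
                translated = "Translation unavailable: \(error.localizedDescription)"
            }
        }
    }
}
```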
Revamped Phone app with AI call screening and hold assist
Who would have thought the humble Phone app could be exciting? In iOS 26, Apple gave it a significant overhaul. The interface now consolidates your favorites, recent calls, and voicemail into one unified view for easier navigation. But the real star is under the hood, the Phone app now incorporates call screening and hold assistance features, leaning on Apple’s voice intelligence. If an unknown number calls, you can have Siri (or rather, Apple’s automated system) answer and ask the caller to state their purpose, transcribing their response for you in real time, just like Google’s Call Screen on Pixel devices. It’s a feature many iPhone users have envied, now finally here. Likewise, if you’re stuck on hold with customer service, iOS 26 can take over the call and wait on hold for you, then alert you when a human comes on the line. This “hold assist” is straight out of Google’s playbook (Pixel’s Hold for Me), and I chuckled when Apple presented it as if it were brand new. Still, as a practical tool it’s immensely useful, and I’m glad to see Apple catching up on this front. Better late than never, as Mark Twain might quip. No more listening to endless hold music, our iPhones will gladly do that drudgery while we grab a coffee.
Visual intelligence, on-device AI vision
iOS 26 adds what Apple calls Visual Intelligence features that can “search on-screen content” and help you interact with what you see. In simpler terms, your iPhone gains a kind of contextual visual search akin to Google Lens or Samsung’s Bixby Vision. By long-pressing or using a new shortcut (pressing the side button combo you’d use for a screenshot), you can summon an AI assistant for whatever is on your screen. Apple showed that you could, say, have a photo or a paused video frame on your display and then ask, “What kind of bird is this?” or “Find this product online.” In response, iOS can tap into either local machine learning or external services to identify objects and even let you perform actions like shopping for similar items. One demo even indicated you could ask follow-up questions about what you’re seeing and choose to either query ChatGPT or do a web search for more info. The fact Apple explicitly allows jumping to ChatGPT here raised my eyebrows, it’s an acknowledgment that OpenAI’s vision and language model might augment what Apple’s own algorithms can do. Essentially, Apple built a bridge where you can seamlessly hand off a task to ChatGPT if you need deeper analysis of your screen’s content. That’s both surprising (Apple usually doesn’t spotlight third party brands in keynotes) and telling (we’ll talk more about Apple leaning on OpenAI’s tech later).
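To put that in developer terms, a slice of this kind of on-device image understanding has been available for years through the Vision framework. The new Visual Intelligence layer clearly sits well above it, but a quick classification sketch shows the raw capability that’s already on the chip:

```swift
import Vision
import UIKit

// A rough sketch using the long-standing Vision framework, not the new
// Visual Intelligence feature itself: classify whatever is in a UIImage.
func classify(_ image: UIImage) throws -> [(label: String, confidence: Float)] {
    guard let cgImage = image.cgImage else { return [] }

    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    // Keep only reasonably confident labels, best first.
    return (request.results ?? [])
        .filter { $0.confidence > 0.3 }
        .sorted { $0.confidence > $1.confidence }
        .map { ($0.identifier, $0.confidence) }
}
```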
Messages app upgrades
Apple gave Messages some social-savvy enhancements. There are now built-in message polls for quick group decisions (no more relying on sketchy third party polling apps just to decide dinner plans). You can also apply custom chat backgrounds to specific conversations, adding personality to your iMessage threads. We even saw new live sticker effects and the ability to transcribe audio messages on the fly. All nice quality-of-life improvements that keep Apple’s messaging platform competitive with WhatsApp, Telegram, and the rest. Nothing revolutionary, but developers of messaging extensions will have new APIs to play with here.
All-new “Games” hub
In iOS 26, Apple is merging several gaming related apps and services into one place. Game Center, your owned games library, and Apple Arcade are now unified in a new “Games” app. It’s like an Apple flavored Steam Library for mobile. From this hub you can see all the games you have installed, browse Arcade titles, check leaderboards, etc., without bouncing between the App Store, Arcade app, and Game Center as before. They’ve even designed it to work cross platform, because the Games app is coming to macOS 26 as well. As a gamer and developer, I appreciate this consolidation, it treats games as first class citizens in the content ecosystem. And if you build games, having a dedicated hub might increase visibility for your app. Apple’s also bringing the iPhone’s Live Activities (those real time interactive notifications) to the Mac via this, which is an interesting crossover.
Adaptive power and battery health
Always attuned to battery life concerns, Apple added a new Adaptive Power mode in iOS 26. Think of it like a smarter Low Power Mode. Instead of bluntly cutting performance, Adaptive Power will dynamically scale back background activity and even tweak processor performance just enough to extend battery life when you’re running low, while trying to maintain smooth operation. Apple claims you might squeeze out hours more use without the phone feeling sluggish. We’ll have to test those claims, but any dev who’s dealt with user battery complaints will welcome this. Also nifty, iOS 26 will now tell you how long it will take your battery to fully charge once you plug in. No more guessing or doing mental math with charge percentages, the lock screen will plainly say “Battery will be full in 20 minutes,” for example. A small touch, but a very user-friendly one.
Odds and ends
The list truly goes on: new accessibility features like custom Background Sounds (iOS now has an expanded catalog of soothing sounds to play for focus or sleep), better boarding pass integration in Apple Wallet, travel improvements in Maps, an emoji feedback feature hidden in Apple News+ (a quirky bit of trivia, they built a little emoji reaction game for News subscribers), and improvements across core apps like Notes and Reminders (think smart to-do list grouping, enhanced note linking, etc.). It’s one of those releases where the “little things” list fills an entire slide, and Apple did indeed flash a giant slide of dozens of bullet points toward the end of the iOS segment, saying “...and so many more small changes.” As developers, we know those can sometimes have a big impact on user experience. It’ll take weeks to discover and appreciate them all.
In sum
iOS 26 is a hefty update. It’s equal parts visual overhaul and feature catch up. Apple clearly prioritized eliminating some pain points (spam calls, translating chats) and adding polish (new design, smoother multitasking, battery insights). There’s a midwestern practicality in these changes, I can almost hear the Moonbeam Development team from Missouri nodding in approval at the phone and battery tweaks, while the ScopeTrader folks appreciate the interface consistency for their finance apps. My inner Mark Twain also tips his hat at the plain commonsense of some features, “Why shouldn’t my phone tell me how long to charge? Seems downright obvious now.”
And yet, amidst this bounty of improvements, something was still nagging at me and at many fellow devs I spoke with during the virtual hallway chats at WWDC: where were the groundbreaking AI features? Sure, live translation is AI-powered and very cool, and the Visual Intelligence features flirt with AI. But those felt like contained use-cases, useful, yes, but narrowly focused. Apple’s flagship intelligence, Siri, was almost a footnote. We’ll dig into that soon, but first, let’s see what Apple had in store for the iPad and Mac, because there were some big changes there as well.
iPadOS 26, the iPad learns to Mac
“Is the iPad more Mac, or more iPhone?” a narrator asked during Apple’s presentation. “This year, the answer is Mac.” That line resonated because it rang so true. With iPadOS 26, Apple is finally unleashing the iPad’s inner Mac, giving tablet users the kind of multitasking and interface power we’ve craved for years. As a developer who often tried to make “desktop class” iPad apps, I found myself grinning ear to ear during this part of the keynote. Apple is effectively removing some long-standing barriers between iPad and Mac.
New windowing system for multitasking
The flagship feature in iPadOS 26 is a brand new windowing system for multitasking. Yes, you read that right, real, fluidly resizable app windows on an iPad, that you can drag and arrange freely, overlapping if you want. No longer are we stuck with the rigid Split View or Slide Over in fixed proportions. You can treat an iPad almost like a touchscreen Mac, open multiple apps, resize them like windows on a desktop, and even use them across external displays with complete flexibility. In Apple’s words, you can place windows “anywhere you want on the screen,” and even Stage Manager (the window grouping feature introduced in iPadOS 16) has evolved to accommodate this freer form windowing. It’s a dramatic leap towards making the iPad a true laptop replacement.
Desktop style menu bar
To complement the windowing, iPadOS 26 introduces a desktop style menu bar that appears at the top of the screen (somewhat hidden until you need it). By swiping down from the top, you reveal a macOS like menu bar with various controls and options for the current app. This means pro apps on iPad can now have menu options just like on Mac, accessible in a common location. It’s a small UI element with big implications, it acknowledges that iPad apps sometimes need the depth of menus and settings that full computer apps have. Apple is signaling that the iPad should no longer be constrained to ultra simplified mobile style interfaces when it’s being used in a workstation context.
More Mac like apps and features
We also heard that many Mac like apps and features are coming to iPad:
- A new, proper Files app redesign that finally behaves more like Finder on Mac. Organizing files on iPad should feel less like an afterthought and more like a real file manager now. They even added support for things like column view and improved external drive support.
- A Preview app on iPad for viewing and editing PDFs and documents. Mac users know Preview as a handy tool for quick edits, annotations, and conversions. Bringing that to iPad means less need to hunt for third party PDF apps or hacks to sign a document on the go.
- The iPhone’s Phone app is actually coming to iPad (and Mac) as a native app. If you have an iPhone, you’ve long been able to take calls on your iPad or Mac via Continuity, but it was clunky without a dedicated interface. Now the iPad will have a Phone app with features like voicemail, call screening, and the new hold assist, effectively making cellular iPads or iPads paired to an iPhone act like full speakerphones. It’s another way the lines between devices are blurring.
- More Apple Intelligence features on iPad, which likely refers to the same visual search and translation features from iOS now working in iPadOS. The neural engines in M series iPad chips will be flexing those AI muscles.
Developer impacts and platform convergence
All told, iPadOS 26 makes the iPad far more “Mac like” than ever before. David Pierce at The Verge put it succinctly, “This year, the iPad is clearly leaning into its Mac side.” Indeed, some folks have joked that pretty soon the iPad and Mac might just merge into one platform. We’re not there yet, but the convergence is accelerating. From a developer standpoint, this is both exciting and challenging. Exciting because our iPad apps can become more powerful and feature rich without feeling out of place. Challenging because users will expect the same flexibility and power on iPad as on a Mac, which means we have to design our iPad apps’ UI/UX with window resizability, multiple instances, and more complex interactions in mind. Apple is handing us new tools (like probably new UIScene session APIs for window management) to do this, but it will be an adjustment if you’ve treated iPad as “just a blown up iPhone app” before.
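To make that concrete, the simplest on-ramp most of us already have is SwiftUI’s scene-based multi-windowing, which should map naturally onto the freer windowing in iPadOS 26. Here’s a minimal sketch; the Note type and views are hypothetical placeholders, and how much of the new window behavior we get for free remains to be seen:

```swift
import SwiftUI

struct Note: Identifiable, Codable, Hashable {
    let id: UUID
    var text: String
}

@main
struct NotesApp: App {
    var body: some Scene {
        // The main browsing window.
        WindowGroup(id: "library") {
            LibraryView()
        }
        // A separate window scene per note; each instance can be resized and
        // arranged independently under the new, freer windowing model.
        WindowGroup(id: "note", for: Note.ID.self) { $noteID in
            if let noteID {
                NoteDetailView(noteID: noteID)
            }
        }
    }
}

struct LibraryView: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Open note in its own window") {
            openWindow(id: "note", value: UUID()) // hypothetical note ID
        }
    }
}

struct NoteDetailView: View {
    let noteID: Note.ID
    var body: some View {
        Text("Editing note \(noteID.uuidString)")
    }
}
```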
iPadOS 26 summary
I couldn’t help but think of an analogy as I processed these changes, The iPad has always been like a teenager caught between identities, trying to hang with the Mac adults in productivity, while still clinging to the iPhone’s simplicity. With iPadOS 26, that teenager just got the keys to the car and a license to roam. It’s maturing, leaning into a more grown up role. And like a proud (if slightly nervous) parent, Apple is watching it take off. As a developer, I’m right there in the passenger seat, eager to see how far we can go now that the training wheels of overly simplistic UI have come off.
macOS 26 “Tahoe”, Macs get glassy and gamey
On the Mac side, macOS Tahoe 26 brought its own set of substantial updates, some expected, some pleasantly surprising. Apple might not sell as many Macs as iPhones, but boy did they give Mac lovers a lot to chew on this year.
Liquid Glass design overhaul
First up, macOS Tahoe is the Mac’s counterpart to the design revolution we saw on iOS. It adopts the Liquid Glass design wholesale, making window backgrounds, sidebars, and toolbars more translucent and adaptive to your wallpaper and window stacking order. App icons and controls are updated for consistency with iOS. Using a Mac running Tahoe, you’ll notice the aesthetics immediately, it feels like your Mac’s visuals just got a fresh coat of Candy Apple gloss. (Perhaps Tahoe was a hint, the lake’s famously clear waters now reflected in the OS’s clear panels.)
Supercharged spotlight search
Craig Federighi declared this the biggest update to Spotlight ever, and it shows. Now, Spotlight isn’t just for finding files or web results, it’s becoming a command center. You can perform quick actions directly from Spotlight, send an email, create a note, toggle a setting, all without opening an app. In the demo, typing something like “Set timer for 20 minutes” in Spotlight actually set a timer. Or typing a contact name showed a shortcut to call them. It blurs the line between launching apps and doing tasks. Spotlight also now displays all your apps (including iPhone apps if you’re using an Apple Silicon Mac that can run iOS apps) right within search, effectively acting as a mini Launchpad. There’s even a feature called Quick Keys, which lets power users navigate and trigger search results with keyboard shortcuts without leaving the context of their current app. For instance, if you’re in Pages writing and want to run a shortcut or search your files, a quick Spotlight invocation can let you do so and insert results without breaking flow. As a developer, I see Spotlight’s evolution as Apple finally acknowledging tools like Alfred or LaunchBar that Mac power users have loved, they’re baking those ideas in natively. And since they mentioned Shortcuts integration in Spotlight, there’s a new avenue for us devs, if our apps support system Shortcuts, users might trigger those via Spotlight now. It’s all about surfacing functionality smarter.
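For those of us building apps, the practical hook today is App Intents: actions you already expose as Shortcuts are exactly the kind of thing the new Spotlight can presumably surface and run. A small sketch, where the focus-timer intent is a made-up example of my own:

```swift
import AppIntents

// A hypothetical action our app exposes to the system.
struct StartFocusTimerIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Focus Timer"

    @Parameter(title: "Minutes", default: 20)
    var minutes: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // ...kick off the timer in the app's model layer here...
        return .result(dialog: "Started a \(minutes)-minute focus timer.")
    }
}

// Registering it as an App Shortcut makes it discoverable system-wide,
// including (presumably) the new action-oriented Spotlight on macOS 26.
struct FocusAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: StartFocusTimerIntent(),
            phrases: ["Start a focus timer in \(.applicationName)"],
            shortTitle: "Focus Timer",
            systemImageName: "timer"
        )
    }
}
```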
Phone app, cross device features
macOS gets the Phone app too, just like iPad. On an Apple Silicon Mac or an Intel Mac with a T2 chip, you’ll be able to natively handle calls on your Mac with a full interface. This ties into Continuity but feels more robust than the old FaceTime audio workaround. Also, the Mac will support Live Activities (those dynamic, ongoing notifications from iOS) on the desktop. Imagine getting your food delivery tracker or sports scores as a live-updating widget in the Mac Notification Center. It makes the Mac experience more real-time and connected with what your iPhone is doing.
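Live Activities themselves come from the existing ActivityKit API on iOS, so if the Mac simply mirrors what the paired iPhone is showing, there may be little for us to change. For reference, starting one today looks roughly like this (the delivery-tracking types are hypothetical):

```swift
import ActivityKit

// The attributes type our (hypothetical) delivery app already ships on iOS.
struct DeliveryAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var minutesRemaining: Int
        var courierName: String
    }
    var orderNumber: String
}

func startDeliveryActivity() throws -> Activity<DeliveryAttributes> {
    let attributes = DeliveryAttributes(orderNumber: "A-1042")
    let initialState = DeliveryAttributes.ContentState(minutesRemaining: 25, courierName: "Sam")

    // If macOS 26 mirrors iPhone Live Activities, this same activity should
    // surface in the Mac's Notification Center with no extra work on our end.
    return try Activity.request(
        attributes: attributes,
        content: ActivityContent(state: initialState, staleDate: nil),
        pushType: nil
    )
}
```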
New game mode and game launcher
This was an intriguing one, Apple announced an “entirely new game launcher with overlay” for macOS. Essentially, they are bundling gaming features to make the Mac more attractive to gamers and game developers. The Game Center and Arcade consolidation I mentioned in iOS’s “Games” app extends to Mac, so on macOS 26, you’ll have a Games hub where all your installed games (including Apple Arcade titles) live. More importantly, Apple introduced a special Game Mode that prioritizes CPU/GPU resources for the active game and minimizes background tasks (for better performance and latency). They even demonstrated an overlay that can show useful info or allow quick toggles while gaming, akin to what Windows’ Game Bar or various GPU vendor tools do. This is Apple making a play, albeit a tentative one, to say “hey, Macs can game too, and we’re making it easier.” The truth is, on the strength of Apple Silicon, the Mac has newfound gaming potential, but historically Apple hasn’t courted game devs strongly. As a developer who dabbles in game development, I’m cautiously optimistic. The fact they highlighted a game overlay and launcher suggests Apple wants more games on Mac, and that could mean better support (maybe more Metal improvements or Unity/Unreal integrations) behind the scenes.
Journal app on Mac
Last year, Apple previewed a new Journal app on iOS (essentially an automated diary that intelligently suggests moments to record). macOS Tahoe brings Journal to Mac and iPad too. So users can jot down their daily thoughts or revisit memories from any device. The developer tie-in here is the Suggested Moments API, apps can donate events (like your workout, a trip logged in a travel app, a music playlist you enjoyed) to the Journal suggestions. If you have an app that captures life moments, integrating with Journal could surface your app’s content in users’ personal timelines. It’s a subtle way Apple is encouraging app interoperability around personal logging.
Launchpad becomes app library
In a nod to the iPhone, Launchpad (the Mac’s grid of apps view) is turning into an App Library with automatic categorization. Instead of just an alphabetical grid, macOS will intelligently group your apps (productivity, creative, games, etc.) similar to how iOS’s App Library works. Minor UI tweak, but it shows Apple borrowing ideas across platforms more fluidly now.
End of Intel Mac support
Under the hood comes one of the biggest shifts: macOS Tahoe will be the final major update to support Intel-based Macs. Apple is effectively sunsetting Intel Macs with this release. They specified that only Intel Macs with a T2 security chip (mostly 2018-2020 models) can even run Tahoe, and none of them will get a new OS after this one. It’s the end of an era. Come 2026’s macOS, Apple Silicon will reign exclusive. As a developer, this simplifies things (one architecture to optimize for, one set of capabilities), but it’s also a tad bittersweet, those old workhorse Intel machines are being put out to pasture. Still, Apple gave everyone fair warning of this transition back in 2020, and they’ve stuck to the script.
macOS 26 summary
All told, macOS 26 Tahoe feels like one of the most significant Mac updates in years. It modernizes the look, turbocharges an essential tool (Spotlight), brings parity with iOS in telephony and live info, and signals a clear commitment to an Apple Silicon future. It’s equal parts practical and aspirational, a mix of dry, developer focused improvements (I can practically hear an Ozark drawl, “That there Spotlight’s gonna save us a heap of time, y’know”) and forward looking attempts to expand Mac’s role (like gaming).
Yet, just like with iOS, we see the pattern, Many smart enhancements driven by Apple’s in-house intelligence (text recognition here, search ranking there), but no marquee AI assistant integration or new generative capabilities beyond what’s been quietly baked in. As we turn to the rest of the platforms, that pattern persists, incrementally smarter, but not mind-blowing smart. And in 2025, the bar for mind-blowing smart is set by AI. It’s hard to ignore that.
watchOS 26, fitness gets a smart friend
Over on the Apple Watch, watchOS 26 didn’t steal as much spotlight time, but it has a notable new feature that actually carries the AI label openly. Apple introduced an “AI-powered Workout Buddy” for Apple Watch. This is essentially a virtual personal trainer on your wrist. The Workout Buddy can generate personalized workout routines, give you real-time coaching feedback, and even pep talk you through tough intervals. Apple said it uses on-device machine learning to adapt to your workout history and health metrics, offering encouragement or suggestions tailored to you. For example, if it’s noticed you’ve been beating your usual running pace, it might cheer you on to go an extra five minutes, if you’re consistently missing your stand goals, it could gently nudge you with motivational tips.
This is a welcome feature for the fitness enthusiasts and a clear sign that Apple is infusing more intelligence into health. It also shows Apple is comfortable calling something “AI-powered” on stage when it’s a focused, user facing feature. The Workout Buddy basically takes what third-party apps have tried (and often failed, due to limited sensor access) to do on Apple Watch and builds it into the system, likely with deeper access to Apple’s sensor fusion and algorithms. As a developer with interest in the health and fitness space, I’m keen to see if any of that is exposed via new HealthKit or WorkoutKit APIs, e.g., can our apps hook into the Workout Buddy suggestions or provide custom coaching prompts? Apple didn’t specify, but one can dream.
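While we wait for answers, the nearest existing hook is WorkoutKit, which already lets apps compose structured workouts and hand them to the Watch. Here’s a rough, from-memory sketch of the kind of interval session a coaching app could build today; it’s a stand-in for whatever Workout Buddy may or may not expose, not an API Apple announced:

```swift
import WorkoutKit
import HealthKit

// A hedged sketch using the existing WorkoutKit framework, not the new
// Workout Buddy: compose a simple interval run a coaching app could
// hand to the Watch.
func buddyStyleIntervalPlan() -> WorkoutPlan {
    let warmup = WorkoutStep(goal: .time(5, .minutes))

    let work = IntervalStep(.work, goal: .distance(400, .meters))
    let recovery = IntervalStep(.recovery, goal: .time(90, .seconds))
    let repeats = IntervalBlock(steps: [work, recovery], iterations: 6)

    let cooldown = WorkoutStep(goal: .time(5, .minutes))

    let workout = CustomWorkout(
        activity: .running,
        location: .outdoor,
        displayName: "Buddy-style intervals",
        warmup: warmup,
        blocks: [repeats],
        cooldown: cooldown
    )

    // The plan can then be previewed or scheduled via WorkoutKit's scheduler.
    return WorkoutPlan(.custom(workout))
}
```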
Besides the Workout Buddy, watchOS 26 likely brings the usual suspects: new watch faces (perhaps some Liquid Glass inspired translucent ones), a few new health metrics, and performance improvements. They did confirm which watches are supported by watchOS 26; essentially, it requires an Apple Watch Series 6 or later. That’s in line with dropping older models as they add more ML-intensive features that need recent chips.
Apple Watch evolution
Overall, the Apple Watch is steadily evolving, and with Workout Buddy, it’s leaning into Apple’s strength in health by adding a layer of intelligence that makes fitness tracking more interactive and motivational. It’s not a general AI assistant on your wrist (no, you won’t be chatting with SiriGPT on your Watch while jogging… not this year, at least), but it is a specific application of AI that makes sense for the device’s purpose. And it all stays private and on-device, aligning with Apple’s ethos.

Other updates, tvOS, visionOS, AirPods and more
Apple didn’t forget the rest of its lineup either, though these got briefer mentions, they round out the story of Apple’s ecosystem update.
tvOS 26, Apple TV gets the glass treatment
Apple TV’s interface is also getting the Liquid Glass makeover. The translucent motif apparently looks stunning on a big 4K TV screen, with menus that blur into whatever screensaver or show is playing. One caveat, older Apple TV models might not support all the flashy effects, Apple noted some of the Liquid Glass design elements won’t be available on certain older Apple TV boxes. They also added multi-user enhancements (profiles remembering where you left off in shows, personalized recommendations per user). Under the hood, tvOS 26 likely shares much of iOS 26’s codebase, so it benefits from performance optimizations and the like.
visionOS 26, Vision Pro’s evolution
This is Apple’s AR/VR Vision Pro headset software, which is still so new (Vision Pro only launched in early 2024) that any updates are noteworthy. visionOS 26 brings Spatial Widgets that can float in your environment, Shared Experiences so multiple headset users can collaborate in a virtual space, and more realistic avatars (Apple calls them Personas) with better eye and face tracking for natural FaceTime calls in VR. They’re iterating quickly here, which is great. As a developer who might dabble in AR/VR, it’s good to see Apple expanding capabilities, especially shared experiences, which opens up multi-user AR app possibilities. Notably, visionOS 26 aligns its numbering with everything else, and Apple is clearly treating it as part of the family, not an experiment. The lack of any AI emphasis here suggests Apple is focusing on core AR interaction for now (whereas Meta’s VR products have begun integrating AI assistants in their virtual homes, something Apple might explore later).
AirPods, smarter audio accessories
Surprisingly, AirPods got a segment of attention too. Apple is pushing more intelligence into AirPods via firmware updates (coupled with iOS 26). For instance, they previewed an update where you can tap your AirPods to remotely trigger your iPhone or iPad’s camera shutter, a neat little trick using the AirPods’ motion sensors as remote controls. They’re also improving the AirPods’ adaptive audio further. Building on last year’s Adaptive Transparency and Conversation Awareness, Apple is adding a feature to automatically switch to a new “Studio” mode that isolates your voice for recordings or calls, even in noisy environments. This is leveraging the AirPods’ on-board AI to filter sound in real time. All these new features will come to the latest AirPods models (AirPods 4, AirPods Pro 2) via firmware when iOS 26 launches. For developers, it might not change our apps much, but it does make the hardware more versatile for creative use cases (remote selfie videos with AirPods as clickers, anyone?).
CarPlay and Car Keys, smarter driving integration
CarPlay is getting a refresh to match iOS 26’s design and new capabilities. They mentioned CarPlay will support widgets and Live Activities on the car’s display, imagine seeing your iPhone’s live sports score widget or food delivery tracker right on your car dashboard screen. That’s a smart expansion of Live Activities into the car context. Apple also updated us that 13 more car brands are on board to support Apple’s digital Car Key feature (where your iPhone, Watch can unlock and start your car). The auto industry moves slowly, but Apple is steadily gaining adoption there. No mention of the next-gen CarPlay (the immersive multi-screen version previewed in 2022) going live yet, that’s still likely in the pipeline awaiting carmakers’ implementations.
Ecosystem polish, not revolution
All these “other” updates show that Apple’s ecosystem is moving forward on all fronts. Nothing hugely earth-shattering on their own, but collectively they polish the experience. Apple’s narrative this year was clearly integration and iteration, every device getting something new, and many features weaving those devices closer together (the Phone app everywhere, Live Activities coming to Mac and CarPlay, unified design language, etc.). As a developer, I appreciate this holistic approach. It means the apps I build can potentially run (or at least have analogues) on iPhone, iPad, Mac, Watch, TV, and beyond with more shared design and behavior than ever. “Write once, adapt everywhere” is closer to reality, albeit still requiring care.
The AI elephant in the room, Siri, where art thou?
When Craig Federighi wrapped up the iOS and macOS sections without a big AI reveal, a lot of developers watching were left with furrowed brows. By the end of the keynote, it was the talk of the (virtual) town, Where were the AI fireworks? After all, 2023 and 2024 were dominated by news of ChatGPT, Google’s Bard, Microsoft’s Copilots, Stable Diffusion and so on. Many assumed Apple was quietly preparing a game-changing entry into this arena, maybe a supercharged Siri or a new developer API for generative AI. Instead, Siri got perhaps 90 seconds of stage time, mostly to tell us its previously promised upgrades are still not ready.
It was almost surreal. Apple’s software engineering chief Craig Federighi literally stood on stage and said (I’m paraphrasing), “We haven’t forgotten about making Siri more personal and context-aware, but it’s taking longer than expected.” In fact, those exact personalized Siri features, things like understanding your personal context (knowing “Mom” refers to your mother in your contacts, knowing what “my 3 PM meeting” is without you specifying) and on-screen context awareness, were announced a year ago at WWDC 2024 as coming soon. They missed the iOS 18.4 update they were slated for, got delayed, and now at WWDC 2025 Apple basically said hold tight, still cooking. They even had Federighi on video explicitly apologizing that “This work needed more time to reach our high quality bar.” It’s not often you see Apple openly admit a delay like that. This was clearly a pre-emptive strike to temper expectations, Siri wasn’t getting smarter today.
Siri silence and missed opportunity
The only “new” Siri mention was that it’s “more natural and more helpful” now in iOS 26. That sounded like a modest incremental improvement, likely referring to slightly better speech synthesis and maybe a few new canned abilities. But no generative leap, no conversational upgrade, no citing of large language models or anything of that sort. If Siri were a character on stage, she basically cleared her throat and said, “I’ll be with you all in a bit, thanks.” And exited.
Apple’s near silence on Siri stands out starkly precisely because everyone else is shouting about AI from the rooftops. Apple has been justly criticized for years about Siri falling behind. It’s almost a tech meme how Apple’s AI efforts (often labeled under “Apple Intelligence”) lag in the public eye. We’ve all seen how ChatGPT can compose an email or how Google Assistant can hold a conversation, while Siri struggles with a multi-part question or often defaults to web search. 2023’s explosion of AI capabilities from OpenAI, Anthropic, Google, Meta… that was a Sputnik moment. And here, in mid-2025, Apple gave us a design revolution but an AI whimper.
Competitors’ momentum vs Apple’s caution
To illustrate the contrast, ahead of its I/O 2025 conference Google gave Android users early access to a new Gemini AI feature that literally lets Google’s assistant “see” what’s on your screen and respond to it. Android phones (Pixel devices especially) could do things like interpret an image, read a document and summarize it, or help you compose a message using AI, all as part of the OS. And Microsoft, at Build 2025, announced deep Windows 11 integration of AI, like AI shortcuts in File Explorer that can blur a photo background or summarize a document with one click. They are weaving AI suggestions throughout the Windows interface.
Meanwhile at WWDC, Apple demoed… well, some nice translation and image search tricks, but nothing like a system-wide AI assistant revamp. Siri didn’t get the new model we hoped for. As one headline in The Verge aptly put it, “Apple punts on Siri updates as it struggles to keep up in the AI race.” You could feel it in the keynote: Apple avoided even saying “Siri” too much. They used the term “Apple Intelligence” a lot, Apple’s umbrella term for on-device AI features, but notably avoided the phrase “artificial intelligence” itself.
Why?
It seems Apple is still not ready, perhaps technologically, perhaps philosophically, to play the generative AI game at scale. There are likely a few reasons:
Caution and perfectionism
True to form, Apple doesn’t want to release an AI feature that isn’t bulletproof. Federighi basically said as much, the personal Siri features weren’t up to quality standards yet. And to be fair, it’s better to delay than to launch a mess. Apple saw what happened with its own half-baked AI attempt last year, remember the AI-summarized news notifications in iOS 18 that ended up spouting hilariously wrong headlines by merging multiple news stories? (The BBC reported on this after Apple Intelligence’s notification summaries mangled its headlines into nonsense mashups. It was embarrassing enough that Apple quietly disabled those summaries for news apps.) That kind of public snafu might have made Apple even more gun-shy about rushing AI features out.
Privacy and on-device bias
Apple’s ethos is privacy first, which often translates to on-device first for AI. The company is likely working on AI models that can run on your iPhone or Mac without sending data to the cloud. They did in fact mention that the promised Siri upgrades will require an iPhone that supports “Apple Intelligence”, which presumably means the latest Neural Engine capabilities. On-device AI, however, is inherently limited by hardware. The biggest, most powerful models (like GPT-4) simply can’t run fully on a mobile device yet. Apple might be waiting until their on-device models are good enough, rather than compromising by using cloud processing and stumbling into the thorny privacy issues that entails. It’s a noble stance, but it also means they’re moving slower than cloud-based competitors.
Strategic uncertainty
I suspect even within Apple there’s a bit of soul-searching on how to approach AI. Should Siri become a chatty, open-ended assistant that can write poems and code? Or should Apple double down on functional AI, features that operate in narrow scopes like camera enhancements, translations, etc., which they can tightly control? From this WWDC, it looks like Apple chose the latter for now. They rolled out a “wide swath of small, functional updates powered by Apple Intelligence,” as one analysis put it. Instead of one big AI, they sprinkled little AIs everywhere, translation here, photo style transfer there, call screening here. They even explicitly leveraged ChatGPT in one of those features, Image Playground, rather than roll their own generative model for image creation.
Image Playground and Genmoji
Let’s talk about that for a second, Image Playground and Genmoji. This was Apple’s fun creative update, an app that can turn text descriptions into quirky cartoon images or “Genmoji” characters. And Apple straight-up integrated OpenAI’s image generation into it. You can type a prompt and get an AI-generated image right within Image Playground. Essentially, Apple built a friendly UI and let ChatGPT’s visual brain do the drawing. They even added an API so developers can use Image Playground’s abilities in their own apps. This was one of the only times Apple name-dropped an outside AI service on their stage, a sign that they acknowledge OpenAI’s lead in this area. It’s both cool (users get powerful image gen in a safe Apple wrapper) and a bit telling, Apple didn’t claim “we made our own DALL-E,” they just quietly partnered.
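That developer API builds on the ImagePlayground framework Apple introduced a year earlier. From memory, wiring its sheet into a SwiftUI app looks roughly like this; treat the exact parameter names as approximations to verify against the documentation:

```swift
import SwiftUI
import ImagePlayground

struct AvatarMakerView: View {
    @State private var showPlayground = false
    @State private var generatedImageURL: URL?

    var body: some View {
        Button("Create a Genmoji-style avatar") {
            showPlayground = true
        }
        // Presents the system Image Playground UI seeded with our concepts;
        // the finished image comes back as a file URL.
        .imagePlaygroundSheet(
            isPresented: $showPlayground,
            concepts: [.text("astronaut corgi in a glass helmet")]
        ) { url in
            generatedImageURL = url
        }
    }
}
```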

Foundation Models API
On top of that, Apple’s catch-up strategy included giving developers something to chew on: they announced a new Foundation Models API. In plainer terms, Apple is opening up its on-device large language model to third-party apps. This got a brief mention at WWDC and was elaborated in developer sessions after. Essentially, Apple has a set of large language models that power features like autocorrect, dictation, and Siri’s limited smarts, collectively branded as “Apple Intelligence” in the OS. Now, with the Foundation Models framework, devs can call those models from within their own apps.
Privacy-first AI for developers
Apple pitched it as a win for privacy and efficiency, apps can perform advanced AI tasks offline, without sending data to a server, and at no cost per query. They even gave examples, an education app generating quizzes from your notes on-device, or an outdoors app offering natural language search of downloaded trail info, all using Apple’s local language models. For developers, this is interesting, it’s like getting a mini-ChatGPT that runs on the user’s device, albeit presumably much less powerful than cloud AI. Apple also supports fine-tuning these models via low-rank adaptation (LoRA) techniques, meaning a developer can specialize Apple’s base model with custom data without retraining from scratch. That’s promising, actually. It shows Apple is aware that developers want to build AI-powered features, and they’re trying to offer a pathway to do so that aligns with Apple’s way of doing things (on-device, private, optimized for Neural Engine).
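Based on the session material, calling the on-device model is refreshingly small. Here’s a minimal sketch of the new Foundation Models framework as I understood it from those sessions; availability has to be checked first, since the model only exists on Apple Intelligence-capable devices:

```swift
import FoundationModels

// A minimal sketch of the new on-device Foundation Models framework,
// as presented in the WWDC 2025 sessions: generate quiz questions from
// a student's notes without any network round trip.
func quizQuestions(from notes: String) async throws -> String {
    // The system model may be unavailable (unsupported device, model still
    // downloading, Apple Intelligence turned off), so check before using it.
    guard case .available = SystemLanguageModel.default.availability else {
        return "On-device model not available on this device."
    }

    let session = LanguageModelSession(
        instructions: "You are a study assistant. Write three short quiz questions."
    )
    let response = try await session.respond(to: notes)
    return response.content
}
```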
Still playing catch-up
However, there’s a flip side, this Foundation Models framework might be too little, too limited. The details suggest Apple’s on-device model, sometimes called “Apple GPT” in rumor circles, is still relatively small. It might handle summarization, classification, maybe short question-answer, but don’t expect it to compose your next screenplay or have an in-depth coding conversation. If a developer wants true GPT-4 level reasoning, they’ll still have to call out to OpenAI or another cloud service. Apple’s offering is like giving us a compact car when the industry’s racing monster trucks. It’ll get you from A to B, but it’s not winning any drag races.
From a developer’s perspective
This dichotomy is frustrating. On one hand, I can appreciate Apple’s elegant, privacy-centric approach, offline models, integrated with iOS, no extra costs, lovely. On the other hand, I see what’s possible with a 100 billion parameter model hooked up via an API, and Apple’s solution isn’t in the same league. It feels a bit like being handed a flashlight when everyone else is wielding floodlights.
Growing disappointment
And this is where the growing disappointment stems from. Apple is doing AI, but it’s doing it so conservatively and quietly that many developers feel they’re falling behind. In the months leading up to WWDC 2025, we saw Google, Microsoft, OpenAI, Meta, and even smaller players rolling out AI tools that genuinely change workflows and app capabilities.
Google’s gradient of AI options
Google, for instance, launched ML Kit GenAI APIs with Gemini Nano models for Android devs. As an Android dev, you can now do on-device text summarization, rewrites, image captioning, etc., with Google’s small Gemini models, and if you need more power, you can seamlessly call the big Gemini models via cloud. They’ve offered a gradient of options, from on-device for privacy and speed, up to cloud for heavy-duty tasks, all through unified APIs. They even demoed an app that transforms selfies into anime avatars using these tools, precisely the kind of fun, generative use case Apple’s barely touching (aside from the Genmoji novelty). Google’s message to developers was, “AI is at your fingertips, however you need it.” And crucially, Google’s Assistant itself is evolving quickly with things like browsing capability, image understanding, and integration into every product from Search to Docs.
Microsoft’s AI-first Windows
Microsoft has gone all in on AI across its ecosystem. At Build 2025, they unveiled Windows AI Foundry, a comprehensive platform to make Windows “the best dev box for AI.” They’re supporting every major silicon (NVIDIA GPUs, NPUs, etc.) with a built-in runtime, integrating model catalogs of open-source models (you can literally pull down a GPT-like model via Windows tools), and providing ready-made AI services in Windows for language and vision tasks. One part that made me perk up, Microsoft is shipping inbox AI models on every new Windows PC with a neural processor (Copilot+ PCs), offering devs APIs for things like text summary, image description, OCR, all out of the box. They even support on-device fine-tuning of these models (with LoRA adapters) so devs can customize Microsoft’s base AI to their app’s needs. In short, Microsoft is saying “Come build AI with Windows, we’ve got the infrastructure ready.” Plus, of course, they have the cloud side covered with Azure OpenAI and GitHub Copilot. Heck, they even talked about a Model Context Protocol for AI agents to control Windows apps, envisioning a future where an AI agent might orchestrate multiple apps to do tasks for you, a futuristic notion of personal AI “butlers.” Compared to Apple’s relatively simple Foundation API, Microsoft’s approach is sprawling and aggressive.
OpenAI’s expanding ecosystem
OpenAI (and others like Anthropic) continue to push the envelope in raw AI capability. There’s an entire ecosystem now of devs building applications on GPT-4, GPT-3.5, Claude, etc. We’ve reached a point where integrating an AI assistant or feature into an app is often as straightforward as calling a REST API. For example, many iOS apps in 2024 quietly added GPT-powered features, writing assistance in a notes app, AI enemies in a game, language practice bots, you name it. They didn’t wait for Apple, they just plugged into OpenAI. Apple’s stance, however, is to not directly facilitate that. They didn’t announce any partnership with OpenAI for Siri or a first-party API bridging to OpenAI. They’re basically leaving it to us developers to integrate those if we want, and likely making us abide by all the usual App Store rules (which can be murky when it comes to AI-generated content moderation, data usage, etc.).
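That “straightforward” claim is easy to illustrate: with nothing but URLSession, a bare-bones call to OpenAI’s chat completions endpoint looks like the sketch below. The model name and key handling are placeholders, not recommendations:

```swift
import Foundation

// A minimal sketch of what "just plug into OpenAI" means in practice:
// one HTTPS call from URLSession, no Apple framework involved at all.
func askOpenAI(_ prompt: String, apiKey: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    let body: [String: Any] = [
        "model": "gpt-4o",                // placeholder model name
        "messages": [["role": "user", "content": prompt]]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (data, _) = try await URLSession.shared.data(for: request)

    // Pull choices[0].message.content out of the JSON response.
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    let choices = json?["choices"] as? [[String: Any]]
    let message = choices?.first?["message"] as? [String: Any]
    return message?["content"] as? String ?? ""
}
```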
Meta and the local AI underground
Meta released Llama 2 with open weights back in 2023, and others have since open-sourced or leaked powerful models that enthusiasts can run locally with some effort. On Macs, thanks to Apple Silicon’s prowess, hobbyists have been running 7B, 13B, even 30B parameter models locally, experimenting with personal ChatGPT-like assistants that don’t need the internet. Apple hasn’t openly embraced or facilitated this trend, but it’s happening. Ironically, Apple Silicon devices are some of the best for local AI due to their neural engines and unified memory, yet Apple doesn’t provide official tooling or support for running, say, a Llama model natively (beyond generic Core ML conversion tools). It’s all community-driven. One might think Apple could lean into that and champion on-device AI more loudly, but they remain quite silent publicly.
A sense of falling behind
All of this creates a sense that Apple is behind, and not just behind Google and Microsoft, but behind the general pace of innovation in AI. The developer sentiment after WWDC was mixed. We love the new designs and features, yes. But there’s a creeping worry that Apple is doing what Apple does best (refining user experience), while ceding the “next big thing” mantle to others. The iPhone was the next big thing in 2007, the App Store in 2008, the M1 chip in 2020. But from 2023 through 2025, the next big thing is AI, and Apple’s just not leading that charge.
I’ll be frank
As a developer invested in Apple’s platforms, I want Apple to be an AI leader. I want tools from them that are as exciting as what Google just gave Android devs or what Microsoft is offering. I’d love to tell Siri, “Help me code this function,” or “Design a UI layout for this idea” and have it actually deliver using Apple tech. I’d kill for an Xcode integrated AI assistant that could intelligently suggest improvements or catch bugs (like GitHub Copilot but built into Apple’s dev tools). These things might come someday, but they’re not here now, and they weren’t even hinted at in this WWDC.
Instead, we got a politely worded “we’re working on it” regarding Siri, and a safe, very Apple rollout of under-the-hood frameworks for modest AI tasks. It’s a bit like watching a race where Apple is jogging while others are sprinting, you know Apple has the stamina and strategy to possibly win in the long run, but in the moment, it’s falling behind.
The road ahead, can Apple catch up in AI?
So where does this leave us? Apple’s WWDC 2025 gave us a treasure trove of factual updates, gorgeous UI redesigns, much improved multitasking on iPad, smarter apps and services throughout the ecosystem. It’s a testament to Apple’s unparalleled integration of hardware, software, and design that they can roll out such a cohesive upgrade across so many devices in one swoop. As a developer, I’m excited to dive into the new SDKs and start adopting things like the menu bar on iPad, the new Spotlight actions on Mac, and the translation APIs in iOS. These are tangible improvements that will make users’ lives better and keep the Apple experience state-of-the-art in many respects.
A sense of urgency
However, I’m also left with a sense of urgency and concern. The world of tech is experiencing an AI revolution, arguably on par with the mobile revolution that the iPhone ignited. And right now, it feels like Apple is just watching that revolution from a safe distance, rather than leading it. There’s a line often attributed to Mark Twain that I’m reminded of: “Standing still is the quickest way of moving backward in a rapidly changing world.” Apple’s not exactly standing still, but in the context of AI, its measured steps risk looking like stagnation while others leap forward.
A hopeful possibility
From a narrative voice perspective, part of me, perhaps the part channeling Dr. Ellie Arroway’s scientific wonder, can’t help but remain hopeful and curious. Apple has immense talent and resources. They’ve been known to play the long game and then surprise the world. Maybe they are quietly developing a paradigm-shifting AI that simply isn’t ready for public demo yet. Maybe their focus on on-device AI will pay off when devices become powerful enough, and they’ll swoop in with something that offers the magic of generative AI without the privacy trade-offs. If and when that day comes, it could truly change the game, imagine a Siri that’s as smart as ChatGPT but runs locally, or developer tools that can locally fine-tune and run sizable models for niche purposes. An Einstein-like curiosity in me muses, Apple might not want to compete in today’s AI, they might be aiming for a different, perhaps more personal and human-centered AI for tomorrow. One that doesn’t just churn out text or images, but deeply integrates with your life, health, and habits in a way that’s uniquely Apple (and without sending your data to some distant datacenter).
The practical reality
On the flip side, the pragmatic Midwestern side of me (hi, Moonbeam and ScopeTrader folks) says, that all sounds nice, but we have to build apps today. And today, if I need cutting-edge AI in my app, I’m not looking to Apple for it. I’m looking to OpenAI, to Google’s APIs, to open-source models I can wrangle. That’s a shame, because it means innovation on Apple’s platforms in the AI space might happen in spite of Apple rather than thanks to them. Apple risks becoming the facilitator of others’ AI (through app wrappers, etc.) rather than the originator. Historically, Apple’s ecosystem thrives when Apple provides the best tools out there, Metal for graphics, Swift for coding, ARKit for augmented reality. When Apple doesn’t provide, developers fill the gap with third-party solutions, which can be fine, but it dilutes the platform’s identity and strengths.
Admiration and frustration
As we wrap up this deep dive into WWDC 2025, it’s a blend of admiration and frustration. Admiration for the immense work Apple’s teams have clearly put into refining the experience, the new features will delight users, no question. Frustration that Apple didn’t wow us in the one domain everyone is watching closely, AI. Watching the keynote, I felt like Apple was showing us a beautiful high-performance car, shiny, fast, comfortable, but when we lifted the hood, we expected a new engine of innovation and instead found last year’s model, polished but not fundamentally transformed.
The race isn’t over
To be clear, Apple hasn’t “lost” any race yet. The company plays its own game in many ways. They prioritize user trust, long-term stability, and seamless integration. Those values sometimes conflict with the move-fast-and-break-things attitude seen in AI lately. In the long run, that could even turn out to be wise, there’s a universe where today’s AI leaders stumble over privacy, misinformation, or regulatory hurdles, and Apple’s cautious approach is vindicated. But in the here and now, Apple’s lag in AI is conspicuous and, for many developers, discouraging.
The barn dance without a fiddler
In the style of Richard Harris’s folksy pragmatism, it’s like Apple threw a grand barn dance and invited all us developers, the music was great, the barn’s never looked prettier, but they forgot to bring the new hotshot fiddler everyone’s talking about. We still had a good time, but we were sure looking around for that fiddle music that never came.
A beautiful update with a quiet omission
Ultimately, WWDC 2025 will be remembered for its Liquid Glass sheen and cross-platform unity, a pivotal moment where Apple reimagined its interface for the next decade. It’ll be remembered by iPad fans as the year the iPad almost became a Mac. By Mac fans as the last hurrah for Intel and a big step into an AI-augmented workflow (Spotlight’s evolution). By iPhone users as the year their phone got a real-time translator and smarter at fending off spam. There’s plenty to celebrate.
What was missing
Yet, in quiet corners of the developer forums and tech podcasts, we’ll also remember WWDC 2025 as the year Apple’s AI story was conspicuously muted. The year Apple’s great new OS versions landed with a thud in the AI department, not because what they have is bad, but because of what’s missing. The disappointment isn’t born of dislike; it’s born of high expectations and a genuine desire to see Apple at the cutting edge.
Final thoughts
As someone who straddles the line between starry-eyed tech optimist (looking at the night sky of innovation like Trevor Jones marveling at galaxies) and a grounded engineer (like a Twain character, skeptical but hopeful), I’ll end on this note, Apple is falling behind in the AI race, but the race isn’t over. They’ve fallen behind before in areas (remember how they lagged in larger screen phones, or in supporting third-party apps pre-App Store) and came back strong on their own terms. The next year or two will be crucial. Either Apple will unveil something that leapfrogs our AI expectations, or they risk cementing a narrative that in the era of AI, Apple was a follower, not a leader.
For now, I’ll enjoy tinkering with iOS 26 and macOS Tahoe, implementing all the nifty new features for my app users. I’ll marvel at the liquid beauty of Apple’s new designs and the tangible improvements in usability. But I’ll also be patiently (or not so patiently) waiting, and pushing, for Apple to show us that “one more thing” we desperately wanted to see, a bold leap into the intelligent future unfolding around us. Because if there’s one company that can harmonize cutting-edge AI with humane design and privacy, it’s Apple. They just haven’t done it yet, and the clock is ticking.
The missing encore
In the meantime, the AI revolution rages on elsewhere. And many of us can’t help but feel like Apple’s wonderful party ended with a missing encore. It was a great show, yes, but we’re left humming the tune of the song we expected to hear, hoping that by next WWDC, Apple finds its voice in that melody and belts it out for all of us to hear. Until then, we carry on, excited by what we have, yearning for what was unsaid, crafting our apps with equal parts inspiration and impatience, in the shadow of two realities, Apple’s brilliant present, and an AI-charged future that Apple has yet to fully embrace.