Apple's iOS 27 Siri announcement landed on March 24, the same day iOS 26.4 shipped with no Siri improvements at all. This pattern of announcing ahead while missing current targets has now repeated three times. Here is what the Gemini deal structure, the delay history, and the iOS 27 redesign actually mean for users who have been waiting.

News of Apple's redesigned Siri for iOS 27 and confirmation that iOS 26.4 had shipped with zero Siri improvements landed on the same day, March 24, 2026. That collision is the most honest summary of where Apple's AI strategy stands right now.
Macworld confirmed that iOS 26.4 arrived that day with no Siri revamp. What the update did deliver while Siri waits is four substantive additions: AI-generated playlists, a redesigned Podcasts video experience, CarPlay chatbot access, and automatic Stolen Device Protection. The Siri overhaul anticipated since the Gemini deal closed in January was entirely absent.
On the same day, 9to5Mac's summary of reporting from Bloomberg's Mark Gurman covered what Apple is building for iOS 27: a redesigned, conversational Siri with persistent chat history, file uploads, and a new Dynamic Island interface. Apple Intelligence features first shown at WWDC 2024 are slated to finally arrive in iOS 27. The full preview is expected at WWDC 2026, scheduled for June 8 through June 12.
What the combined picture shows, and what no single piece of March 24 coverage addressed, is that Apple's AI release pattern completed a third full iteration on the same calendar day. The cycle is consistent: features are announced, the current release window is missed, and the announcement moves forward to the next cycle. iOS 18 missed its Siri targets. iOS 26.4 has now missed its Siri targets. The iOS 27 announcement has already begun.
The redesigned Siri, developed internally under the codename Campos, is not a separate application sitting on the home screen. It is an OS-level replacement for the current Siri experience, activated the same way users trigger Siri today: holding the side button or using the wake phrase.
MacRumors, citing reporting from Bloomberg's Mark Gurman, documented the interface in detail. Past conversations will appear in a list or grid layout, with the ability to favorite, search, and start new chats. Conversations use iMessage-style bubbles. New sessions open with suggested prompts, and both voice and text input are supported throughout. File and photo uploads are included, enabling document analysis and image-based requests.
The Dynamic Island version of the interface introduces a "Searching" pill indicator with a glowing Siri icon while a request is processing. Once results are ready, the interface expands into a translucent panel using Apple's Liquid Glass design language, which users can pull down to continue the conversation.
Beyond the chat interface, the "Ask Siri" button can appear inside the menus of other apps, letting users send content directly to Siri with a request. A "Write with Siri" option surfacing Writing Tools will appear in the system keyboard. Siri is also planned to absorb current Spotlight Search functionality, expanding the assistant's role at the OS level.
What distinguishes the Campos redesign from earlier Siri promises is the category of problem it addresses. Every prior Siri improvement attempt targeted intelligence: making Siri smarter, adding more capable models, improving what the assistant knows. The iOS 27 redesign targets the interaction model. Today's Siri is stateless: each command is treated as a fresh request with no memory of what came before. Campos would be conversational: context carries across turns, the assistant remembers what it just said, and users can follow up without restating the entire task. That is a different product category built on top of Gemini's intelligence layer, not a continuation of the same effort.
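To make the stateless-versus-conversational distinction concrete, here is a minimal Swift sketch of the two interaction models. It is purely illustrative: none of these types (`Turn`, `AssistantSession`, `respond`) are Apple APIs, and the real implementation is not public.

```swift
// Illustrative only; these types are hypothetical, not Apple APIs.
struct Turn {
    let userText: String
    let assistantText: String
}

// Stand-in for the underlying model call.
func respond(to request: String, history: [Turn]) -> String {
    history.isEmpty
        ? "Fresh answer to: \(request)"
        : "Answer to: \(request), aware of \(history.count) prior turn(s)"
}

// Stateless model: every request starts from nothing, so a follow-up
// like "play it" cannot be resolved against a previous answer.
func handleStateless(_ request: String) -> String {
    respond(to: request, history: [])
}

// Conversational model: the session owns the transcript, so context
// carries across turns and follow-ups can refer back to earlier replies.
final class AssistantSession {
    private(set) var history: [Turn] = []

    func send(_ request: String) -> String {
        let reply = respond(to: request, history: history)
        history.append(Turn(userText: request, assistantText: reply))
        return reply
    }
}

let session = AssistantSession()
_ = session.send("Find the podcast Anna texted me about")
print(session.send("Play it"))   // resolvable only because history exists
```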
Apple's official delay statement, published March 7, 2025, said it would "take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year." The features in question were personal context, onscreen awareness, and in-app actions. Those had been the selling points of the iPhone 16 and the flagship demonstrations of WWDC 2024.
The gap between what Apple demonstrated and what shipped came partly from an architectural split inside the iOS 18 Siri codebase. Two separate backends existed: one handling legacy Siri voice commands, one handling the newer language model features. There was not enough time to build a unified system before the iOS 18 release cycle. Apple's own leadership, including Craig Federighi, raised internal concerns that the features did not perform as advertised.
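Apple has not published the iOS 18 Siri architecture, but the cost of a split like that is easy to sketch. In the hypothetical Swift below (every name is invented), each request binds to exactly one backend, so context built in one pipeline is invisible to the other.

```swift
// Hypothetical sketch; the real iOS 18 architecture is not public.
protocol SiriBackend {
    func handle(_ request: String) -> String
}

struct LegacyCommandBackend: SiriBackend {
    func handle(_ request: String) -> String { "Legacy intent for: \(request)" }
}

struct LanguageModelBackend: SiriBackend {
    func handle(_ request: String) -> String { "LLM answer for: \(request)" }
}

// Each request is routed to exactly one pipeline. State created in one
// backend never reaches the other, and every new feature must pick a
// side, which is the unification problem described above.
func dispatch(_ request: String, isLanguageModelFeature: Bool) -> String {
    let backend: SiriBackend = isLanguageModelFeature
        ? LanguageModelBackend()
        : LegacyCommandBackend()
    return backend.handle(request)
}
```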
By February 2026, the same two categories of features were in trouble again. MacRumors, citing reporting from Bloomberg's Mark Gurman, documented that internal iOS 26.4 testing had surfaced multiple failure modes: Siri cutting users off mid-sentence during fast speech, failures on complex multi-step requests, slow response times, and a fallback bug in which the assistant defaulted to the existing ChatGPT integration for queries the Gemini-powered version was supposed to handle. The two features most at risk were personal data access and app intents. Again.
Personal data access was delayed in iOS 18 for reliability. App intents were delayed in iOS 26 for reliability. Both are delayed again in the current cycle, and the fact that they're always the same two features is the signal that the difficulty isn't model quality but the structural tension between capability, response speed, and Apple's privacy isolation requirements.
These two features face a constraint that general AI tasks do not. Generating a playlist from a text prompt, summarizing a document, creating an image from a description: these tasks can tolerate network latency and don't require access to personal data. Asking Siri to find a podcast a friend texted about last month and start playing it immediately requires the assistant to access message history, reason over personal data, cross into the Podcasts app, and return a result fast enough to feel responsive, all while keeping that personal data within Apple's controlled infrastructure. The capability must be high, the latency must be minimal, and the data must not leave Apple's secure environment. Those three requirements form a constraint that Apple has not yet resolved across multiple models and multiple release cycles. The pattern visible across iOS 18, iOS 26.4, and the current cycle is not one that a different AI model resolves.
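One way to see why the three requirements collide is to sketch them as a routing decision. The Swift below is a hypothetical illustration, not Apple's logic; `ExecutionTarget`, `SiriRequest`, and the thresholds are all invented.

```swift
// Hypothetical routing sketch; not Apple's implementation.
enum ExecutionTarget {
    case onDevice       // lowest latency, most private, least capable
    case privateCloud   // Apple-controlled servers (Private Cloud Compute)
    case externalModel  // opt-in third-party model; no personal data allowed
}

struct SiriRequest {
    let touchesPersonalData: Bool
    let needsLargeModel: Bool
    let latencyBudgetMs: Int
}

func route(_ r: SiriRequest) -> ExecutionTarget {
    if r.touchesPersonalData {
        // Personal data must stay inside trusted infrastructure, even
        // when a larger external model would answer more capably.
        return r.needsLargeModel ? .privateCloud : .onDevice
    }
    // General tasks are free to trade latency for capability.
    if r.needsLargeModel { return .externalModel }
    return r.latencyBudgetMs < 300 ? .onDevice : .privateCloud
}

// The podcast example above: personal data plus multi-app reasoning plus
// a tight latency budget leaves only the most constrained path.
let podcastTask = SiriRequest(touchesPersonalData: true,
                              needsLargeModel: true,
                              latencyBudgetMs: 500)
print(route(podcastTask))   // privateCloud: capability and privacy kept, latency at risk
```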
On January 12, 2026, Apple and Google published a joint statement confirming a multi-year collaboration in which Apple's Foundation Models would be based on Google's Gemini technology. The statement was careful on privacy: "Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple's industry-leading privacy standards." Apple evaluated OpenAI and Anthropic before selecting Google, stating that Gemini's technology "provides the most capable foundation" for what Apple needed.
At Apple's fiscal Q1 2026 earnings call on January 29, Tim Cook addressed the privacy dimension directly. 9to5Mac documented his statement: requests processed through the new Siri run on Apple devices or on Apple's Private Cloud Compute infrastructure, not on Google's servers. The ChatGPT integration, used for opt-in queries the current Siri routes to OpenAI, is not being replaced by this deal. Both arrangements continue in parallel.
The initial coverage of the deal described it as a licensing arrangement: Apple uses Gemini as a backend model, Google receives a substantial fee. That framing understates what the agreement actually involves. Apple has model-level access to Gemini within its own data centers, not just calls to an external API, and this structural difference determines what Apple can build independently over time. The financial terms have not been confirmed publicly; CNBC reported that Apple is paying approximately $1 billion annually for Gemini access, with neither company confirming the figure. The reported scale of that payment is one more sign that Apple's ambitions for the deal extend well beyond a standard vendor relationship.
That ambition is evident in what The Information reported on March 25: Apple has complete model access to Gemini within its own facilities and is using that access to perform knowledge distillation. A distilled model learns not just what answers a larger model produces but the internal reasoning steps the larger model uses to reach those answers. The result is a smaller model that can run directly on Apple hardware without network connectivity, approaching the larger model's performance for specific tasks. Apple's Foundation Models team is doing this work, though its broader goals beyond on-device capability have not been disclosed.
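For readers unfamiliar with the technique, the simplest form of knowledge distillation trains a small student model to match a large teacher's softened output distribution rather than only its final answers. The Swift sketch below shows that textbook loss; it is illustrative only and says nothing about Apple's actual pipeline, which per The Information also targets the teacher's intermediate reasoning.

```swift
import Foundation

// Textbook distillation loss, purely illustrative. Softmax at a
// temperature > 1 exposes the teacher's "dark knowledge": how it ranks
// near-miss answers, not just which answer wins.
func softmax(_ logits: [Double], temperature: Double) -> [Double] {
    let scaled = logits.map { $0 / temperature }
    let maxValue = scaled.max() ?? 0
    let exps = scaled.map { exp($0 - maxValue) }   // shift for numerical stability
    let total = exps.reduce(0, +)
    return exps.map { $0 / total }
}

// Cross-entropy between the teacher's and student's softened
// distributions. Minimizing this pushes the student toward the teacher's
// full behavior, a richer training signal than final answers alone.
func distillationLoss(teacherLogits: [Double],
                      studentLogits: [Double],
                      temperature: Double = 2.0) -> Double {
    let teacher = softmax(teacherLogits, temperature: temperature)
    let student = softmax(studentLogits, temperature: temperature)
    return zip(teacher, student).reduce(0) { $0 - $1.0 * log($1.1 + 1e-12) }
}

// A student that roughly tracks the teacher yields a small loss.
let loss = distillationLoss(teacherLogits: [4.0, 1.0, 0.2],
                            studentLogits: [3.5, 1.2, 0.1])
print(loss)
```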
What we cannot confirm from Apple's public statements is the long-term trajectory of the Gemini relationship once Apple's own distilled models mature.
Three milestones deserve separate assessment, each carrying a different level of certainty.
Apple's developer conference is the most reliable milestone: a WWDC announcement is a credible signal of direction, and Apple has confirmed the dates. The Campos redesign is widely expected to be the flagship iOS 27 feature revealed there. A WWDC preview, however, does not guarantee what ships in September, or that what ships will be complete.
September is Apple's standard iOS release window, and the Campos overhaul is currently the target for that cycle. MacRumors, citing reporting from Bloomberg's Mark Gurman, reported in February that iOS 27 is already being discussed internally as the landing zone for features that don't make iOS 26.5. That is useful context: it means the September release could arrive with a narrower initial feature set than the full Campos vision describes.
Feature completeness is the genuinely uncertain milestone. What Apple announces at WWDC and what ships in September may not be identical, and what ships in September may not be complete. Available reporting does not establish precisely which features will ship in September versus which will arrive in later iOS 27.x updates. Given that personal data access and app intents are both delayed and both central to the iOS 27 vision, the fall release is a plausible target but not a guarantee of completeness.
Apple has also enabled users in Japan to replace Siri's side-button trigger with third-party assistants, including Gemini and Alexa, driven by Japanese regulatory guidelines that took effect in December 2025. Whether equivalent options come to EU users under the Digital Markets Act remains an open question; Apple has made no public commitment on a timeline.
9to5Mac, citing The Information's March 25 reporting, documented that Apple has complete access to Gemini's internal reasoning chain for distillation into smaller on-device models. That changes the long-term picture: Apple is not simply licensing intelligence from a competitor; it is using Google's model as a training substrate for the models it will eventually run independently. This suggests, though Apple has not confirmed it, that the Gemini deal is a bridge arrangement designed to accelerate Apple's own model development rather than a permanent outsourcing of Siri's intelligence layer.
If that interpretation is accurate, the Siri story beyond iOS 27 looks different from what the current deal structure implies. A Siri powered by Apple's own distilled models would not depend on Gemini's continued involvement. For now, that outcome is inference from deal structure, not confirmed roadmap.
The most useful thing to watch at WWDC in June is not whether Apple announces personal data access and app intents again. Those features have been announced. The meaningful signal is whether Apple demonstrates them live, on stage, with real user data, under conditions that would expose failures.
Live demonstrations with real data are harder to stage than prepared demos with controlled inputs. If Apple runs Siri through a live multi-step task in June (finding a message, cross-referencing a calendar, pulling up a file, sending a reply), that is a meaningfully different signal from a polished video. If the demonstration uses controlled inputs or avoids exactly the classes of tasks that have failed in testing, that is a signal too.
The iOS 27 Siri redesign is the most structurally distinct attempt Apple has made at this problem. The interaction model is genuinely new. The Gemini deal is architecturally more sophisticated than it appeared in January. The delay pattern is real and consistent. Whether WWDC marks the beginning of delivery or the beginning of another cycle of previews depends on execution that has not been publicly demonstrated yet. June will tell.
No, the ChatGPT integration stays. Apple's ChatGPT integration, which routes specific opt-in queries to OpenAI, is not being replaced by the Gemini deal; Apple confirmed this to CNBC when the January 12 announcement was published. The Gemini partnership covers Apple's own Foundation Models, which power the intelligence layer behind Siri's native capabilities. ChatGPT remains available as an opt-in option for queries users choose to send there.
The two arrangements serve different purposes. Gemini powers what Siri knows and can reason about when operating as Apple's own assistant. ChatGPT integration exists for users who want to reach OpenAI's model directly through Siri without switching apps.
Apple's architecture keeps Gemini-powered Siri requests on Apple's infrastructure. Tim Cook confirmed at the January 29, 2026 earnings call that Siri processes requests on device or through Apple's Private Cloud Compute system, not on Google's servers. The joint statement from Apple and Google on January 12 also states that Apple Intelligence will continue running on Apple devices and Private Cloud Compute.
The privacy question has a less-examined dimension worth understanding. Even if user data never reaches Google's servers, Apple is delegating the core reasoning logic of Siri to a model trained by Google. Siri's behavior in edge cases, its safety filters, and its handling of sensitive topics will be shaped by Gemini's training, not by Apple's own engineering choices. That is a different category of concern than data exposure, and it is not addressed by Private Cloud Compute architecture. There are no publicly auditable controls beyond Apple's own assurances on this point.
Apple has not specified which devices will support the full Campos redesign. Apple Intelligence features introduced in iOS 18 required at minimum an iPhone 15 Pro or iPhone 16 series device, with on-device processing dependent on Apple's Neural Engine. The more demanding personal data access and app intent features may carry similar hardware constraints. Reporting from Bloomberg's Mark Gurman, as covered by MacRumors, has not indicated a specific device cutoff for iOS 27 Siri features beyond the general expectation that Apple Intelligence requirements will apply.
If the iOS 18 hardware requirements are any guide, devices without a capable Neural Engine tier will either receive a limited version of the redesign or none of it. Apple typically announces hardware eligibility at WWDC alongside the feature details.
The iPhone 16 was marketed partly on the strength of Siri improvements that were not yet complete when the phone shipped. The personal context features, in-app actions, and onscreen awareness that Apple showed at WWDC 2024 were announced as coming in a future update. Apple's official statement in March 2025 acknowledged the delay: the features would "take us longer than we thought" to deliver.
MacRumors, citing reporting from Bloomberg's Mark Gurman, documented that those same features were still in trouble as of February 2026, with testing revealing performance problems and a fallback bug in the Gemini integration. Those features are now formally part of the iOS 27 target. Whether they arrive in September 2026 or slip further depends on the testing outcomes that will become visible in the iOS 27 beta cycle over the summer.