I unlocked Apple’s hidden Siri chatbot in iOS 26, with surprising (and hilarious) results

Apple is reportedly giving Siri a Large Language Model (LLM) upgrade in a year or two. The boost is expected to make the iPhone’s virtual assistant more conversational and equip it with a broader range of world knowledge. What most users don’t know is that the iOS 26 beta already introduces a hidden AI chatbot that testers can try right away.
For reasons I’ll clarify later on, iOS 26 neither offers a dedicated app for Apple’s LLM chatbot nor bakes it into the default Siri experience. I came across the hidden interface while exploring the updated Apple Shortcuts app. To try it out, you’ll have to build your own shortcut on the iOS 26 developer beta.
Before we get started…
Before delving into Apple’s AI chatbot and its capabilities, there are a few things to keep in mind:
- When building the chatbot via Shortcuts, you can pick between Apple’s on-device model, Private Cloud Compute, and OpenAI’s ChatGPT (GPT-4 variant with real-time results).
- The on-device and Private Cloud Compute models’ knowledge cutoff date is October 2023, so neither has access to live web results or recently updated information.
- Apple’s models claim that they understand English, Spanish, French, German, Chinese (Mandarin), Japanese, Korean, Italian, Portuguese, Russian, Arabic, Hindi, Dutch, Turkish, and Malay—but they seemingly aren’t reliable in several of these languages.
- The chatbot will avoid discussing illegal activities, hate speech, violence, self-harm, sexual content, personally identifiable information, illegal drug use, and political extremism.
- I tested the chatbot for around a week on an iPhone 16 Pro Max running iOS 26 developer beta 1.
- The features I’m about to break down are generally available on any Apple Intelligence-enabled iPhone, iPad, or Mac running a version 26 operating system (iOS 26, iPadOS 26, or macOS 26).
Setting up the chatbot
Like any shortcut, there’s no single way to build the AI chatbot; you can get creative and customize it to work however you like. The key action to incorporate is the new Use Model option, found under the Apple Intelligence menu in the shortcut creation flow, which seemingly only supports text input and output.
When picking the model, I advise you to choose the on-device option. Picking ChatGPT makes no sense, as OpenAI already offers native and web chatbots that work more reliably than a shortcut. Similarly, beyond privacy, I see no reason to use Apple’s Private Cloud Compute, as online services like ChatGPT and Google Gemini are miles ahead.
The main edge of using Apple’s on-device chatbot is that it provides offline access and does not require any additional downloads (assuming you’re already using Apple Intelligence). If you’re connected to the internet, you’re better off using one of the reputable third-party online chatbots for your everyday questions.
If you prefer a Voice Mode approach, you can add an action that converts your speech to text and then feeds the resulting text to the model. You can also have a text-to-speech action read the chatbot’s text response aloud.
My ideal setup has the shortcut present a text box. Once I type my query, the request explicitly asks the LLM to keep its answer brief before feeding it my text, which avoids unnecessarily long responses. I’ve also enabled the Use Model action’s Follow Up toggle, as it lets me ask additional questions while maintaining context and chat history within a single session.
To replicate my setup, follow these steps:
- Launch the Shortcuts app on iOS 26 beta.
- Tap the plus (+) button in the top right corner to create a new shortcut.
- Search for and add the Text action.
- Tap Text in the Text action and choose Ask Each Time.
- Search for and add the Use Model action. Pick the On-Device option.
- Tap Request in the Use Model action, type Briefly process the following request:, then add the Text variable from the autocomplete row right above the keyboard.
- Tap the right arrow (>) on the Use Model action and enable the Follow Up toggle.
- Exit the shortcut editor to save it.
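If you’d rather reach the same on-device model from code, Apple’s new Foundation Models framework (announced at WWDC 2025) exposes it to developers. Here’s a minimal Swift sketch, assuming the `LanguageModelSession` API as presented at WWDC and a device with Apple Intelligence enabled; treat it as an illustration rather than production code:

```swift
import FoundationModels

// A session keeps context across turns, much like the shortcut's
// Follow Up toggle. The instructions string mirrors the brevity
// prompt used in the shortcut above.
let session = LanguageModelSession(
    instructions: "Briefly process each request."
)

func ask(_ question: String) async throws -> String {
    // Runs entirely on-device via Apple's foundation model.
    let response = try await session.respond(to: question)
    return response.content
}

// Usage sketch: follow-up questions reuse the same session,
// so conversational context carries over.
// let answer = try await ask("How long should I boil an egg?")
// let followUp = try await ask("And for a soft yolk?")
```

Because the session object holds the chat history, calling `ask` repeatedly gives you the same follow-up behavior the shortcut’s toggle enables.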
When your shortcut is ready, you can trigger it in multiple ways, including a custom voice command, a double Back Tap, Spotlight Search, or the Action button. If you’ve enabled iCloud sync, you can use the same shortcut on all of your compatible iPhones, iPads, and Macs.

Putting it to the test
To find out how reliable Apple’s AI chatbot is, I asked it one of humanity’s most perplexing questions: How many Rs are there in the word strawberry? The chatbot, utilizing the on-device LLM, correctly answered with three every single time. Curiously, when opting for the supposedly superior Private Cloud Compute option, it falsely and stubbornly claimed there are only two Rs. Then the real-life tests followed, all using the on-device model.
I asked the offline chatbot cooking questions, like how long to boil an egg or how long to cook ground meat in a pressure cooker. The results were mostly accurate and informative. It can also provide ingredient lists and instructions for famous recipes, but I wouldn’t necessarily trust it if I’m having guests over. When asked if pineapple goes on pizza, it refused to state the only correct answer and insisted it was a matter of taste, presumably to avoid offending certain users. Disappointing.
Moving on, I fed the chatbot basic math equations, and it solved them all correctly. It is also aware of and follows the PEMDAS rule, so you don’t need to insert parentheses to have it multiply before adding: it correctly evaluates 2 + 3 × 4 as 14 rather than 20.
When asked to compare the feature sets offered by WhatsApp and Telegram, it provided a well-formatted list breaking down the main options. However, most of the (confidently) stated information was incorrect. Also, for some reason, the chatbot sometimes randomly answered in German even when my queries were explicitly sent in American English.

Speaking of languages, while the chatbot claims to support Arabic and Turkish, it failed to hold meaningful conversations in either. It gets some things right, but most responses include irrelevant words or phrases. I don’t speak the other supported languages, so I couldn’t test them, but I assume the model is only truly proficient in English.
I then moved on to religious questions, which it also didn’t always get right. For example, I asked about the differences and overlap between kosher and halal food according to Jewish and Islamic teachings, and its response was inaccurate. It’s aware of these dietary laws in concept, but it can’t properly compare them or explain their guidelines.
When asked to generate an original quote that comes to its mind, it stated the following: “In the quiet dance between the echoes of our past and the whispers of our potential future, we find the profound truth that each moment is both a reflection of who we have become and a canvas upon which we paint the essence of our being.” Quite touching, if you ask me.
To test its reasoning capabilities, I asked it when we can expect iOS 26, knowing that iOS 18 launched in 2024. Given the October 2023 knowledge cutoff, the reasonable answer would’ve been 2032 (eight annual releases after iOS 18 in 2024). Instead, it answered: “If iOS 18 is released in 2024, we can infer that Apple typically releases new iOS versions annually. Therefore, iOS 26 would logically launch in 2025, assuming the same release pattern continues.” Funnily enough, it got the answer right, not because it can predict the future, but because its reasoning skills are poor. For what it’s worth, it also thinks iOS 27 is launching in 2025 for the same reason.
I continued to test its knowledge of a wide range of topics. For example, it can list symptoms of common health conditions, but you most certainly shouldn’t rely on it (or any AI chatbot, really) for medical advice. Surprisingly, it was also able to correctly tell me which metro line to take to get from (popular) point A to point B in Istanbul, specific stations and all. On the other hand, it failed to provide basic Apple tech support, like how to hide a photo on iOS. Other failures include falsely stating that Americans can’t obtain a visa on arrival in Lebanon and that Mexican citizens don’t need a visa to enter the US legally.
Why Apple’s AI chatbot is hidden
Apple’s LLM in the Shortcuts app isn’t ChatGPT, but it’s not completely useless either. It seemingly uses the same model that powers Writing Tools and the summarization features across iOS. If you feed it a large wall of text and ask it to paraphrase or rewrite it, it’ll do so reliably. But why would you, when the native Writing Tools feature offers a superior UI and UX?
Mainly because, as demonstrated above, Apple’s LLM-powered chatbot is prone to hallucinations and often gives confidently wrong answers. Sure, it answers many questions correctly, but it maintains the same confident tone when spreading misinformation, so you can’t tell right from wrong unless you already know the answer, which defeats the point of asking. In Apple’s defense, every response notes that you should check for mistakes.
All of this will likely change by the time iOS 26 officially arrives, and it will certainly evolve alongside Apple Intelligence’s Siri capabilities. Apple gave no indication during WWDC that it’s shipping a local chatbot as part of iOS 26’s Apple Intelligence features, so it’s probably not going to rise to the level of ChatGPT or Gemini just yet. But you can try it out if you want, which is as close to a demo as we’re going to get.