On May 5, 2026, OpenAI quietly replaced GPT-5.3 Instant with GPT-5.5 Instant as the default ChatGPT model[1]. No press tour, no demo video. Just a release note, the change going live across web and apps, and a model card showing what changed under the hood[2].
A week of real use later, the upgrade is one of the more interesting OpenAI ships in 2026 — for what it stops doing as much as for what it starts doing.
Hallucinations: the headline metric
[Chart: hallucination rate across GPT-5 minor versions. Source: openai.com (relative figure); trend illustrative.]
OpenAI's own measurement is "52.5% fewer hallucinated claims than GPT-5.3 Instant on high-stakes prompts" spanning medicine, law, and finance[2]. The chart above is illustrative: OpenAI publishes the relative reduction, not the absolute rates. The directional read is clean, though. Each minor version of GPT-5 has roughly halved the hallucination rate on hard professional-domain prompts.
For context: this is the trajectory at which ChatGPT becomes genuinely usable as a first-pass reference for professionals (caveats aside). If the absolute rate drops below 1%, it might displace some Stack Overflow / WebMD / Investopedia traffic outright.
Brevity: the other headline
OpenAI's release notes describe GPT-5.5 Instant as producing "more concise, less padded" responses than 5.3 Instant. 9to5Mac flagged the specific "fewer emojis" change[4]. The post-training direction is a deliberate pushback against the verbosity creep that hit 5.2 and 5.3: fewer hedges, less "let me also mention", fewer extraneous emojis.
ChatGPT now sounds more like Claude — confident, terse, willing to give you the answer without explaining itself first.
Personalised memory across products
The headline feature that actually shifts UX is cross-product memory[2]. GPT-5.5 Instant can refer back to past ChatGPT conversations, uploaded files, and (with permission) connected Gmail, giving personalised answers without you re-explaining context every time.
There is also a memory-sources panel where you can see which past conversation or document the model is referencing for any given claim, and delete or correct any source. The transparency is more important than the feature. AI tools that pull memory you cannot inspect are spooky. AI tools that show you what they remembered are useful.
API implications
GPT-5.5 is exposed in the API as chat-latest[3]. API pricing per OpenAI's docs is $5 per million input tokens and $30 per million output tokens — same input pricing as Claude Opus 4.7, more expensive on output (Opus is $25). Context window is approximately 1 million tokens, with a higher-cost tier for prompts above 272k.
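To make those rates concrete, here is a minimal cost sketch at the listed prices. Note one assumption: OpenAI has not published the long-context surcharge, so the code only flags prompts above the 272k threshold rather than pricing them.

```python
# Rough per-request cost estimator for GPT-5.5 Instant via the API.
# Rates as cited above: $5 per 1M input tokens, $30 per 1M output tokens.
# The >272k-token long-context tier carries a higher (unpublished) rate,
# so we only warn about it rather than guess a multiplier.

LONG_CONTEXT_THRESHOLD = 272_000  # tokens; surcharge tier starts here

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Base-tier cost in dollars; ignores the long-context surcharge."""
    if input_tokens > LONG_CONTEXT_THRESHOLD:
        # Real cost will be higher here: the long-context rate isn't public.
        print("warning: prompt exceeds 272k tokens, surcharge applies")
    return (input_tokens * 5 + output_tokens * 30) / 1_000_000

# A typical chat turn: 2,000 tokens in, 500 tokens out.
print(f"${request_cost(2_000, 500):.4f}")  # $0.01 input + $0.015 output
```

A typical short chat turn lands around two and a half cents; the output side is already most of the bill.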
Two things worth knowing if you build on the platform:
- The personalised memory is opt-in via a header. If you do not pass it, you get the legacy non-memory behaviour. Existing API integrations are unchanged by default.
- Output costs dominate. At $30/1M output, generating long responses is meaningfully more expensive than Opus 4.7 ($25). For high-output workloads (e.g. agent loops that produce long action sequences), Opus 4.7 is now cheaper to run than GPT-5.5 Instant.
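The crossover is easy to see in numbers. A sketch using the rates quoted in this post ($5/M input for both models; $30/M output for GPT-5.5 Instant vs $25/M for Opus 4.7):

```python
# Cost comparison at the per-million-token rates quoted above.
# Both models share $5/M input pricing, so any workload with nonzero
# output is strictly cheaper on Opus 4.7 at these rates.

def cost(input_tokens: int, output_tokens: int, out_rate: float) -> float:
    """Request cost in dollars at $5/M input and the given output rate."""
    return (input_tokens * 5 + output_tokens * out_rate) / 1_000_000

# Output-heavy agent run: 100k tokens in, 400k tokens out.
gpt = cost(100_000, 400_000, out_rate=30)   # $0.50 + $12.00 = $12.50
opus = cost(100_000, 400_000, out_rate=25)  # $0.50 + $10.00 = $10.50
print(f"GPT-5.5: ${gpt:.2f}  Opus 4.7: ${opus:.2f}")
```

For that run, the gap is $2 per invocation; multiplied across an agent fleet, the output rate is the number to watch.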
Where this sits vs Claude and Gemini
GPT-5.5 Instant is now the default model behind the largest consumer AI product on Earth — ChatGPT crossed 900 million weekly active users in late February[5]. It is broadly faster than Claude Opus 4.7, marginally cheaper on input, and more expensive on output. It hallucinates less than any previous OpenAI model.
For hard coding work, agent runs, and tasks requiring deep reasoning, Claude Opus 4.7 is still ahead. For long-context document work, Gemini 4 Pro (when released at I/O 2026 in eight days) will likely take the crown.
The market is settling into a three-vendor pattern: OpenAI for breadth, Anthropic for depth, Google for context. If you build on a multi-provider gateway, you can route to whichever wins for the specific task.
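A routing layer along those lines can be as simple as a task-to-model table. A sketch of the idea, with one caveat: chat-latest is the ID cited in this post, but the other two model IDs and the task taxonomy are illustrative placeholders, not official identifiers.

```python
# Illustrative task-based router over a multi-provider gateway.
# The breadth/depth/context mapping is this post's heuristic, not a
# vendor recommendation; only chat-latest is a cited model ID.

ROUTES = {
    "general":      "chat-latest",      # OpenAI GPT-5.5 Instant: breadth
    "coding":       "claude-opus-4-7",  # hypothetical Anthropic model ID
    "agent":        "claude-opus-4-7",  # deep reasoning / agent runs
    "long_context": "gemini-4-pro",     # hypothetical Google model ID
}

def route(task: str) -> str:
    """Pick a model ID for a task type, defaulting to the generalist."""
    return ROUTES.get(task, ROUTES["general"])

print(route("coding"))         # routes to the depth model
print(route("summarise pdf"))  # unknown task falls back to chat-latest
```

In practice the hard part is classifying the task, not dispatching it, but the dispatch table is where the three-vendor split becomes operational.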
The verdict
GPT-5.5 Instant is a good upgrade. Not a paradigm shift, but a meaningful improvement on the dimensions that matter for the typical ChatGPT user — fewer wrong answers, shorter responses, finally-usable memory.
For 900 million weekly ChatGPT users, that is the most consequential AI change of the week.