OpenAI pushed a Codex CLI update on January 27 that makes cached web search the default client behavior. The change, tracked as PR #9974 in the open-source repository, eliminates the need to pass --search or configure web_search_request = true in config files before the agent can query external documentation.
Five months of asking
The feature request dates back to September 2025. A GitHub issue titled "Enable Web Search by default" picked up 49 thumbs-up reactions over the following months, with developers arguing that an LLM helping with code should be able to look things up without manual opt-in.
OpenAI's initial approach was conservative. Web search stayed off by default, requiring either CLI flags or edits to ~/.codex/config.toml. The reasoning made sense at the time: the feature was experimental, and giving an agent network access raises prompt injection concerns that sandbox-only execution doesn't. But as the web search tool stabilized through late 2025, the friction started looking less like prudent caution and more like an oversight.
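For reference, the opt-in that users previously had to perform looked roughly like this. This is a hedged sketch: the key name is the one this article reports, but its exact placement in the file (top level versus a nested section) may vary across CLI versions.

```toml
# ~/.codex/config.toml -- pre-update opt-in for web search.
# Key name as reported above; exact placement may differ by CLI version.
web_search_request = true
```

The alternative was passing --search on each invocation instead of editing the file.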
What actually changed
The release notes for the January 27 build are terse: "Cached web_search is now the default client behavior." That's it. The change applies to the CLI and propagates to IDE extensions like VS Code and JetBrains through the shared app server.
One detail is worth noting: the default enables cached web search, not unrestricted live fetches. OpenAI added web_search_cached as an intermediate option back in December, letting the agent pull from indexed results without hitting external servers in real time. It's faster and sidesteps some of the security concerns around live network access, though results can lag behind the live web by the length of the cache window.
If you want live searches, you still need to configure that explicitly. The security documentation continues to warn about prompt injection risks when enabling network access, and the sandbox defaults haven't changed.
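The resulting tiers can be sketched in config terms. Again a hedged illustration, using the option names the article mentions rather than a verified schema:

```toml
# ~/.codex/config.toml -- illustrative only; option names as reported
# in this article, exact syntax may differ across CLI versions.

# New default (no config needed): cached web search (web_search_cached).
# Opting in to live, real-time searches remains explicit:
web_search_request = true
```

The sandbox and network-access warnings in the security documentation apply to the live tier, not the cached default.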
The practical difference
For someone setting up Codex fresh, the workflow just got simpler. You install via npm, authenticate with your ChatGPT account, and the agent can immediately answer questions about libraries and APIs without you remembering to flip a setting. That's particularly useful for queries about packages released after the model's training cutoff, or for checking current documentation that might have drifted from what the model learned.
The agent surfaces web search activity in the transcript, showing web_search items when it looks something up. The underlying mechanism queries sites like GitHub, npm, and documentation portals, then folds the results into its context window before responding.
Whether this moves the needle for people already using Codex is less clear. Anyone who wanted web search badly enough had already enabled it. The change matters more for adoption, lowering the configuration tax on new users who might otherwise bounce off a tool that couldn't look up a package version.
Still missing
The update doesn't address the broader question of when Codex should search versus relying on trained knowledge. Simon Willison noted in his review of GPT-5 Codex that the model was "surprisingly bad at using the Codex CLI search tool to navigate code," suggesting the model's judgment about when to invoke search could use work regardless of whether the feature is enabled by default.
There's also the JetBrains extension, which OpenAI documented in a Skyscanner case study but which remains less polished than the VS Code integration. The same web search default should apply, but JetBrains users have historically hit more rough edges with Codex.
Updating
If you're on an older Codex CLI version, run npm update -g @openai/codex to pull the latest. The cached web search default takes effect immediately. To disable it and return to the previous behavior, set web_search_request = false in your config or pass --no-search on the command line.
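For anyone who prefers the old behavior, the opt-out described above would look something like this in the config file. A sketch, assuming the key sits at the top level as the article's examples imply:

```toml
# ~/.codex/config.toml -- revert to the pre-update default (no web search)
web_search_request = false
```

The --no-search flag achieves the same thing per-invocation without touching the file.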