Tutorial · Estimated reading 20 mins

OpenAI Codex CLI in 2026:
Clash split rules for npm, APIs, and sandbox images

The OpenAI Codex CLI (@openai/codex) is a terminal-native coding agent: you install it from the npm registry, authenticate with ChatGPT or an OpenAI API key, then keep long-lived sessions talking to OpenAI while optional MCP tooling and sandbox workflows pull more bytes from registries and CDNs. That entire chain is exactly where Clash and Mihomo shine—if you order DOMAIN-SUFFIX rows, keep rule-provider hygiene sane, and avoid fighting corporate VPN or Docker Desktop defaults. This article walks the real developer path end to end, so your Codex CLI routing no longer depends on generic ChatGPT rule lists alone.

Codex CLI · npm · OpenAI API · Docker Hub · Mihomo · split routing

1 Why the OpenAI Codex CLI deserves more than a generic ChatGPT rule list

If you already maintain split rules for browser ChatGPT, you are halfway there—but the OpenAI Codex CLI stitches three different “internet personalities” into one habit loop. First, you treat the machine like a package consumer: npm install -g @openai/codex (or an equivalent package manager wrapper) talks to the public npm registry, often via registry.npmjs.org, and may fan out to tarball CDNs depending on version and mirror settings. Second, you authenticate like a platform user: the default path is Sign in with ChatGPT, which opens a browser flow, may use device-code beta paths on headless hosts, and ultimately leaves tokens under ~/.codex/auth.json or the OS keyring. Third, you execute like an API client: model calls, streaming responses, and refresh logic lean on OpenAI API infrastructure rather than whatever hostnames your social feed remembered from last year.

None of that is exotic individually; the failure mode is ordering. A catch-all GEOIP rule that accidentally pins the npm CDN to the wrong exit produces “random” install failures. A DNS mode that returns FakeIP addresses while Node still resolves AAAA through a bypassing stub resolver yields TLS errors that look like certificate bugs. A corporate VPN that captures 0.0.0.0/0 while Docker pulls Docker Hub layers through a separate interface silently splits your trust boundary. This guide follows the real sequence—install, login, run, optional MCP, optional containers—so you can reason about Clash / Mihomo policy the same way you reason about the product. For broader OpenAI domain coverage, still keep the ChatGPT and OpenAI API split article in your back pocket; here we emphasize the Codex-shaped edges.

Terminology: this site uses “Clash-compatible” to include cores such as Mihomo that accept familiar rules and rule-providers YAML. You can paste the same suffix rows into Clash Verge Rev, Stash-class clients, or headless gateways—just normalize policy group names to whatever you already export (PROXY, AI, RESEARCH, and so on).

2 Phase one: installing @openai/codex from the npm registry (and GitHub when you skip npm)

Most teams install with npm i -g @openai/codex. That implies reliable HTTPS to the registry front door, consistent DNS, and enough bandwidth for the tarball. In split-routing setups, the conservative pattern is: pin DOMAIN-SUFFIX,npmjs.org and DOMAIN-SUFFIX,registry.npmjs.org to a stable outbound group (or DIRECT if your uplink to npm is faster unmolested), then log which additional hostnames appear in Mihomo’s connection list during upgrades. Package managers are notorious for following redirects to edge nodes with different suffixes; when that happens, capture the SNI from logs instead of guessing from outdated forum posts.

Some engineers prefer Homebrew or direct downloads from GitHub Releases to avoid Node entirely. That shifts traffic toward github.com, objects.githubusercontent.com, and related asset hosts—useful when npm is blocked but Git is not, or vice versa. The overlap with MCP workflows is real: our MCP tooling split-routing guide already walks npm plus GitHub hygiene; read it when Codex pulls optional MCP servers that themselves install from npm or clone repositories.

Practical tip: run npm view @openai/codex version and a dry-run install while tailing Mihomo logs. You want evidence before you freeze YAML. If installs only fail on Wi-Fi but succeed on tethered LTE, suspect DNS or middleboxes rather than Codex itself.
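That log-tailing habit can be sketched in shell. The log excerpt below is illustrative only—the exact line format of your Mihomo build may differ, so adjust the awk field to match what your core actually prints:

```shell
# Sketch: extract the unique destination hostnames from a Mihomo-style log
# excerpt. The sample lines are invented for illustration; real input comes
# from your client's log pane or `tail -f` on the core's log file.
log='
[TCP] 192.168.1.10:51324 --> registry.npmjs.org:443 match DomainSuffix using AI
[TCP] 192.168.1.10:51330 --> cdn.npmjs.example:443 match Match using PROXY
[TCP] 192.168.1.10:51331 --> registry.npmjs.org:443 match DomainSuffix using AI
'
# Field 4 is "host:port" in this assumed format; keep only the host, dedupe.
hosts="$(printf '%s\n' "$log" | awk '/-->/ { split($4, a, ":"); print a[1] }' | sort -u)"
printf '%s\n' "$hosts"
```

Run this against a capture taken during `npm view` or a dry-run install, and the surviving hostnames are exactly the candidates for new DOMAIN-SUFFIX rows.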

Mirrors: If your organization mandates an internal npm mirror, set registry= in .npmrc and add that mirror’s hostname to your rules explicitly. Blindly copying public-registry suffix rows will not cover a private Artifactory hostname.
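A minimal ~/.npmrc for that mirror setup might look like this; the hostname is a hypothetical placeholder for your org's real mirror:

```ini
; Sketch of a per-user ~/.npmrc pointing at an internal mirror.
; npm.mirror.internal.example is a made-up hostname -- substitute yours,
; and add the same hostname to your Mihomo rules (often as DIRECT).
registry=https://npm.mirror.internal.example/
```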

3 Phase two: ChatGPT sign-in versus API keys—and the OpenAI domains you should expect

Official documentation describes two supported methods: Sign in with ChatGPT (default when no valid session exists) and Sign in with an API key for usage-based access, with keys issued from the OpenAI dashboard. ChatGPT sign-in opens a browser window (or device-code flow on headless machines) and returns an access token to the CLI; API-key mode bills through the Platform account and follows API retention policies instead of ChatGPT workspace controls. Both paths ultimately require trustworthy TLS to OpenAI-operated endpoints; the exact host set evolves, so treat the following as a baseline to validate in your logs rather than scripture carved in stone.

Commonly observed families include api.openai.com for inference-shaped traffic, platform.openai.com for key management and dashboard flows, openai.com and marketing or auth helpers on subdomains, and chatgpt.com when the product leans on ChatGPT identity. Enterprise SSO may introduce additional IdP hostnames (Okta, Azure AD, Google)—those belong in their own suffix groups so you do not “fix Codex” by widening openai.com to the entire internet. When IT terminates TLS, remember Codex supports CODEX_CA_CERTIFICATE or SSL_CERT_FILE for custom CA bundles; that is a policy conversation, not something Mihomo can magically bypass without importing the same trust anchor.
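When a private root CA is in play, the fix lives in environment variables, not in routing rules. A sketch, with a hypothetical PEM path standing in for whatever bundle your IT actually ships:

```shell
# Sketch: point Codex (and other TLS clients) at a corporate root CA instead
# of disabling verification. The path below is a hypothetical example.
export CODEX_CA_CERTIFICATE="$HOME/.config/certs/corp-root.pem"
# Many tools read the generic OpenSSL variable instead, so mirror it:
export SSL_CERT_FILE="$CODEX_CA_CERTIFICATE"
echo "CA bundle: $SSL_CERT_FILE"
```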

Device-code login and localhost callbacks (upstream docs cite a default loopback port of 1455 for the browser return) are largely local; they matter for SSH port forwards, not for DOMAIN-SUFFIX rows. If you tunnel with ssh -L 1455:localhost:1455, make sure the tunnel itself is stable; Clash rarely needs a rule for 127.0.0.1.

4 Phase three: long-lived sessions, streaming, and why “it worked once” is not enough

Codex is not a fire-and-forget REST demo. Agents keep context, retry operations, and may hold streaming connections open while the model thinks. Any proxy group that aggressively rotates exits per TCP connection will look fine in a speed test yet frustrate an assistant that expects session stickiness. Prefer a URL-test or fallback group with a sane interval for AI traffic, or pin Codex-related suffixes to a single reliable node when your provider allows it.
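A conservative url-test group for that purpose could look like the following; node names are placeholders, and the interval and tolerance values are reasonable starting points rather than upstream recommendations:

```yaml
# Sketch: a sticky URL-test group for long-lived Codex sessions.
proxy-groups:
  - name: AI
    type: url-test
    url: https://www.gstatic.com/generate_204
    interval: 300      # seconds between health checks; avoid aggressive values
    tolerance: 100     # require a 100 ms improvement before switching nodes
    proxies:
      - node-a         # placeholder node names
      - node-b
```

The tolerance line is what prevents pointless exit flapping mid-stream: a marginally faster node is not worth dropping an open streaming response.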

Idle timeouts on captive portals, hotel Wi-Fi, or aggressive corporate firewalls also masquerade as “Codex bugs.” If disconnects correlate with inactivity, split rules are not the first knob—MTU, middlebox TCP timers, or VPN idle disconnects are. Still, Mihomo’s logging tells you whether the reset happened upstream of the proxy or between the proxy and OpenAI, which saves hours of tail-chasing.

When comparing with IDE-integrated assistants, the network footprint is similar but the process boundary differs: the CLI runs in your shell profile, inheriting HTTP_PROXY, HTTPS_PROXY, and ALL_PROXY only if you exported them—TUN mode on the host avoids that class of drift. If you already solved routing for GitHub Copilot, reuse the discipline (explicit groups, avoid over-broad PROCESS rules unless you mean it).

5 Optional MCP servers: npm and GitHub again, but with agent urgency

Codex can orchestrate tools; when those tools are implemented as MCP servers, you re-enter the npm and GitHub graph from the hot path of an autonomous loop rather than a human-paced install. That means transient DNS failures hurt more because the agent may not back off politely. Keep MCP-related suffix rows adjacent to the Codex/OpenAI rows in YAML so a single mis-sorted catch-all does not starve both. If you vendor-lock MCP packages through an internal registry, document that hostname beside Codex rules so future you recognizes the dependency chain.

This article intentionally does not duplicate every MCP hostname table; instead, treat MCP as a multiplier on whatever registry strategy you already chose. When in doubt, widen based on observed SNIs from Mihomo’s UI, not based on keyword guessing inside RULE-SET files you do not control.

6 Sandbox execution and container pulls: Docker Hub, GHCR, and mirrors

When Codex—or your own wrappers—runs work inside containers, image pulls become part of the story. Docker Hub (registry-1.docker.io, docker.io, auth endpoints under auth.docker.io) is the default public registry many developers assume “just works.” GitHub Container Registry (ghcr.io) appears frequently for CI-built base images. Some enterprises mirror both into internal registries; if so, add those mirror hostnames explicitly and consider DIRECT to keep large layer downloads off congested overseas exits.

Docker Desktop on macOS or Windows introduces another split-tunnel puzzle: the Linux VM that backs Docker has its own DNS and forwarding path, sometimes bypassing host HTTP proxy variables unless you configure daemon JSON or systemd drop-ins on Linux hosts. If Codex launches containers that need outbound model access, ensure those containers either inherit proxy env vars or route through host TUN where policy allows. Mixed setups—host on Mihomo TUN, Docker set to “system proxy off”—are a frequent source of “works in shell, fails inside container.”
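On a Linux host, the systemd drop-in mentioned above is a small file; paths and the mixed-port value below are common defaults, not guarantees—check your own config.yaml:

```ini
# Sketch: /etc/systemd/system/docker.service.d/http-proxy.conf
# The Docker daemon performs image pulls, and it ignores your shell's exports,
# so proxy settings must reach it here. 127.0.0.1:7890 assumes Mihomo's
# common mixed-port default; registry.mirror.internal.example is hypothetical.
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:7890"
Environment="HTTPS_PROXY=http://127.0.0.1:7890"
Environment="NO_PROXY=localhost,127.0.0.1,registry.mirror.internal.example"
```

After editing, run `systemctl daemon-reload && systemctl restart docker` for the daemon to pick it up.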

Security note: pulling unsigned or stale images while an agent has repository write access is a risk routing cannot solve. Split rules improve availability; they do not replace image pinning and digest verification.

Bandwidth vs policy: Sending multi-gigabyte layer pulls through the same congested exit as interactive API calls can starve latency-sensitive streams. Consider separate policy groups or DIRECT for registry traffic after you measure.

7 Copy-pastable Mihomo-style rules (normalize group names)

The snippet below is intentionally conservative: suffix match rows first, GEOIP or MATCH later. Rename AI to your outbound group. Add or remove lines after you inspect live connections for your tenant and auth method.

rules: excerpt (YAML)
# npm / Node ecosystem
- DOMAIN-SUFFIX,npmjs.org,AI
- DOMAIN-SUFFIX,registry.npmjs.org,AI

# OpenAI / Codex (validate against your logs)
- DOMAIN-SUFFIX,openai.com,AI
- DOMAIN-SUFFIX,api.openai.com,AI
- DOMAIN-SUFFIX,platform.openai.com,AI
- DOMAIN-SUFFIX,chatgpt.com,AI

# Optional: GitHub when installing Codex or MCP from releases
- DOMAIN-SUFFIX,github.com,AI
- DOMAIN-SUFFIX,objects.githubusercontent.com,AI

# Container registries (tune DIRECT vs AI per uplink)
- DOMAIN-SUFFIX,docker.io,AI
- DOMAIN-SUFFIX,registry-1.docker.io,AI
- DOMAIN-SUFFIX,auth.docker.io,AI
- DOMAIN-SUFFIX,ghcr.io,AI

For large teams, move volatile lists into a rule provider and track changes in git. Keep personal “AI domain” lists separate from geo lists so a provider update does not silently reorder critical rows. Mihomo’s behavior is predictable when you respect match order; chaos usually means someone merged two opinionated files without a shared style guide.
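One way to structure that provider split—the URL and path below are placeholders for your own git-backed export:

```yaml
# Sketch: the volatile AI list lives in its own provider, tracked in git.
rule-providers:
  ai-domains:
    type: http
    behavior: classical          # matches the DOMAIN-SUFFIX rows used above
    format: yaml
    url: https://example.com/rules/ai-domains.yaml   # placeholder URL
    path: ./rule-providers/ai-domains.yaml
    interval: 86400
rules:
  - RULE-SET,ai-domains,AI       # keep above GEOIP / MATCH catch-alls
  - GEOIP,CN,DIRECT
  - MATCH,PROXY
```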

8 Coexisting with corporate VPNs and local Docker engines

Enterprise VPN clients often install a virtual adapter with a lower metric and push default-route capture. Mihomo TUN on the same workstation can fight for the same routes unless you use split tunneling features the VPN vendor exposes. The pragmatic pattern: let the VPN own corporate prefixes explicitly listed by IT, let Mihomo own everything else, and never assume “VPN connected” equals “all TCP flows identical.” Document which stack wins for api.openai.com; ambiguity there shows up as flaky logins first.

Docker adds a second parallel network namespace. On Linux, daemon-level proxy configuration differs from rootless Podman; on macOS/Windows, Docker Desktop’s internal Linux VM may not see host HTTP_PROXY unless propagated. If Codex runs on the host but spawns docker run helpers, test both paths: host-only codex invocation and containerized toolchains. When IT forbids TUN entirely, fall back to explicit mixed-port proxies and env exports inside both host and container entrypoints.
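The explicit-proxy fallback can be sketched as a few lines of shell. The 127.0.0.1:7890 address assumes Mihomo's common mixed-port default—read yours from the `mixed-port` key—and the host.docker.internal rewrite applies to Docker Desktop, where 127.0.0.1 inside a container is the container itself:

```shell
# Sketch: propagate explicit proxy settings when TUN is forbidden.
PROXY_URL="http://127.0.0.1:7890"   # assumed Mihomo mixed-port; check config.yaml
export HTTP_PROXY="$PROXY_URL" HTTPS_PROXY="$PROXY_URL" ALL_PROXY="$PROXY_URL"

# For containers on Docker Desktop, swap loopback for the host alias
# before passing the variables through with -e:
CONTAINER_PROXY="$(printf '%s' "$PROXY_URL" | sed 's/127\.0\.0\.1/host.docker.internal/')"
echo "docker run -e HTTPS_PROXY=$CONTAINER_PROXY -e HTTP_PROXY=$CONTAINER_PROXY ..."
```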

9 DNS, TUN, FakeIP, and why Codex amplifies small mistakes

Node, npm, and Go-based CLIs often ship with their own DNS resolution quirks. If Mihomo uses FakeIP while a subprocess still resolves through systemd-resolved or corporate DoH, you can get “certificate name mismatch” errors that are actually two different answers for the same name. Align DNS: either route resolver traffic through Mihomo consistently or exclude sensitive domains from FakeIP. The Meta core DNS leak prevention article explains the trade-offs in core vocabulary that still maps to Mihomo.
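Excluding sensitive names from FakeIP is a one-stanza change; which entries you actually need depends on your resolver layout, so the zone below is purely illustrative:

```yaml
# Sketch: names listed in fake-ip-filter resolve to real addresses,
# sidestepping the two-answers-for-one-name mismatch described above.
dns:
  enable: true
  enhanced-mode: fake-ip
  fake-ip-filter:
    - "*.lan"
    - "+.internal.example"   # hypothetical corporate zone -- use your own
```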

TUN mode simplifies life because kernels deliver packets to Mihomo without relying on each CLI respecting env vars—but TUN interacts with VPN routes and Docker bridges. After any change, run a three-step smoke test: npm ping-equivalent (npm view), codex login dry run on a throwaway account, and a one-line model call. Capture PCAP or Mihomo logs only if those three disagree.
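The three-step smoke test can be written down as a reviewable plan. The codex subcommands below are placeholders—confirm the exact invocations with `codex --help` on your installed version; this script only prints the plan, it executes nothing against the network:

```shell
# Sketch: the post-change smoke test as a printed checklist.
# Step commands are illustrative placeholders, not verified CLI syntax.
plan="$(cat <<'EOF'
step 1: npm view @openai/codex version
step 2: codex login
step 3: codex exec "say hello in one word"
EOF
)"
echo "$plan"
```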

TLS inspection: If a corporate proxy terminates TLS with a private root, install the CA via official mechanisms (CODEX_CA_CERTIFICATE) instead of disabling verification in random tools.

10 Verification checklist before you blame OpenAI

  • Install path: Confirm npm can fetch metadata and tarballs with Mihomo logging enabled; note any unexpected CDN SNIs.
  • Auth path: Complete login while watching for blocked IdP or ChatGPT domains; retry with a minimal ruleset if needed to isolate policy.
  • API path: Run a short non-destructive command and verify stable TCP to api.openai.com (or your configured provider) without RST storms.
  • MCP optional: If you use MCP, repeat installs with MCP disabled to see whether failures move.
  • Containers optional: Pull a tiny public image while Codex idle; compare throughput to host curl through the same proxy group.
  • DNS: Compare dig answers with Mihomo DNS disabled versus enabled for one test domain.

11 Wrap-up

The OpenAI Codex CLI is a convenient narrative hook for 2026 because it compresses everything developers already struggled with—npm registry reliability, OpenAI API auth, long-lived streaming sessions, optional MCP installs, and occasional Docker Hub or GHCR traffic—into one terminal-shaped workflow. Clash and Mihomo handle that workflow well when you respect ordering: explicit DOMAIN-SUFFIX rows and curated rule provider data above blunt catch-alls, DNS aligned with your FakeIP choices, and realistic coexistence plans for corporate VPN and Docker Desktop.

Compared with dumping every OpenAI hostname into a single keyword rule, the staged approach here stays maintainable when endpoints shift and keeps domestic or on-net traffic fast. Pair this article with the broader OpenAI domain guide and the MCP tooling guide so each layer stays focused.

When you want a polished desktop client with TUN, readable logs, and subscription ergonomics—without treating upstream release pages as the default installer path—start from this site’s download page, apply the checklist above, and keep iterating from observed SNIs rather than folklore.

→ Download Clash for free and experience the difference

Tags: OpenAI Codex CLI · @openai/codex · npm registry · OpenAI API · Docker Hub · Mihomo · split routing

Clash Verge Rev

Next-gen Clash client · Free and open source

One profile for npm installs, OpenAI API calls, and optional Docker registry pulls—so Codex CLI sessions, MCP servers, and the rest of your stack share one auditable split-routing policy.

TUN full traffic takeover · Mihomo high-performance core · Precise rule routing · DNS leak helpers · Multi-subscription management
