How to Use ChatGPT in OpenClaw (and Reduce Your Cost)

April 25, 2026 · 3 min read

If you're already paying for ChatGPT, you might be paying twice without realizing it.

A lot of builders run OpenClaw with token-based API billing while also keeping a ChatGPT subscription active. The smarter move? Connect OpenClaw to ChatGPT Codex via OAuth and use your subscription-backed access first, with fallbacks only when needed.

Why this setup helps reduce cost

  • You avoid defaulting to per-token API spend for day-to-day coding tasks.
  • You keep your existing OpenClaw channels, memory, and cron jobs intact.
  • You can set fallback models only for overflow scenarios (quota/rate limit windows).

Prerequisites

  • OpenClaw installed and running
  • Active ChatGPT subscription (Plus/Pro/Team)
  • Terminal access (SSH if hosted remotely)

Step 1 - Start onboarding with Codex OAuth

openclaw onboard --auth-choice openai-codex

When prompted with the risk acknowledgment, continue. It's informing you about potential behavior differences between providers, not a failure condition.


Step 2 - Keep your current config (critical)

When the wizard asks how to handle existing values, choose:

  • Use existing values

Do not choose reset unless you intentionally want to wipe channels/memory/cron settings.

Step 3 - Complete OAuth correctly

  • Copy the OAuth URL printed in terminal
  • Open it in your browser
  • Sign in to your OpenAI/ChatGPT account and authorize
  • After redirect to localhost:1455, copy the full URL from your browser bar (including ?code=...)
  • Paste it back into your terminal

That callback URL is what finalizes token exchange for OpenClaw.
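If you're scripting around this flow or debugging what was captured, the ?code= value can be pulled out of the callback URL with plain shell parameter expansion. The URL below is a made-up example; the real one will match what your browser shows:

```shell
# Hypothetical callback URL copied from the browser bar after authorizing.
callback_url="http://localhost:1455/callback?code=abc123&state=xyz"

# Strip everything up to and including "code=", then drop any trailing params.
code="${callback_url#*code=}"
code="${code%%&*}"

echo "$code"   # abc123
```

Note that the onboarding wizard expects the full URL pasted back, so this is only a debugging aid, not a replacement for the paste step.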

Step 4 - Set Codex as primary model

openclaw models set openai-codex/gpt-5.3-codex

Then verify:

openclaw models status --plain

You should see openai-codex/gpt-5.3-codex listed as active, with a healthy token status.
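If you want to verify this from a script (say, a health-check cron job), a minimal sketch is to grep the status output for the expected primary model. The status line below is simulated, and its format is an assumption; in a real script it would come from `openclaw models status --plain` via command substitution, and the match should be adjusted to whatever your version actually prints:

```shell
# Simulated status line standing in for `openclaw models status --plain`.
status_output="openai-codex/gpt-5.3-codex  active  token: healthy"

# Succeed only if the expected primary model appears in the output.
if printf '%s\n' "$status_output" | grep -q "openai-codex/gpt-5.3-codex"; then
  echo "primary model OK"
else
  echo "primary model missing" >&2
fi
```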

Step 5 - Add fallback for uninterrupted usage

If you hit subscription quota windows, fallback models keep automation alive:

openclaw models fallbacks add openrouter/google/gemini-3-flash-preview

This hybrid strategy, subscription-backed access as the primary with a metered fallback reserved for overflow, is typically where the best cost/performance ratio lives.
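The routing behavior is easy to picture: the primary handles everything until it fails (quota window, rate limit), and only then does a fallback take over. A minimal sketch of that fall-through pattern, with simulated providers standing in for real models (this illustrates the pattern, not OpenClaw internals):

```shell
# Simulated providers: the primary "fails" (as in a quota window),
# so the request falls through to the fallback.
try_primary()  { return 1; }                      # non-zero exit = unavailable
try_fallback() { echo "handled-by-fallback"; }

route_request() {
  # || short-circuits: the fallback runs only when the primary fails.
  try_primary || try_fallback
}

route_request   # prints "handled-by-fallback"
```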

Troubleshooting (fast fixes)

OAuth opens but authorization fails

Revoke OpenClaw from your OpenAI connected apps, then rerun onboarding:

openclaw onboard --auth-choice openai-codex

Model appears unavailable after auth

Check model catalog:

openclaw models list --all

If needed, clear stale fallbacks and set primary again:

openclaw models fallbacks clear
openclaw models set openai-codex/gpt-5.3-codex

Token expired / stopped working

Re-authenticate:

openclaw models auth login --provider openai-codex

Important operational rule

Don't ask an active OpenClaw agent session to reconfigure OpenClaw itself (onboarding/model/auth changes). Run those commands in a separate terminal to avoid circular dependency failures.

Final take

If your goal is to reduce OpenClaw cost without sacrificing output quality, this is one of the highest-leverage config changes you can make: subscription-backed Codex as primary plus smart fallback strategy for edge cases.
