•OpenAI Codex is transitioning from per-message pricing to a more granular API token usage model.
•The new model charges based on input tokens, cached input tokens, and output tokens, offering greater transparency into credit consumption.
•Different Codex models (e.g., GPT-5.4, GPT-5.4-Mini) have varying credit rates per million tokens.
•Existing Plus/Pro/Edu and Enterprise/Edu customers will temporarily remain on a legacy per-message rate card until fully migrated.
•Developers can monitor token usage in Codex settings and should optimize token consumption to manage costs, with average costs estimated at $100-$200 per developer per month.
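The billing model described above (separate rates for input, cached input, and output tokens, expressed as credits per million tokens) can be sketched as a small cost estimator. Note that the function name and the rate values below are hypothetical placeholders for illustration, not OpenAI's published rates:

```python
def estimate_credits(input_tokens: int, cached_input_tokens: int,
                     output_tokens: int,
                     rate_input: float, rate_cached: float,
                     rate_output: float) -> float:
    """Estimate credits consumed, given per-million-token credit rates.

    Rates are hypothetical: actual credit rates vary by Codex model
    (e.g., GPT-5.4 vs. GPT-5.4-Mini) and should be taken from the
    official rate card.
    """
    return (input_tokens * rate_input
            + cached_input_tokens * rate_cached
            + output_tokens * rate_output) / 1_000_000


# Example: 2M input, 1M cached input, 0.5M output tokens with
# hypothetical rates of 4, 1, and 12 credits per million tokens.
credits = estimate_credits(2_000_000, 1_000_000, 500_000, 4, 1, 12)
print(credits)  # 8 + 1 + 6 = 15.0 credits
```

Because cached input tokens are typically billed at a lower rate than fresh input, structuring prompts to maximize cache hits is one direct lever for reducing credit consumption.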