dcramer (Member) commented Jan 7, 2026

Summary

Adds OpenAI API usage tracking following the existing provider pattern (Anthropic, Cursor). This integrates with OpenAI's Usage API to track token consumption across GPT-4o, o1, o3-mini, and Codex models.

New files:

  • src/lib/sync/openai.ts - Main sync module with syncOpenAIUsage(), syncOpenAICron(), backfillOpenAIUsage()
  • src/lib/sync/openai-mappings.ts - User ID to email mapping via /v1/organization/users
  • src/app/api/cron/sync-openai/route.ts - Daily cron endpoint (7 AM UTC)
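The mappings module's approach can be sketched roughly as follows. The `/v1/organization/users` endpoint is OpenAI's real Administration API endpoint, but the helper names, response field names, and paging details below are assumptions for illustration, not the PR's actual code:

```typescript
// Hypothetical sketch of openai-mappings.ts: resolve OpenAI org user IDs to
// emails. toEmailMap is a pure helper; fetchUserEmailMap pages through
// /v1/organization/users (response fields like has_more/last_id are assumed).
interface OrgUser {
  id: string;
  email: string;
}

// Pure transform: one page of users -> (id -> email) entries.
function toEmailMap(users: OrgUser[]): Map<string, string> {
  return new Map(users.map((u) => [u.id, u.email]));
}

async function fetchUserEmailMap(adminKey: string): Promise<Map<string, string>> {
  const map = new Map<string, string>();
  let after: string | undefined;
  do {
    const url = new URL('https://api.openai.com/v1/organization/users');
    url.searchParams.set('limit', '100');
    if (after) url.searchParams.set('after', after);
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${adminKey}` },
    });
    if (!res.ok) throw new Error(`users API error: ${res.status}`);
    const body = await res.json();
    for (const [id, email] of toEmailMap(body.data)) map.set(id, email);
    after = body.has_more ? body.last_id : undefined;
  } while (after);
  return map;
}
```

Keeping the page-to-map transform pure makes the mapping logic testable without hitting the API.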

Modified:

  • Added OpenAI tool config (green theme)
  • Added OpenAI model pricing (GPT-4o, o1, o3-mini, codex-mini, etc.)
  • Added OpenAI model name normalization
  • Updated CLI with openai:status, sync openai, backfill openai commands
  • Updated status endpoint to show OpenAI sync state

Environment variable required:

OPENAI_ADMIN_KEY=sk-admin-...

From: https://platform.openai.com/settings/organization/admin-keys
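With the admin key in place, the daily sync presumably pages through the Usage API. A rough sketch (the `/v1/organization/usage/completions` endpoint is OpenAI's published Usage API; the query parameters and response fields such as `next_page` are assumptions, not the PR's actual code):

```typescript
// Hypothetical sketch of the sync's Usage API call. buildUsageUrl is pure;
// fetchDailyUsage pages until has_more is false. Exact response fields
// (data, has_more, next_page) are assumptions.
function buildUsageUrl(startTime: number, page?: string): string {
  const url = new URL('https://api.openai.com/v1/organization/usage/completions');
  url.searchParams.set('start_time', String(startTime)); // Unix seconds
  url.searchParams.set('bucket_width', '1d');            // one bucket per day
  url.searchParams.set('group_by', 'model');             // per-model breakdown
  if (page) url.searchParams.set('page', page);
  return url.toString();
}

async function fetchDailyUsage(adminKey: string, startTime: number): Promise<unknown[]> {
  const buckets: unknown[] = [];
  let page: string | undefined;
  do {
    const res = await fetch(buildUsageUrl(startTime, page), {
      headers: { Authorization: `Bearer ${adminKey}` },
    });
    if (!res.ok) throw new Error(`Usage API error: ${res.status}`);
    const body = await res.json();
    buckets.push(...body.data);
    page = body.has_more ? body.next_page : undefined;
  } while (page);
  return buckets;
}
```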

⚠️ Untested

This implementation has not been tested as Sentry does not currently have an OpenAI Enterprise plan with admin API access. The code follows the same patterns as the working Anthropic integration, but real-world testing is needed.

ChatGPT Team Plan Limitations

During research, we discovered significant limitations for organizations on ChatGPT Team plans (vs Enterprise):

| Feature | Team | Enterprise |
| --- | --- | --- |
| Usage API (`/v1/organization/usage/*`) | ❌ | ✅ |
| User Analytics dashboard | ❌ | ✅ |
| CSV export of usage | ❌ | ✅ |
| Compliance API | ❌ | ✅ |

This means that if your team uses Codex CLI authenticated via ChatGPT accounts (not API keys), usage tracking is not available on Team plans. Options:

  1. Upgrade to Enterprise
  2. Have team members use API keys instead of ChatGPT auth
  3. Parse local Codex CLI logs (~/.codex/sessions/), though this requires manual collection from each user

Test plan

  • Set OPENAI_ADMIN_KEY environment variable
  • Run npm run cli openai:status to verify connection
  • Run npm run cli sync openai --days 7 to sync recent data
  • Verify data appears in dashboard with green "OpenAI" tool indicator
  • Test backfill: npm run cli backfill openai --from 2024-01-01 --to 2025-01-01
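The backfill step above presumably walks the `--from`/`--to` range one day at a time. A minimal sketch of that chunking (the function name and approach are assumptions, not the PR's actual code):

```typescript
// Hypothetical sketch: expand a --from/--to date range (YYYY-MM-DD, inclusive)
// into per-day Unix timestamps in seconds, suitable for the Usage API's
// start_time parameter.
function dayRange(from: string, to: string): number[] {
  const out: number[] = [];
  const end = Date.parse(`${to}T00:00:00Z`);
  // 86_400_000 ms = one day; iterate UTC midnights.
  for (let t = Date.parse(`${from}T00:00:00Z`); t <= end; t += 86_400_000) {
    out.push(t / 1000);
  }
  return out;
}
```

Syncing each day independently also makes it easy to resume a partially failed backfill.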

🤖 Generated with Claude Code

vercel bot commented Jan 7, 2026

The latest updates on your projects.

| Project | Deployment | Review | Updated (UTC) |
| --- | --- | --- | --- |
| abacus | Ready | Preview, Comment | Jan 7, 2026 6:03pm |

Comment on lines +39 to 42

```typescript
'o3-mini': { input: 1.1, output: 4.4 },
// OpenAI Codex models
'codex-mini': { input: 1.5, output: 6 },
};
```
Bug: The calculateCost function uses substring matching that causes it to select the wrong pricing for model variants like 'gpt-4o-mini', as it matches 'gpt-4o' first.
Severity: CRITICAL | Confidence: High

🔍 Detailed Analysis

The calculateCost function determines model pricing by finding the first key in MODEL_PRICING that is a substring of the input model name. Due to the insertion order of keys, when a model like 'gpt-4o-mini' is processed, it incorrectly matches with the 'gpt-4o' key first. This results in applying the pricing for 'gpt-4o' to 'gpt-4o-mini', leading to significant cost overestimation. The same issue affects other model pairs like 'o1' vs 'o1-mini' and 'gpt-4' vs 'gpt-4-turbo', causing incorrect financial tracking.

💡 Suggested Fix

Modify the calculateCost function to prioritize exact matches before falling back to substring matching. Alternatively, reorder the MODEL_PRICING object to place longer, more specific model names (e.g., 'gpt-4o-mini') before their shorter, more general counterparts (e.g., 'gpt-4o'). An exact match is the safest approach.
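The suggested fix could look like this sketch. The `'o3-mini'` and `'codex-mini'` prices are copied from the snippet above; the `'gpt-4o'` and `'gpt-4o-mini'` entries and the function shape are assumptions for illustration, not the PR's actual code:

```typescript
// Hypothetical exact-match-first pricing lookup. Prices are USD per 1M tokens.
const MODEL_PRICING: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 2.5, output: 10 },        // assumed entry
  'gpt-4o-mini': { input: 0.15, output: 0.6 }, // assumed entry
  'o3-mini': { input: 1.1, output: 4.4 },
  'codex-mini': { input: 1.5, output: 6 },
};

function calculateCost(model: string, inputTokens: number, outputTokens: number): number {
  // 1. Exact match wins outright.
  let pricing = MODEL_PRICING[model];
  if (!pricing) {
    // 2. Fall back to the LONGEST matching prefix, so a dated variant like
    //    'gpt-4o-mini-2024-07-18' resolves to 'gpt-4o-mini', not 'gpt-4o'.
    const key = Object.keys(MODEL_PRICING)
      .filter((k) => model.startsWith(k))
      .sort((a, b) => b.length - a.length)[0];
    if (!key) return 0; // unknown model: no pricing
    pricing = MODEL_PRICING[key];
  }
  return (inputTokens * pricing.input + outputTokens * pricing.output) / 1_000_000;
}
```

Sorting candidate prefixes by length makes the lookup order-independent, so the fix survives future reordering of `MODEL_PRICING`.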

🤖 Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.

Location: src/lib/db.ts#L39-L42

Potential issue: The `calculateCost` function determines model pricing by finding the
first key in `MODEL_PRICING` that is a substring of the input model name. Due to the
insertion order of keys, when a model like `'gpt-4o-mini'` is processed, it incorrectly
matches with the `'gpt-4o'` key first. This results in applying the pricing for
`'gpt-4o'` to `'gpt-4o-mini'`, leading to significant cost overestimation. The same
issue affects other model pairs like `'o1'` vs `'o1-mini'` and `'gpt-4'` vs
`'gpt-4-turbo'`, causing incorrect financial tracking.

