If anyone knows someone who is tier 5 in OpenAI (@royerloic maybe?), they could benchmark the new o1 model. I am just tier 3 and have to wait...
https://x.com/OpenAI/status/1834278218888872042
https://platform.openai.com/docs/models/o1
It seems that this model has hidden reasoning tokens that you still get billed for (see the note in the docs here: https://platform.openai.com/docs/guides/reasoning/how-reasoning-works), which may explain why it's limited to tier 5 users 😃 This could become an expensive experiment.
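For reference, here's a minimal sketch of how one could inspect the billed reasoning tokens with the OpenAI Python SDK. The field names (`completion_tokens_details.reasoning_tokens`) are taken from the linked reasoning guide and may differ depending on SDK version, so treat this as an assumption rather than a guaranteed API surface:

```python
# Sketch: checking how many hidden reasoning tokens a single o1 call billed.
# Assumes a recent openai SDK that exposes usage.completion_tokens_details.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "Write a Python function that sums a list of numbers."}],
)

usage = response.usage
print("prompt tokens:    ", usage.prompt_tokens)
print("completion tokens:", usage.completion_tokens)

# Hidden reasoning tokens are counted (and billed) as output tokens,
# even though they never appear in the returned message.
details = getattr(usage, "completion_tokens_details", None)
if details is not None:
    print("reasoning tokens: ", details.reasoning_tokens)
```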
Yeah, I know. I think Claude does something similar. That's why I'm curious whether it's really so much better than the models we have tested so far.
Ok, I have access now. Just FYI: my first 12 prompts cost $6.68 on o1-preview, so running the entire benchmark would cost about $300. (updated)
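Rough back-of-the-envelope extrapolation behind that estimate (the total prompt count is an assumption derived from the ~$300 figure, not a quoted number):

```python
# Extrapolating total benchmark cost from the first sampled prompts.
cost_sampled = 6.68   # USD for the first 12 prompts on o1-preview (from the comment above)
n_sampled = 12
n_total = 540         # assumed total number of benchmark prompts, back-derived from ~$300

cost_per_prompt = cost_sampled / n_sampled        # ~ $0.56 per prompt
estimated_total = cost_per_prompt * n_total       # ~ $300
print(f"~${cost_per_prompt:.2f}/prompt, ~${estimated_total:.0f} total")
```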