support batch processing #62
base: main
Conversation
Can you document it in both README and README-CN?
The same problem as #2. I wonder if it may be a problem for some users.

After testing, it looks like a multiple-keys problem.
Thanks for the reply! I have not tested it with multiple keys yet; I only have one key in my global env. Just ran a benchmarking test again to get more precise data:
I will update the README files later today.
If you want to use async I/O, asyncio.sleep is better than time.sleep. And most users still have the rate limit, so that's not good news.
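For illustration, a minimal sketch of the difference being pointed out: time.sleep blocks the whole event loop, while asyncio.sleep only suspends the current coroutine, so other translation requests keep running. The function and batch names here are hypothetical, not from the PR's code.

```python
import asyncio


async def translate_with_backoff(text: str, delay: float = 1.0) -> str:
    # Awaiting asyncio.sleep suspends only this coroutine; other
    # in-flight requests keep running. time.sleep(delay) here would
    # block the entire event loop for `delay` seconds.
    await asyncio.sleep(delay)
    return text.upper()  # placeholder for the real OpenAI API call


async def main() -> None:
    batch = ["paragraph one", "paragraph two", "paragraph three"]
    # The three waits overlap instead of running back to back.
    results = await asyncio.gather(*(translate_with_backoff(t) for t in batch))
    print(results)


asyncio.run(main())
```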
@yihong0618 I tested with two API keys in my global env variable, but did not hit the issue in your screenshot. Could you verify your API keys are valid?
Sounds good. Will update this.
I'm a bit confused here. I would guess that every user with an API key gets the same rate limit from OpenAI. Did I miss anything?
Yes, one of my keys has a problem...
I did not change the error-handling part of the code. But I think I got your point after reading #14: skip the malformed API key automatically. Is that correct?
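A minimal sketch of what "skip the malformed key automatically" could look like, assuming keys are tried in order and an authentication failure moves on to the next one; the `probe` helper and exception handling are assumptions, not the PR's actual code.

```python
def pick_working_key(keys, probe):
    """Return the first API key that passes a probe call, skipping bad ones.

    `probe` is a hypothetical callable that raises on an invalid key,
    e.g. a cheap authenticated request against the OpenAI API.
    """
    for key in keys:
        try:
            probe(key)
            return key
        except Exception as exc:  # narrow to the SDK's auth error in real code
            print(f"Skipping malformed key {key[:8]}...: {exc}")
    raise RuntimeError("no valid API key found")
```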
@yihong0618 I added a lock for
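The comment does not say which object the lock guards, so the sketch below only illustrates the general pattern: a threading.Lock serializing updates to state shared by concurrent batch workers. All names are hypothetical.

```python
import threading

translated = []  # results shared by concurrent batch workers
translated_lock = threading.Lock()


def record_result(index: int, text: str) -> None:
    # Serialize updates to the shared list: without the lock, concurrent
    # workers could interleave appends and corrupt ordering bookkeeping.
    with translated_lock:
        translated.append((index, text))
```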
Cool, will test tonight (timezone +8).
Feedback

Summary

I merged #62 into the main branch on my fork at https://github.com/GOWxx/bilingual_book_maker/tree/test_batch_processing, resolved the conflicts, and used the … However, the same code runs normally on macOS 13.1 but not on CentOS 8.6.

[Screenshot: normal operation on macOS]
[Screenshot: not working on CentOS]
Support batch processing for translation.

Tested with test_books/animal_farm.epub: around 4.56 minutes with --batch_size=20.

Adds a --batch_size option. Users can customize their own batch size according to their OpenAI API plan (details in OpenAI API rate limits).
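For reference, a hypothetical invocation of the new flag, modeled on the repo's README examples; the entry point and other flags are assumptions, so adjust to your setup:

```sh
python3 make_book.py --book_name test_books/animal_farm.epub --openai_key ${OPENAI_API_KEY} --batch_size 20
```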