While solving the 45th of 50 puzzles, I encountered this error:
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
During task with name 'run_planner' and id 'f8a6a714-998b-52bf-2ca9-ac45f9700b72'
Log for the puzzle that encountered the error:
>>>>SOLVING PUZZLE 45
Setting up Puzzle Words: ['subway', 'sonic', 'car', 'outback', 'checkers', 'day', 'sabbath', 'train', 'magic', 'sun', 'stripes', 'boat', 'plane', 'thunder', 'floyd', 'king']
ENTERED SETUP_PUZZLE
Generating vocabulary and embeddings for the words...this may take several seconds
Generating embeddings for the definitions
Storing vocabulary and embeddings in external database
ENTERED EMBEDVEC_RECOMMENDER
found count: 0, mistake_count: 0
(95, 95)
(95, 95)
candidate_lists size: 69
EMBEDVEC_RECOMMENDER: RECOMMENDED WORDS ['checkers', 'plane', 'stripes', 'train'] with connection This group is not connected by a single theme or concept, unlike the others which are connected by the theme of transportation.
Recommendation ['checkers', 'plane', 'stripes', 'train'] is incorrect
Changing the recommender from 'embedvec_recommender' to 'llm_recommender'
ENTERED LLM_RECOMMENDER
found count: 0, mistake_count: 1
attempt_count: 1
words_remaining: ['king', 'floyd', 'thunder', 'plane', 'boat', 'stripes', 'sun', 'magic', 'train', 'sabbath', 'day', 'checkers', 'outback', 'car', 'sonic', 'subway']
LLM_RECOMMENDER: RECOMMENDED WORDS ['floyd', 'king', 'sabbath', 'sonic'] with connection Bands
Recommendation ['floyd', 'king', 'sabbath', 'sonic'] is incorrect
ENTERED LLM_RECOMMENDER
found count: 0, mistake_count: 2
attempt_count: 1
words_remaining: ['subway', 'sonic', 'car', 'outback', 'checkers', 'day', 'sabbath', 'train', 'magic', 'sun', 'stripes', 'boat', 'plane', 'thunder', 'floyd', 'king']
LLM_RECOMMENDER: RECOMMENDED WORDS ['checkers', 'outback', 'sonic', 'subway'] with connection Restaurant chains
Restaurant chains ~ fast food chains: ['checkers', 'outback', 'sonic', 'subway'] == ['checkers', 'outback', 'sonic', 'subway']
Recommendation ['checkers', 'outback', 'sonic', 'subway'] is correct
ENTERED LLM_RECOMMENDER
found count: 1, mistake_count: 2
attempt_count: 1
words_remaining: ['king', 'floyd', 'thunder', 'plane', 'boat', 'stripes', 'sun', 'magic', 'train', 'sabbath', 'day', 'car']
LLM_RECOMMENDER: RECOMMENDED WORDS ['floyd', 'king', 'magic', 'sabbath'] with connection Bands or musicians
Recommendation ['floyd', 'king', 'magic', 'sabbath'] is incorrect
ENTERED LLM_RECOMMENDER
found count: 1, mistake_count: 3
attempt_count: 1
words_remaining: ['day', 'king', 'thunder', 'plane', 'boat', 'car', 'magic', 'sun', 'floyd', 'sabbath', 'train', 'stripes']
Traceback (most recent call last):
File "/workspaces/connection_solver/src/agent/app_embedvec_tester.py", line 163, in <module>
results = asyncio.run(main(None, check_one_solution))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/workspaces/connection_solver/src/agent/app_embedvec_tester.py", line 146, in main
result = await run_workflow(
^^^^^^^^^^^^^^^^^^^
File "/workspaces/connection_solver/src/agent/embedvec_tools.py", line 919, in run_workflow
async for chunk in workflow_graph.astream(None, runtime_config, stream_mode="values"):
File "/usr/local/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1867, in astream
async for _ in runner.atick(
File "/usr/local/lib/python3.11/site-packages/langgraph/pregel/runner.py", line 222, in atick
await arun_with_retry(
File "/usr/local/lib/python3.11/site-packages/langgraph/pregel/retry.py", line 138, in arun_with_retry
await task.proc.ainvoke(task.input, config)
File "/usr/local/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 453, in ainvoke
input = await asyncio.create_task(coro, context=context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 236, in ainvoke
ret = await asyncio.create_task(coro, context=context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 588, in run_in_executor
return await asyncio.get_running_loop().run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 579, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/connection_solver/src/agent/embedvec_tools.py", line 816, in run_planner
next_action = ask_llm_for_next_step(instructions, puzzle_state, model="gpt-3.5-turbo", temperature=0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/connection_solver/src/agent/embedvec_tools.py", line 293, in ask_llm_for_next_step
response = llm.invoke(conversation)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
self.generate_prompt(
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
raise e
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
self._generate_with_cache(
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 683, in _generate
response = self.root_client.beta.chat.completions.parse(**payload)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/resources/beta/chat/completions.py", line 156, in parse
return self._post(
^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1280, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 957, in request
return self._request(
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1046, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1095, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1046, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1095, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1061, in _request
raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
During task with name 'run_planner' and id 'f8a6a714-998b-52bf-2ca9-ac45f9700b72'
real 28m5.474s
user 1m13.752s
sys 0m4.699s
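One observation from the log: the run spent 28 minutes of wall-clock time before dying, in part because the openai client internally retried the 429 (`_retry_request` appears twice in the traceback) even though `insufficient_quota` is a billing condition, not transient throttling, so no amount of retrying can succeed. A minimal sketch of a wrapper that `run_planner` could use around `ask_llm_for_next_step` to retry only genuinely transient errors and fail fast on quota exhaustion (the helper names `invoke_with_backoff` and `quota_exhausted` are hypothetical, not part of this repo; the `.body` layout is an assumption based on the error text above):

```python
import time


def quota_exhausted(err):
    # Assumption: the openai v1 RateLimitError exposes the parsed JSON error
    # body (as printed in the traceback) on a `.body` attribute. A code of
    # 'insufficient_quota' means the account is out of credit, so retrying
    # is pointless.
    body = getattr(err, "body", None) or {}
    error = body.get("error", body) if isinstance(body, dict) else {}
    return isinstance(error, dict) and error.get("code") == "insufficient_quota"


def invoke_with_backoff(call, is_retryable, max_attempts=5, base_delay=1.0):
    """Call `call()`, retrying with exponential backoff only when
    `is_retryable(err)` is true; re-raise immediately otherwise."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as err:
            # Fail fast on non-retryable errors and on the final attempt.
            if not is_retryable(err) or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

With this, the planner call could become something like `invoke_with_backoff(lambda: ask_llm_for_next_step(...), is_retryable=lambda e: not quota_exhausted(e))`, so a true rate spike is retried but an exhausted quota surfaces immediately instead of after minutes of built-in retries.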