Response Relevancy: TypeError: object of type 'StringPromptValue' has no len() #1892
Comments
I believe the error you're encountering might be related to the LLM model you're using. Could you kindly let me know which model you're working with and how you're initializing it?
Hi @sahusiddharth, I have tried using LLMs provided by Hugging Face (specifically the meta-llama/Llama-3.2-3B-Instruct model) as well as Azure OpenAI (specifically the gpt-4o-mini model).

Azure OpenAI LLM model initialization

Hugging Face LLM model initialization

RAGAs evaluation invocation

Azure OpenAI model error traceback:

output = await scorer.single_turn_ascore(sample)

Hugging Face model error traceback:

output = await scorer.single_turn_ascore(sample)

Next steps

Where can I find the correct LLM usage and compatibility requirements to set up these evaluations? Let me know if there is anything else I can provide. Thank you!
I noticed that when using the Azure OpenAI model, it is not wrapped in the ragas LangchainLLMWrapper. You can modify your function like this to wrap it properly:

```python
import os

from langchain_openai.chat_models import AzureChatOpenAI


def evaluator_llm():
    """
    Load the Azure OpenAI LLM model for evaluation
    """
    # Set Azure OpenAI properties (these variables are assumed to be defined elsewhere)
    os.environ["OPENAI_API_TYPE"] = openai_api_type
    os.environ["OPENAI_API_VERSION"] = openai_api_version
    os.environ["OPENAI_API_KEY"] = openai_api_key
    os.environ["AZURE_OPENAI_ENDPOINT"] = azure_openai_endpoint

    # Initialize the Azure OpenAI model
    llm = AzureChatOpenAI(
        deployment_name=openai_deployment_name,
        model=openai_model,
        temperature=0.1,
        max_tokens=256
    )
    return llm
```

You can wrap it with the LangchainLLMWrapper like this:

```python
from langchain_openai.chat_models import AzureChatOpenAI
from ragas.llms import LangchainLLMWrapper

evaluator_llm = LangchainLLMWrapper(AzureChatOpenAI(model="gpt-4o-mini"))
```

Let me know if this works for you!
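To show where the wrapped objects end up, here is a minimal sketch of passing them to the ResponseRelevancy metric; it assumes ragas 0.2.x, and the sample strings are invented for illustration:

```python
from langchain_openai.chat_models import AzureChatOpenAI
from langchain_openai.embeddings import AzureOpenAIEmbeddings
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.metrics import ResponseRelevancy

# Wrap both the chat model and the embeddings before handing them to ragas
evaluator_llm = LangchainLLMWrapper(AzureChatOpenAI(model="gpt-4o-mini"))
evaluator_embeddings = LangchainEmbeddingsWrapper(AzureOpenAIEmbeddings())

# Invented example inputs
sample = SingleTurnSample(
    user_input="What is the capital of France?",
    response="The capital of France is Paris.",
)

scorer = ResponseRelevancy(llm=evaluator_llm, embeddings=evaluator_embeddings)
score = await scorer.single_turn_ascore(sample)  # call from inside an async function / event loop
print(score)
```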
Hi @sahusiddharth, thank you for your response and the code suggestions. I followed the documentation in the Ragas customize-models guide:

Code

```python
from langchain_openai.chat_models import AzureChatOpenAI
from langchain_openai.embeddings import AzureOpenAIEmbeddings
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.metrics import ResponseRelevancy

azure_configs = {
    "base_url": "https://test-poc-spriha-isha.openai.azure.com/",
    "model_deployment": "gpt-4o-mini-spriha",
    "model_name": "gpt-4o-mini",
    "embedding_deployment": "text-embedding-ada-002",
    "embedding_name": "text-embedding-ada-002",  # most likely
}

azure_llm = AzureChatOpenAI(
    openai_api_version="2024-05-01-preview",
    azure_endpoint=azure_configs["base_url"],
    azure_deployment=azure_configs["model_deployment"],
    model=azure_configs["model_name"],
    validate_base_url=False,
)

# init the embeddings for answer_relevancy, answer_correctness and answer_similarity
azure_embeddings = AzureOpenAIEmbeddings(
    openai_api_version="2023-05-15",
    azure_endpoint=azure_configs["base_url"],
    azure_deployment=azure_configs["embedding_deployment"],
    model=azure_configs["embedding_name"],
)

azure_llm = LangchainLLMWrapper(azure_llm)
azure_embeddings = LangchainEmbeddingsWrapper(azure_embeddings)


async def evaluate(context, response, query):
    """
    Run LLM response evaluations for several criteria
    """
    ## RAGAS
    sample = SingleTurnSample(
        user_input=query,
        response=response,
        # retrieved_contexts=context
    )
    # Response Relevancy
    scorer = ResponseRelevancy(llm=azure_llm, embeddings=azure_embeddings)
    output = await scorer.single_turn_ascore(sample)
    return output
```

Traceback error

I am getting the following error that I am working on debugging:

```
output = await scorer.single_turn_ascore(sample)
```

Thoughts

I am unsure why my Azure OpenAI resource is causing this issue, since it works for response generation but not for the evaluation section of the code. I was curious whether this evaluation section requires a new Azure OpenAI API key that cannot be reused anywhere else in the code?
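For reference, here is one way the evaluate() coroutine above could be driven from a script; the query, response, and context values are placeholders, not taken from the original code:

```python
import asyncio

# Placeholder inputs purely for illustration
query = "What warranty does the product carry?"
response = "The product carries a two-year limited warranty."
context = ["Warranty section of the product manual."]

score = asyncio.run(evaluate(context, response, query))
print("Response relevancy:", score)
```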
I don’t think a new Azure OpenAI API key is required for this evaluation section, and it should be reusable elsewhere in the code. However, while reviewing your error trace, I noticed the following message, which could be causing the issue:
Right, I have been trying to debug this error. It seems to suggest that there is an issue with resource deployment and usage; however, I am able to get the LLM to generate a response with this resource. It only throws the error when called for the evaluation, which is what is confusing me.
@ishachinniah-hds which version are you using?
Hi @jjmachan, I was just able to work out my issue with the Azure OpenAI resource: I had created the embedding model endpoint incorrectly. After updating it to use the same endpoint URL as the LLM, the issue was resolved. Thank you for the help and support.
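For anyone who hits the same problem, a minimal sketch of what the fix amounts to: point the embeddings client at the same Azure endpoint as the chat model. The endpoint URL and deployment names below are placeholders:

```python
from langchain_openai.chat_models import AzureChatOpenAI
from langchain_openai.embeddings import AzureOpenAIEmbeddings

# One endpoint shared by both the LLM and the embeddings
azure_endpoint = "https://<your-resource>.openai.azure.com/"

llm = AzureChatOpenAI(
    azure_endpoint=azure_endpoint,
    azure_deployment="gpt-4o-mini-deployment",  # placeholder deployment name
    openai_api_version="2024-05-01-preview",
)

embeddings = AzureOpenAIEmbeddings(
    azure_endpoint=azure_endpoint,  # same endpoint as the LLM
    azure_deployment="text-embedding-ada-002-deployment",  # placeholder deployment name
    openai_api_version="2023-05-15",
)
```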
I am also getting the same issue when running a sample SQL evaluation to check LLMSQLEquivalence.

Error during batch scoring: object of type 'StringPromptValue' has no len()

The sample is taken from:

```python
from ragas.metrics import LLMSQLEquivalence
from random import sample

async def process_samples(scorer, sample):
    ...

async def main(scorer, sample):
    ...

if __name__ == "__main__":
    ...
```

Please help.
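Building on the wrapper fix discussed earlier in this thread, here is a hedged sketch of running LLMSQLEquivalence with a wrapped evaluator LLM; the sample field names follow the ragas 0.2.x documentation, and the SQL strings and schema are invented:

```python
from langchain_openai.chat_models import AzureChatOpenAI
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import LLMSQLEquivalence

# Wrap the LangChain chat model so ragas receives a wrapped LLM, not a raw chat model
evaluator_llm = LangchainLLMWrapper(AzureChatOpenAI(model="gpt-4o-mini"))

# Invented SQL pair and schema for illustration
sample = SingleTurnSample(
    response="SELECT name FROM users WHERE active = 1;",
    reference="SELECT name FROM users WHERE active = TRUE;",
    reference_contexts=["Table users: id INT, name TEXT, active BOOLEAN"],
)

scorer = LLMSQLEquivalence(llm=evaluator_llm)
score = await scorer.single_turn_ascore(sample)  # call from inside an event loop
```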
[✓] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug
When running the ResponseRelevancy metric on my query (user_input) and generated LLM answer (response), I get a TypeError related to 'StringPromptValue', which I am not using anywhere.
Ragas version: 0.2.12
Python version: 3.9.20
Code to Reproduce

```python
async def evaluate(context, response, query):
    """
    Run LLM response evaluations for several criteria
    """
    eval_llm = evaluator_llm()
    eval_embeddings = HuggingFaceEmbeddings(model_name=embedding_model)
```
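Based on the wrapper fix that resolved this thread, here is a minimal sketch of how the evaluator LLM and the Hugging Face embeddings would be wrapped before building the metric; the HuggingFaceEmbeddings import path and the embedding_model value are assumptions:

```python
from langchain_huggingface import HuggingFaceEmbeddings  # import path may differ by version
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.metrics import ResponseRelevancy

embedding_model = "sentence-transformers/all-MiniLM-L6-v2"  # assumed model name

eval_llm = LangchainLLMWrapper(evaluator_llm())  # wrap the LangChain chat model for ragas
eval_embeddings = LangchainEmbeddingsWrapper(
    HuggingFaceEmbeddings(model_name=embedding_model)
)

scorer = ResponseRelevancy(llm=eval_llm, embeddings=eval_embeddings)
```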
Error trace

```
output = await scorer.single_turn_ascore(sample)
File "/opt/homebrew/anaconda3/envs/chatbot/lib/python3.9/site-packages/ragas/metrics/base.py", line 541, in single_turn_ascore
raise e
File "/opt/homebrew/anaconda3/envs/chatbot/lib/python3.9/site-packages/ragas/metrics/base.py", line 534, in single_turn_ascore
score = await asyncio.wait_for(
File "/opt/homebrew/anaconda3/envs/chatbot/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
return await fut
File "/opt/homebrew/anaconda3/envs/chatbot/lib/python3.9/site-packages/ragas/metrics/_answer_relevance.py", line 134, in _single_turn_ascore
return await self._ascore(row, callbacks)
File "/opt/homebrew/anaconda3/envs/chatbot/lib/python3.9/site-packages/ragas/metrics/_answer_relevance.py", line 148, in _ascore
responses = await asyncio.gather(*tasks)
File "/opt/homebrew/anaconda3/envs/chatbot/lib/python3.9/asyncio/tasks.py", line 328, in __wakeup
future.result()
File "/opt/homebrew/anaconda3/envs/chatbot/lib/python3.9/asyncio/tasks.py", line 256, in __step
result = coro.send(None)
File "/opt/homebrew/anaconda3/envs/chatbot/lib/python3.9/site-packages/ragas/prompt/pydantic_prompt.py", line 127, in generate
output_single = await self.generate_multiple(
File "/opt/homebrew/anaconda3/envs/chatbot/lib/python3.9/site-packages/ragas/prompt/pydantic_prompt.py", line 188, in generate_multiple
resp = await llm.generate(
File "/opt/homebrew/anaconda3/envs/chatbot/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 684, in generate
batch_size=len(messages),
TypeError: object of type 'StringPromptValue' has no len()
```
Expected behavior
I expect a Response Relevancy score to be returned by the end of the function call.
Additional context
For context, I also printed the object types of the response, query, and context variables I am passing in as arguments, and none of them is 'StringPromptValue':
Context Type: <class 'list'>
Response Type: <class 'str'>
Query Type: <class 'str'>
Thank you, let me know what additional information would be insightful.