
Please, I need an example that calls AgentChoice.Memory and a custom function in Python. #107

Open
JoseGuilherme1904 opened this issue Nov 3, 2024 · 2 comments


@JoseGuilherme1904

🚀 The feature, motivation and pitch

I need to call a custom tool called PesquisaCPF.
...
async def get_agent(
    self,
    agent_type: AgentChoice,
    agent_params: Optional[Dict[str, Any]] = None,
) -> str:
    if agent_type == AgentChoice.Memory:
        bank_ids = agent_params.get("bank_ids", [])
        tools = [
            AgentConfigToolMemoryToolDefinition(
                type="memory",
                max_chunks=5,
                max_tokens_in_context=2048,
                memory_bank_configs=[
                    AgentConfigToolMemoryToolDefinitionMemoryBankConfigUnionMember0(
                        type="vector",
                        bank_id=bank_id,
                    )
                    for bank_id in bank_ids
                ],
            ),
            PesquisaCPF().get_tool_definition(),
        ]

...
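For reference, `PesquisaCPF` above is assumed to be a client-side custom tool class exposing a `get_tool_definition()` method. A minimal sketch of what such a class might look like — the definition shape (name, description, parameters) and the `run` method are illustrative, not the exact llama-stack types:

```python
# Hypothetical sketch of the PesquisaCPF custom tool used above; the
# dict returned by get_tool_definition() mirrors a function-call tool
# definition but is not the exact llama-stack type.
class PesquisaCPF:
    def get_tool_definition(self) -> dict:
        return {
            "type": "function_call",
            "function_name": "PesquisaCPF",
            "description": "Look up a person by CPF (Brazilian taxpayer ID)",
            "parameters": {
                "cpf": {
                    "param_type": "str",
                    "description": "The CPF number to look up",
                    "required": True,
                },
            },
        }

    def run(self, cpf: str) -> str:
        # Placeholder lookup; a real implementation would query a
        # database or external service here.
        return f"No record found for CPF {cpf}"
```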

And the chat function:

async def chat(self, agent_choice, message, attachments) -> str:
    print("Called chat function")
    # print(self)
    assert (
        agent_choice in self.agents
    ), f"Agent of type {agent_choice} not initialized"
    
    agent_id = self.agents[agent_choice]

    messages = []
    # If it's the first turn, send the system message along with the user message
    if self.first_turn[agent_id]:
        if self.system_message[agent_id]:
            messages.append(
                UserMessage(content=self.system_message[agent_id], role="user")
            )
        self.first_turn[agent_id] = False

    session_id = self.sessions[agent_choice]
    atts = []
    if attachments is not None:
        for attachment in attachments:
            atts.append(
                Attachment(
                    content=data_url_from_file(attachment),
                    # hardcoded for now since mimetype is inferred from data_url
                    mime_type="text/plain",
                )
            )
    messages.append(UserMessage(role="user", content=message))
    generator = self.client.agents.turn.create(
        agent_id=self.agents[agent_choice],
        session_id=self.sessions[agent_choice],
        messages=messages,
        attachments=atts,
        stream=True,
    )
    
    turn = process_generator_response(generator)
    
    custom_tools = {"PesquisaCPF": PesquisaCPF2}
    inserted_context = ""
    for step in turn.steps:
        # FIXME: Update to use typed step types instead of strings
        if step.step_type == "memory_retrieval":
            # print("Queried memory retrieval")
            inserted_context = step.inserted_context
        if step.step_type == "tool_execution":
            print("Queried tool:")
            
            inserted_context = "\n".join([tr.content for tr in step.tool_responses])
            #print(inserted_context)
        if step.step_type == "inference":
            print("Queried inference")
            print(step)
            for tool_call in step.inference_model_response.tool_calls:
                tool_name = tool_call.tool_name
                arguments = tool_call.arguments
                if tool_name in custom_tools:
                    parameters = list(arguments.values())
                    tool_result = custom_tools[tool_name](*parameters).replace('\n', '').replace('  ', '')
                    # print(tool_result)
                    response_content = tool_result.replace('\n', '').replace('  ', '')
                    inserted_context = response_content
                    message = ToolResponseMessage(
                        call_id=tool_call.call_id,
                        tool_name=tool_call.tool_name,
                        content=response_content,
                        role="ipython",
                        type="search_result",
                    )
                    print(message)
                    
                    generator = self.client.agents.turn.create(
                        agent_id=self.agents[agent_choice],
                        session_id=self.sessions[agent_choice],
                        messages=[message],  # put `message` in a list if necessary
                        stream=True,
                    )

                    # Process the response without the `AgentConfigToolMemoryToolDefinition` configuration
                    turn = process_generator_response(generator)

                    # Update the content of `turn` with the processed response
                    if turn:
                        break

                    # print(result_messages)
                    # return turn.output_message.content, message
                    # break

    if turn is None:
        # Return an error message or default value if the turn_complete event was not found
        return "Error: turn_complete event was not found.", inserted_context
    else:
        return turn.output_message.content, inserted_context

INFO: 127.0.0.1:33602 - "POST /agents/turn/create HTTP/1.1" 200 OK
Batches: 100% 1/1 [00:00<00:00, 161.48it/s]
role='user' content='Pesquise o cpf: 23121313' context='Here are the retrieved documents for relevant context:\n=== START-RETRIEVED-CONTEXT ===\n\nid:c6b933c135764803990433b5b09990de; content:This is a live bank. It holds live context for this chat\n\n=== END-RETRIEVED-CONTEXT ===\n'
role='assistant' content='' stop_reason=<StopReason.end_of_turn: 'end_of_turn'> tool_calls=[ToolCall(call_id='d883b4a0-d19b-4aca-aab4-e5f6eccaf980', tool_name='PesquisaCPF', arguments={'cpf': '2131312332132'})]
Assistant:
INFO: 127.0.0.1:33604 - "POST /agents/turn/create HTTP/1.1" 200 OK
Batches: 100% 1/1 [00:00<00:00, 109.56it/s]
Traceback (most recent call last):
File "/home/guilherme/.local/lib/python3.10/site-packages/llama_stack/distribution/server/server.py", line 206, in sse_generator
async for item in await event_gen:
File "/home/guilherme/.local/lib/python3.10/site-packages/llama_stack/providers/impls/meta_reference/agents/agents.py", line 138, in _create_agent_turn_streaming
async for event in agent.create_and_execute_turn(request):
File "/home/guilherme/.local/lib/python3.10/site-packages/llama_stack/providers/impls/meta_reference/agents/agent_instance.py", line 179, in create_and_execute_turn
async for chunk in self.run(
File "/home/guilherme/.local/lib/python3.10/site-packages/llama_stack/providers/impls/meta_reference/agents/agent_instance.py", line 252, in run
async for res in self._run(
File "/home/guilherme/.local/lib/python3.10/site-packages/llama_stack/providers/impls/meta_reference/agents/agent_instance.py", line 383, in _run
last_message.context = "\n".join(rag_context)
File "/home/guilherme/.local/lib/python3.10/site-packages/pydantic/main.py", line 884, in __setattr__
raise ValueError(f'"{self.__class__.__name__}" object has no field "{name}"')
ValueError: "ToolResponseMessage" object has no field "context"
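The ValueError comes from pydantic's guarded `__setattr__`: the server's memory tool does `last_message.context = "\n".join(rag_context)`, but `ToolResponseMessage` declares no `context` field (the retrieved-context log line above shows that a user message does carry one). The behavior can be reproduced with any pydantic model standing in for `ToolResponseMessage`:

```python
from pydantic import BaseModel

# Generic stand-in model, not the actual llama-stack ToolResponseMessage.
class Message(BaseModel):
    role: str
    content: str

msg = Message(role="ipython", content="tool result")
try:
    # Assigning to an undeclared field raises, exactly like
    # last_message.context = ... in the traceback above.
    msg.context = "retrieved docs"
except ValueError as e:
    print(e)  # object has no field "context"
```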

The problem is that a second call to AgentConfigToolMemoryToolDefinition occurs; if only the PesquisaCPF tool is present, it works correctly.
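One possible workaround — a hypothetical sketch, not verified against this issue — is to keep the follow-up turn free of `ToolResponseMessage`: fold the tool output into a plain user message, since the logs above show that a user message does carry a `context` field and so survives the memory tool's `last_message.context` assignment. The helper name below is invented for illustration:

```python
# Hypothetical helper: wrap a client-side tool result as a plain user
# message dict, so the memory tool's `last_message.context = ...`
# assignment targets a message type that actually has a `context` field.
def tool_result_as_user_message(tool_name: str, result: str) -> dict:
    return {
        "role": "user",
        "content": f"Result of tool {tool_name}: {result}",
    }
```

The follow-up `turn.create` call would then pass `messages=[tool_result_as_user_message(tool_name, response_content)]` instead of the `ToolResponseMessage`.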

Can someone please help?
Guilherme

Alternatives

No response

Additional context

No response

@jageenshukla

@JoseGuilherme1904
Hello,

Regarding a custom function tooling example in Python, please see the short blog post below; I wrote it, and it includes a link to a working example.
https://medium.com/@jageenshukla/how-to-configure-agent-to-call-custom-tool-using-llama-stack-using-llama3-2-1b-instruct-fp16-2f8caf1b482f

I am learning llama-stack this holiday season; let me know if the above does not work for you.

@heyjustinai

heyjustinai commented Jan 13, 2025

Hi @JoseGuilherme1904, we have created notebook examples here for you to get started. This one also covers how to create a custom tool. You can also find examples on readthedocs.

We are constantly trying to improve our documentation and provide more relevant examples to help users like you get started with the Llama Stack. If anything is missing, don't hesitate to let us know.
