Flexible nested chat triggers #4051
Replies: 3 comments 1 reply
-
What you described sounds like a tool-use agent where the tool is the code runner. You can certainly build the agent in a way so that it only executes tools after some "pontificating". The v0.4 API will offer much more flexibility for nested chats.
-
Oh yeah, I am aware of that. Perhaps my example wasn't describing a unique enough scenario. Let's say I want to know which farmlands to invest in based on satellite images and historical weather conditions. I would have an agent to plan out investment strategies (Planner_Agent) speaking to an Executor_Agent that will create the automation to do the calculations based on those strategies. However, the agent that develops the strategy realises that its data is incomplete, something the developer did not know. Therefore, since the developer didn't know this, the agent workflow they created only has 2 agents, the Planner_Agent and the Executor_Agent. The developer does not know it yet, but the Executor_Agent will need a Data_Acquisition_Agent to go out and collect more data. I'm basically referring to a system that can identify its own limitations and write add-ons for itself. Also, is a v0.4 coming out? What version are we on right now?
-
What feature would you like to be added?
I see that the code that handles the activation of nested chats only accepts a Callable that takes a single parameter, the sender:
```python
elif isinstance(trigger, Callable):
    rst = trigger(sender)
    assert isinstance(rst, bool), f"trigger {trigger} must return a boolean value."
    return rst
```
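To illustrate the kind of flexibility I mean, here is a minimal sketch (not the actual AutoGen implementation; `match_trigger`, `run_requested`, and the `last_message` parameter are invented for this example) of how the dispatch above could also support triggers that inspect the latest message, while staying backward compatible with one-argument triggers:

```python
import inspect
from typing import Any, Callable, Optional

def match_trigger(trigger: Callable[..., bool], sender: Any,
                  last_message: Optional[dict] = None) -> bool:
    """Hypothetical generalization: call one-arg triggers with the sender
    only, and two-arg triggers with (sender, last_message)."""
    n_params = len(inspect.signature(trigger).parameters)
    rst = trigger(sender) if n_params == 1 else trigger(sender, last_message)
    assert isinstance(rst, bool), f"trigger {trigger} must return a boolean value."
    return rst

# A message-aware trigger: only start the nested chat when the user
# explicitly asks for the code to be run.
def run_requested(sender: Any, last_message: Optional[dict]) -> bool:
    return bool(last_message) and "run it" in last_message.get("content", "").lower()

print(match_trigger(lambda s: True, "user_proxy"))
print(match_trigger(run_requested, "user_proxy", {"content": "Please run it"}))
```

The key point is that the trigger's decision can depend on conversation content, not just on who the sender is.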
I want to give devs more flexibility to decide when to have Agents initiate nested chats. For example, the user proxy tells a code_writer agent that they want a script for something. The code_writer can write the script, but to test that script it needs a nested chat with a code_runner agent (without an LLM, of course) to verify the output and show the user. Sure, there is the option of a group chat where the conversation pattern is cyclic, as below:
User_Proxy > Code_Writer > Code_Tester: Rinse and Repeat
But end-users might want the experience to be more conversational, where they can converse with the Code_Writer a little more before sending it to do the work.
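As a toy sketch of that conversational experience (all names here are invented for illustration, not part of any AutoGen API): the handoff to the tester is gated behind an explicit user request, so the user can keep iterating with the writer until they opt in.

```python
def should_hand_off(last_user_message: str) -> bool:
    """Hypothetical gate: hand off to the tester only when the user asks."""
    return "test it" in last_user_message.lower()

conversation = [
    "Can you write a script that parses the weather CSV?",
    "Make it also skip malformed rows.",
    "Looks good, test it please.",
]

for msg in conversation:
    if should_hand_off(msg):
        print("Code_Writer -> Code_Tester: starting nested chat")
    else:
        print("Code_Writer: continuing conversation with the user")
```

Only the last message triggers the nested chat; the first two keep the user talking to the Code_Writer.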
The example above is something I personally faced because I wanted to leverage Autogen's pre-existing support for code execution and peer-to-peer conversations to build an intelligent hive that would take an instruction from the top and autonomously "run wild" to get it done.
On a personal note, I feel like "pre-setting" the conversation patterns ahead of runtime makes applications built with it feel more like a Zapier automation workflow instead of an intelligent app that truly makes use of the potential of LLMs. To me, it is comparable to giving someone a Jeep and only telling them to drive it on nice flat paved roads when a Jeep is already built to handle rough terrain.
Edit: 9 mins after writing this I remembered that there's a GPT that already has a code interpreter to solve this exact problem of testing out the code only when the user says so. My point is that I want more flexibility for when Agents decide to divert their attention to accomplish or delegate extra tasks.
Why is this needed?
All stated above