The simplest Agentic AI library, specialized in LLM function calling.

Don't compose a complicated agent graph or workflow; just deliver Swagger/OpenAPI documents or TypeScript class types to agentica. Then agentica will do everything through function calling.

Look at the demonstration below and feel how easy and powerful agentica is.
```typescript
import { Agentica } from "@agentica/core";
import typia from "typia";

const agent = new Agentica({
  controllers: [
    // functions from a Swagger/OpenAPI document
    await fetch(
      "https://shopping-be.wrtn.ai/editor/swagger.json",
    ).then(r => r.json()),
    // functions from TypeScript class types
    typia.llm.application<ShoppingCounselor>(),
    typia.llm.application<ShoppingPolicy>(),
    typia.llm.application<ShoppingSearchRag>(),
  ],
});
await agent.conversate("I wanna buy MacBook Pro");
```
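The class types handed to `typia.llm.application<T>()` are plain TypeScript classes: every public method becomes an LLM-callable function, and its JSDoc comments feed the generated function schema. A minimal sketch of what such a controller might look like (`ShoppingCounselor` here is illustrative; its method names, parameters, and data are assumptions, not the real service's API):

```typescript
/**
 * Hypothetical counselor controller. Each public method becomes an
 * LLM-callable function; the JSDoc text becomes its schema description.
 */
class ShoppingCounselor {
  /**
   * Recommend products within a budget.
   *
   * @param category Product category to search within
   * @param budget Maximum price in USD
   * @returns Names of matching products
   */
  public recommend(category: string, budget: number): string[] {
    // Toy in-memory catalog, for illustration only.
    const catalog: Record<string, Array<{ name: string; price: number }>> = {
      laptop: [
        { name: "MacBook Pro 14", price: 1999 },
        { name: "MacBook Air 13", price: 1099 },
      ],
    };
    return (catalog[category] ?? [])
      .filter((p) => p.price <= budget)
      .map((p) => p.name);
  }
}

const counselor = new ShoppingCounselor();
console.log(counselor.recommend("laptop", 1500)); // → [ 'MacBook Air 13' ]
```

Because the controller is an ordinary class, you can unit-test it directly, independent of any LLM.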
*Demonstration video of the Shopping AI Chatbot (`shopping-chat.mp4`)*
Detailed guide documents are in preparation. Until then, please refer to the README document of each module:
- Core Library
- Benchmark Program
- WebSocket RPC
In this README document, @agentica/core introduces its key concepts and principles, and demonstrates some examples.
However, these contents are not enough for newcomers to AI chatbot development. Many more guide documents and example projects are required for education. We have to guide backend developers to write proper definitions optimized for LLM function calling, and introduce the best way to implement multi-agent orchestration.
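One concrete aspect of "definitions optimized for LLM function calling" is documentation quality: the LLM only sees the generated schema, so every parameter needs a description precise enough to fill in correctly. A hedged sketch of the style that tends to work well (all names here are illustrative, not part of any real API):

```typescript
/**
 * Illustrative order-creation DTO. Descriptive JSDoc on every property
 * gives the LLM the context it needs to compose arguments correctly.
 */
interface IOrderCreate {
  /** Identifier of the product to purchase. */
  productId: string;
  /** Number of units to order; must be a positive integer. */
  quantity: number;
  /** Optional coupon code; omit when the customer has none. */
  coupon?: string;
}

/** Runtime guard mirroring the documented constraints. */
function validateOrder(input: IOrderCreate): boolean {
  return (
    input.productId.length > 0 &&
    Number.isInteger(input.quantity) &&
    input.quantity > 0
  );
}

console.log(validateOrder({ productId: "mbp-14", quantity: 1 })); // → true
console.log(validateOrder({ productId: "", quantity: 0 }));       // → false
```

Pairing documented constraints with a runtime guard like this keeps the schema and the actual behavior from drifting apart.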
We'll write such fully detailed guide documents by 2025-03-31, and will continuously release documents as they are completed along the way.
https://nestia.io/chat/playground
We developed the Swagger AI chatbot playground website a long time ago. However, the other part, obtaining function schemas from TypeScript class types, is not prepared yet. We'll build a TypeScript class type based playground website by embedding the TypeScript compiler (tsc). The new playground website will be published by 2025-03-15.
As we've concentrated on POC (Proof of Concept) development in the early stage, the internal agents composing @agentica/core are not cost-optimized yet. In particular, the selector agent repeatedly consumes too many LLM tokens. We'll optimize the selector agent with RAG (Retrieval Augmented Generation) techniques.
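The RAG idea for the selector can be sketched as: embed each function's description once, then retrieve only the top-k candidates most similar to the user's query, instead of sending every function schema to the LLM. A toy illustration with hand-made vectors (a real system would obtain them from an embedding model; this is not @agentica/core's actual implementation):

```typescript
// Toy RAG-style selection: rank function descriptions by cosine
// similarity of their (pre-computed) embeddings against the query's.
type Embedded = { name: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

function selectTopK(query: number[], fns: Embedded[], k: number): string[] {
  return [...fns]
    .sort((l, r) => cosine(query, r.vector) - cosine(query, l.vector))
    .slice(0, k)
    .map((f) => f.name);
}

// Hand-made 3-dim "embeddings" standing in for a real model's output.
const functions: Embedded[] = [
  { name: "searchProducts", vector: [0.9, 0.1, 0.0] },
  { name: "cancelOrder",    vector: [0.0, 0.9, 0.1] },
  { name: "trackShipment",  vector: [0.1, 0.2, 0.9] },
];
console.log(selectTopK([0.8, 0.2, 0.1], functions, 1)); // → [ 'searchProducts' ]
```

Only the retrieved candidates' schemas then need to be placed into the LLM context, which is where the token savings come from.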
Also, we will support dozens of useful add-on agents that can connect with @agentica/core through TypeScript class function calling. One of them is @wrtnlabs/hive, which optimizes the selector agent to reduce LLM costs dramatically. Others will include an OpenAI Vector Store handler and a Postgres based RAG engine.
With these add-on agents for @agentica/core, you can learn how to implement multi-agent orchestration through TypeScript class function calling, and understand how @agentica/core makes multi-agent system interaction super easy.
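The orchestration pattern above can be sketched in miniature: a sub-agent is just a class, so the orchestrator reaches it through an ordinary method call, the same mechanism class function calling exposes to the LLM. All names and data below are hypothetical, for illustration only:

```typescript
// Hypothetical sub-agent: from the orchestrator's point of view it is
// simply a class whose public methods can be exposed via function calling.
class PriceAgent {
  /** Look up the price of a product (toy data). */
  public price(product: string): number {
    const prices: Record<string, number> = { "MacBook Pro": 1999 };
    return prices[product] ?? -1;
  }
}

// Hypothetical orchestrator: delegates to sub-agents through plain
// method calls rather than a hand-wired agent graph.
class Orchestrator {
  constructor(private readonly pricing: PriceAgent = new PriceAgent()) {}

  /** Answer a purchase question by delegating to the price agent. */
  public answer(product: string): string {
    const p = this.pricing.price(product);
    return p < 0
      ? `Sorry, I could not find ${product}.`
      : `${product} costs $${p}.`;
  }
}

console.log(new Orchestrator().answer("MacBook Pro")); // → MacBook Pro costs $1999.
```

Because the composition is just classes calling classes, each agent can be developed and tested in isolation before being wired into the chat loop.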