
(v0.6.0) 【New Year's party】 The convergence of the OpenAI ecosystem and the MCP ecosystem.

@heshengtao released this 15 Jan 09:34 · 20 commits to main since this release

✨v0.6.0✨【New Year's party】

This release includes the following features:

  1. The MCP tool has been updated. You can modify the configuration in the 'mcp_config.json' file in the party project folder to connect to your desired MCP server. You can find configuration parameters for the various MCP servers you may want to add here: modelcontextprotocol/servers. This project's default configuration is the Everything server, a test MCP server used to verify that the tool works. Reference workflow: start_with_MCP.
    Developer note: the MCP tool node connects to the MCP server you have configured and converts the tools on that server into tools that LLMs can use directly. By configuring different local or cloud servers, you can experience all the LLM tools available in the world.
  2. A new browser tool node has been built on browser-use, which lets the LLM automatically carry out the browser tasks you assign.
    The nodes for loading files, loading folders, and loading web content, as well as all word-embedding-related nodes, have been upgraded. The file content you load now always includes the file name and paragraph index, and the load-folder node can filter the files to load via the related_characters parameter.
  3. A local-model speech-to-text tool has been added; in theory it is compatible with all ASR models on Hugging Face, for example openai/whisper-small and nyrahealth/CrisperWhisper.
    ASR and TTS nodes for Fish Audio have also been added; see the Fish Audio API documentation for usage instructions.
  4. Added the aisuite loader node, which is compatible with every API that aisuite supports, including: ["openai", "anthropic", "aws", "azure", "vertex", "huggingface"]. Example workflow: start_with_aisuite.
  5. A new category has been added: memory nodes, which manage your LLM conversation history. Memory nodes currently support three backends for the history: local JSON files, Redis, and SQL. By decoupling the conversation history from the LLM itself, you can use word-embedding models to compress and organize it, saving tokens and context-window space. Example workflow: External Memory.
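The 'mcp_config.json' file follows the server-launch convention used across the MCP ecosystem; a minimal sketch pointing at the Everything test server might look like the following (the exact keys are an assumption based on the standard MCP client configuration format, not copied from the party project):

```json
{
  "mcpServers": {
    "everything": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-everything"]
    }
  }
}
```

Adding another entry under "mcpServers" with its own command and args is how you would point the MCP tool node at a different local or cloud server.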

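The upgraded loading behavior in item 2 can be sketched as follows. This is my own illustration, not the project's actual node code: `load_file` and `load_folder` are hypothetical names, and the sketch assumes related_characters is a substring filter on file names.

```python
from pathlib import Path

def load_file(path):
    """Split a text file into paragraphs, tagging each chunk with the
    file name and a paragraph index (illustrative sketch only)."""
    text = Path(path).read_text(encoding="utf-8")
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return [
        {"file_name": Path(path).name, "paragraph_index": i, "content": p}
        for i, p in enumerate(paragraphs)
    ]

def load_folder(folder, related_characters=""):
    """Load every .txt file in a folder whose name contains
    related_characters (an empty string matches every file)."""
    chunks = []
    for path in sorted(Path(folder).glob("*.txt")):
        if related_characters in path.name:
            chunks.extend(load_file(path))
    return chunks
```

Because every chunk carries its file name and paragraph index, a downstream word-embedding node can cite exactly where a retrieved passage came from.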
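The decoupled memory design in item 5 can be sketched as a minimal local-JSON backend. The class and method names here are my own illustration (the actual memory nodes also support Redis and SQL backends):

```python
import json
from pathlib import Path

class JsonMemory:
    """Minimal local-JSON conversation store: the history lives in a
    file rather than inside the LLM node, so it can be trimmed or
    compressed independently of the model."""

    def __init__(self, path):
        self.path = Path(path)
        if not self.path.exists():
            self.path.write_text("[]", encoding="utf-8")

    def append(self, role, content):
        history = self.load()
        history.append({"role": role, "content": content})
        self.path.write_text(
            json.dumps(history, ensure_ascii=False), encoding="utf-8"
        )

    def load(self, last_n=None):
        history = json.loads(self.path.read_text(encoding="utf-8"))
        # Returning only the most recent turns is the simplest way to
        # save tokens and context-window space when the history is fed
        # back to the LLM.
        return history if last_n is None else history[-last_n:]
```

A word-embedding model could replace the `last_n` truncation with retrieval of only the turns relevant to the current query, which is the compression idea the release notes describe.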