Compare commits


23 Commits

Author SHA1 Message Date
Soulter c76b7ec387 Merge remote-tracking branch 'origin/master' into feat/memory 2025-11-21 20:23:41 +08:00
Soulter b7f3010d72 stage simple webui 2025-11-21 17:59:22 +08:00
Soulter fbbaf1cd08 delete(memory): remove memory module and its components 2025-11-21 17:34:33 +08:00
Soulter 9c8025acce stage 2025-11-21 17:25:55 +08:00
Soulter 98c5466b5d feat(chat): refactor chat component structure and add new features (#3701) 2025-11-20 17:30:51 +08:00
  - Introduced `ConversationSidebar.vue` for improved conversation management and sidebar functionality.
  - Enhanced `MessageList.vue` to handle loading states and improve message rendering.
  - Created new composables (`useConversations`, `useMessages`, `useMediaHandling`, `useRecording`) for better code organization and reusability.
  - Added loading indicators and improved the user experience during message processing.
  - Ensured backward compatibility and maintained existing functionality.
Soulter 6345ac6ff8 feat(chat): refactor chat component structure and add new features (#3701) 2025-11-20 17:29:27 +08:00 (same commit message body as 98c5466b5d)
Soulter 5bcd683012 delete: remove useConversations composable 2025-11-20 17:29:27 +08:00
Soulter eaa193c6c5 feat(chat): refactor chat component structure and add new features (#3701) 2025-11-20 17:29:27 +08:00 (same commit message body as 98c5466b5d)
Soulter 1bdcaa1318 delete: useConversations 2025-11-20 17:29:27 +08:00
Soulter 6b6c48354d feat(chat): refactor chat component structure and add new features (#3701) 2025-11-20 17:29:27 +08:00 (same commit message body as 98c5466b5d)
Soulter 774efb2fe0 refactor: update timestamp handling in session management and chat components 2025-11-20 17:29:27 +08:00
Soulter 3ec76636f9 refactor(sqlite): remove auto-generation of session_id in insert method 2025-11-20 17:29:26 +08:00
Soulter 283810d103 feat(chat): refactor chat component structure and add new features (#3701) 2025-11-20 17:29:26 +08:00 (same commit message body as 98c5466b5d)
Soulter 81a76bc8e5 fix: anyio.ClosedResourceError when calling mcp tools (#3700) 2025-11-20 17:29:26 +08:00
  * fix: anyio.ClosedResourceError when calling mcp tools; added a reconnect mechanism (fixes: 3676)
  * fix(mcp_client): implement thread-safe reconnection using asyncio.Lock
Soulter 788764be02 refactor: implement migration for WebChat sessions by creating PlatformSession records from platform_message_history 2025-11-20 17:29:26 +08:00
Soulter 802ab26934 refactor: update session handling by replacing conversation_id with session_id in chat routes and components 2025-11-20 17:29:26 +08:00
Soulter 6857c81a14 refactor: enhance PlatformSession migration by adding display_name from Conversations and improve session item styling 2025-11-20 17:29:26 +08:00
Soulter a6ed511a30 refactor: update message history deletion logic to remove newer records based on offset 2025-11-20 17:29:26 +08:00
Soulter 44c2b58206 refactor: optimize WebChat session migration by batch inserting records 2025-11-20 17:29:26 +08:00
Soulter 0e2adab3fd refactor: change to platform session 2025-11-20 17:29:26 +08:00
Soulter 0fe87d6b98 fix: restore migration check for version 4.7 2025-11-20 17:29:26 +08:00
Soulter 31ef3d1084 refactor: implement WebChat session management and migration from version 4.6 to 4.7 2025-11-20 17:29:26 +08:00
  - Added WebChatSession model for managing user sessions.
  - Introduced methods for creating, retrieving, updating, and deleting WebChat sessions in the database.
  - Updated core lifecycle to include migration from version 4.6 to 4.7, creating WebChat sessions from existing platform message history.
  - Refactored chat routes to support the new session-based architecture, replacing conversation-related endpoints with session endpoints.
  - Updated frontend components to handle sessions instead of conversations, including session creation and management.
Soulter b984bb2513 stage 2025-11-20 13:51:53 +08:00
63 changed files with 3705 additions and 2626 deletions
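Commit 81a76bc8e5 above mentions making mcp tool reconnection thread-safe with an asyncio.Lock. A minimal sketch of that double-checked locking pattern, assuming a hypothetical client shape (the class and method names here are illustrative, not the real mcp_client API):

```python
import asyncio

class McpClient:
    """Illustrative client: reconnection guarded by an asyncio.Lock."""

    def __init__(self) -> None:
        self._connected = False
        self._reconnect_lock = asyncio.Lock()
        self.reconnects = 0

    async def _connect(self) -> None:
        # Placeholder for the real (re)connection handshake.
        self._connected = True
        self.reconnects += 1

    async def ensure_connected(self) -> None:
        if self._connected:
            return
        async with self._reconnect_lock:
            # Re-check inside the lock so concurrent callers reconnect only once.
            if not self._connected:
                await self._connect()

async def main() -> McpClient:
    client = McpClient()
    # Many concurrent calls, but the lock collapses them into one reconnect.
    await asyncio.gather(*(client.ensure_connected() for _ in range(5)))
    return client

client = asyncio.run(main())
```

The re-check after acquiring the lock is what prevents a thundering herd of reconnects when several tool calls hit a closed stream at once.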

View File

@@ -32,7 +32,7 @@
<a href="https://github.com/AstrBotDevs/AstrBot/issues">Issue Reports</a>
</div>
AstrBot is an open-source, one-stop Agent chatbot platform that plugs seamlessly into mainstream instant-messaging software, providing reliable, extensible conversational AI infrastructure for individuals, developers, and teams. Whether for a personal AI companion, intelligent customer service, an automation assistant, or an enterprise knowledge base, AstrBot can quickly build production-ready AI applications into your messaging platform's workflow.
AstrBot is an open-source, one-stop Agent chatbot platform and development framework.
## Key Features

View File

@@ -2,12 +2,13 @@ import abc
import typing as T
from enum import Enum, auto
from astrbot import logger
from astrbot.core.provider import Provider
from astrbot.core.provider.entities import LLMResponse
from ..hooks import BaseAgentRunHooks
from ..response import AgentResponse
from ..run_context import ContextWrapper, TContext
from ..tool_executor import BaseFunctionToolExecutor
class AgentState(Enum):
@@ -23,7 +24,9 @@ class BaseAgentRunner(T.Generic[TContext]):
@abc.abstractmethod
async def reset(
self,
provider: Provider,
run_context: ContextWrapper[TContext],
tool_executor: BaseFunctionToolExecutor[TContext],
agent_hooks: BaseAgentRunHooks[TContext],
**kwargs: T.Any,
) -> None:
@@ -57,9 +60,3 @@ class BaseAgentRunner(T.Generic[TContext]):
This method should be called after the agent is done.
"""
...
def _transition_state(self, new_state: AgentState) -> None:
"""Transition the agent state."""
if self._state != new_state:
logger.debug(f"Agent state transition: {self._state} -> {new_state}")
self._state = new_state
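The `_transition_state` helper removed from the base class here implements a small guarded state machine: it logs and updates only on actual changes. A self-contained sketch of the same pattern, runnable outside AstrBot:

```python
import logging
from enum import Enum, auto

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("agent")

class AgentState(Enum):
    IDLE = auto()
    RUNNING = auto()
    DONE = auto()
    ERROR = auto()

class Agent:
    def __init__(self) -> None:
        self._state = AgentState.IDLE

    def _transition_state(self, new_state: AgentState) -> None:
        # Only log and update when the state actually changes.
        if self._state != new_state:
            logger.debug(f"Agent state transition: {self._state} -> {new_state}")
            self._state = new_state

    def done(self) -> bool:
        # Terminal states, mirroring the runners' done() checks.
        return self._state in (AgentState.DONE, AgentState.ERROR)

agent = Agent()
agent._transition_state(AgentState.RUNNING)
agent._transition_state(AgentState.DONE)
```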

View File

@@ -1,367 +0,0 @@
import base64
import json
import sys
import typing as T
import astrbot.core.message.components as Comp
from astrbot import logger
from astrbot.core import sp
from astrbot.core.message.message_event_result import MessageChain
from astrbot.core.provider.entities import (
LLMResponse,
ProviderRequest,
)
from ...hooks import BaseAgentRunHooks
from ...response import AgentResponseData
from ...run_context import ContextWrapper, TContext
from ..base import AgentResponse, AgentState, BaseAgentRunner
from .coze_api_client import CozeAPIClient
if sys.version_info >= (3, 12):
from typing import override
else:
from typing_extensions import override
class CozeAgentRunner(BaseAgentRunner[TContext]):
"""Coze Agent Runner"""
@override
async def reset(
self,
request: ProviderRequest,
run_context: ContextWrapper[TContext],
agent_hooks: BaseAgentRunHooks[TContext],
provider_config: dict,
**kwargs: T.Any,
) -> None:
self.req = request
self.streaming = kwargs.get("streaming", False)
self.final_llm_resp = None
self._state = AgentState.IDLE
self.agent_hooks = agent_hooks
self.run_context = run_context
self.api_key = provider_config.get("coze_api_key", "")
if not self.api_key:
raise Exception("Coze API Key 不能为空。")
self.bot_id = provider_config.get("bot_id", "")
if not self.bot_id:
raise Exception("Coze Bot ID 不能为空。")
self.api_base: str = provider_config.get("coze_api_base", "https://api.coze.cn")
if not isinstance(self.api_base, str) or not self.api_base.startswith(
("http://", "https://"),
):
raise Exception(
"Coze API Base URL 格式不正确,必须以 http:// 或 https:// 开头。",
)
self.timeout = provider_config.get("timeout", 120)
if isinstance(self.timeout, str):
self.timeout = int(self.timeout)
self.auto_save_history = provider_config.get("auto_save_history", True)
# Create the API client
self.api_client = CozeAPIClient(api_key=self.api_key, api_base=self.api_base)
# Session-related caches
self.file_id_cache: dict[str, dict[str, str]] = {}
@override
async def step(self):
"""Run a single step of the Coze agent."""
if not self.req:
raise ValueError("Request is not set. Please call reset() first.")
if self._state == AgentState.IDLE:
try:
await self.agent_hooks.on_agent_begin(self.run_context)
except Exception as e:
logger.error(f"Error in on_agent_begin hook: {e}", exc_info=True)
# Begin processing; transition to the running state
self._transition_state(AgentState.RUNNING)
try:
# Execute the Coze request and process the results
async for response in self._execute_coze_request():
yield response
except Exception as e:
logger.error(f"Coze 请求失败:{str(e)}")
self._transition_state(AgentState.ERROR)
self.final_llm_resp = LLMResponse(
role="err", completion_text=f"Coze 请求失败:{str(e)}"
)
yield AgentResponse(
type="err",
data=AgentResponseData(
chain=MessageChain().message(f"Coze 请求失败:{str(e)}")
),
)
finally:
await self.api_client.close()
@override
async def step_until_done(
self, max_step: int = 30
) -> T.AsyncGenerator[AgentResponse, None]:
while not self.done():
async for resp in self.step():
yield resp
async def _execute_coze_request(self):
"""Core logic for executing a Coze request."""
prompt = self.req.prompt or ""
session_id = self.req.session_id or "unknown"
image_urls = self.req.image_urls or []
contexts = self.req.contexts or []
system_prompt = self.req.system_prompt
# User ID parameter
user_id = session_id
# Get or create the conversation ID
conversation_id = await sp.get_async(
scope="umo",
scope_id=user_id,
key="coze_conversation_id",
default="",
)
# Build the messages
additional_messages = []
if system_prompt:
if not self.auto_save_history or not conversation_id:
additional_messages.append(
{
"role": "system",
"content": system_prompt,
"content_type": "text",
},
)
# Process history contexts
if not self.auto_save_history and contexts:
for ctx in contexts:
if isinstance(ctx, dict) and "role" in ctx and "content" in ctx:
# Handle images in the context
content = ctx["content"]
if isinstance(content, list):
# Multimodal content; images need processing
processed_content = []
for item in content:
if isinstance(item, dict):
if item.get("type") == "text":
processed_content.append(item)
elif item.get("type") == "image_url":
# Upload the image
try:
image_data = item.get("image_url", {})
url = image_data.get("url", "")
if url:
file_id = (
await self._download_and_upload_image(
url, session_id
)
)
processed_content.append(
{
"type": "file",
"file_id": file_id,
"file_url": url,
}
)
except Exception as e:
logger.warning(f"处理上下文图片失败: {e}")
continue
if processed_content:
additional_messages.append(
{
"role": ctx["role"],
"content": processed_content,
"content_type": "object_string",
}
)
else:
# Plain-text content
additional_messages.append(
{
"role": ctx["role"],
"content": content,
"content_type": "text",
}
)
# Build the current message
if prompt or image_urls:
if image_urls:
# Multimodal content
object_string_content = []
if prompt:
object_string_content.append({"type": "text", "text": prompt})
for url in image_urls:
# the url is a base64 string
try:
image_data = base64.b64decode(url)
file_id = await self.api_client.upload_file(image_data)
object_string_content.append(
{
"type": "image",
"file_id": file_id,
}
)
except Exception as e:
logger.warning(f"处理图片失败 {url}: {e}")
continue
if object_string_content:
content = json.dumps(object_string_content, ensure_ascii=False)
additional_messages.append(
{
"role": "user",
"content": content,
"content_type": "object_string",
}
)
elif prompt:
# Plain text
additional_messages.append(
{
"role": "user",
"content": prompt,
"content_type": "text",
},
)
# Execute the Coze API request
accumulated_content = ""
message_started = False
async for chunk in self.api_client.chat_messages(
bot_id=self.bot_id,
user_id=user_id,
additional_messages=additional_messages,
conversation_id=conversation_id,
auto_save_history=self.auto_save_history,
stream=True,
timeout=self.timeout,
):
event_type = chunk.get("event")
data = chunk.get("data", {})
if event_type == "conversation.chat.created":
if isinstance(data, dict) and "conversation_id" in data:
await sp.put_async(
scope="umo",
scope_id=user_id,
key="coze_conversation_id",
value=data["conversation_id"],
)
if event_type == "conversation.message.delta":
# Incremental message delta
content = data.get("content", "")
if not content and "delta" in data:
content = data["delta"].get("content", "")
if not content and "text" in data:
content = data.get("text", "")
if content:
accumulated_content += content
message_started = True
# For streaming responses, emit the incremental delta
if self.streaming:
yield AgentResponse(
type="streaming_delta",
data=AgentResponseData(
chain=MessageChain().message(content)
),
)
elif event_type == "conversation.message.completed":
# Message completed
logger.debug("Coze message completed")
message_started = True
elif event_type == "conversation.chat.completed":
# Chat completed
logger.debug("Coze chat completed")
break
elif event_type == "error":
# Error handling
error_msg = data.get("msg", "未知错误")
error_code = data.get("code", "UNKNOWN")
logger.error(f"Coze 出现错误: {error_code} - {error_msg}")
raise Exception(f"Coze 出现错误: {error_code} - {error_msg}")
if not message_started and not accumulated_content:
logger.warning("Coze 未返回任何内容")
accumulated_content = ""
# Build the final response
chain = MessageChain(chain=[Comp.Plain(accumulated_content)])
self.final_llm_resp = LLMResponse(role="assistant", result_chain=chain)
self._transition_state(AgentState.DONE)
try:
await self.agent_hooks.on_agent_done(self.run_context, self.final_llm_resp)
except Exception as e:
logger.error(f"Error in on_agent_done hook: {e}", exc_info=True)
# Yield the final result
yield AgentResponse(
type="llm_result",
data=AgentResponseData(chain=chain),
)
async def _download_and_upload_image(
self,
image_url: str,
session_id: str | None = None,
) -> str:
"""Download an image, upload it to Coze, and return the file_id."""
import hashlib
# Hash the URL to form a cache key
cache_key = hashlib.md5(image_url.encode("utf-8")).hexdigest()
if session_id:
if session_id not in self.file_id_cache:
self.file_id_cache[session_id] = {}
if cache_key in self.file_id_cache[session_id]:
file_id = self.file_id_cache[session_id][cache_key]
logger.debug(f"[Coze] 使用缓存的 file_id: {file_id}")
return file_id
try:
image_data = await self.api_client.download_image(image_url)
file_id = await self.api_client.upload_file(image_data)
if session_id:
self.file_id_cache[session_id][cache_key] = file_id
logger.debug(f"[Coze] 图片上传成功并缓存file_id: {file_id}")
return file_id
except Exception as e:
logger.error(f"处理图片失败 {image_url}: {e!s}")
raise Exception(f"处理图片失败: {e!s}")
@override
def done(self) -> bool:
"""Check whether the agent has finished."""
return self._state in (AgentState.DONE, AgentState.ERROR)
@override
def get_final_llm_resp(self) -> LLMResponse | None:
return self.final_llm_resp

View File

@@ -1,403 +0,0 @@
import asyncio
import functools
import queue
import re
import sys
import threading
import typing as T
from dashscope import Application
from dashscope.app.application_response import ApplicationResponse
import astrbot.core.message.components as Comp
from astrbot.core import logger, sp
from astrbot.core.message.message_event_result import MessageChain
from astrbot.core.provider.entities import (
LLMResponse,
ProviderRequest,
)
from ...hooks import BaseAgentRunHooks
from ...response import AgentResponseData
from ...run_context import ContextWrapper, TContext
from ..base import AgentResponse, AgentState, BaseAgentRunner
if sys.version_info >= (3, 12):
from typing import override
else:
from typing_extensions import override
class DashscopeAgentRunner(BaseAgentRunner[TContext]):
"""Dashscope Agent Runner"""
@override
async def reset(
self,
request: ProviderRequest,
run_context: ContextWrapper[TContext],
agent_hooks: BaseAgentRunHooks[TContext],
provider_config: dict,
**kwargs: T.Any,
) -> None:
self.req = request
self.streaming = kwargs.get("streaming", False)
self.final_llm_resp = None
self._state = AgentState.IDLE
self.agent_hooks = agent_hooks
self.run_context = run_context
self.api_key = provider_config.get("dashscope_api_key", "")
if not self.api_key:
raise Exception("阿里云百炼 API Key 不能为空。")
self.app_id = provider_config.get("dashscope_app_id", "")
if not self.app_id:
raise Exception("阿里云百炼 APP ID 不能为空。")
self.dashscope_app_type = provider_config.get("dashscope_app_type", "")
if not self.dashscope_app_type:
raise Exception("阿里云百炼 APP 类型不能为空。")
self.variables: dict = provider_config.get("variables", {}) or {}
self.rag_options: dict = provider_config.get("rag_options", {})
self.output_reference = self.rag_options.get("output_reference", False)
self.rag_options = self.rag_options.copy()
self.rag_options.pop("output_reference", None)
self.timeout = provider_config.get("timeout", 120)
if isinstance(self.timeout, str):
self.timeout = int(self.timeout)
def has_rag_options(self):
"""Check whether RAG options are configured.
Returns:
bool: True if RAG options are present.
"""
if self.rag_options and (
len(self.rag_options.get("pipeline_ids", [])) > 0
or len(self.rag_options.get("file_ids", [])) > 0
):
return True
return False
@override
async def step(self):
"""Run a single step of the Dashscope agent."""
if not self.req:
raise ValueError("Request is not set. Please call reset() first.")
if self._state == AgentState.IDLE:
try:
await self.agent_hooks.on_agent_begin(self.run_context)
except Exception as e:
logger.error(f"Error in on_agent_begin hook: {e}", exc_info=True)
# Begin processing; transition to the running state
self._transition_state(AgentState.RUNNING)
try:
# Execute the Dashscope request and process the results
async for response in self._execute_dashscope_request():
yield response
except Exception as e:
logger.error(f"阿里云百炼请求失败:{str(e)}")
self._transition_state(AgentState.ERROR)
self.final_llm_resp = LLMResponse(
role="err", completion_text=f"阿里云百炼请求失败:{str(e)}"
)
yield AgentResponse(
type="err",
data=AgentResponseData(
chain=MessageChain().message(f"阿里云百炼请求失败:{str(e)}")
),
)
@override
async def step_until_done(
self, max_step: int = 30
) -> T.AsyncGenerator[AgentResponse, None]:
while not self.done():
async for resp in self.step():
yield resp
def _consume_sync_generator(
self, response: T.Any, response_queue: queue.Queue
) -> None:
"""Consume a synchronous generator in a worker thread, putting results onto a queue.
Args:
response: the synchronous generator object
response_queue: the queue used to hand data back to the event loop
"""
try:
if self.streaming:
for chunk in response:
response_queue.put(("data", chunk))
else:
response_queue.put(("data", response))
except Exception as e:
response_queue.put(("error", e))
finally:
response_queue.put(("done", None))
async def _process_stream_chunk(
self, chunk: ApplicationResponse, output_text: str
) -> tuple[str, list | None, AgentResponse | None]:
"""Process a single chunk of the streaming response.
Args:
chunk: a Dashscope response chunk
output_text: the output text accumulated so far
Returns:
(updated output_text, doc_references, AgentResponse or None)
"""
logger.debug(f"dashscope stream chunk: {chunk}")
if chunk.status_code != 200:
logger.error(
f"阿里云百炼请求失败: request_id={chunk.request_id}, code={chunk.status_code}, message={chunk.message}, 请参考文档https://help.aliyun.com/zh/model-studio/developer-reference/error-code",
)
self._transition_state(AgentState.ERROR)
error_msg = (
f"阿里云百炼请求失败: message={chunk.message} code={chunk.status_code}"
)
self.final_llm_resp = LLMResponse(
role="err",
result_chain=MessageChain().message(error_msg),
)
return (
output_text,
None,
AgentResponse(
type="err",
data=AgentResponseData(chain=MessageChain().message(error_msg)),
),
)
chunk_text = chunk.output.get("text", "") or ""
# Normalize RAG reference footnote markers
chunk_text = re.sub(r"<ref>\[(\d+)\]</ref>", r"[\1]", chunk_text)
response = None
if chunk_text:
output_text += chunk_text
response = AgentResponse(
type="streaming_delta",
data=AgentResponseData(chain=MessageChain().message(chunk_text)),
)
# Collect document references
doc_references = chunk.output.get("doc_references", None)
return output_text, doc_references, response
def _format_doc_references(self, doc_references: list) -> str:
"""Format document references as text.
Args:
doc_references: the list of document references
Returns:
the formatted reference text
"""
ref_parts = []
for ref in doc_references:
ref_title = (
ref.get("title", "") if ref.get("title") else ref.get("doc_name", "")
)
ref_parts.append(f"{ref['index_id']}. {ref_title}\n")
ref_str = "".join(ref_parts)
return f"\n\n回答来源:\n{ref_str}"
async def _build_request_payload(
self, prompt: str, session_id: str, contexts: list, system_prompt: str
) -> dict:
"""Build the request payload.
Args:
prompt: the user input
session_id: the session ID
contexts: the context list
system_prompt: the system prompt
Returns:
the request payload dict
"""
conversation_id = await sp.get_async(
scope="umo",
scope_id=session_id,
key="dashscope_conversation_id",
default="",
)
# Fetch session variables
payload_vars = self.variables.copy()
session_var = await sp.get_async(
scope="umo",
scope_id=session_id,
key="session_variables",
default={},
)
payload_vars.update(session_var)
if (
self.dashscope_app_type in ["agent", "dialog-workflow"]
and not self.has_rag_options()
):
# App types that support multi-turn conversation
p = {
"app_id": self.app_id,
"api_key": self.api_key,
"prompt": prompt,
"biz_params": payload_vars or None,
"stream": self.streaming,
"incremental_output": True,
}
if conversation_id:
p["session_id"] = conversation_id
return p
else:
# App types without multi-turn conversation support
payload = {
"app_id": self.app_id,
"prompt": prompt,
"api_key": self.api_key,
"biz_params": payload_vars or None,
"stream": self.streaming,
"incremental_output": True,
}
if self.rag_options:
payload["rag_options"] = self.rag_options
return payload
async def _handle_streaming_response(
self, response: T.Any, session_id: str
) -> T.AsyncGenerator[AgentResponse, None]:
"""Handle the streaming response.
Args:
response: the Dashscope streaming-response generator
Yields:
AgentResponse objects
"""
response_queue = queue.Queue()
consumer_thread = threading.Thread(
target=self._consume_sync_generator,
args=(response, response_queue),
daemon=True,
)
consumer_thread.start()
output_text = ""
doc_references = None
while True:
try:
item_type, item_data = await asyncio.get_event_loop().run_in_executor(
None, response_queue.get, True, 1
)
except queue.Empty:
continue
if item_type == "done":
break
elif item_type == "error":
raise item_data
elif item_type == "data":
chunk = item_data
assert isinstance(chunk, ApplicationResponse)
(
output_text,
chunk_doc_refs,
response,
) = await self._process_stream_chunk(chunk, output_text)
if response:
if response.type == "err":
yield response
return
yield response
if chunk_doc_refs:
doc_references = chunk_doc_refs
if chunk.output.session_id:
await sp.put_async(
scope="umo",
scope_id=session_id,
key="dashscope_conversation_id",
value=chunk.output.session_id,
)
# Append RAG references
if self.output_reference and doc_references:
ref_text = self._format_doc_references(doc_references)
output_text += ref_text
if self.streaming:
yield AgentResponse(
type="streaming_delta",
data=AgentResponseData(chain=MessageChain().message(ref_text)),
)
# Build the final response
chain = MessageChain(chain=[Comp.Plain(output_text)])
self.final_llm_resp = LLMResponse(role="assistant", result_chain=chain)
self._transition_state(AgentState.DONE)
try:
await self.agent_hooks.on_agent_done(self.run_context, self.final_llm_resp)
except Exception as e:
logger.error(f"Error in on_agent_done hook: {e}", exc_info=True)
# Yield the final result
yield AgentResponse(
type="llm_result",
data=AgentResponseData(chain=chain),
)
async def _execute_dashscope_request(self):
"""Core logic for executing a Dashscope request."""
prompt = self.req.prompt or ""
session_id = self.req.session_id or "unknown"
image_urls = self.req.image_urls or []
contexts = self.req.contexts or []
system_prompt = self.req.system_prompt
# Check image input
if image_urls:
logger.warning("阿里云百炼暂不支持图片输入,将自动忽略图片内容。")
# Build the request payload
payload = await self._build_request_payload(
prompt, session_id, contexts, system_prompt
)
if not self.streaming:
payload["incremental_output"] = False
# Issue the request
partial = functools.partial(Application.call, **payload)
response = await asyncio.get_event_loop().run_in_executor(None, partial)
async for resp in self._handle_streaming_response(response, session_id):
yield resp
@override
def done(self) -> bool:
"""Check whether the agent has finished."""
return self._state in (AgentState.DONE, AgentState.ERROR)
@override
def get_final_llm_resp(self) -> LLMResponse | None:
return self.final_llm_resp
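The Dashscope runner above bridges the SDK's blocking, synchronous streaming generator into asyncio: a worker thread drains the generator into a `queue.Queue`, and the coroutine pulls items off the queue via `run_in_executor`. The same pattern as a standalone helper (a sketch under those assumptions, not the runner's exact code):

```python
import asyncio
import queue
import threading

def _consume(gen, q: queue.Queue) -> None:
    # Runs in a worker thread: drain the blocking generator into the queue.
    try:
        for item in gen:
            q.put(("data", item))
    except Exception as e:
        q.put(("error", e))
    finally:
        q.put(("done", None))

async def iter_sync_generator(gen):
    """Async-iterate over a synchronous generator without blocking the event loop."""
    q: queue.Queue = queue.Queue()
    threading.Thread(target=_consume, args=(gen, q), daemon=True).start()
    loop = asyncio.get_running_loop()
    while True:
        # q.get blocks, so run it in the default thread-pool executor.
        kind, item = await loop.run_in_executor(None, q.get)
        if kind == "done":
            return
        if kind == "error":
            raise item
        yield item

async def main():
    return [c async for c in iter_sync_generator(iter(["a", "b", "c"]))]

result = asyncio.run(main())
```

The sentinel tuples ("data"/"error"/"done") let the async side distinguish payloads, propagated exceptions, and normal termination.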

View File

@@ -1,336 +0,0 @@
import base64
import os
import sys
import typing as T
import astrbot.core.message.components as Comp
from astrbot.core import logger, sp
from astrbot.core.message.message_event_result import MessageChain
from astrbot.core.provider.entities import (
LLMResponse,
ProviderRequest,
)
from astrbot.core.utils.astrbot_path import get_astrbot_data_path
from astrbot.core.utils.io import download_file
from ...hooks import BaseAgentRunHooks
from ...response import AgentResponseData
from ...run_context import ContextWrapper, TContext
from ..base import AgentResponse, AgentState, BaseAgentRunner
from .dify_api_client import DifyAPIClient
if sys.version_info >= (3, 12):
from typing import override
else:
from typing_extensions import override
class DifyAgentRunner(BaseAgentRunner[TContext]):
"""Dify Agent Runner"""
@override
async def reset(
self,
request: ProviderRequest,
run_context: ContextWrapper[TContext],
agent_hooks: BaseAgentRunHooks[TContext],
provider_config: dict,
**kwargs: T.Any,
) -> None:
self.req = request
self.streaming = kwargs.get("streaming", False)
self.final_llm_resp = None
self._state = AgentState.IDLE
self.agent_hooks = agent_hooks
self.run_context = run_context
self.api_key = provider_config.get("dify_api_key", "")
self.api_base = provider_config.get("dify_api_base", "https://api.dify.ai/v1")
self.api_type = provider_config.get("dify_api_type", "chat")
self.workflow_output_key = provider_config.get(
"dify_workflow_output_key",
"astrbot_wf_output",
)
self.dify_query_input_key = provider_config.get(
"dify_query_input_key",
"astrbot_text_query",
)
self.variables: dict = provider_config.get("variables", {}) or {}
self.timeout = provider_config.get("timeout", 60)
if isinstance(self.timeout, str):
self.timeout = int(self.timeout)
self.api_client = DifyAPIClient(self.api_key, self.api_base)
@override
async def step(self):
"""Run a single step of the Dify agent."""
if not self.req:
raise ValueError("Request is not set. Please call reset() first.")
if self._state == AgentState.IDLE:
try:
await self.agent_hooks.on_agent_begin(self.run_context)
except Exception as e:
logger.error(f"Error in on_agent_begin hook: {e}", exc_info=True)
# Begin processing; transition to the running state
self._transition_state(AgentState.RUNNING)
try:
# Execute the Dify request and process the results
async for response in self._execute_dify_request():
yield response
except Exception as e:
logger.error(f"Dify 请求失败:{str(e)}")
self._transition_state(AgentState.ERROR)
self.final_llm_resp = LLMResponse(
role="err", completion_text=f"Dify 请求失败:{str(e)}"
)
yield AgentResponse(
type="err",
data=AgentResponseData(
chain=MessageChain().message(f"Dify 请求失败:{str(e)}")
),
)
finally:
await self.api_client.close()
@override
async def step_until_done(
self, max_step: int = 30
) -> T.AsyncGenerator[AgentResponse, None]:
while not self.done():
async for resp in self.step():
yield resp
async def _execute_dify_request(self):
"""Core logic for executing a Dify request."""
prompt = self.req.prompt or ""
session_id = self.req.session_id or "unknown"
image_urls = self.req.image_urls or []
system_prompt = self.req.system_prompt
conversation_id = await sp.get_async(
scope="umo",
scope_id=session_id,
key="dify_conversation_id",
default="",
)
result = ""
# Upload images
files_payload = []
for image_url in image_urls:
# image_url is a base64 string
try:
image_data = base64.b64decode(image_url)
file_response = await self.api_client.file_upload(
file_data=image_data,
user=session_id,
mime_type="image/png",
file_name="image.png",
)
logger.debug(f"Dify 上传图片响应:{file_response}")
if "id" not in file_response:
logger.warning(
f"上传图片后得到未知的 Dify 响应:{file_response},图片将忽略。"
)
continue
files_payload.append(
{
"type": "image",
"transfer_method": "local_file",
"upload_file_id": file_response["id"],
}
)
except Exception as e:
logger.warning(f"上传图片失败:{e}")
continue
# Fetch session variables
payload_vars = self.variables.copy()
# Dynamic variables
session_var = await sp.get_async(
scope="umo",
scope_id=session_id,
key="session_variables",
default={},
)
payload_vars.update(session_var)
payload_vars["system_prompt"] = system_prompt
# Dispatch on the configured API type
match self.api_type:
case "chat" | "agent" | "chatflow":
if not prompt:
prompt = "请描述这张图片。"
async for chunk in self.api_client.chat_messages(
inputs={
**payload_vars,
},
query=prompt,
user=session_id,
conversation_id=conversation_id,
files=files_payload,
timeout=self.timeout,
):
logger.debug(f"dify resp chunk: {chunk}")
if chunk["event"] == "message" or chunk["event"] == "agent_message":
result += chunk["answer"]
if not conversation_id:
await sp.put_async(
scope="umo",
scope_id=session_id,
key="dify_conversation_id",
value=chunk["conversation_id"],
)
conversation_id = chunk["conversation_id"]
# For streaming responses, emit the incremental delta
if self.streaming and chunk["answer"]:
yield AgentResponse(
type="streaming_delta",
data=AgentResponseData(
chain=MessageChain().message(chunk["answer"])
),
)
elif chunk["event"] == "message_end":
logger.debug("Dify message end")
break
elif chunk["event"] == "error":
logger.error(f"Dify 出现错误:{chunk}")
raise Exception(
f"Dify 出现错误 status: {chunk['status']} message: {chunk['message']}"
)
case "workflow":
async for chunk in self.api_client.workflow_run(
inputs={
self.dify_query_input_key: prompt,
"astrbot_session_id": session_id,
**payload_vars,
},
user=session_id,
files=files_payload,
timeout=self.timeout,
):
logger.debug(f"dify workflow resp chunk: {chunk}")
match chunk["event"]:
case "workflow_started":
logger.info(
f"Dify 工作流(ID: {chunk['workflow_run_id']})开始运行。"
)
case "node_finished":
logger.debug(
f"Dify 工作流节点(ID: {chunk['data']['node_id']} Title: {chunk['data'].get('title', '')})运行结束。"
)
case "text_chunk":
if self.streaming and chunk["data"]["text"]:
yield AgentResponse(
type="streaming_delta",
data=AgentResponseData(
chain=MessageChain().message(
chunk["data"]["text"]
)
),
)
case "workflow_finished":
logger.info(
f"Dify 工作流(ID: {chunk['workflow_run_id']})运行结束"
)
logger.debug(f"Dify 工作流结果:{chunk}")
if chunk["data"]["error"]:
logger.error(
f"Dify 工作流出现错误:{chunk['data']['error']}"
)
raise Exception(
f"Dify 工作流出现错误:{chunk['data']['error']}"
)
if self.workflow_output_key not in chunk["data"]["outputs"]:
raise Exception(
f"Dify 工作流的输出不包含指定的键名:{self.workflow_output_key}"
)
result = chunk
case _:
raise Exception(f"未知的 Dify API 类型:{self.api_type}")
if not result:
logger.warning("Dify 请求结果为空,请查看 Debug 日志。")
# Parse the result
chain = await self.parse_dify_result(result)
# Build the final response
self.final_llm_resp = LLMResponse(role="assistant", result_chain=chain)
self._transition_state(AgentState.DONE)
try:
await self.agent_hooks.on_agent_done(self.run_context, self.final_llm_resp)
except Exception as e:
logger.error(f"Error in on_agent_done hook: {e}", exc_info=True)
# Yield the final result
yield AgentResponse(
type="llm_result",
data=AgentResponseData(chain=chain),
)
async def parse_dify_result(self, chunk: dict | str) -> MessageChain:
"""Parse the Dify response into a MessageChain."""
if isinstance(chunk, str):
# Chat result: plain text
return MessageChain(chain=[Comp.Plain(chunk)])
async def parse_file(item: dict):
match item["type"]:
case "image":
return Comp.Image(file=item["url"], url=item["url"])
case "audio":
# Only wav is supported
temp_dir = os.path.join(get_astrbot_data_path(), "temp")
path = os.path.join(temp_dir, f"{item['filename']}.wav")
await download_file(item["url"], path)
return Comp.Record(file=path)  # send the downloaded wav as an audio record
case "video":
return Comp.Video(file=item["url"])
case _:
return Comp.File(name=item["filename"], file=item["url"])
output = chunk["data"]["outputs"][self.workflow_output_key]
chains = []
if isinstance(output, str):
# Plain-text output
chains.append(Comp.Plain(output))
elif isinstance(output, list):
# Mainly for multimodal output from Dify HTTP request nodes
for item in output:
    # handle Array[File]
    if (
        not isinstance(item, dict)
        or item.get("dify_model_identity", "") != "__dify__file__"
    ):
        # Not a Dify file object: fall back to plain text
        chains.append(Comp.Plain(str(output)))
        break
    else:
        chains.append(await parse_file(item))
# scan file
files = chunk["data"].get("files", [])
for item in files:
comp = await parse_file(item)
chains.append(comp)
return MessageChain(chain=chains)
@override
def done(self) -> bool:
"""检查 Agent 是否已完成工作"""
return self._state in (AgentState.DONE, AgentState.ERROR)
@override
def get_final_llm_resp(self) -> LLMResponse | None:
return self.final_llm_resp

View File

@@ -69,6 +69,12 @@ class ToolLoopAgentRunner(BaseAgentRunner[TContext]):
)
self.run_context.messages = messages
def _transition_state(self, new_state: AgentState) -> None:
"""转换 Agent 状态"""
if self._state != new_state:
logger.debug(f"Agent state transition: {self._state} -> {new_state}")
self._state = new_state
async def _iter_llm_responses(self) -> T.AsyncGenerator[LLMResponse, None]:
"""Yields chunks *and* a final LLMResponse."""
if self.streaming:

View File

@@ -4,7 +4,7 @@ import os
from astrbot.core.utils.astrbot_path import get_astrbot_data_path
VERSION = "4.6.1"
VERSION = "4.6.0"
DB_PATH = os.path.join(get_astrbot_data_path(), "data_v4.db")
# 默认配置
@@ -68,10 +68,6 @@ DEFAULT_CONFIG = {
"dequeue_context_length": 1,
"streaming_response": False,
"show_tool_use_status": False,
"agent_runner_type": "local",
"dify_agent_runner_provider_id": "",
"coze_agent_runner_provider_id": "",
"dashscope_agent_runner_provider_id": "",
"unsupported_streaming_strategy": "realtime_segmenting",
"max_agent_step": 30,
"tool_call_timeout": 60,
@@ -1015,7 +1011,7 @@ CONFIG_METADATA_2 = {
"id": "dify_app_default",
"provider": "dify",
"type": "dify",
"provider_type": "agent_runner",
"provider_type": "chat_completion",
"enable": True,
"dify_api_type": "chat",
"dify_api_key": "",
@@ -1029,20 +1025,20 @@ CONFIG_METADATA_2 = {
"Coze": {
"id": "coze",
"provider": "coze",
"provider_type": "agent_runner",
"provider_type": "chat_completion",
"type": "coze",
"enable": True,
"coze_api_key": "",
"bot_id": "",
"coze_api_base": "https://api.coze.cn",
"timeout": 60,
# "auto_save_history": True,
"auto_save_history": True,
},
"阿里云百炼应用": {
"id": "dashscope",
"provider": "dashscope",
"type": "dashscope",
"provider_type": "agent_runner",
"provider_type": "chat_completion",
"enable": True,
"dashscope_app_type": "agent",
"dashscope_api_key": "",
@@ -1312,19 +1308,6 @@ CONFIG_METADATA_2 = {
"timeout": 20,
"launch_model_if_not_running": False,
},
"阿里云百炼重排序": {
"id": "bailian_rerank",
"type": "bailian_rerank",
"provider": "bailian",
"provider_type": "rerank",
"enable": True,
"rerank_api_key": "",
"rerank_api_base": "https://dashscope.aliyuncs.com/api/v1/services/rerank/text-rerank/text-rerank",
"rerank_model": "qwen3-rerank",
"timeout": 30,
"return_documents": False,
"instruct": "",
},
"Xinference STT": {
"id": "xinference_stt",
"type": "xinference_stt",
@@ -1359,16 +1342,6 @@ CONFIG_METADATA_2 = {
"description": "重排序模型名称",
"type": "string",
},
"return_documents": {
"description": "是否在排序结果中返回文档原文",
"type": "bool",
"hint": "默认值false以减少网络传输开销。",
},
"instruct": {
"description": "自定义排序任务类型说明",
"type": "string",
"hint": "仅在使用 qwen3-rerank 模型时生效。建议使用英文撰写。",
},
"launch_model_if_not_running": {
"description": "模型未运行时自动启动",
"type": "bool",
@@ -1911,6 +1884,7 @@ CONFIG_METADATA_2 = {
"enable": {
"description": "启用",
"type": "bool",
"hint": "是否启用。",
},
"key": {
"description": "API Key",
@@ -2040,22 +2014,12 @@ CONFIG_METADATA_2 = {
"unsupported_streaming_strategy": {
"type": "string",
},
"agent_runner_type": {
"type": "string",
},
"dify_agent_runner_provider_id": {
"type": "string",
},
"coze_agent_runner_provider_id": {
"type": "string",
},
"dashscope_agent_runner_provider_id": {
"type": "string",
},
"max_agent_step": {
"description": "工具调用轮数上限",
"type": "int",
},
"tool_call_timeout": {
"description": "工具调用超时时间(秒)",
"type": "int",
},
},
@@ -2193,75 +2157,30 @@ CONFIG_METADATA_3 = {
"ai_group": {
"name": "AI 配置",
"metadata": {
"agent_runner": {
"description": "Agent 执行方式",
"hint": "选择 AI 对话的执行器,默认为 AstrBot 内置 Agent 执行器,可使用 AstrBot 内的知识库、人格、工具调用功能。如果不打算接入 Dify 或 Coze 等第三方 Agent 执行器,不需要修改此节。",
"ai": {
"description": "模型",
"type": "object",
"items": {
"provider_settings.enable": {
"description": "启用",
"description": "启用大语言模型聊天",
"type": "bool",
"hint": "AI 对话总开关",
},
"provider_settings.agent_runner_type": {
"description": "执行器",
"type": "string",
"options": ["local", "dify", "coze", "dashscope"],
"labels": ["内置 Agent", "Dify", "Coze", "阿里云百炼应用"],
"condition": {
"provider_settings.enable": True,
},
},
"provider_settings.coze_agent_runner_provider_id": {
"description": "Coze Agent 执行器提供商 ID",
"type": "string",
"_special": "select_agent_runner_provider:coze",
"condition": {
"provider_settings.agent_runner_type": "coze",
"provider_settings.enable": True,
},
},
"provider_settings.dify_agent_runner_provider_id": {
"description": "Dify Agent 执行器提供商 ID",
"type": "string",
"_special": "select_agent_runner_provider:dify",
"condition": {
"provider_settings.agent_runner_type": "dify",
"provider_settings.enable": True,
},
},
"provider_settings.dashscope_agent_runner_provider_id": {
"description": "阿里云百炼应用 Agent 执行器提供商 ID",
"type": "string",
"_special": "select_agent_runner_provider:dashscope",
"condition": {
"provider_settings.agent_runner_type": "dashscope",
"provider_settings.enable": True,
},
},
},
},
"ai": {
"description": "模型",
"hint": "当使用非内置 Agent 执行器时,默认聊天模型和默认图片转述模型可能会无效,但某些插件会依赖此配置项来调用 AI 能力。",
"type": "object",
"items": {
"provider_settings.default_provider_id": {
"description": "默认聊天模型",
"type": "string",
"_special": "select_provider",
"hint": "留空时使用第一个模型",
"hint": "留空时使用第一个模型",
},
"provider_settings.default_image_caption_provider_id": {
"description": "默认图片转述模型",
"type": "string",
"_special": "select_provider",
"hint": "留空代表不使用可用于非多模态模型",
"hint": "留空代表不使用可用于不支持视觉模态的聊天模型",
},
"provider_stt_settings.enable": {
"description": "启用语音转文本",
"type": "bool",
"hint": "STT 总开关",
"hint": "STT 总开关",
},
"provider_stt_settings.provider_id": {
"description": "默认语音转文本模型",
@@ -2275,11 +2194,12 @@ CONFIG_METADATA_3 = {
"provider_tts_settings.enable": {
"description": "启用文本转语音",
"type": "bool",
"hint": "TTS 总开关",
"hint": "TTS 总开关。当关闭时,会话启用 TTS 也不会生效。",
},
"provider_tts_settings.provider_id": {
"description": "默认文本转语音模型",
"type": "string",
"hint": "用户也可使用 /provider 单独选择会话的 TTS 模型。",
"_special": "select_provider_tts",
"condition": {
"provider_tts_settings.enable": True,
@@ -2290,9 +2210,6 @@ CONFIG_METADATA_3 = {
"type": "text",
},
},
"condition": {
"provider_settings.enable": True,
},
},
"persona": {
"description": "人格",
@@ -2304,10 +2221,6 @@ CONFIG_METADATA_3 = {
"_special": "select_persona",
},
},
"condition": {
"provider_settings.agent_runner_type": "local",
"provider_settings.enable": True,
},
},
"knowledgebase": {
"description": "知识库",
@@ -2336,10 +2249,6 @@ CONFIG_METADATA_3 = {
"hint": "启用后,知识库检索将作为 LLM Tool由模型自主决定何时调用知识库进行查询。需要模型支持函数调用能力。",
},
},
"condition": {
"provider_settings.agent_runner_type": "local",
"provider_settings.enable": True,
},
},
"websearch": {
"description": "网页搜索",
@@ -2376,10 +2285,6 @@ CONFIG_METADATA_3 = {
"type": "bool",
},
},
"condition": {
"provider_settings.agent_runner_type": "local",
"provider_settings.enable": True,
},
},
"others": {
"description": "其他配置",
@@ -2388,51 +2293,34 @@ CONFIG_METADATA_3 = {
"provider_settings.display_reasoning_text": {
"description": "显示思考内容",
"type": "bool",
"condition": {
"provider_settings.agent_runner_type": "local",
},
},
"provider_settings.identifier": {
"description": "用户识别",
"type": "bool",
"hint": "启用后,会在提示词前包含用户 ID 信息。",
},
"provider_settings.group_name_display": {
"description": "显示群名称",
"type": "bool",
"hint": "启用后,在支持的平台(OneBot v11)上会在提示词前包含群名称信息。",
"hint": "启用后,在支持的平台(aiocqhttp)上会在 prompt 中包含群名称信息。",
},
"provider_settings.datetime_system_prompt": {
"description": "现实世界时间感知",
"type": "bool",
"hint": "启用后,会在系统提示词中附带当前时间信息。",
"condition": {
"provider_settings.agent_runner_type": "local",
},
},
"provider_settings.show_tool_use_status": {
"description": "输出函数调用状态",
"type": "bool",
"condition": {
"provider_settings.agent_runner_type": "local",
},
},
"provider_settings.max_agent_step": {
"description": "工具调用轮数上限",
"type": "int",
"condition": {
"provider_settings.agent_runner_type": "local",
},
},
"provider_settings.tool_call_timeout": {
"description": "工具调用超时时间(秒)",
"type": "int",
"condition": {
"provider_settings.agent_runner_type": "local",
},
},
"provider_settings.streaming_response": {
"description": "流式输出",
"description": "流式回复",
"type": "bool",
},
"provider_settings.unsupported_streaming_strategy": {
@@ -2448,23 +2336,17 @@ CONFIG_METADATA_3 = {
"provider_settings.max_context_length": {
"description": "最多携带对话轮数",
"type": "int",
"hint": "超出这个数量时丢弃最旧的部分,一轮聊天记为 1 条-1 为不限制",
"condition": {
"provider_settings.agent_runner_type": "local",
},
"hint": "超出这个数量时丢弃最旧的部分,一轮聊天记为 1 条-1 为不限制",
},
"provider_settings.dequeue_context_length": {
"description": "丢弃对话轮数",
"type": "int",
"hint": "超出最多携带对话轮数时, 一次丢弃的聊天轮数",
"condition": {
"provider_settings.agent_runner_type": "local",
},
"hint": "超出最多携带对话轮数时, 一次丢弃的聊天轮数",
},
"provider_settings.wake_prefix": {
"description": "LLM 聊天额外唤醒前缀 ",
"type": "string",
"hint": "如果唤醒前缀为 /, 额外聊天唤醒前缀为 chat则需要 /chat 才会触发 LLM 请求",
"hint": "如果唤醒前缀为 `/`, 额外聊天唤醒前缀为 `chat`,则需要 `/chat` 才会触发 LLM 请求。默认为空。",
},
"provider_settings.prompt_prefix": {
"description": "用户提示词",
@@ -2476,9 +2358,6 @@ CONFIG_METADATA_3 = {
"type": "bool",
},
},
"condition": {
"provider_settings.enable": True,
},
},
},
},

View File

@@ -16,13 +16,15 @@ import time
import traceback
from asyncio import Queue
from astrbot.api import logger, sp
from astrbot.core import LogBroker
from astrbot.core import LogBroker, logger, sp
from astrbot.core.astrbot_config_mgr import AstrBotConfigManager
from astrbot.core.config.default import VERSION
from astrbot.core.conversation_mgr import ConversationManager
from astrbot.core.db import BaseDatabase
from astrbot.core.db.migration.migra_45_to_46 import migrate_45_to_46
from astrbot.core.db.migration.migra_webchat_session import migrate_webchat_session
from astrbot.core.knowledge_base.kb_mgr import KnowledgeBaseManager
from astrbot.core.memory.memory_manager import MemoryManager
from astrbot.core.persona_mgr import PersonaManager
from astrbot.core.pipeline.scheduler import PipelineContext, PipelineScheduler
from astrbot.core.platform.manager import PlatformManager
@@ -33,7 +35,6 @@ from astrbot.core.star.context import Context
from astrbot.core.star.star_handler import EventType, star_handlers_registry, star_map
from astrbot.core.umop_config_router import UmopConfigRouter
from astrbot.core.updator import AstrBotUpdator
from astrbot.core.utils.migra_helper import migra
from . import astrbot_config, html_renderer
from .event_bus import EventBus
@@ -97,16 +98,18 @@ class AstrBotCoreLifecycle:
sp=sp,
)
# apply migration
# 4.5 to 4.6 migration for umop_config_router
try:
await migra(
self.db,
self.astrbot_config_mgr,
self.umop_config_router,
self.astrbot_config_mgr,
)
await migrate_45_to_46(self.astrbot_config_mgr, self.umop_config_router)
except Exception as e:
logger.error(f"AstrBot migration failed: {e!s}")
logger.error(f"Migration from version 4.5 to 4.6 failed: {e!s}")
logger.error(traceback.format_exc())
# migration for webchat session
try:
await migrate_webchat_session(self.db)
except Exception as e:
logger.error(f"Migration for webchat session failed: {e!s}")
logger.error(traceback.format_exc())
# 初始化事件队列
@@ -134,6 +137,8 @@ class AstrBotCoreLifecycle:
# 初始化知识库管理器
self.kb_manager = KnowledgeBaseManager(self.provider_manager)
# 初始化记忆管理器
self.memory_manager = MemoryManager()
# 初始化提供给插件的上下文
self.star_context = Context(
@@ -147,6 +152,7 @@ class AstrBotCoreLifecycle:
self.persona_mgr,
self.astrbot_config_mgr,
self.kb_manager,
self.memory_manager,
)
# 初始化插件管理器

View File

@@ -25,7 +25,7 @@ async def migrate_webchat_session(db_helper: BaseDatabase):
"""
# 检查是否已经完成迁移
migration_done = await db_helper.get_preference(
"global", "global", "migration_done_webchat_session_1"
"global", "global", "migration_done_webchat_session"
)
if migration_done:
return
@@ -43,7 +43,7 @@ async def migrate_webchat_session(db_helper: BaseDatabase):
func.max(PlatformMessageHistory.updated_at).label("latest"),
)
.where(col(PlatformMessageHistory.platform_id) == "webchat")
.where(col(PlatformMessageHistory.sender_id) != "bot")
.where(col(PlatformMessageHistory.sender_id) == "astrbot")
.group_by(col(PlatformMessageHistory.user_id))
)
@@ -53,7 +53,7 @@ async def migrate_webchat_session(db_helper: BaseDatabase):
if not webchat_users:
logger.info("没有找到需要迁移的 WebChat 数据")
await sp.put_async(
"global", "global", "migration_done_webchat_session_1", True
"global", "global", "migration_done_webchat_session", True
)
return
@@ -124,7 +124,7 @@ async def migrate_webchat_session(db_helper: BaseDatabase):
logger.info("没有新会话需要迁移")
# 标记迁移完成
await sp.put_async("global", "global", "migration_done_webchat_session_1", True)
await sp.put_async("global", "global", "migration_done_webchat_session", True)
except Exception as e:
logger.error(f"迁移过程中发生错误: {e}", exc_info=True)

View File

@@ -173,7 +173,7 @@ class PlatformSession(SQLModel, table=True):
max_length=100,
nullable=False,
unique=True,
default_factory=lambda: str(uuid.uuid4()),
default_factory=lambda: f"webchat_{uuid.uuid4()}",
)
platform_id: str = Field(default="webchat", nullable=False)
"""Platform identifier (e.g., 'webchat', 'qq', 'discord')"""

View File

@@ -794,7 +794,7 @@ class SQLiteDatabase(BaseDatabase):
await session.execute(
update(PlatformSession)
.where(col(PlatformSession.session_id) == session_id)
.where(col(PlatformSession.session_id == session_id))
.values(**values),
)
@@ -805,6 +805,6 @@ class SQLiteDatabase(BaseDatabase):
async with session.begin():
await session.execute(
delete(PlatformSession).where(
col(PlatformSession.session_id) == session_id,
col(PlatformSession.session_id == session_id),
),
)

View File

@@ -1,11 +1,20 @@
import abc
from dataclasses import dataclass
from typing import TypedDict
@dataclass
class Result:
class ResultData(TypedDict):
id: str
doc_id: str
text: str
metadata: str
created_at: int
updated_at: int
similarity: float
data: dict
data: ResultData | dict
class BaseVecDB:

View File

@@ -0,0 +1,822 @@
{
"type": "excalidraw",
"version": 2,
"source": "https://marketplace.visualstudio.com/items?itemName=pomdtr.excalidraw-editor",
"elements": [
{
"id": "l6cYurMvF69IM4Kc33Qou",
"type": "rectangle",
"x": 173.140625,
"y": -29.0234375,
"width": 92.95703125,
"height": 77.109375,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "a0",
"roundness": {
"type": 3
},
"seed": 1409469537,
"version": 91,
"versionNonce": 307958671,
"isDeleted": false,
"boundElements": [],
"updated": 1763703733605,
"link": null,
"locked": false
},
{
"id": "1ZvS6t8U6ihUjNU0dakgl",
"type": "arrow",
"x": 409.30859375,
"y": 9.6875,
"width": 118.2734375,
"height": 1.9609375,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "a1",
"roundness": {
"type": 2
},
"seed": 326508865,
"version": 120,
"versionNonce": 199367023,
"isDeleted": false,
"boundElements": null,
"updated": 1763703733605,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
-118.2734375,
-1.9609375
]
],
"lastCommittedPoint": null,
"startBinding": null,
"endBinding": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"elbowed": false
},
{
"id": "tfdUGiJdcMoOHGfqFHXK6",
"type": "text",
"x": 153.46875,
"y": -70.9765625,
"width": 136.4598846435547,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "a2",
"roundness": null,
"seed": 688712865,
"version": 67,
"versionNonce": 300660705,
"isDeleted": false,
"boundElements": null,
"updated": 1763703743816,
"link": null,
"locked": false,
"text": "FAISS+SQLite",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "FAISS+SQLite",
"autoResize": true,
"lineHeight": 1.25
},
{
"id": "AeL3kEB9a8_TAvAXpAbpl",
"type": "text",
"x": 438.36328125,
"y": -3.78125,
"width": 116.109375,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "a3",
"roundness": null,
"seed": 788579535,
"version": 33,
"versionNonce": 946602095,
"isDeleted": false,
"boundElements": null,
"updated": 1763703932431,
"link": null,
"locked": false,
"text": "FACT",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "FACT",
"autoResize": false,
"lineHeight": 1.25
},
{
"id": "Pe3TeMZvxQ8tRTcbD5v6P",
"type": "arrow",
"x": 297.125,
"y": 40.2578125,
"width": 120.2421875,
"height": 1.421875,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "a4",
"roundness": {
"type": 2
},
"seed": 1146229999,
"version": 44,
"versionNonce": 636917679,
"isDeleted": false,
"boundElements": null,
"updated": 1763703759050,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
120.2421875,
1.421875
]
],
"lastCommittedPoint": null,
"startBinding": null,
"endBinding": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"elbowed": false
},
{
"id": "GhmQoadtQRK8c8aEEbYKQ",
"type": "text",
"x": 283.53515625,
"y": 64.76171875,
"width": 130.85989379882812,
"height": 50,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "a5",
"roundness": null,
"seed": 1445650959,
"version": 79,
"versionNonce": 566193167,
"isDeleted": false,
"boundElements": null,
"updated": 1763703768982,
"link": null,
"locked": false,
"text": "top-n Similary\n",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "top-n Similary\n",
"autoResize": true,
"lineHeight": 1.25
},
{
"id": "uTEFJs8cNS09WFq2pi9P7",
"type": "rectangle",
"x": 528.1586158430439,
"y": -173.43472375183552,
"width": 135.7578125,
"height": 128.73828125,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "a6",
"roundness": {
"type": 3
},
"seed": 223409231,
"version": 44,
"versionNonce": 1066827105,
"isDeleted": false,
"boundElements": [
{
"id": "FfWdx1_yCq6UYfXamJX9N",
"type": "arrow"
}
],
"updated": 1763704050188,
"link": null,
"locked": false
},
{
"id": "2SzqzpJ4C2ymVj8-8vN7H",
"type": "text",
"x": 548.1480270948795,
"y": -211,
"width": 86.43992614746094,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "a7",
"roundness": null,
"seed": 1015608623,
"version": 23,
"versionNonce": 950374849,
"isDeleted": false,
"boundElements": null,
"updated": 1763704047884,
"link": null,
"locked": false,
"text": "Memories",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "Memories",
"autoResize": true,
"lineHeight": 1.25
},
{
"id": "CgW6Yf9v0a9q1tsjhDl7b",
"type": "text",
"x": 568.3099317299038,
"y": -154.69469411681115,
"width": 62.099945068359375,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aA",
"roundness": null,
"seed": 452254927,
"version": 10,
"versionNonce": 972895023,
"isDeleted": false,
"boundElements": null,
"updated": 1763704057762,
"link": null,
"locked": false,
"text": "chunk1",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "chunk1",
"autoResize": true,
"lineHeight": 1.25
},
{
"id": "knvlKpaFZ8lY-73Y-e9W6",
"type": "text",
"x": 569.11328125,
"y": -116.91056665512056,
"width": 67.55995178222656,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aB",
"roundness": null,
"seed": 914644015,
"version": 90,
"versionNonce": 158135631,
"isDeleted": false,
"boundElements": null,
"updated": 1763704057762,
"link": null,
"locked": false,
"text": "chunk2",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "chunk2",
"autoResize": true,
"lineHeight": 1.25
},
{
"id": "Q7URqvTSMpvj08ye-afTT",
"type": "rectangle",
"x": 444.515625,
"y": 36.7890625,
"width": 58.859375,
"height": 29.41796875,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aC",
"roundness": {
"type": 3
},
"seed": 1642537601,
"version": 19,
"versionNonce": 948406575,
"isDeleted": false,
"boundElements": null,
"updated": 1763703870173,
"link": null,
"locked": false
},
{
"id": "JjxBt9cZIZXNTd6CmwyKL",
"type": "rectangle",
"x": 452.203125,
"y": 46.064453125,
"width": 58.859375,
"height": 29.41796875,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aD",
"roundness": {
"type": 3
},
"seed": 1746916641,
"version": 40,
"versionNonce": 1650978255,
"isDeleted": false,
"boundElements": [],
"updated": 1763703871882,
"link": null,
"locked": false
},
{
"id": "XGBCPPFnjriqsL8LvLwyQ",
"type": "rectangle",
"x": 461.56640625,
"y": 56.162109375,
"width": 58.859375,
"height": 29.41796875,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aE",
"roundness": {
"type": 3
},
"seed": 529794575,
"version": 85,
"versionNonce": 2131900641,
"isDeleted": false,
"boundElements": [],
"updated": 1763703874182,
"link": null,
"locked": false
},
{
"id": "FfWdx1_yCq6UYfXamJX9N",
"type": "arrow",
"x": 537.6875,
"y": 48.203125,
"width": 6.615850226297994,
"height": 75.81335873223107,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aF",
"roundness": {
"type": 2
},
"seed": 1982870689,
"version": 90,
"versionNonce": 25307457,
"isDeleted": false,
"boundElements": null,
"updated": 1763704050188,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
6.615850226297994,
-75.81335873223107
]
],
"lastCommittedPoint": null,
"startBinding": null,
"endBinding": {
"elementId": "uTEFJs8cNS09WFq2pi9P7",
"focus": 0.6071885090336794,
"gap": 24.64453125
},
"startArrowhead": null,
"endArrowhead": "arrow",
"elbowed": false
},
{
"id": "jgJgqGMRWcaNX_28wY4CU",
"type": "text",
"x": 570,
"y": 10,
"width": 67.11994934082031,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aG",
"roundness": null,
"seed": 1065220559,
"version": 26,
"versionNonce": 2115991521,
"isDeleted": false,
"boundElements": null,
"updated": 1763703959397,
"link": null,
"locked": false,
"text": "update",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "update",
"autoResize": true,
"lineHeight": 1.25
},
{
"id": "_5pSPPOpp9h1TpFCIc055",
"type": "text",
"x": 292.36328125,
"y": -138.5703125,
"width": 122.87992858886719,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aH",
"roundness": null,
"seed": 51461025,
"version": 26,
"versionNonce": 1647492655,
"isDeleted": false,
"boundElements": null,
"updated": 1763703925147,
"link": null,
"locked": false,
"text": "ADD Memory",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "ADD Memory",
"autoResize": true,
"lineHeight": 1.25
},
{
"id": "YG6MdL14l7lk4ypQNMZ_k",
"type": "text",
"x": 296.71885397566257,
"y": 161.399157096715,
"width": 295.27984619140625,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aJ",
"roundness": null,
"seed": 1183210273,
"version": 122,
"versionNonce": 1702733281,
"isDeleted": false,
"boundElements": [],
"updated": 1763704085083,
"link": null,
"locked": false,
"text": "RETRIEVE Memory (STATIC)",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "RETRIEVE Memory (STATIC)",
"autoResize": true,
"lineHeight": 1.25
},
{
"id": "Foa3VPJYqhj1uAX5mn3n0",
"type": "rectangle",
"x": 324.7616636099071,
"y": 248.63213980937013,
"width": 135.7578125,
"height": 128.73828125,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aL",
"roundness": {
"type": 3
},
"seed": 995116257,
"version": 225,
"versionNonce": 1886900225,
"isDeleted": false,
"boundElements": [],
"updated": 1763704055846,
"link": null,
"locked": false
},
{
"id": "pe3veI_yBFKYtbaJwDKQT",
"type": "text",
"x": 344.7510748617428,
"y": 211.06686356120565,
"width": 86.43992614746094,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aM",
"roundness": null,
"seed": 26673345,
"version": 204,
"versionNonce": 1004546017,
"isDeleted": false,
"boundElements": [],
"updated": 1763704055846,
"link": null,
"locked": false,
"text": "Memories",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "Memories",
"autoResize": true,
"lineHeight": 1.25
},
{
"id": "bOlhO8AaKE86_43viu5UG",
"type": "text",
"x": 365.50408375566445,
"y": 269.24725381983865,
"width": 62.099945068359375,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aN",
"roundness": null,
"seed": 1849784033,
"version": 106,
"versionNonce": 762320737,
"isDeleted": false,
"boundElements": [],
"updated": 1763704060295,
"link": null,
"locked": false,
"text": "chunk1",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "chunk1",
"autoResize": true,
"lineHeight": 1.25
},
{
"id": "V_iDW10PKwMe7vWb5S5HF",
"type": "text",
"x": 366.3074332757606,
"y": 307.03138128152926,
"width": 67.55995178222656,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aO",
"roundness": null,
"seed": 1670509249,
"version": 186,
"versionNonce": 1964540737,
"isDeleted": false,
"boundElements": [],
"updated": 1763704060295,
"link": null,
"locked": false,
"text": "chunk2",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "chunk2",
"autoResize": true,
"lineHeight": 1.25
},
{
"id": "LHKMRdSowgcl2LsKacxTz",
"type": "text",
"x": 484.9493410573871,
"y": 292.45619471187945,
"width": 273.579833984375,
"height": 50,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"index": "aP",
"roundness": null,
"seed": 945666991,
"version": 104,
"versionNonce": 1512137505,
"isDeleted": false,
"boundElements": null,
"updated": 1763704096016,
"link": null,
"locked": false,
"text": "RANKED By DECAY SCORE,\nTOP K",
"fontSize": 20,
"fontFamily": 5,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "RANKED By DECAY SCORE,\nTOP K",
"autoResize": true,
"lineHeight": 1.25
}
],
"appState": {
"gridSize": 20,
"gridStep": 5,
"gridModeEnabled": false,
"viewBackgroundColor": "#ffffff"
},
"files": {}
}

View File

@@ -0,0 +1,76 @@
## Decay Score
The memory decay score is defined as:
\[
\text{decay\_score}
= \alpha \cdot e^{-\lambda \cdot \Delta t \cdot \beta}
+ (1-\alpha)\cdot (1 - e^{-\gamma \cdot c})
\]
where:
+ \(\Delta t\): the time elapsed since the last retrieval, in days, computed from `last_retrieval_at`;
+ \(c\): the retrieval count, stored in the `retrieval_count` field;
+ \(\alpha\): the weight balancing the time-decay term against the retrieval-count term;
+ \(\gamma\): the rate at which the retrieval count takes effect;
+ \(\lambda\): the rate of time decay;
+ \(\beta\): a time-decay modulation factor:
\[
\beta = \frac{1}{1 + a \cdot c}
\]
+ \(a\): the weight of the retrieval count's influence on time decay.
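A minimal sketch of this formula in Python; the default constants mirror the ones used in `MemoryChunk.compute_decay_score` (`alpha = 0.5`, `gamma = 0.1`, `lambda = 0.05`, `a = 0.1`):

```python
import math

def decay_score(delta_t_days: float, retrieval_count: int,
                alpha: float = 0.5, gamma: float = 0.1,
                lambda_: float = 0.05, a: float = 0.1) -> float:
    """Decay score of a memory chunk, per the formula above."""
    beta = 1.0 / (1.0 + a * retrieval_count)  # time-decay modulation factor
    time_term = alpha * math.exp(-lambda_ * delta_t_days * beta)
    count_term = (1.0 - alpha) * (1.0 - math.exp(-gamma * retrieval_count))
    return time_term + count_term
```

A freshly retrieved memory scores higher than a stale one, and frequently retrieved memories both decay more slowly (through \(\beta\)) and earn a larger count term.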
## ADD MEMORY
+ The LLM invokes the `astr_add_memory` tool, passing the memory content and the memory type.
+ Generate `mem_id = uuid4()`.
+ Obtain `owner_id = unified_message_origin` from the context.
Steps:
1. Using the new memory content as the query, retrieve the 20 most similar memories from VecDB.
2. Take the 5 most similar of those:
    + If the similarity exceeds the merge threshold (e.g. `sim >= merge_threshold`):
        + Treat it as the same memory and use the LLM to merge the old content with the new;
        + UPDATE MemoryDB and VecDB under the same `mem_id` instead of inserting a new record.
    + Otherwise:
        + Insert it as a brand-new memory:
            + Write to VecDB, with `mem_id` and `owner_id` in the metadata;
            + Write to the `memory_chunks` table in MemoryDB, initializing:
                + `created_at = now`
                + `last_retrieval_at = now`
                + `retrieval_count = 1`, etc.
3. For the 20 memories returned by VecDB, if the similarity is above a Hebbian threshold (`hebb_threshold`):
    + `retrieval_count += 1`
    + `last_retrieval_at = now`
This step reflects Hebbian learning: old memories activated together with the new one receive one reinforcement.
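The merge-vs-insert decision and the Hebbian reinforcement above can be sketched with an in-memory store; `sim_fn`, `merge_fn`, and the two threshold constants are placeholders standing in for the real VecDB similarity search, the LLM merge step, and the configured `merge_threshold` / `hebb_threshold`:

```python
import uuid

MERGE_THRESHOLD = 0.9  # placeholder for the configured merge_threshold
HEBB_THRESHOLD = 0.7   # placeholder for the configured hebb_threshold

def add_memory(content, owner_id, store, sim_fn, merge_fn):
    """store: mem_id -> chunk dict; sim_fn/merge_fn stand in for VecDB and the LLM."""
    # Step 1: rank existing memories by similarity to the new content (top 20).
    ranked = sorted(store.items(),
                    key=lambda kv: sim_fn(content, kv[1]["fact"]),
                    reverse=True)[:20]
    # Step 2: merge into the best match above the threshold, else insert fresh.
    best = ranked[0] if ranked else None
    if best and sim_fn(content, best[1]["fact"]) >= MERGE_THRESHOLD:
        mem_id = best[0]
        best[1]["fact"] = merge_fn(best[1]["fact"], content)  # UPDATE, not insert
    else:
        mem_id = str(uuid.uuid4())
        store[mem_id] = {"fact": content, "owner_id": owner_id,
                         "retrieval_count": 1, "is_active": True}
    # Step 3: Hebbian reinforcement of co-activated memories.
    for _other_id, chunk in ranked:
        if sim_fn(content, chunk["fact"]) >= HEBB_THRESHOLD:
            chunk["retrieval_count"] += 1
    return mem_id
```

Note that a merged memory is itself among the top-20 candidates, so it also receives the Hebbian `retrieval_count` bump, matching step 3.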
## QUERY MEMORY (STATIC)
+ The LLM invokes the `astr_query_memory` tool, with no arguments.
Steps:
1. Query all active memories of the current user from the `memory_chunks` table in MemoryDB:
    + `SELECT * FROM memory_chunks WHERE owner_id = ? AND is_active = 1`
2. For each memory, compute its `decay_score` from `last_retrieval_at` and `retrieval_count`.
3. Sort by `decay_score` in descending order and return the top `top_k` memory contents to the LLM.
4. For the `top_k` memories returned:
    + `retrieval_count += 1`
    + `last_retrieval_at = now`
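The four steps above can be sketched against a plain dict store; `score_fn` is a placeholder for the decay-score computation, injected so the sketch stays self-contained:

```python
from datetime import datetime, timezone

def query_memory_static(owner_id, store, score_fn, top_k=5):
    """store: mem_id -> chunk dict; score_fn(chunk, now) computes the decay score."""
    now = datetime.now(timezone.utc)
    # Step 1: all active memories of this owner.
    active = [c for c in store.values()
              if c["owner_id"] == owner_id and c.get("is_active", True)]
    # Steps 2-3: rank by decay score, keep the top_k.
    ranked = sorted(active, key=lambda c: score_fn(c, now), reverse=True)[:top_k]
    # Step 4: returned memories are reinforced.
    for chunk in ranked:
        chunk["retrieval_count"] += 1
        chunk["last_retrieval_at"] = now
    return [c["fact"] for c in ranked]
```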
## QUERY MEMORY (DYNAMIC) (not implemented yet)
+ The LLM supplies the query content as a semantic query.
+ Use VecDB to retrieve the `N` memories most similar to that query (`N > top_k`).
+ Load the corresponding records from `memory_chunks` by `mem_id`.
+ For this candidate batch, compute:
    + the semantic similarity (from VecDB);
    + the `decay_score`;
    + a final ranking score (e.g. `w1 * sim + w2 * decay_score`).
+ Return the top `top_k` memory contents ordered by the final ranking score, and update their `retrieval_count` and `last_retrieval_at`.
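Once the candidates carry both a similarity and a decay score, the final ranking reduces to a weighted sort; the weights `w1`/`w2` here are illustrative, since the document leaves them unspecified:

```python
def rank_dynamic(candidates, w1=0.7, w2=0.3, top_k=5):
    """candidates: (fact, sim, decay_score) tuples; w1/w2 are assumed weights."""
    scored = sorted(candidates,
                    key=lambda c: w1 * c[1] + w2 * c[2],
                    reverse=True)
    return [fact for fact, _sim, _decay in scored[:top_k]]
```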

View File

@@ -0,0 +1,63 @@
import uuid
from datetime import datetime, timezone
import numpy as np
from sqlmodel import Field, MetaData, SQLModel
MEMORY_TYPE_IMPORTANCE = {"persona": 1.3, "fact": 1.0, "ephemeral": 0.8}
class BaseMemoryModel(SQLModel, table=False):
metadata = MetaData()
class MemoryChunk(BaseMemoryModel, table=True):
"""A chunk of memory stored in the system."""
__tablename__ = "memory_chunks" # type: ignore
id: int | None = Field(
primary_key=True,
sa_column_kwargs={"autoincrement": True},
default=None,
)
mem_id: str = Field(
max_length=36,
nullable=False,
unique=True,
default_factory=lambda: str(uuid.uuid4()),
index=True,
)
fact: str = Field(nullable=False)
"""The factual content of the memory chunk."""
owner_id: str = Field(max_length=255, nullable=False, index=True)
"""The identifier of the owner (user) of the memory chunk."""
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
"""The timestamp when the memory chunk was created."""
last_retrieval_at: datetime = Field(
default_factory=lambda: datetime.now(timezone.utc)
)
"""The timestamp when the memory chunk was last retrieved."""
retrieval_count: int = Field(default=1, nullable=False)
"""The number of times the memory chunk has been retrieved."""
memory_type: str = Field(max_length=20, nullable=False, default="fact")
"""The type of memory (e.g., 'persona', 'fact', 'ephemeral')."""
is_active: bool = Field(default=True, nullable=False)
"""Whether the memory chunk is active."""
def compute_decay_score(self, current_time: datetime) -> float:
"""Compute the decay score of the memory chunk based on time and retrievals."""
# Constants for the decay formula
alpha = 0.5
gamma = 0.1
lambda_ = 0.05
a = 0.1
# Calculate delta_t in days
delta_t = (current_time - self.last_retrieval_at).total_seconds() / 86400
c = self.retrieval_count
beta = 1 / (1 + a * c)
decay_score = alpha * np.exp(-lambda_ * delta_t * beta) + (1 - alpha) * (
1 - np.exp(-gamma * c)
)
return decay_score * MEMORY_TYPE_IMPORTANCE.get(self.memory_type, 1.0)

View File

@@ -0,0 +1,174 @@
from contextlib import asynccontextmanager
from datetime import datetime, timezone
from pathlib import Path
from sqlalchemy import select, text, update
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine
from sqlmodel import col
from astrbot.core import logger
from .entities import BaseMemoryModel, MemoryChunk
class MemoryDatabase:
def __init__(self, db_path: str = "data/astr_memory/memory.db") -> None:
"""Initialize memory database
Args:
db_path: Database file path, default is data/astr_memory/memory.db
"""
self.db_path = db_path
self.DATABASE_URL = f"sqlite+aiosqlite:///{db_path}"
self.inited = False
# Ensure directory exists
Path(db_path).parent.mkdir(parents=True, exist_ok=True)
# Create async engine
self.engine = create_async_engine(
self.DATABASE_URL,
echo=False,
pool_pre_ping=True,
pool_recycle=3600,
)
# Create session factory
self.async_session = async_sessionmaker(
self.engine,
class_=AsyncSession,
expire_on_commit=False,
)
@asynccontextmanager
async def get_db(self):
"""Get database session
Usage:
async with mem_db.get_db() as session:
# Perform database operations
result = await session.execute(stmt)
"""
async with self.async_session() as session:
yield session
async def initialize(self) -> None:
"""Initialize database, create tables and configure SQLite parameters"""
async with self.engine.begin() as conn:
# Create all memory related tables
await conn.run_sync(BaseMemoryModel.metadata.create_all)
# Configure SQLite performance optimization parameters
await conn.execute(text("PRAGMA journal_mode=WAL"))
await conn.execute(text("PRAGMA synchronous=NORMAL"))
await conn.execute(text("PRAGMA cache_size=20000"))
await conn.execute(text("PRAGMA temp_store=MEMORY"))
await conn.execute(text("PRAGMA mmap_size=134217728"))
await conn.execute(text("PRAGMA optimize"))
await conn.commit()
await self._create_indexes()
self.inited = True
logger.info(f"Memory database initialized: {self.db_path}")
async def _create_indexes(self) -> None:
"""Create indexes for memory_chunks table"""
async with self.get_db() as session:
async with session.begin():
# Create memory chunks table indexes
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_mem_mem_id "
"ON memory_chunks(mem_id)",
),
)
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_mem_owner_id "
"ON memory_chunks(owner_id)",
),
)
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_mem_owner_active "
"ON memory_chunks(owner_id, is_active)",
),
)
await session.commit()
async def close(self) -> None:
"""Close database connection"""
await self.engine.dispose()
logger.info(f"Memory database closed: {self.db_path}")
async def insert_memory(self, memory: MemoryChunk) -> MemoryChunk:
"""Insert a new memory chunk"""
async with self.get_db() as session:
session.add(memory)
await session.commit()
await session.refresh(memory)
return memory
async def get_memory_by_id(self, mem_id: str) -> MemoryChunk | None:
"""Get memory chunk by mem_id"""
async with self.get_db() as session:
stmt = select(MemoryChunk).where(col(MemoryChunk.mem_id) == mem_id)
result = await session.execute(stmt)
return result.scalar_one_or_none()
async def update_memory(self, memory: MemoryChunk) -> MemoryChunk:
"""Update an existing memory chunk"""
async with self.get_db() as session:
session.add(memory)
await session.commit()
await session.refresh(memory)
return memory
async def get_active_memories(self, owner_id: str) -> list[MemoryChunk]:
"""Get all active memories for a user"""
async with self.get_db() as session:
stmt = select(MemoryChunk).where(
col(MemoryChunk.owner_id) == owner_id,
col(MemoryChunk.is_active) == True, # noqa: E712
)
result = await session.execute(stmt)
return list(result.scalars().all())
async def update_retrieval_stats(
self,
mem_ids: list[str],
current_time: datetime | None = None,
) -> None:
"""Update retrieval statistics for multiple memories"""
if not mem_ids:
return
if current_time is None:
current_time = datetime.now(timezone.utc)
async with self.get_db() as session:
async with session.begin():
stmt = (
update(MemoryChunk)
.where(col(MemoryChunk.mem_id).in_(mem_ids))
.values(
retrieval_count=MemoryChunk.retrieval_count + 1,
last_retrieval_at=current_time,
)
)
await session.execute(stmt)
await session.commit()
async def deactivate_memory(self, mem_id: str) -> bool:
"""Deactivate a memory chunk"""
async with self.get_db() as session:
async with session.begin():
stmt = (
update(MemoryChunk)
.where(col(MemoryChunk.mem_id) == mem_id)
.values(is_active=False)
)
result = await session.execute(stmt)
await session.commit()
return result.rowcount > 0 if result.rowcount else False # type: ignore

View File

@@ -0,0 +1,281 @@
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path
from astrbot.core import logger
from astrbot.core.db.vec_db.faiss_impl import FaissVecDB
from astrbot.core.provider.provider import EmbeddingProvider
from astrbot.core.provider.provider import Provider as LLMProvider
from .entities import MemoryChunk
from .mem_db_sqlite import MemoryDatabase
MERGE_THRESHOLD = 0.85
"""Similarity threshold for merging memories"""
HEBB_THRESHOLD = 0.70
"""Similarity threshold for Hebbian learning reinforcement"""
MERGE_SYSTEM_PROMPT = """You are a memory consolidation assistant. Your task is to merge two related memory entries into a single, comprehensive memory.
Input format:
- Old memory: [existing memory content]
- New memory: [new memory content to be integrated]
Your output should be a single, concise memory that combines the essential information from both entries. Preserve specific details, update outdated information, and eliminate redundancy. Output only the merged memory content without any explanations or meta-commentary."""
class MemoryManager:
"""Manager for user long-term memory storage and retrieval"""
def __init__(self, memory_root_dir: str = "data/astr_memory"):
self.memory_root_dir = Path(memory_root_dir)
self.memory_root_dir.mkdir(parents=True, exist_ok=True)
self.mem_db: MemoryDatabase | None = None
self.vec_db: FaissVecDB | None = None
self._initialized = False
async def initialize(
self,
embedding_provider: EmbeddingProvider,
merge_llm_provider: LLMProvider,
):
"""Initialize memory database and vector database"""
# Initialize MemoryDB
db_path = self.memory_root_dir / "memory.db"
self.mem_db = MemoryDatabase(db_path.as_posix())
await self.mem_db.initialize()
self.embedding_provider = embedding_provider
self.merge_llm_provider = merge_llm_provider
# Initialize VecDB
doc_store_path = self.memory_root_dir / "doc.db"
index_store_path = self.memory_root_dir / "index.faiss"
self.vec_db = FaissVecDB(
doc_store_path=doc_store_path.as_posix(),
index_store_path=index_store_path.as_posix(),
embedding_provider=self.embedding_provider,
)
await self.vec_db.initialize()
logger.info("Memory manager initialized")
self._initialized = True
async def terminate(self):
"""Close all database connections"""
if self.vec_db:
await self.vec_db.close()
if self.mem_db:
await self.mem_db.close()
async def add_memory(
self,
fact: str,
owner_id: str,
memory_type: str = "fact",
) -> MemoryChunk:
"""Add a new memory with similarity check and merge logic
Implements the ADD MEMORY workflow from _README.md:
1. Search for similar memories using VecDB
2. If similarity >= merge_threshold, merge with existing memory
3. Otherwise, create new memory
4. Apply Hebbian learning to similar memories (similarity >= hebb_threshold)
Args:
fact: Memory content
owner_id: User identifier
memory_type: Memory type ('persona', 'fact', 'ephemeral')
Returns:
The created or updated MemoryChunk
"""
if not self.vec_db or not self.mem_db:
raise RuntimeError("Memory manager not initialized")
current_time = datetime.now(timezone.utc)
# Step 1: Search for similar memories
similar_results = await self.vec_db.retrieve(
query=fact,
k=20,
fetch_k=50,
metadata_filters={"owner_id": owner_id},
)
# Step 2: Check if we should merge with existing memories (top 3 similar ones)
merge_candidates = [
r for r in similar_results[:3] if r.similarity >= MERGE_THRESHOLD
]
if merge_candidates:
# Get all candidate memories from database
candidate_memories: list[tuple[str, MemoryChunk]] = []
for candidate in merge_candidates:
mem_id = json.loads(candidate.data["metadata"])["mem_id"]
memory = await self.mem_db.get_memory_by_id(mem_id)
if memory:
candidate_memories.append((mem_id, memory))
if candidate_memories:
# Use the most similar memory as the base
base_mem_id, base_memory = candidate_memories[0]
# Collect all facts to merge (existing candidates + new fact)
all_facts = [mem.fact for _, mem in candidate_memories] + [fact]
merged_fact = await self._merge_multiple_memories(all_facts)
# Update the base memory
base_memory.fact = merged_fact
base_memory.last_retrieval_at = current_time
base_memory.retrieval_count += 1
updated_memory = await self.mem_db.update_memory(base_memory)
# Update VecDB for base memory
await self.vec_db.delete(base_mem_id)
await self.vec_db.insert(
content=merged_fact,
metadata={
"mem_id": base_mem_id,
"owner_id": owner_id,
"memory_type": memory_type,
},
id=base_mem_id,
)
# Deactivate and remove other merged memories
for mem_id, _ in candidate_memories[1:]:
await self.mem_db.deactivate_memory(mem_id)
await self.vec_db.delete(mem_id)
logger.info(
f"Merged {len(candidate_memories)} memories into {base_mem_id} for user {owner_id}"
)
return updated_memory
# Step 3: Create new memory
mem_id = str(uuid.uuid4())
new_memory = MemoryChunk(
mem_id=mem_id,
fact=fact,
owner_id=owner_id,
memory_type=memory_type,
created_at=current_time,
last_retrieval_at=current_time,
retrieval_count=1,
is_active=True,
)
# Insert into MemoryDB
created_memory = await self.mem_db.insert_memory(new_memory)
# Insert into VecDB
await self.vec_db.insert(
content=fact,
metadata={
"mem_id": mem_id,
"owner_id": owner_id,
"memory_type": memory_type,
},
id=mem_id,
)
# Step 4: Apply Hebbian learning to similar memories
hebb_mem_ids = [
json.loads(r.data["metadata"])["mem_id"]
for r in similar_results
if r.similarity >= HEBB_THRESHOLD
]
if hebb_mem_ids:
await self.mem_db.update_retrieval_stats(hebb_mem_ids, current_time)
logger.debug(
f"Applied Hebbian learning to {len(hebb_mem_ids)} memories for user {owner_id}",
)
logger.info(f"Created new memory {mem_id} for user {owner_id}")
return created_memory
async def query_memory(
self,
owner_id: str,
top_k: int = 5,
) -> list[MemoryChunk]:
"""Query user's memories using static retrieval with decay score ranking
Implements the QUERY MEMORY (STATIC) workflow from _README.md:
1. Get all active memories for user from MemoryDB
2. Compute decay_score for each memory
3. Sort by decay_score and return top_k
4. Update retrieval statistics for returned memories
Args:
owner_id: User identifier
top_k: Number of memories to return
Returns:
List of top_k MemoryChunk sorted by decay score
"""
if not self.mem_db:
raise RuntimeError("Memory manager not initialized")
current_time = datetime.now(timezone.utc)
# Step 1: Get all active memories for user
all_memories = await self.mem_db.get_active_memories(owner_id)
if not all_memories:
return []
# Step 2-3: Compute decay scores and sort
memories_with_scores = [
(mem, mem.compute_decay_score(current_time)) for mem in all_memories
]
memories_with_scores.sort(key=lambda x: x[1], reverse=True)
# Get top_k memories
top_memories = [mem for mem, _ in memories_with_scores[:top_k]]
# Step 4: Update retrieval statistics
mem_ids = [mem.mem_id for mem in top_memories]
await self.mem_db.update_retrieval_stats(mem_ids, current_time)
logger.debug(f"Retrieved {len(top_memories)} memories for user {owner_id}")
return top_memories
async def _merge_multiple_memories(self, facts: list[str]) -> str:
"""Merge multiple memory facts using LLM in one call
Args:
facts: List of memory facts to merge
Returns:
Merged memory content
"""
if not self.merge_llm_provider:
return " ".join(facts)
if len(facts) == 1:
return facts[0]
try:
# Format all facts as a numbered list
facts_list = "\n".join(f"{i + 1}. {fact}" for i, fact in enumerate(facts))
user_prompt = (
f"Please merge the following {len(facts)} related memory entries "
"into a single, comprehensive memory:"
f"\n{facts_list}\n\nOutput only the merged memory content."
)
response = await self.merge_llm_provider.text_chat(
prompt=user_prompt,
system_prompt=MERGE_SYSTEM_PROMPT,
)
merged_content = response.completion_text.strip()
return merged_content if merged_content else " ".join(facts)
except Exception as e:
logger.warning(f"Failed to merge memories with LLM: {e}, using fallback")
return " ".join(facts)

View File

@@ -0,0 +1,156 @@
from pydantic import Field
from pydantic.dataclasses import dataclass
from astrbot.core.agent.tool import FunctionTool, ToolExecResult
from astrbot.core.astr_agent_context import AstrAgentContext, ContextWrapper
@dataclass
class AddMemory(FunctionTool[AstrAgentContext]):
"""Tool for adding memories to user's long-term memory storage"""
name: str = "astr_add_memory"
description: str = (
"Add a new memory to the user's long-term memory storage. "
"Use this tool only when the user explicitly asks you to remember something, "
"or when they share stable preferences, identity, or long-term goals that will be useful in future interactions."
)
parameters: dict = Field(
default_factory=lambda: {
"type": "object",
"properties": {
"fact": {
"type": "string",
"description": (
"The concrete memory content to store, such as a user preference, "
"identity detail, long-term goal, or stable profile fact."
),
},
"memory_type": {
"type": "string",
"enum": ["persona", "fact", "ephemeral"],
"description": (
"The relative importance of this memory. "
"Use 'persona' for core identity or highly impactful information, "
"'fact' for normal long-term preferences, "
"and 'ephemeral' for minor or tentative facts."
),
},
},
"required": ["fact", "memory_type"],
}
)
async def call(
self, context: ContextWrapper[AstrAgentContext], **kwargs
) -> ToolExecResult:
"""Add a memory to long-term storage
Args:
context: Agent context
**kwargs: Must contain 'fact' and 'memory_type'
Returns:
ToolExecResult with success message
"""
mm = context.context.context.memory_manager
fact = kwargs.get("fact")
memory_type = kwargs.get("memory_type", "fact")
if not fact:
return "Missing required parameter: fact"
try:
# Get owner_id from context
owner_id = context.context.event.unified_msg_origin
# Add memory using memory manager
memory = await mm.add_memory(
fact=fact,
owner_id=owner_id,
memory_type=memory_type,
)
return f"Memory added successfully (ID: {memory.mem_id})"
except Exception as e:
return f"Failed to add memory: {str(e)}"
@dataclass
class QueryMemory(FunctionTool[AstrAgentContext]):
"""Tool for querying user's long-term memories"""
name: str = "astr_query_memory"
description: str = (
"Query the user's long-term memory storage and return the most relevant memories. "
"Use this tool when you need user-specific context, preferences, or past facts "
"that are not explicitly present in the current conversation."
)
parameters: dict = Field(
default_factory=lambda: {
"type": "object",
"properties": {
"top_k": {
"type": "integer",
"description": (
"Maximum number of memories to retrieve after retention-based ranking. "
"Typically between 3 and 10."
),
"default": 5,
"minimum": 1,
"maximum": 20,
},
},
"required": [],
}
)
async def call(
self, context: ContextWrapper[AstrAgentContext], **kwargs
) -> ToolExecResult:
"""Query memories from long-term storage
Args:
context: Agent context
**kwargs: Optional 'top_k' parameter
Returns:
ToolExecResult with formatted memory list
"""
mm = context.context.context.memory_manager
top_k = kwargs.get("top_k", 5)
try:
# Get owner_id from context
owner_id = context.context.event.unified_msg_origin
# Query memories using memory manager
memories = await mm.query_memory(
owner_id=owner_id,
top_k=top_k,
)
if not memories:
return "No memories found for this user."
# Format memories for output
formatted_memories = []
for i, mem in enumerate(memories, 1):
formatted_memories.append(
f"{i}. [{mem.memory_type.upper()}] {mem.fact} "
f"(retrieved {mem.retrieval_count} times, "
f"last: {mem.last_retrieval_at.strftime('%Y-%m-%d')})"
)
result_text = "Retrieved memories:\n" + "\n".join(formatted_memories)
return result_text
except Exception as e:
return f"Failed to query memories: {str(e)}"
ADD_MEMORY_TOOL = AddMemory()
QUERY_MEMORY_TOOL = QueryMemory()

View File

@@ -1,48 +0,0 @@
from collections.abc import AsyncGenerator
from astrbot.core import logger
from astrbot.core.platform.astr_message_event import AstrMessageEvent
from astrbot.core.star.session_llm_manager import SessionServiceManager
from ...context import PipelineContext
from ..stage import Stage
from .agent_sub_stages.internal import InternalAgentSubStage
from .agent_sub_stages.third_party import ThirdPartyAgentSubStage
class AgentRequestSubStage(Stage):
async def initialize(self, ctx: PipelineContext) -> None:
self.ctx = ctx
self.config = ctx.astrbot_config
self.bot_wake_prefixs: list[str] = self.config["wake_prefix"]
self.prov_wake_prefix: str = self.config["provider_settings"]["wake_prefix"]
for bwp in self.bot_wake_prefixs:
if self.prov_wake_prefix.startswith(bwp):
logger.info(
f"The extra LLM wake prefix {self.prov_wake_prefix} starts with the bot wake prefix {bwp}; the overlap was removed automatically.",
)
self.prov_wake_prefix = self.prov_wake_prefix[len(bwp) :]
agent_runner_type = self.config["provider_settings"]["agent_runner_type"]
if agent_runner_type == "local":
self.agent_sub_stage = InternalAgentSubStage()
else:
self.agent_sub_stage = ThirdPartyAgentSubStage()
await self.agent_sub_stage.initialize(ctx)
async def process(self, event: AstrMessageEvent) -> AsyncGenerator[None, None]:
if not self.ctx.astrbot_config["provider_settings"]["enable"]:
logger.debug(
"This pipeline does not enable AI capability, skip processing."
)
return
if not SessionServiceManager.should_process_llm_request(event):
logger.debug(
f"The session {event.unified_msg_origin} has disabled AI capability, skipping processing."
)
return
async for resp in self.agent_sub_stage.process(event, self.prov_wake_prefix):
yield resp

View File

@@ -1,202 +0,0 @@
import asyncio
from collections.abc import AsyncGenerator
from typing import TYPE_CHECKING
from astrbot.core import logger
from astrbot.core.agent.runners.coze.coze_agent_runner import CozeAgentRunner
from astrbot.core.agent.runners.dashscope.dashscope_agent_runner import (
DashscopeAgentRunner,
)
from astrbot.core.agent.runners.dify.dify_agent_runner import DifyAgentRunner
from astrbot.core.message.components import Image
from astrbot.core.message.message_event_result import (
MessageChain,
MessageEventResult,
ResultContentType,
)
if TYPE_CHECKING:
from astrbot.core.agent.runners.base import BaseAgentRunner
from astrbot.core.platform.astr_message_event import AstrMessageEvent
from astrbot.core.provider.entities import (
ProviderRequest,
)
from astrbot.core.star.star_handler import EventType
from astrbot.core.utils.metrics import Metric
from .....astr_agent_context import AgentContextWrapper, AstrAgentContext
from .....astr_agent_hooks import MAIN_AGENT_HOOKS
from ....context import PipelineContext, call_event_hook
from ...stage import Stage
AGENT_RUNNER_TYPE_KEY = {
"dify": "dify_agent_runner_provider_id",
"coze": "coze_agent_runner_provider_id",
"dashscope": "dashscope_agent_runner_provider_id",
}
async def run_third_party_agent(
runner: "BaseAgentRunner",
stream_to_general: bool = False,
) -> AsyncGenerator[MessageChain | None, None]:
"""
Run a third-party agent runner and convert its response format.
Similar to run_agent, but dedicated to third-party agent runners.
"""
try:
async for resp in runner.step_until_done(max_step=30): # type: ignore[misc]
if resp.type == "streaming_delta":
if stream_to_general:
continue
yield resp.data["chain"]
elif resp.type == "llm_result":
if stream_to_general:
yield resp.data["chain"]
except Exception as e:
logger.error(f"Third party agent runner error: {e}")
err_msg = (
f"\nAstrBot request failed.\nError type: {type(e).__name__}\n"
f"Error message: {e!s}\n\nPlease check the console for details.\n"
)
yield MessageChain().message(err_msg)
class ThirdPartyAgentSubStage(Stage):
async def initialize(self, ctx: PipelineContext) -> None:
self.ctx = ctx
self.conf = ctx.astrbot_config
self.runner_type = self.conf["provider_settings"]["agent_runner_type"]
self.prov_id = self.conf["provider_settings"].get(
AGENT_RUNNER_TYPE_KEY.get(self.runner_type, ""),
"",
)
settings = ctx.astrbot_config["provider_settings"]
self.streaming_response: bool = settings["streaming_response"]
self.unsupported_streaming_strategy: str = settings[
"unsupported_streaming_strategy"
]
async def process(
self, event: AstrMessageEvent, provider_wake_prefix: str
) -> AsyncGenerator[None, None]:
req: ProviderRequest | None = None
if provider_wake_prefix and not event.message_str.startswith(
provider_wake_prefix
):
return
self.prov_cfg: dict = next(
(p for p in self.conf["provider"] if p["id"] == self.prov_id),
{},
)
if not self.prov_id or not self.prov_cfg:
logger.error(
"Third Party Agent Runner provider ID is not configured properly."
)
return
# make provider request
req = ProviderRequest()
req.session_id = event.unified_msg_origin
req.prompt = event.message_str[len(provider_wake_prefix) :]
for comp in event.message_obj.message:
if isinstance(comp, Image):
image_path = await comp.convert_to_base64()
req.image_urls.append(image_path)
if not req.prompt and not req.image_urls:
return
# call event hook
if await call_event_hook(event, EventType.OnLLMRequestEvent, req):
return
if self.runner_type == "dify":
runner = DifyAgentRunner[AstrAgentContext]()
elif self.runner_type == "coze":
runner = CozeAgentRunner[AstrAgentContext]()
elif self.runner_type == "dashscope":
runner = DashscopeAgentRunner[AstrAgentContext]()
else:
raise ValueError(
f"Unsupported third party agent runner type: {self.runner_type}",
)
astr_agent_ctx = AstrAgentContext(
context=self.ctx.plugin_manager.context,
event=event,
)
streaming_response = self.streaming_response
if (enable_streaming := event.get_extra("enable_streaming")) is not None:
streaming_response = bool(enable_streaming)
stream_to_general = (
self.unsupported_streaming_strategy == "turn_off"
and not event.platform_meta.support_streaming_message
)
await runner.reset(
request=req,
run_context=AgentContextWrapper(
context=astr_agent_ctx,
tool_call_timeout=60,
),
agent_hooks=MAIN_AGENT_HOOKS,
provider_config=self.prov_cfg,
streaming=streaming_response,
)
if streaming_response and not stream_to_general:
# Streaming response
event.set_result(
MessageEventResult()
.set_result_content_type(ResultContentType.STREAMING_RESULT)
.set_async_stream(
run_third_party_agent(
runner,
stream_to_general=False,
),
),
)
yield
if runner.done():
final_resp = runner.get_final_llm_resp()
if final_resp and final_resp.result_chain:
event.set_result(
MessageEventResult(
chain=final_resp.result_chain.chain or [],
result_content_type=ResultContentType.STREAMING_FINISH,
),
)
else:
# Non-streaming response, or streaming downgraded to a plain response
async for _ in run_third_party_agent(
runner,
stream_to_general=stream_to_general,
):
yield
final_resp = runner.get_final_llm_resp()
if not final_resp or not final_resp.result_chain:
logger.warning("The agent runner returned no final result.")
return
event.set_result(
MessageEventResult(
chain=final_resp.result_chain.chain or [],
result_content_type=ResultContentType.LLM_RESULT,
),
)
yield
asyncio.create_task(
Metric.upload(
llm_tick=1,
model_name=self.runner_type,
provider_type=self.runner_type,
),
)

View File

@@ -21,24 +21,28 @@ from astrbot.core.provider.entities import (
LLMResponse,
ProviderRequest,
)
from astrbot.core.star.session_llm_manager import SessionServiceManager
from astrbot.core.star.star_handler import EventType, star_map
from astrbot.core.utils.metrics import Metric
from astrbot.core.utils.session_lock import session_lock_manager
from .....astr_agent_context import AgentContextWrapper
from .....astr_agent_hooks import MAIN_AGENT_HOOKS
from .....astr_agent_run_util import AgentRunner, run_agent
from .....astr_agent_tool_exec import FunctionToolExecutor
from ....context import PipelineContext, call_event_hook
from ...stage import Stage
from ...utils import KNOWLEDGE_BASE_QUERY_TOOL, retrieve_knowledge_base
from ....astr_agent_context import AgentContextWrapper
from ....astr_agent_hooks import MAIN_AGENT_HOOKS
from ....astr_agent_run_util import AgentRunner, run_agent
from ....astr_agent_tool_exec import FunctionToolExecutor
from ....memory.tools import ADD_MEMORY_TOOL, QUERY_MEMORY_TOOL
from ...context import PipelineContext, call_event_hook
from ..stage import Stage
from ..utils import KNOWLEDGE_BASE_QUERY_TOOL, retrieve_knowledge_base
class InternalAgentSubStage(Stage):
class LLMRequestSubStage(Stage):
async def initialize(self, ctx: PipelineContext) -> None:
self.ctx = ctx
conf = ctx.astrbot_config
settings = conf["provider_settings"]
self.bot_wake_prefixs: list[str] = conf["wake_prefix"] # list
self.provider_wake_prefix: str = settings["wake_prefix"] # str
self.max_context_length = settings["max_context_length"] # int
self.dequeue_context_length: int = min(
max(1, settings["dequeue_context_length"]),
@@ -56,6 +60,13 @@ class InternalAgentSubStage(Stage):
self.show_reasoning = settings.get("display_reasoning_text", False)
self.kb_agentic_mode: bool = conf.get("kb_agentic_mode", False)
for bwp in self.bot_wake_prefixs:
if self.provider_wake_prefix.startswith(bwp):
logger.info(
f"The extra LLM wake prefix {self.provider_wake_prefix} starts with the bot wake prefix {bwp}; the overlap was removed automatically.",
)
self.provider_wake_prefix = self.provider_wake_prefix[len(bwp) :]
self.conv_manager = ctx.plugin_manager.context.conversation_manager
def _select_provider(self, event: AstrMessageEvent):
@@ -114,6 +125,15 @@ class InternalAgentSubStage(Stage):
req.func_tool = ToolSet()
req.func_tool.add_tool(KNOWLEDGE_BASE_QUERY_TOOL)
async def _apply_memory(self, req: ProviderRequest):
mm = self.ctx.plugin_manager.context.memory_manager
if not mm or not mm._initialized:
return
if req.func_tool is None:
req.func_tool = ToolSet()
req.func_tool.add_tool(ADD_MEMORY_TOOL)
req.func_tool.add_tool(QUERY_MEMORY_TOOL)
def _truncate_contexts(
self,
contexts: list[dict],
@@ -294,10 +314,21 @@ class InternalAgentSubStage(Stage):
return fixed_messages
async def process(
self, event: AstrMessageEvent, provider_wake_prefix: str
) -> AsyncGenerator[None, None]:
self,
event: AstrMessageEvent,
_nested: bool = False,
) -> None | AsyncGenerator[None, None]:
req: ProviderRequest | None = None
if not self.ctx.astrbot_config["provider_settings"]["enable"]:
logger.debug("LLM capability is disabled; skipping processing.")
return
# Check the session-level LLM enable/disable state
if not SessionServiceManager.should_process_llm_request(event):
logger.debug(f"Session {event.unified_msg_origin} has disabled LLM; skipping processing.")
return
provider = self._select_provider(event)
if provider is None:
return
@@ -327,12 +358,12 @@ class InternalAgentSubStage(Stage):
req.image_urls = []
if sel_model := event.get_extra("selected_model"):
req.model = sel_model
if provider_wake_prefix and not event.message_str.startswith(
provider_wake_prefix
if self.provider_wake_prefix and not event.message_str.startswith(
self.provider_wake_prefix
):
return
req.prompt = event.message_str[len(provider_wake_prefix) :]
req.prompt = event.message_str[len(self.provider_wake_prefix) :]
# func_tool selection has been moved into the packages/astrbot plugin.
# req.func_tool = self.ctx.plugin_manager.context.get_llm_tool_manager()
for comp in event.message_obj.message:
@@ -356,6 +387,9 @@ class InternalAgentSubStage(Stage):
# apply knowledge base feature
await self._apply_kb(event, req)
# apply memory feature
await self._apply_memory(req)
# fix contexts json str
if isinstance(req.contexts, str):
req.contexts = json.loads(req.contexts)

View File

@@ -24,7 +24,7 @@ class StarRequestSubStage(Stage):
async def process(
self,
event: AstrMessageEvent,
) -> AsyncGenerator[None, None]:
) -> None | AsyncGenerator[None, None]:
activated_handlers: list[StarHandlerMetadata] = event.get_extra(
"activated_handlers",
)

View File

@@ -7,7 +7,7 @@ from astrbot.core.star.star_handler import StarHandlerMetadata
from ..context import PipelineContext
from ..stage import Stage, register_stage
from .method.agent_request import AgentRequestSubStage
from .method.llm_request import LLMRequestSubStage
from .method.star_request import StarRequestSubStage
@@ -17,12 +17,9 @@ class ProcessStage(Stage):
self.ctx = ctx
self.config = ctx.astrbot_config
self.plugin_manager = ctx.plugin_manager
self.llm_request_sub_stage = LLMRequestSubStage()
await self.llm_request_sub_stage.initialize(ctx)
# initialize agent sub stage
self.agent_sub_stage = AgentRequestSubStage()
await self.agent_sub_stage.initialize(ctx)
# initialize star request sub stage
self.star_request_sub_stage = StarRequestSubStage()
await self.star_request_sub_stage.initialize(ctx)
@@ -42,7 +39,7 @@ class ProcessStage(Stage):
# LLM request issued by a handler
event.set_extra("provider_request", resp)
_t = False
async for _ in self.agent_sub_stage.process(event):
async for _ in self.llm_request_sub_stage.process(event):
_t = True
yield
if not _t:
@@ -70,5 +67,5 @@ class ProcessStage(Stage):
logger.info("No available LLM provider found; please configure a provider first.")
return
async for _ in self.agent_sub_stage.process(event):
async for _ in self.llm_request_sub_stage.process(event):
yield

View File

@@ -227,8 +227,6 @@ class ProviderManager:
async def load_provider(self, provider_config: dict):
if not provider_config["enable"]:
return
if provider_config.get("provider_type", "") == "agent_runner":
return
logger.info(
f"Loading provider {provider_config['type']}({provider_config['id']}) ...",
@@ -249,6 +247,14 @@ class ProviderManager:
from .sources.anthropic_source import (
ProviderAnthropic as ProviderAnthropic,
)
case "dify":
from .sources.dify_source import ProviderDify as ProviderDify
case "coze":
from .sources.coze_source import ProviderCoze as ProviderCoze
case "dashscope":
from .sources.dashscope_source import (
ProviderDashscope as ProviderDashscope,
)
case "googlegenai_chat_completion":
from .sources.gemini_source import (
ProviderGoogleGenAI as ProviderGoogleGenAI,
@@ -325,10 +331,6 @@ class ProviderManager:
from .sources.xinference_rerank_source import (
XinferenceRerankProvider as XinferenceRerankProvider,
)
case "bailian_rerank":
from .sources.bailian_rerank_source import (
BailianRerankProvider as BailianRerankProvider,
)
except (ImportError, ModuleNotFoundError) as e:
logger.critical(
f"Failed to load the {provider_config['type']}({provider_config['id']}) provider adapter: {e}. A dependency may be missing.",

View File

@@ -1,236 +0,0 @@
import os
import aiohttp
from astrbot import logger
from ..entities import ProviderType, RerankResult
from ..provider import RerankProvider
from ..register import register_provider_adapter
class BailianRerankError(Exception):
"""百炼重排序服务异常基类"""
pass
class BailianAPIError(BailianRerankError):
"""百炼API返回错误"""
pass
class BailianNetworkError(BailianRerankError):
"""百炼网络请求错误"""
pass
@register_provider_adapter(
"bailian_rerank", "阿里云百炼文本排序适配器", provider_type=ProviderType.RERANK
)
class BailianRerankProvider(RerankProvider):
"""阿里云百炼文本重排序适配器."""
def __init__(self, provider_config: dict, provider_settings: dict) -> None:
super().__init__(provider_config, provider_settings)
self.provider_config = provider_config
self.provider_settings = provider_settings
# API配置
self.api_key = provider_config.get("rerank_api_key") or os.getenv(
"DASHSCOPE_API_KEY", ""
)
if not self.api_key:
raise ValueError("阿里云百炼 API Key 不能为空。")
self.model = provider_config.get("rerank_model", "qwen3-rerank")
self.timeout = provider_config.get("timeout", 30)
self.return_documents = provider_config.get("return_documents", False)
self.instruct = provider_config.get("instruct", "")
self.base_url = provider_config.get(
"rerank_api_base",
"https://dashscope.aliyuncs.com/api/v1/services/rerank/text-rerank/text-rerank",
)
# 设置HTTP客户端
headers = {
"Authorization": f"Bearer {self.api_key}",
"Content-Type": "application/json",
}
self.client = aiohttp.ClientSession(
headers=headers, timeout=aiohttp.ClientTimeout(total=self.timeout)
)
# 设置模型名称
self.set_model(self.model)
logger.info(f"AstrBot 百炼 Rerank 初始化完成。模型: {self.model}")
def _build_payload(
self, query: str, documents: list[str], top_n: int | None
) -> dict:
"""构建请求载荷
Args:
query: 查询文本
documents: 文档列表
top_n: 返回前 N 个结果,如果为 None 则返回所有结果
Returns:
请求载荷字典
"""
base = {"model": self.model, "input": {"query": query, "documents": documents}}
params = {
k: v
for k, v in [
("top_n", top_n if top_n is not None and top_n > 0 else None),
("return_documents", True if self.return_documents else None),
(
"instruct",
self.instruct
if self.instruct and self.model == "qwen3-rerank"
else None,
),
]
if v is not None
}
if params:
base["parameters"] = params
return base
def _parse_results(self, data: dict) -> list[RerankResult]:
"""解析API响应结果
Args:
data: API响应数据
Returns:
重排序结果列表
Raises:
BailianAPIError: API返回错误
KeyError: 结果缺少必要字段
"""
# 检查响应状态
if data.get("code", "200") != "200":
raise BailianAPIError(
f"百炼 API 错误: {data.get('code')} {data.get('message', '')}"
)
results = data.get("output", {}).get("results", [])
if not results:
logger.warning(f"百炼 Rerank 返回空结果: {data}")
return []
# 转换为RerankResult对象使用.get()避免KeyError
rerank_results = []
for idx, result in enumerate(results):
try:
index = result.get("index", idx)
relevance_score = result.get("relevance_score", 0.0)
if relevance_score is None:
logger.warning(f"结果 {idx} 缺少 relevance_score,使用默认值 0.0")
relevance_score = 0.0
rerank_result = RerankResult(
index=index, relevance_score=relevance_score
)
rerank_results.append(rerank_result)
except Exception as e:
logger.warning(f"解析结果 {idx} 时出错: {e}, result={result}")
continue
return rerank_results
def _log_usage(self, data: dict) -> None:
"""记录使用量信息
Args:
data: API响应数据
"""
tokens = data.get("usage", {}).get("total_tokens", 0)
if tokens > 0:
logger.debug(f"百炼 Rerank 消耗 Token: {tokens}")
async def rerank(
self,
query: str,
documents: list[str],
top_n: int | None = None,
) -> list[RerankResult]:
"""
对文档进行重排序
Args:
query: 查询文本
documents: 待排序的文档列表
top_n: 返回前N个结果如果为None则使用配置中的默认值
Returns:
重排序结果列表
"""
if not documents:
logger.warning("文档列表为空,返回空结果")
return []
if not query.strip():
logger.warning("查询文本为空,返回空结果")
return []
# 检查限制
if len(documents) > 500:
logger.warning(
f"文档数量({len(documents)})超过限制(500),将截断前 500 个文档"
)
documents = documents[:500]
try:
# 构建请求载荷;如果 top_n 为 None 则返回所有重排序结果
payload = self._build_payload(query, documents, top_n)
logger.debug(
f"百炼 Rerank 请求: query='{query[:50]}...', 文档数量={len(documents)}"
)
# 发送请求
async with self.client.post(self.base_url, json=payload) as response:
response.raise_for_status()
response_data = await response.json()
# 解析结果并记录使用量
results = self._parse_results(response_data)
self._log_usage(response_data)
logger.debug(f"百炼 Rerank 成功返回 {len(results)} 个结果")
return results
except aiohttp.ClientError as e:
error_msg = f"网络请求失败: {e}"
logger.error(f"百炼 Rerank 网络请求失败: {e}")
raise BailianNetworkError(error_msg) from e
except BailianRerankError:
raise
except Exception as e:
error_msg = f"重排序失败: {e}"
logger.error(f"百炼 Rerank 处理失败: {e}")
raise BailianRerankError(error_msg) from e
async def terminate(self) -> None:
"""关闭HTTP客户端会话."""
if self.client:
logger.info("关闭 百炼 Rerank 客户端会话")
try:
await self.client.close()
except Exception as e:
logger.error(f"关闭 百炼 Rerank 客户端时出错: {e}")
finally:
self.client = None


@@ -0,0 +1,650 @@
import base64
import hashlib
import json
import os
from collections.abc import AsyncGenerator
import astrbot.core.message.components as Comp
from astrbot import logger
from astrbot.api.provider import Provider
from astrbot.core.message.message_event_result import MessageChain
from astrbot.core.provider.entities import LLMResponse
from ..register import register_provider_adapter
from .coze_api_client import CozeAPIClient
@register_provider_adapter("coze", "Coze (扣子) 智能体适配器")
class ProviderCoze(Provider):
def __init__(
self,
provider_config,
provider_settings,
) -> None:
super().__init__(
provider_config,
provider_settings,
)
self.api_key = provider_config.get("coze_api_key", "")
if not self.api_key:
raise Exception("Coze API Key 不能为空。")
self.bot_id = provider_config.get("bot_id", "")
if not self.bot_id:
raise Exception("Coze Bot ID 不能为空。")
self.api_base: str = provider_config.get("coze_api_base", "https://api.coze.cn")
if not isinstance(self.api_base, str) or not self.api_base.startswith(
("http://", "https://"),
):
raise Exception(
"Coze API Base URL 格式不正确,必须以 http:// 或 https:// 开头。",
)
self.timeout = provider_config.get("timeout", 120)
if isinstance(self.timeout, str):
self.timeout = int(self.timeout)
self.auto_save_history = provider_config.get("auto_save_history", True)
self.conversation_ids: dict[str, str] = {}
self.file_id_cache: dict[str, dict[str, str]] = {}
# 创建 API 客户端
self.api_client = CozeAPIClient(api_key=self.api_key, api_base=self.api_base)
def _generate_cache_key(self, data: str, is_base64: bool = False) -> str:
"""生成统一的缓存键
Args:
data: 图片数据或路径
is_base64: 是否是 base64 数据
Returns:
str: 缓存键
"""
try:
if is_base64 and data.startswith("data:image/"):
try:
header, encoded = data.split(",", 1)
image_bytes = base64.b64decode(encoded)
cache_key = hashlib.md5(image_bytes).hexdigest()
return cache_key
except Exception:
# "encoded" is unbound when the split above fails; hash the
# raw string instead of referencing a possibly-unbound local
cache_key = hashlib.md5(data.encode("utf-8")).hexdigest()
return cache_key
elif data.startswith(("http://", "https://")):
# URL图片使用URL作为缓存键
cache_key = hashlib.md5(data.encode("utf-8")).hexdigest()
return cache_key
else:
clean_path = (
data.split("_")[0]
if "_" in data and len(data.split("_")) >= 3
else data
)
if os.path.exists(clean_path):
with open(clean_path, "rb") as f:
file_content = f.read()
cache_key = hashlib.md5(file_content).hexdigest()
return cache_key
cache_key = hashlib.md5(clean_path.encode("utf-8")).hexdigest()
return cache_key
except Exception as e:
cache_key = hashlib.md5(data.encode("utf-8")).hexdigest()
logger.debug(f"[Coze] 异常文件缓存键: {cache_key}, error={e}")
return cache_key
async def _upload_file(
self,
file_data: bytes,
session_id: str | None = None,
cache_key: str | None = None,
) -> str:
"""上传文件到 Coze 并返回 file_id"""
# 使用 API 客户端上传文件
file_id = await self.api_client.upload_file(file_data)
# 缓存 file_id
if session_id and cache_key:
if session_id not in self.file_id_cache:
self.file_id_cache[session_id] = {}
self.file_id_cache[session_id][cache_key] = file_id
logger.debug(f"[Coze] 图片上传成功并缓存file_id: {file_id}")
return file_id
async def _download_and_upload_image(
self,
image_url: str,
session_id: str | None = None,
) -> str:
"""下载图片并上传到 Coze返回 file_id"""
# 计算哈希实现缓存
cache_key = self._generate_cache_key(image_url) if session_id else None
if session_id and cache_key:
if session_id not in self.file_id_cache:
self.file_id_cache[session_id] = {}
if cache_key in self.file_id_cache[session_id]:
file_id = self.file_id_cache[session_id][cache_key]
return file_id
try:
image_data = await self.api_client.download_image(image_url)
file_id = await self._upload_file(image_data, session_id, cache_key)
if session_id and cache_key:
self.file_id_cache[session_id][cache_key] = file_id
return file_id
except Exception as e:
logger.error(f"处理图片失败 {image_url}: {e!s}")
raise Exception(f"处理图片失败: {e!s}") from e
async def _process_context_images(
self,
content: str | list,
session_id: str,
) -> str:
"""处理上下文中的图片内容,将 base64 图片上传并替换为 file_id"""
try:
if isinstance(content, str):
return content
processed_content = []
if session_id not in self.file_id_cache:
self.file_id_cache[session_id] = {}
for item in content:
if not isinstance(item, dict):
processed_content.append(item)
continue
if item.get("type") == "text":
processed_content.append(item)
elif item.get("type") == "image_url":
# 处理图片逻辑
if "file_id" in item:
# 已经有 file_id
logger.debug(f"[Coze] 图片已有file_id: {item['file_id']}")
processed_content.append(item)
else:
# 获取图片数据
image_data = ""
if "image_url" in item and isinstance(item["image_url"], dict):
image_data = item["image_url"].get("url", "")
elif "data" in item:
image_data = item.get("data", "")
elif "url" in item:
image_data = item.get("url", "")
if not image_data:
continue
# 计算哈希用于缓存
cache_key = self._generate_cache_key(
image_data,
is_base64=image_data.startswith("data:image/"),
)
# 检查缓存
if cache_key in self.file_id_cache[session_id]:
file_id = self.file_id_cache[session_id][cache_key]
processed_content.append(
{"type": "image", "file_id": file_id},
)
else:
# 上传图片并缓存
if image_data.startswith("data:image/"):
# base64 处理
_, encoded = image_data.split(",", 1)
image_bytes = base64.b64decode(encoded)
file_id = await self._upload_file(
image_bytes,
session_id,
cache_key,
)
elif image_data.startswith(("http://", "https://")):
# URL 图片
file_id = await self._download_and_upload_image(
image_data,
session_id,
)
# 为URL图片也添加缓存
self.file_id_cache[session_id][cache_key] = file_id
elif os.path.exists(image_data):
# 本地文件
with open(image_data, "rb") as f:
image_bytes = f.read()
file_id = await self._upload_file(
image_bytes,
session_id,
cache_key,
)
else:
logger.warning(
f"无法处理的图片格式: {image_data[:50]}...",
)
continue
processed_content.append(
{"type": "image", "file_id": file_id},
)
result = json.dumps(processed_content, ensure_ascii=False)
return result
except Exception as e:
logger.error(f"处理上下文图片失败: {e!s}")
if isinstance(content, str):
return content
return json.dumps(content, ensure_ascii=False)
async def text_chat(
self,
prompt: str,
session_id=None,
image_urls=None,
func_tool=None,
contexts=None,
system_prompt=None,
tool_calls_result=None,
model=None,
**kwargs,
) -> LLMResponse:
"""文本对话, 内部使用流式接口实现非流式
Args:
prompt (str): 用户提示词
session_id (str): 会话ID
image_urls (List[str]): 图片URL列表
func_tool (FuncCall): 函数调用工具(不支持)
contexts (List): 上下文列表
system_prompt (str): 系统提示语
tool_calls_result (ToolCallsResult | List[ToolCallsResult]): 工具调用结果(不支持)
model (str): 模型名称(不支持)
Returns:
LLMResponse: LLM响应对象
"""
accumulated_content = ""
final_response = None
async for llm_response in self.text_chat_stream(
prompt=prompt,
session_id=session_id,
image_urls=image_urls,
func_tool=func_tool,
contexts=contexts,
system_prompt=system_prompt,
tool_calls_result=tool_calls_result,
model=model,
**kwargs,
):
if llm_response.is_chunk:
if llm_response.completion_text:
accumulated_content += llm_response.completion_text
else:
final_response = llm_response
if final_response:
return final_response
if accumulated_content:
chain = MessageChain(chain=[Comp.Plain(accumulated_content)])
return LLMResponse(role="assistant", result_chain=chain)
return LLMResponse(role="assistant", completion_text="")
async def text_chat_stream(
self,
prompt: str,
session_id=None,
image_urls=None,
func_tool=None,
contexts=None,
system_prompt=None,
tool_calls_result=None,
model=None,
**kwargs,
) -> AsyncGenerator[LLMResponse, None]:
"""流式对话接口"""
# 用户ID参数(参考文档, 可以自定义)
user_id = session_id or kwargs.get("user", "default_user")
# 获取或创建会话ID
conversation_id = self.conversation_ids.get(user_id)
# 构建消息
additional_messages = []
if system_prompt:
if not self.auto_save_history or not conversation_id:
additional_messages.append(
{
"role": "system",
"content": system_prompt,
"content_type": "text",
},
)
contexts = self._ensure_message_to_dicts(contexts)
if not self.auto_save_history and contexts:
# 如果关闭了自动保存历史,传入上下文
for ctx in contexts:
if isinstance(ctx, dict) and "role" in ctx and "content" in ctx:
content = ctx["content"]
content_type = ctx.get("content_type", "text")
# 处理可能包含图片的上下文
if (
content_type == "object_string"
or (isinstance(content, str) and content.startswith("["))
or (
isinstance(content, list)
and any(
isinstance(item, dict)
and item.get("type") == "image_url"
for item in content
)
)
):
processed_content = await self._process_context_images(
content,
user_id,
)
additional_messages.append(
{
"role": ctx["role"],
"content": processed_content,
"content_type": "object_string",
},
)
else:
# 纯文本
additional_messages.append(
{
"role": ctx["role"],
"content": (
content
if isinstance(content, str)
else json.dumps(content, ensure_ascii=False)
),
"content_type": "text",
},
)
else:
logger.info(f"[Coze] 跳过格式不正确的上下文: {ctx}")
if prompt or image_urls:
if image_urls:
# 多模态
object_string_content = []
if prompt:
object_string_content.append({"type": "text", "text": prompt})
for url in image_urls:
try:
if url.startswith(("http://", "https://")):
# 网络图片
file_id = await self._download_and_upload_image(
url,
user_id,
)
else:
# 本地文件或 base64
if url.startswith("data:image/"):
# base64
_, encoded = url.split(",", 1)
image_data = base64.b64decode(encoded)
cache_key = self._generate_cache_key(
url,
is_base64=True,
)
file_id = await self._upload_file(
image_data,
user_id,
cache_key,
)
# 本地文件
elif os.path.exists(url):
with open(url, "rb") as f:
image_data = f.read()
# 用文件路径和修改时间来缓存
file_stat = os.stat(url)
cache_key = self._generate_cache_key(
f"{url}_{file_stat.st_mtime}_{file_stat.st_size}",
is_base64=False,
)
file_id = await self._upload_file(
image_data,
user_id,
cache_key,
)
else:
logger.warning(f"图片文件不存在: {url}")
continue
object_string_content.append(
{
"type": "image",
"file_id": file_id,
},
)
except Exception as e:
logger.error(f"处理图片失败 {url}: {e!s}")
continue
if object_string_content:
content = json.dumps(object_string_content, ensure_ascii=False)
additional_messages.append(
{
"role": "user",
"content": content,
"content_type": "object_string",
},
)
# 纯文本
elif prompt:
additional_messages.append(
{
"role": "user",
"content": prompt,
"content_type": "text",
},
)
try:
accumulated_content = ""
message_started = False
async for chunk in self.api_client.chat_messages(
bot_id=self.bot_id,
user_id=user_id,
additional_messages=additional_messages,
conversation_id=conversation_id,
auto_save_history=self.auto_save_history,
stream=True,
timeout=self.timeout,
):
event_type = chunk.get("event")
data = chunk.get("data", {})
if event_type == "conversation.chat.created":
if isinstance(data, dict) and "conversation_id" in data:
self.conversation_ids[user_id] = data["conversation_id"]
elif event_type == "conversation.message.delta":
if isinstance(data, dict):
content = data.get("content", "")
if not content and "delta" in data:
content = data["delta"].get("content", "")
if not content and "text" in data:
content = data.get("text", "")
if content:
message_started = True
accumulated_content += content
yield LLMResponse(
role="assistant",
completion_text=content,
is_chunk=True,
)
elif event_type == "conversation.message.completed":
if isinstance(data, dict):
msg_type = data.get("type")
if msg_type == "answer" and data.get("role") == "assistant":
final_content = data.get("content", "")
if not accumulated_content and final_content:
chain = MessageChain(chain=[Comp.Plain(final_content)])
# a final answer was delivered; flag it so the post-loop
# fallback does not emit a spurious empty reply
message_started = True
yield LLMResponse(
role="assistant",
result_chain=chain,
is_chunk=False,
)
elif event_type == "conversation.chat.completed":
if accumulated_content:
chain = MessageChain(chain=[Comp.Plain(accumulated_content)])
yield LLMResponse(
role="assistant",
result_chain=chain,
is_chunk=False,
)
# the final response has been sent; clear the buffer so the
# post-loop branch does not yield it a second time
accumulated_content = ""
break
elif event_type == "done":
break
elif event_type == "error":
error_msg = (
data.get("message", "未知错误")
if isinstance(data, dict)
else str(data)
)
logger.error(f"Coze 流式响应错误: {error_msg}")
yield LLMResponse(
role="err",
completion_text=f"Coze 错误: {error_msg}",
is_chunk=False,
)
# suppress the post-loop fallbacks after an error reply
message_started = True
accumulated_content = ""
break
if not message_started and not accumulated_content:
yield LLMResponse(
role="assistant",
completion_text="LLM 未响应任何内容。",
is_chunk=False,
)
elif message_started and accumulated_content:
chain = MessageChain(chain=[Comp.Plain(accumulated_content)])
yield LLMResponse(
role="assistant",
result_chain=chain,
is_chunk=False,
)
except Exception as e:
logger.error(f"Coze 流式请求失败: {e!s}")
yield LLMResponse(
role="err",
completion_text=f"Coze 流式请求失败: {e!s}",
is_chunk=False,
)
async def forget(self, session_id: str):
"""清空指定会话的上下文"""
user_id = session_id
conversation_id = self.conversation_ids.get(user_id)
if user_id in self.file_id_cache:
self.file_id_cache.pop(user_id, None)
if not conversation_id:
return True
try:
response = await self.api_client.clear_context(conversation_id)
if "code" in response and response["code"] == 0:
self.conversation_ids.pop(user_id, None)
return True
logger.warning(f"清空 Coze 会话上下文失败: {response}")
return False
except Exception as e:
logger.error(f"清空 Coze 会话失败: {e!s}")
return False
async def get_current_key(self):
"""获取当前API Key"""
return self.api_key
async def set_key(self, key: str):
"""设置新的API Key"""
raise NotImplementedError("Coze 适配器不支持设置 API Key。")
async def get_models(self):
"""获取可用模型列表"""
return [f"bot_{self.bot_id}"]
def get_model(self):
"""获取当前模型"""
return f"bot_{self.bot_id}"
def set_model(self, model: str):
"""设置模型在Coze中是Bot ID"""
if model.startswith("bot_"):
self.bot_id = model[4:]
else:
self.bot_id = model
async def get_human_readable_context(
self,
session_id: str,
page: int = 1,
page_size: int = 10,
):
"""获取人类可读的上下文历史"""
user_id = session_id
conversation_id = self.conversation_ids.get(user_id)
if not conversation_id:
return []
try:
data = await self.api_client.get_message_list(
conversation_id=conversation_id,
order="desc",
limit=page_size,
offset=(page - 1) * page_size,
)
if data.get("code") != 0:
logger.warning(f"获取 Coze 消息历史失败: {data}")
return []
messages = data.get("data", {}).get("messages", [])
readable_history = []
for msg in messages:
role = msg.get("role", "unknown")
content = msg.get("content", "")
msg_type = msg.get("type", "")
if role == "user":
readable_history.append(f"用户: {content}")
elif role == "assistant" and msg_type == "answer":
readable_history.append(f"助手: {content}")
return readable_history
except Exception as e:
logger.error(f"获取 Coze 消息历史失败: {e!s}")
return []
async def terminate(self):
"""清理资源"""
await self.api_client.close()
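`ProviderCoze.text_chat` above implements a non-streaming call by draining its own streaming generator: chunk frames are concatenated, and the last non-chunk frame is the final response. A minimal sketch of that accumulation rule, using a stand-in `Chunk` class rather than the real `LLMResponse`:

```python
import asyncio
from collections.abc import AsyncGenerator


class Chunk:
    """Stand-in for LLMResponse, just for this sketch."""

    def __init__(self, text: str, is_chunk: bool) -> None:
        self.completion_text = text
        self.is_chunk = is_chunk


async def fake_stream() -> AsyncGenerator[Chunk, None]:
    for part in ("Hel", "lo"):
        yield Chunk(part, is_chunk=True)
    yield Chunk("", is_chunk=False)  # terminal, non-chunk frame


async def collect(stream: AsyncGenerator[Chunk, None]) -> str:
    # concatenate the text of every chunk frame, as text_chat does
    accumulated = ""
    async for resp in stream:
        if resp.is_chunk and resp.completion_text:
            accumulated += resp.completion_text
    return accumulated


print(asyncio.run(collect(fake_stream())))  # Hello
```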


@@ -0,0 +1,207 @@
import asyncio
import functools
import re
from dashscope import Application
from dashscope.app.application_response import ApplicationResponse
from astrbot.core import logger, sp
from astrbot.core.message.message_event_result import MessageChain
from .. import Provider
from ..entities import LLMResponse
from ..register import register_provider_adapter
from .openai_source import ProviderOpenAIOfficial
@register_provider_adapter("dashscope", "Dashscope APP 适配器。")
class ProviderDashscope(ProviderOpenAIOfficial):
def __init__(
self,
provider_config: dict,
provider_settings: dict,
) -> None:
Provider.__init__(
self,
provider_config,
provider_settings,
)
self.api_key = provider_config.get("dashscope_api_key", "")
if not self.api_key:
raise Exception("阿里云百炼 API Key 不能为空。")
self.app_id = provider_config.get("dashscope_app_id", "")
if not self.app_id:
raise Exception("阿里云百炼 APP ID 不能为空。")
self.dashscope_app_type = provider_config.get("dashscope_app_type", "")
if not self.dashscope_app_type:
raise Exception("阿里云百炼 APP 类型不能为空。")
self.model_name = "dashscope"
self.variables: dict = provider_config.get("variables", {})
self.rag_options: dict = provider_config.get("rag_options", {})
self.output_reference = self.rag_options.get("output_reference", False)
self.rag_options = self.rag_options.copy()
self.rag_options.pop("output_reference", None)
self.timeout = provider_config.get("timeout", 120)
if isinstance(self.timeout, str):
self.timeout = int(self.timeout)
def has_rag_options(self):
"""判断是否有 RAG 选项
Returns:
bool: 是否有 RAG 选项
"""
if self.rag_options and (
len(self.rag_options.get("pipeline_ids", [])) > 0
or len(self.rag_options.get("file_ids", [])) > 0
):
return True
return False
async def text_chat(
self,
prompt: str,
session_id=None,
image_urls=None,
func_tool=None,
contexts=None,
system_prompt=None,
model=None,
**kwargs,
) -> LLMResponse:
if image_urls is None:
image_urls = []
if contexts is None:
contexts = []
# 获得会话变量
payload_vars = self.variables.copy()
# 动态变量
session_var = await sp.session_get(session_id, "session_variables", default={})
payload_vars.update(session_var)
if (
self.dashscope_app_type in ["agent", "dialog-workflow"]
and not self.has_rag_options()
):
# 支持多轮对话的
new_record = {"role": "user", "content": prompt}
if image_urls:
logger.warning("阿里云百炼暂不支持图片输入,将自动忽略图片内容。")
contexts_no_img = await self._remove_image_from_context(contexts)
context_query = [*contexts_no_img, new_record]
if system_prompt:
context_query.insert(0, {"role": "system", "content": system_prompt})
for part in context_query:
if "_no_save" in part:
del part["_no_save"]
# 调用阿里云百炼 API
payload = {
"app_id": self.app_id,
"api_key": self.api_key,
"messages": context_query,
"biz_params": payload_vars or None,
}
partial = functools.partial(
Application.call,
**payload,
)
response = await asyncio.get_event_loop().run_in_executor(None, partial)
else:
# 不支持多轮对话的
# 调用阿里云百炼 API
payload = {
"app_id": self.app_id,
"prompt": prompt,
"api_key": self.api_key,
"biz_params": payload_vars or None,
}
if self.rag_options:
payload["rag_options"] = self.rag_options
partial = functools.partial(
Application.call,
**payload,
)
response = await asyncio.get_event_loop().run_in_executor(None, partial)
assert isinstance(response, ApplicationResponse)
logger.debug(f"dashscope resp: {response}")
if response.status_code != 200:
logger.error(
f"阿里云百炼请求失败: request_id={response.request_id}, code={response.status_code}, message={response.message}, 请参考文档https://help.aliyun.com/zh/model-studio/developer-reference/error-code",
)
return LLMResponse(
role="err",
result_chain=MessageChain().message(
f"阿里云百炼请求失败: message={response.message} code={response.status_code}",
),
)
output_text = response.output.get("text", "") or ""
# RAG 引用脚标格式化
output_text = re.sub(r"<ref>\[(\d+)\]</ref>", r"[\1]", output_text)
if self.output_reference and response.output.get("doc_references", None):
ref_parts = []
for ref in response.output.get("doc_references", []) or []:
ref_title = (
ref.get("title", "")
if ref.get("title")
else ref.get("doc_name", "")
)
ref_parts.append(f"{ref['index_id']}. {ref_title}\n")
ref_str = "".join(ref_parts)
output_text += f"\n\n回答来源:\n{ref_str}"
llm_response = LLMResponse("assistant")
llm_response.result_chain = MessageChain().message(output_text)
return llm_response
async def text_chat_stream(
self,
prompt,
session_id=None,
image_urls=None,
func_tool=None,
contexts=None,
system_prompt=None,
tool_calls_result=None,
model=None,
**kwargs,
):
# 调用 text_chat 模拟流式
llm_response = await self.text_chat(
prompt=prompt,
session_id=session_id,
image_urls=image_urls,
func_tool=func_tool,
contexts=contexts,
system_prompt=system_prompt,
tool_calls_result=tool_calls_result,
)
llm_response.is_chunk = True
yield llm_response
llm_response.is_chunk = False
yield llm_response
async def forget(self, session_id):
return True
async def get_current_key(self):
return self.api_key
async def set_key(self, key):
raise Exception("阿里云百炼 适配器不支持设置 API Key。")
async def get_models(self):
return [self.get_model()]
async def get_human_readable_context(self, session_id, page, page_size):
raise Exception("暂不支持获得 阿里云百炼 的历史消息记录。")
async def terminate(self):
pass
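The Dashscope adapter normalizes RAG citation markers with a single regex substitution, turning `<ref>[n]</ref>` into a plain `[n]` footnote. The substitution in isolation:

```python
import re


def normalize_refs(text: str) -> str:
    # same substitution the adapter applies to the model output:
    # "<ref>[1]</ref>" becomes the plain footnote "[1]"
    return re.sub(r"<ref>\[(\d+)\]</ref>", r"[\1]", text)


print(normalize_refs("see<ref>[2]</ref> for details"))  # see[2] for details
```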


@@ -0,0 +1,285 @@
import os
import astrbot.core.message.components as Comp
from astrbot.core import logger, sp
from astrbot.core.message.message_event_result import MessageChain
from astrbot.core.utils.astrbot_path import get_astrbot_data_path
from astrbot.core.utils.dify_api_client import DifyAPIClient
from astrbot.core.utils.io import download_file, download_image_by_url
from .. import Provider
from ..entities import LLMResponse
from ..register import register_provider_adapter
@register_provider_adapter("dify", "Dify APP 适配器。")
class ProviderDify(Provider):
def __init__(
self,
provider_config,
provider_settings,
) -> None:
super().__init__(
provider_config,
provider_settings,
)
self.api_key = provider_config.get("dify_api_key", "")
if not self.api_key:
raise Exception("Dify API Key 不能为空。")
api_base = provider_config.get("dify_api_base", "https://api.dify.ai/v1")
self.api_type = provider_config.get("dify_api_type", "")
if not self.api_type:
raise Exception("Dify API 类型不能为空。")
self.model_name = "dify"
self.workflow_output_key = provider_config.get(
"dify_workflow_output_key",
"astrbot_wf_output",
)
self.dify_query_input_key = provider_config.get(
"dify_query_input_key",
"astrbot_text_query",
)
if not self.dify_query_input_key:
self.dify_query_input_key = "astrbot_text_query"
if not self.workflow_output_key:
self.workflow_output_key = "astrbot_wf_output"
self.variables: dict = provider_config.get("variables", {})
self.timeout = provider_config.get("timeout", 120)
if isinstance(self.timeout, str):
self.timeout = int(self.timeout)
self.conversation_ids = {}
"""记录当前 session id 的对话 ID"""
self.api_client = DifyAPIClient(self.api_key, api_base)
async def text_chat(
self,
prompt: str,
session_id=None,
image_urls=None,
func_tool=None,
contexts=None,
system_prompt=None,
tool_calls_result=None,
model=None,
**kwargs,
) -> LLMResponse:
if image_urls is None:
image_urls = []
result = ""
session_id = session_id or kwargs.get("user") or "unknown" # 1734
conversation_id = self.conversation_ids.get(session_id, "")
files_payload = []
for image_url in image_urls:
image_path = (
await download_image_by_url(image_url)
if image_url.startswith("http")
else image_url
)
file_response = await self.api_client.file_upload(
image_path,
user=session_id,
)
logger.debug(f"Dify 上传图片响应:{file_response}")
if "id" not in file_response:
logger.warning(
f"上传图片后得到未知的 Dify 响应:{file_response},图片将忽略。",
)
continue
files_payload.append(
{
"type": "image",
"transfer_method": "local_file",
"upload_file_id": file_response["id"],
},
)
# 获得会话变量
payload_vars = self.variables.copy()
# 动态变量
session_var = await sp.session_get(session_id, "session_variables", default={})
payload_vars.update(session_var)
payload_vars["system_prompt"] = system_prompt
try:
match self.api_type:
case "chat" | "agent" | "chatflow":
if not prompt:
prompt = "请描述这张图片。"
async for chunk in self.api_client.chat_messages(
inputs={
**payload_vars,
},
query=prompt,
user=session_id,
conversation_id=conversation_id,
files=files_payload,
timeout=self.timeout,
):
logger.debug(f"dify resp chunk: {chunk}")
if (
chunk["event"] == "message"
or chunk["event"] == "agent_message"
):
result += chunk["answer"]
if not conversation_id:
self.conversation_ids[session_id] = chunk[
"conversation_id"
]
conversation_id = chunk["conversation_id"]
elif chunk["event"] == "message_end":
logger.debug("Dify message end")
break
elif chunk["event"] == "error":
logger.error(f"Dify 出现错误:{chunk}")
raise Exception(
f"Dify 出现错误 status: {chunk['status']} message: {chunk['message']}",
)
case "workflow":
async for chunk in self.api_client.workflow_run(
inputs={
self.dify_query_input_key: prompt,
"astrbot_session_id": session_id,
**payload_vars,
},
user=session_id,
files=files_payload,
timeout=self.timeout,
):
match chunk["event"]:
case "workflow_started":
logger.info(
f"Dify 工作流(ID: {chunk['workflow_run_id']})开始运行。",
)
case "node_finished":
logger.debug(
f"Dify 工作流节点(ID: {chunk['data']['node_id']} Title: {chunk['data'].get('title', '')})运行结束。",
)
case "workflow_finished":
logger.info(
f"Dify 工作流(ID: {chunk['workflow_run_id']})运行结束",
)
logger.debug(f"Dify 工作流结果:{chunk}")
if chunk["data"]["error"]:
logger.error(
f"Dify 工作流出现错误:{chunk['data']['error']}",
)
raise Exception(
f"Dify 工作流出现错误:{chunk['data']['error']}",
)
if (
self.workflow_output_key
not in chunk["data"]["outputs"]
):
raise Exception(
f"Dify 工作流的输出不包含指定的键名:{self.workflow_output_key}",
)
result = chunk
case _:
raise Exception(f"未知的 Dify API 类型:{self.api_type}")
except Exception as e:
logger.error(f"Dify 请求失败:{e!s}")
return LLMResponse(role="err", completion_text=f"Dify 请求失败:{e!s}")
if not result:
logger.warning("Dify 请求结果为空,请查看 Debug 日志。")
chain = await self.parse_dify_result(result)
return LLMResponse(role="assistant", result_chain=chain)
async def text_chat_stream(
self,
prompt,
session_id=None,
image_urls=None,
func_tool=None,
contexts=None,
system_prompt=None,
tool_calls_result=None,
model=None,
**kwargs,
):
# 调用 text_chat 模拟流式
llm_response = await self.text_chat(
prompt=prompt,
session_id=session_id,
image_urls=image_urls,
func_tool=func_tool,
contexts=contexts,
system_prompt=system_prompt,
tool_calls_result=tool_calls_result,
)
llm_response.is_chunk = True
yield llm_response
llm_response.is_chunk = False
yield llm_response
async def parse_dify_result(self, chunk: dict | str) -> MessageChain:
if isinstance(chunk, str):
# Chat
return MessageChain(chain=[Comp.Plain(chunk)])
async def parse_file(item: dict):
match item["type"]:
case "image":
return Comp.Image(file=item["url"], url=item["url"])
case "audio":
# 仅支持 wav
temp_dir = os.path.join(get_astrbot_data_path(), "temp")
path = os.path.join(temp_dir, f"{item['filename']}.wav")
await download_file(item["url"], path)
return Comp.Record(file=path, url=item["url"])
case "video":
return Comp.Video(file=item["url"])
case _:
return Comp.File(name=item["filename"], file=item["url"])
output = chunk["data"]["outputs"][self.workflow_output_key]
chains = []
if isinstance(output, str):
# 纯文本输出
chains.append(Comp.Plain(output))
elif isinstance(output, list):
# 主要适配 Dify 的 HTTP 请求结点的多模态输出
for item in output:
# handle Array[File]
if (
not isinstance(item, dict)
or item.get("dify_model_identity", "") != "__dify__file__"
):
chains.append(Comp.Plain(str(output)))
break
# every remaining item is a Dify file descriptor; convert it
comp = await parse_file(item)
chains.append(comp)
else:
chains.append(Comp.Plain(str(output)))
# scan file
files = chunk["data"].get("files", [])
for item in files:
comp = await parse_file(item)
chains.append(comp)
return MessageChain(chain=chains)
async def forget(self, session_id):
self.conversation_ids[session_id] = ""
return True
async def get_current_key(self):
return self.api_key
async def set_key(self, key):
raise Exception("Dify 适配器不支持设置 API Key。")
async def get_models(self):
return [self.get_model()]
async def get_human_readable_context(self, session_id, page, page_size):
raise Exception("暂不支持获得 Dify 的历史消息记录。")
async def terminate(self):
await self.api_client.close()
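`parse_dify_result` dispatches on the shape of the workflow output: a plain string is emitted as text, and a list is only treated as `Array[File]` when every item carries the `__dify__file__` identity, falling back to stringifying the whole value otherwise. A standalone, illustrative classifier of that dispatch (not part of the adapter):

```python
def classify_workflow_output(output) -> str:
    """Illustrative mirror of the parse_dify_result dispatch."""
    if isinstance(output, str):
        return "text"
    if isinstance(output, list) and output and all(
        isinstance(item, dict)
        and item.get("dify_model_identity", "") == "__dify__file__"
        for item in output
    ):
        return "files"
    # non-list values and mixed lists are stringified as plain text
    return "stringified"


print(classify_workflow_output("hi"))  # text
```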


@@ -14,6 +14,7 @@ from astrbot.core.config.astrbot_config import AstrBotConfig
from astrbot.core.conversation_mgr import ConversationManager
from astrbot.core.db import BaseDatabase
from astrbot.core.knowledge_base.kb_mgr import KnowledgeBaseManager
from astrbot.core.memory.memory_manager import MemoryManager
from astrbot.core.message.message_event_result import MessageChain
from astrbot.core.persona_mgr import PersonaManager
from astrbot.core.platform import Platform
@@ -65,6 +66,7 @@ class Context:
persona_manager: PersonaManager,
astrbot_config_mgr: AstrBotConfigManager,
knowledge_base_manager: KnowledgeBaseManager,
memory_manager: MemoryManager,
):
self._event_queue = event_queue
"""事件队列。消息平台通过事件队列传递消息事件。"""
@@ -79,6 +81,7 @@ class Context:
self.persona_manager = persona_manager
self.astrbot_config_mgr = astrbot_config_mgr
self.kb_manager = knowledge_base_manager
self.memory_manager = memory_manager
async def llm_generate(
self,

View File

@@ -85,22 +85,3 @@ class UmopConfigRouter:
self.umop_to_conf_id[umo] = conf_id
await self.sp.global_put("umop_config_routing", self.umop_to_conf_id)
async def delete_route(self, umo: str):
"""删除一条路由
Args:
umo (str): 需要删除的 UMO 字符串
Raises:
ValueError: 当 umo 格式不正确时抛出
"""
if not isinstance(umo, str) or len(umo.split(":")) != 3:
raise ValueError(
"umop must be a string in the format [platform_id]:[message_type]:[session_id], with optional wildcards * or empty for all",
)
if umo in self.umop_to_conf_id:
del self.umop_to_conf_id[umo]
await self.sp.global_put("umop_config_routing", self.umop_to_conf_id)

View File

@@ -3,7 +3,7 @@ import json
from collections.abc import AsyncGenerator
from typing import Any
from aiohttp import ClientResponse, ClientSession, FormData
from aiohttp import ClientResponse, ClientSession
from astrbot.core import logger
@@ -101,59 +101,21 @@ class DifyAPIClient:
async def file_upload(
self,
file_path: str,
user: str,
file_path: str | None = None,
file_data: bytes | None = None,
file_name: str | None = None,
mime_type: str | None = None,
) -> dict[str, Any]:
"""Upload a file to Dify. Must provide either file_path or file_data.
Args:
user: The user ID.
file_path: The path to the file to upload.
file_data: The file data in bytes.
file_name: Optional file name when using file_data.
mime_type: Optional MIME type; defaults to application/octet-stream.
Returns:
A dictionary containing the uploaded file information.
"""
url = f"{self.api_base}/files/upload"
form = FormData()
form.add_field("user", user)
if file_data is not None:
# upload from in-memory bytes
form.add_field(
"file",
file_data,
filename=file_name or "uploaded_file",
content_type=mime_type or "application/octet-stream",
)
elif file_path is not None:
# upload from a file path
import os
with open(file_path, "rb") as f:
file_content = f.read()
form.add_field(
"file",
file_content,
filename=os.path.basename(file_path),
content_type=mime_type or "application/octet-stream",
)
else:
raise ValueError("file_path 和 file_data 不能同时为 None")
async with self.session.post(
url,
data=form,
headers=self.headers,  # no Content-Type; let aiohttp set the multipart boundary
) as resp:
if resp.status != 200 and resp.status != 201:
text = await resp.text()
raise Exception(f"Dify 文件上传失败:{resp.status}. {text}")
return await resp.json() # {"id": "xxx", ...}
with open(file_path, "rb") as f:
payload = {
"user": user,
"file": f,
}
async with self.session.post(
url,
data=payload,
headers=self.headers,
) as resp:
return await resp.json() # {"id": "xxx", ...}
async def close(self):
await self.session.close()

View File

@@ -1,73 +0,0 @@
import traceback
from astrbot.core import astrbot_config, logger
from astrbot.core.astrbot_config_mgr import AstrBotConfig, AstrBotConfigManager
from astrbot.core.db.migration.migra_45_to_46 import migrate_45_to_46
from astrbot.core.db.migration.migra_webchat_session import migrate_webchat_session
def _migra_agent_runner_configs(conf: AstrBotConfig, ids_map: dict) -> None:
"""
Migrate agent runner configs from provider configs.
"""
try:
default_prov_id = conf["provider_settings"]["default_provider_id"]
if default_prov_id in ids_map:
conf["provider_settings"]["default_provider_id"] = ""
p = ids_map[default_prov_id]
if p["type"] == "dify":
conf["provider_settings"]["dify_agent_runner_provider_id"] = p["id"]
conf["provider_settings"]["agent_runner_type"] = "dify"
elif p["type"] == "coze":
conf["provider_settings"]["coze_agent_runner_provider_id"] = p["id"]
conf["provider_settings"]["agent_runner_type"] = "coze"
elif p["type"] == "dashscope":
conf["provider_settings"]["dashscope_agent_runner_provider_id"] = p[
"id"
]
conf["provider_settings"]["agent_runner_type"] = "dashscope"
conf.save_config()
except Exception as e:
logger.error(f"Migration for third party agent runner configs failed: {e!s}")
logger.error(traceback.format_exc())
async def migra(
db, astrbot_config_mgr, umop_config_router, acm: AstrBotConfigManager
) -> None:
"""
Stores the migration logic here.
btw, i really don't like migration :(
"""
# 4.5 to 4.6 migration for umop_config_router
try:
await migrate_45_to_46(astrbot_config_mgr, umop_config_router)
except Exception as e:
logger.error(f"Migration from version 4.5 to 4.6 failed: {e!s}")
logger.error(traceback.format_exc())
# migration for webchat session
try:
await migrate_webchat_session(db)
except Exception as e:
logger.error(f"Migration for webchat session failed: {e!s}")
logger.error(traceback.format_exc())
# migrate third-party agent runner configs
_c = False
providers = astrbot_config["provider"]
ids_map = {}
for prov in providers:
type_ = prov.get("type")
if type_ in ["dify", "coze", "dashscope"]:
prov["provider_type"] = "agent_runner"
ids_map[prov["id"]] = {
"type": type_,
"id": prov["id"],
}
_c = True
if _c:
astrbot_config.save_config()
for conf in acm.confs.values():
_migra_agent_runner_configs(conf, ids_map)
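
The per-config migration above boils down to a small mapping; a sketch over plain dicts (no `save_config` or logging, illustrative only):

```python
def migrate_agent_runner(conf: dict, ids_map: dict) -> None:
    """If the old default provider is a third-party agent runner,
    move it to its dedicated *_agent_runner_provider_id key."""
    settings = conf["provider_settings"]
    p = ids_map.get(settings["default_provider_id"])
    if p is None or p["type"] not in ("dify", "coze", "dashscope"):
        return
    # the old default slot is cleared; the runner type takes over routing
    settings["default_provider_id"] = ""
    settings[f"{p['type']}_agent_runner_provider_id"] = p["id"]
    settings["agent_runner_type"] = p["type"]
```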

View File

@@ -40,6 +40,9 @@ class SharedPreferences:
else:
ret = default
return ret
raise ValueError(
"scope_id and key cannot be None when getting a specific preference.",
)
async def range_get_async(
self,
@@ -53,6 +56,30 @@ class SharedPreferences:
ret = await self.db_helper.get_preferences(scope, scope_id, key)
return ret
@overload
async def session_get(
self,
umo: None,
key: str,
default: Any = None,
) -> list[Preference]: ...
@overload
async def session_get(
self,
umo: str,
key: None,
default: Any = None,
) -> list[Preference]: ...
@overload
async def session_get(
self,
umo: None,
key: None,
default: Any = None,
) -> list[Preference]: ...
async def session_get(
self,
umo: str | None,
@@ -61,7 +88,7 @@ class SharedPreferences:
) -> _VT | list[Preference]:
"""获取会话范围的偏好设置
Note: when umo or key is None, a list of Preference is returned; each value attribute is a dict where value["val"] holds the value.
Note: when scope_id or key is None, a list of Preference is returned; each value attribute is a dict where value["val"] holds the value.
"""
if umo is None or key is None:
return await self.range_get_async("umo", umo, key)

View File

@@ -5,6 +5,7 @@ from .conversation import ConversationRoute
from .file import FileRoute
from .knowledge_base import KnowledgeBaseRoute
from .log import LogRoute
from .memory import MemoryRoute
from .persona import PersonaRoute
from .plugin import PluginRoute
from .session_management import SessionManagementRoute
@@ -21,6 +22,7 @@ __all__ = [
"FileRoute",
"KnowledgeBaseRoute",
"LogRoute",
"MemoryRoute",
"PersonaRoute",
"PluginRoute",
"SessionManagementRoute",

View File

@@ -56,7 +56,6 @@ class ChatRoute(Route):
self.conv_mgr = core_lifecycle.conversation_manager
self.platform_history_mgr = core_lifecycle.platform_message_history_manager
self.db = db
self.umop_config_router = core_lifecycle.umop_config_router
self.running_convs: dict[str, bool] = {}
@@ -267,8 +266,7 @@ class ChatRoute(Route):
return Response().error("Permission denied").__dict__
# delete all conversations under this session
message_type = "GroupMessage" if session.is_group else "FriendMessage"
unified_msg_origin = f"{session.platform_id}:{message_type}:{session.platform_id}!{username}!{session_id}"
unified_msg_origin = f"{session.platform_id}:FriendMessage:{session.platform_id}!{username}!{session_id}"
await self.conv_mgr.delete_conversations_by_user_id(unified_msg_origin)
# delete message history
@@ -278,16 +276,6 @@ class ChatRoute(Route):
offset_sec=99999999,
)
# delete the config route associated with the session
try:
await self.umop_config_router.delete_route(unified_msg_origin)
except ValueError as exc:
logger.warning(
"Failed to delete UMO route %s during session cleanup: %s",
unified_msg_origin,
exc,
)
# clean up queues (webchat only)
if session.platform_id == "webchat":
webchat_queue_mgr.remove_queues(session_id)

View File

@@ -0,0 +1,174 @@
"""Memory management API routes"""
from quart import jsonify, request
from astrbot.core import logger
from astrbot.core.core_lifecycle import AstrBotCoreLifecycle
from astrbot.core.db import BaseDatabase
from .route import Response, Route, RouteContext
class MemoryRoute(Route):
"""Memory management routes"""
def __init__(
self,
context: RouteContext,
db: BaseDatabase,
core_lifecycle: AstrBotCoreLifecycle,
):
super().__init__(context)
self.db = db
self.core_lifecycle = core_lifecycle
self.memory_manager = core_lifecycle.memory_manager
self.provider_manager = core_lifecycle.provider_manager
self.routes = [
("/memory/status", ("GET", self.get_status)),
("/memory/initialize", ("POST", self.initialize)),
("/memory/update_merge_llm", ("POST", self.update_merge_llm)),
]
self.register_routes()
async def get_status(self):
"""Get memory system status"""
try:
is_initialized = self.memory_manager._initialized
status_data = {
"initialized": is_initialized,
"embedding_provider_id": None,
"merge_llm_provider_id": None,
}
if is_initialized:
# Get embedding provider info
if self.memory_manager.embedding_provider:
status_data["embedding_provider_id"] = (
self.memory_manager.embedding_provider.provider_config["id"]
)
# Get merge LLM provider info
if self.memory_manager.merge_llm_provider:
status_data["merge_llm_provider_id"] = (
self.memory_manager.merge_llm_provider.provider_config["id"]
)
return jsonify(Response().ok(status_data).__dict__)
except Exception as e:
logger.error(f"Failed to get memory status: {e}")
return jsonify(Response().error(str(e)).__dict__)
async def initialize(self):
"""Initialize memory system with embedding and merge LLM providers"""
try:
data = await request.get_json()
embedding_provider_id = data.get("embedding_provider_id")
merge_llm_provider_id = data.get("merge_llm_provider_id")
if not embedding_provider_id or not merge_llm_provider_id:
return jsonify(
Response()
.error(
"embedding_provider_id and merge_llm_provider_id are required"
)
.__dict__,
)
# Check if already initialized
if self.memory_manager._initialized:
return jsonify(
Response()
.error(
"Memory system already initialized. Embedding provider cannot be changed.",
)
.__dict__,
)
# Get providers
embedding_provider = await self.provider_manager.get_provider_by_id(
embedding_provider_id,
)
merge_llm_provider = await self.provider_manager.get_provider_by_id(
merge_llm_provider_id,
)
if not embedding_provider:
return jsonify(
Response()
.error(f"Embedding provider {embedding_provider_id} not found")
.__dict__,
)
if not merge_llm_provider:
return jsonify(
Response()
.error(f"Merge LLM provider {merge_llm_provider_id} not found")
.__dict__,
)
# Initialize memory manager
await self.memory_manager.initialize(
embedding_provider=embedding_provider,
merge_llm_provider=merge_llm_provider,
)
logger.info(
f"Memory system initialized with embedding: {embedding_provider_id}, "
f"merge LLM: {merge_llm_provider_id}",
)
return jsonify(
Response()
.ok({"message": "Memory system initialized successfully"})
.__dict__,
)
except Exception as e:
logger.error(f"Failed to initialize memory system: {e}")
return jsonify(Response().error(str(e)).__dict__)
async def update_merge_llm(self):
"""Update merge LLM provider (only allowed after initialization)"""
try:
data = await request.get_json()
merge_llm_provider_id = data.get("merge_llm_provider_id")
if not merge_llm_provider_id:
return jsonify(
Response().error("merge_llm_provider_id is required").__dict__,
)
# Check if initialized
if not self.memory_manager._initialized:
return jsonify(
Response()
.error("Memory system not initialized. Please initialize first.")
.__dict__,
)
# Get new merge LLM provider
merge_llm_provider = await self.provider_manager.get_provider_by_id(
merge_llm_provider_id,
)
if not merge_llm_provider:
return jsonify(
Response()
.error(f"Merge LLM provider {merge_llm_provider_id} not found")
.__dict__,
)
# Update merge LLM provider
self.memory_manager.merge_llm_provider = merge_llm_provider
logger.info(f"Updated merge LLM provider to: {merge_llm_provider_id}")
return jsonify(
Response()
.ok({"message": "Merge LLM provider updated successfully"})
.__dict__,
)
except Exception as e:
logger.error(f"Failed to update merge LLM provider: {e}")
return jsonify(Response().error(str(e)).__dict__)
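
The two endpoints enforce an asymmetric rule: the embedding provider is frozen at first initialization (presumably because stored embeddings are tied to it), while the merge LLM stays swappable. That state machine in isolation (a sketch; provider ids stand in for provider objects, and the class name is illustrative):

```python
class MemoryInitGuard:
    def __init__(self) -> None:
        self.initialized = False
        self.embedding_provider_id: str | None = None
        self.merge_llm_provider_id: str | None = None

    def initialize(self, embedding_provider_id: str, merge_llm_provider_id: str) -> None:
        # one-shot: the embedding provider can never be changed afterwards
        if self.initialized:
            raise RuntimeError(
                "Memory system already initialized. Embedding provider cannot be changed."
            )
        self.embedding_provider_id = embedding_provider_id
        self.merge_llm_provider_id = merge_llm_provider_id
        self.initialized = True

    def update_merge_llm(self, merge_llm_provider_id: str) -> None:
        # only legal after initialization
        if not self.initialized:
            raise RuntimeError("Memory system not initialized. Please initialize first.")
        self.merge_llm_provider_id = merge_llm_provider_id
```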

View File

@@ -79,6 +79,7 @@ class AstrBotDashboard:
self.persona_route = PersonaRoute(self.context, db, core_lifecycle)
self.t2i_route = T2iRoute(self.context, core_lifecycle)
self.kb_route = KnowledgeBaseRoute(self.context, core_lifecycle)
self.memory_route = MemoryRoute(self.context, db, core_lifecycle)
self.app.add_url_rule(
"/api/plug/<path:subpath>",

View File

@@ -1,29 +0,0 @@
## What's Changed
**hot fix of v4.6.0**
fix(core.db): fix webchat conversation data not being migrated correctly after upgrade ([#3745](https://github.com/AstrBotDevs/AstrBot/issues/3745))
---
1. 新增: 支持 gemini-3 系列的 thought signature ([#3698](https://github.com/AstrBotDevs/AstrBot/issues/3698))
2. 新增: 支持知识库的 Agentic 检索功能 ([#3667](https://github.com/AstrBotDevs/AstrBot/issues/3667))
3. 新增: 为知识库添加 URL 文档解析器 ([#3622](https://github.com/AstrBotDevs/AstrBot/issues/3622))
4. 修复(core.platform): 修复启用多个企业微信智能机器人适配器时消息混乱的问题 ([#3693](https://github.com/AstrBotDevs/AstrBot/issues/3693))
5. 修复: MCP Server 连接成功一段时间后,调用 mcp 工具时可能出现 `anyio.ClosedResourceError` 错误 ([#3700](https://github.com/AstrBotDevs/AstrBot/issues/3700))
6. 新增(chat): 重构聊天组件结构并添加新功能 ([#3701](https://github.com/AstrBotDevs/AstrBot/issues/3701))
7. 修复(dashboard.i18n): 完善缺失的英文国际化键值 ([#3699](https://github.com/AstrBotDevs/AstrBot/issues/3699))
8. 重构: 实现 WebChat 会话管理及从版本 4.6 迁移到 4.7
9. 持续集成(docker-build): 每日构建 Nightly 版本 Docker 镜像 ([#3120](https://github.com/AstrBotDevs/AstrBot/issues/3120))
---
1. feat: add supports for gemini-3 series thought signature ([#3698](https://github.com/AstrBotDevs/AstrBot/issues/3698))
2. feat: supports knowledge base agentic search ([#3667](https://github.com/AstrBotDevs/AstrBot/issues/3667))
3. feat: Add URL document parser for knowledge base ([#3622](https://github.com/AstrBotDevs/AstrBot/issues/3622))
4. fix(core.platform): fix message mix-up issue when enabling multiple WeCom AI Bot adapters ([#3693](https://github.com/AstrBotDevs/AstrBot/issues/3693))
5. fix: fix `anyio.ClosedResourceError` that may occur when calling mcp tools after a period of successful connection to MCP Server ([#3700](https://github.com/AstrBotDevs/AstrBot/issues/3700))
6. feat(chat): refactor chat component structure and add new features ([#3701](https://github.com/AstrBotDevs/AstrBot/issues/3701))
7. fix(dashboard.i18n): complete the missing i18n keys for en ([#3699](https://github.com/AstrBotDevs/AstrBot/issues/3699))
8. refactor: Implement WebChat session management and migration from version 4.6 to 4.7
9. ci(docker-build): build nightly image everyday ([#3120](https://github.com/AstrBotDevs/AstrBot/issues/3120))

View File

@@ -87,8 +87,6 @@
:disabled="isStreaming || isConvRunning"
:enableStreaming="enableStreaming"
:isRecording="isRecording"
:session-id="currSessionId || null"
:current-session="getCurrentSession"
@send="handleSendMessage"
@toggleStreaming="toggleStreaming"
@removeImage="removeImage"

View File

@@ -11,13 +11,7 @@
style="width: 100%; resize: none; outline: none; border: 1px solid var(--v-theme-border); border-radius: 12px; padding: 8px 16px; min-height: 40px; font-family: inherit; font-size: 16px; background-color: var(--v-theme-surface);"></textarea>
<div style="display: flex; justify-content: space-between; align-items: center; padding: 0px 12px;">
<div style="display: flex; justify-content: flex-start; margin-top: 4px; align-items: center; gap: 8px;">
<ConfigSelector
:session-id="sessionId || null"
:platform-id="sessionPlatformId"
:is-group="sessionIsGroup"
@config-changed="handleConfigChange"
/>
<ProviderModelSelector v-if="showProviderSelector" ref="providerModelSelectorRef" />
<ProviderModelSelector ref="providerModelSelectorRef" />
<v-tooltip :text="enableStreaming ? tm('streaming.enabled') : tm('streaming.disabled')" location="top">
<template v-slot:activator="{ props }">
@@ -64,11 +58,9 @@
</template>
<script setup lang="ts">
import { ref, computed, onMounted, onBeforeUnmount } from 'vue';
import { ref, computed, onMounted, onBeforeUnmount, watch } from 'vue';
import { useModuleI18n } from '@/i18n/composables';
import ProviderModelSelector from './ProviderModelSelector.vue';
import ConfigSelector from './ConfigSelector.vue';
import type { Session } from '@/composables/useSessions';
interface Props {
prompt: string;
@@ -77,14 +69,9 @@ interface Props {
disabled: boolean;
enableStreaming: boolean;
isRecording: boolean;
sessionId?: string | null;
currentSession?: Session | null;
}
const props = withDefaults(defineProps<Props>(), {
sessionId: null,
currentSession: null
});
const props = defineProps<Props>();
const emit = defineEmits<{
'update:prompt': [value: string];
@@ -103,16 +90,12 @@ const { tm } = useModuleI18n('features/chat');
const inputField = ref<HTMLTextAreaElement | null>(null);
const imageInputRef = ref<HTMLInputElement | null>(null);
const providerModelSelectorRef = ref<InstanceType<typeof ProviderModelSelector> | null>(null);
const showProviderSelector = ref(true);
const localPrompt = computed({
get: () => props.prompt,
set: (value) => emit('update:prompt', value)
});
const sessionPlatformId = computed(() => props.currentSession?.platform_id || 'webchat');
const sessionIsGroup = computed(() => Boolean(props.currentSession?.is_group));
const canSend = computed(() => {
return (props.prompt && props.prompt.trim()) || props.stagedImagesUrl.length > 0 || props.stagedAudioUrl;
});
@@ -185,16 +168,7 @@ function handleRecordClick() {
}
}
function handleConfigChange(payload: { configId: string; agentRunnerType: string }) {
const runnerType = (payload.agentRunnerType || '').toLowerCase();
const isInternal = runnerType === 'internal' || runnerType === 'local';
showProviderSelector.value = isInternal;
}
function getCurrentSelection() {
if (!showProviderSelector.value) {
return null;
}
return providerModelSelectorRef.value?.getCurrentSelection();
}

View File

@@ -1,311 +0,0 @@
<template>
<div>
<v-tooltip text="选择用于当前会话的配置文件" location="top">
<template #activator="{ props: tooltipProps }">
<v-chip
v-bind="tooltipProps"
class="text-none config-chip"
variant="tonal"
size="x-small"
rounded="lg"
@click="openDialog"
:disabled="loadingConfigs || saving"
>
<v-icon start size="14">mdi-cog</v-icon>
{{ selectedConfigLabel }}
</v-chip>
</template>
</v-tooltip>
<v-dialog v-model="dialog" max-width="480" persistent>
<v-card>
<v-card-title class="d-flex align-center justify-space-between">
<span>选择配置文件</span>
<v-btn icon variant="text" @click="closeDialog">
<v-icon>mdi-close</v-icon>
</v-btn>
</v-card-title>
<v-card-text>
<div v-if="loadingConfigs" class="text-center py-6">
<v-progress-circular indeterminate color="primary"></v-progress-circular>
</div>
<v-list v-else class="config-list" density="comfortable">
<v-list-item
v-for="config in configOptions"
:key="config.id"
:active="tempSelectedConfig === config.id"
rounded="lg"
variant="text"
@click="tempSelectedConfig = config.id"
>
<v-list-item-title>{{ config.name }}</v-list-item-title>
<v-list-item-subtitle class="text-caption text-grey">
{{ config.id }}
</v-list-item-subtitle>
<template #append>
<v-icon v-if="tempSelectedConfig === config.id" color="primary">mdi-check</v-icon>
</template>
</v-list-item>
<div v-if="configOptions.length === 0" class="text-center text-body-2 text-medium-emphasis">
暂无可选配置,请先在配置页创建
</div>
</v-list>
</v-card-text>
<v-card-actions>
<v-spacer></v-spacer>
<v-btn variant="text" @click="closeDialog">取消</v-btn>
<v-btn
color="primary"
@click="confirmSelection"
:disabled="!tempSelectedConfig"
:loading="saving"
>
应用
</v-btn>
</v-card-actions>
</v-card>
</v-dialog>
</div>
</template>
<script setup lang="ts">
import { computed, onMounted, ref, watch } from 'vue';
import axios from 'axios';
import { useToast } from '@/utils/toast';
interface ConfigInfo {
id: string;
name: string;
}
interface ConfigChangedPayload {
configId: string;
agentRunnerType: string;
}
const STORAGE_KEY = 'chat.selectedConfigId';
const props = withDefaults(defineProps<{
sessionId?: string | null;
platformId?: string;
isGroup?: boolean;
}>(), {
sessionId: null,
platformId: 'webchat',
isGroup: false
});
const emit = defineEmits<{ 'config-changed': [ConfigChangedPayload] }>();
const configOptions = ref<ConfigInfo[]>([]);
const loadingConfigs = ref(false);
const dialog = ref(false);
const tempSelectedConfig = ref('');
const selectedConfigId = ref('default');
const agentRunnerType = ref('local');
const saving = ref(false);
const pendingSync = ref(false);
const routingEntries = ref<Array<{ pattern: string; confId: string }>>([]);
const configCache = ref<Record<string, string>>({});
const toast = useToast();
const normalizedSessionId = computed(() => {
const id = props.sessionId?.trim();
return id ? id : null;
});
const hasActiveSession = computed(() => !!normalizedSessionId.value);
const messageType = computed(() => (props.isGroup ? 'GroupMessage' : 'FriendMessage'));
const username = computed(() => localStorage.getItem('user') || 'guest');
const sessionKey = computed(() => {
if (!normalizedSessionId.value) {
return null;
}
return `${props.platformId}!${username.value}!${normalizedSessionId.value}`;
});
const targetUmo = computed(() => {
if (!sessionKey.value) {
return null;
}
return `${props.platformId}:${messageType.value}:${sessionKey.value}`;
});
const selectedConfigLabel = computed(() => {
const target = configOptions.value.find((item) => item.id === selectedConfigId.value);
return target?.name || selectedConfigId.value || 'default';
});
function openDialog() {
tempSelectedConfig.value = selectedConfigId.value;
dialog.value = true;
}
function closeDialog() {
dialog.value = false;
}
async function fetchConfigList() {
loadingConfigs.value = true;
try {
const res = await axios.get('/api/config/abconfs');
configOptions.value = res.data.data?.info_list || [];
} catch (error) {
console.error('加载配置文件列表失败', error);
configOptions.value = [];
} finally {
loadingConfigs.value = false;
}
}
async function fetchRoutingEntries() {
try {
const res = await axios.get('/api/config/umo_abconf_routes');
const routing = res.data.data?.routing || {};
routingEntries.value = Object.entries(routing).map(([pattern, confId]) => ({
pattern,
confId: confId as string
}));
} catch (error) {
console.error('获取配置路由失败', error);
routingEntries.value = [];
}
}
function matchesPattern(pattern: string, target: string): boolean {
const parts = pattern.split(':');
const targetParts = target.split(':');
if (parts.length !== 3 || targetParts.length !== 3) {
return false;
}
return parts.every((part, index) => part === '' || part === '*' || part === targetParts[index]);
}
function resolveConfigId(umo: string | null): string {
if (!umo) {
return 'default';
}
for (const entry of routingEntries.value) {
if (matchesPattern(entry.pattern, umo)) {
return entry.confId;
}
}
return 'default';
}
async function getAgentRunnerType(confId: string): Promise<string> {
if (configCache.value[confId]) {
return configCache.value[confId];
}
try {
const res = await axios.get('/api/config/abconf', {
params: { id: confId }
});
const type = res.data.data?.config?.provider_settings?.agent_runner_type || 'local';
configCache.value[confId] = type;
return type;
} catch (error) {
console.error('获取配置文件详情失败', error);
return 'local';
}
}
async function setSelection(confId: string) {
const normalized = confId || 'default';
selectedConfigId.value = normalized;
const runnerType = await getAgentRunnerType(normalized);
agentRunnerType.value = runnerType;
emit('config-changed', {
configId: normalized,
agentRunnerType: runnerType
});
}
async function applySelectionToBackend(confId: string): Promise<boolean> {
if (!targetUmo.value) {
pendingSync.value = true;
return true;
}
saving.value = true;
try {
await axios.post('/api/config/umo_abconf_route/update', {
umo: targetUmo.value,
conf_id: confId
});
const filtered = routingEntries.value.filter((entry) => entry.pattern !== targetUmo.value);
filtered.push({ pattern: targetUmo.value, confId });
routingEntries.value = filtered;
return true;
} catch (error) {
const err = error as any;
console.error('更新配置文件失败', err);
toast.error(err?.response?.data?.message || '配置文件应用失败');
return false;
} finally {
saving.value = false;
}
}
async function confirmSelection() {
if (!tempSelectedConfig.value) {
return;
}
const previousId = selectedConfigId.value;
await setSelection(tempSelectedConfig.value);
localStorage.setItem(STORAGE_KEY, tempSelectedConfig.value);
const applied = await applySelectionToBackend(tempSelectedConfig.value);
if (!applied) {
localStorage.setItem(STORAGE_KEY, previousId);
await setSelection(previousId);
}
dialog.value = false;
}
async function syncSelectionForSession() {
if (!targetUmo.value) {
pendingSync.value = true;
return;
}
if (pendingSync.value) {
pendingSync.value = false;
await applySelectionToBackend(selectedConfigId.value);
return;
}
await fetchRoutingEntries();
const resolved = resolveConfigId(targetUmo.value);
await setSelection(resolved);
localStorage.setItem(STORAGE_KEY, resolved);
}
watch(
() => [props.sessionId, props.platformId, props.isGroup],
async () => {
await syncSelectionForSession();
}
);
onMounted(async () => {
await fetchConfigList();
const stored = localStorage.getItem(STORAGE_KEY) || 'default';
selectedConfigId.value = stored;
await setSelection(stored);
await syncSelectionForSession();
});
</script>
<style scoped>
.config-chip {
cursor: pointer;
justify-content: flex-start;
}
.config-list {
max-height: 360px;
overflow-y: auto;
}
</style>

View File

@@ -64,7 +64,7 @@
@click.stop="$emit('editTitle', item.session_id, item.display_name)" />
<v-btn icon="mdi-delete" size="x-small" variant="text"
class="delete-conversation-btn" color="error"
@click.stop="handleDeleteConversation(item)" />
@click.stop="$emit('deleteConversation', item.session_id)" />
</div>
</template>
</v-list-item>
@@ -85,7 +85,7 @@
<script setup lang="ts">
import { ref } from 'vue';
import { useModuleI18n } from '@/i18n/composables';
import { useI18n, useModuleI18n } from '@/i18n/composables';
import type { Session } from '@/composables/useSessions';
interface Props {
@@ -109,6 +109,7 @@ const emit = defineEmits<{
}>();
const { tm } = useModuleI18n('features/chat');
const { t } = useI18n();
const sidebarCollapsed = ref(true);
const sidebarHovered = ref(false);
@@ -158,14 +159,6 @@ function handleSidebarMouseLeave() {
}
sidebarHoverExpanded.value = false;
}
function handleDeleteConversation(session: Session) {
const sessionTitle = session.display_name || tm('conversation.newConversation');
const message = tm('conversation.confirmDelete', { name: sessionTitle });
if (window.confirm(message)) {
emit('deleteConversation', session.session_id);
}
}
</script>
<style scoped>
@@ -300,4 +293,3 @@ function handleDeleteConversation(session: Session) {
}
}
</style>

View File

@@ -3,7 +3,6 @@
<!-- provider and model selection button -->
<v-chip class="text-none" variant="tonal" size="x-small"
v-if="selectedProviderId && selectedModelName" @click="openDialog">
<v-icon start size="14">mdi-creation</v-icon>
{{ selectedProviderId }} / {{ selectedModelName }}
</v-chip>
<v-chip variant="tonal" rounded="xl" size="x-small" v-else @click="openDialog">

View File

@@ -7,10 +7,6 @@
<v-icon start>mdi-message-text</v-icon>
{{ tm('dialogs.addProvider.tabs.basic') }}
</v-tab>
<v-tab value="agent_runner" class="font-weight-medium px-3">
<v-icon start>mdi-cogs</v-icon>
{{ tm('dialogs.addProvider.tabs.agentRunner') }}
</v-tab>
<v-tab value="speech_to_text" class="font-weight-medium px-3">
<v-icon start>mdi-microphone-message</v-icon>
{{ tm('dialogs.addProvider.tabs.speechToText') }}
@@ -31,7 +27,7 @@
<v-window v-model="activeProviderTab" class="mt-4">
<v-window-item
v-for="tabType in ['chat_completion', 'agent_runner', 'speech_to_text', 'text_to_speech', 'embedding', 'rerank']"
v-for="tabType in ['chat_completion', 'speech_to_text', 'text_to_speech', 'embedding', 'rerank']"
:key="tabType" :value="tabType">
<v-row class="mt-1">
<v-col v-for="(template, name) in getTemplatesByType(tabType)" :key="name" cols="12" sm="6"
@@ -40,7 +36,7 @@
@click="selectProviderTemplate(name)">
<div class="provider-card-content">
<div class="provider-card-text">
<v-card-title class="provider-card-title">{{ name }}</v-card-title>
<v-card-title class="provider-card-title">接入 {{ name }}</v-card-title>
<v-card-text
class="text-caption text-medium-emphasis provider-card-description">
{{ getProviderDescription(template, name) }}
@@ -58,7 +54,7 @@
</v-col>
<v-col v-if="Object.keys(getTemplatesByType(tabType)).length === 0" cols="12">
<v-alert type="info" variant="tonal">
{{ t('dialogs.addProvider.noTemplates') }}
{{ tm('dialogs.addProvider.noTemplates', { type: getTabTypeName(tabType) }) }}
</v-alert>
</v-col>
</v-row>
@@ -108,6 +104,19 @@ export default {
this.$emit('update:show', value);
}
},
// computed property exposing translated messages
messages() {
return {
tabTypes: {
'chat_completion': this.tm('providers.tabs.chatCompletion'),
'speech_to_text': this.tm('providers.tabs.speechToText'),
'text_to_speech': this.tm('providers.tabs.textToSpeech'),
'embedding': this.tm('providers.tabs.embedding'),
'rerank': this.tm('providers.tabs.rerank')
}
};
}
},
methods: {
closeDialog() {
@@ -131,6 +140,11 @@ export default {
// imported from utility helpers
getProviderIcon,
// get the localized display name for a tab type
getTabTypeName(tabType) {
return this.messages.tabTypes[tabType] || tabType;
},
// get the provider description
getProviderDescription(template, name) {
return getProviderDescription(template, name, this.tm);

View File

@@ -101,21 +101,6 @@ function shouldShowItem(itemMeta, itemKey) {
return true
}
// check whether the outermost object section should be shown
function shouldShowSection() {
const sectionMeta = props.metadata[props.metadataKey]
if (!sectionMeta?.condition) {
return true
}
for (const [conditionKey, expectedValue] of Object.entries(sectionMeta.condition)) {
const actualValue = getValueBySelector(props.iterable, conditionKey)
if (actualValue !== expectedValue) {
return false
}
}
return true
}
function hasVisibleItemsAfter(items, currentIndex) {
const itemEntries = Object.entries(items)
@@ -129,33 +114,12 @@ function hasVisibleItemsAfter(items, currentIndex) {
return false
}
function parseSpecialValue(value) {
if (!value || typeof value !== 'string') {
return { name: '', subtype: '' }
}
const [name, ...rest] = value.split(':')
return {
name,
subtype: rest.join(':') || ''
}
}
function getSpecialName(value) {
return parseSpecialValue(value).name
}
function getSpecialSubtype(value) {
return parseSpecialValue(value).subtype
}
</script>
<template>
<v-card v-if="shouldShowSection()" style="margin-bottom: 16px; padding-bottom: 8px; background-color: rgb(var(--v-theme-background));"
rounded="md" variant="outlined">
<v-card style="margin-bottom: 16px; padding-bottom: 8px; background-color: rgb(var(--v-theme-background));" rounded="md" variant="outlined">
<v-card-text class="config-section" v-if="metadata[metadataKey]?.type === 'object'" style="padding-bottom: 8px;">
<v-list-item-title class="config-title">
{{ metadata[metadataKey]?.description }}
@@ -223,16 +187,22 @@ function getSpecialSubtype(value) {
<!-- Boolean switch for JSON selector -->
<v-switch v-else-if="itemMeta?.type === 'bool'" v-model="createSelectorModel(itemKey).value"
color="primary" inset density="compact" hide-details
style="display: flex; justify-content: end;"></v-switch>
color="primary" inset density="compact" hide-details style="display: flex; justify-content: end;"></v-switch>
<!-- List item for JSON selector -->
<ListConfigItem v-else-if="itemMeta?.type === 'list'" v-model="createSelectorModel(itemKey).value"
button-text="修改" class="config-field" />
<ListConfigItem
v-else-if="itemMeta?.type === 'list'"
v-model="createSelectorModel(itemKey).value"
button-text="修改"
class="config-field"
/>
<!-- Object editor for JSON selector -->
<ObjectEditor v-else-if="itemMeta?.type === 'dict'" v-model="createSelectorModel(itemKey).value"
class="config-field" />
<ObjectEditor
v-else-if="itemMeta?.type === 'dict'"
v-model="createSelectorModel(itemKey).value"
class="config-field"
/>
<!-- Fallback for JSON selector -->
<v-text-field v-else v-model="createSelectorModel(itemKey).value" density="compact" variant="outlined"
@@ -241,36 +211,50 @@ function getSpecialSubtype(value) {
<!-- Special handling for specific metadata types -->
<div v-else-if="itemMeta?._special === 'select_provider'">
<ProviderSelector v-model="createSelectorModel(itemKey).value" :provider-type="'chat_completion'" />
</div>
<div v-else-if="itemMeta?._special === 'select_provider_stt'">
<ProviderSelector v-model="createSelectorModel(itemKey).value" :provider-type="'speech_to_text'" />
</div>
<div v-else-if="itemMeta?._special === 'select_provider_tts'">
<ProviderSelector v-model="createSelectorModel(itemKey).value" :provider-type="'text_to_speech'" />
</div>
<div v-else-if="getSpecialName(itemMeta?._special) === 'select_agent_runner_provider'">
<ProviderSelector
v-model="createSelectorModel(itemKey).value"
:provider-type="'agent_runner'"
:provider-subtype="getSpecialSubtype(itemMeta?._special)"
:provider-type="'chat_completion'"
/>
</div>
<div v-else-if="itemMeta?._special === 'select_provider_stt'">
<ProviderSelector
v-model="createSelectorModel(itemKey).value"
:provider-type="'speech_to_text'"
/>
</div>
<div v-else-if="itemMeta?._special === 'select_provider_tts'">
<ProviderSelector
v-model="createSelectorModel(itemKey).value"
:provider-type="'text_to_speech'"
/>
</div>
<div v-else-if="itemMeta?._special === 'provider_pool'">
<ProviderSelector v-model="createSelectorModel(itemKey).value" :provider-type="'chat_completion'"
button-text="选择提供商池..." />
<ProviderSelector
v-model="createSelectorModel(itemKey).value"
:provider-type="'chat_completion'"
button-text="选择提供商池..."
/>
</div>
<div v-else-if="itemMeta?._special === 'select_persona'">
<PersonaSelector v-model="createSelectorModel(itemKey).value" />
<PersonaSelector
v-model="createSelectorModel(itemKey).value"
/>
</div>
<div v-else-if="itemMeta?._special === 'persona_pool'">
<PersonaSelector v-model="createSelectorModel(itemKey).value" button-text="选择人格池..." />
<PersonaSelector
v-model="createSelectorModel(itemKey).value"
button-text="选择人格池..."
/>
</div>
<div v-else-if="itemMeta?._special === 'select_knowledgebase'">
<KnowledgeBaseSelector v-model="createSelectorModel(itemKey).value" />
<KnowledgeBaseSelector
v-model="createSelectorModel(itemKey).value"
/>
</div>
<div v-else-if="itemMeta?._special === 'select_plugin_set'">
<PluginSetSelector v-model="createSelectorModel(itemKey).value" />
<PluginSetSelector
v-model="createSelectorModel(itemKey).value"
/>
</div>
<div v-else-if="itemMeta?._special === 't2i_template'">
<T2ITemplateEditor />
@@ -279,17 +263,21 @@ function getSpecialSubtype(value) {
</v-row>
<!-- Plugin Set Selector 全宽显示区域 -->
<v-row v-if="!itemMeta?.invisible && itemMeta?._special === 'select_plugin_set'"
class="plugin-set-display-row">
<v-row v-if="!itemMeta?.invisible && itemMeta?._special === 'select_plugin_set'" class="plugin-set-display-row">
<v-col cols="12" class="plugin-set-display">
<div v-if="createSelectorModel(itemKey).value && createSelectorModel(itemKey).value.length > 0"
class="selected-plugins-full-width">
<div v-if="createSelectorModel(itemKey).value && createSelectorModel(itemKey).value.length > 0" class="selected-plugins-full-width">
<div class="plugins-header">
<small class="text-grey">已选择的插件</small>
</div>
<div class="d-flex flex-wrap ga-2 mt-2">
<v-chip v-for="plugin in (createSelectorModel(itemKey).value || [])" :key="plugin" size="small" label
color="primary" variant="outlined">
<v-chip
v-for="plugin in (createSelectorModel(itemKey).value || [])"
:key="plugin"
size="small"
label
color="primary"
variant="outlined"
>
{{ plugin === '*' ? '所有插件' : plugin }}
</v-chip>
</div>
@@ -297,8 +285,7 @@ function getSpecialSubtype(value) {
</v-col>
</v-row>
</template>
<v-divider class="config-divider"
v-if="shouldShowItem(itemMeta, itemKey) && hasVisibleItemsAfter(metadata[metadataKey].items, index)"></v-divider>
<v-divider class="config-divider" v-if="shouldShowItem(itemMeta, itemKey) && hasVisibleItemsAfter(metadata[metadataKey].items, index)"></v-divider>
</div>
</div>

View File

@@ -94,10 +94,6 @@ const props = defineProps({
type: String,
default: 'chat_completion'
},
providerSubtype: {
type: String,
default: ''
},
buttonText: {
type: String,
default: '选择提供商...'
@@ -131,10 +127,7 @@ async function loadProviders() {
}
})
if (response.data.status === 'ok') {
const providers = response.data.data || []
providerList.value = props.providerSubtype
? providers.filter((provider) => matchesProviderSubtype(provider, props.providerSubtype))
: providers
providerList.value = response.data.data || []
}
} catch (error) {
console.error('加载提供商列表失败:', error)
@@ -144,17 +137,6 @@ async function loadProviders() {
}
}
function matchesProviderSubtype(provider, subtype) {
if (!subtype) {
return true
}
const normalized = String(subtype).toLowerCase()
const candidates = [provider.type, provider.provider, provider.id]
.filter(Boolean)
.map((value) => String(value).toLowerCase())
return candidates.includes(normalized)
}
function selectProvider(provider) {
selectedProvider.value = provider.id
}
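The `matchesProviderSubtype` helper removed in this hunk can be exercised on its own. A runnable sketch (the `provider` object shape — `type`, `provider`, `id` — is taken from the component code, not a documented schema):

```javascript
// Standalone version of the removed matchesProviderSubtype helper.
// A provider matches when any identifying field equals the subtype,
// case-insensitively; an empty subtype matches everything.
function matchesProviderSubtype(provider, subtype) {
  if (!subtype) {
    return true;
  }
  const normalized = String(subtype).toLowerCase();
  const candidates = [provider.type, provider.provider, provider.id]
    .filter(Boolean)
    .map((value) => String(value).toLowerCase());
  return candidates.includes(normalized);
}
```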

View File

@@ -301,4 +301,3 @@ export function useMessages(
toggleStreaming
};
}

View File

@@ -4,12 +4,8 @@ import { useRouter } from 'vue-router';
export interface Session {
session_id: string;
display_name: string | null;
display_name: string;
updated_at: string;
platform_id: string;
creator: string;
is_group: number;
created_at: string;
}
export function useSessions(chatboxMode: boolean = false) {

View File

@@ -74,7 +74,6 @@
"delete": "Delete",
"copy": "Copy",
"edit": "Edit",
"copy": "Copy",
"noData": "No data available"
}
}

View File

@@ -12,6 +12,7 @@
"console": "Console",
"alkaid": "Alkaid Lab",
"knowledgeBase": "Knowledge Base",
"memory": "Long-term Memory",
"about": "About",
"settings": "Settings",
"documentation": "Documentation",

View File

@@ -51,8 +51,7 @@
"editDisplayName": "Edit Session Name",
"displayName": "Session Name",
"displayNameUpdated": "Session name updated",
"displayNameUpdateFailed": "Failed to update session name",
"confirmDelete": "Are you sure you want to delete \"{name}\"? This action cannot be undone."
"displayNameUpdateFailed": "Failed to update session name"
},
"modes": {
"darkMode": "Switch to Dark Mode",

View File

@@ -9,7 +9,6 @@
"tabs": {
"all": "All",
"chatCompletion": "Chat Completion",
"agentRunner": "Agent Runner",
"speechToText": "Speech to Text",
"textToSpeech": "Text to Speech",
"embedding": "Embedding",
@@ -45,13 +44,12 @@
"title": "Service Provider",
"tabs": {
"basic": "Basic",
"agentRunner": "Agent Runner",
"speechToText": "Speech to Text",
"textToSpeech": "Text to Speech",
"embedding": "Embedding",
"rerank": "Rerank"
},
"noTemplates": "No this type provider templates available"
"noTemplates": "No {type} type provider templates available"
},
"config": {
"addTitle": "Add",

View File

@@ -12,6 +12,7 @@
"console": "控制台",
"alkaid": "Alkaid",
"knowledgeBase": "知识库",
"memory": "长期记忆",
"about": "关于",
"settings": "设置",
"documentation": "官方文档",

View File

@@ -51,8 +51,7 @@
"editDisplayName": "编辑会话名称",
"displayName": "会话名称",
"displayNameUpdated": "会话名称已更新",
"displayNameUpdateFailed": "更新会话名称失败",
"confirmDelete": "确定要删除“{name}”吗?此操作无法撤销。"
"displayNameUpdateFailed": "更新会话名称失败"
},
"modes": {
"darkMode": "切换到夜间模式",

View File

@@ -8,8 +8,7 @@
"providerType": "提供商类型",
"tabs": {
"all": "全部",
"chatCompletion": "对话",
"agentRunner": "Agent 执行器",
"chatCompletion": "基本对话",
"speechToText": "语音转文字",
"textToSpeech": "文字转语音",
"embedding": "嵌入(Embedding)",
@@ -45,14 +44,13 @@
"addProvider": {
"title": "模型提供商",
"tabs": {
"basic": "对话",
"agentRunner": "Agent 执行器",
"basic": "基本",
"speechToText": "语音转文字",
"textToSpeech": "文字转语音",
"embedding": "嵌入(Embedding)",
"rerank": "重排序(Rerank)"
},
"noTemplates": "暂无类型的提供商模板"
"noTemplates": "暂无{type}类型的提供商模板"
},
"config": {
"addTitle": "新增",

View File

@@ -48,6 +48,11 @@ const sidebarItem: menu[] = [
icon: 'mdi-book-open-variant',
to: '/knowledge-base',
},
{
title: 'core.navigation.memory',
icon: 'mdi-brain',
to: '/memory',
},
{
title: 'core.navigation.chat',
icon: 'mdi-chat',

View File

@@ -90,6 +90,11 @@ const MainRoutes = {
}
]
},
{
name: 'Memory',
path: '/memory',
component: () => import('@/views/MemoryPage.vue')
},
// 旧版本的知识库路由
{

View File

@@ -0,0 +1,358 @@
<template>
<div class="memory-page">
<v-container fluid class="pa-0">
<!-- 页面标题 -->
<v-row class="d-flex justify-space-between align-center px-4 py-3 pb-8">
<div>
<h1 class="text-h1 font-weight-bold mb-2">
<v-icon color="black" class="me-2">mdi-brain</v-icon>{{ t('core.navigation.memory') }}
</h1>
<p class="text-subtitle-1 text-medium-emphasis mb-4">
管理长期记忆系统的配置
</p>
</div>
</v-row>
<!-- 加载状态 -->
<v-row v-if="loading">
<v-col cols="12">
<v-card>
<v-card-text class="text-center">
<v-progress-circular indeterminate color="primary"></v-progress-circular>
</v-card-text>
</v-card>
</v-col>
</v-row>
<!-- 主内容 -->
<v-row v-else>
<v-col cols="12" md="8" lg="6">
<v-card rounded="lg">
<v-card-title class="d-flex align-center">
<v-icon class="mr-2">mdi-cog</v-icon>
记忆系统配置
</v-card-title>
<v-divider></v-divider>
<v-card-text>
<!-- 状态显示 -->
<v-alert
:type="memoryStatus.initialized ? 'success' : 'info'"
variant="tonal"
class="mb-4"
>
<div class="d-flex align-center">
<v-icon class="mr-2">
{{ memoryStatus.initialized ? 'mdi-check-circle' : 'mdi-information' }}
</v-icon>
<div>
<strong>状态:</strong>
{{ memoryStatus.initialized ? '已初始化' : '未初始化' }}
</div>
</div>
</v-alert>
<!-- 未初始化时显示初始化表单 -->
<div v-if="!memoryStatus.initialized">
<v-form @submit.prevent="initializeMemory">
<v-select
v-model="selectedEmbeddingProvider"
:items="embeddingProviders"
item-title="text"
item-value="value"
label="Embedding 模型 *"
hint="用于生成向量表示,初始化后不可更改"
persistent-hint
class="mb-4"
required
:disabled="initializing"
></v-select>
<v-select
v-model="selectedMergeLLM"
:items="llmProviders"
item-title="text"
item-value="value"
label="合并 LLM *"
hint="用于合并相似记忆,可在初始化后更改"
persistent-hint
class="mb-4"
required
:disabled="initializing"
></v-select>
<v-btn
type="submit"
color="primary"
:loading="initializing"
:disabled="!selectedEmbeddingProvider || !selectedMergeLLM"
block
size="large"
>
初始化记忆系统
</v-btn>
</v-form>
</div>
<!-- 已初始化时显示配置信息 -->
<div v-else>
<v-list>
<v-list-item>
<template v-slot:prepend>
<v-icon>mdi-vector-triangle</v-icon>
</template>
<v-list-item-title>Embedding 模型</v-list-item-title>
<v-list-item-subtitle>
{{ getProviderName(memoryStatus.embedding_provider_id) }}
</v-list-item-subtitle>
</v-list-item>
<v-divider class="my-2"></v-divider>
<v-list-item>
<template v-slot:prepend>
<v-icon>mdi-robot</v-icon>
</template>
<v-list-item-title>合并 LLM</v-list-item-title>
<v-list-item-subtitle>
{{ getProviderName(memoryStatus.merge_llm_provider_id) }}
</v-list-item-subtitle>
</v-list-item>
</v-list>
<v-divider class="my-4"></v-divider>
<v-form @submit.prevent="updateMergeLLM">
<v-select
v-model="newMergeLLM"
:items="llmProviders"
item-title="text"
item-value="value"
label="更新合并 LLM"
hint="可以更换用于合并记忆的 LLM"
persistent-hint
class="mb-4"
:disabled="updating"
></v-select>
<v-btn
type="submit"
color="primary"
:loading="updating"
:disabled="!newMergeLLM || newMergeLLM === memoryStatus.merge_llm_provider_id"
block
variant="tonal"
>
更新合并 LLM
</v-btn>
</v-form>
</div>
</v-card-text>
</v-card>
</v-col>
<!-- 说明卡片 -->
<v-col cols="12" md="4" lg="6">
<v-card rounded="lg">
<v-card-title class="d-flex align-center">
<v-icon class="mr-2">mdi-information</v-icon>
说明
</v-card-title>
<v-divider></v-divider>
<v-card-text>
<v-list density="compact">
<v-list-item>
<v-list-item-title class="text-wrap">
<strong>Embedding 模型</strong>:用于将文本转换为向量,支持语义相似度搜索
<v-chip size="x-small" color="warning" class="ml-2">不可更改</v-chip>
</v-list-item-title>
</v-list-item>
<v-list-item>
<v-list-item-title class="text-wrap">
<strong>合并 LLM</strong>:当检测到相似记忆时,使用此模型合并为一条记忆
<v-chip size="x-small" color="success" class="ml-2">可更改</v-chip>
</v-list-item-title>
</v-list-item>
<v-list-item>
<v-list-item-title class="text-wrap">
<strong>注意</strong>:Embedding 模型一旦选择后无法更改,请谨慎选择
</v-list-item-title>
</v-list-item>
</v-list>
</v-card-text>
</v-card>
</v-col>
</v-row>
</v-container>
<!-- 提示框 -->
<v-snackbar v-model="snackbar.show" :color="snackbar.color" :timeout="3000">
{{ snackbar.message }}
</v-snackbar>
</div>
</template>
<script setup lang="ts">
import { ref, onMounted } from 'vue';
import axios from 'axios';
import { useI18n } from '@/i18n/composables';
const { t } = useI18n();
interface MemoryStatus {
initialized: boolean;
embedding_provider_id: string | null;
merge_llm_provider_id: string | null;
}
interface Provider {
value: string;
text: string;
}
const loading = ref(true);
const initializing = ref(false);
const updating = ref(false);
const memoryStatus = ref<MemoryStatus>({
initialized: false,
embedding_provider_id: null,
merge_llm_provider_id: null,
});
const embeddingProviders = ref<Provider[]>([]);
const llmProviders = ref<Provider[]>([]);
const selectedEmbeddingProvider = ref<string>('');
const selectedMergeLLM = ref<string>('');
const newMergeLLM = ref<string>('');
const snackbar = ref({
show: false,
message: '',
color: 'success',
});
const showMessage = (message: string, color: string = 'success') => {
snackbar.value.message = message;
snackbar.value.color = color;
snackbar.value.show = true;
};
const getProviderName = (providerId: string | null): string => {
if (!providerId) return '未设置';
const embedding = embeddingProviders.value.find(p => p.value === providerId);
const llm = llmProviders.value.find(p => p.value === providerId);
return embedding?.text || llm?.text || providerId;
};
const loadProviders = async () => {
try {
// Load embedding providers
const embeddingResponse = await axios.get('/api/config/provider/list', {
params: { provider_type: 'embedding' }
});
if (embeddingResponse.data.status === 'ok') {
embeddingProviders.value = (embeddingResponse.data.data || []).map((p: any) => ({
value: p.id,
text: `${p.embedding_model} (${p.id})`,
}));
}
// Load LLM providers
const llmResponse = await axios.get('/api/config/provider/list', {
params: { provider_type: 'chat_completion' }
});
if (llmResponse.data.status === 'ok') {
llmProviders.value = (llmResponse.data.data || []).map((p: any) => ({
value: p.id,
text: `${p?.model_config?.model} (${p.id})`,
}));
}
} catch (error) {
console.error('Failed to load providers:', error);
showMessage('加载提供商列表失败', 'error');
}
};
const loadStatus = async () => {
try {
const response = await axios.get('/api/memory/status');
if (response.data.status === 'ok') {
memoryStatus.value = response.data.data;
if (memoryStatus.value.merge_llm_provider_id) {
newMergeLLM.value = memoryStatus.value.merge_llm_provider_id;
}
}
} catch (error) {
console.error('Failed to load memory status:', error);
showMessage('加载记忆系统状态失败', 'error');
}
};
const initializeMemory = async () => {
if (!selectedEmbeddingProvider.value || !selectedMergeLLM.value) {
showMessage('请选择 Embedding 模型和合并 LLM', 'warning');
return;
}
initializing.value = true;
try {
const response = await axios.post('/api/memory/initialize', {
embedding_provider_id: selectedEmbeddingProvider.value,
merge_llm_provider_id: selectedMergeLLM.value,
});
if (response.data.status === 'ok') {
showMessage('记忆系统初始化成功', 'success');
await loadStatus();
} else {
showMessage(response.data.message || '初始化失败', 'error');
}
} catch (error: any) {
console.error('Failed to initialize memory:', error);
showMessage(error.response?.data?.message || '初始化失败', 'error');
} finally {
initializing.value = false;
}
};
const updateMergeLLM = async () => {
if (!newMergeLLM.value) {
showMessage('请选择新的合并 LLM', 'warning');
return;
}
updating.value = true;
try {
const response = await axios.post('/api/memory/update_merge_llm', {
merge_llm_provider_id: newMergeLLM.value,
});
if (response.data.status === 'ok') {
showMessage('合并 LLM 更新成功', 'success');
await loadStatus();
} else {
showMessage(response.data.message || '更新失败', 'error');
}
} catch (error: any) {
console.error('Failed to update merge LLM:', error);
showMessage(error.response?.data?.message || '更新失败', 'error');
} finally {
updating.value = false;
}
};
onMounted(async () => {
loading.value = true;
await Promise.all([loadProviders(), loadStatus()]);
loading.value = false;
});
</script>
<style scoped>
.memory-page {
min-height: 100vh;
padding: 8px;
}
</style>
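The `loadProviders` function in this page maps each provider record from `/api/config/provider/list` to a `{ value, text }` select option. The mapping, extracted as standalone functions (payload field names `id`, `embedding_model`, and `model_config.model` are taken from the component code above, not from a documented API schema):

```javascript
// Provider record -> v-select option, as done in loadProviders above.
const toEmbeddingOption = (p) => ({
  value: p.id,
  text: `${p.embedding_model} (${p.id})`,
});
// Optional chaining guards against providers without a model_config.
const toLlmOption = (p) => ({
  value: p.id,
  text: `${p?.model_config?.model} (${p.id})`,
});
```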

View File

@@ -30,10 +30,6 @@
<v-icon start>mdi-message-text</v-icon>
{{ tm('providers.tabs.chatCompletion') }}
</v-tab>
<v-tab value="agent_runner" class="font-weight-medium px-3">
<v-icon start>mdi-message-text</v-icon>
{{ tm('providers.tabs.agentRunner') }}
</v-tab>
<v-tab value="speech_to_text" class="font-weight-medium px-3">
<v-icon start>mdi-microphone-message</v-icon>
{{ tm('providers.tabs.speechToText') }}
@@ -52,62 +48,30 @@
</v-tab>
</v-tabs>
<template v-if="activeProviderTypeTab === 'all'">
<v-row v-if="groupedProviders.length === 0">
<v-col cols="12" class="text-center pa-8">
<v-icon size="64" color="grey-lighten-1">mdi-api-off</v-icon>
<p class="text-grey mt-4">{{ getEmptyText() }}</p>
</v-col>
</v-row>
<div v-else>
<div v-for="group in groupedProviders" :key="group.typeKey" class="mb-8">
<h1 class="text-h3 font-weight-bold mb-4">{{ group.label }}</h1>
<v-row>
<v-col v-for="(provider, index) in group.items" :key="`${group.typeKey}-${index}`" cols="12" md="6"
lg="4" xl="3">
<item-card :item="provider" title-field="id" enabled-field="enable"
:loading="isProviderTesting(provider.id)" @toggle-enabled="providerStatusChange"
:bglogo="getProviderIcon(provider.provider)" @delete="deleteProvider" @edit="configExistingProvider"
@copy="copyProvider" :show-copy-button="true">
<template #actions="{ item }">
<v-btn style="z-index: 100000;" variant="tonal" color="info" rounded="xl" size="small"
:loading="isProviderTesting(item.id)" @click="testSingleProvider(item)">
{{ tm('availability.test') }}
</v-btn>
</template>
<template v-slot:details="{ item }">
</template>
</item-card>
</v-col>
</v-row>
</div>
</div>
</template>
<template v-else>
<v-row v-if="filteredProviders.length === 0">
<v-col cols="12" class="text-center pa-8">
<v-icon size="64" color="grey-lighten-1">mdi-api-off</v-icon>
<p class="text-grey mt-4">{{ getEmptyText() }}</p>
</v-col>
</v-row>
<v-row v-else>
<v-col v-for="(provider, index) in filteredProviders" :key="index" cols="12" md="6" lg="4" xl="3">
<item-card :item="provider" title-field="id" enabled-field="enable"
:loading="isProviderTesting(provider.id)" @toggle-enabled="providerStatusChange"
:bglogo="getProviderIcon(provider.provider)" @delete="deleteProvider" @edit="configExistingProvider"
@copy="copyProvider" :show-copy-button="true">
<template #actions="{ item }">
<v-btn style="z-index: 100000;" variant="tonal" color="info" rounded="xl" size="small"
:loading="isProviderTesting(item.id)" @click="testSingleProvider(item)">
{{ tm('availability.test') }}
</v-btn>
</template>
<template v-slot:details="{ item }">
</template>
</item-card>
</v-col>
</v-row>
</template>
<v-row v-if="filteredProviders.length === 0">
<v-col cols="12" class="text-center pa-8">
<v-icon size="64" color="grey-lighten-1">mdi-api-off</v-icon>
<p class="text-grey mt-4">{{ getEmptyText() }}</p>
</v-col>
</v-row>
<v-row v-else>
<v-col v-for="(provider, index) in filteredProviders" :key="index" cols="12" md="6" lg="4" xl="3">
<item-card :item="provider" title-field="id" enabled-field="enable"
:loading="isProviderTesting(provider.id)" @toggle-enabled="providerStatusChange"
:bglogo="getProviderIcon(provider.provider)" @delete="deleteProvider" @edit="configExistingProvider"
@copy="copyProvider" :show-copy-button="true">
<template #actions="{ item }">
<v-btn style="z-index: 100000;" variant="tonal" color="info" rounded="xl" size="small"
:loading="isProviderTesting(item.id)" @click="testSingleProvider(item)">
{{ tm('availability.test') }}
</v-btn>
</template>
<template v-slot:details="{ item }">
</template>
</item-card>
</v-col>
</v-row>
</div>
<!-- 供应商状态部分 -->
@@ -325,8 +289,8 @@ export default {
"anthropic_chat_completion": "chat_completion",
"googlegenai_chat_completion": "chat_completion",
"zhipu_chat_completion": "chat_completion",
"dify": "agent_runner",
"coze": "agent_runner",
"dify": "chat_completion",
"coze": "chat_completion",
"dashscope": "chat_completion",
"openai_whisper_api": "speech_to_text",
"openai_whisper_selfhost": "speech_to_text",
@@ -370,7 +334,6 @@ export default {
},
tabTypes: {
'chat_completion': this.tm('providers.tabs.chatCompletion'),
'agent_runner': this.tm('providers.tabs.agentRunner'),
'speech_to_text': this.tm('providers.tabs.speechToText'),
'text_to_speech': this.tm('providers.tabs.textToSpeech'),
'embedding': this.tm('providers.tabs.embedding'),
@@ -400,52 +363,6 @@ export default {
};
},
groupedProviders() {
if (!this.config_data.provider) {
return [];
}
const typeOrder = [
'chat_completion',
'agent_runner',
'speech_to_text',
'text_to_speech',
'embedding',
'rerank',
];
const assigned = new Set();
const groups = typeOrder
.map((typeKey) => {
const items = this.config_data.provider.filter((provider) => {
const resolved = this.getProviderType(provider);
if (resolved === typeKey) {
assigned.add(provider.id);
return true;
}
return false;
});
return {
typeKey,
label: this.messages.tabTypes[typeKey] || typeKey,
items,
};
})
.filter((group) => group.items.length > 0);
const remaining = this.config_data.provider.filter(
(provider) => !assigned.has(provider.id),
);
if (remaining.length > 0) {
groups.push({
typeKey: 'others',
label: this.tm('providers.tabs.all'),
items: remaining,
});
}
return groups;
},
// 根据选择的标签过滤提供商列表
filteredProviders() {
if (!this.config_data.provider || this.activeProviderTypeTab === 'all') {
@@ -454,7 +371,13 @@ export default {
return this.config_data.provider.filter(provider => {
// 如果provider.provider_type已经存在直接使用它
return this.getProviderType(provider) === this.activeProviderTypeTab;
if (provider.provider_type) {
return provider.provider_type === this.activeProviderTypeTab;
}
// 否则使用映射关系
const mappedType = this.oldVersionProviderTypeMapping[provider.type];
return mappedType === this.activeProviderTypeTab;
});
}
},
@@ -464,14 +387,6 @@ export default {
},
methods: {
getProviderType(provider) {
if (!provider) return undefined;
if (provider.provider_type) {
return provider.provider_type;
}
return this.oldVersionProviderTypeMapping[provider.type];
},
getConfig() {
axios.get('/api/config/get').then((res) => {
this.config_data = res.data.data.config;
@@ -775,9 +690,6 @@ export default {
if (!provider.enable) {
throw new Error('该提供商未被用户启用');
}
if (provider.provider_type === 'agent_runner') {
throw new Error('暂时无法测试 Agent Runner 类型的提供商');
}
const res = await axios.get(`/api/config/provider/check_one?id=${provider.id}`);
if (res.data && res.data.status === 'ok') {

View File

@@ -2,18 +2,14 @@ import datetime
from astrbot.api import logger, sp, star
from astrbot.api.event import AstrMessageEvent, MessageEventResult
from astrbot.core.platform.astr_message_event import MessageSession
from astrbot.core.platform.astr_message_event import MessageSesion
from astrbot.core.platform.message_type import MessageType
from astrbot.core.provider.sources.coze_source import ProviderCoze
from astrbot.core.provider.sources.dify_source import ProviderDify
from ..long_term_memory import LongTermMemory
from .utils.rst_scene import RstScene
THIRD_PARTY_AGENT_RUNNER_KEY = {
"dify": "dify_conversation_id",
"coze": "coze_conversation_id",
}
THIRD_PARTY_AGENT_RUNNER_STR = ", ".join(THIRD_PARTY_AGENT_RUNNER_KEY.keys())
class ConversationCommands:
def __init__(self, context: star.Context, ltm: LongTermMemory | None = None):
@@ -42,9 +38,9 @@ class ConversationCommands:
async def reset(self, message: AstrMessageEvent):
"""重置 LLM 会话"""
umo = message.unified_msg_origin
cfg = self.context.get_config(umo=message.unified_msg_origin)
is_unique_session = cfg["platform_settings"]["unique_session"]
is_unique_session = self.context.get_config()["platform_settings"][
"unique_session"
]
is_group = bool(message.get_group_id())
scene = RstScene.get_scene(is_group, is_unique_session)
@@ -67,23 +63,28 @@ class ConversationCommands:
)
return
agent_runner_type = cfg["provider_settings"]["agent_runner_type"]
if agent_runner_type in THIRD_PARTY_AGENT_RUNNER_KEY:
await sp.remove_async(
scope="umo",
scope_id=umo,
key=THIRD_PARTY_AGENT_RUNNER_KEY[agent_runner_type],
)
message.set_result(MessageEventResult().message("重置对话成功。"))
return
if not self.context.get_using_provider(umo):
if not self.context.get_using_provider(message.unified_msg_origin):
message.set_result(
MessageEventResult().message("未找到任何 LLM 提供商。请先配置。"),
)
return
cid = await self.context.conversation_manager.get_curr_conversation_id(umo)
provider = self.context.get_using_provider(message.unified_msg_origin)
if provider and provider.meta().type in ["dify", "coze"]:
assert isinstance(provider, (ProviderDify, ProviderCoze)), (
"provider type is not dify or coze"
)
await provider.forget(message.unified_msg_origin)
message.set_result(
MessageEventResult().message(
"已重置当前 Dify / Coze 会话,新聊天将更换到新的会话。",
),
)
return
cid = await self.context.conversation_manager.get_curr_conversation_id(
message.unified_msg_origin,
)
if not cid:
message.set_result(
@@ -94,7 +95,7 @@ class ConversationCommands:
return
await self.context.conversation_manager.update_conversation(
umo,
message.unified_msg_origin,
cid,
[],
)
@@ -151,14 +152,29 @@ class ConversationCommands:
async def convs(self, message: AstrMessageEvent, page: int = 1):
"""查看对话列表"""
cfg = self.context.get_config(umo=message.unified_msg_origin)
agent_runner_type = cfg["provider_settings"]["agent_runner_type"]
if agent_runner_type in THIRD_PARTY_AGENT_RUNNER_KEY:
message.set_result(
MessageEventResult().message(
f"{THIRD_PARTY_AGENT_RUNNER_STR} 对话列表功能暂不支持。",
),
provider = self.context.get_using_provider(message.unified_msg_origin)
if provider and provider.meta().type == "dify":
"""原有的Dify处理逻辑保持不变"""
parts = ["Dify 对话列表:\n"]
assert isinstance(provider, ProviderDify)
data = await provider.api_client.get_chat_convs(message.unified_msg_origin)
idx = 1
for conv in data["data"]:
ts_h = datetime.datetime.fromtimestamp(conv["updated_at"]).strftime(
"%m-%d %H:%M",
)
parts.append(
f"{idx}. {conv['name']}({conv['id'][:4]})\n 上次更新:{ts_h}\n"
)
idx += 1
if idx == 1:
parts.append("没有找到任何对话。")
dify_cid = provider.conversation_ids.get(message.unified_msg_origin, None)
parts.append(
f"\n\n用户: {message.unified_msg_origin}\n当前对话: {dify_cid}\n使用 /switch <序号> 切换对话。"
)
ret = "".join(parts)
message.set_result(MessageEventResult().message(ret))
return
size_per_page = 6
@@ -211,8 +227,9 @@ class ConversationCommands:
else:
ret += "\n当前对话: 无"
cfg = self.context.get_config(umo=message.unified_msg_origin)
unique_session = cfg["platform_settings"]["unique_session"]
unique_session = self.context.get_config()["platform_settings"][
"unique_session"
]
if unique_session:
ret += "\n会话隔离粒度: 个人"
else:
@@ -226,15 +243,15 @@ class ConversationCommands:
async def new_conv(self, message: AstrMessageEvent):
"""创建新对话"""
cfg = self.context.get_config(umo=message.unified_msg_origin)
agent_runner_type = cfg["provider_settings"]["agent_runner_type"]
if agent_runner_type in THIRD_PARTY_AGENT_RUNNER_KEY:
await sp.remove_async(
scope="umo",
scope_id=message.unified_msg_origin,
key=THIRD_PARTY_AGENT_RUNNER_KEY[agent_runner_type],
provider = self.context.get_using_provider(message.unified_msg_origin)
if provider and provider.meta().type in ["dify", "coze"]:
assert isinstance(provider, (ProviderDify, ProviderCoze)), (
"provider type is not dify or coze"
)
await provider.forget(message.unified_msg_origin)
message.set_result(
MessageEventResult().message("成功,下次聊天将是新对话。"),
)
message.set_result(MessageEventResult().message("已创建新对话。"))
return
cpersona = await self._get_current_persona_id(message.unified_msg_origin)
@@ -257,9 +274,19 @@ class ConversationCommands:
async def groupnew_conv(self, message: AstrMessageEvent, sid: str = ""):
"""创建新群聊对话"""
provider = self.context.get_using_provider(message.unified_msg_origin)
if provider and provider.meta().type in ["dify", "coze"]:
assert isinstance(provider, (ProviderDify, ProviderCoze)), (
"provider type is not dify or coze"
)
await provider.forget(message.unified_msg_origin)
message.set_result(
MessageEventResult().message("成功,下次聊天将是新对话。"),
)
return
if sid:
session = str(
MessageSession(
MessageSesion(
platform_name=message.platform_meta.id,
message_type=MessageType("GroupMessage"),
session_id=sid,
@@ -294,6 +321,31 @@ class ConversationCommands:
)
return
provider = self.context.get_using_provider(message.unified_msg_origin)
if provider and provider.meta().type == "dify":
assert isinstance(provider, ProviderDify), "provider type is not dify"
data = await provider.api_client.get_chat_convs(message.unified_msg_origin)
if not data["data"]:
message.set_result(MessageEventResult().message("未找到任何对话。"))
return
selected_conv = None
if index is not None:
try:
selected_conv = data["data"][index - 1]
except IndexError:
message.set_result(
MessageEventResult().message("对话序号错误,请使用 /ls 查看"),
)
return
else:
selected_conv = data["data"][0]
ret = (
f"Dify 切换到对话: {selected_conv['name']}({selected_conv['id'][:4]})。"
)
provider.conversation_ids[message.unified_msg_origin] = selected_conv["id"]
message.set_result(MessageEventResult().message(ret))
return
if index is None:
message.set_result(
MessageEventResult().message(
@@ -326,6 +378,19 @@ class ConversationCommands:
if not new_name:
message.set_result(MessageEventResult().message("请输入新的对话名称。"))
return
provider = self.context.get_using_provider(message.unified_msg_origin)
if provider and provider.meta().type == "dify":
assert isinstance(provider, ProviderDify)
cid = provider.conversation_ids.get(message.unified_msg_origin, None)
if not cid:
message.set_result(MessageEventResult().message("未找到当前对话。"))
return
await provider.api_client.rename(cid, new_name, message.unified_msg_origin)
message.set_result(MessageEventResult().message("重命名对话成功。"))
return
await self.context.conversation_manager.update_conversation_title(
message.unified_msg_origin,
new_name,
@@ -334,8 +399,9 @@ class ConversationCommands:
async def del_conv(self, message: AstrMessageEvent):
"""删除当前对话"""
cfg = self.context.get_config(umo=message.unified_msg_origin)
is_unique_session = cfg["platform_settings"]["unique_session"]
is_unique_session = self.context.get_config()["platform_settings"][
"unique_session"
]
if message.get_group_id() and not is_unique_session and message.role != "admin":
# 群聊,没开独立会话,发送人不是管理员
message.set_result(
@@ -345,14 +411,20 @@ class ConversationCommands:
)
return
agent_runner_type = cfg["provider_settings"]["agent_runner_type"]
if agent_runner_type in THIRD_PARTY_AGENT_RUNNER_KEY:
await sp.remove_async(
scope="umo",
scope_id=message.unified_msg_origin,
key=THIRD_PARTY_AGENT_RUNNER_KEY[agent_runner_type],
provider = self.context.get_using_provider(message.unified_msg_origin)
if provider and provider.meta().type == "dify":
assert isinstance(provider, ProviderDify)
dify_cid = provider.conversation_ids.pop(message.unified_msg_origin, None)
if dify_cid:
await provider.api_client.delete_chat_conv(
message.unified_msg_origin,
dify_cid,
)
message.set_result(
MessageEventResult().message(
"删除当前对话成功。不再处于对话状态,使用 /switch 序号 切换到其他对话或 /new 创建。",
),
)
message.set_result(MessageEventResult().message("重置对话成功。"))
return
session_curr_cid = (
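The Python hunks above key each third-party agent runner's conversation id under the session (`umo`) scope in `sp`, so "reset" reduces to deleting that key. A ported JavaScript sketch of the same idea (the in-memory `store` stands in for the `sp` persistence layer):

```javascript
// Same mapping as THIRD_PARTY_AGENT_RUNNER_KEY in the Python hunk above.
const THIRD_PARTY_AGENT_RUNNER_KEY = {
  dify: 'dify_conversation_id',
  coze: 'coze_conversation_id',
};

// Returns true when the runner is third-party and its per-session
// conversation key was cleared; false means the caller should fall
// through to the normal conversation-manager reset path.
function resetThirdPartyConversation(store, umo, agentRunnerType) {
  const key = THIRD_PARTY_AGENT_RUNNER_KEY[agentRunnerType];
  if (!key) {
    return false;
  }
  if (store[umo]) {
    delete store[umo][key];
  }
  return true;
}
```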

View File

@@ -5,6 +5,7 @@ from astrbot.api.event import AstrMessageEvent, filter
from astrbot.api.message_components import Image, Plain
from astrbot.api.provider import LLMResponse, ProviderRequest
from astrbot.core import logger
from astrbot.core.provider.sources.dify_source import ProviderDify
from .commands import (
AdminCommands,
@@ -278,20 +279,33 @@ class Main(star.Star):
return
try:
conv = None
session_curr_cid = await self.context.conversation_manager.get_curr_conversation_id(
event.unified_msg_origin,
)
if not session_curr_cid:
logger.error(
"当前未处于对话状态,无法主动回复,请确保 平台设置->会话隔离(unique_session) 未开启,并使用 /switch 序号 切换或者 /new 创建一个会话。",
if provider.meta().type != "dify":
session_curr_cid = await self.context.conversation_manager.get_curr_conversation_id(
event.unified_msg_origin,
)
return
conv = await self.context.conversation_manager.get_conversation(
event.unified_msg_origin,
session_curr_cid,
)
if not session_curr_cid:
logger.error(
"当前未处于对话状态,无法主动回复,请确保 平台设置->会话隔离(unique_session) 未开启,并使用 /switch 序号 切换或者 /new 创建一个会话。",
)
return
conv = await self.context.conversation_manager.get_conversation(
event.unified_msg_origin,
session_curr_cid,
)
else:
# Dify 自己有维护对话,不需要 bot 端维护。
assert isinstance(provider, ProviderDify)
cid = provider.conversation_ids.get(
event.unified_msg_origin,
None,
)
if cid is None:
logger.error(
"[Dify] 当前未处于对话状态,无法主动回复,请确保 平台设置->会话隔离(unique_session) 未开启,并使用 /switch 序号 切换或者 /new 创建一个会话。",
)
return
prompt = event.message_str

View File

@@ -1,6 +1,6 @@
[project]
name = "AstrBot"
version = "4.6.1"
version = "4.6.0"
description = "Easy-to-use multi-platform LLM chatbot and development framework"
readme = "README.md"
requires-python = ">=3.10"